\section{Introduction}\label{s1}
In this letter, we consider distributed optimization over multi-agent networks. Formally, each agent~$i$ has access only to a private function,~$f_i:\mathbb{R}^p\rightarrow\mathbb{R}$. The goal is to minimize the average of {\color{black}these functions},~$\frac{1}{n}\sum_{i=1}^nf_i(\mathbf{x})$, via information exchange among the agents. We focus on the case where the communication network is described by {\color{black} an arbitrary \emph{directed} graph}. Early work on distributed optimization {\color{black}includes distributed sub-gradient descent (DGD)~\cite{uc_Nedic},} which converges to the optimal solution at a sublinear rate, i.e.,~$O(\frac{{\rm ln} k}{\sqrt{k}})$ for arbitrary (possibly non-differentiable) convex functions and~$O(\frac{{\rm ln} k}{k})$ for strongly-convex functions, where~$k$ is the number of iterations. These methods are slow due to the diminishing step-sizes. With the help of strong-convexity and Lipschitz-continuous gradients, algorithms with faster convergence rates have been developed. In particular, DGD with a constant step-size~\cite{DGD_Yuan} converges geometrically to an error ball around the optimal solution. Another method, EXTRA~\cite{EXTRA}, achieves geometric convergence to the global optimal solution with the requirement of symmetric weights. Of relevance are Refs.~\cite{xu2015augmented,Augmented_EXTRA,xu2018convergence,GQu_nesterov}, which combine inexact gradient methods and a gradient estimation technique based on dynamic average consensus~\cite{zhu2010discrete}. Additional related work and applications can be found in~\cite{6119236,jakovetic2017unification,RajaBajwa.ITSP16,8123915,8264076,YING2018253}.
All of the aforementioned methods require the underlying graphs to be undirected or weight-balanced. This requirement, however, may not be practical, for example, when the agents broadcast at different power levels, {\color{black}leading to communication capability in one direction but not in the other.} It is thus natural to develop optimization and learning algorithms that are applicable to directed graphs. The primary challenge in dealing with directed graphs is that it may not be possible to construct doubly-stochastic weight matrices for information fusion. The weighted adjacency matrix of a directed graph, in general, may only be either row-stochastic or column-stochastic, but not both. See{\color{black}~\cite{gharesifard2012distributed}} for work on balancing the weights in strongly-connected directed graphs.
{\color{black}The existing approaches for optimization over directed graphs are motivated by combining average-consensus methods developed for directed graphs with optimization algorithms designed for undirected graphs.} For instance, subgradient-push, introduced in~\cite{opdirect_Tsianous} and further studied in~\cite{opdirect_Nedic}, combines push-sum consensus~\cite{ac_directed0} and DGD; {\color{black}a linear algorithm over directed graphs, called Directed-Distributed Gradient Descent (D-DGD)}, was introduced in~\cite{D-DGD,D-DPS} and is based on surplus consensus~\cite{ac_Cai1} and DGD. Such DGD-based methods, however, restricted by the diminishing step-size, converge relatively slowly at~$O(\frac{{\rm ln} k}{\sqrt{k}})$ for general convex functions and~$O(\frac{{\rm ln} k}{k})$ for strongly-convex functions. The convergence rate has recently been improved in DEXTRA~\cite{DEXTRA}, which converges geometrically to the global optimal solution given that its step-size lies in a certain interval and the objective functions are strongly-convex with Lipschitz-continuous gradients. DEXTRA was subsequently improved in ADD-OPT/Push-DIGing~\cite{add-opt,opdirect_nedicLinear}, which converge geometrically with a sufficiently small step-size. The implementation of DEXTRA and {\color{black}ADD-OPT/Push-DIGing} requires each agent to know its out-degree in order to construct a column-stochastic weight matrix. This requirement is later removed in~\cite{linear_row} and FROST~\cite{xin2018fast}, which use row-stochastic weights and {\color{black}thus require no knowledge of out-degrees, as each agent locally decides the weights assigned to the incoming information.} What is common among these fast methods over directed graphs is that they all are based on {\color{black}push-sum (type) techniques, which make the resulting algorithms nonlinear because an independent iteration} is used to asymptotically learn either the right or the left eigenvector, corresponding to the eigenvalue of~$1$, of the weight matrix. {\color{black}This strategy imposes additional computation and communication burden on the agents.}
In this letter, we provide a \textit{linear} distributed optimization algorithm that converges geometrically to the global optimal solution with a sufficiently small step-size, when the objective functions are strongly-convex with Lipschitz-continuous gradients. In the rest of the paper, Section~\ref{s2} provides the algorithm development and its relationship with existing approaches, while Section~\ref{s3} details the convergence analysis. Section~\ref{s4} presents numerical experiments and Section~\ref{s5} concludes the paper.
\textbf{Basic Notation:} We use lowercase bold letters to denote vectors and uppercase italic letters to denote matrices. The matrix,~$I_n$, represents the~$n\times n$ identity, whereas~$\mathbf{1}_n$ is the~$n$-dimensional column vector of all~$1$'s. For an arbitrary vector,~$\mathbf{x}$, we denote its~$i$th element by~$[\mathbf{x}]_i$. We denote by~$X\otimes Y$, the Kronecker product of two matrices,~$X$ and~$Y$. {\color{black}For a matrix,~$X$, we denote~$\rho(X)$ as its spectral radius and~$X_\infty$ as its infinite power (if it exists), i.e.,~$X_\infty=\lim_{k\rightarrow\infty}X^k$.} For a primitive, row-stochastic matrix,~$\underline{A}$, we denote its left and right eigenvectors corresponding to the eigenvalue of~$1$ by~$\boldsymbol{\pi}_r$ and~$\mathbf{1}_n$, respectively, such that~$\boldsymbol{\pi}_r^\top\mathbf{1}_n = 1$. Similarly, for a primitive, column-stochastic matrix,~$\underline{B}$, we denote its left and right eigenvectors corresponding to the eigenvalue of~$1$ by~$\mathbf{1}_n$ and~$\boldsymbol{\pi}_c$, respectively, such that~$\mathbf{1}_n^\top\boldsymbol{\pi}_c = 1$. The notation~$\|\cdot\|_2$ denotes the Euclidean norm of vectors and~$\mn{\cdot}_2$ denotes the spectral norm of matrices.
\section{Algorithm Development}\label{s2}
In this section, we mathematically formulate the optimization problem and describe the proposed algorithm and its relationship with the existing methods. Consider a network of~$n$ agents whose communication links are described by a strongly-connected directed graph,~$\mathcal{G}=(\mathcal{V},\mathcal{E})$, where~$\mathcal{V}$ is the index set of agents, and~$\mathcal{E}$ is the collection of ordered pairs,~$(i,j),i,j\in\mathcal{V}$, such that agent~$j$ can send information to agent~$i$, i.e.,~$j\rightarrow i$. We define~$\mathcal{N}_i^{{\scriptsize \mbox{in}}}$ as the collection of in-neighbors, i.e., the set of agents that can send information to agent~$i$. Similarly,~$\mathcal{N}_i^{{\scriptsize \mbox{out}}}$ is the set of out-neighbors of agent~$i$. Note that both~$\mathcal{N}_i^{{\scriptsize \mbox{in}}}$ and~$\mathcal{N}_i^{{\scriptsize \mbox{out}}}$ include node~$i$. We assume that each agent~$i$ knows\footnote{Such an assumption is standard in the related literature, see, e.g.,~\cite{opdirect_Nedic,opdirect_Tsianous,ac_Cai1,D-DGD,D-DPS,DEXTRA,add-opt}.} its out-degree (the number of out-neighbors), denoted by~$|\mathcal{N}_i^{{\scriptsize \mbox{out}}}|$; see~\cite{bullo_book} for details.
We focus on solving a convex optimization problem distributed over the above multi-agent network. In particular, the network of agents cooperatively solves the following:
\begin{align}
\mbox{P1}:
\quad\mbox{min }&F(\mathbf{x})=\frac{1}{n}\sum_{i=1}^nf_i(\mathbf{x}),\nonumber
\end{align}
where each~$f_i:\mathbb{R}^p\rightarrow\mathbb{R}$ is known only to agent~$i$. We assume that each local function,~$f_i(\mathbf{x})$, is strongly-convex and has Lipschitz-continuous gradients. Our goal is to design a distributed algorithm such that the {\color{black}iterates at each agent converge} to the global optimal solution of Problem P1 via information exchange with nearby agents over the directed graph,~$\mathcal{G}$. We formalize the set of assumptions as follows.
\begin{assump}\label{asp1}
The graph,~$\mathcal{G}$, is strongly-connected and each agent in the network knows its out-degree.
\end{assump}
\begin{assump}\label{asp2}
Each local function,~$f_i$, is strongly-convex, and has globally Lipschitz-continuous gradient, i.e., for any~$i$ and~$\mathbf{x}_1, \mathbf{x}_2\in\mathbb{R}^p$,
\begin{enumerate}[(i)]
\item there exists a positive constant~$\beta$ such that
$$\qquad\|\mathbf{\nabla} f_i(\mathbf{x}_1)-\mathbf{\nabla} f_i(\mathbf{x}_2)\|_2\leq \beta\|\mathbf{x}_1-\mathbf{x}_2\|_2;$$
\item there exists a positive constant~$\alpha$ such that
$$f_i(\mathbf{x}_1)-f_i(\mathbf{x}_2)\leq\mathbf{\nabla} f_i(\mathbf{x}_1)^\top(\mathbf{x}_1-\mathbf{x}_2)-\frac{\alpha}{2}\|\mathbf{x}_1-\mathbf{x}_2\|_2^2.$$
\end{enumerate}
Clearly, the Lipschitz-continuity and strong-convexity constants for the global objective function~$F(\mathbf{x})$ are~$\beta$ and~$\alpha$, respectively. Assumption~\ref{asp2} ensures that the optimal solution, {\color{black}denoted as~$\mathbf{x}^*$}, of P1 exists and is unique.
\end{assump}
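As a simple illustration of Assumption~\ref{asp2} (the specific functions below are ours and not part of the problem data), the quadratics
\begin{align*}
f_i(\mathbf{x})=\tfrac{1}{2}\|\mathbf{x}-\mathbf{b}_i\|_2^2,\qquad\mathbf{b}_i\in\mathbb{R}^p,
\end{align*}
satisfy both conditions with~$\alpha=\beta=1$, and the corresponding (unique) solution of Problem P1 is~$\mathbf{x}^*=\frac{1}{n}\sum_{i=1}^n\mathbf{b}_i$.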
\textbf{Algorithm description:} To solve Problem P1, we propose the following algorithm. Each agent,~$i\in\mathcal{V}$, maintains two variables:~$\mathbf{x}_{i}(k)$,~$\mathbf{y}_{i}(k)\in\mathbb{R}^p$, where~$k$ is the discrete-time index. The algorithm, initialized with~$\mathbf{y}_i(0)=\nabla f_i(\mathbf{x}_i(0))$ and {\color{black}with arbitrary~$\mathbf{x}_i(0),\forall i$}, performs the following iterations:
\begin{subequations}\label{alg1}
\begin{align}
\mathbf{x}_i(k+1)=&{\color{black}\sum_{j=1}^{n}}a_{ij}\mathbf{x}_{j}(k)-\eta\mathbf{y}_i(k),\label{alg1a}\\
\mathbf{y}_i(k+1)=&{\color{black}\sum_{j=1}^{n}}b_{ij}\Big(\mathbf{y}_j(k)+\nabla f_j\big(\mathbf{x}_j(k+1)\big)-\nabla f_j\big(\mathbf{x}_j(k)\big)\Big),\label{alg1d}
\end{align}
\end{subequations}
where the step-size,~$\eta$, is some positive constant. The weights,~$a_{ij}$'s and~$b_{ij}$'s satisfy the following conditions:
\begin{align}
a_{ij}&=\left\{
\begin{array}{rl}
>0,&j\in\mathcal{N}_i^{{\scriptsize \mbox{in}}},\\
0,&\mbox{otherwise},
\end{array}
\right.
\quad
\sum_{j=1}^na_{ij}=1,\forall i\begin{color}{black},\end{color} \label{a}
\end{align}
\begin{align}
b_{ij}&=\left\{
\begin{array}{rl}
>0,&i\in\mathcal{N}_j^{{\scriptsize \mbox{out}}},\\
0,&\mbox{otherwise},
\end{array}
\right.
\quad
\sum_{i=1}^nb_{ij}=1,\forall j. \label{b}
\end{align}
Eq.~\eqref{a} leads to a row-stochastic matrix~$\underline{A}=\{a_{ij}\}$, which is easy to implement as each agent locally decides the weights. Eq.~\eqref{b}, on the other hand, results in a column-stochastic matrix~$\underline{B}=\{b_{ij}\}$, whose distributed implementation only requires each agent to know its out-degree. In particular, we can construct such weights as~$b_{ij}=1/|\mathcal{N}_j^{{\scriptsize \mbox{out}}}|$ for~$i\in\mathcal{N}_j^{{\scriptsize \mbox{out}}}$, and~$b_{ij}=0$ otherwise.
The algorithm in Eqs.~\eqref{alg1} can be explained as follows. To implement Eq.~\eqref{alg1a}, the receiving agent~$i$ decides on the weights~$a_{ij}$ assigned to the incoming~$\mathbf{x}_j(k)$'s such that the~$a_{ij}$'s sum to~$1$. Implementation of Eq.~\eqref{alg1d} requires the sending agent~$j$ to scale the transmission~$\mathbf{y}_j(k)+\nabla f_j\big(\mathbf{x}_j(k+1)\big)-\nabla f_j\big(\mathbf{x}_j(k)\big)$ by an appropriate choice of~$b_{ij}$'s (to ensure column-stochasticity of~$\underline{B}$), as the out-degree of agent~$j$ may not be known to agent~$i$. Agent~$i$ subsequently adds these received messages to implement Eq.~\eqref{alg1d}. Intuitively, Eq.~\eqref{alg1d} asymptotically learns the average,~$\frac{1}{n}\sum_{i=1}^{n}\nabla f_i(\mathbf{x}_i(k))$, of the local gradients{\color{black}~\cite{xu2015augmented,xu2018convergence,Augmented_EXTRA,GQu_nesterov,zhu2010discrete}}; thus~Eq.~\eqref{alg1a} approaches a centralized gradient descent, as the descent direction,~$\mathbf{y}_i(k)$, becomes the gradient of the global objective function over time.
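To make the updates in Eqs.~\eqref{alg1} concrete, the following sketch (in Python/NumPy; it is our illustration and not part of the original letter) simulates the iterations on a small strongly-connected directed graph with the quadratic objectives mentioned after Assumption~\ref{asp2}; the graph, data, and step-size are illustrative choices.
\begin{verbatim}
import numpy as np

np.random.seed(0)
n, p, eta, iters = 3, 2, 0.1, 200

# Directed graph via in-neighbor lists (each list includes the node itself).
in_neighbors = {0: [0, 2], 1: [0, 1], 2: [1, 2]}
out_degree = {j: sum(j in in_neighbors[i] for i in range(n)) for j in range(n)}

# Row-stochastic A (receiver assigns weights), column-stochastic B (sender scales).
A = np.zeros((n, n)); B = np.zeros((n, n))
for i in range(n):
    for j in in_neighbors[i]:
        A[i, j] = 1.0 / len(in_neighbors[i])
        B[i, j] = 1.0 / out_degree[j]

# Quadratic objectives f_i(x) = 0.5*||x - b_i||^2, so grad f_i(x) = x - b_i.
b = np.random.randn(n, p)
grad = lambda Z: Z - b              # row i holds grad f_i at row i of Z
x_star = b.mean(axis=0)             # minimizer of the average cost

X = np.random.randn(n, p)           # arbitrary x_i(0)
Y = grad(X)                         # y_i(0) = grad f_i(x_i(0))

for k in range(iters):
    X_new = A @ X - eta * Y                  # Eq. (1a)
    Y = B @ (Y + grad(X_new) - grad(X))      # Eq. (1b)
    X = X_new

print(np.linalg.norm(X - x_star, axis=1))    # per-agent residuals, near zero
\end{verbatim}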
\textbf{Relation with existing work:} We now briefly compare the proposed algorithm with existing techniques. {\color{black}The algorithms in Refs.~\cite{Augmented_EXTRA, xu2015augmented,xu2018convergence}} can be summarized as a single class of algorithms over undirected graphs with the following form:
\begin{subequations}\label{alg2}
\begin{align}
\mathbf{x}_i(k+1)=&\sum_{j=1}^{n}w_{ij}\mathbf{x}_{j}(k)-\eta\mathbf{y}_{i}(k),\label{alg2a}\\
\mathbf{y}_i(k+1)=&\sum_{j=1}^{n}w_{ij}\mathbf{y}_j(k)+\nabla f_i\big(\mathbf{x}_i(k+1)\big)-\nabla f_i\big(\mathbf{x}_i(k)\big),\label{alg2d}
\end{align}
\end{subequations}
where~$W=\{w_{ij}\}$ is doubly-stochastic. It is shown in Refs.~\cite{Augmented_EXTRA,xu2018convergence} that Eqs.~\eqref{alg2} converge geometrically to the optimal solution of Problem P1 as long as the step-size,~$\eta$, is sufficiently small. This algorithm, however, is not applicable to directed graphs as it may not be possible to construct doubly-stochastic weights.
To overcome this issue, Refs.~\cite{add-opt,opdirect_nedicLinear,linear_row,xin2018fast} apply push-sum (type) techniques, with either row- or column-stochastic weights, to the algorithm in Eqs.~\eqref{alg2}. Refs.~\cite{linear_row,xin2018fast}, e.g., propose the following algorithm:
\begin{align*}
\mathbf{y}_i(k+1)=&\sum_{j=1}^na_{ij}\mathbf{y}_j(k),\\
\mathbf{x}_i(k+1)=&\sum_{j=1}^na_{ij}\mathbf{x}_j(k)-\begin{color}{black}\eta_i\end{color}\mathbf{z}_i(k),\\
\mathbf{z}_i(k+1)=&\sum_{j=1}^na_{ij}\mathbf{z}_j(k) +\frac{\nabla f_i\big(\mathbf{x}_i(k+1)\big)}{[\mathbf{y}_i(k+1)]_i}-\frac{\nabla f_i\big(\mathbf{x}_i(k)\big)}{[\mathbf{y}_i(k)]_i},
\end{align*}
{\color{black}where $\underline{A}=\{a_{ij}\}$ is row-stochastic}. Note that the first equation is an independent algorithm, which asymptotically learns the left eigenvector, corresponding to the eigenvalue of~$1$, of~$\underline{A}$. However, it adds nonlinearity to the overall algorithm along with additional computation and communication costs in contrast to the proposed algorithm in Eqs.~\eqref{alg1}.
\textbf{Remarks:} \begin{color}{black}The algorithm, Eqs.~\eqref{alg1}, proposed in this letter can be viewed as related to Eqs.~\eqref{alg2} but without doubly-stochastic weights\end{color}, due to which we lose the nice eigenstructure of the weight matrices. It is straightforward to see that a linear extension of Eqs.~\eqref{alg2} to directed graphs is non-trivial, as all earlier attempts add nonlinearity to the original set of equations. One of the major challenges lies in the fact that even though the contraction of a doubly-stochastic~$W$ is well-established in the subspace orthogonal to~$\mathbf{1}_n$, it is not straightforward to establish simultaneous contractions for a row-stochastic matrix,~$\underline{A}$, and a column-stochastic matrix,~$\underline{B}$. The latter requires working with arbitrary norms (as opposed to the~$2$-norm applicable to doubly-stochastic matrices) and norm-equivalence constants, as we show in Lemma~\ref{lem1} and onwards.
\section{Convergence Analysis}\label{s3}
For the sake of analysis, we now write Eqs.~\eqref{alg1} in matrix form. The variables~$\mathbf{x}(k)$ and~$\mathbf{y}(k)$ collect all the local variables~$\mathbf{x}_i(k)$'s and~$\mathbf{y}_i(k)$'s in a vector, respectively, and
\begin{eqnarray}\label{not1}
\nabla\mathbf{f}(k)=
\left[
\begin{array}{c}
\nabla {f}_1\big(\mathbf{x}_{1}(k)\big)\\
\vdots\\
\nabla {f}_n\big(\mathbf{x}_{n}(k)\big)
\end{array}
\right]\in \mathbb{R}^{np}.
\end{eqnarray}
Let~$A=\underline{A}\otimes I_p$ and~$B=\underline{B}\otimes I_p$, where $\otimes$ is the Kronecker product. {\color{black}We denote~$\mathbf{x}^*$ as the optimal solution of Problem P1.} We now rewrite Eqs.~\eqref{alg1} in a compact matrix form as follows:
\begin{subequations}\label{alg1_matrix}
\begin{align}
\mathbf{x}(k+1)=&A\mathbf{x}(k)-\begin{color}{black}\eta\end{color} \mathbf{y}(k),\label{alg1_ma}\\
\mathbf{y}(k+1)=&B\Big(\mathbf{y}(k)+\nabla \mathbf{f}(k+1)-\nabla \mathbf{f}(k)\Big),\label{alg1_mb}
\end{align}
\end{subequations}
where~$\mathbf{y}(0)=\nabla\mathbf{f}(0)$ and~$\mathbf{x}(0)$ is arbitrary.
\subsection{Auxiliary relations}
We start the convergence analysis with a key lemma regarding the contraction of the consensus processes driven by the row- and column-stochastic weight matrices, respectively.
\begin{lem}\label{lem1}
Consider the weight matrices~$A=\underline{A}\otimes I_p$ and~$B=\underline{B}\otimes I_p$. Then there exist vector norms,~$\|\cdot\|_A$ and~$\|\cdot\|_B$, such that for all~$\mathbf{a}\in\mathbb{R}^{np}$,
\begin{align}\label{A_ctr}
\left\|A\mathbf{a}-A_\infty\mathbf{a}\right\|_A&\leq\sigma_A\left\|\mathbf{a}-A_\infty\mathbf{a}\right\|_A,\\\label{B_ctr}
\left\|B\mathbf{a}-B_\infty\mathbf{a}\right\|_B&\leq\sigma_B\left\|\mathbf{a}-B_\infty\mathbf{a}\right\|_B,
\end{align}
where~$0<\sigma_A<1$ and~$0<\sigma_B<1$ are some constants.
\end{lem}
\begin{proof}
Since~$\underline{A}$ is irreducible and row-stochastic with positive diagonals, it is primitive, and from the Perron-Frobenius theorem we have that~$\rho(\underline{A})=1$, every eigenvalue of~$\underline{A}$ other than~$1$ is strictly less than~$1$ in magnitude, and~$\boldsymbol{\pi}_r^\top$ is a strictly positive left eigenvector corresponding to the eigenvalue of~$1$ with~$\mathbf{1}_n^\top\boldsymbol{\pi}_r = 1$; thus~$\lim_{k\rightarrow\infty} \underline{A}^k = \mathbf{1}_n\boldsymbol{\pi}_r^\top$. We further have
\begin{align*}
A_{\infty}=\lim_{k\rightarrow\infty}{A^k}=
\left(\lim_{k\rightarrow\infty}{\underline{A}^k}\right)\otimes I_p=\left(\mathbf{1}_n\boldsymbol{\pi}_r^\top\right)\otimes I_p.
\end{align*}
It follows that
\begin{align}
AA_{\infty} &= (\underline{A}\otimes I_p)\Big((\mathbf{1}_n\boldsymbol{\pi}_r^\top)\otimes I_p\Big) = A_{\infty}, \nonumber \\
A_{\infty}A_{\infty} &= \Big((\mathbf{1}_n\boldsymbol{\pi}_r^\top)\otimes I_p\Big)\Big((\mathbf{1}_n\boldsymbol{\pi}_r^\top)\otimes I_p\Big)=A_{\infty}. \nonumber
\end{align}
Thus~$AA_{\infty}-A_{\infty}A_{\infty}$ is a zero matrix, which leads to the following relation:
\begin{eqnarray}\label{eq1}
A\mathbf{a}-A_\infty \mathbf{a}=(A-A_{\infty})(\mathbf{a}-A_{\infty}\mathbf{a}).
\end{eqnarray}
\begin{color}{black}Since~$\rho(A-A_{\infty})=\rho((\underline{A}-\mathbf{1}_n\boldsymbol{\pi}_r^\top)\otimes I_p)<1,$ we have from Lemma 5.6.10 in~\cite{hornjohnson:13} that there exists a matrix norm, say~$\mn{\cdot}_A$, such that
\begin{align}
\sigma_A\triangleq\mn{A-A_{\infty}}_A<1.
\end{align}
Moreover, from Theorem 5.7.13 in~\cite{hornjohnson:13}, we know that for any matrix norm,~$\mn{\cdot}_A$, there exists a compatible vector norm, say~$\|\cdot\|_A$, such that~$\|X\mathbf{x}\|_A\leq\mn{X}_A\|\mathbf{x}\|_A$, for all matrices,~$X$, and all vectors,~$\mathbf{x}$; hence, Eq.~\eqref{eq1} leads~to
\vspace{-0.1cm}
\begin{eqnarray*}
\|{A\mathbf{a}-A_\infty \mathbf{a}}\|_A&=&\|{(A-A_{\infty})(\mathbf{a}-A_{\infty}\mathbf{a})}\|_A,\\ &\leq&\mn{A-A_\infty}_A\|\mathbf{a}-A_{\infty}\mathbf{a}\|_A,\\
&=& \sigma_A\|\mathbf{a}-A_{\infty}\mathbf{a}\|_A,
\end{eqnarray*}
and Eq.~\eqref{A_ctr} follows. Similarly, Eq.~\eqref{B_ctr} follows for some matrix norm,~$\mn{\cdot}_B$, with~$\sigma_B\triangleq\mn{B-B_{\infty}}_B$. \end{color}
\end{proof}
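A quick numerical sanity check of this contraction property (an illustrative sketch of ours, not part of the analysis) is to generate a random row-stochastic~$\underline{A}$ with a positive diagonal, form~$A_\infty=\mathbf{1}_n\boldsymbol{\pi}_r^\top$, and verify that~$\rho(A-A_\infty)<1$ and that~$A^k\rightarrow A_\infty$:
\begin{verbatim}
import numpy as np

np.random.seed(1)
n = 5
M = np.random.rand(n, n) + np.eye(n)       # positive entries, positive diagonal
A = M / M.sum(axis=1, keepdims=True)       # row-stochastic, hence primitive

# Left eigenvector of A for the eigenvalue 1, normalized so that pi_r^T 1 = 1.
w, V = np.linalg.eig(A.T)
pi_r = np.real(V[:, np.argmin(np.abs(w - 1))])
pi_r = pi_r / pi_r.sum()

A_inf = np.outer(np.ones(n), pi_r)         # A_infty = 1_n * pi_r^T
print(np.max(np.abs(np.linalg.eigvals(A - A_inf))))            # rho(A - A_inf) < 1
print(np.linalg.norm(np.linalg.matrix_power(A, 200) - A_inf))  # A^k -> A_infty
\end{verbatim}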
The following lemma is a direct consequence of the column-stochasticity of~$\underline{B}$ and {\color{black}the initial condition that~$\mathbf{y}(0)=\nabla\mathbf{f}(0)$.}
\begin{lem}\label{sum_equ}
We have~$(\mathbf{1}_n^\top \otimes I_p) \mathbf{y}(k) = (\mathbf{1}_n^\top \otimes I_p) \nabla\mathbf{f}(k),\forall k$.
\end{lem}
\begin{proof}
Multiplying both sides of Eq.~\eqref{alg1_mb} by~$\mathbf{1}_n^\top \otimes I_p$, we get
\begin{align}
&(\mathbf{1}_n^\top \otimes I_p)\mathbf{y}(k+1)\nonumber\\
&=(\mathbf{1}_n^\top \otimes I_p)(\underline{B}\otimes I_p)\Big(\mathbf{y}(k)+\nabla \mathbf{f}(k+1)-\nabla \mathbf{f}(k)\Big) \nonumber\\
&= (\mathbf{1}_n^\top \otimes I_p)\mathbf{y}(k) + (\mathbf{1}_n^\top \otimes I_p)\nabla \mathbf{f}(k+1)
- (\mathbf{1}_n^\top \otimes I_p)\nabla \mathbf{f}(k)\nonumber\\
&= (\mathbf{1}_n^\top \otimes I_p)\Big(\mathbf{y}(0)-\nabla\mathbf{f}(0)\Big)+(\mathbf{1}_n^\top \otimes I_p)\nabla\mathbf{f}(k+1)\nonumber\\
&= (\mathbf{1}_n^\top \otimes I_p)\nabla\mathbf{f}(k+1), \nonumber
\end{align}
where the second-to-last equality follows by telescoping the recursion down to~$k=0$, and the last equality uses the initial condition~$\mathbf{y}(0)=\nabla\mathbf{f}(0)$; this completes the proof.
\end{proof}
\newpage
Lemma~\ref{sum_equ} shows that the average of the~$\mathbf{y}_i(k)$'s preserves the average of the local gradients. The next lemma, a standard result in convex optimization from~\cite{opt_literature0,Augmented_EXTRA}, states that the distance to the minimizer contracts by a fixed factor after one gradient-descent step.
\begin{lem}\label{centr_d}
Suppose that~$g:\mathbb{R}^p\rightarrow\mathbb{R}$ is strongly-convex with Lipschitz-continuous gradient. Let~$\alpha$ and~$\beta$ be its strong-convexity and Lipschitz-continuity constants, respectively, and let~$\mathbf{x}^*$ be its minimizer. For all~$\mathbf{x}\in\mathbb{R}^p$ and~$0<\theta<\frac{2}{\beta}$, we have ~$$\left\|\mathbf{x}-\theta\nabla g(\mathbf{x})-\mathbf{x}^*\right\|_2\leq\tau\left\|\mathbf{x}-\mathbf{x}^*\right\|_2,$$ where~$\tau=\max\left(\left|1-\alpha \theta\right|,\left|1-\beta\theta \right|\right)$.
\end{lem}
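For a quadratic~$g(\mathbf{x})=\frac{1}{2}\mathbf{x}^\top Q\mathbf{x}$ whose eigenvalues lie in~$[\alpha,\beta]$, the contraction factor~$\tau$ in Lemma~\ref{centr_d} can be checked numerically; the following sketch (ours, with arbitrary illustrative values) does exactly that:
\begin{verbatim}
import numpy as np

np.random.seed(2)
p = 4
eigs = np.array([0.5, 1.0, 2.0, 3.0])       # alpha = 0.5, beta = 3.0
U, _ = np.linalg.qr(np.random.randn(p, p))
Q = U @ np.diag(eigs) @ U.T                 # g(x) = 0.5 x^T Q x, minimizer x* = 0

alpha, beta = eigs.min(), eigs.max()
theta = 0.5                                 # any theta in (0, 2/beta)
tau = max(abs(1 - alpha * theta), abs(1 - beta * theta))

x = np.random.randn(p)
lhs = np.linalg.norm(x - theta * (Q @ x))   # ||x - theta*grad g(x) - x*||_2
rhs = tau * np.linalg.norm(x)               # tau * ||x - x*||_2
print(lhs <= rhs + 1e-12, lhs, rhs)
\end{verbatim}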
The subsequent convergence analysis is based on deriving a contraction relationship for the proposed algorithm, i.e., showing that~$\|\mathbf{x}(k+1)-A_\infty\mathbf{x}(k+1)\|_A$,~$\|A_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2$, and~$\|\mathbf{y}(k+1)-B_\infty\mathbf{y}(k+1)\|_B$ are bounded linearly in terms of their values at the previous iteration. We establish these relationships in the next lemmas. Before we proceed, note that all vector norms on a finite-dimensional vector space are equivalent, i.e., there exist finite and positive constants,~$c,d,h,l,g,m$, such that:
\begin{align*}
\|\cdot\|_A &\leq c\|\cdot\|_B,~~\|\cdot\|_2 \leq h\|\cdot\|_B,~~\|\cdot\|_2 \leq g\|\cdot\|_A,\\
\|\cdot\|_B &\leq d\|\cdot\|_A,~~\|\cdot\|_B \leq l\|\cdot\|_2,~~\|\cdot\|_A \leq m\|\cdot\|_2.
\end{align*}
\begin{lem} \label{1}
The following inequality holds,~$\forall k$:
\begin{align}
\|\mathbf{x}&(k+1)-A_\infty\mathbf{x}(k+1)\|_A\nonumber\\
\leq&~\sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A + \eta m\mn{I_{np}-A_\infty}_2\left\|\mathbf{y}(k)\right\|_2
\nonumber
\end{align}
\end{lem}
\begin{proof}
Using Eq.~\eqref{alg1_ma} and Lemma~\ref{lem1}, we have
\begin{align}
\|\mathbf{x}&(k+1)-A_\infty\mathbf{x}(k+1)\|_A \nonumber\\
=&~\|A\mathbf{x}(k)-\eta \mathbf{y}(k)-A_{\infty}\Big(A\mathbf{x}(k)-\eta \mathbf{y}(k)\Big)\|_A, \nonumber\\
\leq&~ \sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A+\eta m\|\mathbf{y}(k)-A_\infty\mathbf{y}(k)\|_2,\nonumber\\
\leq&~\sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A + \eta m\mn{I_{np}-A_\infty}_2\left\|\mathbf{y}(k)\right\|_2
\nonumber
\end{align}
and the lemma follows.
\end{proof}
Next, we develop a relation for~$\|A_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2$.
\begin{color}{black}\begin{lem} \label{2}
The following holds,~$\forall k$, when~$0<\eta<\frac{2}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}$:
\begin{align}\label{lem5}
\|A&_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2\nonumber\\
\leq&~\eta n\beta g(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A\nonumber\\
&+~\lambda\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2 +~\eta h\mn{A_\infty}_2\|\mathbf{y}(k)-B_{\infty}\mathbf{y}(k)\|_B,
\end{align}{\color{black}
where~$\lambda=\max\left(\left|1-\alpha n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\right|,\left|1-\beta n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c) \right|\right)$.}
\end{lem}
\end{color}
\begin{proof}
With~$A_\infty=(\mathbf{1}_n\boldsymbol{\pi}_r^\top)\otimes I_p=(\mathbf{1}_n\otimes I_p)(\boldsymbol{\pi}_r^\top\otimes I_p)$ and Eq.~\eqref{alg1_ma}, we have
\begin{align}\label{inte_1}
\|A&_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2 \nonumber\\
=&\begin{color}{black}~\left\|A_{\infty}\Big(A\mathbf{x}(k)-\eta \mathbf{y}(k)+\eta B_{\infty}\mathbf{y}(k)-\eta B_{\infty}\mathbf{y}(k)\Big)-\mathbf{1}_n \otimes \mathbf{x}^*\right\|_2, \nonumber\end{color}\\
\leq&~\left\|\big((\mathbf{1}_n\boldsymbol{\pi}_r^\top) \otimes I_p\big)\mathbf{x}(k)-(\mathbf{1}_n\otimes I_p)\mathbf{x}^*-\eta A_{\infty}B_{\infty}\mathbf{y}(k)\right\|_2 \nonumber\\
&+\eta h\mn{A_\infty}_2\left\|\mathbf{y}(k)-B_{\infty}\mathbf{y}(k)\right\|_B.
\end{align}Since the last term above matches the last term in Eq.~\eqref{lem5}, it remains to bound the first term. Before we proceed, define~$\nabla F(k)=\nabla F\big((\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)\big)$, i.e., the gradient of the global objective evaluated at~$(\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)$. Note that
\begin{align*}
A_\infty B_\infty=(\mathbf{1}_n\boldsymbol{\pi}_r^\top\otimes I_p)(\boldsymbol{\pi}_c\mathbf{1}_n^\top\otimes I_p)=\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c(\mathbf{1}_n\mathbf{1}_n^\top\otimes I_p).
\end{align*}
We have the following:
{\color{black}\begin{align*}
\|\big(&(\mathbf{1}_n\boldsymbol{\pi}_r^\top) \otimes I_p\big)\mathbf{x}(k)-(\mathbf{1}_n\otimes I_p)\mathbf{x}^*-\eta A_{\infty}B_{\infty}\mathbf{y}(k)\|_2\\
\leq&~\left\|(\mathbf{1}_n \otimes I_p)\Big((\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)-\mathbf{x}^*-n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\nabla F(k)\Big)\right\|_2\\ &+\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\left\|n(\mathbf{1}_n\otimes I_p)\nabla F(k)- (\mathbf{1}_n \otimes I_p)(\mathbf{1}_n^\top\otimes I_p)\mathbf{y}(k)\right\|_2,\\
:=&~s_1 + \eta s_2.
\end{align*}}From Lemma~\ref{centr_d}, we have that if~$0<\eta<2/(n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)$,
$$s_1\leq\lambda\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2.$$
Recalling from Lemma~\ref{sum_equ} that~$(\mathbf{1}_n^\top \otimes I_p) \mathbf{y}(k) = (\mathbf{1}_n^\top \otimes I_p) \nabla\mathbf{f}(k),\forall k$, we have
\begin{align*}
s_2 \leq& n\beta g(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A.
\end{align*}
The lemma follows by using the above bounds in Eq.~\eqref{inte_1}.
\end{proof}
Next, we develop a relation for~$\|\mathbf{y}(k+1)-B_\infty\mathbf{y}(k+1)\|_B$.
\begin{lem} \label{3}
The following inequality holds,~$\forall k$:
\begin{align}
\|\mathbf{y}&(k+1)-B_\infty\mathbf{y}(k+1)\|_B\nonumber\\
\leq&~ \sigma_B\beta lg\mn{A-I_{np}}_2\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A\nonumber\\
&+~\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B +~\eta\sigma_B\beta l\|\mathbf{y}(k)\|_2.
\end{align}
\end{lem}
\begin{proof}
We note that
\begin{align}\label{inte_2}
\|\mathbf{y}&(k+1)-B_\infty\mathbf{y}(k+1)\|_B \nonumber\\
=&~\left\|B\Big(\mathbf{y}(k)+\nabla \mathbf{f}(k+1)-\nabla \mathbf{f}(k)\Big)-B_{\infty}B\Big(\mathbf{y}(k)+\nabla \mathbf{f}(k+1)-\nabla \mathbf{f}(k)\Big)\right\|_B, \nonumber\\
\leq&~{\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B+\sigma_B\beta l\|\mathbf{x}(k+1)-\mathbf{x}(k)\|_2,}
\end{align}
where the inequality follows from Lemma~\ref{lem1}, the Lipschitz-continuity of the gradients, and the norm-equivalence constant~$l$. We now bound~$\|\mathbf{x}(k+1)-\mathbf{x}(k)\|_2$:
\begin{align}\label{xd}
\|\mathbf{x}&(k+1)-\mathbf{x}(k)\|_2 \nonumber\\
=&~ \|A\mathbf{x}(k)-\eta \mathbf{y}(k)-\mathbf{x}(k)\|_2, \nonumber\\
=&~ \left\|(A-I_{np})\big(\mathbf{x}(k)-A_\infty\mathbf{x}(k)\big)-\eta\mathbf{y}(k)\right\|_2,\nonumber\\
\leq&~ \mn{A-I_{np}}_2g\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A+\eta\|\mathbf{y}(k)\|_2.
\end{align}
The lemma follows by plugging Eq.~\eqref{xd} into Eq.~\eqref{inte_2}.
\end{proof}
The last step is to bound~$\|\mathbf{y}(k)\|_2$ in terms of~$\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A$,~$\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2$, and~$\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B$. We can then replace~$\|\mathbf{y}(k)\|_2$ in Lemmas~\ref{1}--\ref{3} by this bound to complete the contraction relationship.
\begin{lem} \label{4}
The following inequality holds,~$\forall k$:
\begin{align*}
\|\mathbf{y}(k)\|_2 \leq&~ g\beta\mn{B_{\infty}}_2\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A \nonumber\\
&+~ \beta\mn{B_{\infty}}_2\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2 +~h\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B.
\end{align*}
\end{lem}
\begin{proof}
Recall that $B_{\infty}= (\boldsymbol{\pi}_c\otimes I_p)(\mathbf{1}_n^\top\otimes I_p).$ We have
\begin{equation}\label{inte_3}
\|\mathbf{y}(k)\|_2 \leq h\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B + \|B_\infty\mathbf{y}(k)\|_2.
\end{equation}
We next bound~$\|B_\infty\mathbf{y}(k)\|_2$:
\begin{align}\label{inte_4}
\|B_\infty\mathbf{y}(k)\|_2 =&~ \|(\boldsymbol{\pi}_c\otimes I_p)(\mathbf{1}_n^\top\otimes I_p)\mathbf{y}(k)\|_2\nonumber\\
=&~\|\boldsymbol{\pi}_c\|_2\|(\mathbf{1}_n^\top\otimes I_p)\nabla\mathbf{f}(k)\|_2 \nonumber\\
=&~ \|\boldsymbol{\pi}_c\|_2\left\|\sum_{i=1}^{n}\nabla f_i(\mathbf{x}_i(k))-\sum_{i=1}^{n}\nabla f_i(\mathbf{x}^*)\right\|_2 \nonumber\\
\leq&~ \|\boldsymbol{\pi}_c\|_2\beta\sum_{i=1}^{n}\|\mathbf{x}_i(k)-\mathbf{x}^*\|_2 \nonumber\\
\leq&~\|\boldsymbol{\pi}_c\|_2\beta\sqrt{n}\|\mathbf{x}(k)-\mathbf{1}_n\otimes \mathbf{x}^*\|_2, \nonumber\\
\leq&~\mn{B_{\infty}}_2\beta g\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A +~\mn{B_{\infty}}_2\beta\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2,
\end{align}
where the second-to-last inequality uses Jensen's inequality and the last inequality uses the fact that~$\mn{B_{\infty}}_2=\sqrt{n}\|\boldsymbol{\pi}_c\|_2$. The lemma follows by plugging Eq.~\eqref{inte_4} into Eq.~\eqref{inte_3}.
\end{proof}
Before the main result, we present an additional lemma from nonnegative matrix theory.
\begin{comment}
\begin{lem}\label{pert}(Theorem 6.3.12 in~\cite{hornjohnson:13})
Let $A,E\in\mathbb{R}^{n\times n}$ and let~$q$ be a simple eigenvalue of $A$. Let $\mathbf{x}$ and $\mathbf{y}$ be, respectively, right and left eigenvectors of $A$ corresponding to $q$. Then
\begin{enumerate}[(i)]
\item for each $\epsilon>0$, there exists a $\delta>0$ such that,~$\forall t\in\mathbb{C}$ with~$|t|<\delta$, there is a unique eigenvalue~$q(t)$ of $A+tE$ such that $\left|q(t)-q-t\frac{\mathbf{y}^*E\mathbf{x}}{\mathbf{y}^*\mathbf{x}}\right|\leq|t|\epsilon$,
\item $q(t)$ is continuous at $t=0$, and $\lim_{t\rightarrow0}{q(t)}=q$.
\item $q(t)$ is differentiable $t=0$,
$\frac{dq(t)}{dt}|_{t=0}=\frac{\mathbf{y}^*E\mathbf{x}}{\mathbf{y}^*\mathbf{x}}. $
\end{enumerate}
\end{lem}
\end{comment}
\begin{lem}\label{rho}(Theorem 8.1.29 in~\cite{hornjohnson:13})
Let $X\in\mathbb{R}^{n\times n}$ be a nonnegative matrix and~$\mathbf{x}\in\mathbb{R}^{n}$ be a positive vector. If~$X\mathbf{x}<\omega\mathbf{x}$, then~$\rho(X)<\omega$.
\end{lem}
\subsection{Main results}
With the help of auxiliary relations developed in the previous subsection, we now present the main result, which establishes the geometric convergence of the proposed algorithm.
\begin{theorem}\label{thm1} Let Assumptions~\ref{asp1} and~\ref{asp2} hold. If~$0<\eta<\frac{2}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}$, we have the following linear matrix inequality (entry-wise):
\begin{equation} \label{G}
\mathbf{t}(k+1) \leq J(\eta)\mathbf{t}(k),~\forall k,
\end{equation}
where $\mathbf{t}(k)\in\mathbb{R}^3$ and $J(\eta)\in\mathbb{R}^{3\times3}$ are defined as follows:
\begin{align}\label{t,G}
\mathbf{t}(k)&=\left[
\begin{array}{l}
\left\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\right\|_A \\
\left\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\right\|_2 \\
\left\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\right\|_B
\end{array}
\right],\\
J(\eta)&=\left[
\begin{array}{ccc}
\sigma_A+a_1\eta & a_2\eta &a_3\eta \\
a_4\eta & \lambda & a_5\eta\\
a_6+a_7\eta& a_8\eta & \sigma_B+a_{9}\eta
\end{array}
\right],
\end{align}
with the positive constants $a_i$'s being
{\color{black}
\begin{eqnarray*}
a_1 &=& mg\beta\mn{I_{np}-A_\infty}_2\mn{B_{\infty}}_2, \nonumber\\
a_2 &=& m\beta\mn{I_{np}-A_\infty}_2\mn{B_{\infty}}_2, \nonumber\\
a_3 &=& mh\mn{I_{np}-A_\infty}_2, \nonumber\\
a_4 &=& \begin{color}{black}n\end{color}\beta g(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c), \nonumber\\
a_5 &=& h\mn{A_{\infty}}_2,\nonumber\\
a_6 &=& g\sigma_Bl\beta \mn{A-I_{np}}_2,\nonumber\\
a_7 &=& g\sigma_Bl\beta^2\mn{B_{\infty}}_2,\nonumber\\
a_8 &=& \sigma_Bl\beta^2 \mn{B_{\infty}}_2,\nonumber\\
a_{9} &=& h\sigma_Bl\beta.
\end{eqnarray*}
}When the step-size,~$\eta$, satisfies
\begin{align}
\eta<\min\left\{\frac{\epsilon_1(1-\sigma_A)}{a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3},~\frac{(1-\sigma_B)\epsilon_3-\epsilon_1a_6}{a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3},~\frac{1}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}\right\},
\end{align}
where~$\epsilon_1,\epsilon_2,\epsilon_3$ are positive constants such that
\begin{align}
\epsilon_3> 0,\qquad\epsilon_1 < \frac{(1-\sigma_B)\epsilon_3}{a_6}, \qquad
\epsilon_2 > \frac{a_4\epsilon_1+a_5\epsilon_3}{\alpha n(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)},
\end{align}
the spectral radius of $J(\eta)$,~$\rho(J(\eta))$, is strictly less than~$1$, and therefore~$\left\|\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\right\|_2$ converges to zero geometrically at the rate of~$O(\rho(J(\eta))^k)$.
\end{theorem}
\begin{proof}
Combining the results of Lemmas~\ref{1}--\ref{4}, one can verify that Eq.~\eqref{G} holds if~$0<\eta<\frac{2}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}$. Recall that~$\lambda=\max\left(\left|1-\alpha n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\right|,\left|1-\beta n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c) \right|\right)$. When~$0<\eta<\frac{1}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}$,~$\lambda=1-\alpha n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)$, since $\alpha\leq\beta$; see, e.g.,~\cite{opt_literature0} for details.
\begin{comment}
We now split the matrix, $J(\eta)$, into the sum of a fixed matrix and another perturbation matrix as a function $\eta$:
\begin{align}
J(\eta) &= \left[
\begin{array}{ccc}
\sigma_A & 0 & 0\\
0 & 1 & 0\\
a_6 & 0 & \sigma_B
\end{array}
\right]
+
\eta\left[
\begin{array}{ccc}
a_1 & a_2 &a_3 \\
a_4 & -\alpha n(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c) & a_5\\
a_7 & a_8 & a_{9}
\end{array}
\right] := J_0 + \eta E. \nonumber
\end{align}
Clearly, the spectral radius of $J_0$ is 1; recall that both~$\sigma_A$ and~$\sigma_B$ are in~$(0,1)$. It is straightforward to verify that the right and left eigenvector corresponding to the eigenvalue of~$1$ of~$J_0$ is~$\mathbf{v}=\left[0,1,0\right]^\top$. Denote by~$q(\eta)$, the eigenvalues of~$J(\eta)$ as a function of~$\eta$. From Lemma~\ref{pert}, since~$1$ is a simple eigenvalue of~$J(0)$,
\begin{equation}
\frac{d q(\eta)}{d\eta}\Bigg|_{\eta=0,q=1} = \frac{\mathbf{v}^\top E\mathbf{v}}{\mathbf{v}^\top\mathbf{v}} = -\alpha n(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c),
\end{equation}
i.e.,~$\frac{d q}{d\eta}|_{\eta=0,q=1}<0$ and the spectral radius of~$J(\eta)$ is strictly less than $1$ as~$\eta$ slightly increases from zero. This is because the eigenvalues are continuous functions of the elements of the matrix.
\end{comment}
The goal is to find an upper bound on the step-size,~$\widetilde{\eta}$, such that~$\rho(J(\eta))<1$ when~$\eta<\widetilde{\eta}$. In light of Lemma~\ref{rho}, we solve for the range of the step-size,~$\eta$, and a positive vector~$\boldsymbol{\epsilon}=\left[\epsilon_1,\epsilon_2,\epsilon_3\right]^\top$
from the following linear matrix inequality (entry-wise):
\begin{align}\label{eta1}
\left[
\begin{array}{ccc}
\sigma_A+a_1\eta & a_2\eta &a_3\eta \\
a_4\eta & 1-\alpha n\eta(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c) & a_5\eta\\
a_6+a_7\eta& a_8\eta & \sigma_B+a_{9}\eta
\end{array}
\right]
\left[
\begin{array}{ccc}
\epsilon_1 \\
\epsilon_2\\
\epsilon_3
\end{array}
\right]
<
\left[
\begin{array}{ccc}
\epsilon_1 \\
\epsilon_2\\
\epsilon_3
\end{array}
\right],
\end{align}
which is equivalent to the following set of inequalities:
\begin{align}
\left\{
\begin{array}{lll}
(a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3)\eta&<&\epsilon_1(1-\sigma_A), \\
(a_4\epsilon_1-\alpha n(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\epsilon_2+a_5\epsilon_3)\eta&<&0, \\
(a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3)\eta&<&(1-\sigma_B)\epsilon_3-\epsilon_1a_6,
\end{array} \nonumber
\right.
\end{align}
Solving the inequalities above, we have that when
\begin{align}
\left\{
\begin{array}{lll}
\epsilon_1 &<& \frac{(1-\sigma_B)\epsilon_3}{a_6}, \\
\epsilon_2 &>& \frac{a_4\epsilon_1+a_5\epsilon_3}{\alpha n(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)}, \\
\epsilon_3 &>& 0, \\
\eta &<& \min\left\{\frac{\epsilon_1(1-\sigma_A)}{a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3},\frac{(1-\sigma_B)\epsilon_3-\epsilon_1a_6}{a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3}\right\},
\end{array} \nonumber
\right.
\end{align}
the inequality in Eq.~\eqref{eta1} holds and the theorem follows.
\end{proof}
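The role of Lemma~\ref{rho} in this argument can be illustrated numerically. The sketch below (ours; the values of~$\sigma_A$,~$\sigma_B$, the~$a_i$'s, and~$\alpha n\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c$ are hypothetical placeholders rather than constants computed from any particular problem) picks a vector~$\boldsymbol{\epsilon}$ satisfying the stated conditions and verifies that~$J(\eta)\boldsymbol{\epsilon}<\boldsymbol{\epsilon}$ entry-wise, and hence~$\rho(J(\eta))<1$, for step-sizes below the derived bound.
\begin{verbatim}
import numpy as np

# Hypothetical constants; in the letter they depend on the weight matrices,
# the norm-equivalence constants, and alpha, beta.
sigma_A, sigma_B = 0.6, 0.7
a = {i: 1.0 for i in range(1, 10)}          # a_1, ..., a_9
alpha_n_pi = 2.0                            # stands for alpha*n*pi_r^T pi_c

# eps chosen to satisfy: eps1 < (1-sigma_B)*eps3/a6 and
# eps2 > (a4*eps1 + a5*eps3)/alpha_n_pi, with eps3 > 0.
eps3 = 1.0
eps1 = 0.5 * (1 - sigma_B) * eps3 / a[6]
eps2 = 2.0 * (a[4] * eps1 + a[5] * eps3) / alpha_n_pi
eps = np.array([eps1, eps2, eps3])

eta_bar = min(eps1 * (1 - sigma_A) / (a[1]*eps1 + a[2]*eps2 + a[3]*eps3),
              ((1 - sigma_B)*eps3 - a[6]*eps1) / (a[7]*eps1 + a[8]*eps2 + a[9]*eps3))

def J(eta):
    lam = 1 - alpha_n_pi * eta              # lambda for a sufficiently small eta
    return np.array([[sigma_A + a[1]*eta, a[2]*eta, a[3]*eta],
                     [a[4]*eta,           lam,      a[5]*eta],
                     [a[6] + a[7]*eta,    a[8]*eta, sigma_B + a[9]*eta]])

for eta in (0.25 * eta_bar, 0.5 * eta_bar, 0.9 * eta_bar):
    Jm = J(eta)
    assert np.all(Jm @ eps < eps)           # entry-wise, so rho(Jm) < 1 by the lemma
    print(eta, np.max(np.abs(np.linalg.eigvals(Jm))))   # spectral radius < 1
\end{verbatim}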
\begin{comment}
\section{Uncoordinated Step-sizes}\label{s4}
In order to implement the proposed algorithm, Eqs.~\eqref{alg1}, each agent must agree on the same
value of the step-size, which may be pre-programmed to avoid
implementing an agreement protocol. In this section, however, we study the proposed algorithm with uncoordinated step-sizes, which takes the following form:
\begin{subequations}\label{alg1un}
\begin{align}
\mathbf{x}_i(k+1)=&\sum_{j\in\mathcal{N}_i^{{\tiny \mbox{in}}}}a_{ij}\mathbf{x}_{j}(k)-\eta_i\mathbf{y}_i(k),\label{alg1una}\\
\mathbf{y}_i(k+1)=&\sum_{j\in\mathcal{N}_i^{{\tiny \mbox{in}}}}b_{ij}\Big(\mathbf{y}_j(k)+\nabla f_j\big(\mathbf{x}_j(k+1)\big)-\nabla f_j\big(\mathbf{x}_j(k)\big)\Big),\label{alg1und}
\end{align}
\end{subequations}
where~$\eta_i$ is the constant step-size (not necessarily positive) adopted by agent~$i$. The strategy of uncoordinated step-sizes is easier to implement for distributed and asynchronous setting as each agent locally decides a suitable step-size. More importantly, we show that when the network of agents uses uncoordinated step-sizes, a larger range of step-sizes could be used, i.e., some of the step-sizes could even be negative or zero. {\color{black}Besides, the bounds we derive for uncoordinated step-sizes do not depend on the Heterogeneity of Stepsizes (HoS). These are in contrast to the existing work~\cite{nedic2017geometrically,lu2017geometrical,xu2015augmented,xu2018convergence}, on uncoordinated step-sizes. }
We now study the convergence of Eqs.~\eqref{alg1un} under the same assumptions as described in Section~\ref{s2}. To this aim, we denote~$\boldsymbol{\eta}$ as a column vector which collects all the step-sizes, i.e.,~$\boldsymbol{\eta}=\left[\eta_1,\cdots,\eta_n\right]^\top$ and~$D=\mbox{diag}\{\boldsymbol{\eta}\}\otimes I_p$. We define~$\overline{\eta}$ as the largest magnitude of the step-sizes among the agents, i.e.,~$\overline{\eta}=\max_i\{\left|\eta_i\right|\}$ and therefore~$\mn{D}_2=\overline{\eta}$. We now rewrite Eqs.~\eqref{alg1un} in a compact matrix form:
\begin{subequations}\label{alg1un_matrix}
\begin{align}
\mathbf{x}(k+1)=&A\mathbf{x}(k)-D \mathbf{y}(k),\label{alg1un_ma}\\
\mathbf{y}(k+1)=&B\Big(\mathbf{y}(k)+\nabla \mathbf{f}(k+1)-\nabla \mathbf{f}(k)\Big).\label{alg1un_mb}
\end{align}
\end{subequations}
Note that Lemmas~\ref{lem1},~\ref{sum_equ} and~\ref{4} still hold for Eqs.~\eqref{alg1un_matrix}. The rest of the convergence analysis follows a similar procedure as described in Lemmas~\ref{1}--\ref{3}, i.e., we formulate a contraction relation between~$\|\mathbf{x}(k+1)-A_\infty\mathbf{x}(k+1)\|_A$,~$\|A_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2$, and~$\|\mathbf{y}(k+1)-B_\infty\mathbf{y}(k+1)\|_B$ and their values in the last iteration.
\begin{lem} \label{1pr}
(Lemma~\ref{1}')
The following inequality holds,~$\forall k$:
\begin{align}
\|\mathbf{x}&(k+1)-A_\infty\mathbf{x}(k+1)\|_A\nonumber\\
\leq&~\sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A + \overline{\eta} m\mn{I_{np}-A_\infty}_2\left\|\mathbf{y}(k)\right\|_2 \nonumber
\end{align}
\end{lem}
\begin{proof}
Using Eq.~\eqref{alg1un_ma} and Lemma.~\ref{lem1}, we have
\begin{align}
\|\mathbf{x}&(k+1)-A_\infty\mathbf{x}(k+1)\|_A \nonumber\\
=&~\|A\mathbf{x}(k)-D \mathbf{y}(k)-A_{\infty}\Big(A\mathbf{x}(k)-D \mathbf{y}(k)\Big)\|_A, \nonumber\\
\leq&~ \sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A+m\left\|(I_{np}-A_\infty)D \mathbf{y}(k)\right\|_2,\nonumber\\
\leq&~\sigma_A\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A + \overline{\eta} m\mn{I_{np}-A_\infty}_2\left\|\mathbf{y}(k)\right\|_2
\nonumber
\end{align}
and the lemma follows.
\end{proof}
\begin{color}{black}\begin{lem} \label{2pr}
(Lemma~\ref{2}') The following holds,~$\forall k$, when~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{2}{\beta}$:
\begin{align}\label{lem5}
\|A&_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2\nonumber\\
\leq&~\overline{\eta} n\beta g(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A\nonumber\\
&+~\tilde{\lambda}\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2 \nonumber\\
&+~\overline{\eta} h\mn{A_\infty}_2\|\mathbf{y}(k)-B_{\infty}\mathbf{y}(k)\|_B,
\end{align}
where~$\tilde{\lambda}=\max\left(\left|1-\alpha n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i\right|,\left|1-\beta n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i \right|\right)$.
\end{lem}
\end{color}
\begin{proof}
With~$A_\infty=(\mathbf{1}_n\otimes I_p)(\boldsymbol{\pi}_r^\top\otimes I_p)$ and~$B_\infty=(\boldsymbol{\pi}_c\otimes I_p)(\boldsymbol{1}_n^\top\otimes I_p)$,
\begin{align*}
A_\infty DB_\infty=(\mathbf{1}_n\boldsymbol{\pi}_r^\top\otimes I_p)\big(\mbox{diag}\{\boldsymbol{\eta}\}\otimes I_p\big)(\boldsymbol{\pi}_c\mathbf{1}_n^\top\otimes I_p)=\big(\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i\big)(\mathbf{1}_n\mathbf{1}_n^\top\otimes I_p),
\end{align*}
and recalling Eq.~\eqref{alg1un_ma}, we have
\begin{align}\label{intepr_1}
\|A&_\infty\mathbf{x}(k+1)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2 \nonumber\\
=&\begin{color}{black}~\|A_{\infty}\Big(A\mathbf{x}(k)-D \mathbf{y}(k)+(-D+D)B_{\infty}\mathbf{y}(k)\Big)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2, \nonumber\end{color}\\
\leq&~\|(\mathbf{1}_n\boldsymbol{\pi}_r^\top \otimes I_p)\mathbf{x}(k)-(\mathbf{1}_n\otimes I_p)\mathbf{x}^*- A_{\infty}DB_{\infty}\mathbf{y}(k)\|_2 \nonumber\\
&+\overline{\eta} h\mn{A_\infty}_2\|\mathbf{y}(k)-B_{\infty}\mathbf{y}(k)\|_B.
\end{align}Since the last term above matches with the last term in Eq.~\eqref{lem5}, what is left is to manipulate the first term. Before we proceed, define~$\nabla F(k)=\nabla F\big((\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)\big)$, which is the global gradient evaluated at~$(\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)$. We have
{\color{black}\begin{align*}
\|(&\mathbf{1}_n\boldsymbol{\pi}_r^\top \otimes I_p)\mathbf{x}(k)-(\mathbf{1}_n\otimes I_p)\mathbf{x}^*- A_{\infty}DB_{\infty}\mathbf{y}(k)\|_2\\
\leq&~\left\|(\mathbf{1}_n \otimes I_p)\Big((\boldsymbol{\pi}_r^\top\otimes I_p)\mathbf{x}(k)-\mathbf{x}^*-n\big(\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i\big)\nabla F(k)\Big)\right\|_2\\ &+\big(\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i\big)\left\|n(\mathbf{1}_n\otimes I_p)\nabla F(k)- (\mathbf{1}_n \otimes I_p)(\mathbf{1}_n^\top\otimes I_p)\mathbf{y}(k)\right\|_2\\
:=&~s_1 + s_2.
\end{align*}}From Lemma~\ref{centr_d}, we have that if~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{2}{\beta}$,
$$s_1\leq\tilde{\lambda}\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\|_2.$$
Recall that~$(\mathbf{1}_n^\top \otimes I_p) \mathbf{y}(k) = (\mathbf{1}_n^\top \otimes I_p) \nabla\mathbf{f}(k),\forall k,$ from Lemma~\ref{sum_equ}, we have
\begin{align*}
s_2 \leq&~n\beta g(\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i)\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A,\\
\leq&~\overline{\eta}n\beta g(\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c)\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A
\end{align*}
The lemma follows by using the above bounds in Eq.~\eqref{intepr_1}.
\end{proof}
\begin{lem}\label{3pr}
(Lemma~\ref{3}') The following inequality holds,~$\forall k$:
\begin{align}
\|\mathbf{y}&(k+1)-B_\infty\mathbf{y}(k+1)\|_B\nonumber\\
\leq&~ \sigma_B\beta lg\mn{A-I_{np}}_2\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\|_A\nonumber\\
&+~\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B \nonumber\\
&+~\overline{\eta}\sigma_B\beta l\|\mathbf{y}(k)\|_2.
\end{align}
\end{lem}
\begin{proof}
According to Eq.~\eqref{inte_2}, we have that
\begin{align}
\|\mathbf{y}&(k+1)-B_\infty\mathbf{y}(k+1)\|_B \nonumber\\
\leq&~\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B+\sigma_B\beta l\|\mathbf{x}(k+1)-\mathbf{x}(k)\|_2, \nonumber\\
=&~\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B+ \sigma_B\beta l\|A\mathbf{x}(k)-D \mathbf{y}(k)-\mathbf{x}(k)\|_2, \nonumber\\
\leq&~ ~\sigma_B\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\|_B+\sigma_B\beta l\|(A-I_{np})\big(\mathbf{x}(k)-A_\infty\mathbf{x}(k)\big)\|_2 \nonumber\\
&+\mn{D}_2\sigma_B\beta l\|\mathbf{y}(k)\|_2,\nonumber
\end{align}
and the Lemma follows.
\end{proof}
Note that Lemma~\ref{1pr}-\ref{3pr} are exactly the same as Lemma~\ref{1}-\ref{3} with~$\eta$ and~$\lambda$ replaced by~$\overline{\eta}$ and~$\tilde{\lambda}$. Besides, Lemma~\ref{4} still holds for Eqs.~\eqref{alg1un}. Therefore, the geometric convergence of Eqs.~\eqref{alg1un} to the global minimizer follows from a very similar argument in Theorem~\ref{thm1}, as formally stated in the next theorem.
\begin{theorem}\label{thm2}
Let Assumptions~\ref{asp1} and~\ref{asp2} hold. If~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{2}{\beta}$, we have the following linear matrix inequality (entry-wise):
\begin{equation} \label{G1}
\mathbf{t}(k+1) \leq \tilde{J}(\boldsymbol{\eta})\mathbf{t}(k),~\forall k,
\end{equation}
where $\mathbf{t}(k)\in\mathbb{R}^3$ and $\tilde{J}(\boldsymbol{\eta})\in\mathbb{R}^{3\times3}$ are defined as follows:
\begin{align}\label{t,G}
\mathbf{t}(k)&=\left[
\begin{array}{l}
\left\|\mathbf{x}(k)-A_\infty\mathbf{x}(k)\right\|_A \\
\left\|A_\infty\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\right\|_2 \\
\left\|\mathbf{y}(k)-B_\infty\mathbf{y}(k)\right\|_B
\end{array}
\right],\\
\tilde{J}(\boldsymbol{\eta})&=\left[
\begin{array}{ccc}
\sigma_A+a_1\overline{\eta} & a_2\overline{\eta} &a_3\overline{\eta} \\
a_4\overline{\eta} &
\tilde{\lambda} & a_5\overline{\eta}\\
a_6+a_7\overline{\eta}& a_8\overline{\eta} & \sigma_B+a_{9}\overline{\eta}
\end{array}
\right],
\end{align}
where the positive constants $a_i$'s are defined in~Theorem~\ref{thm1}.
{\color{black}When the largest step-size,~$\overline{\eta}$, satisfies
\begin{align}
0<\overline{\eta}<\min\left\{~\frac{\epsilon_1(1-\sigma_A)}{a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3},~\frac{(1-\sigma_B)\epsilon_3-\epsilon_1a_6}{a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3},~\frac{1}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}~\right\},
\end{align}
with~$\epsilon_1,\epsilon_2,\epsilon_3$ being positive constants such that
\begin{align}
\epsilon_3> 0,\qquad0<\epsilon_1 < \frac{(1-\sigma_B)\epsilon_3}{a_6}, \qquad
\epsilon_2 > \frac{a_4\epsilon_1+a_5\epsilon_3}{\alpha n[\boldsymbol{\pi}_r]_-[\boldsymbol{\pi}_c]_-},
\end{align}}where~$[\boldsymbol{\pi}_r]_-$ and~$[\boldsymbol{\pi}_c]_-$ are respectively the smallest entry in~$\boldsymbol{\pi}_r$ and~$\boldsymbol{\pi}_c$,
the spectral radius of $\tilde{J}(\boldsymbol{\eta})$,~$\rho(\tilde{J}(\boldsymbol{\eta}))$, is strictly less than~$1$, and therefore~$\left\|\mathbf{x}(k)-\mathbf{1}_n \otimes \mathbf{x}^*\right\|_2$ converges to zero geometrically at the rate of~$O(\rho(\tilde{J}(\boldsymbol{\eta}))^k)$.
\end{theorem}
\begin{proof}
Combining the results of Lemmas~\ref{1pr}--\ref{3pr}, one can verify that Eq.~\eqref{G1} holds if~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{2}{\beta}$. Since~$\tilde{\lambda}=\max\left(\left|1-\alpha n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i\right|,\left|1-\beta n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i \right|\right)$, when~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{1}{\beta}$,~$\tilde{\lambda}=1-\alpha n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i$. For~$0<n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i<\frac{1}{\beta}$ to hold, it suffices to require~$0<\overline{\eta}<\frac{1}{n\beta\boldsymbol{\pi}_r^\top\boldsymbol{\pi}_c}$. In light of Lemma~\ref{rho}, we solve for the range of the step-sizes and a positive vector~$\boldsymbol{\epsilon}=\left[\epsilon_1,\epsilon_2,\epsilon_3\right]^\top$
such that the following inequality holds:
\begin{align}\label{eta}
\left[
\begin{array}{ccc}
\sigma_A+a_1\overline{\eta} & a_2\overline{\eta} &a_3\overline{\eta} \\
a_4\overline{\eta} & 1-\alpha n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i & a_5\overline{\eta}\\
a_6+a_7\overline{\eta}& a_8\overline{\eta} & \sigma_B+a_{9}
\overline{\eta}
\end{array}
\right]
\left[
\begin{array}{ccc}
\epsilon_1 \\
\epsilon_2\\
\epsilon_3
\end{array}
\right]
<
\left[
\begin{array}{ccc}
\epsilon_1 \\
\epsilon_2\\
\epsilon_3
\end{array}
\right],
\end{align}
which is equivalent to the following set of inequalities:
\begin{align}
\left\{
\begin{array}{lll}\label{eta2}
(a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3)\overline{\eta}&<&\epsilon_1(1-\sigma_A), \\
(a_4\epsilon_1+a_5\epsilon_3)\overline{\eta}-\epsilon_2\alpha n\sum_{i=1}^{n}[\boldsymbol{\pi}_r]_i[\boldsymbol{\pi}_c]_i\eta_i&<&0, \\
(a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3)\overline{\eta}&<&(1-\sigma_B)\epsilon_3-\epsilon_1a_6.
\end{array}
\right.
\end{align}
Since the right-hand side of the third inequality in Eqs.~\eqref{eta2} has to be positive, we have that:
\begin{equation}\label{e1}
0<\epsilon_1 < \frac{(1-\sigma_B)\epsilon_3}{a_6}
\end{equation}
In order to find the range of~$\epsilon_2$ such that the second inequality holds, it suffices to find the range of~$\epsilon_2$ such that the following inequality holds:
$$(a_4\epsilon_1+a_5\epsilon_3)\overline{\eta}-\epsilon_2\alpha n[\boldsymbol{\pi}_r]_-[\boldsymbol{\pi}_c]_-\overline{\eta}<0,$$
where~$[\boldsymbol{\pi}_r]_-$ and~$[\boldsymbol{\pi}_c]_-$ are respectively the smallest entry in~$\boldsymbol{\pi}_r$ and~$\boldsymbol{\pi}_c$.
Therefore, as long as
\begin{equation}\label{e2}
\epsilon_2 > \frac{a_4\epsilon_1+a_5\epsilon_3}{\alpha n[\boldsymbol{\pi}_r]_-[\boldsymbol{\pi}_c]_-},
\end{equation}
the second inequality in Eqs.~\eqref{eta2} holds. The next step is to solve the range of~$\overline{\eta}$ from the first and third inequality in Eqs.~\eqref{eta2}. We get
$$\overline{\eta} < \min\left\{~\frac{\epsilon_1(1-\sigma_A)}{a_1\epsilon_1+a_2\epsilon_2+a_3\epsilon_3},~\frac{(1-\sigma_B)\epsilon_3-\epsilon_1a_6}{a_7\epsilon_1+a_8\epsilon_2+a_9\epsilon_3}~\right\},$$
where the range of~$\epsilon_1$ and~$\epsilon_2$ is given in~Eq.~\eqref{e1} and Eq.~\eqref{e2} respectively and~$\epsilon_3$ is an arbitrary positive constant. The Theorem follows.
\end{proof}
An important implication of Theorem~\ref{thm2} is that when the network of agents uses uncoordinated step-sizes, each step-size is not necessarily positive. In fact, some step-sizes could be negative or zero as long as these step-sizes satisfy~Eq.~\eqref{eta_range}. Therefore, the strategy of uncoordinated step-sizes offers larger flexibility in choosing step-size at each agent.
\end{comment}
\section{Numerical Experiments}\label{s4}
We consider a binary classification problem in the distributed setting, where we use the logistic loss to train a linear classifier. Each agent~$i$ has access to~$m_i$ training samples,~$(\mathbf{c}_{ij},y_{ij})\in\mathbb{R}^p\times\{-1,+1\}$, where~$\mathbf{c}_{ij}$ contains the~$p$ features of the~$j$th training sample at agent~$i$ and~$y_{ij}$ is the corresponding binary label. Due to privacy concerns, agents do not share their training data with each other. In order to use the entire data set for training, the network of agents cooperatively solves the following distributed logistic regression problem:
\begin{align}
\underset{\mathbf{w}\in\mathbb{R}^p,b\in\mathbb{R}}{\operatorname{min}}F(\mathbf{w},b)
=\sum_{i=1}^n\sum_{j=1}^{m_i}{\rm ln}\left[1+\exp\left(-\left(\mathbf{w}^\top\mathbf{c}_{ij}+b\right)y_{ij}\right)\right] \nonumber
+\frac{\xi}{2}\|\mathbf{w}\|_2^2\nonumber,
\end{align}where the private function at each agent,~$i$, is given by:
\[
f_i(\mathbf{w},b)=\sum_{j=1}^{m_i}{\rm ln}\left[1+\exp\left(-\left(\mathbf{w}^\top\mathbf{c}_{ij}+b\right)y_{ij}\right)\right]
+\frac{\xi}{2n}\|\mathbf{w}\|_2^2.
\]
In our setting,{\color{black}~$n=8$} and~$p=5$. The feature vectors,~$\mathbf{c}_{ij}$'s, are Gaussian with zero mean and variance~$2$. The binary labels are randomly generated from a Bernoulli distribution. We first compare the performance of the proposed algorithm with ADD-OPT/Push-DIGing~\cite{add-opt,opdirect_nedicLinear}, FROST~\cite{xin2018fast}, and subgradient-push{\color{black}~\cite{opdirect_Tsianous,opdirect_Nedic}} over the leftmost directed graph,~$\mathcal{G}_1$, shown in Fig.~\ref{g}. The simulation results are shown in the left plot of Fig.~\ref{s}. Next, we evaluate the proposed algorithm on the three different directed graphs,~$\mathcal{G}_1,\mathcal{G}_2,\mathcal{G}_3$, shown in Fig.~\ref{g}, where each graph to the right has a few more edges compared to the one on its left. The simulation results are shown in the right plot of Fig.~\ref{s}. In both cases, we plot the average of the residuals at each agent,{\color{black}~$\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_i(k)-\mathbf{x}^*\|_2$}. We note that the proposed linear algorithm achieves a geometric (linear on the log-scale) convergence speed comparable to other fast algorithms over directed graphs, but with less computation and communication. These simulations confirm the theoretical findings in this letter.
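A minimal sketch of this experiment (our own illustrative reconstruction in Python/NumPy; the communication graph, the local sample sizes~$m_i$, the regularization parameter~$\xi$, and the step-size are assumptions rather than the exact setup behind the figures) runs the updates of Eqs.~\eqref{alg1} with the logistic-loss gradients:
\begin{verbatim}
import numpy as np

np.random.seed(3)
n, p, m_i, xi, eta, iters = 8, 5, 10, 10.0, 1e-3, 5000

# Synthetic local data: features with zero mean and variance 2, labels in {-1,+1}.
C = np.sqrt(2.0) * np.random.randn(n, m_i, p)
Y = np.random.choice([-1.0, 1.0], size=(n, m_i))

def grad_fi(i, w, b):
    """Gradient of f_i(w,b) = sum_j ln(1+exp(-(w'c_ij+b) y_ij)) + (xi/2n)||w||^2."""
    z = -Y[i] * (C[i] @ w + b)                    # shape (m_i,)
    s = 1.0 / (1.0 + np.exp(-z))                  # sigmoid(z)
    gw = -(C[i] * (s * Y[i])[:, None]).sum(axis=0) + (xi / n) * w
    gb = -(s * Y[i]).sum()
    return np.concatenate([gw, [gb]])

# Directed graph: agent i receives from {i, i-1, i-2} (mod n); every out-degree is 3.
A = np.zeros((n, n)); B = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i - 2) % n):
        A[i, j] = 1.0 / 3.0                       # row-stochastic (receiver-chosen)
        B[i, j] = 1.0 / 3.0                       # column-stochastic (1/out-degree)

X = np.zeros((n, p + 1))                          # row i stacks (w_i, b_i)
G = np.array([grad_fi(i, X[i, :p], X[i, p]) for i in range(n)])
Yv = G.copy()                                     # y_i(0) = grad f_i(x_i(0))

for k in range(iters):
    X_new = A @ X - eta * Yv                      # Eq. (1a)
    G_new = np.array([grad_fi(i, X_new[i, :p], X_new[i, p]) for i in range(n)])
    Yv = B @ (Yv + G_new - G)                     # Eq. (1b)
    X, G = X_new, G_new

# Consensus error and norm of the average gradient at the mean iterate;
# both should be small for a sufficiently small step-size.
avg = X.mean(axis=0)
avg_grad = np.mean([grad_fi(i, avg[:p], avg[p]) for i in range(n)], axis=0)
print(np.linalg.norm(X - avg), np.linalg.norm(avg_grad))
\end{verbatim}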
\begin{figure}[!h]
\centering
\subfigure{\includegraphics[width=1.8in]{Graph1.pdf}}
\hspace{0.5cm}
\subfigure{\includegraphics[width=1.8in]{Graph2.pdf}}
\subfigure{\includegraphics[width=1.8in]{Graph3.pdf}}
\caption{Strongly-connected but unbalanced directed graphs.}
\label{g}
\end{figure}
\begin{figure}[!h]
\centering
\subfigure{\includegraphics[width=2.5in]{alg_comp1.pdf}}
\hspace{0.5cm}
\subfigure{\includegraphics[width=2.5in]{graph_comp1.pdf}}
\caption{(Left) Comparison across different algorithms. (Right) Proposed algorithm over different graphs. In both cases, we plot the average of the residuals at each agent,{\color{black}~$\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_i(k)-\mathbf{x}^*\|_2$}.}
\label{s}
\end{figure}
\section{Conclusions}\label{s5}
In this letter, we describe a linear distributed algorithm for optimization over directed graphs that can be seen as a generalization of earlier work over undirected graphs. Under the assumptions that the objective functions are strongly-convex and have Lipschitz-continuous gradients, the proposed algorithm achieves geometric convergence to the global optimal solution. Our analysis is based on a novel approach where we establish simultaneous contractions of both row- and column-stochastic matrices under arbitrary norms. We then use an elegant result from nonnegative matrix theory to develop the conditions for convergence.
\bibliographystyle{IEEEbib}
\section{Introduction}
Ontology, loosely defined as ``\textit{the study of what there is}'' \citep{StanfordOntology}, studies questions of the existence of entities, their properties, and the relation between the two \citep{StanfordOntology}. So-called \textit{ontological arguments} traditionally are proofs of the existence of god, deducing this conclusion from a set of properties attributed to the entity `god'\footnote{See St. Anselm's proof \citep{Canterbury} for the most well-known example, or \cite{Goedel} for a recent presentation of such a proof originally devised by Kurt G\"{o}del.}. In this article we apply the same technique to show the existence of so-called \textbf{Black Swan} events.
The notion of Black Swan events, originally introduced in \cite{BlackSwan2005}, has been popularized by a series of popular science books \citep{Incerto}. Its formalization is work in progress by \cite{SilentRisk}. The name makes reference to the so-called \textbf{problem of induction}, where induction is defined by the Oxford English Dictionary as ``\textit{the process of inferring a general law or principle from the observation of particular instances}'' \citep{OED}. The problem is attributed to David Hume, who stated that such arguments cannot be made rigorous by deductive reasoning alone, given the lack of a justification for the assumption that yet unobserved entities will share the same properties as those already observed \citep{Hume}. The solution to the problem that this causes for scientific reasoning proposed by \cite{Popper} is to replace induction with falsification, the process of continuously trying to find empirical evidence against the theses of a scientific theory \citep{StanfordInduction}. This method was later rejected by critics as relying on a ``\textit{whiff}'' (some extent) of inductivism itself \citep{NewtonSmith1981,Salmon1981}.
The need for induction as part of the process of scientific discovery was already discussed by Aristotle, who argued that scientists should infer explanatory principles from phenomena in order to deduce further statements about them. Aristotle's method was generally accepted by medieval thinkers and many versions of such methods were presented by philosophers including Roger Bacon, Duns Scotus and William of Ockham, amongst others. The difficulty of arriving at general truths instead of accidental generalizations was generally understood by these authors \citep{Losee2001}. A common way of presenting this problem is to point out that the fact that all swans observed hitherto (in Europe) were white could lead an observer to induce that all swans are white -- a case of accidental predication that Duns Scotus sought to avoid by stating that the most that could be inferred from observations was their ``\textit{aptitudinal union}'', in this case that swans \textit{could} be white \citep{Losee2001}. In this sense, the discovery of a species of black swans in Australia -- \textit{cygnus atratus} -- during the voyages of the 17th century is a good exemplification of this problem. First accounts of such sightings by the Dutch skipper Antony Caen in 1636 were met with skepticism back home, until Willem de Vlamingh brought actual specimens back to the continent over 60 years later \citep{cygnus}. \cite{BlackSwan2005} cites this historical context as a reason for choosing \textit{cygnus atratus} as the namesake for events with large impact, incomputable probabilities, and surprise effect properties.
The theory of Black Swan events, under development by \cite{SilentRisk}, has already made a significant impact on popular language and general media. In this article we undertake to demonstrate that the occurrence of such events is in fact implied by their definition. Several authors have written about Black Swan events from a statistical and risk (management) theory perspective (apart from the already cited works, further examples include e.g. \cite{nafday2009strategies,NAFDAY2011108,HILAL20112374,Taleb2012,Aven2013}). \cite{BlackSwan2005,Yudkowsky2008} discuss Black Swans in the context of human cognition and its limitations. However, there are -- to the best of our knowledge -- no works looking at Black Swans from an analytical perspective.
\section{Definitions} \label{sec:Definitions}
A Black Swan is defined by \cite{SilentRisk} as ``\textit{a) a highly unexpected event} [f]\textit{or a given observer} [that] \textit{b) carries large consequences, and c) is subjected to ex-post rationalization}''.
In order to formalize this definition, we define a set $X$ of all events. We define the predicate $\chi (x)$ to denote that the event $x \in X$ can be \textbf{imagined} (by a given observer). We will discuss further below what it means in the context of our model to be able to imagine an event. In order to discuss the occurrence or non-occurrence of events we further define $\varphi (x)$ to denote an event that \textbf{occurs}. We make very few assumptions about $X$, which are summarized by the axioms presented in section \ref{sec:Reasoning}.
Note that we do not include a notion of time in our theory, i.e. the chronological ordering of the occurrence of an event and its imagination by an observer is of no importance. We associate the non-imaginability of an event with the property of being highly unexpected (denoted (a) in the definition by \cite{SilentRisk}). We will see later that the property of large consequences (b) follows from this definition. We do not include ex-post rationalization (c) in our definition, as we do not consider it essential to the ontological nature of Black Swans. If we denote by $B(x)$ the property that event $x$ is a Black Swan, then its definition in our theory reads:
\[
B(x) \Leftrightarrow \neg \chi(x)
\]
In order to be able to discuss the `size' of consequences we introduce the partial order $<$ which satisfies the following axioms for all elements $a,b,c \in X$:
\begin{itemize}
\item Irreflexivity: $\neg (a < a)$
\item Transitivity: $(a < b) \land (b < c) \Rightarrow a < c$
\end{itemize}
Other than assuming a strict rather than a weak partial order, which we do for technical convenience here, this relation is consistent with von-Neumann-Morgenstern utility theory \citep{Neumann}. It shares the important transitivity property; an analogue of continuity under a strict partial order can be formulated; completeness and independence are not required but are fully consistent with our theory. In this spirit, we will sometimes treat the notion of event $y$ having greater consequences than event $x$, i.e. $x<y$, as semantically equivalent to $y$ being `worse' (i.e. yielding lower utility for a given observer) than $x$. This semantic interpretation does not affect the generality of the argument, and we stress that our argument does not require that the size of an event is in any way related to its utility for an observer. This interpretation does, however, help to emphasize the particular importance of unimaginable events when they are associated with negative outcomes (for a given observer).
Our definition of Black Swan events may be seen as stricter than that of \cite{SilentRisk} in one sense, as non-imaginability can be seen as a stronger requirement than being highly unexpected. It may also be seen as wider in the sense that it does not require ex-post rationalization. In any case, we consider it a highly important class of events, as becomes clear when it is viewed in the context of decision-making under risk and uncertainty.
So-called \textbf{Knightian uncertainty} refers to the non-quantifiability of phenomena under conditions of uncertainty \citep{Knight1921}. Uncertainty in decision theory is often interpreted in the sense that the probability distribution over future events is unknown (i.e. not allowing for probabilistic calculations that would be possible under conditions of risk), while the set of possible future states and their respective payoffs are still known. This allows for the application of non-probabilistic computation models such as Wald's maximin-criterion or similar techniques \citep{Wald1939,Wald1945}. Black Swan events, as we consider them here, require a stronger sense of uncertainty, whereunder not even the full set of potential events or their payoffs are available to the decision maker. This notion may be seen as closer to the original definition of uncertainty by \cite{Knight1921}, which states that ``\textit{We} [...] \textit{restrict the term `uncertainty' to cases of the non-quantitive type.}''. It should be noted, however, that the emphasis here lies on a different, arguably even stronger point: the crux of Black Swan events, as defined herein, is not their non-quantifiability, but the fact that they cannot be considered in the decision-making process, regardless of whether quantitative or any other methods are used. \textbf{What we show in this paper is that there exist events which fundamentally cannot be taken into account when making decisions and which occur nonetheless}.
In order to formalize this idea, we define a standard computational model of decision-making comprising the following elements:
\begin{itemize}
\item A set $A$ of actions which are available to the agent.
\item A set $P$ of information associated to the events (such as probabilities, or the property of occurrence).
\item A utility map $\Gamma$ which maps every pair of events and actions to an outcome for a given agent: $\Gamma\colon (A,X) \rightarrow O$. Note that if we were to take the axioms of \cite{Neumann}, $\Gamma$ could be made consistent with a non-strict version of the size-relation $<$ introduced before, as shown in their proof. This is, however, not required for the point that we wish to make.
\item A decision map $\Phi$ which maps a vector of outcomes and a vector of associated information to the set of actions: $\Phi\colon (O^n,P^n) \rightarrow A$, where $n$ is the cardinality of the Cartesian product $A \times X$. In accordance with the concept of uncertainty defined above we may assume -- without loss of generality -- that there is no variation in the set of associated information $P$ and write $\Phi(O^n)$ for notational convenience.
\end{itemize}
We say that an event being non-imaginable for an agent is equivalent to her not being able to map it to an action.
We consider this as being different from deciding not to react to an event, because the latter entails finishing the computation $\Gamma$ of an outcome, which can then be mapped to whatever action would have been chosen without knowledge of the event (one may also think of $A$ containing another response labeled `Do Nothing'). We make this distinction because it facilitates the discussion of the computational aspects of decision-making, which we present in section \ref{sec:Horatio}, but note that it is not essential to the validity of our argument. For now we present the definition of the decision map, which states that a given set of events has to contain at least one imaginable event in order for it to be mapped to an action:
\[
\Phi(\Gamma^n(A,X)) = \begin{cases} \uparrow & \mbox{if } \forall x \in X \colon \neg \chi(x) \\ a \in A & \mbox{otherwise} \end{cases},
\]
where $\Gamma^n(A,X)$ denotes the element-wise application of the $\Gamma$ map to every element in the Cartesian product $A \times X$, and $\uparrow$ denotes the fact that the computation has not terminated.
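To make the model concrete, the following toy sketch (our own illustration, not part of the formal development; the maximin rule is an arbitrary placeholder for the agent's decision logic) mimics the behaviour of $\Phi$: it never halts when no event in the input set is imaginable.
\begin{verbatim}
# Toy illustration of the decision map Phi: if no event is imaginable,
# the computation does not terminate (the "up arrow" case).
def Phi(events, actions, imaginable, Gamma):
    if not any(imaginable(x) for x in events):
        while True:          # non-termination stands in for the up arrow
            pass
    feasible = [x for x in events if imaginable(x)]
    # Placeholder decision rule (an assumption): maximin over outcomes.
    return max(actions, key=lambda a: min(Gamma(a, x) for x in feasible))
\end{verbatim}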
\section{Reasoning} \label{sec:Reasoning}
Our reasoning can be compactly summarized as follows:
\textbf{Axiom 1}: \\There exists (at least) one event that occurs that is so bad that an event with greater consequences cannot be imagined:
\[
\exists x (\varphi(x) \land \forall y (x<y \rightarrow \neg \chi(y)))
\]
\textbf{Axiom 2}: \\No matter how bad an event that occurs is, there exists an event with even greater consequences which occurs:
\[
\forall x (\varphi(x) \rightarrow \exists y (\varphi(y) \land x < y))
\]
\textbf{Theorem}: \\
Black Swan events occur:
\[
\exists x (B(x) \land \varphi(x))
\]
In \ref{sec:Proof}, we present a formal proof of the above argument using a Hilbert-style system. Here we give a proof via semantic argument.
\begin{proof} \leavevmode
By axiom 1, there exists an element $x \in X$ such that:
\begin{itemize}
\item[1.] $\varphi(x)$
\item[2.] $\forall y (x<y \rightarrow \neg \chi(y))$.
\end{itemize}
By 1. and axiom 2 we obtain that there exists $y \in X$ such that $\varphi(y)$ and $x<y$. By the latter, and by 2., we obtain $\neg \chi(y)$. Therefore we have obtained $y \in X$ such that $\neg \chi(y) \land \varphi(y)$, that is $B(y) \land \varphi(y)$. This shows that axioms 1 and 2 imply our Theorem.
\end{proof}
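As a complement to the Hilbert-style proof in \ref{sec:Proof}, the same argument can also be rendered as a short machine-checked sketch (a Lean 4 formalization with ad-hoc predicate names, given here only for illustration):
\begin{verbatim}
-- Axioms 1 and 2 imply that Black Swan events occur.
theorem black_swans_occur {X : Type}
    (occurs imaginable : X → Prop) (lt : X → X → Prop)
    (ax1 : ∃ x, occurs x ∧ ∀ y, lt x y → ¬ imaginable y)
    (ax2 : ∀ x, occurs x → ∃ y, occurs y ∧ lt x y) :
    ∃ y, ¬ imaginable y ∧ occurs y :=
  match ax1 with
  | ⟨x, hx_occ, hx_all⟩ =>
    match ax2 x hx_occ with
    | ⟨y, hy_occ, hxy⟩ => ⟨y, hx_all y hxy, hy_occ⟩
\end{verbatim}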
We now move on to lay out the rationale underpinning our axioms.
\subsection{Axiom 1} \label{sec:Horatio}
Axiom 1 states that all events greater than a particular event (which occurs) are beyond our imagination, regardless of whether they occur or not. We call this argument the \textbf{Horatio-Principle}, after the following quote:
\begin{centering}
\textit{``There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.''}\\
\end{centering}
\hspace{12cm}{- \textbf{Hamlet (1.5.167-8)}}
It has been argued that Hamlet in this quote is talking about general limitations of human thought rather than trying to insult Horatio's intellect \citep{HamletBradley}, and some versions of the text even talk about `our' instead of `your' philosophy\footnote{The Folio version of Hamlet uses `our', while the first and second Quarto versions use `your'. While it is unclear whether this difference stems from an editor's mistake, it is commonly understood that the usually adopted `your' is meant as a general address and not as a direct attack on Horatio \citep{HamletThompson}.}. The Horatio-Principle here states the existence of events that are `not dreamt of' in the `philosophy' of a given observer, i.e. which she cannot imagine.
In order to justify the Horatio-Principle, we refer to the computation model described by the decision map $\Phi$ introduced in section \ref{sec:Definitions}. As stated in the definition, the notion of being able to imagine an event is equivalent to being able to come up with a response to that event. The decision making process could thus be viewed as a Turing machine \citep{Turing1937} that tries to compute a response to a given set of events. In this case $A$ would be the set of terminating states of the Turing machine, $X$ would be the set of alphabet symbols, which here also serves as the set of input symbols, and $\Phi$ the transition function. As stated by the Halting problem \citep{Turing1937}, no such machine could be guaranteed to ever reach a terminating state, i.e. to arrive at a decision upon a given set of events. The Horatio principle states (i) that there exist events for which a response is not computable, which is motivated by the Halting problem. Furthermore (ii), it states that there exists a size threshold for the consequences of an event beyond which this holds for all events (i.e. that the computability of an event is to some extent proportional to the size of its outcome). And lastly (iii), it states that there exists an event which does not exceed the size threshold and which has the property of occurring. Hence, while the Halting problem does not fully extend to the Horatio principle, we consider it a strong motivation thereof.
One may be led to think that the possibility of deliberate \textbf{Antifragility} \citep{Incerto} might induce a type of Russell paradox \citep{Russell} in our system: we can imagine the occurrence of a Black Swan event and thus adjust our strategies accordingly. Antifragile strategies allow us to benefit from such an occurrence, even though we would not be able to describe the nature of the event in advance. Our set of imaginable events then includes the generic occurrence of some unimaginable event; imagination in this case does not specify the event itself, which thus still lies outside the set of imaginable events. According to the Horatio-Principle, this set is still non-empty. In other words, even for (seemingly) antifragile strategies, there exist events with large consequences for which the outcome cannot be known before they occur.
\subsection{Axiom 2}
Axiom 2 states that for every event that occurs, an event with greater consequences occurs as well. We motivate it by \textbf{Murphy's law}, the often humorously stated aphorism that ``\textit{anything that can go wrong will go wrong}'', together with the assumption that the universe of events is open, in the sense that every event is exceeded by some other event. Formally, Axiom 2 can be derived from Murphy's law, as expressed in our system, by using the additional assumption that the set of events $X$ is open in this sense (the formal proof is left to the reader; a machine-checked sketch is given after the list below):
\begin{itemize}
\item $\forall x,y \,(\varphi(x) \land (x < y) \rightarrow \varphi(y))$ \\ Murphy's law
\item $\forall x \exists y \,(x < y)$ \\ Open universe
\item $\forall x \,(\varphi (x) \rightarrow \exists y \,(\varphi (y) \land (x<y)))$ \\ Axiom 2
\end{itemize}
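The omitted derivation is elementary; a machine-checked sketch (again in Lean 4, with ad-hoc names, independent of the Hilbert-style system used in \ref{sec:Proof}) reads:
\begin{verbatim}
-- Murphy's law plus the open-universe assumption imply Axiom 2.
theorem axiom2_of_murphy {X : Type}
    (occurs : X → Prop) (lt : X → X → Prop)
    (murphy : ∀ x y, occurs x ∧ lt x y → occurs y)  -- Murphy's law
    (openU  : ∀ x, ∃ y, lt x y) :                   -- open universe
    ∀ x, occurs x → ∃ y, occurs y ∧ lt x y :=       -- Axiom 2
  fun x hx =>
    match openU x with
    | ⟨y, hxy⟩ => ⟨y, murphy x y ⟨hx, hxy⟩, hxy⟩
\end{verbatim}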
\section{Implications}
In this section we show that it follows as a corollary of our Theorem that every computational model of decision-making is incomplete in the sense that not all events can be taken into account, and that this also concerns events that do occur.
A computational model is said to be \textbf{complete} if the decision map $\Phi$ has the property that for every two sets of events which differ by at least one element $Y,Z\colon (\exists y \in Y\colon y \notin Z) \lor (\exists z \in Z\colon z \notin Y)$ there exist a set of actions $A$ and a map $\Gamma$ such that $\Phi(\Gamma^n(A,Y)) \ne \Phi(\Gamma^n(A,Z))$. This means that a complete decision-model always allows an agent to act differently under different circumstances if this is indicated by her preferences, as expressed by the utility map $\Gamma$. In other words, a complete decision map $\Phi(\Gamma^n(A,X))$ takes into account all events in $X$. A more refined concept of \textbf{completeness with respect to occurring events} only requires this for events that have the property of occurring, which means that for $Y,Z$ such that $\forall y \in Y : \varphi(y)$ and $ \forall z \in Z : \varphi(z) $, we have $\Phi(\Gamma^n(A,Y)) \ne \Phi(\Gamma^n(A,Z))$ if and only if $Y \neq Z$.
Assume for the sake of contradiction that there exists a computational model with a complete decision map $\Phi\colon (O^n,P^n) \rightarrow A$, as defined above. Let $S\subseteq X$ be a set of events which occur and which are not imaginable (i.e. Black Swan events) for a given agent: $S = \{s\colon s \in X \land \varphi(s) \land \neg\chi(s)\}$. It follows from our theorem that the set $S$ is non-empty. It further follows from Axiom 2 that for every event in $S$ there exists an event with greater consequences that occurs as well. The cardinality of $S$ is thus at least $\aleph_0$. Thus there exist two vectors of unimaginable events which differ by at least one element $Y,Z \in S^j \colon (\exists y \in Y\colon y \notin Z) \lor (\exists z \in Z\colon z \notin Y)$. By the definition of $\Phi$, both vectors are not mapped to any response, because the computation will not terminate. Therefore it is impossible to have $\Phi(\Gamma^n(A,Y)) \ne \Phi(\Gamma^n(A,Z))$, contradicting the assumption of the existence of a complete decision-making model with respect to occurring events. The contradiction of the existence of a complete decision-making model can be obtained by setting $S=\{s\colon s \in X \land \neg\chi(s)\}$.
\section{Conclusion}
We developed a first-order deductive system to show that the occurrence of Black Swan events is implied by their definition. We make two assumptions, namely that our imagination is bounded and that the universe of occurring events is an open set, which we call the Horatio Principle and Murphy's law, respectively. We motivate the Horatio principle by showing that under a computational model of human decision-making, the question of whether all events are imaginable can be reduced to the Halting problem. We present a formal proof of our argument using a Hilbert System. We show that it follows as a corollary of this Theorem that every computational model of decision-making is incomplete in the sense that not all events that occur can be taken into account in the decision-making process. When viewed through the lens of decision-making under uncertainty -- as in von-Neumann-Morgenstern utility theory -- we argue that Black Swans entail a stronger sense of uncertainty than Knightian uncertainty because their existence means that even under perfect information no decision criterion -- regardless of whether it is of quantitative nature or not -- can make use of all the information available.
\section*{Acknowledgments}
The idea for this paper developed out of a conversation with Davoud Taghawi-Nejad at the Institute for New Economic Thinking at the Oxford Martin School. We thank Prof. Timothy Williamson for his comments on the paper. We further thank Matthew Deakin and G\"unther Siebenbrunner for useful remarks that have been incorporated into the paper.
\section*{References}\label{sec_References}
\bibliographystyle{apalike}
\section{Introduction}
In the last decade we have witnessed a profound change in the way energy systems are operated.
A new paradigm called demand response is emerging, according to which the energy requirements of a population of users are tuned, by means of incentives, to account for the operational needs of the power grid \cite{albadi2007demand}. Previous works \cite{ma2013decentralized,gan2013optimal} have suggested to model these demand response methods as a game. Therein each player represents a user that needs to optimize his energy consumption over a given period of time, with the objective of minimizing his electricity bill. What couples the users, and thus makes the charging problem a game, is the assumption that the energy price depends at every instant of time on the sum of the energy demand of the whole population.
The seminal paper \cite{ma2013decentralized} shows that the (unique) Nash equilibrium of such a game has desirable properties from the standpoint of the grid operator, in the case of large and homogeneous populations. Under these assumptions, \cite{ma2013decentralized} shows that the equilibrium is socially optimal in the sense that it minimizes the collective electricity bill (including the cost of both flexible and inflexible demand) and fills the overnight demand valley.
\textcolor{black}{As a result, a rich body of literature has focused on devising distributed and decentralized schemes that are numerically efficient, and can be used by the grid operator to coordinate the strategies of the agents to a Nash equilibrium \cite{ma2013decentralized,grammatico:parise:colombino:lygeros:14, dario2015aggregative, gan2013optimal, chen2014autonomous,paccagnan2016distributed}.}
Less attention has been devoted to verifying whether the optimality statement made in \cite{ma2013decentralized} is still valid in the presence of more general cost functions, agent heterogeneity, and realistic charging constraints (e.g., upper bounds on the instantaneous charging rate, different charging windows, ramping constraints).
Nonetheless, this is a fundamental prerequisite for the applicability of the aforementioned coordination schemes.\footnote{
While there are multiple factors impacting the choice of a control scheme, if the Nash equilibria do not possess desirable properties, the grid operator has limited incentive to coordinate the agents to such a strategy profile.
}
\\
\indent
\textcolor{black}{Following \cite{ma2013decentralized}, the efficiency of the Nash equilibrium has been studied in \cite{Gonz2015}, under the assumption of linear price functions. Both \cite{ma2013decentralized} and \cite{Gonz2015} focus on \textit{simplex constraints and homogeneous populations}.
The homogeneity assumption is relaxed in \cite{deori2016nash}, where the authors provide efficiency results similar to those in \cite{ma2013decentralized}, but limited to \emph{linear} price functions and holding in a probabilistic sense.
\emph{Nonlinear price functions} are considered in \cite{deconvergence}, but the efficiency results pertain to the notion of Wardrop equilibrium and the charging constraints are limited to upper bounds.
We observe that all the previous works assume that the price at time $t$ depends only on the consumption at the same time instant.
Finally, we note that \cite{Beaude12} also provides efficiency bounds for charging games, but the setup considered therein is different, in that each agent's decision variable is limited to its starting charging time.}
\textcolor{black}{The aim of this paper is to provide efficiency results for the Nash equilibrium of aggregative charging games under different assumptions involving {\it finite populations} of vehicles, {\it general convex constraints}, {\it non linear price functions} and {\it price dependence on different time instants}}. To do so, we model the charging problem as an aggregative game \cite{jensen2010aggregative}, and study the equilibrium efficiency using the notion of \textit{price of anarchy} ($\poa$). The $\poa$ is a measure introduced in game theory to quantify how much selfish behavior degrades the performance of a given system \cite{koutsoupias1999worst}.
By definition, $\poa\ge1$ and the closer to $1$ the better the overall performance of the system. The result in \cite{ma2013decentralized} can be equivalently stated as the fact that for homogeneous populations with simplex constraints, the $\poa$ converges to~$1$ as the population size grows.
Our main contributions are:
\begin{enumerate}[leftmargin=*]
\vspace*{-1mm}
\item We show that the $\poa$ for charging games with linear price function (that might however depend on all charging instants) and generic convex constraints always converges to $1$, complementing \cite{ma2013decentralized, Gonz2015, deori2016nash, deconvergence};
\item For charging games with generic convex constraints and nonnegative price function that depends only on the instantaneous demand, we show that the $\poa$ converges to $1$ if the price function is a positive pure monomial (i.e., $\alpha z^k$ for some $\alpha, k>0$).
On the contrary, if the price function does not have this form, it is possible to construct a sequence of games whose $\poa$ does not converge to $1$.
\textcolor{black}{In such cases, we show how results for routing games\cite{roughgarden2003price, correa2004selfish} can be used to bound the asymptotic value of the price of anarchy.}
\item In all the previous cases we provide an explicit bound connecting the efficiency of the equilibria with the (finite) number of vehicles in the game. To the best of our knowledge, this is the first result providing a bound on $\poa$ as a function of the population size, for charging games with general convex constraints and price functions.
\end{enumerate}
\subsubsection*{\bf \emph{Organization}} Section \ref{sec:PF} includes the game formulation and some preliminary notions. In Section \ref{sec:poalarge} we define the efficiency metric used throughout this manuscript, and present the main results for linear and nonlinear price functions. Section \ref{sec:ATP} focuses on the application of charging a fleet of electric vehicles. All the proofs are reported in the Appendix.
\subsubsection*{\bf \emph{Notation}}
$\R^n_{\ge0}$ and $\R^n_{>0}$ denote the elements of $\R^n$ whose components are non negative and positive;
$\mathbb{0}_n\in\mb{R}^{n}$ (resp. $\mathds{1}_n$) is the column vector of zero (resp. unit) entries.
Given $A\in\mathbb{R}^{n\times n}$ not necessarily symmetric, $A\succ0$ $\Leftrightarrow$ $x^\top A x>0,$ $\forall x\neq 0$.
Given $g(x):\mathbb{R}^n \rightarrow \mathbb{R}^m$ we define the matrix $\nabla_{x} g(x) \in \mathbb{R}^{n\times m}$ component-wise as
$[\nabla_x g(x)]_{i,j}\coloneqq \frac{\partial g_j(x)}{\partial x\i}$.
An operator $F:\mc{K}\subset\R^n \rightarrow \R^n$ is called $\alpha$ strongly monotone if $(F(x)-F(y))^\top (x-y)\ge\alpha||x-y||^2$ for some $\alpha>0$, $\forall x,y\in\mc{K}$; $\mathcal{U}[a,b]$ is the uniform distribution on the real interval $[a, b]$.
\vspace*{-3mm}
\section{Problem formulation}
\label{sec:PF}
Let us consider a population of $M$ agents, each choosing an action $x\i\!\in\!\mc{X}\i\!\subseteq\!\mb{R}^n$. Agent $i$ incurs the cost $J\i(x\i,\sigma(x)):\mb{R}^n\times \mb{R}^n\rightarrow \mb{R}$ that depends on his own action $x\i \!\in\! \mc{X}\i$ and on the average action $\sigma(x)\!\coloneqq\!\frac{1}{M}\!\sum_{j=1}^{M}x\j$ of the population, as typical of aggregative games~\cite{jensen2010aggregative}. We assume that
\vspace*{-1mm}
\begin{equation}
J\i(x\i,\sigma(x)) \coloneqq p(\sigma(x)+d)^\top x\i\,,
\label{eq:costs}
\vspace*{-1mm}
\end{equation}
with $d\in\mathbb{R}_{\ge0}^n$ and $p:\R^n\to\R^n$.
The cost in~\eqref{eq:costs} can be used to describe applications where $x^i$ denotes the usage level of a certain commodity,
whose per-unit cost $p$ depends on the average usage level of the other players plus some inflexible normalized usage level $d$~\cite{ma2013decentralized,chen2014autonomous}. We define $\mc{X} \coloneqq \mc{X}^1\times\ldots\times\mc{X}^M$ and identify such a game with the tuple
\vspace*{-1mm}
\begin{equation}
\label{eq:gameG}
\mathcal{G}\coloneqq\{M,\{\mathcal{X}^i\}_{i=1}^M,p\}.
\vspace*{-4mm}
\end{equation}
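For concreteness, the aggregative cost in~\eqref{eq:costs} can be evaluated as in the following sketch (our own illustration; the price map $p$, the demand $d$, and the agents' actions are left abstract):
\begin{verbatim}
import numpy as np

def cost_i(x_i, x_all, p, d):
    # J^i(x^i, sigma(x)) = p(sigma(x) + d)^T x^i, sigma being the average action
    sigma = np.mean(np.stack(x_all), axis=0)
    return p(sigma + d) @ x_i
\end{verbatim}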
\subsection{Nash, Wardrop equilibrium and social optimizer}
\vspace*{-1mm}
We consider two notions of equilibrium for the game $\mc{G}$.
\vspace*{-2mm}
\begin{definition}[Nash equilibrium \cite{nash1950equilibrium}]\label{def:NE}
A set of actions $x\NE = [x^1\NE; \dots; x^M\NE] \in \R^{M n}$ is a Nash equilibrium of the game $\mathcal{G}$ if $x\NE\in\mc{X}$ and for all $ i\in\{1,\dots,M\}$ and all $ x\i \in\mc{X}\i$
\vspace*{-1mm}
\begin{equation}
J\i(x\i\NE,\sigma(x\NE)) \le J\i\biggl( x\i,\frac 1M x\i + \frac 1M \sum_{j \neq i} x\j\NE \biggr ).
\label{eq:def_NE}
\vspace*{-2mm}
\end{equation}
\end{definition}
Observe that on the right-hand side of \eqref{eq:def_NE} the variable $x^i$ appears in both arguments of $J^i(\cdot,\cdot)$.
As the population size grows, the contribution of an agent to the average decreases. This motivates the definition of Wardrop equilibrium.
\vspace*{-1mm}
\begin{definition}[Wardrop equilibrium \cite{wardrop1952road,Gentilearxiv17}]\label{def:WE}
A set of actions $x\WE = [x^1\WE; \dots; x^M\WE] \in \R^{M n}$ is a Wardrop equilibrium of $\mathcal{G}$ if $x\WE\in\mc{X}$, and for all $i\in\{1,\dots,M\}$, and all $x\i\in\mc{X}\i$,
\vspace*{-1mm}
\begin{equation}
J\i(x\i\WE,\sigma(x\WE)) \le J\i ( x\i,\sigma(x\WE) )\,.
\label{eq:def_WE}
\vspace*{-2mm}
\end{equation}
\end{definition}
\noindent Note that in this latter definition the average is fixed to $\sigma(x\WE)$ on both sides of \eqref{eq:def_WE}.
\noindent
Consequently, a feasible set of actions is a Wardrop equilibrium if no agent can improve his cost, assuming that the average action is \emph{fixed}.
\begin{definition}[Social optimizer]
A set of actions $x\SO = [x^1\SO; \dots; x^M\SO] \in \R^{M n}$ is a social optimizer of $\mathcal{G}$ if $x\SO\in\mc{X}$ and it minimizes the cost
$
J\SO(\sigma(x))\coloneqq p(\sigma(x)+d)^\top(\sigma(x)+d).
$
\end{definition}
\noindent Note that the cost $J\SO$
is the sum of all the players costs, divided by $M$, and the additional term $p(\sigma(x)+d)^\top d$. The reason why the latter term is included is that we want to compute the total cost of buying the commodity both for the flexible ($\sigma(x)$) and inflexible ($d$) users.
This cost was first introduced in \cite{ma2013decentralized} and then used in \cite{Gonz2015, deori2016nash, deconvergence}.
The inflexible usage level is sometimes modeled in the literature \cite{deori2016nash} as an additional player with constraint set $\{x\in\R^n\mid x=d\cdot M\}$, where $d$ is the \emph{normalized} inflexible demand. We do not follow such an approach here because we are interested in large populations and this set is unbounded as $M\rightarrow\infty$.
Throughout the manuscript, we denote the set of achievable average actions by $
\s\coloneqq \bigl\{z\in \R^n ~|~ z\!=\!\frac{1}{M}\sum_{j=1}^M x^j,~x^j\!\in\!\mc{X}^j,~\forall~j=1,\dots,M \bigr\}.
$
\begin{assumption}
\label{A1}
For $i\in\{1,\dots,M\}$, the constraint set $\mathcal{X}\i$ is closed, convex, non empty.
For $z\in \s$, the function $z\mapsto p(z+d)$ is continuously differentiable and strongly monotone while $z\mapsto p(z+d)^\top(z+d)$ is strongly convex.
\vspace*{-1mm}
\end{assumption}
\noindent We denote with $L\SO$, $L_p$ the Lipschitz constant of $J\SO(\cdot)$, $p(\cdot)$, and with $\alpha$ the monotonicity constant of $p(\cdot)$.
%
\vspace*{-4mm}
\section{Price of Anarchy for finite and large populations}
\label{sec:poalarge}
In this section we study the efficiency of equilibria as a function of the population size $M$.
To do so, we consider a sequence of games $(\mc{G}_M)_{M=1}^\infty$ of increasing population size. For fixed $M$, the game $\mc{G}_M$ is played amongst $M$ agents and is defined as in \eqref{eq:gameG} with arbitrary sets $\{\mc{X}\i\}_{i=1}^{M}$. The function $p$ is instead the same for every game of the sequence.
\begin{assumption}
\vspace*{-1mm}
\label{ass:sequence}
There exists a convex, compact set $\mathcal{X}_0\subset\R^n$ s.t. $\cup_{i=1}^M \mathcal{X}^i\subseteq{\mathcal{X}_0}$ for each game $\mc{G}_M$ in $(\mc{G}_M)_{M=1}^\infty$. Moreover, $J^i(x^i,\sigma(x))$ is convex in $x^i\in\mathcal{X}^i$ for all fixed $x^{-i}\in\mathcal{X}^{-i}$, for all $i\in\{1,\dots,M\}$. We let $R\coloneqq \max_{y\in\mc{X}_0}||y||$.
\vspace*{-1mm}
\end{assumption}
For a given a game $\mc{G}_M$, we quantify the efficiency of equilibrium allocations using the notion of price of anarchy~\cite{koutsoupias1999worst}
\begin{equation}\label{eq:poa}
\poa_M \coloneqq \frac{\max_{x_N\in \textup{NE}_M}J\SO(\sigma(x_N)) }{J\SO(\sigma(x\SO))}\,,
\end{equation}
where $\textup{NE}_M\subseteq \mc{X}$ is the set of Nash equilibria of $\mc{G}_M$ and $x\SO$ is a social optimizer of $\mc{G}_M$. The price of anarchy captures the ratio between the cost at the worst Nash equilibrium and the optimal cost; by definition $\poa_M\ge1$.
\textcolor{black}{In the next subsections we study the behavior of $\poa_M$, for three different classes of admissible price functions $p$ (of increasing generality).}
%
\vspace*{-4mm}
\subsection{Linear price function}
Throughout this subsection we consider linear price functions $p$, as detailed in the following.
\vspace*{-1mm}
\begin{assumption}
\label{ass:lin}
The price function $p$ is linear, that is $p(z+d)=C(z+d)$, with $C=C^\top\in\R^{n\times n}$, $C\succ0$.
\vspace*{-1mm}
\end{assumption}
Note that Assumption \ref{ass:lin} implies strong monotonicity of $z\mapsto p(z+d)$ and strong convexity of $z\mapsto p(z+d)^\top(z+d)$, therefore Assumption \ref{ass:lin} is consistent with Assumption \ref{A1}. It is easy to verify that $J^i(x^i,\sigma(x))$ is convex in $x^i$, consistently with Assumption \ref{ass:sequence}.
We remark that, differently from \cite{Gonz2015,deori2016nash}, $C$ is not required to be diagonal.
\vspace*{-1mm}
\begin{theorem}[$\poa_M$ bound and convergence to 1]
\label{thm:lin}
~
\vspace*{-5mm}
\begin{itemize}
\item[a)]{Under Assumption \ref{A1} and \ref{ass:lin}, for any game $\mc{G}_M$ in the sequence, every Wardrop equilibrium $x\WE$ is a social optimizer i.e. $J\SO(\sigma(x\WE))\le J\SO(\sigma(x)),~\forall x\in \mc{X}$.
}
\item[b)]{With the further Assumption \ref{ass:sequence}, for any fixed game $\mc{G}_M$ in the sequence it holds that
\begin{equation}
\textstyle
J\SO(\sigma(x\SO))\le J\SO(\sigma(x\NE))\le J\SO(\sigma(x\SO))+c/{\sqrt{M}}\,,
\label{eq:boundjlin}
\end{equation}
with $c=RL\SO\sqrt{2L_p\alpha^{-1}}$ constant, $x\SO$ social optimizer.\\
{Thus, if there exists $\hat J\ge 0$ s.t. $J\SO(\sigma(x\SO))>\hat J$ for every game in the sequence $(\mc{G}_M)_{M=1}^\infty$, one has}
{\[
1\le \poa_M\le 1+c/\bigl(\hat J\sqrt{M}\bigl)~~~\text{and}~~
\lim_{M\to\infty}\poa_M=1\,.\]}}
\end{itemize}
\end{theorem}
The proof is reported in the Appendix.
\vspace*{-1mm}
\begin{remark}
The previous theorem extends the results of \cite{ma2013decentralized,Gonz2015,deori2016nash,deconvergence} simultaneously allowing for arbitrary convex constraints, finite populations, and non diagonal price function. Note that the condition $J\SO(\sigma(x\SO))>\hat J\ge0$ is merely technical and required to properly define $\poa_M$. This condition is trivially satisfied in the applications when, e.g., every agent requests an amount of charge bounded away from zero. Even if the latter condition does not hold, the cost at any Nash equilibrium converges to the minimum cost as $M\!\to\!\infty$, see~\eqref{eq:boundjlin}.
\end{remark}
\vspace*{-5mm}
\subsection{Non linear homogeneous price function}
In this section we consider $p(z+d)$ to be a nonlinear function, and assume its $t$-th component to depend only on the $t$-th component $z_t+d_t$, for all $t\in\{1,\dots,n\}$. This models, e.g., cases where the unit cost of electricity at every instant of time depends on the total consumption at that same instant.
\begin{assumption}
\label{ass:nonlin}
The price function $p$ takes the form
\vspace*{-1mm}
\[
p(z+d)=
\begin{bmatrix}
f(z_1+d_1),\hdots,
f(z_n+d_n)
\end{bmatrix}^\top,
\vspace*{-1mm}
\]
with $f(y):\R_{>0}\rightarrow\R_{>0}$.
Further $\mc{X}\i\subseteq \R^n_{\ge0}$ and $d\in\R^n_{>0}$\,.
\end{assumption}
If $f(y)$ is not linear, a simple check shows that, in general, $\nabla_{x^j}(\nabla_{x^i} J^i(x^i,\sigma(x)))\neq\nabla_{x^i}(\nabla_{x^j} J^j(x^j,\sigma(x)))$ when $i\neq j$. Consequently, the game is not potential \cite[Theorem 1.3.1]{facchinei2007finite}. Hence, methods to bound the $\poa$ based on the existence of an underlying potential function \cite{Gonz2015, deori2016nash} cannot be used here.
\begin{theorem}[$\poa_M$ convergence and counterexample]
\label{thm:polypoa}
Suppose that Assumptions \ref{A1}, \ref{ass:sequence} and \ref{ass:nonlin} hold. Further assume that $J\SO(\sigma(x\SO))>\hat J$ for some $\hat J \ge 0$, for every game in $(\mc{G}_M)_{M=1}^\infty$.
\begin{itemize}
\item[a)] If $f(y)=\alpha y^k$ with $\alpha>0$ and $k>0$, it holds
\[
1\le \poa_M\le 1+c/\bigl(\hat J\sqrt{M}\bigl)~~~\text{and}~~
\lim_{M\to\infty}\poa_M=1\,,
\vspace*{-2mm}
\]
\vspace*{2mm}
with $c = RL\SO\sqrt{2 L_p \alpha^{-1}}$ constant.
\item[b)] For $n\ge 2$, if $f(y)$ satisfies the assumptions, but does not take the form $\alpha y^k$ for some $\alpha>0$ and $k>0$, it is possible to construct a sequence of games $(\mc{G}_M)_{M=1}^\infty$ for which $\lim_{M\to\infty}\poa_M>1$.
\end{itemize}
\end{theorem}
The proof is reported in the Appendix. Therein, the counterexample relative to b) is constructed using $\mc{X}\i=\bar{\mc{X}}$; in other words, our impossibility result also holds for homogeneous populations. This is not in contrast with the results in \cite{ma2013decentralized} or \cite{deconvergence}, because therein the sets $\bar{\mc{X}}$ were assumed to be simplexes with upper-bound constraints. Here we claim that there exists a convex set $\bar{\mc{X}}$ (not a simplex with upper bounds) such that $\poa_M$ does not converge to $1$.
\begin{remark}
The previous theorem is of fundamental importance from the standpoint of the system operator, in that it suggests the use of monomial price functions to guarantee the highest achievable efficiency (all Nash equilibria become social optimizers for large $M$). If different price functions are chosen, it is always possible to construct a problem instance such that the worst Nash equilibrium is \emph{not} a social optimizer.
\end{remark}
\vspace*{-4mm}
\subsection{Nonlinear heterogeneous price function}
\textcolor{black}{
In the previous subsection we showed that if the price function is not a monomial, then $\poa_M$ may not converge to one. In this section we derive upper bounds for $\poa_M$ when the price function belongs to a general class of functions and may be different at different time instants, as formalized next.
\begin{assumption}
\label{ass:nonlin2}
The price function $p$ takes the form
\[
p(z+d)=
\begin{bmatrix}
l_1(z_1+d_1),
\hdots,
l_n(z_n+d_n)
\end{bmatrix}^\top,
\]
where $l_t(y):\R_{\ge 0}\rightarrow\R_{\ge0}$, $ l_t\in \mathcal{L}$ for all $t$ and $\mathcal{L}$ is a given class of continuous and nondecreasing price functions. Further let $\mc{X}\i\subseteq \R^n_{\ge0}$ be non empty, closed and convex.
\end{assumption}
Note that Assumption \ref{ass:nonlin2} is \emph{less restrictive} than Assumption \ref{ass:nonlin} as we let the price $l_t$ depend on the time instant $t$. The key idea in this case is to show that standard results derived in \cite{roughgarden2003price}, \cite{correa2004selfish} for Wardrop equilibria in routing games can be applied to charging games too. The resulting bounds on $\poa_M$ can then be derived using the convergence result in \cite{Gentilearxiv17}.
More formally, given a charging game $\mc{G}_{M}$, we consider an equivalent nonatomic routing game over a parallel network with as many links as charging intervals. To present our next result we introduce the following quantity from \cite[Eq 3.8]{correa2004selfish}
$$\beta(\mathcal{L}):=\sup_{l\in\mathcal{L}} \sup_{v\ge 0} \left( \frac{1}{vl(v)}\max_{w\ge 0} [ (l(v)-l(w))w] \right).$$
It follows from \cite{correa2004selfish} that $\beta(\mathcal{L})\le 1$ and $[ 1- \beta(\mathcal{L}) ]^{-1}=\alpha(\mathcal{L})$, where $\alpha(\mathcal{L})$ is the anarchy value for class $\mathcal{L}$ as defined in \cite{roughgarden2003price}.
Therein (see Table 1), $\alpha(\mathcal{L})$ is computed for classes of functions such as affine, quadratic, and polynomial price functions. The key idea of the following theorem is to show that the games considered here are $(1,\beta(\mathcal{L}))$-smooth, as defined in \cite{roughgarden2009intrinsic}.
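As an illustration of the quantities $\beta(\mathcal{L})$ and $\alpha(\mathcal{L})$ (a standard computation, reported here for completeness), consider the class of affine prices $\mathcal{L}_{\mathrm{aff}}\coloneqq\{l(y)=ay+b \,:\, a,b\ge0\}$: since $\max_{w\ge 0}\,(l(v)-l(w))w=\max_{w\ge0}\,a(v-w)w=av^2/4$, attained at $w=v/2$, one obtains
\[
\beta(\mathcal{L}_{\mathrm{aff}})=\sup_{a,b\ge0}\,\sup_{v\ge0}\,\frac{av}{4(av+b)}=\frac{1}{4},
\qquad
\alpha(\mathcal{L}_{\mathrm{aff}})=\bigl[1-\beta(\mathcal{L}_{\mathrm{aff}})\bigr]^{-1}=\frac{4}{3},
\]
which is the value encountered in the numerical study of Section~\ref{sec:ATP}.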
\begin{theorem}[$\poa_M$ for heterogeneous price function]\label{thm:routing}
a) Suppose that Assumption \ref{ass:nonlin2} holds. Then for any fixed game $\mathcal{G}_M$ and any Wardrop equilibrium $x_W$ it holds
\begin{equation}\label{eq:stepThm3}
J_S(\sigma(x_W))\le J_S(\sigma(x_S))\alpha(\mathcal{L})
\end{equation}
b) Further suppose Assumptions \ref{A1}, \ref{ass:sequence} hold, and there exists $\hat J\ge 0$ s.t. $J\SO(\sigma(x\SO))>\hat J$ for every game in $(\mc{G}_M)_{M=1}^\infty$.
Then, for any game $\mathcal{G}_M$ in the sequence
\[
J_S(\sigma(x_S))\le J_S(\sigma(x_N))\le J_S(\sigma(x_S))\alpha(\mathcal{L})+c/\sqrt{M},
\]
and $1\le\poa_M\le \alpha(\mathcal{L})+c/\bigl(\hat J\sqrt{M}\bigl),$ thus implying
$\lim_{M\rightarrow \infty} \poa_M\le \alpha(\mathcal{L}),$
with $c = RL_s\sqrt{2L_p\alpha^{-1}}$.
\end{theorem}
\vspace*{-3mm}
\begin{remark}
If $\mathcal{L}$ contains constant functions, then \eqref{eq:stepThm3} is tight (see \cite{roughgarden2003price} and the simulation section).
This is not a contradiction of Theorems \ref{thm:lin}, \ref{thm:polypoa} because therein either constant functions are not allowed or the price function is assumed to be time independent. Theorems \ref{thm:lin}, \ref{thm:polypoa} can be seen as refinements of Theorem~\ref{thm:routing} and guarantee that $\lim_{M\rightarrow \infty} \poa_M=1$ by restricting the admissible class of price functions.
\end{remark}
}
\vspace*{-4mm}
\section{Application to charging of electric vehicles}
\label{sec:ATP}
We consider a population of $M$ electric vehicles, where the level of charge of vehicle $i$ at time $t$ is described by $s\i_t$.
Its evolution is specified by the discrete-time system
$
s\i_{t+1} = s\i_t + b\i x\i_t \,, t = 1, \dots, n$,
where $x\i_t$ is the charging control and $b\i > 0$ is the charging efficiency.
We assume that $x\i_t$ is non-negative, that it cannot exceed $\tilde x^i_t \ge 0$ at time $t$, and that the absolute value of the difference between $x\i_{t}$ and $x\i_{t+1}$ is bounded by $r\i$.
The final level of charge is constrained to $s_{n+1}^i\ge\eta\i$, where $\eta\i \ge 0$ is the desired level of charge of agent $i$.
Denoting $x\i =[x\i_1, \dots, x^i_n]^\top \in \R^n$, the constraints of agent $i$ reduce to%
\vspace*{-1mm}
\begin{equation}
\label{eq:vehicle_constraint}
x\i \!\!\in\! \mc{X}\i \!=\!\! \left\{\!x\i\! \!\in\! \mathbb{R}^n \! \left|\!\!
\!\begin{array}{l}
0 \le x\i_t \le \tilde x\i_t, ~~~~~ \forall \, t=1,\dots,n \\
\sum_{t=1}^{n} x\i_t \ge \theta\i\\
|x\i_{t+1}-x\i_{t}|\le r\i, \forall \, t=1,\dots,n\!-\!1
\end{array}
\!\!\!\!\right.
\right\},
\vspace*{-1mm}
\end{equation}
where $\theta\i \coloneqq {(b\i)}^{-1} (\eta\i - s\i_1)$, with
$s\i_1 \ge 0$ the level of charge for $t=1$. Note that the vehicles are \textit{heterogeneous} in the \textit{total amount of energy} required $\theta^i$ as well as the \textit{time-varying upper bounds} $\tilde x^i_t$ (that can be used to model deadlines, availability for charging), and the {\it ramping constraints $r^i$}. Such constraints satisfy Assumption \ref{A1}. Further, we assume that there exists $\hat \eta>0$ such that for each $M$ and $i\in\{1,\dots,M\}$, $\eta^i\le \hat \eta$ so that $\mathcal{X}_0$ is compact as required in Assumption \ref{ass:sequence}. Note that this is without loss of generality in any practical scenario. The cost function of each vehicle reads as
\vspace*{-1mm}
\begin{equation}
\textstyle J\i(x\i,\sigma(x))\!=\!\sum_{t=1}^n p_t \left( \frac{\sigma_t(x)+d_t }{\kappa_t} \right) x\i_t \!= \!p(\sigma(x)+d)^\top x^i,
\label{eq:PEV_energy_bill}
\vspace*{-1mm}
\end{equation}
where we assumed that the energy price for each time interval $p_t:\R_{\ge0}\rightarrow \R_{>0}$ depends on the ratio between total consumption and total capacity $(\sigma_t(x)+d_t)/ \kappa_t$, where $d_t$ and $\sigma_t(x):=\frac{1}{M}\sum_{i=1}^M x^i_t$ are the non-EV and EV demand at time $t$ divided by $M$
and $\kappa_t$ is the total production capacity divided by $M$ as in~\cite[eq. (6)]{ma2013decentralized}.
To sum up, we define the game $\mc{G}^\text{EV}_M$ as in~\eqref{eq:gameG}, with $\mathcal{X}\i$ and $J\i(x\i,\sigma(x))$ as in \eqref{eq:vehicle_constraint} and \eqref{eq:PEV_energy_bill} respectively.
Let $x:=[x^1;\ldots; x^M]$ be the vector of charging schedules for the whole population. The social cost of the game is $
J_S(\sigma(x))=\textstyle \sum_{t=1}^n p_t \left( \frac{\sigma_t(x)+d_t}{\kappa_t} \right) (\sigma_t(x)+d_t) = p(\sigma(x)+d)^\top (\sigma(x)+d)$,
that is, the overall electricity bill for the sum of the non-EV and EV demand. We set $n=24$. For the numerical study, we consider four cases as described next.
\emph{Case $1$.} We set $p_t(y)=0.15y^3$ and choose $\tilde x^i_t$ to allow charging in $[t^i_{\textup{min}},t^i_{\textup{max}}]$, with $t^i_{\textup{min}},t^i_{\textup{max}}$ uniformly randomly
distributed between 5pm and 10am;
$\theta^i\sim\mathcal{U}[5, 15]$, $r^i\sim\mathcal{U}[1,7]$ and $d_t$ as in \cite[Figure 1]{ma2013decentralized}.
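A minimal sketch of the Case~1 price and social cost follows (our own illustration, not the simulation code; the capacity and non-EV demand profiles below are placeholders).
\begin{verbatim}
import numpy as np

n = 24                                  # charging intervals
kappa = np.full(n, 1.0)                 # normalized capacity (placeholder)
d = np.full(n, 0.5)                     # normalized non-EV demand (placeholder)

def price(y):                           # Case 1: p_t(y) = 0.15 y^3
    return 0.15 * y ** 3

def social_cost(sigma):
    # J_S(sigma) = sum_t p_t((sigma_t + d_t)/kappa_t) (sigma_t + d_t)
    z = (sigma + d) / kappa
    return float(np.sum(price(z) * (sigma + d)))
\end{verbatim}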
\textcolor{black}{
\emph{Cases $2$-$4$.} We set $p_t(y)=0.15$ from 5pm to 1am and $p_t(y)=0.15y$ from 2am to 10am. For all vehicles, we choose $\tilde x^i_t$ to allow charging from 5pm to 10am. There are no ramping constraints. Cases 2-4 differ in $\theta^i$, $d_t$ as in the following table.}
\vspace*{-5mm}
\begin{table}[h!]
\centering
\textcolor{black}{
\begin{tabular}{|c|c|c|}
\hline
Case& $\theta^i$ & $d_t$ \\ \hline
2 & $9$ & $\mathbb{0}_n$\\
3 &$9$ & as in \cite[Figure 1]{ma2013decentralized} \\
4 &$\mathcal{U}[5,13]$ & $\mathbb{0}_n$ \\ \hline
\end{tabular}
}
\end{table}%
\noindent For each case, we report the (numerical) price of anarchy as a function of $M$ in Figure \ref{fig:poa_and_diff} (top).
Observe that cases $1$ and $4$ feature heterogeneous charging needs. For these cases, we have randomly extracted $100$ games $\mc{G}^{\text{EV}}_M$ (for any fixed $M$) and report the worst $\poa$ amongst the $100$ realizations.
In order to plot the price of anarchy, we computed the ratio between \emph{one} (instead of the \emph{worst}) Nash equilibrium of $\mc{G}^\text{EV}_M$ and the social optimum. This choice is imposed by the fact that computing all Nash equilibria of $\mc{G}^\text{EV}_M$ is in general a hard problem.\footnote{To compute a Nash equilibrium we applied the extragradient algorithm \cite{facchinei2007finite}, which is not guaranteed to converge for small $M$ as the operator associated with the variational inequality of the Nash problem is not guaranteed to be strongly monotone \cite{Gentilearxiv17}. We thus verified a posteriori that the point where the algorithm stopped was a Nash equilibrium.}
\textcolor{black}{In Figure \ref{fig:poa_and_diff} (bottom) we plot the difference between the cost at the Nash and at the social optimizer, relative to case~1.}
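For completeness, a generic sketch of the extragradient iteration mentioned in the footnote above is given below (the projection onto the constraint set and the game operator are left abstract; the step size and the toy usage example are our own choices, not those of the simulations).
\begin{verbatim}
import numpy as np

def extragradient(F, proj, x0, tau=0.01, iters=2000):
    # Korpelevich extragradient for the variational inequality VI(K, F):
    # y_k = Proj_K(x_k - tau F(x_k)),  x_{k+1} = Proj_K(x_k - tau F(y_k)).
    x = x0
    for _ in range(iters):
        y = proj(x - tau * F(x))
        x = proj(x - tau * F(y))
    return x

# Toy usage: F(x) = x - 1 on the box [0, 2]^3; the VI solution is x = 1.
x_sol = extragradient(lambda x: x - 1.0, lambda z: np.clip(z, 0.0, 2.0),
                      np.zeros(3))
\end{verbatim}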
\vspace*{-2mm}
\newlength\figureheight
\newlength\figurewidth
\setlength\figureheight{2.8cm}
\setlength\figurewidth{0.8\linewidth}
\begin{figure}[h!]
\begin{subfigure}[b]{\linewidth}
\centering
\resizebox{1\linewidth}{!}{\input{poa.tikz}}
\end{subfigure}
\\[0.1cm]
\begin{subfigure}[b]{\linewidth}
\centering
\resizebox{1\linewidth}{!}{\input{whisker_plot.tikz}}
\end{subfigure}
\vspace*{-2mm}
\caption{Price of anarchy (top), and \textcolor{black}{cost difference between Nash and social optimum (bottom)} as a function of $M$.}
\label{fig:poa_and_diff}
\vspace*{-5mm}
\end{figure}
Thanks to the choice of parameters and price function, case 1 meets Assumptions \ref{A1}, \ref{ass:sequence} and \ref{ass:nonlin} (see Lemma \ref{lem:ass} in the Appendix). Thus, Theorem \ref{thm:polypoa}a) guarantees that $\lim_{M\rightarrow \infty} \poa_M=1$. The numerical results reported in Figure~\ref{fig:poa_and_diff} (top, black line) are consistent with it: the ratio between the cost at the Nash and the cost at the social optimum converges to one. \textcolor{black}{In addition to this, Figure~\ref{fig:poa_and_diff} (bottom) shows that also the difference between these costs converges to zero, as guaranteed by the proof of Theorem \ref{thm:polypoa}a) and the boundedness of $\mc{X}_0$.
A typical plot describing the valley filling property of the equilibrium in case 1 can be found e.g., in \cite[Figure 2]{ma2013decentralized}.
Case 2 has been constructed so that the corresponding Wardrop equilibrium features the worst possible asymptotic price of anarchy within the class of affine cost functions (for which $\alpha(\mathcal{L})=4/3$, see \cite{roughgarden2003price}).
The numerics of Figure \ref{fig:poa_and_diff} (top, red line) show that $\poa_M$ converges to $1.33\approx 4/3=\alpha(\mathcal{L})$. Cases 3 and 4 are a modification of case 2.
While the presence of base demand (case 3) helps in lowering the price of anarchy, the impact of heterogeneity (case 4) on the asymptotic price of anarchy is minor (blue and green plots in Figure~\ref{fig:poa_and_diff}).}
\vspace*{-3mm}
\section{Conclusions}
\vspace*{-1mm}
We considered the problem of charging a fleet of heterogeneous electric vehicles as formulated using game theoretic tools. More precisely, we studied the efficiency of the resulting equilibrium allocations, measured by the concept of price of anarchy.
We showed that the price of anarchy converges to one as the population of vehicles grows
if the price function is linear (but possibly dependent on all the time instants), or if the price function depends only on the instantaneous demand and is a positive pure monomial. \textcolor{black}{We provided efficiency bounds for general nonlinear price functions.} For these three cases, we also provided bounds on the $\poa$ as a function of the population size. Our theoretical findings are corroborated by means of numerical simulations.
\textcolor{black}{We conclude noting that the question regarding the efficiency of equilibria in aggregative games is of interest for a broader class of cost functions than those studied here (e.g., quasi convex costs). We leave this as a future work.}
\vspace*{-5mm}
\section*{Appendix A: Characterization of the average}
\vspace*{-1mm}
\noindent
This section characterizes the players' average action $\sigma(x)$ at the Wardrop equilibrium and at the social optimizer of $\mathcal{G}$.
\begin{definition}[Variational inequality~\cite{facchinei2007finite}]
\label{def:vi}
Given $\mathcal{K}\subseteq \mathbb{R}^\ell$ and $F:\mathcal{K}\rightarrow \mathbb{R}^\ell$, a point $\bar x\in\mathcal{K}$ is a solution of the variational inequality $\textup{VI}(\mathcal{K},F)$ if $F(\bar x)^\top (x-\bar x)\ge 0$ for all $x\in\mathcal{K}$.
\end{definition}
\begin{lemma}[Equivalent characterizations]
\label{lemma:averageVI}
\begin{enumerate}[leftmargin=*]
\item[]
\item[] \hspace*{-5.8mm} Suppose Assumption \ref{A1} holds.
\item
Given $x\WE$ a Wardrop equilibrium, its average $\sigma(x\WE)$ solves $\textup{VI}(\s,F\WE)$, with $F\WE:\mathbb{R}^n\rightarrow\mathbb{R}^n$,
$F\WE(z) \coloneqq p(z+d)$.
The $\textup{VI}(\s,F\WE)$ admits a unique solution $\sigma\WE$. Let us define $\mc{X}\WE\coloneqq\{x\in\mc{X}~\text{s.t.}~\frac{1}{M}\sum_{j=1}^M x\j=\sigma\WE\}$. Then any vector of strategies $x\WE\in\mc{X}\WE$ is a Wardrop equilibrium.
\item
Given $x\SO$ a social optimizer, its average $\sigma(x\SO)$ solves $\textup{VI}(\s,F\SO)$, with $F\SO:\mathbb{R}^n\rightarrow\mathbb{R}^n$,
$F\SO(z)\coloneqq p(z+d)+[\nabla_z p(z+d)](z+d).$
The $\textup{VI}(\s,F\SO)$ admits a unique solution $\sigma\SO$. Define $\mc{X}\SO\coloneqq\{x\in\mc{X}~\text{s.t.}~\frac{1}{M}\sum_{j=1}^M x\j=\sigma\SO\}$. Then any vector of strategies $x\SO\in\mc{X}\SO$ is a social optimizer.
\end{enumerate}
\end{lemma}
%
\begin{proof}
{\bf 1)}
The sets $\mc{X}\i$ are convex and closed by Assumption \ref{A1}; further, for fixed $z\in \s$, the functions $J\i(x\i,z)$ are linear and thus convex in $x\i\in\mc{X}\i$ for all $i\in\{1,\dots,M\}$. It follows that (see \cite{Gentilearxiv17}) a Wardrop equilibrium $x\WE$ satisfies
\vspace*{-1mm}
\begin{equation}
[\mathds{1}_{M}\otimes \,p(\sigma(x\WE)+d)]^\top \!(x-x\WE)\!\ge\!0,~~\forall x\!\in\!\mc{X}.
\label{eq:bigvi}
\vspace*{-1mm}
\end{equation}
Rearranging and dividing by $M$ we get
$
p(\sigma(x\WE)+d)^\top(\frac{1}{M}\sum_{j=1}^M x\i-\frac{1}{M}\sum_{j=1}^M x\WE\i)\ge 0,
$
for all $x\in\mathcal{X}$, or equivalently
$p(\sigma(x\WE)+d)^\top(z-\sigma(x\WE) )\ge 0,~\forall z\in\s,$
that is, $\sigma(x\WE)$ solves $\textup{VI}(\s,F\WE)$.
By Assumption \ref{A1} $F\WE(z)=p(z+d)$ is strongly monotone and $\s$ is closed, convex (since the sets $\mathcal{X}^i$ are closed, convex), hence by \cite{facchinei2007finite} $\textup{VI}(\s,F\WE)$ has a unique solution $\sigma\WE$. By definition of variational inequality, for any $z\in \s$ it holds $p(\sigma\WE+d)^\top(z-\sigma\WE)\ge0$. By definition of $x\WE\in\mc{X}\WE$, we have $\sigma(x\WE)=\sigma\WE$. It follows that $p(\sigma(x\WE)+d)^\top(z-\sigma(x\WE))\ge0$ for any $z\in \s$. By definition of $\s$, we conclude that \eqref{eq:bigvi} holds for all $x\in\mc{X}$. Thus, $x\WE$ is a Wardrop equilibrium (see \cite{Gentilearxiv17}).
\newline
{\bf 2)} By Assumption \ref{A1}, the set $\mc{X}$ is convex and closed and $J\SO(\sigma(x))$ is convex. Hence, a social optimizer $x\SO$ satisfies
\begin{equation}
\label{eq:bigvi2}
\nabla_x[p(\sigma(x)+d)(\sigma(x)+d)]_{|x={x\SO}}^\top (x-x\SO)\ge0
~~\forall x\in\mc{X}\,.
\end{equation}
Note that $M \nabla_{x^i} (p(\sigma(x)+d)^\top(\sigma(x)+d)) =p(\sigma(x\SO)+d)+[\nabla_z p(\sigma(x\SO)+d)](\sigma(x\SO)+d)$ for all $i\in\{1,\ldots,M\}$. Consequently, \eqref{eq:bigvi2} is equivalent to
$
[ p(\sigma(x\SO)+d)+[\nabla_z p(\sigma(x\SO)+d)](\sigma(x\SO)+d)]^\top
(\sigma(x)-\sigma(x\SO))\ge0\,,
$
that is, $\sigma(x\SO)$ solves $\textup{VI}(\s,F\SO)$. The remaining claims are proven similarly to 1).
\end{proof}
%
%
\vspace*{-3mm}
\section*{Appendix B: Proofs of Theorem \ref{thm:lin}, \ref{thm:polypoa} and \ref{thm:routing}}
\begin{myproof1}
\newline
{\bf a)} Let $x\WE$ be a Wardrop equilibrium. By Lemma \ref{lemma:averageVI} part 1, $\sigma(x\WE)$ solves $\textup{VI}(\s,F\WE)$. Because of Assumption \ref{ass:lin}, $F\SO(z)=C(z+d)+C^\top(z+d)=2C(z+d)=2F\WE(z)$. Since the two operators $F\WE(z)$ and $F\SO(z)$ are parallel for each $z\in\s$, it follows from the definition of variational inequality that $\sigma(x\WE)$ must solve $\textup{VI}(\s,F\SO)$ too. Using Lemma \ref{lemma:averageVI} part 2 we conclude that $x\WE$ must be a social optimizer. \\
{\bf b)} By definition $J\SO(\sigma(x\SO))\le J\SO(\sigma(x\NE))$ and so $1\le \poa_M$.
Further, Assumption \ref{ass:sequence} and the strong monotonicity of $p(z+d)$ (Assumption \ref{A1}) allow us to use the convergence result of \cite[Theorem 1]{Gentilearxiv17}. That is, for any Nash equilibrium $x\NE$ and Wardrop equilibrium $x\WE$ of the game $\mc{G}_M$,
$
||\sigma(x\WE)-\sigma(x\NE)||\le \sqrt{2 R^2L_p \alpha^{-1}{M}^{-1}}.$ It follows that $|J\SO(\sigma(x\NE))- J\SO(\sigma(x\WE))|\le L\SO\sqrt{2 R^2L_p \alpha^{-1}{M}^{-1}} = c\sqrt{M^{-1}}.$ Since every Wardrop equilibrium is socially optimum (previous point of this proof), one has $|J\SO(\sigma(x\NE))- J\SO(\sigma(x\SO))|\le c\sqrt{M^{-1}}$ and thus
$J\SO(\sigma(x\NE))\le J\SO(\sigma(x\SO))+c\sqrt{M^{-1}}$. The final result regarding the price of anarchy follows from the latter inequality upon dividing both sides by $J\SO(\sigma(x\SO))>\hat J\ge 0$.
\end{myproof1}
\vspace*{2mm}
\begin{myproof2}
\newline
{\bf a)}
We first show that any Wardrop equilibrium is a social optimizer.
To do so, observe that the function $f(y)=\alpha y^k$ satisfies all the assumptions required by Lemma \ref{lemma:averageVI} (see Lemma \ref{lem:ass} in the Appendix).
Let $x\WE$ be a Wardrop equilibrium of $\mc{G}_M$. By Lemma \ref{lemma:averageVI}, $\sigma(x\WE)$ solves $\textup{VI}(\s,F\WE)$. Thanks to Assumption \ref{ass:nonlin} and the choice of $f(y)$,
\[F\SO(z)\!=\!(k+1)
[
\alpha(z_1+d_1)^k,
\hdots,
\alpha(z_n+d_n)^k
]^\top
\!\!=\!(k+1)F\WE(z)\,.
\]
Hence $\sigma(x\WE)$ solves $\textup{VI}(\s,F\SO)$ too. Using Lemma \ref{lemma:averageVI} we conclude that $x\WE$ must be a social optimizer.
The proof is now identical to the proof of Theorem \ref{thm:lin}, part b).
\newline
{\bf b)}
If $f(y)$ does not take the form $\alpha y^k$ for some $\alpha>0$ and $k>0$, by Lemma \ref{lemma:notaligned} there exists a point $\bar z\in\R^n_{>0}$ for which $F\WE(\bar z)$ and $F\SO(\bar z)$ are not aligned, i.e. for which $F\SO(\bar z)\neq h F\WE(\bar z)$ for all $h\in\R$.
We intend to construct a sequence of games $\mc{G}_M$ so that for every $\mc{G}_M$ in the sequence the unique average at the Wardrop equilibrium is exactly $\bar z$, that is $\bar z$ solves $\textup{VI}(\s,F\WE)$, but $\bar z$ does not solve $\textup{VI}(\s,F\SO)$. This fact indeed proves, by Lemma \ref{lemma:averageVI}, that for any game $\mc{G}_M$ the Wardrop equilibria of $\mc{G}_M$ are not social minimizers.
Since $\sigma(x\NE)\to\sigma(x\WE)$ as $M\to\infty$ \cite[Theorem 1]{Gentilearxiv17}, one concludes that $\poa$ cannot converge to~$1$.
In the following we construct a sequence of games with the above mentioned properties.
To this end let us define $\mc{X}\i\coloneqq\bar{\mc{X}}\subseteq \R^n$, so that $\s=\bar{\mc{X}}$ with
$ \bar{\mc{X}}\coloneqq\{\bar z+\alpha v_1 +\beta v_2 \mid \alpha,\beta\in[0,1]\}\cap\R^n_{\ge0},$
where $v_1\coloneqq\bar F\WE$, $v_2\coloneqq(\bar F\WE^\top\bar F\SO )\bar F\WE- (\bar F\WE^\top \bar F\WE)\bar F\SO $ and $\bar F\WE\coloneqq F\WE(\bar z)$, $\bar F\SO\coloneqq F\SO(\bar z)$; see Figure \ref{fig:setX}. The intuition is that $-v_2$ is, up to the positive factor $||\bar F\WE||^2$, the component of $\bar F\SO$ that lies in the plane spanned by $\bar F\SO$ and $\bar F\WE$ and is orthogonal to $\bar F\WE$, so that $\bar F\WE^\top v_2=0$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.8]{setXnew}
\vspace*{-2mm}
\caption{Construction of the set $\bar{\mc{X}}$.}
\label{fig:setX}
\vspace*{-7mm}
\end{figure}
Observe that $\s=\bar{\mc{X}}$ is the intersection of a bounded and convex set with the positive orthant and thus satisfies Assumptions \ref{A1}, \ref{ass:sequence} and \ref{ass:nonlin}.
It is easy to verify that $\bar z \in \bar{\mc{X}}$ and that $F\WE(\bar z)^\top(z-\bar z)=\alpha ||F\WE(\bar z)||^2\ge0$ for all $z\in \s=\bar{\mc{X}}$, so that $\bar z$ solves $\textup{VI}(\s,F\WE)$. Let us pick $\hat z=\bar z+ \beta v_2$. Note that since $\bar z>0$, for $\beta$ small enough $\hat z$ belongs to $\R^n_{>0}$ as well and thus to $\bar{\mc{X}}$. Then $F\SO(\bar z)^\top(\hat z-\bar z)=\beta (\bar F\SO^\top\bar F\WE)^2-\beta ||\bar F\SO||^2||\bar F\WE||^2< 0$. The inequality is strict because $\bar F\WE$, $\bar F\SO$ are neither parallel nor zero (Lemma \ref{lemma:notaligned}).
Thus, $\bar z$ does not solve $\textup{VI}(\s,F\SO)$.
\end{myproof2}
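The construction above can be made concrete with a short numerical check. The Python sketch below uses an assumed non-monomial price $f(y)=1+y$ and an assumed point $\bar z\in\R^n_{>0}$: it verifies that $\bar F\WE^\top v_2=0$, so that perturbations along $v_2$ do not violate the Wardrop variational inequality at $\bar z$, while $\bar F\SO^\top v_2<0$, so that $\hat z=\bar z+\beta v_2$ violates $\textup{VI}(\s,F\SO)$.
\begin{verbatim}
import numpy as np

# assumed non-monomial price f(y) = 1 + y and an assumed interior point zbar > 0, d > 0
f  = lambda y: 1.0 + y
fp = lambda y: np.ones_like(y)          # f'(y)
zbar = np.array([1.0, 2.0])
d    = np.array([0.5, 0.5])

F_WE = f(zbar + d)                                  # p(z+d)
F_SO = f(zbar + d) + fp(zbar + d) * (zbar + d)      # p(z+d) + [grad p](z+d)

v1 = F_WE
v2 = (F_WE @ F_SO) * F_WE - (F_WE @ F_WE) * F_SO

print(np.isclose(F_WE @ v2, 0.0))   # True: moving along v2 keeps the F_WE VI satisfied
print(F_SO @ v2 < 0)                # True: zhat = zbar + beta*v2 strictly improves F_SO,
                                    # so zbar cannot solve VI(Sigma, F_SO)
\end{verbatim}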
\begin{lemma}
\label{lemma:notaligned}
For $n\ge 2$, if $f(y)$ satisfies Assumptions \ref{A1}, \ref{ass:sequence} and \ref{ass:nonlin}, but does not take the form $\alpha y^k$ for some $\alpha>0$ and $k>0$, then there exists $\bar z \in\R^n_{>0}$ such that $F\SO(\bar z)\neq h F\WE(\bar z)$, $\forall h\in\R$. Moreover, $F\SO(\bar z)\neq 0$, $F\WE(\bar z)\neq 0$.
\end{lemma}
\vspace*{-1.5mm}
\begin{proof}
Let us consider the first statement.
By contradiction, assume there exists $\beta(z):\R^n_{>0}\rightarrow\R$ such that $F\SO(z)= \beta(z) F\WE(z)$ for all $z \in\R^n_{>0}$. This implies
\vspace*{-0.5mm}
\begin{equation}
\label{eq:alphas}
f'(z_t+d_t)(z_t+d_t)=(\beta(z_1,\dots,z_n)-1)f(z_t+d_t)\,,
\vspace*{-0.5mm}
\end{equation}
for all $t\in\{1,\dots,n\}$ and for all $z \in\R^n_{>0}$, $d\in\R^n_{>0}$.
By Assumption \ref{ass:nonlin}, $f(z_t+d_t)>0$.
Hence one can divide \eqref{eq:alphas} by $f(z_t+d_t)$; since the resulting left-hand side depends on $z_t$ only, $\beta(z_1,\dots,z_n)-1$ must be a function of $z_t$ alone, for every $t$. We conclude that $\beta(z_1,\dots,z_n)=\beta_1(z_1)=\dots=\beta_n(z_n)$ with $\beta_i:\R\rightarrow\R$ for all $z\in\R^n_{>0}$.
For $n\ge2$ the last condition implies $\beta(z_1,\dots,z_n)=b$ constant. Equation \eqref{eq:alphas} reads as $
f'(y)y=(b-1)f(y)~~\forall y>0$,
whose continuously differentiable solutions are exactly the functions $f(y)=a y^{b-1}$. Note that if $a\le0$ or $b\le 1$, Assumption \ref{A1} is not satisfied, while if $a>0$ and $b>1$ we contradict the assumption that $f(y)$ does not take the form $\alpha y^k $ for some $\alpha>0$ and $k>0$.
Setting $h=0$ in the previous claim gives $F\SO(\bar z)\ne0$.
Since $f:\R_{>0}\rightarrow\R_{>0}$, one has $F\WE(\bar z):=[ f(\bar z_t+d_t)]_{t=1}^n\neq0$.
\end{proof}
\vspace*{-1mm}
\begin{lemma}\label{lem:ass}
Suppose that the price function $p$ is as in Assumption \ref{ass:nonlin} with $f(y)=\alpha y^k$, $\alpha>0,k>0$. Then $p$ satisfies Assumption \ref{A1} and \ref{ass:sequence}.
\end{lemma}
\begin{proof}
Note that $\nabla_z p(z+d)$ is a diagonal matrix with entry $f'(z_t+d_t)$ in position $(t,t)$. Since $f'(y)=\alpha k y^{k-1}>0$ for all $y>0$ and since $z_t+d_t$ is positive by assumption for all $t$, we get that $p(z+d)$ is continuously differentiable and that $\nabla_z p(z+d)\succ 0$ i.e. that $z\mapsto p(z+d)$ is strongly monotone. Similarly, one can show that the Hessian of $p(z+d)^\top (z+d)$ and the Hessian of $J^i(x^i,\sigma(x))$ with respect to $x\i$ are positive definite. Thus, $z\mapsto p(z+d)^\top (z+d)$ and $x^i \mapsto J^i(x^i,\sigma(x))$ are strongly convex. See \cite{paccagnan2018efficiency} for further details.
\end{proof}
\vspace*{2mm}
\begin{myproof3}
\textcolor{black}{
\newline
We prove only {a)}, as {b)} can be shown as in the proof of Theorem \ref{thm:lin}, part b).
We define
$C^{\sigma_1}(\sigma_2):=p(\sigma_1+d)^\top(\sigma_2+d)$
so that $J_S(\sigma)=C^{\sigma}(\sigma)$.
Let $x_W$ be any Wardrop equilibrium. Then, the average $\bar \sigma:=\sigma(x_W)$ solves VI$(\Sigma,F_W)$, i.e.,
$F_W(\bar \sigma)^\top(\sigma-\bar \sigma)\ge 0,~ \forall \sigma\in\Sigma.$
This can be seen following the proof of Lemma \ref{lemma:averageVI} part 1), and observing that only convexity and closedness of $\mc{X}^i$ are required.
Equivalently,
$ J_S(\bar \sigma) \le C^{\bar \sigma}(\sigma),~\forall \sigma\in\Sigma.$
However,
$C^{\bar \sigma}(\sigma)=\sum_{t} l_t(\bar \sigma_t+d_t) (\sigma_t+d_t)
=J_S(\sigma)+ \sum_{t} [l_t(\bar \sigma_t+d_t) - l_t( \sigma_t+d_t)] (\sigma_t+d_t)
=J_S(\sigma)+ \sum_{t} \frac{[l_t(v_t) - l_t(w_t)]w_t }{l_t(v_t)v_t} l_t(v_t) v_t
\le J_S(\sigma)+ \sum_{t} \beta(\mathcal{L}) l_t(v_t) v_t
= J_S(\sigma)+ \beta(\mathcal{L}) J_S(\bar\sigma)
$
where we use $v_t:=\bar \sigma_t+d_t\ge d_t$, $w_t:= \sigma_t+d_t\ge d_t$ and $d_t\ge 0$.
The previous relation holds for all $\sigma\in\Sigma$. Selecting $\sigma=\sigma_S$ (the optimum average), we get
$ J_S(\bar \sigma)\! \le \!J_S(\sigma_S)\!+\! \beta(\mathcal{L}) J_S(\bar\sigma).$
Rearranging we obtain \eqref{eq:stepThm3}.
}
\end{myproof3}
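To get a concrete feel for the constant $\beta(\mathcal{L})$ appearing in the proof above, the following Python sketch estimates it by brute force over an assumed small sample of affine prices $l(y)=ay+b$ with $a,b\ge0$; the estimate approaches $1/4$, consistently with the anarchy value $\alpha(\mathcal{L})=[1-\beta(\mathcal{L})]^{-1}=4/3$ for affine price functions \cite{roughgarden2003price}.
\begin{verbatim}
import numpy as np

def beta_of(l, v_grid, w_grid):
    """Numerical estimate of sup_v max_w (l(v)-l(w)) w / (v l(v)) for one price l."""
    best = 0.0
    for v in v_grid:
        inner = np.max((l(v) - l(w_grid)) * w_grid)   # max_w (l(v)-l(w)) w
        best = max(best, inner / (v * l(v)))
    return best

v_grid = np.linspace(0.01, 10.0, 400)
w_grid = np.linspace(0.0, 10.0, 4001)

# a few affine prices l(y) = a*y + b (assumed sample of the class L)
affine = [lambda y, a=a, b=b: a * y + b for a in (0.5, 1.0, 2.0) for b in (0.0, 0.1, 1.0)]
beta = max(beta_of(l, v_grid, w_grid) for l in affine)
print(beta, 1.0 / (1.0 - beta))   # ~0.25 and ~1.333 = alpha(L) for affine prices
\end{verbatim}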
\vspace*{-2mm}
\bibliographystyle{IEEEtran}
|
{
"timestamp": "2018-05-16T02:12:55",
"yymm": "1803",
"arxiv_id": "1803.02583",
"language": "en",
"url": "https://arxiv.org/abs/1803.02583"
}
|
\section{Introduction}
Experimental progress with ultracold quantum gases has made it possible to engineer the coupling between different internal states of the atoms, and to realize synthetic gauge fields~\cite{RevModPhys.83.1523,PhysRevLett.107.255301StrongMagneticFieldsOpticalLattice,PhysRevLett.111.185301Butterfly}. When a neutral atom moves in a properly designed laser field, its center-of-mass motion mimics the dynamics of a charged particle in a magnetic field, under the effect of a Lorentz-like force. The corresponding Aharonov-Bohm phase is related to the Berry phase that emerges when the atom adiabatically follows one of the dressed states of the atom-laser interaction~\cite{RevModPhys.83.1523}. These advances allow the quantum simulation of a wide range of Hamiltonians, in particular Hamiltonians relevant in condensed matter physics.
Indeed, some of the most intriguing phenomena in condensed matter physics involve the
presence of strong magnetic fields. For instance, topological states of matter are realized in quantum Hall systems, which are insulating in the bulk, but bear conducting edge states~\cite{QuantumHall}.
A ladder is the simplest geometry in which one can gain some insight into two-dimensional quantum systems subjected to a synthetic gauge field~\cite{PhysRevX.7.021033,PhysRevB.92.115446}. The bosonic linear ladder has been the subject of intense theoretical work. The phase diagram has been established by means of field-theoretical methods~\cite{Georges,PhysRevB.64.144515Orignac} and intensive DMRG simulations~\cite{PhysRevB.91.140406DMRG}. Those studies, in addition to common features of Bose-Hubbard models such as superfluid and Mott insulating phases, revealed new exciting phases of matter induced by the magnetic field: chiral superfluid phases, chiral Mott insulating phases displaying Meissner currents~\cite{PhysRevB.64.144515Orignac,ChiralMottDhar} and vortex-Mott insulating phases~\cite{PhysRevLett.111.150601}. In the weakly interacting regime, on which we focus in this work, an additional phase has been predicted~\cite{Mueller}: a biased-ladder phase characterized by an imbalanced population of the bosons between the two legs, explicitly breaking the $\mathbb{Z}_2$ symmetry. This phase was shown to be stable in the interacting case, except for a special value of the applied flux, where umklapp processes destabilize it~\cite{PhysRevA.92.013625Uchino}. The dependence of the critical flux separating the Meissner and vortex phases on interparticle interactions has also been studied~\cite{Oktel}. In parallel to these theoretical advances, the experimental realization of the bosonic flux ladder has been reported in optical lattices~\cite{Atala} as well as for lattices in synthetic dimensions, both for fermions and bosons~\cite{Stuhl,Mancini}.
In this work, we consider a system made of two coupled one-dimensional lattice rings subjected to a different flux in each leg. This specific bosonic ladder corresponds to boundary conditions different from those of a linear ladder. In particular, the double ring lattice geometry allows one to study persistent currents in dimensions larger than one~\cite{Marco}, with promising applications for atomtronics developments~\cite{Atomtronics,LuigiD}. In contrast to~\cite{Qubit,Tobias}, we focus on a planar geometry with concentric rings, as could be realized, e.g., with dressed potentials~\cite{Helene} or using copropagating Laguerre-Gauss beams~\cite{CylindricalOpticalLattice}.
We first study the properties of the non-interacting gas. After identifying the vortex and Meissner phases, we discuss specific features of the double ring lattice geometry, such as the appearance of a vortex in the Meissner phase, a parity effect in the vortex phase, and the behavior of persistent currents.
Through a numerical study we then explore the dilute, weakly interacting regime and address the nature of the ground state at the mean-field level. In particular we identify known phases such as the Meissner, vortex and biased-ladder phases \cite{Mueller}, as well as the effect of the commensurability of the total flux.
Finally, we propose spiral interferogram images -- obtained by interference between the two rings during time-of-flight expansion -- as a probe of vortex-carrying phases, specifically adapted to the ring geometry.
\section{The model}
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.4]{Figure-1.png}
\caption{(Color online) Representation of the geometry studied in this work: coplanar ring lattices of radii $R_1$ and $R_2$ with the same number of sites, with inter-ring tunnel energy $K$ and intra-ring tunnel energies $J e^{i \Phi_p}$, with $p=1,2$.}
\label{fig1b}
\end{figure}
We consider a Bose gas confined in a double ring lattice. In the tight-binding approximation we model the system using the Bose-Hubbard model:
\begin{align}
&\hat{H}=\hat{H}_0+\hat{H}_{int}=\nonumber\\ &-\!\sum_{l=1,p=1,2}^{N_s} \!\!\!J_p\left(a^{\dagger}_{l,p}a_{l+1,p}e^{i\Phi_p} + a^{\dagger}_{l+1,p}a_{l,p}e^{-i\Phi_p}\right)\nonumber \\ &-\!K\sum_{l=1}^{N_s}\left(a_{l,1}^{\dagger}a_{l,2}+a_{l,2}^{\dagger}a_{l,1}\right)+\frac{U}{2}\!\!\!\sum_{l=1,p=1,2}^{N_s}a^{\dagger}_{l,p}a^{\dagger}_{l,p}a_{l,p}a_{l,p}
\label{eq1}
\end{align}
where the angular position on the double ring lattice is given by $\theta_l=\frac{2\pi}{N_s}l$, with $l$ an integer, $l \in \left\{1,\dots, N_s\right\}$, and $N_s$ the number of sites in each ring.
In Eq.~(\ref{eq1}), $J_1$ and $J_2$ are the tunneling amplitudes from one site to another along each ring, the parameter $K$ is the tunneling amplitude between the two rings, connecting only sites with the same position index $l$, and $\Phi_{1,2}$ are the fluxes threading the inner and outer ring respectively. In the case where the gauge fields are induced by applying a rotation to each ring, one has $\Phi_i=\frac{2\pi}{N_s}\frac{\tilde{\Phi}_i}{\Phi_0}$, with $\tilde{\Phi}_i=\Omega R_i^2$, $\Omega$ being the angular rotation frequency, $R_i$ the radius of ring $i$, and $\Phi_0=2\pi \hbar/m$ the Coriolis flux quantum. As $J_i \approx \frac{\hbar^2}{2mR_i^2}$, to lowest order we can consider $J_1 \approx J_2$, corresponding to two rings close to each other, or realized using an adjusted lattice potential. In the following, it will be useful to introduce the relative flux $\phi = \Phi_1-\Phi_2$ and the average flux $\Phi = (\Phi_1+\Phi_2)/2$.
\section{Non interacting regime}
We first proceed by analyzing the non-interacting problem.
The diagonalization of $H_0$ (see Appendix A for details) yields the following two-band Hamiltonian:
\begin{eqnarray}
\hat{H}_0 = \sum_k \alpha_k^{\dagger}\alpha_kE_+(k) + \beta_k^{\dagger}\beta_k E_-(k),
\end{eqnarray}
where
\begin{align}
\begin{pmatrix}
a_{k,1}\\
a_{k,2}
\end{pmatrix}
=
\begin{pmatrix}
v_k & u_k\\
-u_k & v_k
\end{pmatrix}
\begin{pmatrix}
\alpha_k\\
\beta_k
\end{pmatrix},
\end{align}
and the functions $u_k$ and $v_k$ depend on the parameters $\phi$ and $K/J$ (see Appendix A for details), the momentum in units of inverse lattice spacing takes discrete values given by $k =\frac{2 \pi n}{N_s}$, with $n=0,1,\dots, N_s-1$, and the dispersion relation $E_{\pm}(k)$ reads
\begin{align}
E_{\pm}(k)=-&2J\cos(\phi/2)\cos(k-\Phi)\nonumber\\ \pm &\sqrt{K^2+(2J)^2\sin(\phi/2)^2\sin(k-\Phi)^2}.
\label{eq:epm}
\end{align}
We see that the only influence of the average flux $\Phi$ is to shift in momentum space the energy spectrum.
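As a minimal numerical cross-check of Eq.~(\ref{eq:epm}), the following Python sketch (with assumed parameter values) builds the single-particle matrix corresponding to $\hat{H}_0$ on the double ring and compares its spectrum with the two analytic bands; the two coincide as sets, independently of the sign convention chosen for $k$.
\begin{verbatim}
import numpy as np

# numerical check of the two-band dispersion (assumed parameter values)
Ns, J, K = 40, 1.0, np.sqrt(2)
Phi1, Phi2 = 0.35, -0.15                 # fluxes on the two rings
phi, Phi = Phi1 - Phi2, 0.5 * (Phi1 + Phi2)

idx = lambda l, p: 2 * (l % Ns) + p      # site (l, p) -> matrix index, p in {0, 1}
H = np.zeros((2 * Ns, 2 * Ns), dtype=complex)
for l in range(Ns):
    for p, Phip in enumerate((Phi1, Phi2)):
        H[idx(l, p), idx(l + 1, p)] += -J * np.exp(1j * Phip)   # intra-ring hopping
        H[idx(l + 1, p), idx(l, p)] += -J * np.exp(-1j * Phip)
    H[idx(l, 0), idx(l, 1)] += -K                               # inter-ring coupling
    H[idx(l, 1), idx(l, 0)] += -K

k = 2 * np.pi * np.arange(Ns) / Ns
rad = np.sqrt(K**2 + (2 * J * np.sin(phi / 2) * np.sin(k - Phi))**2)
E_analytic = np.concatenate(
    [-2 * J * np.cos(phi / 2) * np.cos(k - Phi) + s * rad for s in (+1, -1)])

print(np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(E_analytic)))   # True
\end{verbatim}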
\begin{figure}[h!]
\includegraphics[scale=1]{Figure-2.pdf}
\caption{(Color online) Energy spectrum (in units of $J$ with $N_s=40$ sites on each ring) as a function of wavevector $k$ (in units of inverse lattice spacing) of non-interacting bosons on a double ring lattice, for several values of the tunneling ratio $K/J$ at fixed relative flux $\phi=\pi/2$ (bottom) and several values of $\phi$ at fixed $K/J=\sqrt{2}$ (top).}
\label{fig1}
\end{figure}
The relevant ground state properties are obtained from the low-energy branch spectrum since, for a finite-size ring, at $T=0$ and $U=0$ the bosons form a condensate in the lowest-energy state available. At varying tunneling ratio $K/J$ and relative flux $\phi$, two possible situations arise for the lowest-energy branch $E_-(k)$ (see Fig.~\ref{fig1}). When $E_-(k)$ has a single minimum, the bosons condense in the state $k=\Phi$, corresponding to the Meissner phase, while one has a vortex phase when $E_-(k)$ has two minima and the bosons condense with equal occupancy in each of the two minima $k_1$ and $k_2$, given by
\begin{align}
k_{1,2}= \Phi \mp \arccos\left[\cot\left(\frac{\phi}{2}\right)\sqrt{\left(\frac{K}{2J}\right)^2+\sin^2\left(\frac{\phi}{2}\right)}\right].
\label{eq:k}
\end{align}
Other possible occupancies of the two minima are discussed in Appendix A.
The vortex-to-Meissner phase transition has been experimentally observed in bosonic linear flux ladders~\cite{Atala}. At fixed $K/J$, the critical flux where the transition appears is obtained by determining the change of curvature of $E_-(k)$ at $k=\Phi$, thus yielding~\cite{Georges}:
\begin{align}
\phi_c = 2\arccos\left[\sqrt{\left(\frac{K}{4J}\right)^2+1}-\left(\frac{K}{4J}\right)\right].
\end{align}
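The band structure and the Meissner-to-vortex boundary are straightforward to evaluate numerically. The following short script (a sketch of ours, not part of the original analysis; parameter values are purely illustrative) computes $E_-(k)$ on the allowed momenta, evaluates $\phi_c$, and compares the discrete minimum with the continuum prediction of Eq.~(\ref{eq:k}).
\begin{verbatim}
# Sketch (not from the original text): lower band E_-(k), critical flux
# phi_c, and location of the minima; parameter values are illustrative.
import numpy as np

Ns, J, K, phi, Phi = 40, 1.0, 0.8, np.pi / 2, 0.0

k = 2 * np.pi * np.arange(Ns) / Ns          # allowed momenta on the rings
E_minus = (-2 * J * np.cos(phi / 2) * np.cos(k - Phi)
           - np.sqrt(K**2 + (2 * J * np.sin(phi / 2) * np.sin(k - Phi))**2))

phi_c = 2 * np.arccos(np.sqrt((K / (4 * J))**2 + 1) - K / (4 * J))
print("phi_c =", phi_c, "| vortex phase" if phi > phi_c else "| Meissner phase")

# Continuum minima k_1, k_2 (vortex phase only)
arg = np.sqrt((K / (2 * J))**2 + np.sin(phi / 2)**2) / np.tan(phi / 2)
if phi > phi_c:
    print("predicted minima:", Phi - np.arccos(arg), Phi + np.arccos(arg))
print("discrete minimum of E_-(k) at k =", k[np.argmin(E_minus)])
\end{verbatim}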
The Meissner phase is characterized by vanishing transverse currents $j_{l,\perp}= iK\langle a^{\dagger}_{l,1}a_{l,2}-a^{\dagger}_{l,2}a_{l,1}\rangle$; the longitudinal currents on each ring, defined as $j_{l,p}= iJ\langle a_{l,p}^{\dagger}a_{l+1,p}e^{i\Phi_p}-a_{l+1,p}^{\dagger}a_{l,p}e^{-i\Phi_p}\rangle$, are opposite, and the chiral current $J_c=\sum_l \langle j_{l,1}-j_{l,2}\rangle$ is saturated. The vortex phase is characterized by a modulated density, jumps of the phase of the wave function, and
non-zero, oscillating transverse currents which create a vortex pattern. This is illustrated in Fig.~\ref{figcurrents}, which shows the longitudinal and transverse current configurations both in the Meissner and in the vortex phase.
\begin{figure}
\begin{tikzpicture}
\matrix [ampersand replacement=\amp] {
\node{\includegraphics[scale=0.2]{Figure-3a.png}}; \amp \node[text height=1.5ex,text depth=.25ex]{$\Phi=0$, $K/J=2$}; \\
\node{\includegraphics[scale=0.3]{Figure-3b.png}}; \amp \node[text height=1.5ex,text depth=.25ex]{$\Phi=0$, $K/J=0.9$ }; \\
\node{\includegraphics[scale=0.2]{Figure-3c.png}}; \amp \node[text height=1.5ex,text depth=.25ex]{$\Phi=\frac{\pi}{N_s}$,$K/J=0.9$}; \\
};
\end{tikzpicture}
\caption{(Color online) Representation of the current patterns for noninteracting bosons on a double ring lattice in various parameter regimes, as indicated in the figure. The length of the arrows is proportional to the amplitude of the current field. The current fields are minimal at the core of a vortex, where the density also drops. Upper panel: Meissner phase. Middle panel: vortex phase, case of two vortices. Lower panel: single vortex in the Meissner phase. In all panels, $\phi=\pi/2$ and $N_s=12$.}
\label{figcurrents}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.8]{Figure-4a.pdf}
\includegraphics[scale=0.8]{Figure-4b.pdf}
\caption{(Color online) Phase and density profiles of the condensate wavefunction for noninteracting bosons along the double ring lattice as a function of the lattice index. Top panel: odd number of vortices for average flux $\Phi=\pi/N_s$. Bottom panel: even number of vortices for $\Phi=0$. The other parameters are $K/J=0.8$, $\phi=\pi/2$,$N_s=20$ and $n=N/N_s$.}
\label{fig:vortices}
\end{figure}
\subsection{Vortex configurations on a finite double ring lattice}
\label{vortex-subsec}
Figure~\ref{fig:vortices} shows the distribution of the phase and density of the condensate wave function of the noninteracting gas in the vortex phase, which reads $\psi_{l,p}=\sqrt{\frac{N}{2N_s}}\left(\delta_{p,1}(u_{k_1}e^{ik_1l}+u_{k_2}e^{ik_2l})+\delta_{p,2}(v_{k_1}e^{ik_1l}+v_{k_2}e^{ik_2l})\right)$, for various values of the system parameters. The number $N_v$ of vortices is obtained by counting the number of jumps in the phase. Since it is also associated with the number of oscillations in the density, which are characterized by the wavevector $k=k_2-k_1$, it is readily obtained as $N_v= N_s (k_2-k_1)/2\pi$. Recalling that the value of the total flux $\Phi$ fixes the position of the minima of the dispersion relation (\ref{eq:epm}), in the case where the total flux is a multiple of $\frac{\pi}{N_s}$ we obtain specific features associated with the commensurability of $\Phi$ with the allowed values of the discrete wavevector $k$. Figure~\ref{fig3} depicts the various possibilities. When $\Phi=2 j\frac{\pi}{N_s}$, with $j$ an integer, the dispersion relation is centered on an allowed value of the quantized momentum $k$. In this case vortices start to form when the dispersion relation displays a double-minimum structure, and the number of vortices is even.
On the other hand, when
$\Phi = (2j+1)\frac{\pi}{N_s}$, the value of $\Phi$ falls between two adjacent values of the quantized momentum $k$ (see again Fig.~\ref{fig3}). In this case, in the vortex phase, the distance between the two minima is an odd multiple of $\frac{2\pi}{N_s}$, giving rise to an odd number of vortices. Quite interestingly, in the Meissner phase, i.e., for a choice of parameters $\phi$ and $K/J$ leading to a single minimum in the single-particle dispersion $E_-(k)$, for $\Phi = (2j+1)\frac{\pi}{N_s}$ we find a nontrivial pattern in the current profiles, corresponding to a single-vortex configuration (see Fig.~\ref{figcurrents}, third panel). This is a mesoscopic effect associated with the finite size and the geometry of the ring. As we shall see below, however, this vortex is more fragile than those appearing in the vortex phase, and is destroyed in the presence of interactions.
\begin{figure}[h!]
\includegraphics[scale=1]{Figure-5a.pdf}
\includegraphics[scale=1]{Figure-5b.pdf}
\caption{(Color online) Scheme of the occupancy of the single-particle levels by noninteracting bosons at zero temperature (filled green circles), on the single-particle dispersion relation in the energy-momentum plane (empty circles joined by line), for various choices of total flux $\Phi$ (dashed vertical line). In the Meissner phase, when $\Phi=2j \frac{\pi}{N_s}$ (left top panel, with $\Phi=10\pi/N_s$) bosons condense in the $k=\Phi$ mode. When $\Phi=(2j+1)\frac{\pi}{N_s}$ (right top panel, with $\Phi=9\pi/N_s$), $\Phi$ lies between two momentum modes, the lowest-energy states are doubly degenerate and the system supports a vortex in the Meissner phase. In the vortex phase, when $\Phi=2j\frac{\pi}{N_s}$ (bottom left panel, with $\Phi=8\pi/N_s$) we find an even number of vortices, whereas when $\Phi=(2j+1)\frac{\pi}{N_s}$ (bottom right panel, with $\Phi=9\pi/N_s$) the number of vortices is odd. Notice that the scheme is completely general for values of $\Phi$ equal to any odd or even multiple of $\pi/N_s$.}
\label{fig3}
\end{figure}
\subsection{Persistent and chiral currents}
We proceed next to study the persistent currents on the ring. They are defined as $I_p=\frac{\partial\langle H\rangle}{\partial \Phi_p}$. Since for the Hamiltonian (\ref{eq1}) one has $\frac{\partial\langle H\rangle}{\partial \Phi}=0$, we obtain that $I_1=-I_2=I$ and we have a correspondence between chiral current $J_c$ and persistent current:
\begin{equation}
J_c=2I=\frac{\partial \langle H\rangle}{\partial \phi}.
\label{jchiral}
\end{equation}
In particular, Fig.~\ref{fig4} shows the dependence of the excitation spectrum branches on the relative flux. In order to obtain the persistent currents, for each value of $\phi$ we identify the lowest-energy branch, defined piecewise by following the lowest-energy part of $E_-(k)$ (see Fig.~\ref{fig4}, upper panel). The persistent current is then readily obtained by differentiating this curve with respect to the flux $\phi$.
\begin{figure}[h!]
\includegraphics[scale=1]{Figure-6.pdf}
\caption{(Color online) Upper panel: Excitation branches $E_-(k_n,\phi)$ as a function of the relative flux $\phi$ (dimensionless) for various values of $k_n=\frac{2\pi}{N_s}n$, $n\in [0,N_s/2]$. At $\phi=0$, one has $E_-(k_0, \phi)<E_-(k_1,\phi)<\dots<E_-(k_n, \phi)$ (blue to brown curves, from bottom to top). The energy of the lowest excitation branch is the lower envelope of these curves and is used to calculate the chiral current. Lower panel: chiral current, obtained from Eq.~(\ref{jchiral}), as a function of $\phi$.
In both panels we have taken $N_s=20$, $\Phi=0$ and $K/J=0.8$.
}
\label{fig4}
\end{figure}
The resulting persistent current as a function of the relative flux $\phi$ is illustrated in Fig.~\ref{fig4} (bottom panel). By increasing the relative flux at fixed $K/J$, the system undergoes a transition from the Meissner to the vortex phase. For low $\phi$ values, the bosons stay in the branch $E_-(k=\Phi)$ as long as the system is in the Meissner phase. At the critical value $\phi_c$ for entering the vortex phase, the persistent current displays a jump, corresponding to an angular momentum $\Phi+2\pi/N_s$. As the flux $\phi$ increases, the persistent current displays several other jumps, each corresponding to the appearance of a vortex pair in the ring. We notice that the total number of jumps in the current curve corresponds to $N_s/2$, i.e., the maximal number of vortex pairs on the ring.
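To make the procedure concrete, the following sketch (ours, not part of the original text; it uses the parameters of Fig.~\ref{fig4}) builds the lower envelope of the branches $E_-(k_n,\phi)$, differentiates it numerically with respect to $\phi$, and counts the jumps of the resulting chiral current.
\begin{verbatim}
# Sketch (not from the original text): chiral current from the lower
# envelope of the single-particle branches.
import numpy as np

Ns, J, K, Phi = 20, 1.0, 0.8, 0.0
kn = 2 * np.pi * np.arange(Ns) / Ns
phis = np.linspace(0.0, np.pi, 2001)

def E_minus(k, phi):
    return (-2 * J * np.cos(phi / 2) * np.cos(k - Phi)
            - np.sqrt(K**2 + (2 * J * np.sin(phi / 2) * np.sin(k - Phi))**2))

branches = E_minus(kn[:, None], phis[None, :])   # shape (Ns, len(phis))
E_env = branches.min(axis=0)                     # piecewise lowest branch
Jc = np.gradient(E_env, phis)                    # chiral current per particle
print("Jc(phi = pi/2) =", Jc[len(phis) // 2])

# Every change of the occupied momentum state is a jump of Jc, i.e. the
# entry of one more pair of vortices into the double ring.
occupied = branches.argmin(axis=0)
print("number of jumps:", np.count_nonzero(np.diff(occupied)))
\end{verbatim}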
\section{Weakly-interacting regime}
\subsection{Variational Ansatz}
In the case of non-interacting bosons, when the single-particle spectrum has two degenerate minima, the many-body ground state is highly degenerate, as it corresponds to all possible partitions of the particles among the two minima. In the presence of interactions this degeneracy is lifted. Introducing the variational Ansatz
\begin{align}
|\Phi_N\rangle=\frac{1}{\sqrt{N!}}\left(\cos(\theta/2)\beta^{\dagger}_{k_1}+\sin(\theta/2)\beta^{\dagger}_{k_2}\right)^N|0\rangle,
\label{Ansatz}
\end{align}
which is valid in the weakly interacting regime, Wei and Mueller \cite{Mueller} have identified two phases, corresponding to two different partitions of the bosons among the minima $k_1$ and $k_2$: a vortex phase, in which each minimum is occupied by $N/2$ bosons, occurring if $1-6u_{k_1}^2v_{k_1}^2>0$; and a biased-ladder phase, characterized by symmetry breaking and full occupancy of only one of the two minima, occurring when $1-6u_{k_1}^2v_{k_1}^2<0$. The biased-ladder phase is characterized by the absence of density modulations and by different density values on the two rings.
\subsection{Coupled discrete nonlinear Schr{\"o}dinger equations (DNLSE)}
In order to explore in a broader way the weakly-interacting regime, we study the ground state of the system in the mean-field approximation, obtained by neglecting the quantum fluctuations and correlations.
We start from the equations of motion for the bosonic field operators in the Heisenberg picture:
\begin{eqnarray}
i\hbar\frac{d a_{l,p}(t)}{dt}=\left[a_{l,p}(t),H\right].
\end{eqnarray}
Taking the mean-field approximation, i.e., setting $\Psi_{l,p}(t)= \langle a_{l,p}(t)\rangle$,
we obtain two coupled discrete non-linear Schr{\"o}dinger equations (DNLSE):
\begin{eqnarray}
\label{dnlse}
i\partial_t \Psi_{l,1}(t)&=& -J\Psi_{l+1,1}(t)e^{i(\Phi+\phi/2)}-J\Psi_{l-1,1}(t)e^{-i(\Phi+\phi/2)}\nonumber\\&-&K\Psi_{l,2}(t)+U|\Psi_{l,1}(t)|^2\Psi_{l,1}(t)\\
i\partial_t \Psi_{l,2}(t)&=& -J\Psi_{l+1,2}(t)e^{i(\Phi-\phi/2)}-J\Psi_{l-1,2}(t)e^{-i(\Phi-\phi/2)}\nonumber\\&-&K\Psi_{l,1}(t)+U|\Psi_{l,2}(t)|^2\Psi_{l,2}(t)
\end{eqnarray}
These are the lattice version of the Gross-Pitaevskii equation (we have set $\hbar=1$). The above equations are expected to hold for weak interactions and a large number of particles per site.
The corresponding energy functional is given by
\begin{align}
&E[\mathbf{\Psi}_1,\mathbf{\Psi}_2]=-J\sum_{l,p}\left(\Psi^*_{l,p}\Psi_{l+1,p}e^{i\Phi_p}+c.c\right)\nonumber\\&-K\sum_l\left(\Psi_{l,1}^*\Psi_{l,2}+c.c\right)+\frac{U}{2}\sum_{l,p}|\Psi_{l,p}|^4,
\label{NRJF}
\end{align}
where $\mathbf{\Psi}_p=\{\Psi_{l,p}\}$.
We use a split-step Fourier transform method~\cite{Split} to solve the time-dependent DNLSE and perform imaginary-time evolution to obtain the ground state of the system, subject to the normalization condition
\begin{align}
\sum_{l=1}^{N_s}\sum_{p=1,2} |\Psi_{l,p}|^2=N
\end{align}
where $N$ is the number of particles in the system.
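As an illustration of this step, the sketch below (ours; it replaces the split-step Fourier scheme of Ref.~\cite{Split} by a plain explicit imaginary-time step in real space, and all parameter values are merely indicative) relaxes a random initial state of Eqs.~(\ref{dnlse}) towards the mean-field ground state while enforcing the normalization above.
\begin{verbatim}
# Sketch (not from the original text): imaginary-time relaxation of the
# coupled DNLSE with a plain explicit Euler step; parameters illustrative.
import numpy as np

Ns, J, K, U = 20, 1.0, 0.5, 0.3
phi, Phi = np.pi / 2, 6 * np.pi / 20
N = 20                                     # particle number, n = N/Ns = 1

def apply_H(Psi):
    """DNLSE right-hand side acting on Psi[p, l] (p = ring, l = site)."""
    out = np.empty_like(Psi)
    for p, Phip in enumerate((Phi + phi / 2, Phi - phi / 2)):
        out[p] = (-J * np.exp(1j * Phip) * np.roll(Psi[p], -1)
                  - J * np.exp(-1j * Phip) * np.roll(Psi[p], +1)
                  - K * Psi[1 - p]
                  + U * np.abs(Psi[p])**2 * Psi[p])
    return out

rng = np.random.default_rng(1)
Psi = rng.normal(size=(2, Ns)) + 1j * rng.normal(size=(2, Ns))
Psi *= np.sqrt(N / np.sum(np.abs(Psi)**2))

dtau = 0.01
for _ in range(50000):                     # imaginary-time steps
    Psi = Psi - dtau * apply_H(Psi)
    Psi *= np.sqrt(N / np.sum(np.abs(Psi)**2))   # restore the norm to N

imbalance = abs(np.sum(np.abs(Psi[0])**2 - np.abs(Psi[1])**2)) / N
print("particle imbalance Delta =", imbalance)
\end{verbatim}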
\section{Numerical results}
\subsection{Mean field ground state phase diagram}
We use the numerical solution of the DNLSE (\ref{dnlse}) to explore the nature of the ground state at varying interactions and inter-ring tunnel coupling, as identified by the ratios $U n/J$ and $K/J$, with $n=N/N_s$. For simplicity of the analysis, we choose a fixed value $\phi=\pi/2$ for the relative flux.
Our results are illustrated in Fig.~\ref{Nldif}, showing the particle imbalance between the two rings, $\Delta=\left|\sum_l\langle n_{l,1}-n_{l,2}\rangle\right|/N$. For a choice of total flux $\Phi$ corresponding to an even multiple of $\pi/N_s$ (upper panel of Fig.~\ref{Nldif}), at varying interaction and tunnel parameters we identify three phases: the vortex (V) and Meissner (M) phases found in the non-interacting regime, as well as the biased-ladder phase (BL-V) predicted by the variational Ansatz. We have denoted this latter phase BL-V since it competes with the vortex phase; both are obtained from the Ansatz when the single-particle spectrum has a double-minimum structure.
Figure~\ref{Density} shows the corresponding density profiles of the various phases: the biased-ladder, Meissner and vortex phases are illustrated in panels (BL-V), (M) and (V) respectively.
For values of the total flux corresponding to an odd multiple of $\pi/N_s$ (lower panel of Fig.~\ref{Nldif}), in place of the Meissner phase admitting a single vortex, as predicted in the absence of interactions, we find a biased-ladder phase (denoted as BL-M in the figure). As will be discussed in Sec.~V.B, this is a mesoscopic effect due to the finite size of the ring -- the imbalance decreases with increasing number of sites on the ring.
\begin{figure}[h!]
\includegraphics[scale=0.85]{Figure-8a.pdf}
\includegraphics[scale=0.85]{Figure-8b.pdf}
\caption{(Color online) Color map of the imbalance between the particle numbers in each ring, in the ($K/J$,$Un/J$) plane, for (upper panel) $\phi=\pi/2$, $\Phi=6\pi/N_s$ and $N_s=20$, and (lower panel) $\phi=\pi/2$, $\Phi=\pi/N_s$ and $N_s=20$. The letters indicate the parameter regimes where we find a biased-ladder phase (BL-V) where the single-particle spectrum has a double minimum, a Meissner phase (M), a vortex phase (V), and a biased-ladder phase (BL-M) where the single-particle spectrum has a single minimum. The corresponding density profiles are illustrated in Figs.~\ref{Density} and~\ref{fig15}.}
\label{Nldif}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{Figure-9.pdf}
\caption{(Color online) Density profiles along each ring as a function of the lattice index along the ring, with $N_s=20$, $\phi=\pi/2$, $\Phi=6\pi/N_s$, in the various phases identified in the diagram of Fig.~\ref{Nldif}: biased-ladder phase (BL-V), for $K/J=1.1$; Meissner phase (M), for $K/J=2$ (both at $Un/J=0.05$); vortex phase (V), for $K/J=0.5$ and $Un/J=0.3$.}
\label{Density}
\end{figure}
\subsection{Fate of the single vortex in the Meissner phase}
As discussed in Sec.\ref{vortex-subsec}, in the case when the total flux $\Phi = (2j+1)\frac{\pi}{N_s}$ and the system is in the Meissner phase, the noninteracting solution predicts the formation of a single vortex.
We explore here the fate of such a vortex in the presence of weak interactions.
A first answer is provided by the variational Ansatz introduced in Ref.~\cite{Mueller}, specialized to the case where the bosons occupy the two neighbouring momentum states of the single-particle spectrum closest to $k=\Phi$, when the latter has a single minimum (as shown in Fig.~\ref{fig3}, upper left panel):
\begin{align}
|\Psi_N\rangle = \frac{1}{\sqrt{N!}}\left(\cos(\theta/2)\beta_{\Phi+\pi/N_s}^{\dagger}+\sin(\theta/2)\beta_{\Phi-\pi/N_s}^{\dagger}\right)^N|0\rangle.
\end{align}
One readily obtains that the total energy is minimized by the choice $\theta=\pi$ if $1-6u_{\Phi+\pi/N_s}^2v_{\Phi+\pi/N_s}^2<0$, while one has $\theta=\pi/2$ if $1-6u_{\Phi+\pi/N_s}^2v_{\Phi+\pi/N_s}^2>0$. However, by using the results of Appendix A for the amplitudes $u_k$ and $v_k$, one readily finds that in the Meissner phase $1-6u_{\Phi+\pi/N_s}^2v_{\Phi+\pi/N_s}^2$ is always negative, and we conclude that the lowest-energy solution is of biased-ladder type.
We have verified this prediction by the numerical solution of the DNLSE, and we confirm that no vortex is found at finite interactions and the density profile is of biased-ladder type, as illustrated in Fig.~\ref{fig15} and in the phase diagram (Fig.~\ref{Nldif}, lower panel). By performing calculations at varying system size, we find that the imbalance among the two rings decreases with increasing $N_s$.
It is interesting to notice that this is different from the case of the biased-ladder phase BL-V obtained for values of the flux corresponding to even multiples of $\pi/N_s$: in that case, the particle imbalance does not depend on $N_s$ and the phase survives in the thermodynamic limit.
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{Figure-10.pdf}
\caption{(Color online) Density profiles for a double ring lattice with total flux $\Phi=\pi/N_s$: in the absence of interactions, a single vortex in the Meissner phase (upper panel); for weak repulsive interactions, the biased-ladder (BL-M) phase (lower panel). The other parameters are $N_s=20$, $K/J=2$, $\phi=\pi/2$.}
\label{fig15}
\end{figure}
\subsection{Persistent currents for interacting bosons on the double ring lattice }
\begin{figure}[h!]
\includegraphics[scale=1]{Figure-11.pdf}
\caption{(Color online) Chiral currents in units of $J$ as a function of the relative flux $\phi$ (dimensionless) for noninteracting bosons (blue, thin solid line) and weakly interacting ones $Un/J=0.1$ (red, thick solid line) for $N_s=20$ and $K/J=3$.}
\label{currents-int}
\end{figure}
The numerical solution of the DNLSE also allows us to obtain the persistent currents in the presence of weakly repulsive interactions. Figure~\ref{currents-int} shows the dependence of the persistent current amplitude on the relative flux $\phi$ for the interacting double ring lattice. As compared to the noninteracting case, notable differences occur at increasing $\phi$ when the phase boundary is crossed: due to the presence of the intermediate biased-ladder phase, the jumps in the persistent current, which are associated with the creation of vortices, are suppressed. For the parameter choice used in Fig.~\ref{currents-int} one can then identify both the transition from the Meissner to the biased-ladder phase and from the latter to the vortex phase.
Persistent currents thus provide a powerful tool to explore the phases of the double ring lattice.
\section{Spiral interferograms}
\begin{figure}[t]
\includegraphics[scale=1]{Figure-12a.pdf}
\includegraphics[scale=1]{Figure-12b.pdf}
\caption{(Color online) Spiral interferogram in the Meissner phase (upper panels) with $K/J=1.5$, $\phi=\pi/2$ and $Un/J=0.3$ , $\Phi=0$ (upper left panel), $Un/J=0$, $\Phi=\frac{\pi}{N_s}$ (upper right panel), and in the vortex phase (lower panels) taking $K/J=0.1$,$Un/J=0.3$, $\phi=\pi/3$, with $\Phi=0$ (lower left panel), and $\Phi=\frac{\pi}{N_s}$ (lower right panel). In all panels $N_s=35$.
}
\label{OddEven}
\end{figure}
It has been shown~\cite{PhaseInter,PhaseInter2,1367-2630-18-7-075003} that it is possible to reconstruct the phase pattern of a ring-trapped Bose-Einstein condensate by studying its interference pattern with a reference disk-shaped condensate placed at the center of the ring. Using a similar principle, we show here that the interference pattern of two concentric rings allows one to characterize the vortices in the bosonic double ring lattice.
Assuming that the distance between neighbouring sites on each ring is larger than the difference of the radii of the two rings, the main contribution to the interference process is due to radially overlapping condensates belonging to the same site index in each ring (i.e., with the same angular coordinate).
In this case, the wave function a time $t_{TOF}$ after releasing the double ring trap is given by (see Appendix D for details):
\begin{eqnarray}
\Psi_p(r,\theta_l) \approx \tilde{\Psi}_0(k_{s,p})e^{i\hbar\frac{k_{s,p}^2}{2m}t_{TOF}}e^{i\phi_{l,p}}\sqrt{n_{l,p}}
\label{eq:psip}
\end{eqnarray}
where $\phi_{l,p}$ and $n_{l,p}$ are respectively the phase and the number of particles of the condensate on ring $p$ at site $l$, and $k_{s,p} = \frac{(R_p-r)(-1)^pm}{\hbar t_{TOF}}$ is related to the velocity at which each wave function evolves after releasing the trap. The interference pattern intensity is given by $I(r,\theta)= |\Psi_1(r,\theta)+\Psi_2(r,\theta)|^2$. By recalling that in the density-phase representation $\sqrt{n_{l,1}n_{l,2}}e^{i(\phi_{l,1}-\phi_{l,2})}= \langle a_{l,2}^{\dagger}a_{l,1}\rangle$, we obtain the following intensity distribution in the polar plane $(r,\theta_l)$:
\begin{eqnarray}
I(r,\theta_l) & = &\langle a_{l,1}^{\dagger}a_{l,1}\rangle+\langle a_{l,2}^{\dagger}a_{l,2}\rangle+2 {\rm Re}\left[e^{i\Delta_R}e^{iQr}\langle a_{l,1}^{\dagger}a_{l,2}\rangle\right]\nonumber\\
\label{eq:spirals}
\end{eqnarray}
with $Q=\frac{m(R_1-R_2)}{\hbar t_{TOF}}$ and $\Delta_R = \frac{(R_2^2-R_1^2)m}{\hbar t_{TOF}}$.
In order to analyze typical interference profiles in the various phases, we start from the noninteracting regime. In this case, using the results of Appendix A, in the Meissner phase one readily obtains
\begin{align}
I(r,\theta_l) \propto \frac{1}{N_s}\cos(Qr+\Delta_R) + n_{\theta_l}
\label{eq:interf}
\end{align}
where $n_{\theta_l}=\langle a_{l,1}^{\dagger}a_{l,1} + a_{l,2}^{\dagger}a_{l,2}\rangle$. This corresponds to an interference pattern made of concentric rings, as illustrated in the first panel of Fig.\ref{OddEven}.
In the case of a single vortex in the Meissner phase (second panel of Fig.~\ref{OddEven}), the interference pattern displays a line of dislocations, which are due to the phase slip and to the vanishing of the density at the vortex core.
In the vortex phase, Eq.(\ref{eq:spirals}) yields
\begin{eqnarray}
I(r,\theta_l)&\propto& \frac{1}{N_s}[2u_{k_1} v_{k_1}\cos(Qr+\Delta_R)\nonumber\\&+&v_{k_1}^2\cos(\theta_l (k_2-k_1)-\Delta_R - Qr)\nonumber\\&+&u_{k_1}^2\cos(\theta_l (k_2-k_1)+\Delta_R+Qr)]\nonumber\\&+& n_{\theta_l}.
\end{eqnarray}
In this case, the interference pattern is composed of a term which is constant along $\theta$, giving rise to concentric rings, and of two spiral patterns with uniform intensity, each corresponding to one of the two rings, one winding clockwise and the other counter-clockwise. The superposition of the three contributions yields a modulated spiral pattern, shown in Fig.~\ref{OddEven}. This method, which is specific to the ring geometry, provides a very powerful characterization of the vortex phase, as the number of branches in the pattern yields the number of vortices in the system. In particular, it makes it possible to distinguish an even from an odd number of vortices, depending on the value of the total flux. As a final remark we notice that the interference pattern depends on the choice of gauge; other choices lead to different spiral interferogram pictures.
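As a practical illustration, the following sketch (ours, not part of the original analysis) assembles the interferogram of Eq.~(\ref{eq:spirals}) from mean-field on-site amplitudes $\Psi_{l,p}$, for instance those obtained from the imaginary-time relaxation discussed above; the dimensionless constants playing the role of $Q$ and $\Delta_R$ are free parameters fixed by the expansion time and the ring radii.
\begin{verbatim}
# Sketch (not from the original text): interferogram I(r, theta_l) built
# from mean-field amplitudes Psi[p, l]; Q and Delta_R are illustrative.
import numpy as np

def interferogram(Psi, Q=20.0, Delta_R=0.0, Nr=200):
    r = np.linspace(0.5, 1.5, Nr)[:, None]          # radial coordinate
    n_tot = np.abs(Psi[0])**2 + np.abs(Psi[1])**2   # density terms
    cross = np.conj(Psi[0]) * Psi[1]                # <a_1^dag a_2> (mean field)
    return n_tot[None, :] + 2 * np.real(
        np.exp(1j * (Delta_R + Q * r)) * cross[None, :])

# Column l of the returned array is the radial cut at angle
# theta_l = 2*pi*l/Ns; spiral branches appear whenever the relative phase
# between the two rings winds with l, i.e. in the vortex phase.
\end{verbatim}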
\section{Conclusions and outlook}
In this work, we have studied the ground-state properties of weakly interacting bosons on a double ring lattice, subjected to two gauge fields.
Depending on the ratio between inter-ring and intra-ring tunnel energies, as well on the relative flux, the bosons are found to be in the Meissner or vortex phases, previously identified for the linear ladder geometry.
As a feature specific to the ring geometry, for the non-interacting gas we have found a parity effect on the number of vortices in the system, which originates from the commensurability of the total flux with the allowed momentum states on the rings. Also, for special values of the total flux $\Phi$, due to finite-size effects, we have found that the ground state may host a single vortex even in the Meissner phase. The analysis of persistent currents shows that, at varying relative flux, it is possible to identify both the Meissner and the vortex phase. In the latter, due to the finite size of the double ring lattice, it is possible to monitor the appearance of pairs of vortices at increasing $\phi$.
We have then considered the effect of weakly repulsive interactions, as described within a mean-field approach.
We have identified the biased-ladder phase and shown that the Meissner phase becomes imbalanced when the total flux $\Phi$ is an odd multiple of $\pi/N_s$, due to mesoscopic effects.
Even in the presence of interactions, the study of persistent currents is a useful tool to characterize the various phases.
Finally, we have proposed the interference patterns between the two rings as a probe of the various phases, specifically adapted to the ring geometry, yielding in particular spiral images in the presence of vortices.
An analysis beyond mean field suggests that very small ring lattices at low filling display fragmentation \cite{Kolovsky}, in a similar way to what is found for spin-orbit-coupled Bose gases \cite{Eiji}. As an outlook, it would be interesting to explore the crossover from the mean-field to the fragmented state as the lattice filling and size are decreased.
\acknowledgements
We thank Luigi Amico, Roberta Citro, Romain Dubessy, Rosario Fazio, Fabrice Gerbier, Erich Mueller, Maxim Olshanii, Paolo Pedri, and Hélène Perrin for fruitful discussions. We acknowledge funding from the ANR SuperRing project (ANR-15-CE30-0012-02).
\bibliographystyle{prsty}
\section{Introduction}
A {\bfseries flat affine manifold} is a differentiable manifold equipped with a flat, torsion-free connection. Equivalently, it is a manifold equipped with an atlas such that all transition maps between charts are affine transformations (see \cite{FGH} or \cite{shima}).
A {\bfseries Hessian manifold} is an affine manifold with a Riemannian metric which is locally equivalent to a Hessian of a function. Any Kähler metric can be defined as a complex Hessian of a plurisubharmonic function. Thus, the Hessian geometry is a real analogue of the Kähler one.
A Kähler structure $(I,g^T)$ on $TM$ can be constructed from a Hessian structure $(\nabla,g)$ on $M$ (see \cite{shima}). The correspondence
$$
\text{r}:\{\text{Hessian manifolds}\} \to \{\text{K\"ahler manifolds}\}
$$
$$
\ \ \ \ \ (M,\nabla,g)\ \ \to \ \ (TM,I,g^T)
$$
is called the {\bfseries r-map}. In particular, this map associates special Kähler manifolds to special real manifolds (see \cite{AC}). In this case, the r-map describes a correspondence between the scalar geometries of supersymmetric theories in dimensions 5 and 4. See \cite{CMMS} for details on the r-map and supersymmetry.
Hessian manifolds have many different applications: in supersymmetry (\cite{CMMS}, \cite{CM}, \cite{AC}), in convex programming
(\cite{N}, \cite{NN}), in the Monge-Ampère equation (\cite{F1}, \cite{F2}, \cite{G}), and in the WDVV equations (\cite{T}).
A {\bfseries Riemannian cone} is a Riemannian manifold $(M\times \mathbb R^{>0}, t^2g_M+dt^2)$, where $t$ is a coordinate on $\mathbb R^{>0}$ and $g_M$ is a Riemannian metric on $M$. Riemannian cones have important applications in supergravity (\cite{ACDM},
\cite{ACM}, \cite{CDM}, \cite{VDMV}). Geometry and holonomy of pseudo-Riemannian cones are studied in \cite{ACGL} and \cite{ACL}.
Contact geometry is an odd-dimensional counterpart of symplectic geometry. A manifold $M$ is {\bfseries contact} if and only if there exists a symplectic form $\omega$ on the cone $M\times \mathbb R^{>0}$ satisfying $\lambda_q^* \omega = q^2 \omega$, where $\lambda_q (m\times t) = m\times qt$. Moreover, if there exists an $\mathbb R^{>0}$-invariant complex structure $I$ on $M\times \mathbb R^{>0}$ such that $g=\omega(I\cdot, \cdot )$ is positive definite, that is, $(M\times\mathbb R^{>0},I,\omega)$ is a Kähler manifold, then the manifold $M$ is called {\bfseries Sasakian}. The metric of the Kähler cone over a Sasakian manifold is automatically a Riemannian cone metric (see \cite{OV}). Note that our definitions of contact and Sasakian manifolds are not standard but equivalent. See \cite{5sasaki} or \cite{BG} for standard definitions and \cite{ACHK} or \cite{OV} for their equivalence.
A {\bfseries Hessian cone} is a Hessian manifold $(M\times\mathbb R^{>0},\nabla,g=t^2g_M+dt^2)$, where $g_M$ is a metric on $M$, $t$ a coordinate on $\mathbb R^{>0}$, and $\nabla$ a flat torsion-free connection satisfying
$$
\nabla \left(t\frac{\partial}{\partial t} \right)=\text{Id}.
$$
We say that a Riemannian manifold $(M,g)$ is a {\bfseries projective Hessian manifold} if there exists a connection $\nabla$ on $M\times \mathbb R^{>0}$ such that $(M\times\mathbb R^{>0},\nabla,g=t^2g_M+dt^2)$ is a Hessian cone.
We study a relation between projective Hessian and Sasakian manifolds analogous to the one between Hessian and Kähler manifolds.
\begin{theorem}
Let $(M, g)$ be a projective Hessian manifold. Then $TM\times\mathbb R$ admits a structure of a Sasakian manifold.
\end{theorem}
This theorem is closely related to the r-map. Namely, we have a diagram
$$
\begin{CD}
M\times \mathbb R^{>0} @>\text{r}>> T(M\times\mathbb R^{>0})\\
@AA con A @AA con A @.\\
M @>>> TM\times\mathbb R
\end{CD},
$$
where vertical arrows associate Riemannian cones to the corresponding Riemannian manifolds. The theorem implies that the Riemannian manifold $T(M\times\mathbb R^{>0})$ with the metric constructed by r-map is actually a cone over $TM\times \mathbb R$.
Then we work with Lie algebras and groups equipped with invariant structures on them. There are different descriptions of an {\bfseries invariant affine structure} on a Lie algebra $\mathfrak{g}$: a torsion-free flat connection on $\mathfrak{g}$, an étale affine representation $\mathfrak{g}\to \mathfrak{aff}(\mathbb R^n)$, where $n$ is the dimension of $\mathfrak{g}$, or a structure of a left-symmetric algebra on $\mathfrak{g}$, that is, a multiplication on $\mathfrak{g}$ satisfying
$$
XY-YX=[X,Y]
$$
and
$$
X(YZ)-(XY)Z=Z(XY)-(ZX)Y
$$
for any $X,Y,Z \in \mathfrak{g}$ (see \cite{burde} or \cite{Bu2}).
Then we adapt the r-map and Theorem 1.1 to the case of Lie algebras and groups.
\begin{theorem}[\cite{BD}]
Let $\mathfrak{g}$ be an $n$-dimensional Lie algebra endowed with an affine structure $\nabla$ and $\eta$ the corresponding étale affine representation. Then there exists an invariant complex structure on $\mathfrak{g}\ltimes_\eta \mathbb R^n$.
\end{theorem}
It follows immediately from Theorem 1.2 that if a Lie group $G$ admits a left invariant affine structure then there exists a left invariant complex structure on the semidirect product $G\ltimes_\theta \mathbb R^n$. We show that $\theta$ is the linear part of the corresponding affine action of $G$ and obtain the following addition to the results of \cite{BD}.
\begin{theorem}
Let $G$ be a simply connected Lie group equipped with a left invariant affine structure, $\theta$ the linear part of the corresponding affine action of $G$. Then there exists a left invariant integrable complex structure $I$ on the group
$$
G\ltimes_\theta \mathbb R^n\simeq TG
$$
such that $I$ swaps vertical and horizontal tangent subbundles.
\end{theorem}
Note that a Kähler structure on the group $G\ltimes_{\theta^*} \left(\mathbb R^n\right)^*$ is constructed from an invariant Hessian structure on $G$ in \cite{medina}. The corresponding complex structure on $G\ltimes_{\theta^*} \left(\mathbb R^n\right)^*$ depends on the Hessian metric on $G$. In our case, the complex structure on $G\ltimes_\theta \mathbb R^{n}$ depends only on the affine structure on $G$.
{\bfseries A Hessian Lie group} $(G,\nabla, g)$ is a Lie group $G$ endowed with a left invariant affine structure $\nabla$ and a left invariant Hessian metric $g$.
\begin{theorem}
Let $G$ be an $n$-dimensional simply connected Lie group equipped with a left invariant affine structure $\nabla$ and $\theta$ the linear part of the corresponding affine action of $G$. Then there exists a left invariant Kähler metric on $G\ltimes_\theta \mathbb R^n=TG.$
\end{theorem}
The construction from Theorem 1.4 is a homogeneous analogue of the r-map.
In order to adapt Theorem 1.1 to the case of Lie algebras, we introduce the notions of semi-Sasakian Lie algebras and groups.
Let $\mathfrak{g}$ be a $(2n+1)$-dimensional Lie algebra, $D$ a derivation of $\mathfrak{g}$, $\omega$ a 2-form on $\mathfrak{g}$, and $\eta$ a 1-form on $\mathfrak{g}$. If
$$
d\omega =0, \ \ \ \omega+D^*\omega-d\eta = 0 \ \ \ \text{and} \ \ \ \eta\wedge \omega^n \ne 0
$$
then we say that the collection $(\mathfrak{g},D, \omega,\eta)$ is a {\bfseries semi-contact Lie algebra}. Note that if $D=0$ then we obtain a pair $(\mathfrak{g},\eta)$ such that $\eta\wedge(d\eta)^n\ne 0$, i.e., $(\mathfrak{g},\eta)$ is a contact Lie algebra.
The definition of semi-contact Lie algebras is closely related to the definition of lcs Lie algebras. A {\bfseries locally conformally symplectic} (shortly, {\bfseries lcs}) {\bfseries algebra} is a $2n$-dimensional Lie algebra $\mathfrak{g}$ endowed with a 2-form $\Omega$ and a 1-form $\vartheta$ such that
$$
\Omega^n\ne 0, \ \ \ d\vartheta=0, \ \ \ \text{and} \ \ \ d\Omega = \vartheta \wedge \Omega
$$
(see \cite{BD} or \cite{ABP}).
According to the results of \cite{ABP}, a collection $(\mathfrak{g},D, \omega,\eta)$ is a semi-contact Lie algebra if and only if $(\mathbb R\ltimes_D \mathfrak{g}, \omega+\vartheta \wedge \eta)$ is an lcs Lie algebra, where $\vartheta(r\ltimes_D v)=r$, and any lcs Lie algebra arises in this way. Thus, there is a one-to-one correspondence between lcs and semi-contact Lie algebras. If $(\mathfrak{g},\Omega,\vartheta)$ is an lcs Lie algebra then we say that $(G,\Omega,\vartheta)$ is an {\bfseries lcs Lie group}.
A {\bfseries semi-contact Lie group} is a triple $(G,\theta, \Omega)$, where $G$ is a Lie group, $\theta$ an action of $\mathbb R^{>0}$ on $G$, and $\Omega$ a symplectic form on $\mathbb R^{>0}\ltimes_\theta G$ which is invariant with respect to the left action of $G$ and satisfies $\lambda_q^* \Omega =q^2 \Omega$, where $\lambda_q x= (q\ltimes 1 )(x)$. If $\theta=\mathrm{id}$ then $(G,\Omega)$ is a contact Lie group. See \cite{D} for more information on contact Lie algebras and groups.
\begin{theorem}
Let $\mathfrak{g}$ be a Lie algebra of left invariant vector fields on a Lie group $G$, $D$ a derivation of $\mathfrak{g}$, $\theta=\exp D$ the corresponding automorphism of $G$, $\omega$ and $\eta$ left invariant $2$-form and $1$-form correspondingly. Then the following conditions are equivalent:
\begin{itemize}
\item [(i)] $(\mathfrak{g},D,\omega,\eta)$ is a semicontact Lie algebra.
\item[(ii)] $\left(\mathbb R^{>0}\ltimes_\theta G, \Omega=\omega+t^{-1}\eta\wedge dt,2t^{-1}dt
\right)$ is an lcs Lie group.
\item[(iii)] $(G,\theta, \hat\Omega=t^2\omega+tdt\wedge\eta)$ is a semi-contact Lie group.
\end{itemize}
\end{theorem}
A {\bfseries locally conformally K\"ahler} (shortly, {\bfseries lck}) Lie algebra is an lcs $(\mathfrak{h},\Omega,\vartheta)$ Lie algebra endowed with a complex structure $I$ such that
$$
g:=\Omega(*,I*)
$$
is a positive definite symmetric bilinear form (see \cite{HK}). The Lie group corresponding to an lck Lie algebra is called an lck Lie group. Note that homogeneous lck manifolds of reductive Lie groups are classified in \cite{ACHK}.
A {\bfseries semi-Sasakian Lie algebra} is a semi-contact Lie algebra $(\mathfrak{g},D,\omega,\eta)$ with a left-invariant integrable almost complex structure $I$ on $\mathbb R\ltimes_D \mathfrak{g}$ such that
$$
g=\hat\Omega(*,I*)
$$
is a positive definite symmetric bilinear form, where $\hat\Omega=\omega+\vartheta \wedge \eta$. As above, there is a one-to-one correspondence between semi-Sasakian algebras $(\mathfrak{g},D, \omega,\eta)$ and lck algebras $(\mathbb R\ltimes_D \mathfrak{g}, \omega+\vartheta \wedge \eta)$.
A {\bfseries semi-Sasakian Lie group} is a semi-contact Lie group $(G,\theta,\hat\Omega)$ with a left invariant complex structure $I$ on $\mathbb R^{>0}\ltimes_\theta G$ such that $(\mathbb R^{>0}\ltimes_\theta G,\hat\Omega,I)$ is a K\"ahler manifold. If $\theta=id$ then $G$ is called a {\bfseries Sasakian Lie group}. Note that in this case $g=t^2g_G+dt^2$ according to Proposition 3.5.
\begin{cor}
Let $\mathfrak{g}$ be the Lie algebra of left invariant vector fields on a Lie group $G$, $D$ a derivation of $\mathfrak{g}$, $\theta=\exp D$ the corresponding automorphism of $G$, $I$ a complex structure on $\mathbb R\ltimes_D \mathfrak{g}$, and $\omega$ and $\eta$ left invariant $2$-form and $1$-form respectively. Then the following conditions are equivalent:
\begin{itemize}
\item [(i)] $(\mathfrak{g},D,\omega,\eta,I)$ is a semi-Sasakian Lie algebra.
\item[(ii)] $\left(\mathbb R^{>0}\ltimes_\theta G, \Omega=\omega+t^{-1}\eta\wedge dt,2t^{-1}dt,I \right)$ is an lck Lie group.
\item[(iii)] $(G,\theta, \hat\Omega=t^2\omega+tdt\wedge\eta,I)$ is a semi-Sasakian Lie group.
\end{itemize}
\end{cor}
A Lie algebra $\mathfrak{g}$ is called {\bfseries projective} if there is an invariant affine structure $\nabla$ on $\mathfrak{g}\times \mathbb R$ such that
$$
\nabla_X E =\nabla_E X = X,
$$
where $X\in \mathfrak{g}$ and $E\in \mathbb R$. A Lie group is called {\bfseries projective} if the corresponding Lie algebra is projective. Note that if $G$ is a projective Lie group then there is an invariant affine structure on $G\times \mathbb R^{>0}$. A {\bfseries projective Hessian Lie group} $(G,g_G)$ is a projective Lie group $G$ endowed with a left-invariant Riemannian metric $g_G$ such that $(G\times\mathbb R^{>0},\nabla,g=t^2g_G +dt^2)$ is a Hessian cone, where $t$ is a coordinate on $\mathbb R^{>0}$ and $\nabla$ the affine connection on $G\times \mathbb R^{>0}$ corresponding to the projective structure on $G$.
\begin{theorem}
Let $G$ be an $n$-dimensional simply connected projective Hessian Lie group and $\theta$ the linear part of the corresponding affine representation of $G \times \mathbb R^{>0}$. Then there exists a structure of a semi-Sasakian Lie group on $G\ltimes_\theta \mathbb R^{n+1}$. Moreover, $G\ltimes_\theta \mathbb R^{n+1}\simeq
TG\times \mathbb R$.
\end{theorem}
Note that Theorem 1.7 is a homogeneous analogue of Theorem 1.1.
In Section 6, we explain why Theorem 1.4 and Theorem 1.5 generalize the known construction of homogeneous K\"ahler domains from homogeneous convex regular domains. According to \cite{vinb}, an invariant Hessian metric can be constructed on any homogeneous convex regular domain. Moreover, a homogeneous K\"ahler domain can be associated with a homogeneous convex regular cone (see \cite{shima}). We can obtain the same construction by applying Theorem 1.4 to a Lie group acting simply transitively on a homogeneous regular convex domain. Such a group always exists and is called a {\bfseries Lie group associated with} a homogeneous regular domain. Any Lie group associated with a homogeneous regular domain is both a Hessian Lie group and a projective Lie group.
The groups $\text{U}(1)=\text{SO}(2)$ and $\text{SU}(2)$ belong to another kind of projective Hessian Lie groups. Using them, we can construct a semi-Sasakian structure on the Euclidean group $\text{E}(2)$ and the group of isometries of the complex plane $\mathbb{C}^2$. Any Sasakian group is obviously semi-Sasakian. The Sasakian groups of dimension $n \le 5$ are classified in \cite{5sasaki}. Using this classification, we verify that the semi-Sasakian group $\text{E}(2)$ does not admit a Sasakian structure.
Any Lie group associated with a homogeneous regular domain admits both structures: Hessian and projective Hessian. The group $\text{SO}(2)$ admits both structures too. However, not every projective Hessian group is Hessian. The group $\text{SU}(2)$ is not Hessian simply because the sphere $S^3$ does not admit any flat affine structure. However, the group $\text{SU}(2)$ admits an invariant projective structure, since there is an invariant affine structure on $\text{SU}(2)\times\mathbb R^{>0}$. Thus, a natural question arises: does the existence of a $G$-invariant self-similar structure on $G\times\mathbb R^{>0}$ imply the existence of a $G\times\mathbb R^{>0}$-invariant Hessian structure on $G\times \mathbb R^{>0}$? The answer is positive when $G$ is a Lie group associated with a homogeneous regular domain or $\text{U}(1)$. We show that $\text{SU}(2)\times \mathbb R^{>0}$ does not admit an invariant Hessian structure.
\section{Hessian and Kähler structures}
\begin{defin}
A {\bfseries flat affine manifold} is a differentiable manifold equipped with a flat, torsion-free connection. Equivalently, it is a manifold equipped with an atlas such that all transition maps between charts are affine transformations (see \cite{FGH}).
\end{defin}
\begin{defin}
A Riemannian metric $g$ on a flat affine manifold $(M,\nabla)$ is called a {\bfseries Hessian metric} if $g$ is locally expressed as the Hessian of a function,
$$
g=\text{Hess}\, \varphi=\nabla d \varphi =\frac{\partial^2 \varphi}{\partial x^i \partial x^j} dx^i dx^j,
$$
where $x^1,\ldots, x^n$ are flat local coordinates. A {\bfseries Hessian manifold} $(M,\nabla,g)$ is a flat affine manifold $(M,\nabla)$ endowed with a Hessian metric $g$ (see \cite{shima}).
\end{defin}
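For instance (a standard illustration, not taken from the cited references), on $M=\mathbb R^{>0}$ with its flat coordinate $x$ and potential $\varphi(x)=-\log x$ one obtains
$$
g=\text{Hess}\,\varphi=\frac{\partial^2 \varphi}{\partial x^2}\,dx^2=\frac{dx^2}{x^2},
$$
which is positive definite, so $(\mathbb R^{>0},\nabla,x^{-2}dx^2)$ is a Hessian manifold.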
Let $U$ be an open chart on a flat affine manifold $M$, let $x^1,\ldots, x^n$ be affine coordinates on $U$, and let $x^1,\ldots, x^n, y^1, \ldots, y^n$ be the corresponding coordinates on $TU$. Define the complex structure $I$ by $I(\frac{\partial}{\partial x^i})=\frac {\partial} {\partial y^i}$. The corresponding complex coordinates are given by $z^i=x^i+\sqrt {-1}y^i$. The complex structure $I$ does not depend on the choice of flat coordinates on $U$. Thus, in this way, we obtain a complex structure on $TM$.
Let $\pi : TM \to M$ be a natural projection. Consider a Riemannian metric $g$ given locally by
$$
f_{i,j} dx^idx^j.
$$
Define the Hermitian metric $g^T$ on $TM$ by
$$
\pi^* f_{i,j} dz^id\bar z^j.
$$
\begin{proposition}[\cite{shima}, \cite{AC}]
Let $M$ be a flat affine manifold, $g$ and $g^T$ as above. Then the following conditions are equivalent:
\begin{itemize}
\item[(i)]$g$ is a Hessian metric.
\item[(ii)] $g^T$ is a Kähler metric.
\end{itemize}
Moreover, if
$
g=\text{Hess} \varphi
$
locally then $g^T$ is equal to a complex Hessian
$$
g=\text{Hess}_\mathbb{C} (4\pi^*\varphi).
$$
\end{proposition}
\begin{defin}
The metric $g^T$ is called {\bfseries Kähler metric associated with $g$}. The correspondence that associates the Kähler manifold $(TM,g^T)$ with a Hessian manifold $(M, g)$ is called {\bfseries r-map} (see \cite{AC}).
\end{defin}
\begin{proposition}
Let $(M,g)$ be a Hessian manifold, $I$ the corresponding complex structure on $TM$, and $\pi : TM \to M$ a projection. Then the associated Kähler metric $g^T$ equals
\begin{equation}
h(X,Y)=\pi^*g(X,Y)+\pi^*g(IX,IY)+\sqrt{-1}\pi^*g(IX,Y)-\sqrt{-1}\pi^*g(X,IY).
\end{equation}
\end{proposition}
\begin{proof}
If
\begin{equation}
g=f_{i,j} dx^i dx^j
\end{equation}
then
$$
g^T=\pi^* f_{i,j} dz^i d\bar z^j=\pi^* f_{i,j} d(x^i+\sqrt{-1}y^i) d(x^j-\sqrt{-1}y^j)=
$$
\begin{equation}
=\pi^* f_{i,j} dx^i dx^j+\pi^* f_{i,j} dy^idy^j+\sqrt{-1}\pi^* f_{i,j} dy^idx^j-\sqrt{-1}\pi^* f_{i,j} dx^i dy^j.
\end{equation}
It is enough to check the identity
$$
h(X,Y)=g^T(X,Y)
$$
on pairs of basis vectors. For any $i\in \{1,\ldots,n\}$ we have
$$
\pi^*g\left(\frac{\partial}{\partial y^i},\,\cdot\,\right)=0,
$$
moreover,
$$
I \frac{\partial}{\partial x^i}=\frac{\partial}{\partial y^i}.
$$
Hence, by (2.1),
$$
h\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=\pi^* g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)
$$
and, by (2.2),
$$
\pi^* g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=f_{i,j}.
$$
On the other hand, by (2.3),
$$
g^T\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=f_{i,j}.
$$
Thus, we get
$$
g^T\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=h\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right).
$$
Checking for the pairs $\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial y^j}\right)$ and $\left(\frac{\partial}{\partial y^i},\frac{\partial}{\partial y^j}\right)$ is similar.
\end{proof}
\section{Projective Hessian and Sasakian manifolds}
\begin{defin}
A {\bfseries radiant manifold} $(C,\nabla, \xi)$ is a flat affine manifold $(C,\nabla)$ endowed with a vector field $\xi$ satisfying
\begin{equation}
\nabla \xi =\text{Id}.
\end{equation}
Equivalently, it is a manifold equipped with an atlas such that all transition maps between charts are linear transformations (see e.g. \cite{Go}).
\end{defin}
\begin{proposition}[\cite{Go}]
Let $t$ be a coordinate on $\mathbb R^{>0}$ and $(M\times \mathbb R^{>0},\nabla,t\frac{\partial}{\partial t})$ a radiant manifold. Consider a natural action of $\mathbb R^{>0}$ on $M\times \mathbb R^{>0}$. Then the connection $\nabla$ is $\mathbb R^{>0}\text{-invariant}$.
\end{proposition}
\begin{defin}
A {\bfseries Hessian cone} is a Hessian manifold $(M\times\mathbb R^{>0},\nabla,g=t^2g_M+dt^2)$ such that $(M\times \mathbb R^{>0},\nabla,t\frac{\partial}{\partial t})$ is a radiant manifold, where $t$ is a coordinate on $\mathbb R^{>0}$.
We say that a Riemannian manifold $(M,g_M)$ is a {\bfseries projective Hessian manifold} if there exists a connection $\nabla$ on $M\times \mathbb R^{>0}$ such that $(M\times\mathbb R^{>0},\nabla,g=t^2g_M+dt^2)$ is a Hessian cone.
\end{defin}
\begin{defin}
A {\bfseries Sasakian manifold} is a Riemannian manifold $(M,g_M)$ such that the cone metric $t^2 g_M+ dt^2$ on $M\times \mathbb R^{>0}$ is Kähler with respect to a dilation invariant complex structure.
\end{defin}
\begin{proposition}[\cite{OV}]
Let $(M\times \mathbb R^{>0}, g, I)$ be a Kähler manifold. For any $q\in \mathbb R^{>0}$ consider a map
$
\lambda_q : M\times \mathbb R^{>0} \to M\times \mathbb R^{>0}
$
defined by
$
\lambda_q (m\times t)=m\times qt.
$
If $\lambda_q^* g=q^2 g$ then $g=t^2g_M+dt^2$ and $(M,g_M)$ is a Sasakian manifold.
\end{proposition}
There exists a decomposition
$$
T(M\times\mathbb R^{>0})=TM \times T\mathbb R^{>0}=TM\times \mathbb R^{>0} \times \mathbb R.
$$
If $M\times\mathbb R^{>0}$ possess a Hessian structure then, by Proposition 2.3, $T(M\times\mathbb R^{>0})$ admits a Kähler structure.
\begin{proposition}
Let $(M\times \mathbb R^{>0}, \nabla, g)$ be a Hessian cone and $g^T$ the associated Kähler metric on $T(M\times \mathbb R^{>0})$. Consider $T(M\times\mathbb R^{>0})=TM\times \mathbb R \times \mathbb R^{>0}$ as a cone over $TM\times \mathbb R$. Then for any $q\in\mathbb R^{>0}$ we have $\mu_q^* g^T=q^2 g^T$, where the map
$
\mu_q : TM\times\mathbb R\times \mathbb R^{>0} \to TM\times \mathbb R\times \mathbb R^{>0}
$
is defined by
$
\mu_q (m\times s\times t)=m\times s\times qt.
$
\end{proposition}
\begin{proof}
We have the commutative diagram
$$
\begin{CD}
T(M\times\mathbb R^{>0}) @>\mu_q>> T(M\times\mathbb R^{>0})\\
@VV\pi V @VV\pi V @.\\
M\times\mathbb R^{>0} @>\lambda_q>> M\times\mathbb R^{>0}
\end{CD},
$$
where $\mu_q$ and $\lambda_q$ are multiplications of the coordinate on $\mathbb R^{>0}$ by $q$. By Proposition 2.5, we have
\begin{equation}
g^T(X,Y)=\pi^*g(X,Y)+\pi^*g(IX,IY)+\sqrt{-1}\pi^*g(IX,Y)-\sqrt{-1}\pi^*g(X,IY).
\end{equation}
Since the diagram is commutative, it follows that \begin{equation}
\mu_q^*\pi^*=\pi^*\lambda_q^*.
\end{equation}
Moreover, $g$ is a cone metric. Hence,
\begin{equation}
\lambda_q^* g= q^2 g.
\end{equation}
It follows from (3.2), (3.3), and (3.4) that
$$
\mu_q^* g^T(X,Y)=q^2 g^T(X,Y).
$$
\end{proof}
\begin{theorem}
Let $(M, g_M)$ be a projective Hessian manifold. Then $TM\times\mathbb R$ admits a structure of a Sasakian manifold.
\end{theorem}
\begin{proof}
Define a K\"ahler structure $(g^T,I)$ on $T(M\times\mathbb R^{>0})=TM\times\mathbb R\times\mathbb R^{>0}$ as above. By proposition 3.2, the connection $\nabla$ is $\mathbb R^{>0}$ invariant. Therefore, the constructed by $\nabla$ complex structure $I$ is $\mathbb R^{>0}$-invariant. Then, by Proposition 3.4 and Proposition 3.5, $TM\times \mathbb R$ admits a structure of a Sasakian manifold.
\end{proof}
\section{Affine representations and flat torsion free connections on Lie groups}
The group of affine transformations $\text{Aff} (\mathbb{R}^n)$ is given by the matrices of the form
$$
\begin{pmatrix}
A & a\\
0 & 1
\end{pmatrix}
\in \text{GL}(\mathbb R^{n+1}),
$$
where $A\in \text{GL}(\mathbb R^n)$, and $a\in \mathbb R^n$ is a column vector. The corresponding Lie algebra $\mathfrak{aff}(\mathbb R^n)$ is given by matrices of the form
$$
\begin{pmatrix}
A & a\\
0 & 0
\end{pmatrix}
\in \mathfrak{gl}(\mathbb R^{n+1}).
$$
The commutator of $\mathfrak{aff}(\mathbb R^n)$ is equal to
$$
\left[
\begin{pmatrix}
A & a\\
0 & 0
\end{pmatrix},
\begin{pmatrix}
B & b\\
0 & 0
\end{pmatrix}
\right]
=
\begin{pmatrix}
[A,B] & A(b)-B(a)\\
0 & 0
\end{pmatrix}.
$$
The algebra $\mathfrak{aff} (\mathbb R^n)$ is the semidirect product $\mathfrak{gl}(\mathbb R^n)\ltimes \mathbb R^n$, where the commutator is given by
$$
[A\ltimes a,B \ltimes b]=[A,B]\ltimes (Ab-Ba).
$$
The group $\text{Aff}(\mathbb R^n)$ is the semidirect product $\text{GL}(\mathbb R^n)\ltimes \mathbb R^n$, where the multiplication is given by
$$
(A\ltimes a) (B \ltimes b)=AB\ltimes (a+Ab).
$$
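The block-matrix description above is easy to check numerically; the following small script (ours, purely illustrative) verifies that composing two affine maps written as $(n+1)\times(n+1)$ matrices reproduces the semidirect product rule $(A\ltimes a)(B\ltimes b)=AB\ltimes(a+Ab)$.
\begin{verbatim}
# Illustration (not from the original text): Aff(R^n) as block matrices
# and the semidirect product rule (A |x a)(B |x b) = AB |x (a + A b).
import numpy as np

def aff(A, a):
    """Embed the affine map x -> A x + a as an (n+1) x (n+1) matrix."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n], M[:n, n], M[n, n] = A, a, 1.0
    return M

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
a, b = rng.normal(size=3), rng.normal(size=3)

print(np.allclose(aff(A, a) @ aff(B, b), aff(A @ B, a + A @ b)))   # True
\end{verbatim}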
\begin{defin}
An affine representation is called {\slshape étale} if there exists a point $x \in \mathbb R^n$ such that the orbit of $x$ is open and the stabilizer of $x$ is discrete.
\end{defin}
\begin{theorem}[\cite{burde} or \cite{Bu2}]
Let $G$ be a Lie group. There is a correspondence between left invariant torsion-free flat connections and étale affine representations. Moreover, if the connection is complete then the corresponding étale affine representation acts simply transitively on $\mathbb R^n$.
\end{theorem}
\begin{proof}
Choosing a basis, identify $\mathfrak{g}$ with $\mathbb R^n$. Then consider $\nabla_X$ as a linear endomorphism of $\mathbb R^n$ for any $X\in \mathfrak{g}$. The étale representation corresponding to $\nabla$ is given by
$$
\eta : \mathfrak{g} \to \mathfrak{aff} (\mathbb R^n),
$$
\begin{equation}
\eta(X) =
\begin{pmatrix}
\nabla_X & X\\
0 & 0
\end{pmatrix}
\in \mathfrak{aff}(\mathbb R^n)\subset \mathfrak{gl}(\mathbb R^{n+1})
\end{equation}
(see \cite{burde} or \cite{Bu2} for details).
\end{proof}
For any $X \in \mathfrak{g}$ we can consider $\nabla_X$ on the Lie algebra $\mathfrak{g}$ as a linear endomorphism of the vector space $\mathfrak{g}$. Thus, the linear automorphism
$$
\exp \nabla_X=\text{id} +\frac{\nabla_X}{1!}+\frac{\nabla_X\nabla_X}{2!}+\frac{\nabla_X\nabla_X\nabla_X}{3!}+\ldots
$$
is well defined.
\begin{proposition}
Let $\nabla$ be a left invariant flat torsion-free connection on a simply connected Lie group $G$ and $\tau$ the corresponding étale affine representation of $G$. If $X \in \mathfrak{g}$ then the linear part of $\tau(\exp X)$ is equal to $\exp \nabla_X$.
\end{proposition}
\begin{proof}
By (4.1), we have
$$
\tau(\exp X)=\exp
\begin{pmatrix}
\nabla_X & X\\
0 & 0
\end{pmatrix} =
\begin{pmatrix}
\exp \nabla_X & \left(\sum_{k\geq 0}\frac{\nabla_X^{k}}{(k+1)!}\right)(X)\\
0 & 1
\end{pmatrix}.
$$
Hence, the linear part of $\tau(\exp X)$ is equal to $\exp \nabla_X$.
\end{proof}
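This computation is also easy to confirm numerically; the sketch below (ours, assuming NumPy and SciPy are available) checks that the exponential of an element of $\mathfrak{aff}(\mathbb R^n)$ has linear part $\exp \nabla_X$ and bottom row $(0,\dots,0,1)$.
\begin{verbatim}
# Numerical check (not from the original text) of Proposition 4.3: the
# linear part of exp(eta(X)) equals exp(nabla_X).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))      # plays the role of nabla_X
X = rng.normal(size=4)

M = np.zeros((5, 5))             # the matrix eta(X) of (4.1)
M[:4, :4], M[:4, 4] = A, X

E = expm(M)
print(np.allclose(E[:4, :4], expm(A)))       # linear part: True
print(np.allclose(E[4, :4], 0.0), E[4, 4])   # bottom row (0,...,0,1)
\end{verbatim}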
Fix notation: let $\mathfrak{g}$ be a Lie algebra; $G$ the corresponding simply connected Lie group; $\eta$ an étale affine representation $\mathfrak{g} \to \mathfrak{aff}(\mathbb R^n)$; $\theta$ the linear part of the corresponding affine representation of $G$; $i$ an identification $\mathfrak{g} \to \mathbb R^n$.
Define an almost complex structure $I$ on $\mathfrak{g} \ltimes_\eta \mathbb R^n$ by the rule
$$
I(X\ltimes_\eta Y)= -i^{-1} Y \ltimes_\eta iX.
$$
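As a quick consistency check (ours, not part of the cited argument), for $X\in\mathfrak g$ and $Y\in\mathbb R^n$ one has
$$
I^2(X\ltimes_\eta Y)=I\left(-i^{-1}Y\ltimes_\eta iX\right)=-i^{-1}(iX)\ltimes_\eta i\left(-i^{-1}Y\right)=-\left(X\ltimes_\eta Y\right),
$$
so $I^2=-\operatorname{Id}$, as required of an almost complex structure.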
\begin{theorem}[\cite{CO} or \cite{BD}]
Let $\mathfrak{g}$, $\eta$, $I$ be as above. Then the almost complex structure $I$ is integrable.
\end{theorem}
\begin{defin}
The algebra $\mathfrak{g}\ltimes_\eta \mathbb R^n$ is called {\bfseries associated with the connection} $\nabla$ and denoted by $\mathfrak{g_\nabla}$.
\end{defin}
\begin{proposition}
Let $\mathfrak{g}, \nabla, \theta, \eta$ be as above and $G_\nabla$ the simply connected Lie group corresponding to $\mathfrak{g}_\nabla$. Then
$$G_\nabla=G\ltimes_\theta \mathbb R^n.$$
Moreover, there is an identification $TG=G \ltimes_\theta \mathbb R^n$ such that fields of the form $0\oplus I\mathfrak{g}\subset T(TG)$ are vertical, that is, they lie in $\ker d\pi$, where $\pi: TG \to G$ is a projection.
\end{proposition}
\begin{proof}
By definition, $\mathfrak{g}_\nabla$ is isomorphic to $\mathfrak{g} \ltimes_\eta \mathbb R^n$. The corresponding Lie group is equal to the semidirect product $G \ltimes \mathbb R^n$ with respect to the action for which the element $\exp X$ acts on $\mathbb R^n$ by $\exp \nabla_X$. By Proposition 4.3, this action equals $\theta$. Thus, the corresponding Lie group $G_\nabla$ is equal to $G \ltimes_\theta \mathbb R^n$.
Using the trivialization of the tangent bundle of $G$ by the flat connection $\nabla$, we identify $TG$ with $G \times \mathbb R^n$. Define the multiplication by
$$
(g_1\times X_1)(g_2\times X_2)=(g_1 g_2) \times (X_1 + \theta(g_1) (X_2)).
$$
This multiplication is equal to the multiplication on $G_\nabla =G\ltimes_\theta \mathbb R^n$. Thus, the group $TG$ with this multiplication is isomorphic to $G_\nabla$. Moreover, the left invariant fields corresponding to the subalgebra $0\oplus I \mathfrak{g}$ are actually vertical.
\end{proof}
\begin{theorem}
Let $G$ be a simply connected Lie group equipped with a left invariant affine structure, $\mathfrak{g}$ the corresponding Lie algebra, and $\theta$ the linear part of the corresponding affine action of $G$. Then there exists a left invariant integrable complex structure $I$ on the group
$$
G\ltimes_\theta \mathbb R^n\simeq TG
$$
defined in Proposition 4.6 such that $I$ swaps vertical and horizontal tangent subbundles.
\end{theorem}
\begin{proof}
The theorem follows from Theorem 4.4 and Proposition 4.6.
\end{proof}
\section{Semi-sasakian Lie algebras and groups}
\begin{defin}
A {\bfseries locally conformally symplectic} (shortly, {\bfseries lcs}) {\bfseries algebra} is a $2n$-dimensional Lie algebra $\mathfrak{g}$ endowed with a 2-form $\Omega \in \Lambda^2 \mathfrak{g}^*$ and a 1-form $\vartheta\in\mathfrak{g}^*$ such that
$$
\Omega^n\ne 0, \ \ \ d\vartheta=0, \ \ \ \text{and} \ \ \ d\Omega = \vartheta \wedge\Omega.
$$
If $(\mathfrak{g},\Omega,\vartheta)$ is an {\bfseries lcs Lie algebra} then we say that $(G,\Omega,\vartheta)$ is an lcs Lie group. Here we consider $\Omega$ and $\vartheta$ as tensors on left invariant vector fields.
\end{defin}
\begin{proposition}[\cite{ABP}]
Any lcs Lie algebra $(\mathfrak{h},\Omega, \vartheta)$ is of the form $(\mathbb R\ltimes_D \mathfrak{g}, \Omega=\omega+\vartheta \wedge \eta)$, where $\mathfrak{g}$ is a Lie algebra, $D$ a derivation of $\mathfrak{g}$, $\vartheta$ the 1-form on $\mathbb R\ltimes_D \mathfrak{g}$ defined by $\vartheta(r\ltimes_D v)=r$, and $\omega$ and $\eta$ are a 2-form and a 1-form on $\mathfrak{g}$ respectively, satisfying
\begin{equation}
d^\mathfrak{g}\omega =0, \ \ \ \omega+D^*\omega-d^\mathfrak{g}\eta = 0 \ \ \ \text{and} \ \ \ \eta\wedge \omega^n \ne 0.
\end{equation}
Conversely, if a collection $(\mathbb R\ltimes_D \mathfrak{g}, \Omega=\omega+\vartheta \wedge \eta)$ satisfies (5.1) then it defines an lcs Lie algebra.
\end{proposition}
\begin{defin}
Let $\mathfrak{g}$ be a $2n+1$-dimensional Lie algebra, $D$ a derivation of $\mathfrak{g}$,
$\omega$ a 2-form on $\mathfrak{g}$, and $\eta$ a 1-form on $\mathfrak{g}$. If
$$
d\omega =0, \ \ \ \omega+D^*\omega-d\eta = 0 \ \ \ \text{and} \ \ \ \eta\wedge \omega^n \ne 0
$$
then we say that a collection $(\mathfrak{g},D, \omega,\eta)$ is a {\bfseries semi-contact Lie algebra}.
\end{defin}
According to Definition 5.3 and Proposition 5.2, $(\mathfrak{g},D, \omega,\eta)$ is a semi-contact Lie algebra if and only if $(\mathbb R\ltimes_D \mathfrak{g}, \omega+\vartheta \wedge \eta)$ is lcs, where $\vartheta$ is as above. Thus, there is a one-to-one correspondence between lcs and semi-contact Lie algebras.
\begin{defin}
A {\bfseries semi-contact Lie group} is a triple $(G,\theta, \hat\Omega)$, where $G$ is a Lie group, $\theta$ an action of $\mathbb R^{>0}$ on $G$, and $\hat\Omega$ a symplectic form on $\mathbb R^{>0}\ltimes_\theta G$ which is invariant with respect to the left action of $G$ and satisfies $\lambda_q^* \hat\Omega =q^2 \hat\Omega$, where $\lambda_q x= (q\ltimes 1 )(x)$.
\end{defin}
\begin{proposition}
Let $G$ be a Lie group, $t$ a coordinate on $\mathbb R^{>0}$, $\theta$ an automorphism of $G$, and $\Omega$ a 2-form on $\mathbb R^{>0}\ltimes_\theta G$. Then the collection $(G,\theta,\hat\Omega)$ is a semi-contact Lie group if and only if $(\mathbb R^{>0}\ltimes_\theta G,{\Omega}=t^{-2}\hat\Omega,2t^{-1}dt)$ is an lcs Lie group.
\end{proposition}
\begin{proof}
We have
$$
d\Omega=d\left(t^{-2}\hat\Omega\right)= 2t^{-3} dt\wedge\hat\Omega +t^{-2} d\hat\Omega=\left(2t^{-1}dt\right)\wedge {\Omega}+t^{-2}d\hat\Omega.
$$
Thus, $d{\Omega}=2t^{-1}dt\wedge {\Omega}$ if and only if $d\hat\Omega=0$.
Moreover, $\hat\Omega=t^2 \Omega$ satisfies $\lambda_q^*\hat\Omega = q^2\hat\Omega$ if and only if $\lambda_q^*\Omega={\Omega}$.
\end{proof}
\begin{cor}
Let $(G,\theta, \Omega)$ be a semi-contact Lie group. Then the form $\Omega$ on $\mathbb R^{>0}\ltimes_\theta G$ can be written as $\Omega = t^2\omega + t dt\wedge\eta$, where $\omega$ and $\eta$ are $G$-invariant 2-form and 1-form.
\end{cor}
\begin{proof}
By Proposition 5.5 (with the roles of $\Omega$ and $\hat\Omega$ interchanged), $\left(\mathbb R^{>0}\ltimes_\theta G, \hat{\Omega}=t^{-2}\Omega\right)$ is lcs. Let $D$ be a derivation of $\mathfrak{g}$ such that $\exp D= \theta$. Consider an identification of $\mathbb R\ltimes_D \mathfrak{g}$ with the left invariant fields on $\mathbb R^{>0}\ltimes_\theta G$ such that
$$
1\ltimes_D 0= t \frac{\partial}{\partial t}.
$$
Let $\vartheta$ be as in Proposition 5.2. Then
$$
\vartheta = t^{-1}dt.
$$
By Proposition 5.2, we have $$\hat \Omega =\omega +t^{-1}dt \wedge\eta.$$ Thus, we have
$$
\Omega= t^2\omega +tdt\wedge\eta.
$$
\end{proof}
\begin{theorem}
Let $\mathfrak{g}$ be a Lie algebra of left invariant vector fields on a Lie group $G$, $D$ a derivation of $\mathfrak{g}$, $\theta=\exp D$ the corresponding automorphism of $G$, and $\omega$ and $\eta$ left invariant $2$-form and $1$-form respectively. Then the following conditions are equivalent:
\begin{itemize}
\item [(i)] $(\mathfrak{g},D,\omega,\eta)$ is a semicontact Lie algebra.
\item[(ii)] $\left(\mathbb R^{>0}\ltimes_\theta G, \Omega=\omega+t^{-1}\eta\wedge dt,2t^{-1}dt
\right)$ is an lcs Lie group.
\item[(iii)] $(G,\theta, \hat\Omega=t^2\omega+tdt\wedge\eta)$ is a semi-contact Lie group.
\end{itemize}
\end{theorem}
\begin{proof}
$\text{(ii)}\Leftrightarrow\text{(iii)}$: This follows from Proposition 5.5.
$\text{(i)}\Leftrightarrow\text{(ii)}$: Consider an identification of $\mathbb R\ltimes_D \mathfrak{g}$ with the left invariant fields on $\mathbb R^{>0}\ltimes_\theta G$ such that
$$
1\ltimes_D 0 = \frac{1}{2} t \frac{\partial}{\partial t}.
$$
Let $\vartheta$ be as in Proposition 5.2. Then
$$
\vartheta = 2t^{-1} dt.
$$
By Proposition 5.2, $(\mathfrak{g},D,\omega,\eta)$ is a semicontact Lie algebra if and only if $\left(\mathbb R\ltimes_D \mathfrak{g},\Omega,\vartheta=2t^{-1}dt\right)$ is an lcs Lie algebra. By definition, $\left(\mathbb R^{>0}\ltimes_\theta G,\Omega,\vartheta\right)$ is an lcs Lie group if and only if $\left(\mathbb R\ltimes_D \mathfrak{g},\Omega,\vartheta\right)$ is an lcs Lie algebra.
\end{proof}
\begin{defin}
A {\bfseries locally conformally K\"ahler} (shortly, {\bfseries lck}) Lie algebra is an lcs $(\mathfrak{h},D,\Omega,\vartheta)$ Lie algebra endowed with a complex structure $I$ such that
$$
g:=\Omega(*,I*)
$$
is a positive definite symmetric bilinear form (see \cite{HK}). The Lie group corresponding to an lck Lie algebra is called an lck Lie group.
\end{defin}
\begin{defin}
A semi-Sasakian Lie algebra is a semi-contact Lie algebra $(\mathfrak{g},D,\omega,\eta)$ with an integrable almost complex structure $I$ on $\mathbb R\ltimes_D \mathfrak{g}$ such that
$$
g:=\hat\Omega(*,I*)
$$
is a positive definite symmetric bilinear form, where $\hat\Omega=\omega+\vartheta\wedge \eta$ and $\vartheta$ is as in Proposition 5.2.
\end{defin}
\begin{defin}
A semi-Sasakian Lie group is a semi-contact Lie group $(G,\theta,\hat\Omega)$ with a left invariant complex structure $I$ on $\mathbb R^{>0}\ltimes_\theta G$ such that $(\mathbb R^{>0}\ltimes_\theta G,\hat\Omega,I)$ is a K\"ahler manifold.
Equivalently, a semi-Sasakian Lie group is a Lie group $G$ equipped with an automorphism $\theta$ and a K\"ahler structure $(g,I)$ on $\mathbb R^{>0}\ltimes_\theta G$ such that $I$ is $\mathbb R^{>0}\ltimes_\theta G$-invariant, $g$ is $G$-invariant and satisfies $\lambda_q^* g= q^2g$, where $\lambda_q$ is as above.
\end{defin}
\begin{cor}
Let $\mathfrak{g}$ be a Lie algebra of left invariant vector fields on a Lie group $G$, $D$ a derivation of $\mathfrak{g}$, $\theta=\exp D$ the corresponding automorphism of $G$, $I$ a complex structure on $\mathbb R\ltimes_D \mathfrak{g}$, and $\omega$ and $\eta$ left invariant $2$-form and $1$-form respectively. Then the following conditions are equivalent:
\begin{itemize}
\item [(i)] $(\mathfrak{g},D,\omega,\eta,I)$ is a semi-Sasakian Lie algebra.
\item[(ii)] $\left(\mathbb R^{>0}\ltimes_\theta G, \Omega=\omega+t^{-1}\eta\wedge dt,2t^{-1}dt,I \right)$ is an lck Lie group.
\item[(iii)] $(G,\theta, \hat\Omega=t^2\omega+tdt\wedge\eta,I)$ is a semi-Sasakian Lie group.
\end{itemize}
\end{cor}
\begin{rem}
Let $(G,\theta,g,I)$ be a semi-Sasakian Lie group. Let $\rho$ be the vector field corresponding to the left action of $\mathbb R^{>0}$ on $\mathbb R^{>0}\ltimes_\theta G$. Define $s:\mathbb R^{>0}\ltimes_\theta G\to \mathbb R^{>0}$ by the rule
$$
s(x)=g(\rho,\rho)|_x.
$$
Then $M=\{x\in \mathbb R^{>0}\ltimes_\theta G \mid g(\rho,\rho)|_x=1\}$ is a Sasakian manifold. Moreover, we have an isomorphism of manifolds
$$
\alpha :\mathbb R^{>0}\ltimes_\theta G\to \mathbb R^{>0}\times M, \ \ \ \alpha(p)=s(p)\times \left(\gamma_p\cap M\right),
$$
where $\gamma_p$ is the integral curve of $\rho$ containing $p$.
Then, by Proposition 3.5 we have
$$
(\mathbb R^{>0}\ltimes_\theta G,g)\simeq (\mathbb R^{>0}\times M, g=s^2g_M+ds^2)
$$
where $g_M= g|_M$. Note that $M\subset \mathbb R^{>0}\ltimes_\theta G$ is not necessarily a subgroup.
\end{rem}
\section{Projective Hessian Lie groups}
\begin{defin}
{\bfseries A Hessian Lie group} $(G,\nabla, g)$ is a Lie group $G$ endowed with a left invariant affine structure $\nabla$ and a left invariant Hessian metric $g$.
\end{defin}
\begin{theorem}
Let $(G,\nabla,g)$ be an $n$-dimensional simply connected Hessian Lie group and $\theta$ the linear part of the corresponding affine action of $G$. Then there exists a left invariant Kähler metric on $G_\nabla=G\ltimes_\theta \mathbb R^n=TG.$
\end{theorem}
\begin{proof}
The group $G\ltimes_\theta \mathbb R^n=TG$ is locally biholomorphic to $\mathfrak{g} \oplus I \mathfrak{g}$. Hence, we have the local coordinates $x_1,\ldots,x_n,y_1,\ldots,y_n$ on $G\ltimes_\theta \mathbb R^n=TG$ such that $I\frac{\partial}{\partial x_i}=\frac{\partial}{\partial y_i}$ and $x_1,\ldots,x_n$ are constant along any fiber of $\pi:TG\to G$. The Hessian metric $g$ is locally equivalent to $\text{Hess}\varphi$. Define the associated Hermitian metric $g^T$ as in Section 2. By Proposition 2.3, $g^T$ is a Hessian metric which is locally expressed by $\text{Hess}_\mathbb{C}(4\pi^*\varphi)$. The subgroup $\text{Id}\ltimes_\theta \mathbb R^n \subset G\ltimes_\theta \mathbb R^n$ acts on fibers of $TG\to G$. The function $\pi^*\varphi$ is constant along the fibers; hence $g^T=\text{Hess}_\mathbb{C}(4\pi^*\varphi)$ is invariant under the action of $\text{Id}\ltimes_\theta \mathbb R^n \subset G\ltimes_\theta \mathbb R^n$. By Proposition 2.5,
$$
g^T(X,Y)=\pi^*g(X,Y)+\pi^*g(IX,IY)+\sqrt{-1}\pi^*g(IX,Y)-\sqrt{-1}\pi^*g(X,IY).
$$
Moreover, $g$ is invariant under the action of $G\ltimes 0\subset G\ltimes_\theta \mathbb R^n$. Thus, $g^T$ is also invariant under the action of $G\ltimes 0\subset G\ltimes_\theta \mathbb R^n$. Therefore, $g^T$ is invariant under the action of the group $G\ltimes_\theta \mathbb R^n$.
\end{proof}
\begin{defin}
A Lie algebra $\mathfrak{g}$ is called {\bfseries projective} if there is an invariant affine structure $\nabla$ on $\mathfrak{g}\times \mathbb R$ such that
$$
\nabla_X E =\nabla_E X = X,
$$
where $X\in \mathfrak{g}$ and $E\in \mathbb R$. A Lie group is called {\bfseries projective} if the corresponding Lie algebra is projective.
\end{defin}
Note that if $G$ is a projective Lie group then there is an invariant affine structure on $G\times \mathbb R^{>0}$.
\begin{defin}
A {\bfseries projective Hessian Lie group} $(G,g_G)$ is a projective Lie group $G$ endowed with a left-invariant Riemannian metric $g_G$ such that $(G\times\mathbb R^{>0},\nabla,g=t^2g_G +dt^2)$ is a Hessian manifold, where $t$ is a coordinate on $\mathbb R^{>0}$ and $\nabla$ is the affine connection on $G\times \mathbb R^{>0}$ corresponding to the projective structure on $G$. The Lie algebra corresponding to a projective Hessian Lie group is called a {\bfseries projective Hessian Lie algebra}.
\end{defin}
\begin{theorem}
Let $(G, g_G)$ be an $n$-dimensional simply connected projective Hessian Lie group and $\theta$ the linear part of the corresponding affine representation of $G \times \mathbb R^{>0}$. Then there exists a structure of a semi-Sasakian Lie group on $G\ltimes_\theta \mathbb R^{n+1}$. Moreover, $G\ltimes_\theta \mathbb R^{n+1}\simeq
TG\times \mathbb R$.
\end{theorem}
\begin{lemma}
Let $F,G,H$ be groups, $\alpha$ and $\beta$ actions of $F$ and $G$ on $H$ respectively such that for any $f\in F, g\in G$, and $h\in H$ we have
$$
\alpha(f)\beta(g)h=\beta(g)\alpha(f)h.
$$
Then
$$
(F\times G)\ltimes_{\alpha \times \beta} H=F\ltimes_{\text{id} \times \alpha} (G\ltimes_\beta H).
$$
\end{lemma}
\begin{proof}
Both semidirect products are equal to $F\times G \times H$ as sets. Thus, it is enough to check that the two multiplications coincide. Write the multiplication on the group $(F\times G)\ltimes_{\alpha \times \beta} H$ as
$$
((f_1\times g_1)\ltimes_{\alpha \times \beta} h_1)((f_2\times g_2)\ltimes_{\alpha \times \beta} h_2)=(f_1 f_2\times g_1 g_2)\ltimes_{\alpha \times \beta} h_1\alpha(f_1)\beta(g_1)h_2.
$$
Write the multiplication on the group $F\ltimes_{\text{id} \times \alpha} (G\ltimes_\beta H)$ as
$$
(f_1\ltimes_{\text{id}\times \alpha} (g_1\ltimes_{\beta} h_1))(f_2\ltimes_{\text{id}\times \alpha} (g_2 \ltimes_ {\beta} h_2))=f_1 f_2\ltimes_{\text{id}\times \alpha} (g_1 \ltimes_{\beta} h_1)(g_2 \ltimes_{\beta} \alpha(f_1)h_2)=
$$
$$
=f_1 f_2\ltimes_{\text{id}\times \alpha} (g_1 g_2 \ltimes_{\beta} h_1\beta(g_1)\alpha(f_1)h_2)=f_1 f_2\ltimes_{\text{id}\times \alpha} (g_1 g_2 \ltimes_{\beta} h_1\alpha(f_1)\beta(g_1)h_2).
$$
Hence the two multiplications coincide.
\end{proof}
\begin{proof}[Proof of Theorem 6.5]
There exists an invariant torsion-free flat connection $\nabla$ on $G\times \mathbb R^{>0}$ and a Hessian metric $g$ invariant under the left action of $G$. Define a Kähler metric $g^T$ on $(G\times \mathbb R^{>0}) \ltimes_\theta \mathbb R^{n+1}$ as in the proof of Theorem 6.2. This metric is invariant under the left action of $(G\times \text{Id})\ltimes_\theta \mathbb R^{n+1}\subset (G\times \mathbb R^{>0}) \ltimes_\theta \mathbb R^{n+1}$ by the same argument as in the proof of Theorem 6.2. Moreover,
$$
g^T(X,Y)=\pi^*g(X,Y)+\pi^*g(IX,IY)+\sqrt{-1}\pi^*g(IX,Y)-\sqrt{-1}\pi^*g(X,IY),
$$
where $\pi : (G\times\mathbb R^{>0})\ltimes_\theta \mathbb R^{n+1}\simeq T(G\times \mathbb R^{>0}) \to G\times \mathbb R^{>0}$ is the projection. Let
$$
\lambda_q :G\times \mathbb R^{>0} \to G\times \mathbb R^{>0}, \ \ \lambda_q(p\times r)= p\times qr
$$
and
$$
\mu_q :\left(G\times \mathbb R^{>0}\right)\ltimes_\theta \mathbb R^{n+1} \to \left(G\times \mathbb R^{>0}\right)\ltimes_\theta \mathbb R^{n+1}, \ \ \mu_q((p\times r)\ltimes_\theta v)= (p\times qr)\ltimes_\theta \theta(q)v.
$$
Then we have the commutative diagram
$$
\begin{CD}
\left(G\times \mathbb R^{>0}\right)\ltimes_\theta \mathbb R^{n+1} @>\mu_q>> \left(G\times \mathbb R^{>0}\right)\ltimes_\theta \mathbb R^{n+1}\\
@VV\pi V @VV\pi V\\
G\times\mathbb R^{>0} @>\lambda_q>> G\times\mathbb R^{>0}
\end{CD}
$$
By the same argument as in Proposition 3.6, we get
\begin{equation}
\mu_q^*g^T=q^2g^T.
\end{equation}
We constructed the Kähler metric $g^T$ on $(G\times \mathbb R^{>0}) \ltimes_\theta \mathbb R^{n+1}$ which is invariant under the action of $G\ltimes_\theta \mathbb R^{n+1}$ and satisfies (6.1). Also, by Lemma 6.6, we have
$$
(G\times \mathbb R^{>0}) \ltimes_\theta \mathbb R^{n+1}= \mathbb R^{>0}\ltimes (G\ltimes_\theta \mathbb R^{n+1}).
$$ Thus, $G\ltimes_\theta \mathbb R^{n+1}$ is a semi-Sasakian Lie group.
\end{proof}
Examples of projective Hessian Lie groups are described in the next two sections.
\section{Regular convex cones}
\begin{defin}
A subset $V\subset \mathbb R^n$ is called {\bfseries regular} if $V$ does not contain any straight line.
\end{defin}
Let $V\subset \mathbb R^n$ be a convex regular domain. We denote the maximal subgroup of $\text{GL}(\mathbb R^n)$ preserving $V$ by $\text{Aut}(V)$. Note that if $V$ is a regular convex cone then $$\text{Aut}(V)=(\text{Aut}(V)\cap \text{SL}(\mathbb R^n))\times \mathbb R^{>0}.$$
The following theorem summarizes known results.
\begin{theorem}[\cite{vinb}, \cite{VGP}, \cite{shima}]
Let $V \subset \mathbb R^n$ be a convex homogeneous regular domain, $${U=\mathbb R^n \oplus \sqrt {-1} V \subset \mathbb{C}^n},$$ and
$$
\pi : U=\mathbb R^n \oplus \sqrt {-1} V \to \sqrt {-1} V \simeq V
$$
be a projection. Then there exist a function $\varphi$ on $V$ and a subgroup $T\subset\text{Aut}(V)$ such that the following conditions are satisfied:
\begin{itemize}
\item[(i)] $g_{can}=\text{Hess}\varphi$ is an $\text{Aut}(V)$-invariant Hessian metric on $V$.
\item[(ii)] $T$ acts on $V$ simply transitively.
\item[(iii)] The group $\text{Aut}(V)\ltimes \mathbb R^n$ acts on $U$ by holomorphic automorphisms. Moreover, the subgroup $T\ltimes \mathbb R^n \subset \text{Aut}(V)\ltimes \mathbb R^n$ acts on $U$ simply transitively.
\item[(iv)] The bilinear form $\text{Hess}_\mathbb{C} (4\pi^* \varphi)$ is an $\text{Aut}(V)\ltimes \mathbb R^n$-invariant K\"ahler metric on $U$.
\end{itemize}
\end{theorem}
See \cite{vinb} for (i) and (ii), \cite{VGP} for (iii), \cite{shima} for (iv).
\begin{theorem}
Let $V\subset\mathbb R^n$ be a convex homogeneous regular cone and $U$, $\pi$, $T$, $\varphi$ as in Theorem 7.2. Denote ${T_\text{SL}=T\cap \text{SL}(\mathbb R^n)}$. Then the following conditions are satisfied:
\begin{itemize}
\item[(i)] There exists an $\text{Aut}_\text{SL}(V)$-invariant conical Hessian metric $g_{con}=\text{Hess}\varphi$ on $V$. The dilation subgroup $\mathbb R^{>0}\subset \text{Aut}(V)$ acts on $g_{con}$ by the rule $\lambda_q^* g_{con} = q^2 g_{con}$, for any $q\in \mathbb R^{>0}$.
\item[(ii)] The bilinear form $\text{Hess}_\mathbb{C} (4\pi^* \varphi)$ is a $T_\text{SL}$-invariant K\"ahler metric. Moreover, $\lambda_q^* g = q^2 g$.
\end{itemize}
\end{theorem}
\begin{proof}
This theorem is similar to the previous one. The metric $g_{con}$ on a convex regular cone $V$ is defined as $\text{Hess}(\varphi)$, where $\varphi$ is the characteristic function of the cone (see \cite{vinb}). Item (ii) is analogous to item (iv) of Theorem 7.2.
\end{proof}
By item (ii) of Theorem 7.2, $T$ is a Hessian Lie group. By item (iv) of Theorem 7.2, $T\ltimes \mathbb R^n$ is a K\"ahler Lie group. On the other hand, we can get the same result using Theorem 6.2. Thus, Theorem 6.2 generalizes item (iv) of Theorem 7.2.
Analogously, by Theorem 7.3, $T_\text{SL}$ is a projective Hessian Lie group and $T_\text{SL}\ltimes \mathbb R^n$ is a Sasakian Lie group. We can get the same result using Theorem 6.5. Thus, Theorem 6.5 generalizes known constructions for homogeneous regular cones.
\begin{defin}
The group $T$ from Theorem 7.2 is called a {\bfseries Lie group associated with $V$}. The corresponding Lie algebra $\mathfrak{t}$ is called the {\bfseries Lie algebra associated with $V$}.
\end{defin}
\begin{proposition}[\cite{vinb}]
Let $T_\text{SL}$ be a Lie group from Theorem 7.3. Then there exists a convex regular domain $W$ such that $T_\text{SL}$ is the Lie group associated with $W$. Conversely, if $S$ is a Lie group associated with a regular convex domain $W$ then there exists a regular convex cone $V$ such that $S=T_\text{SL}$, where $T_\text{SL}$ is as in Theorem 7.3.
\end{proposition}
\begin{cor}
Let $T$ be a Lie group associated with a homogeneous regular domain. Then $T$ is both a Hessian Lie group and a projective Hessian Lie group.
\end{cor}
\begin{example}
Let $V$ be the vector space of all real symmetric matrices of order $n$ and $\Omega$ the set of all positive definite symmetric matrices in $V$. Then $\Omega$ is a regular convex cone and the group of upper triangular matrices $\text{T}(n)$ acts simply transitively on $\Omega$ by $s(x)= s x s^T$, where $x \in \Omega$ and $s \in \text{T}(n)$. The characteristic function is equal to
$$
\varphi(x)=(\det x)^{-\frac{n+1}{2}}\varphi (e),
$$
where $e$ is the unit matrix (see \cite{shima}).
\end{example}
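As a quick sanity check of this formula in the simplest case $n=1$: the cone of positive definite $1\times 1$ matrices is $\mathbb R^{>0}$, whose characteristic function (in the sense of \cite{vinb}) is
$$
\varphi(x)=\int_0^{\infty} e^{-xy}\,dy=\frac{1}{x}=(\det x)^{-\frac{1+1}{2}}\varphi(e),
$$
since $\varphi(e)=\varphi(1)=1$.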
\section{Projective Hessian structure on $\text{SO}(2)$ and $\text{SU}(2)$}
\begin{example}
Consider the group $\mathbb R$ as the universal covering of $\text{U}(1)=\text{SO}(2)$. The identification $\text{SO}(2)\times{\mathbb R^{>0}} \simeq \mathbb R^2 \backslash \{0\}$ defines a projective Hessian structure on $\text{SO}(2)$ and on the universal covering $\mathbb R$. The semi-Sasakian group $\mathbb R\ltimes \mathbb R^2$ corresponding to $\mathbb R$ is the universal covering of the group of Euclidean motions $\text{E}(2)=\text{SO}(2)\ltimes \mathbb R^2$. Hence, the Lie algebra of Euclidean motions $\mathfrak{e}(2)$ is semi-Sasakian. Let $t, x, y, r$ be standard coordinates on the group $\mathbb R\ltimes\mathbb R^2\times \mathbb R^{>0}$. The Kähler metric is given by $\text{Hess}_\mathbb{C} (r^2)$. The Kähler structure on $\mathbb R\ltimes\mathbb R^2\times \mathbb R^{>0}$ induces the Kähler structure on $\text{SO}(2)\ltimes\mathbb R^2\times \mathbb R^{>0}$. Thus, the group of Euclidean motions $\text{E}(2)$ is semi-Sasakian, too.
\end{example}
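To see the projective Hessian structure in this example explicitly (a short computation): in the standard coordinates $x,y$ on $\mathbb R^2\backslash\{0\}$ with $r^2=x^2+y^2$ and polar angle $\phi$, the flat connection gives
$$
\text{Hess}\Big(\tfrac{r^2}{2}\Big)=dx^2+dy^2=dr^2+r^2 d\phi^2,
$$
which is exactly of the form $t^2 g_G+dt^2$ required in Definition 6.4, with $t=r$ and $g_G=d\phi^2$ the standard left invariant metric on $\text{SO}(2)$.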
All $3$-dimensional Sasakian Lie algebras are classified.
\begin{proposition}[\cite{5sasaki}]
Any 3-dimensional Sasakian Lie algebra is isomorphic to one of the
following: $\mathfrak{su}(2), \mathfrak{sl}(2,\mathbb R),\mathfrak{aff}(\mathbb R)\times \mathbb R$, and the Heisenberg algebra $\mathfrak{h}_3$.
\end{proposition}
The algebras $\mathfrak{su}(2)$ and $\mathfrak{sl}(2,\mathbb R)$ are semisimple, the Heisenberg algebra $\mathfrak{h}_3$ is nilpotent, and the algebra $\mathfrak{aff}(\mathbb R)\times \mathbb R$ has a nontrivial center. The algebra $\mathfrak{e}(2)$ is solvable but not nilpotent and has trivial center, hence it is not isomorphic to any of the algebras in this list. Therefore, the algebra $\mathfrak{e}(2)$ is semi-Sasakian but not Sasakian.
\begin{example}
There are identifications $\text{SU}(2)\simeq S^3$ and $\text{SU}(2)\times \mathbb R^{>0}\simeq\mathbb R^4 \backslash \{0\}$. The group structure on $S^3$ is given by the restriction to $S^3$ of the standard
$\text{SU}(2)$-action on $\mathbb{C}^2\simeq\mathbb R^4$. The corresponding semi-Sasakian Lie group is equal to $\text{SU}(2)\ltimes \mathbb{C}^2$.
\end{example}
Notice that the group $\text{SU}(2)$ is not Hessian simply because the sphere $S^3$ does not admit an affine structure. However, there exists an invariant affine structure on $\text{SU}(2)\times \mathbb R^{>0} \simeq \mathbb R^4 \backslash \{0\}$. Moreover, if the group $G$ is one of the previous examples of projective Hessian Lie groups (a clan or $\text{U}(1)$) then the group $G\times \mathbb R^{>0}$ is Hessian. Below we prove that the group $\text{SU}(2)\times \mathbb R^{>0}$ is not Hessian.
\begin{proposition}
The manifold $S^3\times \mathbb R^{>0}$ does not admit an $\mathbb R^{>0}$-invariant Hessian structure.
\end{proposition}
\begin{proof}
If $S^3\times \mathbb R^{>0}$ admits an $\mathbb R^{>0}$-invariant Hessian structure then the Hopf manifold
$$S^3\times \mathbb R^{>0} /_{(x\times t) \sim (x\times 2t)} \simeq S^3\times S^1
$$
admits a Hessian structure. In \cite{shima}, Shima proved that if $M$ is a compact Hessian manifold then the universal covering $\tilde{M}$ is a convex domain in $\mathbb R^n$. The universal covering of the Hopf manifold is diffeomorphic to $S^3\times \mathbb R^{>0}$, which cannot be diffeomorphic to any domain in $\mathbb R^4$. Therefore, $S^3\times \mathbb R^{>0}$ does not admit an $\mathbb R^{>0}$-invariant Hessian structure.
\end{proof}
\begin{cor}
The group $\text{SU}(2)\times \mathbb R^{>0}$ is not Hessian.
\end{cor}
\paragraph*{Acknowledgements.} I would like to thank M. Verbitsky for fruitful discussions, and D.V. Alekseevsky for his useful comments and help with the preparation of the paper.
\section{Introduction}
Quantum mechanics is a probabilistic theory that turns out to be particularly suitable to describe different kinds of stochastic processes that, in principle, can also involve non-microscopic domains.
For example, in recent years the quantum formalism has been exploited in non-standard contexts such as game theory, economic processes, cognitive sciences and so on \cite{1,2,5,14,hla,qicm}.
From this perspective, another non-standard application of quantum theory consists in applying it to classification problems. The basic idea is to represent classical patterns in terms of quantum objects, with the aim of boosting the computational efficiency of classification algorithms. In the last few years many efforts have been made to apply the quantum formalism to signal processing \cite{7} and pattern recognition \cite{8,9}. Exhaustive surveys concerning the applications of quantum computing in computational intelligence and machine learning are provided in \cite{10,11}. Even if these approaches suggest possible computational advantages of
this sort \cite{12,13}, what we have proposed in \cite{sergioli2016,S,SaSe,Entropy} is based on a different approach that consists in using the quantum formalism in order to reach remarkable benefits in the classical context. What we have provided is a model that allows one to process any kind of classical dataset in a supervised system by $i)$ translating each element of the dataset (pattern) into a density operator (that is the usual mathematical tool to formally describe a quantum state), which will be called a \emph{density pattern}; $ii)$ defining, for any class of density patterns, a \emph{quantum centroid}, that is an object free of any counterpart in the initial classical dataset; $iii)$ using the standard minimum distance procedure to classify an unlabeled density pattern; $iv)$ decoding the result of the classification process in the classical pattern space.
In this way, by exploiting the expressive power of the quantum formalism, it is possible to reach remarkable advantages in terms of classification accuracy.
In this regard, we have shown a comparison between the standard \emph{Nearest Mean Classifier} (NMC) and its quantum version (named the \emph{Quantum Nearest Mean Classifier} (QNMC)), exhibiting meaningful advantages of our proposed model on different datasets. In particular, the model has been tested on artificial and real datasets commonly downloadable from standard machine learning repositories.
In the present work we propose a particular application of the model to a real dataset (IPF dataset) that is obtained from a group of 126 patients. IPF is a disease characterized by the development of fibrotic areas within the parenchyma of lungs causing a progressive reduction of the respiratory function. The prognosis of IPF patients is very poor with a median survival of 3-5 years from diagnosis; the dataset includes baseline variables with an established relation to patient's survival. In this paper we refer to the IPF dataset to compare the performances of two different variants of the QNMC not only with the NMC but also with other well known standard classifiers (the \emph{Linear Discriminant Analysis} (LDA) and the \emph{Quadratic Discriminant Analysis} (QDA)).
The paper is organized as follows: in the first section we briefly describe the formal structure of both the Nearest Mean Classifier and its quantum-inspired version (the QNMC). In the second section we briefly summarize some interesting results previously obtained in \cite{sergioli2016,S,SaSe,Entropy} by comparing the NMC and the QNMC on different datasets, and showing the advantages of the QNMC in terms of pattern classification accuracy. In the third section we first introduce an alternative encoding from the real vector (pattern) space to the density operator space that turns out to be particularly beneficial for the rest of the paper. Secondly, we introduce the IPF dataset and we provide a detailed description of the dataset features. Finally, we show and discuss the promising results arising from the application of two different QNMC variants to the IPF dataset, showing an improvement of the accuracy with respect to some standard classifiers, \emph{i.e.} the NMC, the LDA and the QDA. The last section of the paper is devoted to proposing possible developments and different strategies we will take into account in future works in order to provide a further improvement in terms of classification accuracy in biomedical contexts.
\section{Classical and quantum version of the nearest mean classifier}
\label{sec:NMC}
In this section we briefly describe the quantum version of the standard nearest mean classification, which is an instance of supervised learning, \emph{i.e.} learning from a training dataset of correctly labeled objects. In the classical domain each object is characterized by its features; hence, a $d$-feature object is naturally represented by a $d$-dimensional real vector $\vec x = \left[x^1, \ldots, x^d \right]\in \Real^d.$\footnote{For the sake of clarity regarding the indexes, we agree to use superscript indices to indicate the different components of a vector and subscripts to indicate different vectors.}
Formally, a pattern can be represented as a pair $(\vec x_i, \lambda_i)$, where $\vec x_i$ is the $d$-dimensional vector associated to the object and $\lambda_i$ is the label that refers to the class which the object belongs to. We can simply consider a class as a set of objects and, for our aim, we confine ourselves to the special (but very common) case where each object belongs to one and only one class of objects. Let $\Lambda=\{\lambda_1, \ldots, \lambda_N\}$ be the set of labels corresponding to the respective classes. The goal of the classification process is to design a \emph{classifier} that attributes (in the most accurate way) a label (class) to any unlabeled object. In supervised learning, such a classifier is obtained by getting information from the \emph{training set} $\TrSet$, \emph{i.e.} a set of correctly labeled objects. Formally:
$$\TrSet = \left\{ (\vec x_1, \lambda_1), \ldots , (\vec x_M, \lambda_M) \right\},$$
where $\vec x_i\in\mathbb R^d$ and $\lambda_i$ is the label associated to its class.
Generally, we will deal with $n$ possible different classes (\emph{i.e.} $N=n$) and,
given a training dataset $\TrSet = \left\{ (\vec x_1, \lambda_1), \ldots , (\vec x_M, \lambda_M) \right\}$, we can define the $j$-th class $\TrSet^{j}$, which represents the set of the training patterns belonging to the class labeled by $\lambda_j$, in the following way:
$$\TrSet^{j}=\{(\vec x_i, \lambda_i)\in\TrSet : \lambda_i=\lambda_j\}.$$
Finally, by $M_j$ we will denote the number of elements of $\TrSet^j$.
One of the simplest classification methods in pattern recognition is the so called \textit{Nearest Mean Classifier}. The NMC algorithm consists in the following steps:
\begin{enumerate}
\item Training: one has to compute the \textit{centroid} for each class, that is:
%
\begin{equation}\label{eq:ccentroid}
\vec \mu_j = \frac{1}{M_j} \sum_{i:\ \lambda_i=\lambda_j} \vec x_i
\end{equation}
%
\item Classification: the associated classifier is a function $Cl:\mathbb R^d\to\Lambda$ such that $\forall \vec x\in \mathbb R^d$:
$$Cl(\vec x)=
\lambda_j \,\,\,\,\text{if} \,\,\,\,d(\vec x,\vec \mu_j)\leq d(\vec x,\vec \mu_k) \,\, \forall k\neq j.$$
where $d(\vec x, \vec y)=|\vec x - \vec y|$ is the Euclidean distance.
\end{enumerate}
Intuitively, the classifier associates to a $d$-feature object $\vec x$ the label of the closest centroid.
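For concreteness, a minimal NumPy sketch of the NMC training and classification steps described above reads as follows (the function names are illustrative and not part of any standard library):
\begin{verbatim}
import numpy as np

def nmc_fit(X, y):
    # Training: compute the centroid of each class from the
    # training patterns X (shape M x d) and labels y (shape M).
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nmc_predict(X, classes, centroids):
    # Classification: assign to each pattern the label of the
    # closest centroid with respect to the Euclidean distance.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
\end{verbatim}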
In order to evaluate the NMC performance, one introduces another set of patterns (called the \textit{test set}) that does not belong to the training set \cite{DuHa}. Formally, the test set is a set $\TsSet = \left\{ \{\vec y_{1}, \beta_{1}\}, \ldots , \{\vec y_{M'}, \beta_{M'}\} \right\}$, such that $\TrSet\cap\TsSet=\emptyset$, where $M'$ is the number of test patterns and $M'_j$ the number of test patterns belonging to the $j$-th class.
Then, by applying the NMC to the test set, it is possible to evaluate the classifier performance by considering the accuracy (ACC) of the classification process as the ratio between the number of all the correctly classified test patterns and the cardinality of the test set.\footnote{We recall that the classification accuracy is defined as ACC $= 1 - $ERR, where ERR is the classification error. Consequently, it is possible to study the performance of a given classification method by means of accuracy or error likewise.}
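Under the assumption that arrays \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, \texttt{y\_test} hold such a split (these names are ours), the accuracy can be estimated with the sketch above as:
\begin{verbatim}
# Accuracy: fraction of correctly classified test patterns.
classes, centroids = nmc_fit(X_train, y_train)
acc = np.mean(nmc_predict(X_test, classes, centroids) == y_test)
\end{verbatim}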
Let us notice that the values of such quantities are obviously related to the training/test datasets; as a natural consequence, also the classifier performance is strictly dataset-dependent.
\bigskip
In order to provide a quantum counterpart of the NMC (which we call the \emph{Quantum Nearest Mean Classifier} (QNMC)) we need to fulfill the following steps:
\begin{enumerate}
\item for each pattern, one has to provide a suitable encoding into a quantum object (\emph{i.e.} a density operator) that we call \textit{density pattern};
\item for each class of density patterns, one has to define the quantum counterpart of the classical centroid, that we call the \textit{quantum centroid};
\item one has to provide a suitable definition of a \textit{quantum distance} between density patterns, that plays a role similar to the Euclidean distance in the NMC.
\end{enumerate}
Even though there are infinitely many ways to encode a real vector into a density pattern (and the convenience of using one instead of others could be strictly dataset-dependent), in \cite{S} we have proposed the following encoding, that we call the \emph{stereographic encoding} (SE).
First, let us recall the notion of \emph{stereographic projection} as follows.
Let $\vec {\tilde x} = \left[\tilde x^1, \ldots, \tilde x^{d+1} \right]$ be an arbitrary $(d+1)$-feature object of $\mathbb R^{d+1}$. The stereographic projection $SP$ is a map $SP:\mathbb R^{d+1}\to\mathbb R^d$ such that:
$$\vec x=SP(\tilde{x}^1,\tilde{x}^2,...,\tilde{x}^{d+1})=\Big (\frac{\tilde{x}^1}{1-\tilde{x}^{d+1}},...,\frac{\tilde{x}^d}{1-\tilde{x}^{d+1}}\Big ).$$
Analogously, let $\vec x = \left[x^1, \ldots, x^d \right]$ be an arbitrary $d$-feature object of $\mathbb R^d$;
the inverse of the stereographic projection $SP^{-1}$ is a map $SP^{-1}:\mathbb R^{d}\to\mathbb R^{d+1}$ such that:
\begin{equation}\label{eq:strproj}
\vec{ \tilde{x}} = SP^{-1}(\vec x) = \frac{1}{\sum_{i=1}^d (x^i)^2+1} \left[2x^1, \ldots, 2x^d, \sum_{i=1}^d (x^i)^2 - 1 \right],
\end{equation}
where $\frac{1}{\sum_{i=1}^d (x^i)^2+1}$ is a normalization factor.
\begin{definition}[Density pattern by SE]
\label{def:dp}
The density pattern $\rho_{\vec x}$ associated to the $d$-feature object $\vec x\in\mathbb R^d$ is defined as:
\begin{equation}
\label{eq:dp}
\rho_{\vec x} \doteq \vec{ \tilde{x}}^t \vec{ \tilde{x}}.
\end{equation}
\end{definition}
Clearly, every density pattern is a quantum pure state, \emph{i.e.} $\rho_{\vec x}^2 = \rho_{\vec x}$.
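Both the inverse stereographic projection and this encoding are straightforward to implement; a minimal NumPy sketch (our naming, building on the previous snippet) is:
\begin{verbatim}
def inverse_stereographic(x):
    # Inverse stereographic projection: map x in R^d to a
    # unit vector in R^(d+1).
    n2 = np.dot(x, x)
    return np.concatenate([2.0 * x, [n2 - 1.0]]) / (n2 + 1.0)

def density_pattern_se(x):
    # Stereographic-encoding density pattern: a rank-one
    # projector (pure state) built from the unit vector above.
    v = inverse_stereographic(x)
    return np.outer(v, v)
\end{verbatim}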
Therefore, the SE allows one to encode any real vector $\vec x\in\mathbb R^d$ into a density operator $\rho_{\vec x}$. On this basis, we define the quantum training dataset
$$\TrQSet = \left\{ \{\rho_{\vec x_1}, \lambda_1\}, \ldots , \{\rho_{\vec x_M}, \lambda_M\} \right\}$$
as the set of all the density patterns obtained by encoding all the elements of $\TrSet$.
This fact allows us to introduce the quantum version of the standard centroid given in Eq.~\eqref{eq:ccentroid}, as follows.
\begin{definition}[Quantum centroids]
\label{def:qcentroid}
Let $\TrQSet = \left\{ \{\rho_{\vec x_1}, \lambda_1\}, \ldots , \{\rho_{\vec x_M}, \lambda_M\} \right\}$ be a quantum training dataset of density patterns.
The quantum centroid of the $j$-th class is given by:
\begin{equation}
\label{eq:qcentroid}
\rho_j = \frac{1}{M_j} \sum_{i:\ \lambda_i=\lambda_j} \rho_{\vec x_i}.
\end{equation}
\end{definition}
Notice that the quantum centroids are now mixed states and they are not generally obtained by the encoding of the respective classical centroids $\vec \mu_j$.
Accordingly, the definition of the quantum centroid leads to a new object that does not have any classical counterpart.
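In code, the quantum centroid is simply the arithmetic mean of the density patterns of a class (a sketch along the lines of the previous ones):
\begin{verbatim}
def quantum_centroid(density_patterns):
    # Average of the density patterns of one class; in general
    # the result is a mixed state (not a rank-one projector).
    return np.mean(density_patterns, axis=0)
\end{verbatim}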
As a suitable definition of distance between density patterns, we recall the well known distance between quantum states that is commonly used in quantum computation (see, \emph{e.g.}~\cite{nielsenbook}).
\begin{definition}[Trace distance]
\label{def:trdist}
Let $\rho$ and $\sigma$ be two quantum density operators belonging to the same Hilbert space. The trace distance ($\dtr$) between $\rho$ and $\sigma$ is given by:
\begin{equation}\label{eq:trdist}
\dtr(\rho,\sigma) = \frac{1}{2} \Tr|\rho - \sigma|,
\end{equation}
where $|A| = \sqrt{A^\dag A}$.
\end{definition}
Notice that the trace distance is a metric; hence, it satisfies: \emph{i)} $\dtr(\rho,\sigma) \geq 0$ with equality iff $\rho=\sigma$ (positivity), \emph{ii)} $\dtr(\rho,\sigma) = \dtr(\sigma,\rho)$ (symmetry) and \emph{iii)} $\dtr(\rho,\omega)+\dtr(\omega,\sigma) \geq \dtr(\rho,\sigma)$ (triangle inequality).
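Since $\rho-\sigma$ is Hermitian, the trace distance can be computed from its eigenvalues; a minimal NumPy sketch (our naming) is:
\begin{verbatim}
def trace_distance(rho, sigma):
    # d_tr(rho, sigma) = (1/2) Tr |rho - sigma|: half the sum of
    # the absolute eigenvalues of the Hermitian matrix rho - sigma.
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigvals))
\end{verbatim}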
We have introduced all the necessary ingredients to describe in detail the QNMC process, which, similarly to the classical case, consists in the following steps:
\begin{itemize}
\item obtaining the quantum training dataset $\TrQSet$ by applying the encoding given in Definition~\ref{def:dp} to each pattern of the classical training set $\TrSet$;
\item calculating the quantum centroids $\rho_j$ according to Definition~\ref{def:qcentroid};
\item classifying an arbitrary pattern $\vec x$ according to the following minimization problem:
%
the quantum classifier is a function $QCl:\mathbb R^d\to\Lambda$ such that $\forall \vec x\in \mathbb R^d$:
$$QCl(\vec x)=
\lambda_j \,\,\,\,\text{if} \,\,\dtr(\rho_{\vec x},\rho_{j})\leq \dtr(\rho_{\vec x},\rho_{i}) \,\,\,\, \forall i\neq j.$$
%
\end{itemize}
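Putting the previous sketches together, a minimal and purely illustrative implementation of the whole QNMC procedure could look as follows:
\begin{verbatim}
def qnmc_fit(X, y, encode=density_pattern_se):
    # Encode the training patterns and compute one quantum
    # centroid per class.
    classes = np.unique(y)
    q_centroids = [quantum_centroid([encode(x) for x in X[y == c]])
                   for c in classes]
    return classes, q_centroids

def qnmc_predict(X, classes, q_centroids, encode=density_pattern_se):
    # Assign to each pattern the label of the quantum centroid
    # closest in trace distance.
    labels = []
    for x in X:
        rho = encode(x)
        d = [trace_distance(rho, rc) for rc in q_centroids]
        labels.append(classes[int(np.argmin(d))])
    return np.array(labels)
\end{verbatim}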
\subsection{Experimental results}
\label{sec:exp}
In what follows we summarize some preliminary results obtained by comparing the performances of the NMC and the QNMC on different (artificial and real) datasets. In particular, we consider
three artificial (two-feature) datasets (Moon, Banana, and Gaussian) and four real (many-feature) datasets (Diabetes, Cancer, Liver and Ionosphere) extracted from the UC Irvine Machine Learning Repository.
In our experiments, we follow the standard methodology of randomly splitting each dataset into training and test datasets with $80\%$ and $20\%$ of the total patterns, respectively. Moreover, in order to obtain statistically significant results, we carry out $100$ experiments for each dataset, where the splitting is randomly taken each time.
We summarize our results in Table \ref{Tab}.
\begin{table}[h]
\tiny{
\begin{center}
\begin{tabular}{cccc}
\midrule
Dataset & $\#\TrSet$/$\#\TsSet$& ACC (NMC) & ACC (QNMC) \\
\midrule
\vspace{0.1cm}
Banana & $4240/1060$($2$) & $55.0\pm 1.8$ & $\mathbf{71.0\pm 1.2}$
\\
\vspace{0.1cm}
Gaussian & $160/40$($2$) & $55.5\pm 7.7$ & $\mathbf{76.2\pm 5.6}$
\\
\vspace{0.1cm}
Moon & $160/40$($2$) & $77.9\pm 5.7$ & $\mathbf{88.9\pm 4.4}$
\\
\vspace{0.1cm}
Diabetes & $614/154$($8$) & $63.4\pm 3.9$ & $\mathbf{68.7\pm 3.2}$
\\
\vspace{0.1cm}
Cancer & $546/137$ ($10$) & $\mathbf{96.4\pm 1.4}$ & $93.7\pm 1.9$
\\
\vspace{0.1cm}
Liver& $463/116$($10$) & $53.8\pm 4.2$ & $\mathbf{59.6\pm 4.2}$
\\
\vspace{0.1cm}
Ionosphere & $280/71$($34$) & $72.9\pm 4.5$ & $\mathbf{83.7\pm 4.3}$
\\
\bottomrule
\end{tabular}
\end{center}
\caption{Average results for the NMC and QNMC classifiers (in $\%$) and their standard deviations. $\#\TrSet=$ cardinality of the training dataset; $\#\TsSet=$ cardinality of the test set; $(d)=$ number of features of each element of the respective dataset.}
\label{Tab}
}
\end{table}
Let us notice that the ACC of the QNMC is significantly greater than the ACC of the NMC for all the datasets, except for the Cancer dataset.
In particular, this improvement is even greater for the $2$-feature datasets.
Further, let us notice that a key difference between NMC and QNMC regards the invariance under rescaling. Let us suppose that each pattern of the training and test sets is multiplied by the same \textit{rescaling factor} $t$, \emph{i.e.} $\vec{ x}_m \mapsto t \vec{ x}_m$ and $\vec{ y}_{m'} \mapsto t \vec{ y}_{m'}$ for any $m$ and $m'$. Then, the (classical) centroids change according to $\vec{\mu}_j \mapsto t \vec{\mu}_j$ and the classification problem of each pattern of the rescaled test set becomes
\begin{equation}\label{eq:rescaledcclassify}
\argmin_i d(t \vec y_{m'}, t \vec \mu_i) = \argmin_i t\, d(\vec y_{m'},\vec \mu_i) = \argmin_i d(\vec y_{m'},\vec \mu_i),
\end{equation}
which has the same solution as the unrescaled problem ($t=1$).
On the contrary, the QNMC turns out to be not invariant under rescaling.
Far from being a disadvantage, this allows us to introduce a ``free'' parameter, \emph{i.e.} the \emph{rescaling factor}, that could be useful to get a further improvement of the classification performance, as shown in Fig.~\ref{rescaling} for the $2$-feature datasets. The pictures in Fig.~\ref{rescaling} represent the experimental results obtained by repeating the same experiments described above while rescaling (within a small range) the coordinates of the initial dataset. Figure~\ref{rescaling} shows that for each dataset there is an interval ($I_t$) of the rescaling factor $t$ such that for any $t\in I_t$ the average values of the accuracy are slightly greater than the accuracy values of the respective unrescaled cases.
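Operationally, the search for such an interval can be carried out by a simple sweep over the rescaling factor; a hedged sketch (again assuming the arrays \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, \texttt{y\_test} and the functions of the previous snippets) is:
\begin{verbatim}
# Sweep over a small range of rescaling factors t and record the
# QNMC accuracy for each value of t on a fixed train/test split.
accuracies = []
for t in np.linspace(0.5, 2.0, 16):
    classes, q_centroids = qnmc_fit(t * X_train, y_train)
    acc = np.mean(qnmc_predict(t * X_test, classes, q_centroids) == y_test)
    accuracies.append((t, acc))
\end{verbatim}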
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Resc.png}
\\
\caption{Accuracy vs rescaling factor for the Banana (left), Gaussian (middle) and Moon (right) datasets. For a fixed $t$, each point corresponds to the average accuracy over $20$ experiments and its standard deviation is represented by the corresponding error bar. In each plot, we fix the $y$-axis to the case $t=1$ (no rescaling). The dotted line represents the average accuracy reached by the NMC.
}
\label{rescaling}
\end{figure}
The method that we have introduced in this section allows us to get a relevant improvement over the standard NMC when we have an \emph{a priori} knowledge about the distribution of the dataset we have to deal with. Indeed, if we need to classify an unknown pattern, looking at the distribution of the training dataset, we can guess \emph{a priori}: \emph{i)} whether for that kind of distribution the QNMC performs better than the NMC and \emph{ii)} which rescaling has to be applied to the original dataset in order to get a further improvement of the accuracy.
\section{Applying the QNMC to the IPF dataset}
As mentioned at the beginning of the previous section, there generally exist different ways to encode a $d$-dimensional feature vector into a density operator \cite{8}. Indeed, finding the ``best'' encoding of real vectors into quantum states (\emph{i.e.} outperforming all the possible encodings for any dataset) is still an open and intricate problem. This fact is not so surprising because, on the other hand, in pattern recognition it is not possible to establish an absolute and \emph{a priori} superiority of a given classification method with respect to the other ones, and the reason is that each dataset has unique and specific characteristics (according to the well known \emph{No Free Lunch} Theorem \cite{DuHa}).
Here we introduce a new encoding, which we call the \emph{informative encoding} (IE), already used in two previous works \cite{SaSe,Entropy}. According to recent debates on quantum machine learning \cite{8}, in order to avoid loss of information it is crucial that, in the transition from the classical to the quantum space, the quantum state keeps information on the original vector norm.\\
\noindent Let $\vec x = (x^1, \ldots, x^d)\in \mathbb{R}^d$ be a $d$-dimensional vector.
\begin{enumerate}
\item We map the vector $\vec x \in \mathbb{R}^d$ into a vector $\vec x' \in \mathbb{R}^{d+1}$, whose first $d$ features are the components of the vector $\vec x$ and the $(d+1)$-th feature is the norm of $\vec x$. Formally:
\begin{equation}
\vec x = (x^1, \ldots, x^d)\ \mapsto\ \vec x' = (x^1, \ldots, x^d, |\vec x|).
\end{equation}
\item We obtain the vector $\vec x''$ by dividing the first $d$ components of the vector $\vec x'$ by $|\vec x|$:
\begin{equation}
\label{x''}
\vec x' \ \mapsto\ \vec x'' = \Big (\frac{x^1}{|\vec x|}, \ldots, \frac{x^d}{|\vec x|}, |\vec x|\Big ).
\end{equation}
\item We compute the norm of the vector $\vec x''$, \emph{i.e.} $|\vec x''| = \sqrt{|\vec x|^2 + 1}$ and we map the vector $\vec x''$ into the normalized vector $\vec x'''$ as follows:
\begin{equation}\label{x'''}
\vec x'' \ \mapsto\ \vec x''' = \frac{\vec x''}{|\vec x''|}= \Big (\frac{x^1}{|\vec x|\sqrt{|\vec x|^2 + 1}}, \ldots, \frac{x^d}{|\vec x|\sqrt{|\vec x|^2 + 1}}, \frac{|\vec x|}{\sqrt{|\vec x|^2 + 1}}\Big ).
\end{equation}
\end{enumerate}
Now, similarly to Definition \ref{def:dp}, we end up with the following definition.
\begin{definition}[Density pattern by IE]
The density pattern $\rho_{\vec x}$ associated to the $d$-feature object $\vec x\in\mathbb R^d$ is defined as:
\begin{equation}
\label{eq:dpIE}
\rho_{\vec x} \doteq \vec x'''\cdot(\vec x''')^\dagger,
\end{equation}
where the vector $\vec x'''$ is given by Eq. (\ref{x'''}).
\end{definition}
Hence, this encoding maps a real $d$-dimensional vector $\vec x$ into a $(d+1)$-dimensional pure state $\rho_{\vec x}$. In this way, we obtain an encoding that takes into account the information about the initial real vector norm and, at the same time, allows one to easily encode arbitrary real $d$-dimensional vectors.\\
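A minimal NumPy sketch of this encoding (our naming; it assumes $\vec x\ne 0$) is:
\begin{verbatim}
def density_pattern_ie(x):
    # Informative encoding: normalize x, append its norm as an
    # extra feature, renormalize, and build the pure state.
    norm = np.linalg.norm(x)            # assumes x != 0
    v = np.concatenate([x / norm, [norm]])
    v = v / np.sqrt(norm**2 + 1.0)
    return np.outer(v, v)
\end{verbatim}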
\noindent As we have seen in the previous section, the QNMC is the quantum replacement of the standard NMC, which is one of the most basic standard classifiers. Other well known standard models that will be taken into account in the following are the so called \emph{Linear Discriminant Analysis} (LDA) and \emph{Quadratic Discriminant Analysis} (QDA) classifiers \cite{DuHa}. In particular, they belong to the set of minimum distance classifiers and the goal consists in classifying patterns by using a distance measure which involves not only the centroids of the classes but also the class distribution (by means of the \emph{covariance matrix} \cite{DuHa}). The difference between them is the following: \emph{i)} in the LDA case the distance measure depends on the average covariance matrix (over all the covariance matrices related to each class) and the discriminant function (\emph{i.e.} the surface which separates classes in the optimal way) is linear; \emph{ii)} in the QDA case, the distance measure depends on all the covariance matrices simultaneously and the discriminant function is quadratic.
In what follows, we compare different variants of the QNMC with the mentioned classifiers (NMC, LDA, QDA) by referring to a very special real dataset obtained from a biomedical context.\footnote{The dataset is downloadable from
\emph{http://people.unica.it/giuseppesergioli/files/2018/02/IPFDataset.xlsx}}
\subsection{The IPF dataset}
In detail, the idiopathic pulmonary fibrosis (IPF) dataset includes a group of 126 consecutive patients (the patterns) retrospectively extracted from databases of the Regional Referral Centre for Interstitial and Rare lung diseases of Catania. These patients are divided into three different classes (with different cardinality), where each class corresponds to a different degree of survival (named GAP stage). All patients were required to have received a Multidisciplinary team diagnosis of IPF according to 2011 American Thoracic Society (ATS)/European Respiratory Society (ERS)/Japanese Respiratory Society (JRS)/Latin American Thoracic Association (ALAT) IPF guidelines \cite{Sebastiano1}. A minimum follow-up time of three years from diagnosis was also required in order to assess survival. For this reason, only patients diagnosed between July 2010 and December 2014 were considered. The dataset includes a series of baseline variables (the features) with an established relation to survival (the classes, where three different survival “degrees” are considered) \cite{Sebastiano2,Sebastiano3}.
The dataset is organized in the following way: the patterns are numbered in column A (we also indicate in column B the date of birth of each patient). We distinguish between two different blocks of features; the first block (from column C to column I, highlighted in light grey) contains features that allow one to perfectly classify each patient; indeed, by using the features introduced in the columns C ... I, it is possible to exactly evaluate the “GAP stage” of each patient (each feature adds a score to the calculation of the GAP stage). Indeed, the features introduced from C to I are all it takes in order to assign to each patient the class to which he belongs; in other words, these features are useful to have an \emph{a priori} classification of each patient. The second block of features is introduced (in light green) from column J to column U; even if these features should allow one to classify the patients, unlike the first block there is no systematic method to classify each patient by involving this set of features only.
Finally, column W contains the labels associated to each different class (column V is only used as a support to calculate W). The rest of the paper will be devoted to using the introduced quantum-inspired algorithm to classify the IPF dataset, only involving the second block of features. But first, let us briefly provide a medical description of the meaning of each feature.
Regarding the first block, the feature “Forced Vital Capacity” (FVC) represents the amount of air which can be forcibly exhaled from the lungs after taking the deepest breath possible \cite{Sebastiano4}. This value, measured with a spirometer, was reported in the dataset as percent of predicted value (FVC\%), resulting from the comparison between a list of normal reference values and the measured ones \cite{Sebastiano4}. In the context of IPF, both the baseline FVC\% value and its change over time represent strong predictors of mortality \cite{Sebastiano5,Sebastiano6}. The feature “Diffusing Capacity for Carbon Monoxide” (DLCO) measures the ability of the lungs to transfer gas from inhaled air to the red blood cells in pulmonary capillaries \cite{Sebastiano7}. As in the case of FVC, also DLCO is expressed as percent of predicted value. Interestingly, in IPF, DLCO is frequently reduced since the early stages of the disease, making this variable more sensitive than FVC to assess interstitial lung damage \cite{Sebastiano8}. Another collected feature which significantly impacts on survival, in IPF as in other diseases, is the “Age at first diagnosis” \cite{Sebastiano1}. The dataset also included the variable “Sex”. Incidence and prevalence of IPF are higher in males than in females with a ratio ranging from 1.6:1 to 2:1. Moreover, male sex was demonstrated to be related to a worse prognosis \cite{Sebastiano1,Sebastiano10}. All of these four features were recently included in a single multidimensional index, known as GAP (gender [G], age [A] and lung physiology variables [P]). This index assigns a point to each variable in order to obtain a single value, in the dataset “GAP point”, which summarizes the weight of each variable. Points ranging from 0 to 3, 4-5 and 6-8 compose respectively “GAP stage 1, 2 and 3” \cite{Sebastiano11}, which we consider as the label of our dataset. Simply speaking, the columns from F to I indicate the contribution to the calculation of the GAP stage provided by the features “Sex”, “FVC”, “DLCO” and “Age”, respectively. Regarding the second block of features, Oxygen saturation (SpO2\%), which reflects blood oxygenation, and heart rate were indirectly measured with a pulse oximeter. Reduced levels of SpO2, which are frequently associated with high levels of heart rate, are usually related to a worse survival \cite{Sebastiano1}. Information regarding smoking habit was also collected and reported as follows: never smoker =0, ex smoker =1 and current smoker =2. The dataset also included a description of high resolution computed tomography (HRCT) features which, according to 2011 IPF guidelines, describe three scenarios: “definite UIP”, “possible UIP” and “inconsistent with UIP” \cite{Sebastiano1}. Recent studies demonstrated that also this evaluation at baseline is related to prognosis \cite{Sebastiano12}. Other variables regarding information on lung transplantation, duration of follow-up (days), status at the end of follow-up (alive $=0$ or died $=1$), confirmation of diagnosis through biopsy and family history of Interstitial Lung Disease (ILD) were also included in the dataset.\\
\subsection{Applying the QNMC to the IPF dataset}
It is natural to expect that each feature described above does not have the same impact in the evaluation of the GAP stage (i.e. in the classification process). As an example (confining ourselves to the second block of features only), it is possible to say that “Sex” and “Oxygen Saturation” have more impact on the classification process than the rest of the considered features. In general, it is possible to recognize that each feature has a different impact on the classification process.
\noindent On this basis, let us stress that the key difference between the NMC and the QNMC regards the invariance under rescaling \cite{sergioli2016,S,Entropy}. Indeed, we have shown in previous works that, conversely to the standard NMC, the QNMC turns out to be not invariant under rescaling, \emph{i.e.} if we multiply each dataset pattern by a real factor. Far from being a disadvantage, this allows us to introduce a “free” parameter that could be useful to get a further improvement of the classification performance.\\
\noindent In order to take into suitable account both the different incidence of each dataset feature in the classification process and the non-invariance of the QNMC under rescaling, the strategy we adopt is to assign to each feature a rescaling factor that is proportional to its degree of incidence. Differently from the previous section, where all the dataset features were multiplied by the same real rescaling factor, here we multiply each feature by a different weight, in accordance with the incidence of that feature in the evaluation of the GAP stage (which is related to the degree of survival). Consequently, the rescaled dataset becomes:
\begin{equation}\label{rescdataset}
\mathcal{S}^{(r)} = \TrSet^{(r)} \cup \TsSet^{(r)}
\end{equation}
where
\begin{align}
\TrSet^{(r)} &= \left\{ (\Gamma \vec x_1, \lambda_1), \ldots , (\Gamma \vec x_M, \lambda_M) \right\},\nonumber\\
\TsSet^{(r)} &= \left\{ (\Gamma \vec y_{1}, \beta_{1}), \ldots , (\Gamma \vec y_{M'}, \beta_{M'}) \right\}, \nonumber
\end{align}
and $\Gamma = \mathrm{diag}(\gamma^1, \ldots, \gamma^d)$ is the diagonal matrix collecting the feature weights, $\gamma^k\in \mathbb{R}$ being the rescaling factor assigned to the $k$-th feature column.
Finally, the quantum version $\mathcal{S}^{\text{q}(r)} = \mathcal{S}_{tr}^{\text{q}(r)}\cup \mathcal{S}_{ts}^{\text{q}(r)}$ of the rescaled dataset is obtained by replacing each rescaled pattern $\Gamma\vec x_i$ and $\Gamma\vec y_j$ in Equation (\ref{rescdataset}) with the corresponding density pattern $\rho_{\Gamma\vec x_i}$ and $\rho_{\Gamma\vec y_j}$.
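A hedged sketch of this feature-wise rescaling, built on the previous snippets (the column indices and weight values below are placeholders, not the ones actually used in our experiments), is:
\begin{verbatim}
# Per-feature rescaling: one weight per feature column.
feature_weights = {0: 10.0, 3: 10.0, 5: 0.1}  # placeholder indices/values
gamma = np.ones(X_train.shape[1])
for col, w in feature_weights.items():
    gamma[col] = w
X_train_r, X_test_r = X_train * gamma, X_test * gamma  # broadcasts row-wise
classes, q_centroids = qnmc_fit(X_train_r, y_train,
                                encode=density_pattern_ie)
err = np.mean(qnmc_predict(X_test_r, classes, q_centroids,
                           encode=density_pattern_ie) != y_test)
\end{verbatim}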
\begin{table}[h]
\tiny{
\begin{center}
\begin{tabular}{cc}
\midrule
Classifier & Total Error \\
\midrule
\vspace{0.1cm}
QNMC (SE) & 0.455 $\pm$ 0.093 \\
\vspace{0.1cm}
QNMC (IE) & 0.378 $\pm$ 0.092 \\
\vspace{0.1cm}
QNMC (IE) Resc 1 & 0.334 $\pm$ 0.097 \\
\vspace{0.1cm}
QNMC (IE) Resc 2 & 0.341 $\pm$ 0.071 \\
\vspace{0.1cm}
QNMC (IE) Resc 3 & 0.344 $\pm$ 0.076 \\
\vspace{0.1cm}
QNMC (IE) Resc 4 & {\bf 0.314 $\pm$ 0.081} \\
\vspace{0.1cm}
NMC & 0.495 $\pm$ 0.085 \\
\vspace{0.1cm}
LDA & 0.393 $\pm$ 0.082 \\
\vspace{0.1cm}
QDA & 0.568 $\pm$ 0.119
\\
\bottomrule
\end{tabular}
\end{center}
\caption{Average classification error for the NMC, QNMC (with different encodings and different rescalings), LDA and QDA classifiers over $50$ runs, with related standard deviations.}
\label{Tab2}
}
\end{table}
In Table \ref{Tab2} we present the statistical results that allow us to compare the performances - in terms of classification error - of the three standard classifiers described above with the two different variants of the quantum-inspired classifier introduced above. In detail, for each classifier we have evaluated the total error (with the respective standard deviation) obtained by running the algorithm 50 times for each different choice of rescaling (each of them chosen in accordance with the different incidence of the dataset features on survival). The standard classifiers we have considered are the NMC, the LDA and the QDA. On the other hand, the proposed quantum-inspired classifiers are the QNMC obtained by the stereographic encoding and the QNMC obtained by the informative encoding. In particular, in this second case four different rescalings have been taken into account.
As shown in Table \ref{Tab2}, the QNMC provides in general a meaningful improvement of the accuracy in the classification process with respect to all the three standard classifiers that have been considered and, interestingly enough, the values of the accuracy obtained for the third class are remarkable. In particular, the QNMC based on the informative encoding exhibits better performance than the NMC (by about $12\%$) and the QDA (where the difference is very high, about $24\%$). On the other hand, this version of the QNMC exhibits performance similar to the LDA (the difference is about $2\%$): since the LDA is the classifier which takes into account the class distribution by means of the covariance matrix (\emph{i.e.}, we can say it is more “informative”), this result suggests that this specific version of the QNMC is sensitive to the dataset distribution and, consequently, it gives a more accurate classification with respect to the NMC, which does not take into account the data distribution.\\
\noindent Let us note that the “stereographic” QNMC provides a classification accuracy worse than the “informative” QNMC (by about $8\%$). This is a remarkable result because it suggests that: \emph{i)} keeping information about the original real vector norm during the encoding process is crucial in order to get a better performing model; \emph{ii)} the choice of the specific encoding is fundamental and strongly affects the final pattern classification.\\
\noindent The final result we discuss concerns the use of the informative encoding together with different rescaling parameters for different features (in accordance with the different incidence of these features on the probability of survival). In particular, we have rescaled the feature columns “Follow Up Time (days)”, “Oxygen saturation $\%$” and “Heart rate” first by a rescaling parameter equal to $0.1$ (“QNMC (IE) Resc 1”), then by a rescaling factor equal to $10$ (“QNMC (IE) Resc 2”) and finally by a rescaling factor equal to $20$ (“QNMC (IE) Resc 3”). In this regard, we can observe a further improvement in terms of accuracy, down to a classification error of about $0.33$. The most interesting result is obtained by concurrently rescaling the feature columns “HRCT Pattern”, “Smoking”, “Smoking Status” by a parameter equal to $600$ and the columns “Sex” and “Oxygen saturation $\%$” by a parameter equal to $10$. In this case, we reach a classification error equal to $31 \%$ (“QNMC (IE) Resc 4”), which is much lower than the NMC classification error (indeed, they differ by approximately $20 \%$). \\
Let us remark that in the proposed approach, which consists of rescaling the feature columns by a real parameter in order to reach some computational benefits, we have adopted a systematic empirical procedure in order to get favorable rescaling parameters. Nevertheless, from the preliminary results shown in Table \ref{Tab2}, it is possible to note that - in accordance with the \emph{a priori} assignment of the incidence of each feature - we obtain advantages in terms of classification performance by multiplying more significant features by a higher rescaling parameter and less significant features by a lower rescaling parameter. Consequently, we can look at the rescaling factor as a “weight” which acts in accordance with the relevance of a specific feature column. This suggests, as a future work, a theoretical analysis in order to systematically obtain the most convenient rescaling for each feature of a given dataset.
\noindent We conclude the experimental sections with the following two remarks:
\begin{enumerate}
\item even if it is possible to establish whether a classifier is “good” or “bad” for a given dataset by evaluating some a priori data characteristics, in general it is not possible to establish an \emph{absolute} superiority of a given classifier over all datasets, as a consequence of the No Free Lunch Theorem \cite{DuHa}. Nevertheless, the QNMC seems to be particularly convenient when the data distribution is difficult to treat with the standard NMC;
\item clearly, there exist classifiers more sophisticated than the ones we have considered in the present work for this specific dataset. Nevertheless, the preliminary results introduced here are enough to show that our quantum-inspired minimum-distance model outperforms not only its natural classical counterpart (\emph{i.e.}, the NMC) but also other better-performing minimum-distance methods.
\end{enumerate}
\section{Concluding remarks}
This paper is mostly devoted to showing the potential of using the standard framework of quantum mechanics in the context of classification problems arising in biomedical settings. In particular, we have shown how, for some artificial and real datasets, certain quantum-inspired classifiers provide a remarkable improvement of the classification accuracy with respect to some standard classifiers. In the second part of the paper we have focused on a very special dataset obtained from a real biomedical context.
Obviously, the techniques actually used in biometrics are much more sophisticated than the standard classifiers that we have considered in this work; nevertheless, we regard the results provided in this paper as promising for establishing a new line of investigation based on the application of the quantum framework to biometric classification problems.
In particular, our future investigation will be based on three relevant points: \emph{i)} first, the QNMC arises as a kind of \emph{quantum replacement} of the standard NMC. We think that, by exploiting the expressive power of the quantum framework, following the same strategy to obtain an analogous quantum replacement of more sophisticated standard classifiers should be naturally beneficial; \emph{ii)} as we have remarked in the paper, the choice of the \emph{best} encoding is strongly dataset-dependent. However, this point deserves further investigation; as an example, it would be important to identify classes of datasets that, because of some internal property, are better handled by one encoding rather than another; \emph{iii)} finally, as we can see, the quantum-inspired classification process we have considered is strongly based on the distribution of the patterns. Hence, the role of the distance is crucial. However, the IPF dataset also contains features that are not given by ordered values (such as \emph{Sex} or \emph{Smoking status}). On this basis, it would be useful to modify the dataset, trying to keep the same reliability in the classification process while only involving features with ordered values. Obviously, all these points require a highly interdisciplinary investigation, and some partial results will be presented in future work.
Finally, we think that even if our investigation is in a preliminary stage, the results introduced in the present paper (and in the previously mentioned ones \cite{sergioli2016,S,SaSe,Entropy}) are promising enough to suggest carrying on with this research.
\begin{acknowledgements}
This work is supported by the Sardinia Region Project “Time-logical evolution of correlated microscopic systems”, CRP 55, LR 7/8/2007.
\end{acknowledgements}
\section{Introduction}
We consider the Dirichlet problem for the generalized H\'{e}non equation
\begin{equation}\label{1.4}
\left \{
\begin{aligned}
-\Delta u+ \mu u &=|x|^\alpha |u|^{p-2}u&&\qquad \text{in $\Omega$,}\\
u&=0&&\qquad \text{on $\partial \Omega$,}
\end{aligned}
\right.
\end{equation}
and the corresponding problem for a Schr\"{o}dinger-H\'{e}non system
\begin{equation}\label{eq:henon-1-zh}
\left \{
\begin{aligned}
-\Delta u + \mu_1 u &= |x|^{\alpha}\partial_u F(u,v)&&\qquad \text{in $\Omega$,}\\
-\Delta v + \mu_2 v &= |x|^{\alpha}\partial_v F(u,v)&&\qquad \text{in $\Omega$,}\\
u&=v=0&&\qquad \text{on $\partial \Omega$,}
\end{aligned}
\right.
\end{equation}
Here $\Omega \subset \mathbb{R}^N,N\geq 2$ is the unit ball, $\mu,\mu_1,\mu_2 \ge 0$, $p>2$, $\alpha>-1$ and $F: \mathbb{R}^2 \to \mathbb{R}$ satisfies the following assumption:
\begin{itemize}
\item[(F)] $F$ is of class $C^2$ on $\mathbb{R}^2$, homogeneous of degree $p>2$ and satisfies $F(u,v) >0$ for $(u,v) \in \mathbb{R}^2 \setminus \{0\}$.
\end{itemize}
We note that (\ref{1.4}) is simply called the H\'enon equation in the case where $\mu=0$; it was introduced by H\'enon in \cite{Henon} in the context of astrophysics.
One of the first mathematical papers on this equation is due to Weiming Ni \cite{N}, who observed
that the presence of the weighted term leads to new critical exponents for
the non-existence of classical positive solutions. After Ni's work,
(\ref{1.4}) has been studied extensively in recent years. In \cite{C-P,S-W,S-S-W} the
authors study the existence of the ground state solutions of
\eqref{1.4} and their asymptotic behavior both for $\alpha>0$ fixed, $p \to 2^*$ and
$2 < p< 2^*$, $\alpha \to \infty$. Here $2^*$ is the critical Sobolev exponent given by
$2^*= \frac{2N}{N-2}$ for $N \ge 3$ and $2^* = \infty$ for $N=1,2$.
We also note that partial symmetry and symmetry-breaking results for ground state solutions of
the H\'{e}non equation were obtained in \cite{S-W}, while partial symmetry results for sign changing solutions were studied in \cite{Bartsch-Weth-Willem,van-Schaftingen-Willem}.
In the special case where $F(u,v) = \frac{a_1u^4+a_2v^4}{4}+ b \frac{u^2 v^2}{2}$ with constants $a_1,a_2 >0$, $b\ge 0$,
System (\ref{eq:henon-1-zh}) is a weighted version of the nonlinear Schr\"odinger system
\begin{equation}\label{1.2}
\left\{\begin{aligned} &-\Delta u+\mu_1 u=a_1u^3+b uv^2 &~~
\hbox{in}~ &~~\Omega,\\ &-\Delta v+\mu_2 v=a_2 v^3+b
u^2v &~~\hbox{in}~ &~~ \Omega.\\
\end{aligned}\right.
\end{equation}
This system arises both in the context of nonlinear optics and of Bose-Einstein condensation and has been receiving extensive attention in recent years, see \cite{A-C,B-L-W-Z,D-W-Z-1,D-W-Z-2,D-W-Z-3,D-W-W,S-W,TV} and the
references therein. The majority of papers is concerned with $\Omega= \mathbb{R}^N$, but also the case of bounded domains $\Omega \subset \mathbb{R}^N$ has been studied together with Dirichlet boundary conditions. We remark in particular that
Sato-Wang \cite{S-W813} studied the limit system with
$b\rightarrow+\infty$, and they obtained the existence of multiple
solutions of the limit system.
Note that, if (\ref{1.2}) is considered with Dirichlet boundary conditions, then every solution $(u,v)$ with $u>0$ and $v>0$ in $\Omega$ is radial by Troy's symmetry result in \cite{T} based on the moving plane method. The same radiality result applies to the more general system (\ref{eq:henon-1-zh}) in the case where $\alpha=0$ and when the system is cooperative, i.e., $\partial_{uv}F(u,v)>0$ for $u,v>0$. On the other hand, the moving plane method breaks down in the case $\alpha>0$ and symmetry breaking of ground state solutions is expected.
The notion of ground state solutions is defined in the case where $2<p< 2^*$. In this case, both problems (\ref{1.4}) and (\ref{eq:henon-1-zh}) have a variational structure with respect to the Sobolev space ${\mathcal H}:=H^1_0(\Omega)$, as solutions are critical points of the corresponding functionals
$$
I_h: {\mathcal H} \to \mathbb{R}, \qquad I_h(u)= \frac{1}{2}\int_\Omega (|\nabla u|^2 + \mu u^2)\,dx - \frac{1}{p}\int_\Omega |x|^\alpha |u|^p\,dx
$$
and $I_{hs}: {\mathcal H}\times {\mathcal H} \to \mathbb{R}$ given by
$$
I_{hs}(u,v)=\frac{1}{2}\dint_\Omega(|\nabla u|^2+\mu_1 u^2+|\nabla v|^2+\mu_2 v^2)dx-\dint_\Omega |x|^\alpha F(u,v)dx.
$$
The corresponding {\em Nehari manifolds} are then given by
$$
\mathcal{N}_h:=\{u \in {\mathcal H} \setminus \{0\} \::\: I_h'(u)u = 0\}
$$
and
$$
\mathcal{N}_{hs}:=\{(u,v) \in {\mathcal H} \times {\mathcal H} \setminus \{(0,0)\} \::\: I_{hs}'(u,v)(u,v) = 0\},
$$
and they form natural constraints in the sense that solutions of (\ref{1.4}) resp. (\ref{eq:henon-1-zh}) are automatically contained in
$\mathcal{N}_h$, $\mathcal{N}_{hs}$, respectively.
As remarked above, it is expected that, for $\alpha>0$ large, ground state solutions of (\ref{1.4}) resp. (\ref{eq:henon-1-zh}) are not radially symmetric. For the case of (\ref{1.4}) with $\mu=0$, this has already been proved in \cite{S-S-W}. There are basically two approaches to prove symmetry breaking, i.e., the non-radiality of ground state solutions of (\ref{1.4}) and (\ref{eq:henon-1-zh}) for $\alpha$ large. The first approach is based on direct energy comparison between radial and nonradial functions in the Nehari manifolds $\mathcal{N}_{h}$ and
$\mathcal{N}_{hs}$. The second approach is to use the Morse index, which is equal to one for every minimizer of $I_h$ on $\mathcal{N}_{h}$ and every minimizer of $I_{hs}$ on $\mathcal{N}_{hs}$.
This approach is in fact much more general since the Morse index of classical solutions of (\ref{1.4}) and (\ref{eq:henon-1-zh}) can be defined for arbitrary $p>2$. Moreover, Morse index estimates are available not only for ground state solutions but also for critical points associated with more general minimax principles.
To define the Morse index, we note that, for a solution $u$ of (\ref{1.4}), the linearized operator at $u$ is given by
$$
L_{u}^\alpha \phi:= -\Delta \phi + \mu \phi - (p-1) |x|^\alpha |u|^{p-2}\phi.
$$
Here and in the following, when we refer to a solution of (\ref{1.4}) or of (\ref{eq:henon-1-zh}), we always mean a classical solution in $C^2(\overline \Omega)$. Then the operator $L_{u}^\alpha$ is self-adjoint in $L^2(\Omega)$ with domain
$H^2(\Omega) \cap H^1_0(\Omega)$ and form domain $H^1_0(\Omega)$, and the Morse index $\mu(u)$ of $u$ is defined as the number of negative
eigenvalues of $L^\alpha_u$ counted with multiplicity. Similarly, for a (non-singular) solution $(u,v)$ of (\ref{eq:henon-1-zh}), the Morse index $\mu(u,v)$ is defined as the number of negative eigenvalues of the linearized operator $L_{u,v}^\alpha$ given by
\begin{equation}
\label{eq:linearized-operator}
L_{u,v}^\alpha {\phi_1 \choose \phi_2 } := -\Delta { \phi_1 \choose \phi_2 } +
{\mu_1 \phi_1 \choose \mu_2 \phi_2 } - |x|^{\alpha} D^2 F(u,v) {\phi_1 \choose \phi_2 }
\end{equation}
with $D^2 F(u,v) = {\partial_{uu}F(u,v)\;\;
\partial_{uv} F(u,v) \choose \partial_{uv} F(u,v)\;\; \partial_{vv} F(u,v)}$. We note that $L_{u,v}^\alpha$ is self-adjoint in $L^2(\Omega,\mathbb{R}^2)$ with domain $H^2(\Omega,\mathbb{R}^2) \cap H^1_0(\Omega,\mathbb{R}^2)$ and form domain $H^1_0(\Omega,\mathbb{R}^2)$.
The main result of the present paper is the following:
\begin{thm}\label{th1.4}
Let $p>2$.
\begin{enumerate}
\item[i)] We have $\mu_h(\alpha) \to \infty$ as $\alpha \to \infty$, where
$$
\mu_{h}(\alpha) := \inf \{\mu(u)\::\: \text{$u$ is a nontrivial radial solution of (\ref{1.4})}\}.
$$
\item[ii)] Suppose that (F) is satisfied, and let
$$
\mu_{hs}(\alpha) := \inf \{\mu(u,v)\::\: \text{$(u,v)$ is a nontrivial radial solution of (\ref{eq:henon-1-zh})}\}
$$
for $\alpha >0$. Then $\mu_{hs}(\alpha) \to \infty$ as $\alpha \to \infty$.
\end{enumerate}
\end{thm}
We remark that assertion (i) is in fact a consequence of assertion (ii). Indeed, if $p>2$ and $u$ is a solution of (\ref{1.4}), then $(u,0)$ is a solution of (\ref{eq:henon-1-zh}) with $\mu_1= \mu$, $\mu_2=0$ and the nonlinearity $F(u,v)= \frac{|u|^{p}+|v|^p}{p}$, which satisfies assumption $(F)$. Moreover, if $u$ has Morse index $\mu(u)= k$, then $(u,0)$ has Morse index $\mu(u,0)=k$, since the linearized operator $L_{u,0}^\alpha$ coincides with the diagonal operator $\mathrm{diag}\,(L_u^\alpha,\, -\Delta)$ on $H^2(\Omega,\mathbb{R}^2) \cap H^1_0(\Omega,\mathbb{R}^2)$ and the second component is a positive semidefinite operator.\\
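To see the decoupling explicitly, note that for $F(u,v)=\frac{|u|^p+|v|^p}{p}$ with $p>2$ one has
$$
D^2F(u,0) = \left(
\begin{array}{cc}
(p-1)|u|^{p-2} & 0\\
0 & 0
\end{array}
\right),
$$
so that $L_{u,0}^\alpha$ acts on $(\phi_1,\phi_2)$ as $\bigl(L_u^\alpha \phi_1,\, -\Delta \phi_2\bigr)$.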
Theorem~\ref{th1.4} is new already in the special case of the H\'{e}non equation
(\ref{1.4}) with $\mu=0$. In this case, it extends and complements recent interesting results of Moreira dos Santos and Pacella in \cite{Moreira-dos-Santos-Pacella}, who obtained explicit lower bounds on the Morse index in the planar case $N=2$. More precisely, they consider the equation $-\Delta u = |x|^\alpha f(u)$ in a ball or an annulus in $\mathbb{R}^2$ together with Dirichlet boundary conditions, and they prove that, for
{\em even $\alpha>0$}, radial sign changing solutions have a Morse index greater than or equal to $\alpha+3$, see \cite[Theorem 1.4]{Moreira-dos-Santos-Pacella}. Moreover, if $f$ is superlinear, the lower bound improves to $\alpha + n(u) + 3$, where $n(u)$ denotes the number of nodal domains of $u$. These results are obtained by means of special transformations and a study of the corresponding
non-weighted reduced problem. For this approach, the assumptions $\alpha$ even, $N=2$ are key requirements, and it also does not seem to extend to systems of type (\ref{eq:henon-1-zh}).
Theorem~\ref{th1.4} immediately implies the following result on ground state solutions, as the Morse index equals one for all ground state solutions of (\ref{1.4}) and (\ref{eq:henon-1-zh}).
\begin{coro}\label{thm 3.1}
Let $2<p<2^*$. Then there exists a number $\bar{\alpha}>0$ such that for $\alpha \ge \bar{\alpha}$
\begin{enumerate}
\item[i)] every
ground state solution of (\ref{1.4}) is not radially symmetric;
\item[ii)] every ground state solution of (\ref{eq:henon-1-zh}) is not radially symmetric.
\end{enumerate}
\end{coro}
As noted already, Corollary~\ref{thm 3.1}i) is due to \cite{S-S-W} in the case $\mu=0$. Moreover, it has been proved by Smets and Willem in \cite{S-W} that ground state solutions of (\ref{1.4}) are foliated Schwarz symmetric for every
$\alpha \ge 0$. We recall that a function $u$ on $\Omega$ is called foliated Schwarz symmetric with respect to some unit vector $e \in \mathbb{R}^N$ if $u$ is axially symmetric with respect to the axis $\mathbb{R} e$ and nonincreasing in the angle $\theta = \arccos x \cdot e$. In the case where $p \ge 3$, the same symmetry is shared more generally by every solution $u$ of (\ref{1.4}) with Morse index $\mu(u) \le N$, the space dimension, see \cite{Pacella-Weth}.
In the case of the system~(\ref{eq:henon-1-zh}), we need to assume cooperativity again to recover foliated Schwarz symmetry. More precisely, if $p \ge 3$ and $\partial_{uv}F(u,v)>0$ for $u,v>0$, then every solution $(u,v)$ with Morse index $\mu(u,v) \le N$ is foliated Schwarz symmetric, i.e., both components $u$ and $v$ are foliated Schwarz symmetric with respect to the {\em same} unit vector $e$, see \cite[Theorem 1.4]{D-P}. Such a property is not expected in the non-cooperative case. For a study of symmetry properties in this case, we refer the reader to the recent papers \cite{T-W, S-T}.
We also mention related work on symmetry of solutions to the related second order Hamiltonian PDE system
\begin{equation}\label{1.3}
\left\{\begin{aligned}
&-\Delta u=|x|^\beta |v|^{q-1}v&~ \hbox{in} ~&~ \Omega,\\
&-\Delta v=|x|^\alpha |u|^{p-1}u&~\hbox{in}~ &~ \Omega,\\
&u=v=0 &~~\hbox{on} ~&~\partial\Omega
\end{aligned}\right.
\end{equation}
in the unit ball $\Omega \subset \mathbb{R}^N$, where $~\alpha, \beta> 0,~p,q>0$ and $\frac{1}{p+1}+\frac{1}{q+1}>\frac{N-2}{N}$.
In \cite{C-R}, Calanchi and Ruf have introduced the notion of ground state solutions, and they show symmetry breaking of these solutions for large values of $\alpha$ or $\beta$.
Moreover, in \cite{B-S-R-1,B-S-R-2}, Bonheure, dos Santos and Ramos proved that ground
state solutions always exhibit foliated
Schwarz symmetry, and they present precise conditions on the parameters under which
the ground state solutions are not radially symmetric. Since there is no straightforward notion of Morse index of solutions of (\ref{1.3}), Theorem~\ref{th1.4} does not seem to have an analogue for (\ref{1.3}). Instead, the available results on symmetry breaking of ground state solutions of (\ref{1.3}) rely on a direct energy comparison involving radial and nonradial test functions.
We close the introduction with a brief outline of the strategy of the proof of Theorem~\ref{th1.4} and the structure of this paper. The main argument is given in Section~\ref{sec:symm-break-henon}. Here we argue by contradiction, assuming that there exists a sequence of numbers $\alpha_k>0$ with $\alpha_k \to \infty$ for $k \to \infty$ and, for every $k$, a nontrivial radial solution $(\tilde u_k,\tilde v_k)$ of (\ref{eq:henon-1-zh}) with $\alpha = \alpha_k$ and such that the Morse index of $(\tilde u_k,\tilde v_k)$ remains finite as $k \to \infty$. A spectral analysis using spherical harmonics then implies that an associated weighted radial eigenvalue problem only admits nonnegative eigenvalues. Inspired by Byeon and Wang \cite{B-W}, we then use a change of variable $r=|x|= e^{-\beta_k t}$ with $\beta_k = \frac{N}{N+\alpha_k}$ to transform both the $k$-dependent nonlinear system and the associated eigenvalue problem to the half line. Moreover, using the stability information derived in the previous step, we deduce local a priori bounds on the transformed sequence of solutions via a contradiction argument based on a blow up analysis. The a priori bounds then allow us to pass to the limit along a subsequence and to deduce the existence of a stable solution of an associated limit problem either on $\mathbb{R}$ or on the half line. In Section~\ref{sec:preliminaries}, we derive a corresponding Liouville theorem which excludes the existence of stable solutions of these limit problems, and this yields a contradiction. Some technical parts of the argument, in particular regarding a variant of the very useful doubling lemma of Polacik, Quittner and Souplet \cite{PQS}, are postponed to the appendix of the paper.
\section{Preliminary results}
\label{sec:preliminaries}
In the present section, we collect some preliminary results which will be used in the proof of Theorem~\ref{th1.4}. Throughout this section, we assume that the function $F: \mathbb{R}^2 \to \mathbb{R}$ satisfies assumption (F) from the introduction with some $p>2$. We start by noting some immediate consequences of (F). First, it follows that
\begin{equation}
\label{eq:consequence-F-0}
F(u,v) \ge c_F (|u|^p + |v|^p) \qquad \text{for $u,v \in \mathbb{R}$}
\end{equation}
with $c_F:= \min \{F(u,v)\::\: u,v \in \mathbb{R},\:|u|^p + |v|^p=1\}>0.$ Moreover, by differentiating the function $t \mapsto F(tu,tv)$ at $t=1$, we see that
\begin{equation}
\label{F-consequence-1}
p F(u,v) = \partial_u F(u,v)u + \partial_v F(u,v) v.
\end{equation}
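For the reader's convenience, we record the computation: by the $p$-homogeneity we have $F(tu,tv)=t^p F(u,v)$ for $t>0$, and differentiating this identity in $t$ yields
$$
\partial_u F(tu,tv)\,u + \partial_v F(tu,tv)\,v = p\, t^{p-1} F(u,v),
$$
so that (\ref{F-consequence-1}) follows upon setting $t=1$.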
Next, it is easy to see that the partial derivatives $\partial_u F(u,v)$, $\partial_v F(u,v)$ are $(p-1)$-homogeneous. Consequently, by differentiating the function
$$
t \mapsto \partial_u F(tu,tv)u + \partial_v F(tu,tv) v
$$
at $t=1$, we see that
\begin{equation}
\label{F-consequence-2}
\Bigl \langle D^2 F(u,v) {u \choose v},{u \choose v} \Bigr \rangle = (p-1) \bigl(\partial_u F(u,v)u + \partial_v F(u,v) v\bigr)
\end{equation}
where
$$
D^2 F(u,v) = \left (
\begin{array}{cc}
\partial_{uu} F(u,v) & \partial_{uv}F(u,v)\\
\partial_{uv}F(u,v) & \partial_{vv}F(u,v)
\end{array}
\right)
$$
Here and in the following, we let $\langle \cdot, \cdot \rangle$ denote the inner product in $\mathbb{R}^2$. Combining these assumptions, we see in particular that
\begin{align}
\label{F-consequence-3}
\Bigl \langle D^2 F(u,v) {u \choose v},{u \choose v} \Bigr \rangle &- \partial_u F(u,v)u - \partial_v F(u,v) v\\
&\ge p(p-2) c_F (|u|^p+|v|^p)\qquad \text{for $u,v \in \mathbb{R}$.}\nonumber
\end{align}
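To make the combination explicit, note that (\ref{F-consequence-2}) and (\ref{F-consequence-1}) give
$$
\Bigl \langle D^2 F(u,v) {u \choose v},{u \choose v} \Bigr \rangle - \partial_u F(u,v)u - \partial_v F(u,v) v
= (p-2)\bigl(\partial_u F(u,v)u + \partial_v F(u,v) v\bigr) = p(p-2) F(u,v),
$$
so that (\ref{F-consequence-3}) follows directly from the lower bound (\ref{eq:consequence-F-0}).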
In Section~\ref{sec:symm-break-henon}, we will study radial solutions of (\ref{eq:henon-1-zh}) after a transformation. This approach will lead us to consider ODE systems both on $\mathbb{R}$ and on the half line $[0,\infty)$. In the remainder of this section, we will be concerned with observations related to functions on $\mathbb{R}$ and on $[0,\infty)$. We start with an elementary estimate for $C^1$-functions on the half line.
\begin{lemma}
\label{approx-lemma}
Let $u \in C^1([0,\infty))$ be a function with
\begin{equation}
\label{assumption-approx-lemma}
u(0)=0\qquad \text{and}\qquad \int_0^\infty [u']^2\,dt < \infty.
\end{equation}
Then we have
\begin{equation}
\label{eq:approx-lemma-limit-0}
u^2(t) \le t \int_0^\infty [u']^2\,d\tau \qquad \text{for all $\:t \ge 0$,}
\end{equation}
and there exists a sequence
$(\psi_n)_n$ in $C^1_c(0,\infty)$ with
\begin{equation}
\label{eq:approx-lemma-limit}
\lim_{n \to \infty} \int_0^\infty (u-\psi_n)'^2\,dt = 0 \qquad \text{and}\qquad e^{-\delta t}(u-\psi_n)(t) \to 0
\end{equation}
uniformly in $t \in [0,\infty)$ as $n \to \infty$ for any $\delta>0$.
\end{lemma}
\begin{proof}
By (\ref{assumption-approx-lemma}) and H\"older's inequality we have
$$
|u(t)| = \Bigl| \int_0^t u'(\tau)\,d\tau\Bigr| \le \sqrt{C_u t} \quad \text{for $t \ge 0$}\qquad \text{with}\qquad C_u:=\int_0^\infty [u']^2\,d\tau,
$$
as claimed in (\ref{eq:approx-lemma-limit-0}). Let $\phi \in C_c^{\infty}(\mathbb{R})$ with $0 \le \phi \le 1$ and $\phi \equiv 1$ on $[-1,1]$, $\phi \equiv 0$ on $\mathbb{R} \setminus [-2,2]$. Moreover, let
$$
\phi_n \in C_c^\infty(0,\infty),\qquad \phi_n(t)= \phi(\frac{\ln t}{n})
$$
We then note that $0 \le \phi_n \le 1$ on $(0,\infty)$ and
$$
\phi_n(t) \to 1 \qquad \text{as $n \to \infty\quad $ for $t \in (0,\infty)$,}
$$
where the convergence is uniform in compact subsets of $(0,\infty)$. Let $\psi_n := \phi_n u \in C^1_c(0,\infty)$ for $n \in \mathbb{N}$. Then we have
$$
e^{-\delta t}|u(t)-\psi_n(t)|= e^{-\delta t}|(1-\phi_n(t))u(t)|\le \sqrt{C_u t}e^{-\delta t}|1-\phi_n| \qquad \text{for $n \in \mathbb{N}$, $t\ge 0$,}
$$
where the RHS tends to zero as $n \to \infty$ uniformly in $t \ge 0$. It thus remains to show the first limit in (\ref{eq:approx-lemma-limit}). For this we note that
\begin{align*}
\int_0^\infty [(u-\psi_n)']^2\,dt &= \int_0^\infty [(1-\phi_n)u' - \phi_n' u]^2\,dt \\
&\le 2 \int_0^\infty \Bigl(
(1-\phi_n)^2[u']^2 + [\phi_n']^2 u ^2\Bigr)\,dt,
\end{align*}
where, by Lebesgue's theorem,
$$
\int_0^\infty (1-\phi_n)^2[u']^2 \to 0 \qquad \text{as $n \to \infty$,}
$$
and
\begin{align*}
&\int_0^\infty [\phi_n']^2 u^2\,dt = \frac{1}{n^2} \int_0^\infty \frac{1}{t^2}[\phi']^2(\frac{\ln t}{n})u^2(t)\,dt \le \frac{4 C_u}{n^2} \int_0^\infty \frac{1}{t}[\phi']^2(\frac{\ln t}{n})\,dt \\
&=\frac{4 C_u}{n^2} \int_\mathbb{R} [\phi']^2(\frac{s}{n}) \,ds =\frac{4 C_u}{n^2} \int_{-2n}^{2n} [\phi']^2(\frac{s}{n}) \,ds
\le \frac{16 C_u \|\phi'\|_\infty^2}{n} \to 0 \quad \text{as $n \to \infty$.}
\end{align*}
The proof is thus finished.
\end{proof}
Next, we state a Liouville Theorem for bounded solutions $(u,v)$ of the ODE system
\begin{equation}
\label{eq:limit-system-rescaling-without-bc}
\left \{
\begin{aligned}
-u'' &= \partial_u F(u,v)&&\qquad \text{in $I$,}\\
-v'' &= \partial_v F(u,v)&&\qquad \text{in $I$,}
\end{aligned}
\right.
\end{equation}
where
\begin{equation}
\label{sec:limit-system-1-0}
I = \mathbb{R} \qquad \text{or}\qquad I = (0,\infty).
\end{equation}
We need to introduce some notation. Let $(u,v) \in L^\infty(I,\mathbb{R}^2) \cap C^2(I,\mathbb{R}^2)$ be a fixed solution of (\ref{eq:limit-system-rescaling-without-bc}). We consider the quadratic form
$q_{u,v}$ on $C^1_c(I,\mathbb{R}^2)$ defined by
$$
q_{u,v}(\phi) := \int_{I} \bigl(|\phi'|^2 - \bigl \langle D^2 F(u,v) \phi, \phi \bigr \rangle
\bigr)\,dt
$$
If $\Omega \subset I$ is an open subset, we say that $(u,v)$ is {\em stable in $\Omega$} if $q_{u,v}(\phi) \ge 0$ for all $\phi \in C^1_c(\Omega,\mathbb{R}^2).$ Moreover, we say that $(u,v)$ is stable outside a compact set if $(u,v)$ is stable in $I \setminus K$ for some compact set $K \subset I$. We then have the following nonexistence result.
\begin{theorem}
\label{sec:limit-system-1}
Let $I$ satisfy (\ref{sec:limit-system-1-0}), and let $(u,v) \in L^\infty(I,\mathbb{R}^2) \cap C^2(I,\mathbb{R}^2)$ be a solution of (\ref{eq:limit-system-rescaling-without-bc}) which is stable outside a compact set.\\
Then $u \equiv v \equiv 0$.
\end{theorem}
\begin{remark}
For a solution $(u,v) \in \bigl[L^\infty(I) \cap C^2(I)\bigr]^2$ of (\ref{eq:limit-system-rescaling-without-bc}), one may define the Morse index as the maximal $k \in \mathbb{N} \cup \{0\} \cup \{\infty\}$ such that there exists a $k$-dimensional subspace $X \subset C^1_c(I,\mathbb{R}^2)$ with the property that
$$
q_{u,v}(\phi) < 0 \qquad \text{for every $\phi \in X \setminus \{0\}$.}
$$
A standard and straightforward argument shows that a solution
$(u,v) \in \bigl[L^\infty(I) \cap C^2(I)\bigr]^2$ of (\ref{eq:limit-system-rescaling-without-bc}) with finite Morse index is stable outside a suitable compact set $K \subset I$. Therefore the conclusion of Theorem~\ref{sec:limit-system-1} also applies to solutions with finite Morse index. In the present paper, we apply Theorem~\ref{sec:limit-system-1} only to the case where $(u,v)$ is stable, i.e., where $q_{u,v}(\phi) \ge 0$ for all $\phi \in C^1_c(I,\mathbb{R}^2).$
\end{remark}
For the proof of Theorem~\ref{sec:limit-system-1}, we first need the following observation.
\begin{lemma}
\label{sec:limit-system-2}
Let $I$ satisfy (\ref{sec:limit-system-1-0}), and let $(u,v) \in L^\infty(I,\mathbb{R}^2) \cap C^2(I,\mathbb{R}^2)$ be a solution of (\ref{eq:limit-system-rescaling-without-bc}) with $(u,v) \not \equiv (0,0)$. Then there exist $\varepsilon,\delta>0$ and a sequence $(r_n)_n \subset I$ with $r_n \to \infty$ as $n \to \infty$ and
\begin{equation}
\label{eq:r-n-eps-delta-est}
\int_{r_n-\varepsilon}^{r_n+\varepsilon}(|u|^p+|v|^p)\,dt \ge \delta.
\end{equation}
\end{lemma}
\begin{proof}
By (\ref{eq:limit-system-rescaling-without-bc}), we see that the function
$$
E:= \frac{1}{2}\Bigl(u'^2+v'^2\Bigr) + F(u,v)
$$
is constant on $I$. Let $c_{u,v}$ denote the constant value of this function. Then $c_{u,v}>0$ by assumption (F) and since $(u,v) \not \equiv (0,0)$. Since $u^2+v^2$ is a bounded function on $I$, there exists a sequence $(t_n)_n \subset I$ with $t_{n+1} \ge t_n +1$ for every $n \in \mathbb{N}$ and
$$
u'(t_n)u(t_n)+ v'(t_n)v(t_n)= \frac{d}{dt}\Big|_{t_n}\Bigl(u^2+v^2\Bigr) \to 0 \qquad \text{as $n \to \infty$.}
$$
Suppose by contradiction that
\begin{equation}
\label{eq:contradiction-decay}
u(t),v(t) \to 0 \qquad \text{as $t \to \infty$.}
\end{equation}
Multiplying (\ref{eq:limit-system-rescaling-without-bc}) with $u$, $v$ respectively and integrating by parts over $(t_n,t_{n+1})$, we then find that
$$
\int_{t_n}^{t_{n+1}}\Bigl(u'^2+v'^2\Bigr)\,dt= o(1) + p\int_
{t_n}^{t_{n+1}}F(u,v)\,dt = o(|t_{n+1}-t_n|)
$$
as $n \to \infty$. Thus there exist numbers $s_n \in (t_n,t_{n+1})$, $n \in \mathbb{N}$, with
$$
u'(s_n)^2 + v'(s_n)^2 \to 0 \qquad \text{as $n \to \infty$.}
$$
Moreover, $u(s_n),v(s_n) \to 0$ as $n \to \infty$ by (\ref{eq:contradiction-decay}). By definition of $E$, this contradicts the fact that $c_{u,v}>0$. Hence (\ref{eq:contradiction-decay}) is false, and so there exists $\delta>0$ and a sequence $(r_n)_n \subset I$ with $r_n \to \infty$ and
$$
|u|^p(r_n) + |v|^p(r_n) \ge 2 \delta \qquad \text{for all $n \in \mathbb{N}$.}
$$
Moreover, since $u'', v'' \in L^\infty(I)$ as a consequence of (\ref{eq:limit-system-rescaling-without-bc}), we may choose $\varepsilon>0$ sufficiently small such that
$$
\int_{r_n- \varepsilon}^{r_n+\varepsilon}(|u|^p+|v|^p)\,dt \ge \delta \qquad \text{for all $n \in \mathbb{N}$,}
$$
as claimed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{sec:limit-system-1}]
We suppose by contradiction that $(u,v) \not \equiv (0,0)$. Let $\psi \in C^1_c(I \setminus K)$. The stability assumption applied to $(u \psi, v \psi) \in C^1_c(I \setminus K,\mathbb{R}^2)$ then yields
\begin{align}
&0 \le \int_{I} \Bigl([(\psi u)']^2 + [(\psi v)']^2
- \Bigl \langle D^2 F(u,v) {\psi u \choose \psi v}, {\psi u \choose \psi v} \Bigr \rangle
\Bigr)\,dt \nonumber\\
&= \int_{I} \Bigl( u'(\psi^2 u)' + v' (\psi^2 v)' +(u^2+v^2) [\psi']^2
- \Bigl \langle D^2 F(u,v) {\psi u \choose \psi v}, {\psi u \choose \psi v} \Bigr \rangle
\Bigr)dt\nonumber\\
&= \int_{I} \Bigl([-u''- \partial_u F(u,v)](\psi^2 u) +
[-v''- \partial_v F(u,v)](\psi^2 v)+ (u^2+v^2) [\psi']^2\Bigr)dt\nonumber\\
&\qquad -\int_I \psi^2 \Bigl(\,\Bigl\langle D^2 F(u,v) {u \choose v}, {u \choose v} \Bigr \rangle - \partial_u F(u,v)u- \partial_v F(u,v)v\Bigr)\,dt\nonumber\\
&\le \int_I (u^2+v^2) [\psi']^2\,dt - p (p-2)c_F \int_I (|u|^p + |v|^p)\psi^2 \,dt, \label{eq:psi-est}
\end{align}
where in the last step (\ref{F-consequence-3}) and (\ref{eq:limit-system-rescaling-without-bc}) were used.
Now, for arbitrary $\phi \in C^1_c(I \setminus K)$, we apply (\ref{eq:psi-est}) to $\psi= \phi^{\frac{p}{p-2}}$ to get
$$
\int_I \Bigl(|u|^p + |v|^p\Bigr) \phi^{\frac{2p}{p-2}}\,dt \le \frac{p}{c_F(p-2)^3} \int_I (u^2+v^2)\phi^{\frac{4}{p-2}}[\phi']^2\,dt\\
$$
Combining this with Young's inequality yields, for $\tau>0$,
\begin{align*}
\int_I &\bigl(|u|^p + |v|^p\bigr) \phi^{\frac{2p}{p-2}}\,dt\\
& \le \frac{2\,\tau^{\frac{p}{2}} }{c_F(p-2)^3}\int_I \bigl(|u|^p + |v|^p\bigr) \phi^{\frac{2p}{p-2}}\,dt + \frac{2}{c_F(p-2)^2 \tau^{\frac{2p}{p-2}}}\int_{I} |\phi'|^{\frac{2p}{p-2}}\,dt.
\end{align*}
Choosing $\tau>0$ such that $\frac{2 \,\tau^{\frac{p}{2}}}{c_F(p-2)^3}= \frac{1}{2}$, we thus conclude that
\begin{equation}
\label{eq:phi-est}
\int_I \bigl(|u|^p + |v|^p\bigr) \phi^{\frac{2p}{p-2}}\,dt \le C_\tau \int_{I} |\phi'|^{\frac{2p}{p-2}}\,dt
\end{equation}
with $C_\tau := \frac{4 \tau^{\frac{2p}{2-p}}}{c_F(p-2)^2}$. Next, let $\phi_0 \in C^1_c(\mathbb{R})$ satisfy
$$
0 \le \phi_0 \le 1, \quad \text{$\phi_0 \equiv 1$ on $[-1,1]$}\quad \text{and}\quad \text{$\phi_0 \equiv 0$ on $\mathbb{R} \setminus [-2,2]$.}
$$
For $\rho>0$ and $r \in \mathbb{R}$, we then consider
$$
\phi_{\rho,r} \in C^1_c(\mathbb{R}),\qquad \phi_{\rho,r}(t)= \phi_0(\rho(t-r)),
$$
so that
$$
\int_\mathbb{R} [\phi_{\rho,r}'(t)]^{\frac{2p}{p-2}}\,dt = \rho^{\frac{2p}{p-2}} \int_{\mathbb{R}} [\phi_0'(\rho(t-r))]^{\frac{2p}{p-2}}\,dt =
\rho^{\frac{p+2}{p-2}} \int_{\mathbb{R}}[\phi_0'(s)]^{\frac{2p}{p-2}}\,ds.
$$
With $\varepsilon,\delta>0$ given by Lemma~\ref{sec:limit-system-2}, we may now fix $\rho<\frac{1}{\varepsilon}$ sufficiently small such that
\begin{equation}
\label{eq:delta-a-min-est}
\int_\mathbb{R} [\phi_{\rho,r}'(t)]^{\frac{2p}{p-2}}\,dt < \frac{\delta}{C_\tau} \qquad \text{for all $r \in \mathbb{R}$.}
\end{equation}
Since $r_n \to \infty$ for the sequence $(r_n)_n$ given by Lemma~\ref{sec:limit-system-2}, there exists $n_0 \in \mathbb{N}$ such that
$$
{\rm supp} \, \phi_{\rho,r_n} \subset I \setminus K \qquad \text{for $n \ge n_0$.}
$$
Since moreover $\phi_{\rho,r_n} \equiv 1$ on $[r_n-\varepsilon,r_n+\varepsilon]$ by our choice of $\rho$, we can use (\ref{eq:phi-est}) and (\ref{eq:delta-a-min-est}) to estimate that
$$
\int_{r_n-\varepsilon}^{r_n+\varepsilon}(|u|^p+|v|^p)\,dt \le \int_I (|u|^p+|v|^p)\phi_{\rho,r_n}^{\frac{2p}{p-2}}
\,dt \le C_\tau \int_\mathbb{R} [\phi_{\rho,r_n}'(t)]^{\frac{2p}{p-2}}\,dt
< \delta
$$
for $n \ge n_0$, contrary to (\ref{eq:r-n-eps-delta-est}). The contradiction shows that $u \equiv v \equiv 0$, as claimed.
\end{proof}
We close this section with estimates for a more general ODE system which arises when studying radial solutions of
(\ref{eq:henon-1-zh}) after a transformation. These estimates will be used in Proposition~\ref{sec:morse-index-system} below.
\begin{lemma}
\label{pohozaev-transformed}
Let $\gamma >0$, $\rho \ge 0$, $N \ge \frac{p\rho}{2} + \frac{(p-2)\gamma}{2}$ and $\nu_1, \nu_2 \ge 0$ be constants. Moreover, let
$(u, v) \in C^2([0,\infty),\mathbb{R}^2)$ be a bounded solution of the system
\begin{equation}
\label{eq:henon-transformed-0}
\left \{
\begin{aligned}
-\bigl(e^{-\gamma t} u'\bigr)' + \nu_1 e^{-\rho t} u &= e^{-N t}\partial_u F(u,v)&&\quad \text{in $(0,\infty)$,}\\
-\bigl(e^{-\gamma t} v'\bigr)' + \nu_2 e^{-\rho t} v &= e^{-N t}\partial_v F(u,v)&&\quad \text{in $(0,\infty)$,}\\
u (0)&=v(0)=0.
\end{aligned}
\right.
\end{equation}
Then we have
\begin{equation}
\label{pohozaev-transformed-1}
[u'(0)]^2+ [v'(0)]^2 \ge \frac{2(N+\gamma)}{p} \int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt.
\end{equation}
Moreover, if $\gamma \le \frac{N}{3p}$, then there exists a constant $C>0$ depending only on $F,p$ and $N$ such that
\begin{equation}
\label{pohozaev-transformed-lower-bound}
[u'(0)]^2+ [v'(0)]^2 \ge C \qquad \text{if $(u,v) \not \equiv (0,0)$.}
\end{equation}
\end{lemma}
\begin{proof}
By (\ref{eq:henon-transformed-0}), we have that
\begin{align*}
&\frac{1}{2}\Bigl([u'(0)]^2+ [v'(0)]^2\Bigr) = -\frac{1}{2}\int_0^\infty \partial_t \Bigl([e^{-\gamma t} u']^2+[e^{-\gamma t}v']^2\Bigr) \,dt\\
&= -\int_0^\infty \Bigl(e^{-\gamma t}u' (e^{-\gamma t} u')'+ e^{-\gamma t}v'
(e^{-\gamma t}v')'\Bigr)\,dt\\
&= \!\int_0^\infty\!\!\! e^{-\gamma t}\Bigl( e^{-Nt}\bigl(\partial_u F(u,v)u' + \partial_v F(u,v)v'\bigr)-e^{- \rho t}(\nu_1 uu'+\nu_2 vv')\!\Bigr)dt\\
&= \int_0^\infty \Bigl[e^{-(N+\gamma)t} [F(u,v)]'-\frac{e^{-(\rho+ \gamma)t}}{2}(\nu_1 u^2+\nu_2 v^2)'\Bigr]\,dt\\
&= (N+\gamma) \int_0^\infty e^{-(N+\gamma)t} F(u,v)\,dt - \frac{\rho+ \gamma}{2} \int_0^\infty e^{-(\rho+ \gamma)t} (\nu_1 u^2+\nu_2 v^2) \,dt\\
& \ge (N+\gamma) \int_0^\infty e^{-\gamma t}\Bigl[e^{-Nt}F(u,v)- \frac{e^{-\rho t}}{p}(\nu_1 u^2+\nu_2 v^2)\Bigr]\,dt,
\end{align*}
where in the last step we used the assumption $N \ge \frac{p\rho}{2} + \frac{(p-2)\gamma}{2}$. Multiplying (\ref{eq:henon-transformed-0}) by $u$ resp. $v$ and integrating, we thus find that
\begin{align}
&\int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \nonumber \\
&=-\int_0^\infty e^{-\gamma t}\bigl[\bigl(e^{-\gamma t} u'\bigr)'u + \bigl(e^{-\gamma t} v'\bigr)'v\bigr]\,dt \nonumber \\
&=\int_0^\infty e^{-\gamma t}\Bigl[e^{-Nt}(\partial_u F(u,v)u + \partial_v F(u,v)v)- e^{-\rho t}(\nu_1 u^2+\nu_2 v^2)\Bigr]\,dt \nonumber\\
&=\int_0^\infty e^{-\gamma t}\Bigl[p e^{-Nt} F(u,v)- e^{-\rho t}(\nu_1 u^2+\nu_2 v^2)\Bigr]\,dt \label{compactness-add-ref}\\
&\le \frac{p}{2(N+\gamma)} \Bigl([u'(0)]^2+ [v'(0)]^2\Bigr), \nonumber
\end{align}
as claimed in (\ref{pohozaev-transformed-1}). Here we used (\ref{F-consequence-1}). Moreover, for $t \ge 0$ we have
$$
e^{- \gamma t} u(t)
= \int_{0}^t (e^{-\gamma s} u)'\,ds \le \sqrt{t} \Bigl(\int_0^\infty [(e^{-\gamma s} u)'\,]^2\,ds\Bigr)^{\frac{1}{2}}
$$
and thus, if $\gamma \le \frac{N}{3p}$,
\begin{align*}
&e^{- \frac{4N}{3p}t} u^2(t) = e^{-(\frac{4N}{3p}+2\gamma)t}[e^{- \gamma t} u(t)]^2 \le
te^{-(\frac{4N}{3p}+2\gamma)t} \int_0^\infty [(e^{-\gamma s} u)'\,]^2\,ds\\
&\le t e^{-\frac{2N}{3p}t} \int_0^\infty [(e^{-\gamma s} u)'\,]^2\,ds \le C_{N,p} \int_0^\infty [(e^{-\gamma s} u)'\,]^2\,ds
\end{align*}
with $C_{N,p}:= \max \limits_{t \ge 0}\bigl(t e^{-\frac{2N}{3p}t}\bigr)$. The same estimate holds for $v$, and then by (\ref{compactness-add-ref}) we get
\begin{align}
\int_0^\infty &\Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \le p \int_0^\infty e^{-(N+\gamma) t}F(u,v)\,dt
\nonumber\\
&\le p C_F \int_0^\infty e^{- N t} \Bigl(|u(t)|^2+|v(t)|^2\Bigr)^{\frac{p}{2}}\,dt \nonumber \\
&= p C_F \int_0^\infty e^{- \frac{N}{3} t} \Bigl(e^{-\frac{4N}{3p} t} |u(t)|^2+e^{-\frac{4N}{3p} t}|v(t)|^2\Bigr)^{\frac{p}{2}}\,dt \nonumber\\
&\le p C_F {C_{N,p}}^{\frac{p}{2}} \int_{0}^\infty e^{-\frac{N}{3} t}\,dt \Bigl[\int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \Bigr]^{\frac{p}{2}} \nonumber\\
&=\frac{3 p C_F {C_{N,p}}^{\frac{p}{2}}}{N}\Bigl[\int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \Bigr]^{\frac{p}{2}}.\nonumber
\end{align}
So if $(u,v) \not \equiv (0,0)$ we have
$$
\int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \ge \Bigl(\frac{N}{3 p C_F {C_{N,p}}^{\frac{p}{2}}}\Bigr)^{\frac{2}{p-2}}.
$$
Combining this with (\ref{pohozaev-transformed-1}), we thus conclude that
$$
[u'(0)]^2+ [v'(0)]^2 \ge \frac{2 N}{p} \int_0^\infty \Bigl([(e^{-\gamma t}u)'\,]^2 + [(e^{-\gamma t}v)'\,]^2\Bigr)\,dt \ge \frac{2 N}{p}\Bigl(\frac{N}{3 p C_F {C_{N,p}}^{\frac{p}{2}}}\Bigr)^{\frac{2}{p-2}},
$$
as claimed in (\ref{pohozaev-transformed-lower-bound}).
\end{proof}
\section{Symmetry breaking for the H{\'e}non-Schr\"{o}dinger system}
\label{sec:symm-break-henon}
The present section is completely devoted to the proof of Theorem~\ref{th1.4}. As noted in the introduction, Part (i) is a direct consequence of Part (ii), so it only remains to prove Part (ii).
Arguing by contradiction, we suppose that there exists $m>0$, a sequence of numbers $\alpha_k>0$ with $\alpha_k \to \infty$ for $k \to \infty$ and, for every $k$, a nontrivial radial solution $(\tilde u_k,\tilde v_k)$ of
\begin{equation}\label{eq:henon-1}
\left \{
\begin{aligned}
-\Delta \tilde u_k + \mu_1 \tilde u_k &= |x|^{\alpha_k}\partial_u F(\tilde u_k,\tilde v_k)&&\qquad \text{in $\Omega$,}\\
-\Delta \tilde v_k + \mu_2 \tilde v_k &= |x|^{\alpha_k}\partial_v F(\tilde u_k,\tilde v_k)&&\qquad \text{in $\Omega$,}\\
\tilde u_k&=\tilde v_k=0&&\qquad \text{on $\partial \Omega$,}
\end{aligned}
\right.
\end{equation}
with $\alpha=\alpha_k$ such that
\begin{equation}
\label{eq:morse-upper-bound}
\mu(\tilde u_k,\tilde v_k) \le m \qquad \text{for all $k \in \mathbb{N}$.}
\end{equation}
Let $L_k:= L_{\tilde u_k,\tilde v_k}^{\alpha_k}$ denote the linearized operator at
$(\tilde u_k,\tilde v_k)$ as defined in (\ref{eq:linearized-operator}), i.e.,
$$
L_k \phi := -\Delta { \phi_1 \choose \phi_2 } +
{\mu_1 \phi_1 \choose \mu_2 \phi_2 } - |x|^{\alpha_k} D^2 F(\tilde u_k,\tilde v_k) \phi .
$$
By (\ref{eq:morse-upper-bound}), $L_k$ has at most $m$ negative Dirichlet eigenvalues. Let $\Delta_\theta$ denote
the Laplace-Beltrami operator on the unit sphere. In the following, we restrict our attention to eigenfunctions of the form
$$
x \mapsto \phi(x) = Y_{\ell}(\theta)w(r)\quad
\text{with}\quad w(r)= {w_1(r) \choose w_2(r)}
$$
for $r = |x|$ and $\theta = \frac{x}{|x|}$. Here $Y_\ell$ is a spherical harmonic of degree $\ell$, i.e. an eigenfunction of $-\Delta_\theta$ on the unit sphere $\mathbb S$ corresponding to the eigenvalue $\lambda_\ell := \ell(\ell+N-2)$.
Then the problem
$$
L_k \phi = \mu \phi \quad
\text{in $\Omega$},\qquad \phi = 0 \quad \text{on
$\partial \Omega$}
$$
reduces to
\begin{equation}
\label{eq:radial-variable-linear-problem}
-\Delta_r w +
\frac{\lambda_\ell}{r^2}w - V_k (r) w = \mu w \quad \text{on
$[0,1]$}, \qquad w(1)=0,
\end{equation}
with
$$
V_k(r):= r^{\alpha_k}D^2 F(\tilde u_k,\tilde v_k)(r) - \left (\begin{array}{cc}
\mu_1 & 0 \\
0 & \mu_2
\end{array}
\right)
\quad \in \mathbb{R}^{2 \times 2}\qquad \text{for $r \in [0,1]$.}
$$
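This reduction is the standard separation of variables for the Laplacian in polar coordinates: for $\phi = Y_\ell(\theta)w(r)$ one has
$$
\Delta \phi = \Bigl(\Delta_r w - \frac{\lambda_\ell}{r^2}\, w\Bigr) Y_\ell \qquad \text{with}\qquad \Delta_r w = w'' + \frac{N-1}{r}\, w',
$$
since $-\Delta_\theta Y_\ell = \lambda_\ell Y_\ell$; hence $L_k \phi = \mu \phi$ in $\Omega$ with $\phi = 0$ on $\partial \Omega$ holds precisely when $w$ solves (\ref{eq:radial-variable-linear-problem}).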
We call $\mu \in \mathbb{R}$ an eigenvalue of (\ref{eq:radial-variable-linear-problem}) if there exists a solution $w \in C^2([0,1])\setminus \{0\}$ of (\ref{eq:radial-variable-linear-problem}). We claim that
\begin{equation}
\label{eq:eigenvalue-radial-text}
\begin{aligned}
&\text{for every $k \in \mathbb{N}$ there exists $\ell_k \in \{0,\dots,m\}$ such that}\\
&\text{(\ref{eq:radial-variable-linear-problem}) admits only nonnegative eigenvalues for $\ell = \ell_k$.}
\end{aligned}
\end{equation}
Indeed, if this is not the case, then there exist $k \in \mathbb{N}$ and $w_0,\dots,w_m \in C^2((0,1)) \cap C([0,1]) \setminus \{0\}$ such that $w_j$ solves (\ref{eq:radial-variable-linear-problem}) with $\ell = j$ and some eigenvalue $\mu=\mu_j<0$ for $j=0,\dots,m$. We may then pick a spherical harmonic $Y_j$ of degree $j$ for $j=0,\dots,m$ and define $v_j \in H^2(\Omega) \cap H^1_0(\Omega)$ in polar coordinates by $v_j(r,\theta)=Y_j(\theta) w_j(r)$. Then the functions $v_0,\dots,v_m$ are orthogonal in $L^2(\Omega)$, since the functions $Y_0,\dots,Y_m$ are orthogonal in $L^2(\mathbb S)$. Moreover, every $v_j$ is an eigenfunction of $L_k$ associated with a negative eigenvalue, and thus $L_k$ has at least $m+1$ negative eigenvalues. This contradicts (\ref{eq:morse-upper-bound}), and thus the claim~(\ref{eq:eigenvalue-radial-text}) is true.\\
As a consequence of (\ref{eq:eigenvalue-radial-text}), there exists $\ell \in \{0,\dots,m\}$ such that, after passing to a subsequence in $k$,
\begin{equation}
\label{eq:eigenvalue-radial-text-1}
\begin{aligned}
&\text{the eigenvalue problem (\ref{eq:radial-variable-linear-problem}) admits only}\\
&\text{nonnegative eigenvalues for every $k \in \mathbb{N}$.}
\end{aligned}
\end{equation}
It is now convenient to use, inspired by Byeon-Wang \cite{B-W}, the change of variable
$r= e^{-\beta_k t}$ with $\beta_k = \frac{N}{N+\alpha_k}$, which leads to
$\partial_r = -\frac{e^{\beta_k t}}{\beta_k} \partial_t$ and therefore
$$
\Delta_r = r^{1-N}\partial_r (r^{N-1} \partial_r) = \frac{e^{\beta_k N
t}}{\beta_k^{2}} \partial_t \Bigl(e^{(2-N) \beta_k t}\partial_t\Bigr).
$$
Hence, setting $\gamma_k:= (N-2) \beta_k$,
we see that the transformed functions
$$
u_k, v_k: [0,\infty) \to \mathbb{R}, \qquad u_k(t):= \beta_k^{\frac{2}{p-2}} \tilde u_k (e^{-\beta_k t}), \quad v_k(t):= \beta_k^{\frac{2}{p-2}} \tilde v_k (e^{-\beta_k
t})
$$
solve the system
\begin{equation}
\label{eq:henon-transformed}
\left \{
\begin{aligned}
-\bigl(e^{-\gamma_k t} {u_k}'\bigr)' + {\beta_k}^2\mu_1 e^{-{\beta_k} N t}{u_k} &= e^{-N t}\partial_u F(u_k,v_k)&&\quad \text{in $(0,\infty)$,}\\
-\bigl(e^{-\gamma_k t} {v_k}'\bigr)' + {\beta_k}^2 \mu_2 e^{-{\beta_k} Nt}{v_k} &= e^{-N t}\partial_v F(u_k,v_k)&&\quad \text{in $(0,\infty)$,}\\
{u_k} (0)&={v_k}(0)=0.
\end{aligned}
\right.
\end{equation}
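For the reader's convenience, we note why the weight on the right-hand side of (\ref{eq:henon-transformed}) no longer depends on $k$: since $|x|^{\alpha_k} = e^{-\alpha_k \beta_k t}$ and $\beta_k(N+\alpha_k)=N$, multiplying the original equations by $\beta_k^2 e^{-\beta_k N t}$ produces the weight
$$
\beta_k^2\, e^{-\beta_k N t}\, e^{-\alpha_k \beta_k t} = \beta_k^2\, e^{-\beta_k(N+\alpha_k)t} = \beta_k^2\, e^{-N t},
$$
and the remaining factor $\beta_k^2$ is absorbed by the normalization $\beta_k^{\frac{2}{p-2}}$ in the definition of $u_k, v_k$, using the $(p-1)$-homogeneity of $\partial_u F$ and $\partial_v F$.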
Here and in the following, the prime stands for $\partial_t$. Moreover, setting
$$
h(t)= w(e^{-\beta_k t}) = {w_1(e^{-\beta_k t}) \choose
w_2(e^{-\beta_k t})}
$$
and putting $\lambda= \lambda_\ell \ge 0$, we see that (\ref{eq:radial-variable-linear-problem}) transforms into
\begin{equation}
\label{eq:radial-variable-linear-problem-1}
-\bigl(e^{-\gamma_k t}h' \bigr)' +\beta_k^2 \lambda e^{-\gamma_k t} h- U_k(t) h = \mu \beta_k^2 e^{-\beta_k N t} h \quad \text{on
$[0,\infty)$}
\end{equation}
subject to the conditions
\begin{equation}
\label{eq:radial-variable-linear-problem-1-cond}
h(0)=0,\qquad h(\infty)=\lim_{t \to \infty}h(t) \quad \text{exists,}
\end{equation}
where
\begin{align*}
U_k(t)&:= \beta_k^2 e^{- N t} D^2 F(\tilde u_k,\tilde v_k)(e^{-\beta_k t}) - \beta_k^{2} e^{- \beta_k N t} \left (\begin{array}{cc}
\mu_1 & 0 \\
0 & \mu_2
\end{array}
\right)\\
&= e^{- N t} D^2 F(u_k, v_k)(t) - \beta_k^{2} e^{- \beta_k N t} \left (\begin{array}{cc}
\mu_1 & 0 \\
0 & \mu_2
\end{array}
\right)\;\in \mathbb{R}^{2 \times 2}\qquad \text{for $t \ge 0$.}
\end{align*}
Here we used the fact that the function $\mathbb{R}^2 \to \mathbb{R},\; (u,v) \to D^2 F(u,v)$ is $(p-2)$-homogeneous, which follows easily from assumption $(F)$. We call $\mu \in \mathbb{R}$ an eigenvalue of (\ref{eq:radial-variable-linear-problem-1}),~(\ref{eq:radial-variable-linear-problem-1-cond}) if there exists a bounded solution
$h \in C^2([0,\infty)) \setminus \{0\}$ of (\ref{eq:radial-variable-linear-problem}) such that $~(\ref{eq:radial-variable-linear-problem-1-cond})$ holds. It then follows immediately from~(\ref{eq:eigenvalue-radial-text-1}) that the eigenvalue problem (\ref{eq:radial-variable-linear-problem-1}) admits only nonnegative eigenvalues for every $k \in \mathbb{N}$. Applying Lemma~\ref{sec:morse-index-system-help-lemma} from the Appendix for fixed $k$ with $\gamma= \gamma_k$, $\delta= N \beta_k$ and $U(t):= e^{\beta_k N t}U_k(t)$, we deduce that
\begin{equation}
\label{eq:Q_k-nonnegative}
Q_k(\varphi) \ge 0 \qquad \text{for every $\varphi \in C_c^\infty((0,\infty),\mathbb{R}^2)$,}
\end{equation}
where
$$
Q_k(\varphi):= \int_{0}^\infty \Bigl(e^{-\gamma_k t} |\varphi'(t)|^2 + \lambda \beta_k^2 e^{-\gamma_k t}|\varphi(t)|^2 - \bigl \langle U_k(t)\varphi(t),\varphi(t) \bigr \rangle\Bigr)\,dt.
$$
Hence we may apply Proposition~\ref{sec:morse-index-system} below to deduce that $u_k \equiv v_k \equiv 0$ for $k$ sufficiently large. This is a contradiction.
Thus, the following Proposition completes the proof of Theorem~\ref{th1.4}.
\begin{proposition}
\label{sec:morse-index-system} Let $\lambda,\mu_1,\mu_2 \ge 0$ be constants, and let $\beta_k, \gamma_k >0$, $k \in \mathbb{N}$ with $\lim \limits_{k \to \infty} \beta_k = \lim \limits_{k \to \infty} \gamma_k= 0$. Moreover, for $k \in \mathbb{N}$,
let $(u_k, v_k) \in C^2([0,\infty),\mathbb{R}^2)$ be bounded solutions of the system
\begin{equation}
\label{eq:henon-transformed-1}
\left \{
\begin{aligned}
-\bigl(e^{-\gamma_k t}u_k' \bigr)' + \beta_k^2 e^{-\beta_k N t} \mu_1 u_k &= e^{-N t}\partial_u F(u_k,v_k)&&\quad \text{in $(0,\infty)$,}\\
-\bigl(e^{-\gamma_k t}v_k' \bigr)' + \beta_k^2 e^{-\beta_k N t}\mu_2 v_k &= e^{-N t}\partial_v F(u_k,v_k)&&\quad \text{in $(0,\infty)$,}\\
u_k (0)&=v_k(0)=0.
\end{aligned}
\right.
\end{equation}
Assume furthermore that
\begin{equation}
\label{eq:stability-k}
Q_k(\varphi) \ge 0 \qquad \text{for every $\varphi \in C^1_c((0,\infty),\mathbb{R}^2)$, $k \in \mathbb{N}$,}
\end{equation}
where
$$
Q_k(\varphi):= \int_{0}^\infty \Bigl(e^{-\gamma_k t} |\varphi'(t)|^2 + \lambda \beta_k^2 e^{-\gamma_k t}|\varphi(t)|^2 - \bigl \langle U_k(t)\varphi(t),\varphi(t) \bigr \rangle\Bigr)\,dt
$$
and $U_k: [0, \infty) \to \mathbb{R}^{2 \times 2}$ is defined by
$$
U_k(t):= e^{-Nt} D^2 F(u_k,v_k)(t) - \beta_k^{2}e^{-\beta_k N t} \left (\begin{array}{cc}
\mu_1 & 0 \\
0 & \mu_2
\end{array}
\right)
$$
Then $(u_k,v_k) \equiv 0$ for $k$ sufficiently large.
\end{proposition}
\begin{proof}
We may rewrite the system (\ref{eq:henon-transformed-1}) as
$$
\left \{
\begin{aligned}
-u_k'' + \gamma_k u_k' + {\beta_k}^2 e^{\gamma_k t} W_1(t) u_k &= e^{(\gamma_k-N) t}\partial_u F(u_k,v_k)\\
-v_k'' + \gamma_k v_k' + {\beta_k}^2 e^{\gamma_k t} W_2(t) v_k &= e^{(\gamma_k-N) t}\partial_v F(u_k,v_k)\\
u_k (0)&=v_k(0)=0,
\end{aligned}
\right.
$$
with $W_i(t):= e^{-\beta_k N t}\mu_i$ for $i=1,2$ and $t \ge 0$. By a blow up argument based on the Liouville Theorem~\ref{sec:limit-system-1} and a variant of the doubling lemma of Polacik, Quittner and Souplet \cite{PQS}, we first wish to show that the sequence $(u_k,v_k)_k$ is bounded in $L^\infty_{loc}([0,\infty),\mathbb{R}^2)$. For this we consider the functions
$$
M_k:= \max \{ |u_k|^{\frac{p-2}{2}}, |v_k|^{\frac{p-2}{2}}\} : [0,\infty) \to \mathbb{R}, \qquad k \in \mathbb{N}.
$$
Arguing by contradiction, let us assume that there exists a bounded sequence $(s_k)$ in $[0,\infty)$ such that ${N_k}:= M_k(s_k) \to \infty$ as $k \to \infty$. Applying Lemma~\ref{doubling-simplified-further} from the Appendix to $X:=[0,\infty)$,
we find another bounded sequence $(t_k)_k$ in $[0,\infty)$
such that, for $k \in \mathbb{N}$,
$$
M_k(t_k) \ge {N_k} \qquad \text{and}\qquad M_k(t) \le 2 M_k(t_k)\quad \text{for
$t \in B_{\sigma_k {N_k}}(t_k) \cap [0,\infty)$,}
$$
where $\sigma_k:= \frac{1}{M_k(t_k)}$ for $k \in \mathbb{N}$. Passing to a subsequence, we may assume that
$$
t_k \to t_0 \in [0,\infty) \qquad \text{as $k \to \infty$.}
$$
We now put $c_k:= \frac{t_k}{\sigma_k}$ for $k \in \mathbb{N}$ and distinguish two cases. \\[0.2cm]
{\bf Case 1:} $c_k \to \infty$ as $k \to \infty$.\\
In this case, we consider the functions $\bar u_{k}, \bar
v_{k}$ on \mbox{$I_k:= [-c_k,\infty)$} given by
$$
\bar u_k(t):= {\sigma_k}^{\frac{2}{p-2}} u_k(t_k+ {\sigma_k} t),\quad \bar v_k(t):= {\sigma_k}^{\frac{2}{p-2}} v_k(t_k + {\sigma_k} t)\qquad \text{for $k \in \mathbb{N}$.}
$$
These functions solve
$$
\left \{
\begin{aligned}
-\bar u_k'' + {\sigma_k} \gamma_k
\bar u_k' + {\sigma_k}^{2} {\beta_k}^2 W_1^k(t) \bar u_k &= e^{[\gamma_k-N](t_k+ \sigma_k t) }\partial_u F(\bar u_k,\bar v_k)
\\
-\bar v_k'' + {\sigma_k} \gamma_k \bar v_k' + {\sigma_k}^2{\beta_k}^2 W_2^k(t) \bar v_k &= e^{[\gamma_k-N](t_k+ \sigma_k t) }\partial_v F(\bar u_k,\bar v_k)
\end{aligned}
\right.
$$
in $I_k$ with
$$
W_i^k(t):= e^{\gamma_k (t_k + \sigma_k t)} W_i (t_k + \sigma_k t)\qquad \text{for $i=1,2$, $k \in \mathbb{N}$ and $t \in I_k$.}
$$
Moreover, we have $\max \{|\bar u_k(0)|,|\bar v_k(0)|\}=1$ and
$$
\max \{|\bar u_k(t)|,|\bar v_k(t)|\} \le 2^{\frac{2}{p-2}} \qquad \text{in $(-{N_k} ,{N_k} ) \cap I_k$}
$$
Since the functions $W_i^k$ remain locally bounded as $k \to \infty$, we may apply elementary ODE regularity estimates in order to pass to a subsequence with
$$
\bar u_k \to \bar u ,\quad \bar v_k \to \bar v \qquad \text{locally
uniformly in $\mathbb{R}$}
$$
where $\max \{\bar u(0), \bar v(0)\}=1$ and $(\bar u, \bar v)$
satisfies the limit system
\begin{equation}
\label{eq:limit-system-rescaling-entire}
\left \{
\begin{aligned}
-\bar u'' &= e^{-Nt_0}\partial_u F(\bar u,\bar v), &&\qquad \text{in $\mathbb{R}$}\\
-\bar v'' &= e^{-Nt_0}\partial_v F(\bar u,\bar v), &&\qquad \text{in $\mathbb{R}$}\\
\end{aligned}
\right.
\end{equation}
In particular, $(\bar u,\bar v)$ is nontrivial. By Theorem~\ref{sec:limit-system-1} -- applied with $e^{-N t_0} F$ in place of $F$-- it then follows that $(\bar u,\bar v)$ is not stable in $\mathbb{R}$, so there exists $\varphi= (\varphi_1,\varphi_2) \in C^1_c(\mathbb{R},\mathbb{R}^2)$ such that
\begin{equation}
\label{eq:not-stable-limit-1}
\int_{\mathbb{R}} \Bigl(|\varphi'(t)|^2
-e^{-N t_0} \bigl \langle D^2 F(\bar u,\bar v) \varphi, \varphi \bigr \rangle(t)
\Bigr)\,dt <0.
\end{equation}
Since
$$
\lim_{k \to \infty} (\bar u_k(t), \bar v_k(t))= (\bar u(t), \bar v(t))\qquad \text{and} \qquad
\lim_{k \to \infty} (t_k+ \sigma_k t ) = t_0
$$
uniformly in $t$ on the support of $\varphi$, we also have that
\begin{align*}
&\sigma_k^2 U_k(t_k + \sigma_k t)\\
&= e^{-N(t_k + \sigma_k t)} D^2 F(\bar u_k,\bar v_k)(t) - \sigma_k^2 \beta_k^{2} \left (\begin{array}{cc}
W_1 (t_k + \sigma_k t) & 0 \\
0 & W_2 (t_k + \sigma_k t)
\end{array}
\right)\\
&\longrightarrow\; e^{-N t_0} D^2 F(\bar u,\bar v)(t) \qquad \text{uniformly in $t$ on the support of $\varphi$.}
\end{align*}
Here we used the fact that $\sigma_k ,\beta_k \to 0$ as $k \to \infty$. Recalling also that $\gamma_k \to 0$ as $k \to \infty$, it then follows from (\ref{eq:not-stable-limit-1}) that
\begin{align*}
q_k:= \int_{I_k}&
e^{-\gamma_k (t_k + \sigma_k t)} |\varphi'(t)|^2\\
+& \sigma_k^2 \Bigl(\beta_k^2 \lambda e^{-\gamma_k (t_k + \sigma_k t)}|\varphi(t)|^2 - \bigl \langle U_k(t_k + \sigma_k t) \varphi (t),\varphi (t) \bigr \rangle
\Bigr)\,dt <0
\end{align*}
for $k$ sufficiently large. Fixing $k$ with this property and large enough to guarantee that $I_k$ contains the support of $\varphi$, we may then write
$$
\varphi(t)= \psi(t_k + \sigma_k t) \qquad \text{with}\quad \psi= (\psi_1,\psi_2) \in C^1_c((0,\infty),\mathbb{R}^2).
$$
A change of variable then shows that
$$
q_k = \sigma_k \!\int_{0}^\infty\!\bigl(
e^{-\gamma_k \tau} |\psi'(\tau)|^2 + \lambda \beta_k^2 e^{-\gamma_k \tau}|\psi(\tau)|^2 -
\langle \psi(\tau),
U_k(\tau)\psi(\tau)\rangle
\bigr)d\tau = \sigma_k Q_k(\psi).
$$
Since $\sigma_k>0$, we conclude that $Q_k(\psi)= \frac{q_k}{\sigma_k}<0$, contradicting the assumption (\ref{eq:stability-k}).\\
{\bf Case 2:} For a subsequence, $c_k:= \frac{t_k}{\sigma_k} \to c \ge 0$ as $k \to \infty$.\\[0.2cm]
In this case we have $t_0 = 0$, and we consider the functions
$$
\bar u_{k}, \bar
v_{k}: [0,\infty) \to \mathbb{R},\qquad
\bar u_k(t):= {\sigma_k}^{\frac{2}{p-2}} u_k({\sigma_k} t),\; \bar v_k(t):= {\sigma_k}^{\frac{2}{p-2}} v_k({\sigma_k} t)
$$
for $k \in \mathbb{N}$. These functions solve
$$
\left \{
\begin{aligned}
-\bar u_k'' + {\sigma_k} \gamma_k
\bar u_k' + {\sigma_k}^{2} {\beta_k}^2
W_1^k(t) \bar u_k &= e^{[\gamma_k-N] \sigma_k t }\partial_u F(\bar u_k, \bar v_k)
\\
-\bar v_k'' + {\sigma_k} \gamma_k \bar v_k' + {\sigma_k}^2 {\beta_k}^2 W_2^k(t)
\bar v_k &= e^{[\gamma_k-N] \sigma_k t }\partial_v F(\bar u_k,\bar v_k)
\end{aligned}
\right.
$$
with
$$
W_i^k(t):= e^{\gamma_k \sigma_k t} W_i (\sigma_k t)\qquad \text{for $i=1,2$, $k \in \mathbb{N}$ and $t \ge 0$.}
$$
in $[0,\infty)$. Moreover, $\max \{|\bar u_k(c_k)|,|\bar v_k(c_k)|\}=1$ and
$$
\max \{|\bar u_k(t)|,|\bar v_k(t)|\} \le 2^{\frac{2}{p-2}} \qquad \text{in $[0,c_k + {N_k} )$}
$$
Here we suppose that $k$ is sufficiently large so that ${N_k} \ge c_k$. Applying elementary ODE regularity theory, we may pass to a subsequence such that
$$
\bar u_k \to \bar u ,\quad \bar v_k \to \bar v \qquad \text{locally
uniformly in $[0,\infty)$}
$$
where $\max \{\bar u(c), \bar v(c)\}=1$ and $(\bar u, \bar v)$
satisfies the limit problem
\begin{equation}
\label{eq:limit-system-rescaling-half}
\left \{
\begin{aligned}
-\bar u'' &= \partial_u F(\bar u,\bar v), &&\qquad \text{in $[0,\infty)$}\\
-\bar v'' &= \partial_v F(\bar u,\bar v), &&\qquad \text{in $[0,\infty)$}\\
\bar u(0)&= \bar v(0)=0.
\end{aligned}
\right.
\end{equation}
A posteriori, it then follows that $c>0$. Moreover, by Theorem~\ref{sec:limit-system-1}, it follows that $(\bar u,\bar v)$ is not stable in $(0,\infty)$, so there exists $\varphi= (\varphi_1,\varphi_2) \in C^1_c((0,\infty),\mathbb{R}^2)$ such that
\begin{equation}
\label{eq:not-stable-limit-1-caseii}
\int_{0}^{\infty} \Bigl(|\varphi'(t)|^2
-\bigl \langle D^2 F(\bar u,\bar v) \varphi,\varphi \bigr \rangle(t)
\Bigr)\,dt <0.
\end{equation}
Since
$$
\lim_{k \to \infty} (\bar u_k(t), \bar v_k(t))= (\bar u(t), \bar v(t))\qquad \text{and} \qquad
\lim_{k \to \infty} \sigma_k t = 0
$$
uniformly in $t$ on the support of $\varphi$, we also have that
\begin{align*}
&\sigma_k^2 U_k(\sigma_k t)\\
&= e^{- \sigma_k N t} D^2 F(\bar u_k,\bar v_k)(t) - \sigma_k^2 \beta_k^{2} \left (\begin{array}{cc}
W_1(\sigma_k t) & 0 \\
0 & W_2(\sigma_k t)
\end{array}
\right)\\
&\longrightarrow\; D^2 F(\bar u,\bar v)(t) \qquad \text{uniformly in $t$ on the support of $\varphi$.}
\end{align*}
It then follows from (\ref{eq:not-stable-limit-1-caseii}) that
\begin{align*}
q_k:= \int_{0}^{\infty}&
e^{-\gamma_k \sigma_k t} |\varphi'(t)|^2\\
+& \sigma_k^2 \Bigl(\lambda \beta_k^2 e^{-\gamma_k \sigma_k t}|\varphi(t)|^2 - \bigl \langle U_k(\sigma_k t) \varphi(t),\varphi(t)\bigr \rangle
\Bigr)\,dt <0
\end{align*}
for $k$ sufficiently large. Writing
$$
\varphi(t)= \psi(\sigma_k t) \qquad \text{with}\quad \psi= (\psi_1,\psi_2) \in C^1_c((0,\infty),\mathbb{R}^2),
$$
we then see, by a change of variable,
$$
q_k = \sigma_k \!\int_{0}^\infty\!\! \bigl(
e^{-\gamma_k \tau} |\psi'(\tau)|^2 + \lambda \beta_k^2 e^{-\gamma_k \tau}|\psi(\tau)|^2 -
\langle \psi(\tau),
U_k(\tau) \psi(\tau) \rangle
\bigr)d\tau = \sigma_k Q_k(\psi),
$$
contradicting the assumption (\ref{eq:stability-k}).\\
Since in both cases we reached a contradiction, we conclude that
$$
\text{$(u_k,v_k)$ remains bounded on compact subsets of $[0,\infty)$ as $k
\to \infty$.}
$$
Hence, by elementary ODE regularity estimates, we may pass to a subsequence such that
$$
u_k \to u_{0}, \quad v_k \to v_{0} \qquad \text{in
$C^1_{loc}([0,\infty))$ as $k \to \infty$,}
$$
where $(u_0, v_0)$ is a solution of the limit system
\begin{equation}
\label{eq:henon-limit-system}
\left \{
\begin{aligned}
- u_0'' &= e^{-Nt}\partial_{u}F(u_0,v_0)&&\qquad \text{in $(0,\infty)$,}\\
- v_0'' &= e^{-Nt}\partial_{v}F(u_0,v_0) &&\qquad \text{in $(0,\infty)$,}\\
u_0(0)&=v_0(0)=0.
\end{aligned}
\right.
\end{equation}
Moreover, passing to the limit in (\ref{eq:stability-k}), we find that
\begin{equation}
\int_{0}^\infty \Bigl(|\varphi'|^2 - e^{-Nt} \bigl \langle D^2 F(u_0,v_0) \varphi, \varphi \bigr \rangle \Bigr)\,dt \ge 0
\label{stability-limit}
\end{equation}
for all $\varphi \in C_c^1((0,\infty),\mathbb{R}^2)$. Moreover, by Lemma~\ref{pohozaev-transformed} and Fatou's Lemma we have that
\begin{align}
&\int_0^\infty \bigl([u_0']^2 + [v_0']^2\bigr)\,dt \le \liminf_{k \to \infty} \int_0^\infty (e^{-\gamma_k t} u_k)'^2 + (e^{-\gamma_k t} v_k)'^2 \,dt \nonumber\\
&\le \frac{p}{2N} \lim_{k \to \infty} ([u_k'(0)]^2 + [v_k'(0)]^2) = \frac{p}{2N} \bigl([u_0'(0)]^2 + [v_0'(0)]^2\bigr)< \infty.\label{limit-finite-energy}
\end{align}
Applying Lemma~\ref{approx-lemma}, we thus find a sequence $\varphi_n=(\varphi_n^1,\varphi_n^2) \in C_c^1((0,\infty),\mathbb{R}^2)$ such that
$$
\lim_{n \to \infty} \int_0^\infty [(u_0-\varphi_n^1)'\,]^2\,dt = \lim_{n \to \infty} \int_0^\infty [(v_0-\varphi_n^2)'\,]^2\,dt= 0
$$
and
$$
\lim_{n \to \infty}e^{-\delta t}(u_0(t)-\varphi_n^1(t))=\lim_{n \to \infty}e^{-\delta t}(v_0(t)-\varphi_n^2(t))= 0
$$
uniformly in $t \ge 0$ for every $\delta>0$. Evaluating (\ref{stability-limit}) for $\varphi_n$ and passing to the limit, we thus see that
\begin{equation}
\label{stability-limit-1}
\int_{0}^\infty \Bigl([u_0']^2 +[v_0']^2 - e^{-Nt} \Bigl \langle D^2 F(u_0,v_0) {u_0 \choose v_0} , {u_0 \choose v_0} \Bigr \rangle \Bigr)\,dt \ge 0.
\end{equation}
On the other hand, by Lemma~\ref{approx-lemma} we also have
$$
u_0^2(t)+v_0^2(t) \le C t \qquad \text{with}\; C:=\int_0^\infty \bigl([u_0']^2 + [v_0']^2\bigr)\,dt.
$$
Moreover, it follows from \eqref{limit-finite-energy} that there exist $t_n \ge 0$, $n \in \mathbb{N}$ with $t_n \to \infty$ and
$$
\sqrt{t_n}\bigl(|u_0'(t_n)|+|v_0'(t_n)|\bigr) \to 0 \qquad \text{as $n \to \infty$.}
$$
Consequently,
\begin{align}
&\int_0^\infty \bigl([u_0']^2 + [v_0']^2\bigr)\,dt =\lim_{n \to \infty}
\int_0^{t_n} \bigl([u_0']^2 + [v_0']^2\bigr)\,dt \nonumber\\
& =\lim_{n \to \infty}\Bigl(u_0(t_n)u_0'(t_n)+v_0(t_n)v_0'(t_n)-\int_0^{t_n} \bigl(u_0'' u_0 + v_0'' v_0 \bigr)\,dt\Bigr) \nonumber\\
& =\lim_{n \to \infty} \int_0^{t_n}e^{-Nt} \bigl(\partial_u F(u_0,v_0)u_0 + \partial_v F(u_0,v_0)v_0 \bigr)\,dt \nonumber \\
&=\int_0^{\infty}e^{-Nt} \bigl(\partial_u F(u_0,v_0)u_0 + \partial_v F(u_0,v_0)v_0 \bigr)\,dt. \nonumber
\end{align}
Here the boundary terms vanish in the limit, since $u_0^2(t_n)+v_0^2(t_n)\le C t_n$ while $\sqrt{t_n}\,\bigl(|u_0'(t_n)|+|v_0'(t_n)|\bigr) \to 0$ as $n \to \infty$. Combining this with (\ref{stability-limit-1}) and (\ref{F-consequence-3}), we deduce that
\begin{align*}
0 &\le \int_{0}^\infty e^{-Nt} \Bigl(\partial_u F(u_0,v_0)u_0 + \partial_v F(u_0,v_0)v_0- \Bigl \langle D^2 F(u_0,v_0) {u_0 \choose v_0} , {u_0 \choose v_0} \Bigr \rangle \Bigr)\,dt\\
&\le - p(p-2)c_F \int_0^\infty e^{-Nt}\bigl(|u_0|^p+|v_0|^p\bigr)\,dt,
\end{align*}
which implies that $u_0 \equiv v_0 \equiv 0$. Consequently,
$$
\lim_{k \to \infty} \bigl([u_k'(0)]^2 + [v_k'(0)]^2\bigr) = \bigl([u_0'(0)]^2 + [v_0'(0)]^2\bigr) = 0
$$
yielding $u_k \equiv v_k \equiv 0$ for $k$ sufficiently large by Lemma~\ref{pohozaev-transformed} and the assumption that $\gamma_k \to 0$ as $k \to \infty$. The proof is finished.
\end{proof}
\section{Appendix}
\label{sec:appendix}
\subsection{Part A: A note on a linear ODE system on the half line}
The following lemma is not surprising, as it relates the variational instability of a linear ODE system to the existence of negative eigenvalues and corresponding eigenfunctions.
Since the proof is rather technical and not entirely straightforward, we include it for the convenience of the reader.
\begin{lemma}
\label{sec:morse-index-system-help-lemma}
Let $\delta> \gamma>0$, $\lambda \ge 0$, and let $U:[0,\infty) \to \mathbb{R}^{2 \times 2}$ be a bounded continuous function taking symmetric real $2 \times 2$-matrices as values. Suppose that there exists a function $\varphi \in C^1_c((0,\infty),\mathbb{R}^2)$ such that
$$
Q(\varphi):=\int_{0}^\infty \Bigl(e^{-\gamma t}|\varphi'(t)|^2 + \lambda e^{-\gamma t}|\varphi(t)|^2 - e^{-\delta t} \bigl \langle U(t)\varphi(t),\varphi(t) \bigr \rangle\Bigr)\,dt <0.
$$
Then there exist $\mu<0$ and a function $h \in C^2([0,\infty),\mathbb{R}^2)$ such that
\begin{equation}
\label{eq:radial-variable-linear-problem-1-help-lemma}
- \partial_t \bigl(e^{-\gamma t}\partial_t h\bigr) + \lambda e^{-\gamma t}h - e^{-\delta t} U(t) h
= \mu e^{-\delta t} h \quad \text{in
$[0,\infty)$}
\end{equation}
and such that
$$
h(0)=0, \qquad h(\infty)=\lim \limits_{t \to \infty}h(t)\; \text{exists.}
$$
\end{lemma}
\begin{proof}
In the following, $C>0$ always denotes a positive constant depending only on $\delta,\gamma$ and $U$. We introduce the weighted Sobolev space $H_*$ given as the completion of $C^1_c((0,\infty),\mathbb{R}^2)$ with respect to the norm $\|\cdot\|_*$ defined by
$$
\|h\|_*^2 = \int_{0}^\infty e^{-\gamma t}|h'(t)|^2\, dt
$$
Then $H_*$ is a Hilbert space. Moreover, for $h \in C^1_c((0,\infty),\mathbb{R}^2)$ we have, integrating by parts,
$$
\int_{0}^\infty e^{-\gamma t}|h(t)|^2\,dt = \frac{2}{\gamma}
\int_{0}^\infty e^{-\gamma t} \langle h(t), h'(t) \rangle \,dt
\le \frac{2}{\gamma}\Bigl(\int_{0}^\infty e^{-\gamma t}|h(t)|^2\,dt\Bigr)^{\frac{1}{2}}\|h\|_*
$$
and thus
$$
\|h\|_{L^2_\gamma}:= \Bigl(\int_{0}^\infty e^{-\gamma t}|h(t)|^2\,dt\Bigr)^{\frac{1}{2}} \le \frac{2}{\gamma}\|h\|_*.
$$
This estimate extends to functions on $H_*$ and shows that the quadratic form $Q$ is well defined on $H_*$. Moreover, for $h \in C^1_c((0,\infty),\mathbb{R}^2)$ we have the pointwise estimate
\begin{align*}
|h(t)|^2= 2 \int_0^t \langle h(s), h'(s) \rangle \,ds &\le 2 e^{\gamma t}
\int_0^t e^{-\gamma s} \langle h(s), h'(s) \rangle \,ds\\
&\le 2 e^{\gamma t} \|h\|_{L^2_\gamma} \|h\|_* \le \frac{4}{\gamma} e^{\gamma t} \|h\|_*^2 \quad \text{for $t \ge 0$.}
\end{align*}
This pointwise estimate also extends to functions in $H_*$, and it easily shows that every $h \in H_*$ is continuous on $[0,\infty)$ with $h(0)=0$
and
\begin{equation}
\label{eq:pointwise-estimate-h}
|h(t)| \le \frac{2}{\sqrt{\gamma}} \|h\|_* e^{\frac{\gamma}{2}t} \qquad \text{for $t \ge 0$.}
\end{equation}
Now by assumption we have
$$
\mu:= \inf_{M}Q \;<\;0 \qquad \text{for}\quad M:= \Bigl \{\varphi \in H_*\::\: \int_0^\infty e^{-\delta t}|\varphi(t)|^2\,dt=1 \Bigr \}.
$$
Let $(h_n)_n \subset M$ be a sequence with $Q(h_n) \to \mu$ as $n \to \infty$. Since
$$
\int_0^\infty e^{-\delta t} \langle U(t)h_n(t),h_n(t) \rangle dt \le \|U\|_\infty \int_0^\infty e^{-\delta t}|h_n(t)|^2\,dt =\|U\|_\infty \qquad \text{for $n \in \mathbb{N}$,}
$$
it follows that $\mu> -\infty$ and that $h_n$ is bounded in $H_*$. Passing to a subsequence,
we then have
\begin{equation}
\label{eq:L2-loc-est}
h_n \rightharpoonup h \quad \text{in $H_*$}\qquad \text{and}\qquad h_n \to h \quad \text{in $L^2_{loc}([0,\infty))$.}
\end{equation}
As a consequence of (\ref{eq:pointwise-estimate-h}), we also find that
\begin{equation*}
\int_{t}^\infty \!\!e^{-\delta s}h_n^2(s) ds \le \frac{4}{\gamma} \|h_n \|_*^2 \!\int_{t}^\infty \!\!e^{(\gamma-\delta)s}ds = \frac{4 }{\gamma (\delta -\gamma)} \|h_n \|_*^2 e^{(\gamma-\delta)t} \quad \text{for $t \ge 0$,}
\end{equation*}
where the RHS tends to zero as $t \to \infty$ uniformly in $n \in \mathbb{N}$. Combining this with (\ref{eq:L2-loc-est}), we conclude that
$$
\int_{0}^\infty e^{-\delta t}|h|^2 dt= \lim_{n \to \infty}\int_{0}^\infty e^{-\delta t}|h_n|^2 dt=1,
$$
which implies that $h \in M$. By the weak lower semicontinuity of $Q$, it then follows that $h$ is a minimizer of $Q$ on $M$. By a standard argument in the calculus of variations, this implies that $h$ is a classical solution of~(\ref{eq:radial-variable-linear-problem-1-help-lemma}) in the open interval $(0,\infty)$. Since, as remarked above, we also have $h \in
C([0,\infty),\mathbb{R}^2)$ and $h(0)=0$, we may use ~(\ref{eq:radial-variable-linear-problem-1-help-lemma}) to see that $h \in C^2([0,\infty),\mathbb{R}^2)$.
It remains to show that
\begin{equation}
\label{eq:claim-limit-exist}
\text{the limit $\;h(\infty)= \lim_{t \to \infty} h(t)\;$ exists.}
\end{equation}
For this we need to distinguish two cases.\\
{\underline{Case 1:}} $\lambda>0$.\\
In this case we rewrite (\ref{eq:radial-variable-linear-problem-1-help-lemma}) as
\begin{equation}
\label{eq:radial-variable-linear-problem-1-help-lemma-rewrite}
h'' = \gamma h' +\lambda h - e^{(\gamma-\delta)t}V(t)h \qquad \text{with}\quad V(t)= \mu \, 1_{\mathbb{R}^{2\times 2}} + U(t).
\end{equation}
We then consider
$$
v: [0,\infty) \to \mathbb{R}, \qquad v(t)=|h(t)|^2= h_1^2(t)+ h_2^2(t),
$$
and we compute that
\begin{align*}
v''= 2 (|h'|^2 + \langle h'', h \rangle ) &= 2 \Bigl(|h'|^2 + \gamma \langle h', h \rangle + \lambda |h|^2 - \langle e^{(\gamma-\delta)t}V(t) h, h\rangle\Bigr)\\
&= 2|h'|^2 + \gamma v' + 2 \Bigl(\lambda v - \langle e^{(\gamma-\delta)t}V(t) h, h\rangle\Bigr)\\
&\ge 2|h'|^2 + \gamma v' + 2 \Bigl(\lambda - e^{(\gamma-\delta)t} \|V\|_\infty \Bigr)v.
\end{align*}
Since $\gamma< \delta$, we may fix $t_0> 0$ such that $e^{(\gamma-\delta)t} \|V\|_\infty < \frac{\lambda}{2}$ for $t \ge t_0$, which yields that
$$
v'' -\gamma v' \ge 2|h'|^2 + \lambda v \ge 0 \qquad \text{on $[t_0,\infty)$.}
$$
With $\gamma_0:= \frac{1}{2}\min \{2,\lambda\}>0$ and since $v'= 2\langle h', h \rangle \le |h'|^2 + |h|^2$, we also have that
$$
2|h'|^2 + \lambda v \ge 2\gamma_0 (|h'|^2 + |h|^2) \ge \gamma_0 v' + \gamma_0(|h'|^2 + |h|^2)
\ge \gamma_0 v' + \frac{\gamma_0}{2}(|h'|^2 + |h|^2),
$$
and thus
\begin{equation}
\label{eq:appendix-diff-ineq}
v'' - (\gamma+ \gamma_0) v' \ge \frac{\gamma_0}{2}(|h'|^2 + |h|^2) \ge 0 \qquad \text{on $[t_0,\infty)$.}
\end{equation}
Consequently, the function $v'- (\gamma+ \gamma_0) v$ is increasing on $[t_0,\infty)$, and thus
$$
c:= \lim \limits_{t \to \infty}\bigl[v'(t)-(\gamma+ \gamma_0) v(t)\bigr] \; \in (-\infty,\infty]\qquad \text{exists.}
$$
We suppose by contradiction that $c \in (0,\infty]$. Then there exists $t_1 \ge t_0$ and $\rho>0$ such that
$$
v' - (\gamma+ \gamma_0) v \ge \rho \qquad \text{on $[t_1,\infty)$}
$$
and thus
$$
\liminf_{t \to \infty} v(t)e^{-(\gamma+\gamma_0)t}>0,
$$
whereas (\ref{eq:pointwise-estimate-h})
implies that
\begin{equation}
\label{eq:pointwise-estimate-h-0-1}
v(t) \le \frac{4}{\gamma} \|h\|_*^2 \, e^{\gamma t} \qquad \text{for $t \ge 0$.}
\end{equation}
This is a contradiction, and thus $c \in (-\infty,0]$. Along a sequence $t_n \to \infty$ we then have, by (\ref{eq:appendix-diff-ineq}),
$$
0 = \lim_{n \to \infty}\Bigl(v''(t_n) -(\gamma+ \gamma_0) v'(t_n)\Bigr) \ge \frac{\gamma_0}{2} \limsup_{n \to \infty}\bigl( |h'(t_n)|^2+|h(t_n)|^2\bigr),
$$
which implies that $h(t_n) \to 0$ and $h'(t_n) \to 0$. Consequently, $v(t_n) \to 0$ and $v'(t_n)= 2 \langle h'(t_n), h(t_n) \rangle \to 0$ and thus
$$
c= \lim_{n \to \infty} \Bigl(v'(t_n) -(\gamma+\gamma_0) v(t_n)\Bigr) = 0.
$$
Next we put $\tilde v(t):= v(t)e^{-(\gamma+\gamma_0)t}$, so that $\lim \limits_{t \to \infty} \tilde v(t)= 0$ by (\ref{eq:pointwise-estimate-h-0-1}).
As $c=0$, we find that
$$
\tilde v'(t) = \bigl(v'(t)-(\gamma+\gamma_0)v(t)\bigr)e^{-(\gamma+\gamma_0)t}= o(e^{-(\gamma+\gamma_0)t})\qquad \text{as $t \to \infty$}
$$
and hence
$$
|\tilde v(t)|= \Bigl| \int_t^\infty \tilde v'(s)\,ds \Bigr| = o(e^{-(\gamma+\gamma_0)t}) \qquad \text{as $t \to \infty$,}
$$
which implies that
$$
v(t) = e^{(\gamma+\gamma_0)t}\tilde v(t) \to 0 \qquad \text{as $t \to \infty$.}
$$
Hence we conclude that $h(t) \to 0$ as $t \to \infty$, so (\ref{eq:claim-limit-exist}) holds.\\
{\underline{ Case 2:}} $\lambda=0$.\\
In this case we have, by (\ref{eq:radial-variable-linear-problem-1-help-lemma})
\begin{equation}
\label{equation-estimate-appendix}
|\partial_t \bigl(e^{-\gamma t} h'(t)\bigr)| \le C
e^{- \delta t} |h(t)| \qquad \text{for $t \ge 0$}
\end{equation}
and thus
\begin{equation}
\label{equation-estimate-appendix-1}
|\partial_t \bigl(e^{-\gamma t} h'(t)\bigr)| \le C e^{(\frac{\gamma}{2} - \delta) t} \qquad \text{for $t \ge 0$}
\end{equation}
by (\ref{eq:pointwise-estimate-h}). Here and in the following, the letter $C$ denotes different positive constants. Since $\delta>\gamma > \frac{\gamma}{2}$, we infer that the limit $\lim \limits_{t \to \infty} e^{-\gamma t} h'(t)$ exists, and this limit must be zero since $\|h\|_*<\infty$. We may then integrate
(\ref{equation-estimate-appendix-1}) to see that
$$
|e^{-\gamma t} h'(t)| \le C e^{(\frac{\gamma}{2} -\delta)t}
$$
and thus $|h'(t)| \le C e^{(\frac{3}{2}\gamma-\delta)t}$ for $t \ge 0$.
If $\frac{3}{2}\gamma-\delta \ge 0$, integration then shows that
$$
|h(t)|= \Bigl| \int_0^t h'(s)\,ds\Bigr| \le C e^{(\frac{3}{2}\gamma-\delta)t}\qquad \text{for $t \ge 0$.}
$$
We may then combine this estimate with (\ref{equation-estimate-appendix}) and integrate to find that
$$
|e^{-\gamma t} h'(t)|\le C e^{(\frac{3}{2}\gamma-2\delta)t}\qquad \text{for $t \ge 0$,}
$$
i.e.,
$$
|h'(t)| \le C e^{(\frac{5}{2}\gamma-2\delta)t}\qquad \text{for $t \ge 0$.}
$$
We may iterate this process $n$ times, where $n \in \mathbb{N}$ is chosen such that
$\frac{\gamma}{2} + (n-1)(\gamma-\delta) \ge 0$ and
$\frac{\gamma}{2} + n(\gamma-\delta) < 0$. Consequently, we obtain that
$$
|h'(t)| \le C e^{(\frac{\gamma}{2} + n(\gamma-\delta))t} \quad \text{for $t \ge 0$.}
$$
Hence $h'(t)$ decays exponentially as $t \to \infty$, and from this we deduce (\ref{eq:claim-limit-exist}).\\
The proof is finished.
\end{proof}
\subsection{Part B: A doubling Lemma}
\label{sec:appendix-b:-doubling}
In the following, we note a simplified variant of a result of Polacik, Quittner and Souplet \cite{PQS}. We include the elementary proof for the convenience of the reader.
\begin{lemma}
\label{doubling-simplified-further} Let $(X,d)$ be a complete metric
space and $M: X \to (0,\infty)$ be a function which is bounded on compact subsets of
$X$. Then for any $s_* \in X$ there exists $t_* \in \overline{B_2(s_*)} \subset X$
such that
\begin{equation}
\label{eq:doubling-property}
M(t_*) \ge M(s_*) \qquad \text{and}\qquad M(t) \le 2 M(t_*)\quad \text{for all $t \in B_{\frac{M(s_*)}{M(t_*)}}(t_*)$}.
\end{equation}
\end{lemma}
\begin{proof}
We follow the proof of Polacik, Quittner and Souplet \cite{PQS}. Assuming by
contradiction that the lemma is false, we can successively construct a
sequence $(t_n)_n \subset X$ such that $t_0=s_*$
and
\begin{equation}
\label{eq:doubling-property-proof}
M(t_{n+1}) > 2 M(t_{n}) \quad \text{and}\quad {\rm dist}(t_{n+1},t_{n})
\le M(s_*)/M(t_{n})
\end{equation}
for $n \in \mathbb{N} \cup \{0\}$. Indeed, suppose that $t_0,\dots,t_{n}$ are already constructed with this property. Then we have that
\begin{equation}
\label{eq:doubling-property-proof-0}
M(t_k) \ge 2^{k} M(s_*) \qquad \text{for $k=0,\dots,n$}
\end{equation}
and thus
\begin{equation}
\label{eq:doubling-property-proof-1}
{\rm dist}(t_k,t_{k-1}) \le \frac{M(s_*)}{M(t_{k-1})} \le 2^{1-k} \qquad \text{for $k=1,\dots,n$}.
\end{equation}
This shows that
$$
{\rm dist}(t_n,s_*) = {\rm dist}(t_n,t_0) \le \sum_{k=1}^n {\rm dist}(t_k,t_{k-1}) \le 2.
$$
So if there is no $t_{n+1}$ satisfying (\ref{eq:doubling-property-proof}),
then (\ref{eq:doubling-property}) is true with $t_*=t_{n} \in \overline{B_2(s_*)}$, contrary to what we assume. By induction, we thus see that the sequence exists as claimed.\\
On the other hand, this sequence is a Cauchy sequence by (\ref{eq:doubling-property-proof-1}), and $M(t_n) \to \infty$ as $n \to \infty$ by (\ref{eq:doubling-property-proof-0}). Since $X$ is complete, the sequence converges in $X$, so the set $\{t_n \::\: n \in \mathbb{N}\}$ is relatively compact; this contradicts the assumption that
$M$ is bounded on compact subsets of $X$. Hence the lemma is proved.
\end{proof}
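To illustrate the statement with a simple example of our own (not taken from \cite{PQS}), let $X=\mathbb{R}$ with the usual metric, $M(t)=e^t$ and $s_*=0$, so that $M(s_*)=1$. The choice $t_*=s_*$ does not satisfy (\ref{eq:doubling-property}), since $M(t)=e^t>2=2M(0)$ for $t \in (\ln 2,\,1)\subset B_1(0)$. However, $t_*=1 \in \overline{B_2(s_*)}$ works: we have $M(1)=e \ge 1$, the relevant ball has radius $M(s_*)/M(t_*)=e^{-1}$, and
$$
M(t)=e^{t} \le e^{1+\frac{1}{e}} < 2e = 2M(t_*) \qquad \text{for $t \in B_{e^{-1}}(1)$,}
$$
since $e^{1/e}<2$.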
\section{Introduction}
The formulation of the Nambu-Goldstone theorem suggests the existence of a certain number of gapless particles as a consequence of the existence of the same number of broken symmetries of the system under study \cite{Nambu3}. This was the brilliant observation of Yoichiro Nambu, who was inspired by fundamental problems in the models of superconductivity when formulating what we know today as spontaneous symmetry breaking (SSB) \cite{Bardeen, Vala, Bolo, Nambu1, Nambu2}. Later this principle was translated to particle physics, where it proved to be a powerful tool for explaining the electroweak unification, the associated BEH mechanism, as well as chiral symmetry breaking \cite{Nambu3, Nambu4, Nambu5, Salam}. There are ``exceptions'' to the theorem, where the number of Nambu-Goldstone bosons is lower than the number of broken generators \cite{Nambu55}. In addition, the dispersion relations of the Nambu-Goldstone bosons are not necessarily linear, as would be expected for gapless particles \cite{Others}. This observation is summarized in the theorem of Nielsen and Chadha, where a connection between the number of Nambu-Goldstone bosons and their dispersion relations was analyzed \cite{NC}. These observations were later reviewed by several authors in order to generalize the counting rules for the Nambu-Goldstone bosons \cite{Murayama}. It was subsequently proved that the number of Nambu-Goldstone bosons, as well as the associated dispersion relations, are related to the number of independent histories representing the interaction of pairs of Nambu-Goldstone bosons with the degenerate vacuum, taken as a third particle with trivial phase. The number of independent histories is constrained by the Quantum Yang-Baxter equations (QYBE's) \cite{My papers, Integra}. In general, given the interaction of pairs of Nambu-Goldstone bosons with the degenerate vacuum (the degenerate vacuum taken as a fictitious particle with trivial phase), we can obtain a total of four histories of interaction. The QYBE's constrain the number of independent histories to two. The two independent histories are related by a twist map (exchange of Nambu-Goldstone bosons), or equivalently by a time-reversal symmetry. It was demonstrated that when the number of independent histories is only one, the dispersion relation of the Nambu-Goldstone bosons is quadratic and we only have one Nambu-Goldstone boson for a pair of broken symmetries. On the other hand, when the number of independent histories is two, we have a linear dispersion relation and one Nambu-Goldstone boson for each broken symmetry \cite{My papers}. Previously, Nicolis and Piazza demonstrated that there is a gap associated to the Nambu-Goldstone bosons when there is a chemical potential associated to a finite charge density \cite{Nicolis}. The gap depends on the chemical potential as well as on the symmetry algebra of the broken generators. It turns out that it is also necessary to adjust the counting of Nambu-Goldstone bosons in these situations \cite{Murayama}. However, it was not clear what the effective gap is when a pair of Nambu-Goldstone bosons, each with its own specific mass, becomes a single degree of freedom.
In this paper, by exploring the number of independent histories for the interaction of pairs of particles with the degenerate vacuum (triangle relations), we explain how, under some circumstances, pairs of Nambu-Goldstone bosons become effectively a single degree of freedom with an effective gap determined by the superposition of the individual masses of the two Nambu-Goldstone bosons.
\section{General argument}
We can analyze the interaction of a pair of Nambu-Goldstone bosons by first understanding how two plane waves meet in some region of spacetime. It is understood that the pair of Nambu-Goldstone bosons under analysis lives over the degenerate vacuum and shares a common region over the same vacuum. In addition, we work in the infrared limit, such that the wave functions representing the pair of particles spread everywhere over the degenerate vacuum. The figure (\ref{The titan3}) illustrates the basic scenario of a pair of waves meeting in some region of spacetime. Initially we will take the particle (wave) denoted by $n$ as the Nambu-Goldstone boson corresponding to one of the broken generators different from the finite charge-density term appearing in the action. We will then consider the particle $n'$ as the Nambu-Goldstone boson corresponding to the broken generator related to the charge appearing explicitly in the action. This case is interesting because it describes the origin of the mass of the Nambu-Goldstone bosons when the mechanism involved in their origin is based on symmetries. In other sections we generalize these results to explain the cases where the pair of Nambu-Goldstone bosons under analysis is related to broken symmetries different from the one represented by the charge appearing explicitly in the action. In such a case, it will be assumed that the interacting pair of Nambu-Goldstone bosons is already massive. We define two Hamiltonians $H_\mu$ and $H$ in a similar way as in ref. \cite{Nicolis}. The Hamiltonian $H$ satisfies the relation
\begin{equation} \label{Qespo}
H\vert0_{SV}>=\mu Q_1\vert0_{SV}>,
\end{equation}
where $Q_1$ denotes the charge which appears explicitly in the action, and it can be a broken generator. We can then define the Hamiltonian $H_\mu=H-\mu Q_1$, from which we define the ground state by $H_\mu\vert0_{SV}>=0$, consistent with the previous result. We can let the system evolve in agreement with either Hamiltonian, namely $H$ or $H_\mu$, as long as the results are consistent with each other. The only difference between the description of a system with respect to the Hamiltonian $H_\mu$ or the Hamiltonian $H$ is the time direction considered for the evolution of the system. A system evolving in agreement with $H_\mu$ will not perceive explicitly the effects of the charge $Q_1$, but such effects will appear implicitly. On the other hand, a system evolving in agreement with $H$ will perceive explicitly the effects of the charge $Q_1$. At the end of the calculations, the physics obtained under both kinds of evolution is the same. In the same way, we can define the ground state with respect to either Hamiltonian, as long as we guarantee that our system is bounded from below. For example, for the ground state defined by $H_\mu$, there is no chance of having a particle with negative energy, which is correct in order to guarantee the stability of the theory. On the other hand, for a ground state defined in agreement with $H$, we have $H\vert 0>=0$ and $H_\mu\vert0>=-\mu Q_1\vert0>$ for consistency. In this case, the particles should have positive energy with respect to $H$, and if for any reason they have a negative energy, it cannot be larger in magnitude than $\vert\mu q_a\vert$, with $q_a$ representing the eigenvalue of the operator $Q_1$. A particle with negative energy in this case should be interpreted as a particle with positive energy with respect to the real vacuum defined with respect to $H_\mu$. The Nambu-Goldstone bosons for the pair of particles will be represented by states evolving in agreement with the wave functions $e^{-i(px)}$ and $e^{-i(\tilde{p}x)}$, corresponding to the particles $n$ and $n'$ respectively. The phase convention is defined with respect to the vacuum. For the vacuum defined with respect to $H_\mu$, we will take $E_{\mu n}=0$ and ${\bf p}=0$, which gives the trivial wave function (phase) $e^{ip_\mu x}\to 1$. It is understood that $E_{\mu n}$ is the eigenvalue of the Hamiltonian $H_\mu$ and $E_n$ is the eigenvalue of the related Hamiltonian $H$. As has been said before, the vacuum with respect to $H_\mu$ is taken as a particle of trivial phase. However, we can define another degenerate vacuum with respect to the Hamiltonian $H$, which has a non-trivial phase, represented as a particle with zero momentum but non-zero energy, with the phase given by $e^{i\mu Q_1t}$, as shown by the purple dotted line in the figure (\ref{The titan3}).
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{dyn1.png}
\caption{The Nambu-Goldstone bosons associated to the broken charge $Q_1$, meeting with the Nambu-Goldstone boson related to another broken generator of the system. The slopes measured with respect to the degenerate vacuum $0$ represent the spatial momentum of the particles. There are two degenerate vacuums with zero momentum. One of the vacuums also has zero energy but the other one (purple) has an energy shifted by $\mu Q_1$. The upper and lower figures are related by the exchange of particles $n\to n'$. The equalities respect the QYBE's.}
\label{The titan3}
\end{figure}
We have to define the phases for the particles with respect to the pair of degenerate vacuums. For the upper part of the figure (\ref{The titan3}), we will take $e^{-i(px)}=e^{-i(E_nt-{\bf p\cdot x})}$ as the function corresponding to the particle $n$. On the other hand, $e^{i(\tilde{p}x)}=e^{i(\tilde{E}_nt-{\bf \tilde{p}\cdot x})}$ will represent the function for the particle $n'$. The same convention applies for both sides of the equality. The equality is guaranteed by the QYBE's. For the lower figures, the phase convention is modified due to the exchange of particles $n\to n'$. In such a case, we take $e^{i(\tilde{p}x)}=e^{i(\tilde{E}_nt-{\bf \tilde{p}\cdot x})}=e^{i(\tilde{E}_nt+{\bf p\cdot x})}$ for the wave function corresponding to the particle $n$. In addition, we take $e^{-i(px)}=e^{-i(E_nt-{\bf p\cdot x})}=e^{-i(E_nt+{\bf \tilde{p}\cdot x})}$ for the plane wave representing the particle $n'$. Note the exchange of roles for the functions under the condition $n\to n'$. This convention is general, independent on whether the evolution of the particles is expressed with respect to $H_\mu$ or $H$. The figure (\ref{The titan4}) illustrates the phase conventions for the different particles. Note that it is the same figure (\ref{The titan3}) but at the infrared limit where the slopes of the lines representing the particles almost converge over the degenerate vacuum which is taken as a third particle with zero momentum. This is the interesting limit for the purposes of this analysis.
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{dyn2.png}
\caption{The infrared limit of the Nambu-Goldstone bosons. The triangles tend to lines parallel to the degenerate vacuum. The figure illustrates the phase conventions used for the waves representing each particle. The vacuum is taken as a third particle of zero momentum.}
\label{The titan4}
\end{figure}
\section{The sum of histories}
Each diagram shown in Fig. (\ref{The titan3}) illustrates a series of events (histories). When we sum the histories, the exchange of the location of the degenerate vacuum in the diagrams (from upper to lower) is weighted with a minus sign. In addition, the exchange of particles represented by $n\to n'$ is weighted by a minus sign. When there is a simultaneous exchange of both vacuum and particles, the graph is weighted with a positive sign. The degenerate vacuum appearing in the diagrams corresponds in reality to a third particle with trivial phase (zero energy and zero spatial momentum). Each graph in all the previous figures corresponds to the product of three matrices, here denoted by $R$. In the product, the indices $n$, $n'$ and $0$ appear as contractions between pairs of labels over which we have to sum. This is the case because the mentioned indices correspond to internal lines of the triangles. This is understood from the coordinate form of the QYBE's. Up to some phases to appear later, the sum illustrated in the figure (\ref{The titan5}) can be expressed as follows
\begin{eqnarray} \label{spo}
\sum_{0, n, n'}<0_{DV}\vert Q_{l}(0)\vert n'><n'\vert Q_{p}(0)\vert n><n\vert \phi_{b}(x)\vert0_{DV}>(Ph_1)\nonumber\\
-<0_{DV}\vert \phi_{b}(x)\vert n'><n'\vert Q_{l}(0)\vert n><n\vert Q_{p}(0)\vert0_{DV}>(Ph_2)\nonumber\\
-<0_{DV}\vert Q_{p}(0)\vert n><n\vert Q_{l}(0)\vert n'><n'\vert \phi_{b}(x)\vert0_{DV}>(Ph_3)\nonumber\\
+<0_{DV}\vert \phi_{b}(x)\vert n><n\vert Q_{p}(0)\vert n'><n'\vert Q_{l}(0)\vert0_{DV}>(Ph_4)=0,
\end{eqnarray}
where $Ph_i$ ($i=1,2,3,4$) corresponds to the phases for each term. They will be obtained through the spacetime invariance condition $Q_{p}(y)=e^{-ipy}Q_{p}(0)e^{ipy}$. In eq. (\ref{spo}), this condition is assumed. It is also assumed that the spatial translations are not spontaneously broken. If the charge $Q_1$ is a broken generator, then the symmetry under time translations, generated by the Hamiltonian $H$, will be also a broken symmetry if we define the ground state with respect to $H_\mu$. In this case, the time-translation symmetry is spontaneously broken. Equivalently, the symmetry under time translations generated by the Hamiltonian $H_\mu$ will be broken if we define the ground state with respect to $H$. Then the Lorentz symmetry is explicitly or spontaneously broken depending with respect to which Hamiltonian we are describing the physics of the system under study \cite{Nicolis}. We can introduce some auxiliary indices in eq. (\ref{spo}). They will represent the interactions of either, the pair of Nambu-Goldstone bosons $n$ and $n'$, or the interaction of one Nambu-Goldstone boson with the degenerate vacuum. Then eq. (\ref{spo}) can be written in terms of the product of three matrices as follows
\begin{eqnarray} \label{spo2}
R^{0,n'}_{\emph{\color{red}m},l}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}(Ph_1)-R^{n,0}_{p,\emph{\color{red}m}}R^{\emph{\color{red}a},n'}_{n,l}R^{b,\emph{\color{red}k}}_{0,n'}(Ph_2)-
R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{l,n}R^{\emph{\color{red}k},b}_{n',0}(Ph_3)+R^{n',0}_{l,\emph{\color{red}m}}R^{\emph{\color{red}k},n}_{n',p}R^{b,\emph{\color{red}a}}_{0,n}(Ph_4)=0.
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{dyn3.png}
\caption{The sum of histories representing the interaction of the pairs of Nambu-Goldstone bosons and the degenerate vacuum taken as a third particle of zero momentum. An exchange $n\to n'$ is weighted with a minus factor. The same applies for an exchange of vacuum locations in the graph (from upper to lower side).}
\label{The titan5}
\end{figure}
Here we have introduced the notation
\begin{eqnarray} \label{another form3}
R^{0,n'}_{\emph{\color{red}m},l}=<0_{DV}\vert Q_{\emph{\color{red}m},l}(0)\vert n'>,\;\;\;R^{n,\emph{\color{red}k}}_{p,n'}=<n'\vert Q_{\emph{\color{red}k},p}(0)\vert n>,\;\;\;\nonumber\\
R^{\emph{\color{red}a},b}_{n,0}=<n\vert \phi_{\emph{\color{red}a,}b}(x)\vert0_{DV}>,\;\;\;R^{n,0}_{p,\emph{\color{red}m}}=<n\vert Q_{p, \emph{\color{red}m}}(0)\vert0_{DV}>,\;\;\;\nonumber\\
R^{\emph{\color{red}a},n'}_{n,l}=<n'\vert Q_{l, \emph{\color{red}a}}(0) \vert n>,\;\;\;\;R^{b,\emph{\color{red}k}}_{0,n'}=<0_{DV}\vert \phi_{b, \emph{\color{red}k}}(x)\vert n'>.\;\;\;\;\nonumber\\
R^{0,n}_{\emph{\color{red}m},p}=<0_{DV}\vert Q_{\emph{\color{red}m},p}(0)\vert n>,\;\;\;\;R^{n',\emph{\color{red}a}}_{l,n}=<n\vert Q_{\emph{\color{red}a},l}(0)\vert n'>,\;\;\;\;\nonumber\\
R^{\emph{\color{red}k},b}_{n',0}=<n'\vert \phi_{\emph{\color{red}k},b}(x)\vert0_{DV}>,\;\;\;\;R^{n',0}_{l,\emph{\color{red}m}}=<n'\vert Q_{l, \emph{\color{red}m}}(0)\vert0_{DV}>,\;\;\;\;\nonumber\\
R^{\emph{\color{red}k},n}_{n',p}=<n\vert Q_{p,\emph{\color{red}k}}(0)\vert n'>,\;\;\;\;R^{b,\emph{\color{red}a}}_{0,n}=<0_{DV}\vert \phi_{b,\emph{\color{red}a}}(x)\vert n>.\;\;\;\;
\end{eqnarray}
Note the presence of the auxiliary indices marked with red color. They can be observed in the figure (\ref{The titan3}). Note that the pair of indices $\{p, \emph{\color{red}k}\}$ and $\{\emph{\color{red}a}, l\}$, represent interactions between the pair of Nambu-Goldstone bosons. Equivalently, the pair of indices $\{b, \emph{\color{red}k}\}$ and $\{\emph{\color{red}m}, l\}$, represent interactions between the degenerate vacuum and the particle $n'$ and finally the pair of indices $\{p, \emph{\color{red}m}\}$ and $\{\emph{\color{red}a}, b\}$, represent the interactions between the particles $n$ with the degenerate vacuum. After spatial integration, as well as taking into account time-independence, the terms representing the phases denoted by $Ph_i$ in eq. (\ref{another form3}), will tell us how the frequency and momentum go to zero simultaneously, telling us then the behavior of the dispersion relation for the effective Nambu-Goldstone bosons appearing at the end. Before going in further details, we will develop the concept of spontaneous symmetry breaking.
\section{Spontaneous symmetry breaking}
If we absorb the phases for each term inside the broken generators in eq. (\ref{spo}), and if in addition, we evaluate the expression in a single vacuum instead of summing over the degenerate vacuum, we will obtain the standard result
\begin{equation} \label{spo3}
<0_{SV}\vert [\phi_b(x),[Q_p(y),Q_l(z)]]\vert0_{SV}>\neq0.
\end{equation}
Note the presence of the spacetime dependence $y$ and $z$. This dependence will disappear after spatial integration and after imposing the standard time-independence condition. We want to remark at this point that eq. (\ref{spo3}) is just the SSB condition \cite{Nambu3}. Then we can conclude that the principle of SSB is a natural consequence of the interaction of pairs of particles with the degenerate vacuum after taking into account the appropriate weight factors (signs in front of the triangles). Then the SSB condition can be expressed in terms of the $R$-matrices obtained in eq. (\ref{spo2}), after introducing auxiliary indices and after summing over the degenerate vacuum. The degenerate vacuum is then a third particle with trivial momentum (zero slope). Note that the pair of indices $\{\emph{\color{red}m}, b\}$ live in the same space corresponding to the degenerate vacuum and they can be taken as equivalent. In addition, the pair of indices $\{\emph{\color{red}k}, l\}$ live in the same space corresponding to the particle $n'$. Finally, the pair of indices $\{\emph{\color{red}a}, p\}$ live in the space corresponding to the particle $n$. Now the role of the auxiliary indices can be understood without any ambiguity. If a charge has the label $l$, then this broken generator is related to the Nambu-Goldstone boson $n'$. If a broken generator has the label $p$, then it is related to the Nambu-Goldstone boson $n$. Finally, the order parameter with a label $b$ is always connected with the vacuum. By writing the SSB condition as in eq. (\ref{spo3}), we are assuming that the pair of broken generators $Q_p$ and $Q_l$ obey a Lie algebra. In our initial analysis, we will take $Q_l=Q_1$. Subsequently, we will generalize the result for any other pair of broken generators.
\section{The dispersion relations and number of Nambu-Goldstone bosons}
The dispersion relations for the Nambu-Goldstone bosons, as well as their number, can be found under the assumption of spacetime translational invariance for the charges. In this section, we will only consider the case where one Nambu-Goldstone boson is related to the charge $Q_1$, and the other Nambu-Goldstone boson is related to a charge connected to any other symmetry spontaneously broken. As we have remarked before, we have two degenerate vacuums represented by a pair of parallel lines with zero slope but shifted with respect to each other by the gap $\mu q_a$, where $q_a$ is an eigenvalue of $Q_1$. The gap is illustrated in Fig. (\ref{The titan3}). In the figure, the red lines are the series of vacuums for which $H_\mu$ vanishes and the purple dotted lines are the vacuums under which $H$ vanishes. If the charge $Q_1$ is not a broken generator, then the gap between the lines disappears. The lines representing both degenerate vacuums, will be parallel because the spatial translations are not spontaneously broken and then the spatial momentum annihilates the vacuum independent on the Hamiltonian used for describing the evolution of the system. In this section we will define the ground state (vacuum) as well as the evolution of the charges with respect to $H_\mu$. Then for $H_\mu$, we define the phases $e^{-ip_\mu x}\vert0>=e^{-i(E_\mu t-{\bf p\cdot x})}\vert0>=\vert0>$. Here $H_\mu\vert0>=E_\mu\vert0>=0$, which is consistent with what we have explained in eq. (\ref{Qespo}). On the other hand, since the Hamiltonian $H$ satisfies the relation (\ref{Qespo}), then the symmetry under time translations for the evolution of the system with respect to $H$ is broken due to its relations with the charge $Q_1$, as far as this charge is also a broken generator. Then the phase of this Hamiltonian with respect to the same ground state is defined in general by $e^{-ip x}\vert0>=e^{-i(Et-{\bf p\cdot x})}\vert0>=e^{i\mu Q_1t}\vert0>$. Here we will take the line $n'$ in the triangle relations illustrated in the figures (\ref{The titan3}), (\ref{The titan4}) and (\ref{The titan5}) as the line representing the Nambu-Goldstone bosons related to the broken charge $Q_1$. The slope of the line $n'$ with respect to the vacuum line ($0$), represents the spatial momentum of the Nambu-Goldstone bosons related to $Q_1$. By taking $Q_l(z)=e^{-i{pz}}Q_l(0)e^{ipz}=Q_1(z)$ and considering the evolution of the charges with respect to the Hamiltonian $H_\mu$, then the result (\ref{spo}), or equivalently, the result (\ref{spo3}) becomes
\begin{eqnarray} \label{spo4}
\sum_{0, n, n'}<0_{DV}\vert Q_{1}(0)e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}\vert n'><n'\vert e^{-i(E_{\mu n'}t-{\bf p_{n'}\cdot y})}Q_{p}(0)e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\vert n><n\vert \phi_{b}(x)\vert0_{DV}>\nonumber\\
-<0_{DV}\vert \phi_{b}(x)\vert n'><n'\vert e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}Q_{1}(0) e^{i(\tilde{E}_{\mu n}t-{\bf \tilde{p}_{n}\cdot z)}} \vert n><n\vert e^{-i(E_{\mu n}t-{\bf p_n\cdot y})}Q_{p}(0)\vert0_{DV}>\nonumber\\
-<0_{DV}\vert Q_{p}(0)e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\vert n><n\vert e^{-i(\tilde{E}_{\mu n}t-{\bf \tilde{p}_{n}\cdot z})}Q_{1}(0)e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}\vert n'><n'\vert \phi_{b}(x)\vert0_{DV}>\nonumber\\
+<0_{DV}\vert \phi_{b}(x)\vert n><n\vert e^{-i(E_{\mu n}t-{\bf p_n\cdot y})} Q_{p}(0) e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\vert n'><n'\vert e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})} Q_{1}(0)\vert0_{DV}>=0.
\end{eqnarray}
The time-independence condition, together with the integration over the whole space, gives us the following results $E_{\mu n}=E_{\mu n'}$, $\tilde{E}_{\mu n}=\tilde{E}_{\mu n'}$, ${\bf p}_n={\bf p}_{n'}$ and ${\bf \tilde{p}_{n}}={\bf \tilde{p}_{n'}}$. Then we can ignore the phases related to the exponential terms having the combinations $E_{\mu n}-E_{\mu n'}$, $\tilde{E}_{\mu n}-\tilde{E}_{\mu n'}$, ${\bf p}_n-{\bf p}_{n'}$ and ${\bf \tilde{p}}_n-{\bf \tilde{p}}_{n'}$, understanding that the previous equalities are satisfied. If we introduce the notation given in eq. (\ref{another form3}) and then express eq. (\ref{spo4}) in terms of the $R$-matrices after introducing the auxiliary indices previously defined, we obtain
\begin{eqnarray} \label{spo5}
R^{0,n'}_{\emph{\color{red}m},1}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}-R^{n,0}_{p,\emph{\color{red}m}}R^{\emph{\color{red}a},n'}_{n,1}R^{b,\emph{\color{red}k}}_{0,n'}e^{-i(E_{\mu n}t-{\bf p_n\cdot y})}_{n'\to n}\nonumber\\
-R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{1,n}R^{\emph{\color{red}k},b}_{n',0}e^{i(E_{\mu n}t-{\bf p_n\cdot y})}+R^{n',0}_{1,\emph{\color{red}m}}R^{\emph{\color{red}k},n}_{n',p}R^{b,\emph{\color{red}a}}_{0,n}e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}_{n\to n'}=0.\;\;\;\;\;
\end{eqnarray}
Note that in the coordinate notation, the order of the matrices is irrelevant as long as the index contraction is the appropriate one. The subindex $n\to n'$ below some phases corresponds to the exchange of particles $n\to n'$. The final phases for the exchange of particles $n\to n'$ can be obtained naturally from the Yang-Baxter diagrams in the figures (\ref{The titan4}) and (\ref{The titan5}). They can be defined explicitly as $e^{-i(E_{\mu n}t-{\bf p_n\cdot y})}_{n'\to n}=e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot y})}=e^{i(E_{\mu n}t+{\bf p_n\cdot y})}$ and $e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}_{n\to n'}=e^{i(E_{\mu n}t-{\bf p_n\cdot z})}=e^{i(\tilde{E}_{\mu n'}t+{\bf \tilde{p}_{n'}\cdot z})}$. This result is general, independent of whether or not $n$ and $n'$ represent the same or different degrees of freedom, or equivalently, of whether or not the number of independent histories for the interaction of the particles is one or two. If we only have one independent history of interaction, then both degrees of freedom, namely $n$ and $n'$, are the same and then $E_{\mu n}=\tilde{E}_{\mu n'}$. However, here ${\bf \tilde{p}_{n'}}=-{\bf p_{n}}$ due to the change of slope in the corresponding lines of the triangle relations. This condition on the spatial momentum is equivalent to saying that the particles are approaching each other. The two equalities considered in the figure (\ref{The titan3}) are a natural consequence of the QYBE's. However, in general, the pair of triangles of the upper part of the figure is different from the pair of triangles in the lower part. The four triangles represent different histories for the interaction of the pair of particles and the degenerate vacuum. Then we can say that we have a total of four histories. However, the QYBE's constrain the number of independent histories to only two. The two independent histories are related to each other by a twist map or time-reversal symmetry (exchange of particles $n\to n'$). If we have only one independent history, then this is only possible if $n=n'$, and then all the triangles of the figure are equivalent and we can factorize all the terms with common coordinates. Then eq. (\ref{spo5}) becomes
\begin{eqnarray}
R^{0,n'}_{\emph{\color{red}m},1}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}\left(e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}+e^{i(\tilde{E}_{\mu n'}t+{\bf \tilde{p}_{n'}\cdot z})}\right)-R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{1,n}R^{\emph{\color{red}k},b}_{n',0}\left(e^{i(E_{\mu n}t-{\bf p_n\cdot y})}+e^{i(E_{\mu n}t+{\bf p_n\cdot y})}\right)
=0.\;\;\;\;\;
\end{eqnarray}
This equation can be simplified as
\begin{eqnarray} \label{timeindependent}
2R^{0,n'}_{\emph{\color{red}m},1}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}e^{i\tilde{E}_{\mu n'}t}cos\left({{\bf \tilde{p}_{n'}\cdot z}}\right)-2R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{1,n}R^{\emph{\color{red}k},b}_{n',0}e^{iE_{\mu n}t}cos({\bf p_n\cdot y})=0.\;\;\;\;\;
\end{eqnarray}
At this level it is clear that the momentum will go to zero quadratically and the frequency will vanish linearly. This is a proof of the fact that the condition $n=n'$ is equivalent to saying that pairs of degrees of freedom, originally with a linear dispersion relation, become a single one with a quadratic dispersion relation. Then the Nambu-Goldstone bosons related to the broken generator $Q_1$ are eaten up by the Nambu-Goldstone bosons related to the broken generator $Q_p$. This is an interesting phenomenon, and in some scenarios it can be interpreted as the fundamental origin of the Nambu-Goldstone boson mass. In other scenarios, however, the gap of the Nambu-Goldstone bosons appears due to an ordinary contribution of the energy of the field associated to the charge $Q_1$. This is the situation in ferromagnetism and antiferromagnetism. Since the energy $E_{\mu n}$ is related to the Hamiltonian $H_\mu$, the condition $E_{\mu n}=\tilde{E}_{\mu n'}\to 0$ is equivalent to the condition $E_n=\mu q_a$, where $q_a$ is an eigenvalue of $Q_1$. This gap is then generated dynamically due to the interaction of the pair of degrees of freedom, which become a single one. The gap is defined by the relation $E_n-E_{\mu n}=\mu q_a$. Writing the results explicitly, from the time-independence condition of eq. (\ref{timeindependent}) as well as the spatial integration, we get
\begin{equation}
E_{\mu n}\to0\;\;\;\;E_n\to\mu q_a\;\;\;\;{\bf p}_n\to 0,
\end{equation}
with the dispersion relation
\begin{equation}
E_{\mu n}={\bf p}_n^2,
\end{equation}
which is equivalent to $E_n=\mu q_a+{\bf p}^2_n$. Note that this dependence is obtained by picking up the lowest-order terms from the phase expansion in eq. (\ref{timeindependent}). The previous analysis was based on the fact that the charges evolve with respect to the Hamiltonian $H_\mu$. Note that if $Q_1$ commutes with the other charges, then eq. (\ref{spo3}) becomes trivial. If we develop the same analysis with any other pair of charges different from $Q_1$ and satisfying eq. (\ref{spo3}), then the result will be equivalent, but with a gap defined as the superposition of the individual gaps of the Nambu-Goldstone bosons interacting with each other.
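As a heuristic sketch of this lowest-order step (our own rewriting of the expansion referred to above, not an additional result), one may expand one of the phase factors appearing in eq. (\ref{timeindependent}) as
$$
e^{iE_{\mu n}t}cos({\bf p_n\cdot y})= 1 + iE_{\mu n}t -\frac{1}{2}({\bf p_n\cdot y})^2+\ldots,
$$
so the leading correction to the trivial (vacuum) phase is linear in the frequency $E_{\mu n}$ but quadratic in the spatial momentum ${\bf p}_n$; demanding that both corrections vanish at the same rate in the infrared limit reproduces the quadratic dispersion relation $E_{\mu n}\sim {\bf p}_n^2$ quoted above.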
\section{Broken time translation symmetry}
When we analyzed the physics in the previous sections, we defined the vacuum with respect to the Hamiltonian $H_\mu$ (red dotted line in Fig. (\ref{The titan3})) and we also considered the evolution of the operators with respect to $H_\mu$. This means that the term $\mu Q_1$ did not appear explicitly although its physical effects still appear through the gap of the particles. The gap for the Nambu-Goldstone bosons appeared from the time-independent condition, obtaining then $E_{\mu n}\to 0$, but $E_n\to \mu q_a$. We will now analyze the evolution of the charges with respect to the Hamiltonian $H$ but still considering the vacuum with respect to $H_\mu$. Here we consider that the Hamiltonian $H$ is a broken generator since the charge $Q_1$ is also a broken generator. This is the only way for keeping consistency. Note that if we consider the vacuum with respect to $H$ (purple line in Fig. (\ref{The titan3})) such that $H\vert 0>=0$ (not broken), and if in addition, the charge $Q_1$ is not a broken generator, then we just recover the standard result where all the Nambu-Goldstone bosons are massless and independent (pairs do not converge into a single Goldstone boson). The physics described in this section is independent on which Hamiltonian is selected for describing the evolution of the charges and independent on the vacuum selected for describing the physics.
If we define the vacuum with respect to $H$, then the frequencies defined by $E_n$ are gapless modes since $E_n\to0$. However, if in addition we have $Q_1$ as a broken generator, then a negative gap appears for the frequencies, $E_{\mu n}\to -\mu q_a$. This does not contradict the previous results and it should not be considered an unphysical situation, because what is important is to keep the condition $H_\mu\vert 0>=(H-\mu Q_1)\vert0>=0$, which is equivalent to $H\vert0>=(H_\mu+\mu Q_1)\vert0>=0$. Then having gapless frequencies for $E_n$, together with a negative gap for the frequency $E_{\mu n}=-\mu q_a$, is just equivalent to having a positive gap for $E_n=\mu q_a$ and a gapless condition for $E_{\mu n}$, which proves the consistency. Then whether a Nambu-Goldstone boson is massive or not is a matter of which vacuum is used for describing the physics of the system. The gap is correctly defined by the relative relation $E_n-E_{\mu n}=\mu q_a$ as before. In this section we consider the vacuum annihilated by $H_\mu$ (red line in the figure (\ref{The titan3})), but at the same time we consider the evolution of the operators with respect to $H$. In this case some interesting physical effects can be perceived. However, the results still keep consistency with the fact that there must be a relative gap of $\mu q_a$ between the frequencies $E_n$ and $E_{\mu n}$. We will analyze two possibilities inside this situation. Analogous results to the ones developed in this section would be obtained if we consider the vacuum with respect to $H$ but the evolution of the charges with respect to $H_\mu$.
\subsubsection{Pairs of Nambu-Goldstone bosons: $n'$ for the charge $Q_1$ and $n$ for another charge $Q_p$} \label{great}
This is exactly the same case analyzed in detail previously. We define the vacuum with respect to $H_\mu$. However, since now we consider the evolution of the charges with respect to $H$, we have to make some changes. The charge $Q_1$ will evolve in the same way with respect to any Hamiltonian since $Q_1(t)=e^{-iHt}Q_1e^{iHt}=e^{-i(H_\mu+\mu Q_1)t}Q_1e^{i(H_\mu+\mu Q_1)t}=e^{-iH_\mu t}Q_1e^{iH_\mu t}$. This is the case because any operator commutes with itself and with any function depending on it. Then for this case we only have to focus on the operator $Q_p$. By finding the expression in terms of the Hamiltonian $H_\mu$, we get the result
\begin{eqnarray} \label{spo41}
\sum_{0, n, n'}<0_{DV}\vert Q_{1}(0)e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}\vert n'><n'\vert e^{-i(E_{\mu n'}t-{\bf p_{n'}\cdot y})}e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\times\nonumber\\
\vert n><n\vert \phi_{b}(x)\vert0_{DV}>-<0_{DV}\vert \phi_{b}(x)\vert n'><n'\vert e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}Q_{1}(0)e^{i(\tilde{E}_{\mu n}t-{\bf \tilde{p}_{n}\cdot z)}}\vert n>\times\nonumber\\
<n\vert e^{-i(E_{\mu n}t-{\bf p_n\cdot y})}e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}\vert0_{DV}>-<0_{DV}\vert e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}\times\nonumber\\
e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\vert n><n\vert e^{-i(\tilde{E}_{\mu n}t-{\bf \tilde{p}_{n}\cdot z})}Q_{1}(0)e^{i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}\vert n'><n'\vert \phi_{b}(x)\vert0_{DV}>\nonumber\\
+<0_{DV}\vert \phi_{b}(x)\vert n><n\vert e^{-i(E_{\mu n}t-{\bf p_n\cdot y})} e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}e^{i(E_{\mu n}t-{\bf p_n\cdot y})}\vert n'>\times\nonumber\\
<n'\vert e^{-i(\tilde{E}_{\mu n'}t-{\bf \tilde{p}_{n'}\cdot z})}Q_{1}(0)\vert0_{DV}>=0.
\end{eqnarray}
Now we proceed to find the expression for $e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}$, taking into account that $Q_p$ and $Q_1$ do not commute. The symmetries of the action obey the Lie algebra and then
\begin{equation}
[Q_p, Q_l]=if_{pl}^cQ_c.
\end{equation}
Here we consider the case $l=1$. With this commutator, we obtain \cite{Nicolis}
\begin{equation} \label{MIX}
e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}=(e^{f_1\mu t})_p^{\;b}Q_b.
\end{equation}
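As a quick consistency check of eq. (\ref{MIX}) (our own expansion; the identification of the matrix $(f_1)_p^{\;b}$ with the structure constants $f_{1p}^{\;b}$ is an assumption on conventions), one may expand both sides to first order in $t$:
$$
e^{-i\mu Q_1t}Q_{p}(0)e^{i\mu Q_1t}= Q_p(0)-i\mu t\,[Q_1,Q_p]+O(t^2)= Q_p(0)+\mu t\, f_{1p}^{\;c}Q_c+O(t^2),
$$
which agrees with $(e^{f_1\mu t})_p^{\;b}Q_b = Q_p+\mu t\,(f_1)_p^{\;b}Q_b+O(t^2)$ under the stated identification.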
The result (\ref{MIX}) implies a mixing of charges as time evolves with respect to $H$. We consider the Hermitian basis where the structure constants are block-diagonal matrices of the form
\begin{equation} \label{matrix}
\begin{bmatrix}
0 & -q_a \\[0.3em]
q_a & 0 \\[0.3em]
\end{bmatrix}
\end{equation}
In this basis, the result (\ref{spo41}), expressed in terms of the $R$-matrices, becomes
\begin{eqnarray} \label{spo51}
R^{0,n'}_{\emph{\color{red}m},1}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}e^{i\left((\tilde{E}_{\mu n'}-\mu q_a)t-{\bf \tilde{p}_{n'}\cdot z}\right)}-R^{n,0}_{p,\emph{\color{red}m}}R^{\emph{\color{red}a},n'}_{n,1}R^{b,\emph{\color{red}k}}_{0,n'}e^{-i\left((E_{\mu n}-\mu q_a)t-{\bf p_n\cdot y}\right)}_{n'\to n}\nonumber\\
-R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{1,n}R^{\emph{\color{red}k},b}_{n',0}e^{i\left((E_{\mu n}-\mu q_a)t-{\bf p_n\cdot y}\right)}+R^{n',0}_{1,\emph{\color{red}m}}R^{\emph{\color{red}k},n}_{n',p}R^{b,\emph{\color{red}a}}_{0,n}e^{-i\left((\tilde{E}_{\mu n'}-\mu q_a)t-{\bf \tilde{p}_{n'}\cdot z}\right)}_{n\to n'}=0.
\end{eqnarray}
Here we repeat the phase conventions explained previously such that $e^{-i\left((\tilde{E}_{\mu n'}-\mu q_a)t-{\bf \tilde{p}_{n'}\cdot z}\right)}_{n\to n'}=e^{i\left((E_{\mu n}-\mu q_a)t-{\bf p_{n}\cdot z}\right)}=e^{i\left((\tilde{E}_{\mu n'}-\mu q_a)t+{\bf \tilde{p}_{n'}\cdot z}\right)}$ and $e^{-i\left((E_{\mu n}-\mu q_a)t-{\bf p_n\cdot y}\right)}_{n'\to n}=e^{i\left((\tilde{E}_{\mu n'}-\mu q_a)t-{\bf \tilde{p}_{n'}\cdot y}\right)}=e^{i\left((E_{\mu n}-\mu q_a)t+{\bf p_n\cdot y}\right)}$. Here again the QYBE's constraint the number of independent histories to only two. Then after integration over the whole space and then taking the time-independence condition, here again the momentum goes to zero quadratically when the number of independent histories is reduced to one. Here again the Nambu-Goldstone bosons get a gap defined by $E_{\mu n}=\mu q_a$. For this case, the frequencies $E_n$ get a gap $E_n\to 2\mu q_a$. Then here again the relative value between the frequencies gap is $E_n-E_{\mu n}=\mu q_a$. Then the gap of the Nambu-Goldstone bosons is $\mu q_a$. The same relative gap will appear for any other case or combination of the previous cases. The mix of currents defined by the result (\ref{MIX}), is equivalent to a temporal exchange of the triangle lines representing the space where the Nambu-Goldstone bosons $n'$ live in the figures (\ref{The titan3}), (\ref{The titan4}) and (\ref{The titan5}). Then we have a mix of spaces or equivalently, a line becoming a different one after some temporal evolution. We can write explicitly how to get the gap from the previous expression. It is understood that the histories constrained by the QYBE's correspond to histories described by different spacetime coordinates ($y$ and $z$). Then we understand that the number of independent histories in the expression (\ref{spo51}) is two. When we reduce the number of histories to one, such that $n=n'$, then we reduce eq. (\ref{spo51}) to
\begin{eqnarray} \label{spo51lala}
2R^{0,n'}_{\emph{\color{red}m},1}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}e^{i\left(\tilde{E}_{\mu n'}-\mu q_a\right)t}cos({\bf \tilde{p}_{n'}\cdot z})-2R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{1,n}R^{\emph{\color{red}k},b}_{n',0}e^{i\left(E_{\mu n}-\mu q_a\right)t}cos({\bf p_n\cdot y})=0.
\end{eqnarray}
We can now see that, at the lowest order in the expansions, the momentum goes to zero quadratically and simultaneously the energy has a gap defined by $\mu q_a$. Then we have a dispersion relation of the form
\begin{equation}
E_{\mu n}=\mu q_a+{\bf p}^2,
\end{equation}
which can be considered as a quadratic dispersion relation.
\subsubsection{Pairs of Nambu-Goldstone bosons: $n'$ for a charge $Q_l\neq Q_1$ and $n$ for another charge $Q_p\neq Q_1$}
For this case, the pair of charges under consideration has a non-trivial evolution with respect to $H$ when the vacuum is defined with respect to $H_\mu$. This case is an extension of the one just analyzed, and we will only write the final result:
\begin{eqnarray} \label{spo51nojoda}
R^{0,n'}_{\emph{\color{red}m},l}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}(e^{i\left((\tilde{E}_{\mu n'}-\mu q_a -\mu q_b)t-{\bf \tilde{p}_{n'}\cdot z}\right)})-R^{n,0}_{p,\emph{\color{red}m}}R^{\emph{\color{red}a},n'}_{n,l}R^{b,\emph{\color{red}k}}_{0,n'}(e^{-i\left((E_{\mu n}-\mu q_a-\mu q_b)t-{\bf p_n\cdot y}\right)}_{n'\to n})\nonumber\\
-R^{0,n}_{\emph{\color{red}m},p}R^{n',\emph{\color{red}a}}_{l,n}R^{\emph{\color{red}k},b}_{n',0}(e^{i\left((E_{\mu n}-\mu q_a-\mu q_b)t-{\bf p_n\cdot y}\right)})+R^{n',0}_{l,\emph{\color{red}m}}R^{\emph{\color{red}k},n}_{n',p}R^{b,\emph{\color{red}a}}_{0,n}(e^{-i\left((\tilde{E}_{\mu n'}-\mu q_a-\mu q_b)t-{\bf \tilde{p}_{n'}\cdot z}\right)}_{n\to n'})=0.
\end{eqnarray}
Since the vacuum is defined with respect to $H_\mu$, the gap for $E_{\mu n}$ has to be positive definite. We ignore any possible negative gap contribution for $E_{\mu n}$ when the vacuum is defined with respect to $H_\mu$. The previous result can be simplified with the help of the QYBE's and by taking into account the phase convention previously explained; we then get
\begin{equation} \label{spo51nojodadelajoda}
2R^{0,n'}_{\emph{\color{red}m},l}R^{n,\emph{\color{red}k}}_{p,n'}R^{\emph{\color{red}a},b}_{n,0}(e^{i(\tilde{E}_{\mu n'}-\mu q_a-\mu q_b)t})cos({\bf\tilde{p}_{n'}\cdot z})-2R^{n,0}_{p,\emph{\color{red}m}}R^{\emph{\color{red}a},n'}_{n,l}R^{b,\emph{\color{red}k}}_{0,n'}(e^{i(E_{\mu n}-\mu q_a-\mu q_b)t})cos({\bf p_{n}\cdot y})=0.
\end{equation}
Here the spatial momentum goes to zero quadratically, as expected. Note that in this case the gap has an interesting structure. The gap once again should be obtained from the time-independence condition, and it is defined by $E_{\mu n}=\mu q_a+\mu q_b$. The linear combination of charges appearing in the structure of the gap suggests that the pair of Nambu-Goldstone bosons was already massive before becoming a single degree of freedom. Then one Nambu-Goldstone boson with a gap $\mu q_a$ combines with another Nambu-Goldstone boson with gap $\mu q_b$, effectively generating a single degree of freedom with an effective gap $\mu q_a+\mu q_b$ at the infrared level. Then, at the moment when a pair of Nambu-Goldstone bosons becomes a single degree of freedom, there are two possibilities. The first one is that two massive Nambu-Goldstone bosons become effectively a single degree of freedom with a mass given by the superposition of the masses of the individual degrees of freedom. The second option is that one massless Nambu-Goldstone boson meets another one related to the charge $Q_1$, generating a single massive degree of freedom. This last scenario is illustrated by the analysis of subsection (\ref{great}). It can be interpreted as a dynamical origin of the Nambu-Goldstone boson mass.
\subsubsection{Possible future applications of the triangular formulation of the Nambu-Goldstone theorem}
The method developed in this paper can be perceived as a triangular formulation of the Nambu-Goldstone theorem. The systems analyzed in this paper have also been studied in the past by using other methods \cite{Nicolis}. However, in the near future we expect to extend the methods developed here in order to analyze some other systems where the physics is not yet completely understood. For example, since the formulation of this paper is based on the $R$-matrices, which in general are bi-linear objects; then the triangular approach becomes convenient (and natural) at the moment of analyzing cases where we consider order parameters formed as bi-linear objects. This is the case for the order parameter considered in the chiral condensation for example \cite{Chiral}, where the order parameter is given by \cite{Chiral}
\begin{equation}
<\bar{\psi}^a\psi^b>=R^{a,b}_{c, d}\epsilon^c\otimes\epsilon^d,
\end{equation}
which is evidently a bi-linear object corresponding to the $R$-matrices constructed in this paper. Note that for the cases analyzed here, we focused on systems where the order parameter might correspond to a scalar or a vector. The same applies to the corresponding symmetries involved. However, it is well known that scalars and vectors are just tensors of rank zero and one. Then the $R$-matrices can also be perfectly adapted to such situations if we understand the role taken by each index, as has been done in this paper. In other words, the triangular formulation proposed here is general and can be extended to the analysis of different situations, independent of whether or not the order parameter and the symmetries of the system correspond to bi-linear objects. In addition, the formalism can also be applied to situations where the perturbative approaches to Quantum Field Theory (QFT) cannot be used, or where the structure of the vacuum is complicated to analyze by conventional methods. This is the case, for example, of the quark-gluon plasma, which is in general difficult to analyze by conventional methods. The formalism can also be extended to cases where we consider finite-temperature corrections. With this formulation, we expect in general to analyze some difficult problems within the scenario of Quantum Chromodynamics (QCD). These treatments, as well as other physical situations, will be considered in coming papers.
\section{Conclusions}
In this paper we have explained the different mechanisms through which the Nambu-Goldstone bosons can acquire a mass when there is a chemical potential, which breaks the symmetry under time translations for one of the Hamiltonians describing the physics of the system. We used the method based on expressing the spontaneous symmetry breaking condition as a sum of histories. In total, for the interaction of a pair of Nambu-Goldstone bosons with the degenerate vacuum taken as a third particle of zero momentum, there are four histories of interaction. However, the QYBE's constrain the number of independent histories to two. When the number of independent histories is only one, the pair of Nambu-Goldstone bosons becomes a single degree of freedom with a quadratic dispersion relation. This triangular formulation suggests that in some circumstances some Nambu-Goldstone bosons are eaten up by their partners, generating the gap dynamically. The gap and the physics described are independent of the definition of the vacuum and of the Hamiltonian selected to describe the evolution of the system. This result evidently matches what was found by other authors in \cite{Nicolis}. The structure of the gaps becomes richer when we consider the pair of charges in the triangle relations as different from $Q_1$, with the vacuum defined with respect to $H_\mu$ and the evolution of the system with respect to $H$. In this case, the mass spectrum of the Nambu-Goldstone bosons appears explicitly. The analysis suggests that in this case a pair of massive Nambu-Goldstone bosons with linear dispersion relations becomes a single degree of freedom with a quadratic dispersion relation and with a mass determined by the sum of the individual gaps of the original degrees of freedom. We remark at this point that the previous methods for analyzing the mechanism of spontaneous symmetry breaking are good enough for confronting the kind of problems described in this paper. The purpose of this paper is to introduce a new tool for analyzing spontaneous symmetry breaking phenomena. We have proved that the new method developed here reproduces the physics which was previously known, which is the minimal requisite for it to be valid. This new method will become fundamentally important when we confront difficult problems such as those related to the quark-gluon plasma and to quark confinement in general, together with the associated chiral symmetry breaking, as mentioned in the last section. In this paper we have only mentioned some possible advantages of the method, without developing them in detail. Such a deeper analysis, related to QCD and its vacuum structure, will be the subject of a forthcoming paper.
\\\\
{\bf Acknowledgement}
I. A. is supported by the JSPS Postdoctoral Fellowship for Overseas Researchers.
\newpage
\section{Introduction}
The idea of modeling networks using random graphs was first given by Gilbert (1961) in~\cite{gilbert1961random}, where he considered a network formed by connecting points of a Poisson point process that are sufficiently close to each other. The model Gilbert introduced differs from the Erd{\H o}s-R\'enyi random graph models of~\cite{erdos1961evolution,erd6s1960evolution,erdds1959random}: in his model the vertices have some (random) geometric layout and the edges are determined by the distances between the positions of the vertices. We call graphs formed in this way \emph{random geometric graphs}.
Recently, quite a lot of work has been done on \emph{random geometric graphs}, partly due to the importance of this graph model as a theoretical model for ad hoc networks, e.g., see~\cite{hekmat2006ad}. Most of the theoretical results on random geometric graphs can be found in the monograph by Penrose~\cite{penrose2003random}.
The \emph{random geometric graph} model is defined as follows. Let $f$ be a probability density function on $\mathbb{R}^d$, and let $X_1, X_2,...$ be independent and identically distributed $d$-dimensional random variables with common density $f: \mathbb{R}^d\to [0,\infty)$. Throughout the paper, we assume that $f$ is measurable and bounded, and satisfies $\int_{\mathbb{R}^d}f(x)dx=1.$ Let $\mathcal{X}_n=\{X_1, X_2,..., X_n\}.$ We denote by $G(\mathcal{X}_n; r_n)$ the undirected graph with vertex set $\mathcal{X}_n$ and with undirected edges connecting all those pairs $\{X_i, X_j\}$ with $\lVert X_i-X_j \rVert\le r_n,$ in which $\lVert \cdot \rVert$ denotes the Euclidean distance.
\smallskip
The \emph{random connection model} was introduced in the context of continuum percolation by Penrose~\cite{penrose1991continuum}. Let $g : \mathbb{R}^2 \to [0,1]$ be such that $g(x)=g(-x).$ The function $g$ is called the \emph{connection function}. Two vertices $x$ and $y$ are connected with probability $g(y-x)$. Typically, it is also assumed that $g$ only depends on the distance between $x$ and $y$, i.e., $g(y-x)=\hat{g}(\lVert y-x \rVert)$, where $\hat{g}:\mathbb{R}^+ \to [0,1]$ and $\lVert \cdot \rVert$ denotes the Euclidean norm. The \emph{random geometric graph} is a \emph{random connection model} with $\hat{g}(x)={\bf 1}_{[0,r_n]}(x)$.
\smallskip
Very recently, Penrose~\cite{penrose2016connectivity} investigated the connectivity of the \emph{random connection model} with various classes of connection functions; the resulting graphs are called \emph{soft random geometric graphs}. He showed that as the number of vertices $n \to \infty$, the probability of full connectivity is governed by that of having no isolated vertices, itself governed by a Poisson approximation for the number of isolated vertices. He generalized this beautiful result to higher dimensions, and to a large class of connection probability functions in $d=2$.
\smallskip
In this paper, we consider a specific connection function which is also mentioned in Penrose~\cite{penrose2016connectivity}: $\hat{g}(x)=p_n{\bf 1}_{[0,r_n]}(|x|)$, for some $p_n \in [0,1]$. The \emph{soft random geometric graph} obtained from this connection function is called the \emph{percolated random geometric graph}. To be more precise, a \emph{percolated random geometric graph} is a random graph with vertex set $\mathcal{X}_n=\{X_1, X_2,..., X_n\}$, in which the $n$ vertices are chosen at random and independently from a distribution on $\mathbb{R}^2$ with probability density $f$, and a pair of vertices within Euclidean distance $r_n$ of each other appears as an edge with probability $p_n$, some function of $n$, independently for each such pair; we denote this graph by $G(\mathcal{X}_n; p_n, r_n)$. In particular, for $p=1$ we recover the \emph{classic random geometric graph}, which we denote by $G(\mathcal{X}_n; r).$ \emph{Hereafter, we always consider $p=p(n)$ as a function of $n$}.
In this paper, we focus on the \emph{induced} subgraph count problem on \emph {percolated random geometric graph} $G(\mathcal{X}_n; p, r)$. Let $\Gamma$ be a fixed connected graph on $k$ vertices, $k\ge 2$. Consider the number of \emph{induced} subgraphs of $G(\mathcal{X}_n; p, r)$ isomorphic to $\Gamma$. In~\cite{penrose2003random}, the author always assumes that the subgraph $\Gamma$ is \emph{feasible}, which means
$$Pr[(\mathcal{X}_k; r) \cong \Gamma]>0$$
for some $r >0$. However, we will not make this assumption in this paper: all subgraphs are \emph{feasible} for \emph{percolated random geometric graphs}. Surprisingly, we can obtain the asymptotic results for the means of the $\Gamma$-subgraph counts on $G(\mathcal{X}_n; p, r)$ with the help of the $\Gamma$-subgraph counts on $G(\mathcal{X}_n; r)$, provided that $\Gamma$ is a clique (i.e., a complete graph); see Corollary~\ref{thm:clique}. But when it comes to general \emph{induced} subgraphs and components, we can only obtain a lower bound on the asymptotic means of the $\Gamma$-component counts on $G(\mathcal{X}_n; p, r)$ for a wide range of $p_n$; see Theorem~\ref{thm:subgraph} and Corollary~\ref{thm:tree}. The main reason behind this is that there exist many subgraphs which are feasible for $G(\mathcal{X}_n; p, r)$ but not for $G(\mathcal{X}_n; r)$, which makes the counting on $G(\mathcal{X}_n; p, r)$ more complicated.
\smallskip
Recent years have seen an explosive growth in research on \emph{random geometric simplicial complexes}. \emph{Random simplicial complexes} may be viewed as higher-dimensional generalizations of random graphs. Simplicial complex analogues of the classic Erd{\H o}s-R\'enyi model and their topological properties have been the subject of many papers in recent years; see for example~\cite{kahle2014topology},~\cite{linial2006homological}, and~\cite{meshulam2009homological}.
It is also natural to generalize random geometric graphs, and many references can be found in the survey article~\cite{kahle2011random}. As Kahle mentioned in~\cite{kahle2011random}, two natural ways of extending a geometric graph to a simplicial complex are the \v{C}ech complex and the Vietoris-Rips complex (see formal definitions below). Most of the results on the topology of random geometric complexes are related to their homology. Briefly speaking, if $X$ is a topological space, its degree-$i$ homology, denoted by $H_i(X)$, is a vector space. The dimension $\dim H_0(X)$ is the number of connected components of $X$, and for $i>0$, $H_i(X)$ contains information about $i$-dimensional ``holes''.
Since we focus on ``counting'' in this paper, we will also count the expected number of ``holes'' for the corresponding \emph{percolated random geometric complexes}. We also obtain the expectation of the Betti numbers of the percolated random geometric complex (see formal definitions below).
\smallskip
Our argument is based on a ``coupling'' of the two random graph models $G(\mathcal{X}_n; p, r)$ and $G(\mathcal{X}_n; r)$, followed by the same techniques as in Chapter 3 of Penrose~\cite{penrose2003random}.
The paper is organized as follows: In Section 2 we present our main results. In Section 3 we prove the main results. Finally, we note some possible generalizations and remarks in Section 4.
\section{Main results}
\subsection{Counting on percolated random geometric graphs}
We first present an asymptotic result, due to Penrose~\cite{penrose2003random}, for the means of the $\Gamma$-subgraph counts $G_n$. Given a connected graph $\Gamma$ on $k$ vertices, and given $A\subseteq \mathbb{R}^d$, define the indicator functions $h_{\Gamma}(\mathcal{Y})$, $h_{n, A, \Gamma}(\mathcal{Y})$, $\hat{h}_{\Gamma}(\mathcal{Y})$ and $\hat{h}_{n, A, \Gamma}(\mathcal{Y})$ for finite $\mathcal{Y}\subset \mathbb{R}^d$ by
$$h_{\Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n) \cong \Gamma\}},$$
$$h_{n, A, \Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n) \cong \Gamma\} \cap \{LMP(\mathcal{Y})\in A\}},$$
and
$$\hat{h}_{\Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n, p) \cong \Gamma\}},$$
$$\hat{h}_{n, A, \Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n, p) \cong \Gamma\} \cap \{LMP(\mathcal{Y})\in A\}},$$
in which $LMP(\mathcal{Y})$ means the left-most point of set $\mathcal{Y}$. It is easy to observe that $h_{\Gamma}(\mathcal{Y}) =h_{n, A, \Gamma}(\mathcal{Y})=\hat{h}_{\Gamma}(\mathcal{Y})=\hat{h}_{n, A, \Gamma}(\mathcal{Y})=0$ unless $\mathcal{Y}$ has $k$ elements.
Similarly, we define
$$g_{\Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n) \supsetneqq \Gamma\}},$$
and
$$g_{n, A, \Gamma}(\mathcal{Y}):=1_{\{G(\mathcal{Y}; r_n) \supsetneqq \Gamma\} \cap \{LMP(\mathcal{Y})\in A\}},$$
in which ${\{G(\mathcal{Y}; r_n) \supsetneqq \Gamma\}}$ means that $\Gamma$ is a subgraph of $G(\mathcal{Y}; r_n)$ but does not equal $G(\mathcal{Y}; r_n)$. {\bf Hereafter we define $g_{\Gamma}(\mathcal{Y}) =g_{n, A, \Gamma}(\mathcal{Y})=0$ unless $\mathcal{Y}$ has $k$ elements, which means we only need to consider graphs $G(\mathcal{Y}; r_n)$ of order $k$}.
\smallskip
The reader should keep in mind that the functions $h_{\Gamma}(\cdot)$, $h_{n,A,\Gamma}(\cdot)$, $g_{\Gamma}(\cdot)$ and $g_{n,A,\Gamma}(\cdot)$ are all defined on the random geometric graph $G(\mathcal{X}_n; r)$, while only the functions $\hat{h}_{\Gamma}(\cdot)$ and $\hat{h}_{n,A,\Gamma}(\cdot)$ are defined on the percolated random geometric graph $G(\mathcal{X}_n; r, p)$.
\smallskip
We set
$$\mu_{\Gamma, A}:= k!^{-1}\int_{A}f(x)^kdx\int_{(\mathbb{R}^d)^{k-1}}h_{\Gamma}(\{0, x_1,...,x_{k-1}\})d(x_1,...,x_{k-1}),$$
$$\hat{\mu}_{\Gamma, A}:= k!^{-1}\int_{A}f(x)^kdx\int_{(\mathbb{R}^d)^{k-1}}\hat{h}_{\Gamma}(\{0, x_1,...,x_{k-1}\})d(x_1,...,x_{k-1}),$$
$$\mu'_{\Gamma, A}:= k!^{-1}\int_{A}f(x)^kdx\int_{(\mathbb{R}^d)^{k-1}}g_{\Gamma}(\{0, x_1,...,x_{k-1}\})d(x_1,...,x_{k-1}).$$
We write $\mu_{\Gamma}$, $\hat{\mu}_{\Gamma}$, $\mu'_{\Gamma}$ for $\mu_{\Gamma, \mathbb{R}^d}$, $\hat{\mu}_{\Gamma, \mathbb{R}^d}$ and $\mu'_{\Gamma, \mathbb{R}^d}$ respectively.
\smallskip
Let $G_{n,A}(\Gamma)$ and $G'_{n,A}(\Gamma)$ be the numbers of \emph{induced} subgraphs isomorphic to $\Gamma$ of $G(\mathcal{X}_n; r)$ and $G(\mathcal{X}_n; p,r)$, respectively, for which the left-most point of the vertex set lies in $A$.
\begin{theorem}[Penrose~\cite{penrose2003random}]\label{thm:penrosesubgraph}
Suppose that $\Gamma$ is a feasible connected graph of order $k\geq 2$, that $A \subseteq \mathbb{R}^d $ is open with $Leb(\partial A)=0$, and that $\lim_{n\to \infty}(r_n)=0.$ Then
$$\lim_{n\to \infty}r_n^{-d(k-1)}n^{-k}E(G_{n,A}(\Gamma))=\mu_{\Gamma,A}.$$
\end{theorem}
Analogously to the result above, we count the \emph{induced} subgraphs in the percolated random geometric graph $G(\mathcal{X}_n; p,r)$ and obtain the following theorem:
\begin{theorem}\label{thm:similar}
Suppose that $\Gamma$ is a connected graph of order $k\geq 2$, that $A \subseteq \mathbb{R}^d $ is open with $Leb(\partial A)=0$, and that $\lim_{n\to \infty}(r_n)=0.$ Then
$$\lim_{n\to \infty}r_n^{-d(k-1)}n^{-k}E(G'_{n,A}(\Gamma))=\hat{\mu}_{\Gamma,A}.$$
\end{theorem}
\smallskip
However, if we have more information about the graph $\Gamma$, we can obtain more detailed results. In the following, we present some results for an induced subgraph $\Gamma$ of order $k\ge 2$ and size $m\ge 1$.
\begin{theorem}[Counting of induced subgraph]\label{thm:subgraph}
Suppose that $\Gamma$ is a connected graph of order $k\geq 2$ and size $m$, that $A \subseteq \mathbb{R}^d $ is open with $Leb(\partial A)=0$, and that $\lim_{n\to \infty}(r_n)=0.$ Then\\
if $p_n \equiv p$, we have
$$\lim_{n\to \infty}p^{-m}n^{-k}r_n^{-d(k-1)} E(G'_{n,A}(\Gamma)) \geq \mu_{\Gamma, A} +(1-p)^{\binom{k}{2}}\mu'_{\Gamma, A}.$$
If $\lim_{n\to\infty}n^2p_n\to \alpha\in (0,\infty)$, we have
$$\lim_{n\to \infty}p^{-m}n^{-k}r_n^{-d(k-1)} E(G'_{n,A}(\Gamma)) \geq \mu_{\Gamma, A} +e^{-\alpha/2}\mu'_{\Gamma, A}.$$
If $\lim_{n\to\infty}n^2p_n\to 0$, we have
$$\lim_{n\to \infty}p^{-m}n^{-k}r_n^{-d(k-1)} E(G'_{n,A}(\Gamma)) \geq \mu_{\Gamma, A} +\mu'_{\Gamma, A}.$$
\end{theorem}
\begin{corollary}[Counting of tree-subgraph ]\label{thm:tree}
Suppose that $\Gamma$ is a connected graph of order $k\geq 2$ and size $m = k-1$, that $A \subseteq \mathbb{R}^d $ is open with $Leb(\partial A)=0$, and that $\lim_{n\to \infty}(r_n)=0.$ Then \\
if $p_n\equiv p$, we have
$$ \lim_{n\to \infty}\frac{E(G'_{n,A}(\Gamma))}{n}\left(\frac{\theta}{d(n)}\right)^{k-1}\ge \mu_{\Gamma,A}+(1-p)^{\binom{k}{2}-(k-1)}\mu'_{\Gamma,A};$$
If $n^2p_n\to \alpha\in (0,\infty)$, we have
$$ \lim_{n\to \infty}\frac{E(G'_{n,A}(\Gamma))}{n}\left(\frac{\theta}{d(n)}\right)^{k-1}\ge \mu_{\Gamma,A}+e^{-\alpha/2}\mu'_{\Gamma,A};$$
If $n^2p_n\to 0$, we have
$$ \lim_{n\to \infty}\frac{E(G'_{n,A}(\Gamma))}{n}\left(\frac{\theta}{d(n)}\right)^{k-1}\ge \mu_{\Gamma,A}+\mu'_{\Gamma,A},$$
in which $d(n)=n\theta r_n^d p_n$.
\end{corollary}
\begin{corollary}[Counting of clique-subgraph]\label{thm:clique}
Suppose that $\Gamma$ is a clique of order $k\geq 2$, that $A \subseteq \mathbb{R}^d $ is open with $Leb(\partial A)=0$, and that $\lim_{n\to \infty}(r_n)=0.$ Then
$$E(G'_{n,A}(\Gamma))=p^{\binom{k}{2}}E(G_{n,A}(\Gamma)).$$
Moreover, we can get
$$\lim_{n\to \infty}p^{-\binom{k}{2}}n^{-k}r_n^{-d(k-1)}E(G'_{n,A}(\Gamma))=\mu_{\Gamma, A}.$$
\end{corollary}
\bigskip
Next consider the component count in the thermodynamic limit where $n r_n^d$ tends to a constant. Given $\lambda >0$, and given a feasible connected graph $\Gamma$ of order $k\geq 2$, define
$$p_{\Gamma}(\lambda):=\frac{\lambda^{k-1}}{(k-1)!}\int_{(\mathbb{R}^d)^{k-1}}h_{\Gamma}(\{0,x_1,...,x_{k-1}\}) \times \exp(-\lambda V(0,x_1,...,x_{k-1}))d(x_1,...,x_{k-1})$$
and
$$\hat{p}_{\Gamma}(\lambda):=\frac{\lambda^{k-1}}{(k-1)!}\int_{(\mathbb{R}^d)^{k-1}}\hat{h}_{\Gamma}(\{0,x_1,...,x_{k-1}\}) \times \exp(-\lambda V(0,x_1,...,x_{k-1}))d(x_1,...,x_{k-1}),$$
and
$$p'_{\Gamma}(\lambda):=\frac{\lambda^{k-1}}{(k-1)!}\int_{(\mathbb{R}^d)^{k-1}}g_{\Gamma}(\{0,x_1,...,x_{k-1}\}) \times \exp(-\lambda V(0,x_1,...,x_{k-1}))d(x_1,...,x_{k-1}),$$
where $V(y_1,...,y_m)$ denotes the Lebesgue measure (volume) of the union of balls of unit radius (in the chosen norm) centered at $y_1, ..., y_m$. If $\Gamma$ consists of one single point (i.e. if $k=1$), set $p_{\Gamma}(\lambda)=p'_{\Gamma}(\lambda)=\hat{p}_{\Gamma}(\lambda):=\exp(-\lambda \theta)$, in which $\theta$ is the volume of the unit ball in $\mathbb{R}^d$.
Let $J_{n,A}(\Gamma)$ be the number of $\Gamma$-components of $G(\mathcal{X}_n; r)$ for which the left-most point of the vertex set lies in $A$; and $J'_{n,A}(\Gamma)$ be the number of $\Gamma$-components of $G(\mathcal{X}_n; p, r)$ for which the left-most point of the vertex set lies in $A$.
\begin{theorem}[Penrose~\cite{penrose2003random}]\label{thm:penrosecomponent}
Suppose that $A\subseteq {\mathbb R}^d$ is open with $Leb({\partial A})=0$, that $\Gamma$ is a feasible connected graph of order $k\in \mathbb{N}$, and that $n r_n^d\to \rho \in (0,\infty)$. Then
$$\lim_{n\to \infty}\left(\frac{E(J_{n, A}(\Gamma))}{n}\right)= k^{-1}\int_A p_{\Gamma}(\rho f(x))f(x)dx.$$
\end{theorem}
For percolated random geometric graphs, we have a similar result.
\begin{theorem}\label{thm:similar2}
Suppose that $A\subseteq {\mathbb R}^d$ is open with $Leb({\partial A})=0$, that $\Gamma$ is a connected graph of order $k\in \mathbb{N}$, and that $n r_n^d\to \rho \in (0,\infty)$. Then
$$\lim_{n\to \infty}\left(\frac{E(J'_{n, A}(\Gamma))}{n}\right)= k^{-1}\int_A \hat{p}_{\Gamma}(\rho f(x))f(x)dx.$$
\end{theorem}
As with the counting of \emph{induced} subgraphs, we can obtain more detailed results if we have more information about the \emph{induced} component.
\begin{theorem}[Counting of $\Gamma$-component]\label{thm:component}
Suppose that $A\subseteq {\mathbb R}^d$ is open with $Leb({\partial A})=0$, that $\Gamma$ is a connected graph of order $k\in \mathbb{N}$ and size $m$, and that $n r_n^d\to \rho \in (0,\infty)$. Then \\
if $p_n\equiv p$, we have
$$ \lim_{n\to \infty}\left(\frac{E(J'_{n, A}(\Gamma))}{n p^m_n}\right)\geq k^{-1}\int_A p_{\Gamma}(\rho f(x))f(x)dx+k^{-1}(1-p)^{\binom{k}{2}-m}\int_{A}p'_{\Gamma}(\rho f(x))f(x)dx.$$
If $n^2p_n\to \alpha\in (0,\infty)$, we have
$$ \lim_{n\to \infty}\left(\frac{E(J'_{n, A}(\Gamma))}{n p_n^m}\right)\geq k^{-1}\int_A p_{\Gamma}(\rho f(x))f(x)dx+k^{-1}e^{-\alpha/2}\int_{A}p'_{\Gamma}(\rho f(x))f(x)dx.$$
If $n^2p_n\to 0$, we have
$$ \lim_{n\to \infty}\left(\frac{E(J'_{n, A}(\Gamma))}{n p^m_n}\right)\geq k^{-1}\int_A p_{\Gamma}(\rho f(x))f(x)dx+k^{-1}\int_{A}p'_{\Gamma}(\rho f(x))f(x)dx.$$
\end{theorem}
\begin{corollary}[Counting of clique-component]\label{thm:clique-component}
Suppose that $A\subseteq {\mathbb R}^d$ is open with $Leb({\partial A})=0$, that $\Gamma$ is a clique with order $k\in \mathbb{N}$, and that $n r_n^d\to \rho \in (0,\infty)$. Then we have
$$E(J'_{n, A}(\Gamma))\geq p_n^{\binom{k}{2}} E(J_{n, A}(\Gamma)).$$
Moreover, we have
\begin{equation}\label{equ:6}
\lim_{n\to \infty}\left(\frac{E(J'_{n, A}(\Gamma))}{n p_n^{\binom{k}{2}}}\right)\geq k^{-1}\int_A p_{\Gamma}(\rho f(x))f(x)dx.
\end{equation}
\end{corollary}
In the following subsection, we present the basic Poisson approximation theorem for the induced $\Gamma$-subgraph count $G'_n$ on the \emph{percolated random geometric graph} $G(\mathcal{X}_n; p, r).$
Compared to the similar results for random geometric graphs in~\cite{penrose2003random}, the \emph{total variation distance} between the distribution of $G'_n$ and the corresponding Poisson distribution is tighter for percolated random geometric graphs.
\begin{theorem}\label{thm:poisson}
Let $\Gamma$ be a connected graph of order $k\ge 2$ and size $m$, and define $G'_n:=G'_{n,\mathbb{R}^d}(\Gamma)$. Suppose $(nr_n^d)_{n\ge 1}$ is a bounded sequence, and let $Z_n$ be Poisson with parameter $E(G'_n).$
Then there exists a constant $c$ such that, for all $n$,
\begin{displaymath}\label{equ:poissonappro}
d_{TV}(G'_n, Z_n)\le \left\{\begin{array}{ll} cnp_n^{2m+2-k}r^d_n &\textrm{if $k\ge 4$}\\
cnp_n^{2m-2}r_n^d &\textrm{if $2\le k<4$}
\end{array}\right.
\end{displaymath}
If $n^kr^{d(k-1)}_n \to \alpha \in(0,\infty)$, then $G'_n \xrightarrow[] {D} Po(\lambda)$ with $\lambda=\alpha \hat{\mu}_{\Gamma}$.
If $n^kr^{d(k-1)}_n \to \infty$ and $nr^d_n \to 0$, then
$\left(n^kr^{d(k-1)}_n\hat{\mu}_{\Gamma} \right)^{-1/2}(G'_n-EG'_n)\xrightarrow[]{D}\mathcal{N}(0,1).$
\end{theorem}
\subsection{Counting on random geometric complexes}
In this section, we present some preliminary results on \emph{percolated random geometric complexes}; these are the \emph{percolated} analogues of the results of Kahle~\cite{kahle2011random} on the expectation of the Betti numbers of the Vietoris-Rips complex.
For completeness, we first review some definitions related to simplicial complexes. A set of $k+1$ points, $u_0, u_1,..., u_k$, is \emph{affinely independent} if the $k$ vectors $u_1-u_0, u_2-u_0,...,u_k-u_0$ are linearly independent. A \emph{$k$-simplex} is the convex hull of $k+1$ affinely independent points. Writing $\sigma$ for the $k$-simplex, we call $k=\dim\sigma$ its dimension, and $u_0$ to $u_k$ its \emph{vertices}. Simplices of dimension $0, 1, 2, 3$ are usually referred to as \emph{vertices, edges, triangles, tetrahedra}. A \emph{face} of $\sigma$ is a simplex spanned by a subset of the vertices of $\sigma$. Since a set of $k+1$ elements has $\binom{k+1}{l+1}$ subsets of size $l+1$, $\sigma$ has this number of $l$-faces, for $0\le l\le k$. The total number of faces is
$$\sum_{l=0}^k\binom{k+1}{l+1}=2^{k+1}-1,$$
where the subtraction of 1 accounts for the fact that we do not count the empty set. We then define a \emph{simplicial complex} as a finite collection of simplices, $K$, such that
\begin{enumerate}
\item for every simplex $\sigma\in K$, every face of $\sigma$ is in $K$;
\item for every two simplices $\sigma, \tau \in K$, the intersection $\sigma\cap \tau$ is either empty or a face of both simplices.
\end{enumerate}
If the intersection of two simplices is a common face, then $(i)$ implies that it is a simplex in $K$. The \emph{dimension} of a simplicial complex $K$ is the largest dimension of any simplex in $K$. A \emph{subcomplex} of $K$ is a collection of simplices of $K$ that is itself a simplicial complex. For more details on simplicial complexes and related properties, we recommend the brief monograph~\cite{edelsbrunner2014short} by Edelsbrunner.
\smallskip
The random geometric complexes studied here are simplicial complexes built on independent and identically distributed random points in Euclidean space $\mathbb{R}^d$. In this section, we make mild assumptions on the common density $f$: $f$ is a bounded Lebesgue-measurable function and
$$\int_{\mathbb{R}^d}fdx=1.$$
The main object of study in this section is the \emph{percolated} Vietoris-Rips complex on $\{X_1, X_2,...,X_n\}$, a sequence of independent and identically distributed $d$-dimensional random variables with common density $f$; we denote the sequence by $\mathcal{X}_n=\{X_1, X_2,...,X_n\}$. The Vietoris-Rips complex was first introduced by Vietoris in order to extend simplicial homology to a homology theory for metric spaces~\cite{vietoris1927hoheren}. Eliyahu Rips applied the same complex to the study of hyperbolic groups, and Gromov popularized the name Rips complex~\cite{gromov1987hyperbolic}.
Denote the closed ball of radius $r$ centered at a point $p$ by $B(p,r)=\{x\mid ||x-p||\le r\}$, in which $||\cdot||$ is the Euclidean distance in $\mathbb{R}^d$.
The formal definition of Vietoris-Rips complex goes as follows:
\begin{definition}[Random VR complex]
The random Vietoris-Rips complex $R(\mathcal{X}_n; r)$ is the simplicial complex with vertex set $\mathcal{X}_n$ and $\sigma$ a face if
$$B(X_i,r/2)\cap B(X_j,r/2)\not= \emptyset$$
for every pair $X_i,\ X_j\in \sigma$.
\end{definition}
From the definition above, it is easy to see that the random Vietoris-Rips complex is the clique complex of $G(\mathcal{X}_n; r)$.
As we mentioned before, we want to study a \emph{percolated} version of the random Vietoris-Rips complex. Roughly speaking, the underlying graph for the random Vietoris-Rips complex is the classic random geometric graph $G(\mathcal{X}_n;r)$, while the underlying graph for the \emph{percolated} random Vietoris-Rips complex is the \emph{percolated} random geometric graph $G(\mathcal{X}_n; r,p)$.
\begin{definition}[Percolated random VR complex]
Let $G(\mathcal{X}_n; r,p)$ be the percolated random geometric graph built on the random points $\mathcal{X}_n=\{X_1,X_2,...,X_n\}$, which are $i.i.d$ with common density $f$. The percolated random Vietoris-Rips complex $R(\mathcal{X}_n;r,p)$ associated with graph $G(\mathcal{X}_n; r,p)$ is the simplicial complex with vertex $\mathcal{X}_n$ and $\sigma$ a face if
$$(X_i, X_j)\in E\left(G(\mathcal{X}_n; r,p)\right)$$
for every pair $X_i,\ X_j\in \sigma$.
\end{definition}
In other words, we build any $k$-simplex from its basic $2$-faces, i.e., edges: a face $\sigma$ exists if all of its $2$-subfaces exist.
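To make this rule concrete, the following minimal sketch (in Python; the function name \texttt{vr\_faces} and the assumption that \texttt{adj} is a boolean \texttt{numpy} adjacency matrix of the underlying percolated graph are ours and purely illustrative) lists the faces of dimension at most \texttt{max\_dim}:
\begin{verbatim}
from itertools import combinations

def vr_faces(adj, max_dim=2):
    # Faces of the (percolated) Vietoris-Rips complex up to dimension max_dim:
    # a vertex subset spans a face iff every pair in it is an edge of the graph.
    n = adj.shape[0]
    faces = [frozenset([v]) for v in range(n)]        # 0-faces: the vertices
    for dim in range(1, max_dim + 1):
        for subset in combinations(range(n), dim + 1):
            if all(adj[i, j] for i, j in combinations(subset, 2)):
                faces.append(frozenset(subset))
    return faces
\end{verbatim}
This is only a schematic illustration of the clique-complex construction and is not needed for any of the arguments below.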
\smallskip
In this paper, we only state the analogous result for the expectation of the Betti numbers in the subcritical regime.
\begin{theorem}[Betti number of random geometric VR complex,\cite{kahle2011random}]\label{thm:kahle}
For $d\ge 2$, $k\ge 1$, and $r_n=o(n^{-1/d}),$ the expectation of the $k$th Betti number $E[\beta_k]$ of the random Vietoris-Rips complex $R(\mathcal{X}_n;r)$ satisfies
$$
\frac{E[\beta_k]}{n^{2k+2}r^{d(2k+1)}}\to C_k$$
as $n\to \infty$, where $C_k$ is a constant that depends only on $k$ and the underlying density $f$.
\end{theorem}
For the percolated random Vietoris-Rips complex, we have a similar result:
\begin{theorem}[Betti number of percolated random VR complex]\label{thm:betti}
For $d\ge 2$, $k\ge 1$, and $r_n=o(n^{-1/d}),$ the expectation of the $k$th Betti number $E[\beta_k]$ of the percolated random Vietoris-Rips complex $R(\mathcal{X}_n;r,p)$ associated with graph $G(\mathcal{X}_n; r,p)$ satisfies
$$
\frac{E[\beta_k]}{n^{2k+2}r^{d(2k+1)}p^{2k(k-1)}}\to C'_k$$
as $n\to \infty$, where $C'_k$ is a constant that depends only on $k$ and the underlying density $f$.
\end{theorem}
\section{Proof of the main theorems}
\subsection{Coupling of two random geometric models: $G(\mathcal{X}_n; p, r)$ and $G(\mathcal{X}_n; r)$}
Given a vertex set $\mathcal{X}_n=\{X_1,..., X_n\}$, whose points are drawn independently from a distribution on $\mathbb{R}^2$ with probability density $f$, and two functions $r_n >0$ and $p_n\in [0,1]$, we obtain the percolated random geometric graph $G(\mathcal{X}_n; p, r)$ in the following two steps (see the illustrative sketch after the list):
\begin{itemize}
\item Put an edge between $X_i$ and $X_j$ if $\lVert X_i -X_j\rVert \le r$, for $1\le i<j \le n$, we get $G(\mathcal{X}_n; r)$;
\item For $G(\mathcal{X}_n; r)$ obtained above, we keep every edge with probability $p$ (i.e., we delete it with probability $1-p$), independently with all other edges. Then we get $G(\mathcal{X}_n; p, r)$.
\end{itemize}
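For concreteness, the two-step construction can be summarized by the following minimal simulation sketch (written in Python with \texttt{numpy}; the function and variable names are ours and purely illustrative, and for definiteness the points are sampled uniformly on $[0,1]^d$):
\begin{verbatim}
import numpy as np

def coupled_rgg(n, r, p, d=2, rng=None):
    # Two-step construction: return the point set and the adjacency matrices
    # of G(X_n; r) and of the coupled percolated graph G(X_n; p, r).
    rng = np.random.default_rng(rng)
    pts = rng.random((n, d))                    # X_1,...,X_n i.i.d. uniform on [0,1]^d
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    geo = (dist <= r) & ~np.eye(n, dtype=bool)  # step 1: G(X_n; r)
    coin = np.triu(rng.random((n, n)) < p, 1)   # one independent coin per pair {i,j}
    perc = geo & (coin | coin.T)                # step 2: keep each edge with prob. p
    return pts, geo, perc
\end{verbatim}
Returning both adjacency matrices makes the coupling between $G(\mathcal{X}_n; r)$ and $G(\mathcal{X}_n; p, r)$ explicit.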
From the procedure above, we see that there are at least two ways to obtain an \emph{induced} $\Gamma$-subgraph in $G(\mathcal{X}_n; p, r)$:
\begin{itemize}
\item If $G(\mathcal{X}_k; r)\cong \Gamma$, we can keep it in the second step;
\item If $G(\mathcal{X}_k; r) \supsetneqq \Gamma$, we can delete the unwanted edges in the second step, and get $G(\mathcal{X}_k; r, p)\cong \Gamma$.
\end{itemize}
In short, all the \emph{induced} subgraphs $G(\mathcal{X}_k; r, p)\cong \Gamma$ arise from graphs $G(\mathcal{X}_k; r)$ with at least as many edges.
\smallskip
It is easy to observe that if $(X_i, X_j)$ is an edge in $G(\mathcal{X}_n; r)$, then $(X_i, X_j)$ is an edge in $G(\mathcal{X}_n; p, r)$ with probability $p$. As a consequence, we obtain the following lemma.
\begin{lemma}[Coupling Lemma]\label{lem:cou} Suppose that $\Gamma$ is a fixed connected graph of order $k \geq 2$ and size $m\ge 1$. Then
$$Pr[G(\mathcal{X}_k; p, r)\cong \Gamma]\ge p^m Pr[G(\mathcal{X}_k; r)\cong \Gamma].$$
\end{lemma}
\begin{proof}
If $\Gamma$ is not feasible for $G(\mathcal{X}_n; r)$, i.e., $Pr[G(\mathcal{X}_k; r)\cong \Gamma]=0$, the statement holds trivially. If $G(\mathcal{X}_k; r)\cong \Gamma$, then by keeping all $m$ edges of $G(\mathcal{X}_k; r)$, which happens with probability $p^m$, we obtain $G(\mathcal{X}_k; p, r)\cong \Gamma$. This completes the proof.
\end{proof}
\begin{remark}
By Lemma~\ref{lem:cou}, every $\Gamma$-subgraph of size $m$ in $G(\mathcal{X}_n; r)$ contributes $p_n^m$ to the expected number of $\Gamma$-subgraphs in $G(\mathcal{X}_n; p, r)$.
\end{remark}
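As a numerical illustration of this remark (and of Corollary~\ref{thm:clique}), one may compare averaged triangle counts in the two coupled graphs. A minimal sketch, reusing the illustrative \texttt{coupled\_rgg} function above and taking $\Gamma=K_3$ (so $m=3$), could read as follows; the parameter values are arbitrary and only meant for a quick Monte Carlo check:
\begin{verbatim}
from itertools import combinations

def triangles(adj):
    # Number of (induced) K_3's in the graph given by a boolean adjacency matrix.
    return sum(adj[i, j] and adj[j, k] and adj[i, k]
               for i, j, k in combinations(range(adj.shape[0]), 3))

def triangle_check(n=60, r=0.15, p=0.5, trials=200, seed=0):
    # Monte Carlo comparison of E[#K_3 in G(X_n; p, r)] with p^3 E[#K_3 in G(X_n; r)].
    geo_mean, perc_mean = 0.0, 0.0
    for t in range(trials):
        _, geo, perc = coupled_rgg(n, r, p, rng=seed + t)
        geo_mean += triangles(geo) / trials
        perc_mean += triangles(perc) / trials
    return perc_mean, p**3 * geo_mean   # the two numbers should be close
\end{verbatim}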
\subsection{Proof of Theorem~\ref{thm:similar}}
Slightly modifying the proof of Theorem~\ref{thm:penrosesubgraph} in~\cite{penrose2003random}, we obtain the theorem.
\subsection{Proof of Theorem~\ref{thm:subgraph}}\label{pro:subgraph} It is easy to get
\begin{equation}\label{eq:1}
\begin{array}{rcl}
\displaystyle
Pr(G(\mathcal{X}_k; r,p)\cong \Gamma)
\displaystyle
&=&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\cong \Gamma \right)Pr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
&+&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&=&
p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
&+&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&\ge&
p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)+p^m(1-p)^{\binom{k}{2}-m}Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&\geq&
p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)+p^m(1-p)^{\binom{k}{2}}Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma).
\end{array}
\end{equation}
The first inequality requires explanation: the conditional probability $Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\supsetneqq \Gamma \right)$ cannot be computed easily, as it depends on the structures of both $\Gamma$ and $G(\mathcal{X}_k; r)$. However, we can bound this probability from below by treating $G(\mathcal{X}_k; r)$ as a complete graph, keeping the $m$ edges of $\Gamma$ and deleting all the remaining $\binom{k}{2}-m$ unwanted edges.
\smallskip
As $E(G'_{n,A}(\Gamma))=\binom{n}{k}Pr(G(\mathcal{X}_k; r,p)\cong \Gamma)$, we have
\begin{equation}\label{equ:4}
\begin{array}{rcl}
\displaystyle
E[G'_{n,A}(\Gamma)]
\displaystyle
&\geq&
\binom{n}{k}p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)+\binom{n}{k}p^m(1-p)^{\binom{k}{2}}Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma).
\end{array}
\end{equation}
Following the same idea as in the proof of Proposition 3.1 in~\cite{penrose2003random}, we get that the first and second terms on the right-hand side of (\ref{equ:4}) are asymptotic to $n^kp^mr_n^{d(k-1)}\mu_{\Gamma, A}$ and $n^kp^m(1-p)^{\binom{k}{2}}r_n^{d(k-1)}\mu'_{\Gamma, A}$, respectively.
\begin{enumerate}
\item If $p_n\equiv p$, we can rearrange the terms and obtain the result;
\item if $n^2p_n\to \alpha$, then $(1-p_n)^{\binom{k}{2}}\geq(1-p_n)^{\binom{n}{2}}\sim e^{-p_nn^2/2}\to e^{-\alpha/2}$ as $n\to \infty$;
\item if $n^2p_n\to 0$, then $(1-p_n)^{\binom{k}{2}}\geq(1-p_n)^{\binom{n}{2}}\sim e^{-p_nn^2/2}\to 1$ as $n\to \infty$.
\end{enumerate}
We complete our proof.
\subsection{Proof of Corollary~\ref{thm:tree}}
Let $m=k-1$ and use the same idea as in the proof of Theorem~\ref{thm:subgraph}; this yields Corollary~\ref{thm:tree}.
\subsection{Proof of Corollary ~\ref{thm:clique}}
If $\Gamma$ is a clique with order $k$, we have
$$Pr(G(\mathcal{X}_k; r)\supsetneqq \Gamma)=0,$$
i.e.,
$$Pr(G(\mathcal{X}_k; r,p)\cong \Gamma)=Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\cong \Gamma \right)Pr(G(\mathcal{X}_k; r)\cong \Gamma).$$
Using the same argument as in the proof of Theorem~\ref{thm:subgraph}, we can finish the proof.
\subsection{Proof of Theorem~\ref{thm:similar2}}
Almost the same argument as in the proof of Theorem~\ref{thm:penrosecomponent} in~\cite{penrose2003random} gives the theorem.
\subsection{Proof of Theorem~\ref{thm:component}}For the component counting, we have
\begin{equation}\label{eq:5}
\begin{array}{rcl}
\displaystyle
Pr(G(\mathcal{X}_k; r,p)\cong \Gamma)
\displaystyle
&=&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\cong \Gamma \right)Pr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
&+&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&+&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&=&
p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
& + &
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&+&
Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&\ge&
p^mPr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
&+&
p^m(1-p)^{\binom{k}{2}-m}Pr(G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&+&
Pr\left( G^{con}(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma),
\end{array}
\end{equation}
in which $G^{dis}(\mathcal{X}_k; r)$ means that $G(\mathcal{X}_k; r)$ is not connected to any vertex in $\mathcal{X}_n\setminus \mathcal{X}_k$, and $G^{con}(\mathcal{X}_k; r)$ means that $G(\mathcal{X}_k; r)$ is connected to some vertex in $\mathcal{X}_n\setminus \mathcal{X}_k$.
So we get
\begin{equation}\label{equ:5}
\begin{array}{rcl}
\displaystyle
E(J'_{n,A}(\Gamma))
\displaystyle
&=&
\binom{n}{k}Pr(G(\mathcal{X}_k; r,p)\cong \Gamma)\\
\displaystyle
&=&
\binom{n}{k}Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\cong \Gamma \right)Pr(G(\mathcal{X}_k; r)\cong \Gamma)\\
\displaystyle
&+&
\binom{n}{k}Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma)\\
\displaystyle
&+&
\binom{n}{k}Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{con}(\mathcal{X}_k; r)\supsetneqq \Gamma).
\end{array}
\end{equation}
For the first term on the right-hand side of (\ref{equ:5}), by Theorem~\ref{thm:penrosecomponent} we know the asymptotic result
$$n^{-1}\binom{n}{k}Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G(\mathcal{X}_k; r)\cong \Gamma \right)Pr(G(\mathcal{X}_k; r)\cong \Gamma)\to p^m k^{-1}\int_{A}p_{\Gamma}(\rho f(x))f(x)dx.$$
For the second term, we use the same argument as in the proof of Proposition 3.3 in~\cite{penrose2003random}, and get that
$$n^{-1}\binom{n}{k}Pr\left( G(\mathcal{X}_k; r,p)\cong \Gamma \mid G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma \right)Pr(G^{dis}(\mathcal{X}_k; r)\supsetneqq \Gamma) $$
is asymptotically bounded from below by
$$ p^m(1-p)^{\binom{k}{2}-m}k^{-1}\int_{A}p'_{\Gamma}(\rho f(x))f(x)dx.$$
Then, using the same arguments as in Subsection~\ref{pro:subgraph}, we finish the proof.
\subsection{Proof of Corollary~\ref{thm:clique-component}}
By Lemma~\ref{lem:cou}, we get
\begin{equation}\label{eq:2}
\begin{array}{rcl}
\displaystyle
E[J'_{n,A}(\Gamma)]
\displaystyle
&\geq&
p^{\binom{k}{2}}E[J_{n,A}(\Gamma)].
\end{array}
\end{equation}
Combining this with Theorem~\ref{thm:penrosecomponent} gives us (\ref{equ:6}).
\subsection{Proof of Theorem ~\ref{thm:poisson}}
Before proving this theorem, we present some notation related to \emph{dependency graphs}, together with an approximation result for sums of Bernoulli variables indexed by the vertices of a dependency graph.
Suppose $(I,E)$ is a graph with finite or countable vertex set $I$. For $i,j \in I$ write $i\sim j$ if $\{i,j\}\in E.$ For $i\in I$, let $\mathcal{N}_i$ denote the adjacency neighborhood of $i$, that is, the set $\{i\}\cup \{j\in I: j\sim i\}.$ We say that the graph $(I,\sim)$ is a \emph{dependency graph} for a collection of random variables $(\xi_i,i\in I)$ if for any two disjoint subsets $I_1, I_2$ of $I$ such that there are no edges connecting $I_1$ to $I_2$, the collection of random variables $(\xi_i,i\in I_1)$ is independent of $(\xi_j, j\in I_2).$ The notion of a \emph{dependency graph} is very helpful for dealing with problems involving nearly independent random variables.
\smallskip
\begin{theorem}[Arratia et al. 1989~\cite{arratia1989two}]~\label{thm:stein}
Suppose $(\xi_i, i \in I)$ is a finite collection of Bernoulli random variables with dependency graph $(I,\sim)$. Set $p_i:=E(\xi_i)=P[\xi_i=1]$, and set $p_{ij}:=E[\xi_i \xi_j].$ Let $\lambda:=\sum_{i\in I}p_i,$ and suppose $\lambda$ is finite. Let $W:=\sum_{i\in I}\xi_i$. Then
$$d_{TV}(W, Po(\lambda)) \leq \min(3, \lambda^{-1})\left(\sum_{i\in I}\sum_{j\in \mathcal{N}_i\setminus \{i\}}p_{ij} +\sum_{i\in I}\sum_{j\in \mathcal{N}_i}p_i p_j\right).$$
\end{theorem}
\smallskip
\begin{proof}[Proof of Theorem~\ref{thm:poisson}]
Clearly we have
$$G'_n=\sum_{{\bf i}\in \mathcal{I}_n}\xi_{{\bf i},n},$$
where $\bf{i}$ runs through the index set $\mathcal{I}_n$ of all $k$-subsets ${\bf i}=\{i_1,..., i_k\}$ of $\{1,2,...,n\},$ and $\xi_{{\bf i}, n}=1_{\{G(\{X_i, \ i\in {\bf i}\}; p, r)\cong \Gamma\}}.$
Then we use \emph{Stein's} method to get the error bounds for the convergence.
For each index ${\bf i} \in \mathcal{I}_n$, let $\mathcal{N}_i$ be the set of ${\bf j} \in \mathcal{I}_n$ such that ${\bf i}$ and ${\bf j}$ have at least one element in common. Let $\sim$ be the associated adjacency relation on $\mathcal{I}_n$, that is ${\bf i}\sim {\bf j}$ if ${\bf j} \in \mathcal{N}_i$ and ${\bf i}\neq {\bf j}$. Then $\xi_{{\bf i},n}$ is independent of $\xi_{{\bf j},n}$ except when ${\bf j} \in \mathcal{N}_i$. In this way, we get a dependency graph $(\mathcal{I}_n, \sim)$ for $(\xi_{{\bf i},n}, {\bf i} \in \mathcal{I}_n).$
By connectedness all vertices of any $\Gamma$-subgraph of $G(\mathcal{X}_n; p,r)$ lie within a distance $(k-1)r_n$ of one another, and hence, with $\theta$ denoting the volume of the unit ball in $\mathbb{R}^d$, we have
\begin{equation}\label{equ:poisson}
\begin{array}{rcl}
\displaystyle
E\xi_{{\bf i},n}
\displaystyle
&\leq&
p^m\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}h_{\Gamma,n}(\{x_1,...,x_k\})f(x_1)^kdx_k...dx_1\\
\displaystyle
&+&
p^m \int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}h_{\Gamma,n}(\{x_1,...,x_k\})\times\left(\prod_{i=1}^kf(x_i)-f(x_1)^k\right)\prod_{i=1}^kdx_i\\
\displaystyle
&+&
\sum_{j=m+1}^{\binom{k}{2}}\binom{j}{m}p^m(1-p)^{j-m}\int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g_{\Gamma,n}(\{x_1,...,x_k\})f(x_1)^kdx_k...dx_1\\
\displaystyle
&+&
\sum_{j=m+1}^{\binom{k}{2}}\binom{j}{m}p^m(1-p)^{j-m} \int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g_{\Gamma,n}(\{x_1,...,x_k\})\times\left(\prod_{i=1}^kf(x_i)-f(x_1)^k\right)\prod_{i=1}^kdx_i\\
\displaystyle
&\leq &
p^m\int_{B(x_1,kr_n)}\cdots\int_{B(x_1,kr_n)}f(x_1)^{k-1}dx_k...dx_2\int_{\mathbb{R}^d}h_{\Gamma,n}(\{x_1,...,x_k\})f(x_1)dx_1\\
\displaystyle
&+&
p^m \int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}h_{\Gamma,n}(\{x_1,...,x_k\})\times\left(\prod_{i=1}^kf(x_i)-f(x_1)^k\right)\prod_{i=1}^kdx_i\\
\displaystyle
&+&
p^m\frac{1-p}{p}\binom{\binom{k}{2}}{m}\int_{B(x_1,kr_n)}\cdots\int_{B(x_1,kr_n)}f(x_1)^{k-1}dx_k...dx_2\int_{\mathbb{R}^d}h_{\Gamma,n}(\{x_1,...,x_k\})f(x_1)dx_1\\
\displaystyle
&+&
p^m\frac{1-p}{p}\binom{\binom{k}{2}}{m} \int_{\mathbb{R}^d}\cdots\int_{\mathbb{R}^d}g_{\Gamma,n}(\{x_1,...,x_k\})\times\left(\prod_{i=1}^kf(x_i)-f(x_1)^k\right)\prod_{i=1}^kdx_i\\
\displaystyle
&\leq&
p^m(f_{\max}\theta(kr_n)^d)^{k-1}+p^{m-1}\binom{\binom{k}{2}}{m}(f_{\max}\theta(kr_n)^d)^{k-1}\\
\displaystyle
&\leq&
p^{m-1}(f_{\max}\theta(kr_n)^d)^{k-1}(1+C'),
\end{array}
\end{equation}
in which $C'=\binom{\binom{k}{2}}{m}$. Concerning the third term on the right-hand side of the first inequality, we sum over all possibilities for graphs which strictly contain the subgraph $\Gamma$, keeping the $m$ edges we want and deleting all other, unwanted edges; we then bound $\sum_{j=m+1}^{\binom{k}{2}}\binom{j}{m}(1-p)^{j-m}$ by $\binom{\binom{k}{2}}{m}\sum_{j=1}^{\infty}(1-p)^j$.
\smallskip
We can also get
$$card(\mathcal{N}_i)=\binom{n}{k}-\binom{n-k}{k}=k!^{-1}k^2n^{k-1}+O(n^{k-2}),$$
which leads to
$$\sum_{{\bf i}\in \mathcal{I}_n}\sum_{{\bf j} \in \mathcal{N}_i} E\xi_{{\bf i},n} E\xi_{{\bf j},n}\le c'p^{2(m-1)}n^{2k-1}r_n^{2d(k-1)}=c'p^{2(m-1)}n^{k+1}r_n^{dk}(nr_n^d)^{k-2}.$$
The next step is to bound $E[\xi_{{\bf i},n}\xi_{{\bf j},n}]$ when ${\bf i}\sim {\bf j}$ and ${\bf i}\neq {\bf j}.$
Here we have
$$h=|{\bf i}\cap {\bf j}| \in \{1,...,k-1\}.$$
By the same arguments as we bound $E\xi_{{\bf i},n}$, we can get
\begin{equation}\label{equ:poisson1}
E[\xi_{{\bf i},n}\xi_{{\bf j},n}]\le C'' p^{2m-h+1}(f_{\max}\theta(2kr_n)^d)^{2k-h-1}.
\end{equation}
Given $h\in\{1,2,...,k-1\}$, the number of pairs $({\bf i},{\bf j})\in \mathcal{I}_n\times \mathcal{I}_n$ with $h$ elements in common is
$$\binom{n}{k}\binom{k}{h}\binom{n-k}{k-h}=\Theta(n^{2k-h}).$$
Finally, we get
\begin{equation}\label{equ:poisson2}
\begin{array}{rcl}
\displaystyle
\sum_{{\bf i}\in \mathcal{I}_n}\sum_{{\bf j} \in {\mathcal{N}_i}\setminus \{{\bf i}\}} E[\xi_{{\bf i},n} \xi_{{\bf j},n}]
\displaystyle
&\le &
C\sum_{h=1}^{k-1}p^{2m-h+1}n^{2k-h}r_n^{d(2k-h-1)}\\
\displaystyle
&=&
C\sum_{h=1}^{k-1}p^{2m-h+1}n^{k+1}r_n^{dk}(nr_n^d)^{k-h-1}\\
\displaystyle
&\le&
Cp^{2m-k+2}n^{k+1}r_n^{dk}\sum_{h=1}^{k-1}(nr_n^d)^{k-h-1}\\
\displaystyle
&=&
c''p^{2m-k+2}n^{k+1}r_n^{dk},
\end{array}
\end{equation}
where the last equality holds because, by assumption, $(nr_n^d)_{n\ge 1}$ is a bounded sequence.
\smallskip
From the bounds (\ref{equ:poisson1}) and (\ref{equ:poisson2}) and Theorem~\ref{thm:stein}, we have
\begin{displaymath}
d_{TV}(G'_n, Z_n)\le\left\{\begin{array}{ll} cnp_n^{2m+2-k}r^d_n &\textrm{if $k\ge 4$}\\
cnp_n^{2m-2}r_n^d &\textrm{if $2\le k<4$}
\end{array}\right.
\end{displaymath}
From Theorem~\ref{thm:similar} we obtain the Poisson approximation, and the convergence of the standardized Po$(\lambda)$ distribution to the normal distribution as $\lambda \to \infty$ gives the remaining assertion of the theorem.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:betti}}
\begin{proof}
From the definition of the percolated random Vietoris-Rips complex, a simplex $\sigma$ exists if and only if each of its 2-faces exists, i.e., the underlying subgraph on the vertex set of $\sigma$ is a complete graph. Moreover, the 1-skeleton of the cross-polytope $\mathcal{O}_k$ has $2^2\binom{k}{2}=2k(k-1)$ 1-faces. Then, with the same arguments as in the proof of Theorem~\ref{thm:kahle} in~\cite{kahle2011random}, we can get $$E[\tilde{o}'_k]=\Theta(n^{2k+2}r^{(2k+1)d}p^{2k(k-1)}).$$
Then, slightly modifying the proof of Theorem~\ref{thm:kahle} in~\cite{kahle2011random}, we finish the proof.
\end{proof}
\section{Possible generalization}
One of the difficulties in counting \emph{induced} subgraphs of random geometric graphs arises from their complicated geometric structure. In the \emph{percolated random geometric graph} $G(\mathcal{X}_n; p, r)$, there are more $\Gamma$-subgraphs than those we can get directly from $G(\mathcal{X}_n; r)$ by keeping edges. In other words, there exist many subgraphs which are \emph{feasible} in $G(\mathcal{X}_n; p, r)$ but not in $G(\mathcal{X}_n; r)$; on the other hand, if some subgraph $\Gamma_0$ is not feasible for $G(\mathcal{X}_n; p, r)$, then of course it is not feasible for $G(\mathcal{X}_n; r)$ either.
We disturb the geometric structure of $G(\mathcal{X}_n; r)$ by deleting each edge with probability $1-p$. Roughly speaking, there are ``fewer'' edges in $G(\mathcal{X}_n; p, r)$, but ``more'' \emph{induced} subgraphs occur with positive probability.
\smallskip
In this paper, we only explore the counting of \emph{induced} subgraphs on the \emph{percolated random geometric graph}, which is the simplest \emph{soft random geometric graph}. Can the counting be extended to general \emph{soft random geometric graphs} with other, more general connection functions (e.g., see \cite{penrose2016connectivity})? We would be very interested to see more results related to this topic.
Moreover, as we mentioned already, random geometric simplicial complexes have been extensively studied in recent years. There are of course many interesting and challenging open problems in this area; see the last part of~\cite{bobrowski2014topology} for some of them. We would like to explore these further directions.
\bibliographystyle{plain}
\section{Introduction}
The unknown fundamental nature of dark matter~\cite{Garrett:2010hd} and dark energy~\cite{Frieman:2008sn} opens the way to a theory of gravity that might differ from General Relativity (GR)~\cite{Nojiri:2006ri,Nojiri:2010wj,Clifton:2011jh,Capozziello:2011et,Nojiri:2017ncd}. In particular, failure to detect different sorts of dark matter particles encourages the search for alternative scenarios. Similarly, the cosmological constant of the standard $\Lambda$CDM cosmological model suffers from a huge fine-tuning problem~\cite{Weinberg:1988cp} and several attempts are being made to overcome this issue by assuming a dynamical origin of the current accelerated expansion of the Universe~\cite{Copeland:2006wr,Bamba:2012cp}. In recent years, several models of modified gravity have been developed in order to reproduce the full expansion history of our Universe without invoking the presence of forms of matter other than standard baryonic matter and radiation~\cite{Nojiri:2006ri,Nojiri:2010wj,Clifton:2011jh,Capozziello:2011et,Nojiri:2017ncd}.
There are several different ways in which GR can be modified. For instance, one can add new combinations of the curvature invariants to the Hilbert-Einstein action (see for instance~\cite{Sotiriou:2008rp,DeFelice:2010aj,Nojiri:2010wj,Capozziello:2011et,Sebastiani:2015kfa} for the case of $F(R)$-gravity). Another approach is offered by scalar-tensor theories of gravity, wherein additional scalar fields are introduced and coupled to gravity. A particularly promising attempt in this direction is represented by Horndeski's theory of gravity~\cite{Horn}: this theory constitutes the most general class of scalar-tensor gravitational theories whose equations of motion are second-order differential equations (as in GR). Horndeski gravity provides a generic action avoiding Ostrogradski instabilities~\cite{Ostro1, Ostro2,Ostro3}. Aside from Horndeski gravity and $F(R)$ gravity, a plethora of other models have been explored. The possibility of taking some of these theories off the table will ultimately rely upon comparison with observations~\cite{Koyama:2015vza} (see e.g.~\cite{Alonso:2016suf} for the case of scalar-tensor gravity), and gravitational waves (GWs) represent an extremely promising arena in this direction.
Recently, the LIGO/Virgo collaboration observed the event GW170817, produced by the merger of a binary neutron star system~\cite{TheLIGOScientific:2017qsa}. Thereafter, a number of counterparts across the electromagnetic (EM) spectrum were observed. In particular, the optical counterpart of GW170817, the short gamma-ray burst event GRB170817A, was observed by the \textit{Fermi} Gamma-ray Burst Monitor and the Anti-Coincidence Shield on board the International Gamma-Ray Astrophysics Laboratory (\textit{INTEGRAL}) spectrometer~\cite{Monitor:2017mdv}. The association of GRB170817A with GW170817, as well as the consistency of its arising from a binary neutron star merger, was confirmed in~\cite{Monitor:2017mdv,Goldstein:2017mmi,Murguia-Berthier:2017kkn}.
The optical counterpart GRB170817A was detected within a time-delay of $\delta t=(1.734 \pm 0.054)\,s$ from GW170817. Most of the time-delay is dominated by astrophysical contributions, associated to the collapse of the hypermassive neutron star formed during the merger. In~\cite{Monitor:2017mdv}, the size of these astrophysical contributions was conservatively estimated as $\approx 10\,s$. This implies that GWs travel at a speed $c_T$ which is extremely close to the speed of light: $c_T \approx 1$~\footnote{We use natural units wherein the speed of light is set to $c=1$.}. This astonishingly simple observation has already placed severe constraints on several theories of modified gravity: any modified gravity model predicting $c_T\neq 1$ must now be seriously reconsidered, and several previously viable theories of gravity are now excluded~\cite{Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz,Baker:2017hug,Ezquiaga:2018btd} (see~\cite{Lombriser:2015sxa,Lombriser:2016yzn} for earlier important work, see also~\cite{Boran:2017rdn,Nojiri:2017hai,Arai:2017hxj,Amendola:2017orw,Visinelli:2017bny,Crisostomi:2017lbg,Langlois:2017dyl,
Gumrukcuoglu:2017ijh,Kreisch:2017uet,Bartolo:2017ibw,Dima:2017pwp,Cai:2018rzd,Pardo:2018ipy}). Still, one could imagine models where the dispersion relation of GWs is modified such that $c_T=1$ today and for the range of wavelengths detectable by the LIGO/Virgo experiment. However, one might generically expect such a model to bring along severe fine-tuning problems. With these considerations in mind, in this paper we will focus on a model with $c_T^2=1$ throughout the entire evolutionary history and at all scales. The possibility of a tiny violation of the $c_T^{2} = 1$ constraint, within the uncertainty allowed by present data from the joint GW170817/GRB170817A detection, will be entertained in a companion paper~\cite{inprep}.
A particularly interesting theory of modified gravity is mimetic gravity, proposed in 2013 by Chamseddine and Mukhanov~\cite{m1} (see also~\cite{Lim:2010yk,Gao:2010gj,Capozziello:2010uv,Zumalacarregui:2013pma} for related and important earlier work). In the original work, the conformal degree of freedom of gravity was isolated in a covariant way, through a reparametrization of the physical metric $g_{\mu \nu}$ in terms of an auxiliary metric $\tilde{g}_{\mu \nu}$ and the mimetic scalar field $\phi$:
\begin{eqnarray}
g_{\mu \nu} = -\tilde{g}_{\mu \nu}\tilde{g}^{\alpha \beta}\partial_{\alpha}\phi\partial_{\beta}\phi \, .
\label{mimetic}
\end{eqnarray}
It is easy to show that, for consistency, the following condition has to be satisfied:
\begin{eqnarray}
g^{\mu \nu}\partial_{\mu}\phi\partial_{\nu}\phi = -1 \, .
\label{mimeticconstraint}
\end{eqnarray}
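It is instructive to spell out this one-line consistency check explicitly: from Eq.~(\ref{mimetic}), the inverse metric is $g^{\mu \nu}=-\tilde{g}^{\mu \nu}/\left(\tilde{g}^{\alpha \beta}\partial_{\alpha}\phi\partial_{\beta}\phi\right)$, so that
\begin{eqnarray}
g^{\mu \nu}\partial_{\mu}\phi\partial_{\nu}\phi = -\frac{\tilde{g}^{\mu \nu}\partial_{\mu}\phi\partial_{\nu}\phi}{\tilde{g}^{\alpha \beta}\partial_{\alpha}\phi\partial_{\beta}\phi} = -1 \, ,
\end{eqnarray}
which is precisely the constraint of Eq.~(\ref{mimeticconstraint}).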
In~\cite{m1}, it was shown that the equations of motion resulting from the reparametrization of Eq.~(\ref{mimetic}) mimic a pressureless fluid on cosmological scales, which can be identified with dark matter. Subsequently it was realized that the theory is related to GR via a non-invertible disformal transformation involving the mimetic field $\phi$, thus explaining why the dynamics of the theory are modified with respect to GR~\cite{Deruelle:2014zza,Domenech:2015tca,Achour:2016rkg}. A simple extension of the original model featuring a potential for the mimetic field, $V(\phi)$, has been shown to also be able to mimic dark energy and provide an early-time inflationary era, as well as to allow for bouncing solutions~\cite{m2}. In~\cite{m2}, it was also argued that the mimetic constraint Eq.~(\ref{mimeticconstraint}) can be enforced at the level of the action through a Lagrange multiplier term. An incomplete list of works examining astrophysical and cosmological issues in mimetic gravity can be found in~\cite{Golovnev:2013jxa,Barvinsky:2013mea,Nojiri:2014zqa,Saadi:2014jfa,Capela:2014xta,Mirzagholi:2014ifa,Leon:2014yua,
Haghani:2015iva,Matsumoto:2015wja,Momeni:2015gka,Myrzakulov:2015sea,Astashenok:2015haa,Myrzakulov:2015qaa,
Myrzakulov:2015kda,Odintsov:2015wwp,Hammer:2015pcx,Ramazanov:2016xhp,Nojiri:2016ppu,Nojiri:2016vhu,Sadeghnezhad:2017hmr,
Baffou:2017pao,Vagnozzi:2017ilo,Bouhmadi-Lopez:2017lbx,Shen:2017rya,Nojiri:2017ygt,Dutta:2017fjw,Golovnev:2018icm,Langlois:2018jdg,
Brahma:2018dwx,deHaro:2018sqw,Zhong:2018tqn,Chamseddine:2018qym,Chamseddine:2018gqh,Zlosnik:2018qvg,Nashed:2018qag}, while a recent review can be found in~\cite{vase}.
A comment on local gravity tests of mimetic gravity is in order. We notice that, to date, there is no work thoroughly examining constraints on mimetic gravity from local gravity, i.e. from modifications to the Newtonian potential within the Solar System. Efforts towards this direction were nonetheless carried out in~\cite{Babichev:2016jzg}, which studied local gravity constraints within a mimetic model closely related to the one we will study in this work [i.e. setting $a=b=0$ in the action given by Eq.~(\ref{lagrangian})]. By studying the precession of Mercury's perihelion, in~\cite{Babichev:2016jzg} the constraint $c \lesssim 10^{-18}$, where $c$ is the Lagrangian parameter appearing in the model we will study [Eq.~(\ref{lagrangian})], was found. In general, however, we note that mimetic models suffer from caustic instabilities on small (galactic and subgalactic) scales, which imply that in order for the theory to be mathematically consistent it requires an appropriate ``completion" which removes these caustic instabilities. So far, the issue of how to remove the caustic instabilities has yet to receive a definite solution, although work in this direction has been pursued~\cite{Capela:2014xta,Babichev:2016jzg,Babichev:2017lrx}. Nonetheless, as noted in~\cite{Babichev:2016jzg}, the details of how the mimetic theory is ``completed" to avoid caustic instabilities on small scales will then inevitably affect conclusions concerning local gravity constraints. In the absence of a compelling caustic-free mimetic theory which would lead to equally compelling local gravity constraints, herein we conservatively choose to refrain from discussing local gravity constraints on mimetic gravity further. The issue of local constraints on mimetic gravity, nonetheless, is admittedly an important open problem to which it is definitely worth returning within a more thorough study, which however falls beyond the scope of our work.~\footnote{We further note that no work so far has examined the possibility of invoking screening mechanisms to evade local gravity constraints if necessary. Vainshtein-like screening mechanisms generally require non-linear kinetic terms which are likely to be strongly constrained following the joint GW170817/GRB170817A detection. It is moreover conceivable that it would be in any case hard to implement screening mechanisms in mimetic gravity given that, at least in the original scenario, the mimetic field is non-dynamical and constrained (see however also~\cite{Ganz:2018vzg} examining the gravitational slip in mimetic theories consistent with GW170817/GRB170817A). We choose therefore not to discuss this issue further in our paper.}
A particularly appealing variant of the original mimetic theory starts from a ``seed'' Horndeski action rather than the Einstein-Hilbert one: in other words, the mimetic constraint Eq.~(\ref{mimeticconstraint}) is enforced on the scalar degree of freedom of Horndeski gravity through a Lagrange multiplier term in the action. The resulting mimetic Horndeski theory has been proposed in~\cite{Arroja:2015wpa} and subsequently studied in e.g.~\cite{Rabochaya:2015haa,Arroja:2015yvd,Cognola:2016gjy,Arroja:2017msd}. On a cosmological background, the theory features a fluid mimicking dark matter. However, at the perturbative level, this theory features some problems. In fact, the mimetic constraint kills the wave-like parts of the Horndeski scalar degree of freedom and removes the scalar degree of freedom of the theory. This implies that the speed of scalar perturbations (the sound speed $c_s$) vanishes. It is worth clarifying that a vanishing sound speed is problematic only if one wishes to perform inflation with the mimetic field, because the resulting perturbations would fail in explaining structure formation, as explained in~\cite{m2}. In fact, if $c_s=0$, perturbations of the mimetic field do not propagate in space. Quantizing such a field, and consequently generating vacuum quantum fluctuations, is problematic for many reasons (for instance, it would be hard to satisfy the appropriate commutation relation with the conjugate momentum). This in turn hinders the generation of perturbations which will then grow under gravitational instability to form the large-scale structure, one of the most important outcomes of successful inflation. Perhaps more importantly, a vanishing sound speed of the inflaton is also problematic from the observational point of view. In fact, measurements of the CMB temperature and polarization anisotropies from the \textit{Planck} satellite (and in particular the absence of detection of primordial non-Gaussianity) favour a speed of sound for the inflaton $c_s=1$, with a 95\% confidence level lower bound of $c_s > 0.024$~\cite{Ade:2015lrj}. This result excludes $c_s=0$ at high significance~\cite{Ade:2015lrj}. We wish to stress nonetheless that a vanishing speed of sound is strictly speaking only a problem if one wishes to perform inflation with the mimetic field, and not if one is only aiming at describing dark matter (for which $c_s=0$ is instead quite natural, although a very tiny but non-zero sound speed could nonetheless be desirable in order to possibly avoid caustic instabilities~\cite{Capela:2014xta,Babichev:2016jzg,Babichev:2017lrx}).
At any rate, it is worth considering modifications to the original mimetic scenario which allow for a non-vanishing sound speed. An obvious way to address this issue is to break the Horndeski structure of the theory, thereby removing the special tuning which guarantees that the equations of motion are at most of second order. Nonetheless, the presence of the mimetic constraint prevents the appearance of higher-than-second-order derivatives in the equations of motion. In this work, we shall follow this procedure, and consider the mimetic model proposed in~\cite{Cognola:2016gjy}, obtained by breaking the Horndeski structure of a starting mimetic Horndeski model, thus allowing for a non-zero sound speed. The model is theoretically appealing as it appears in the low-energy limit of projectable Ho\v{r}ava-Lifshitz gravity, a well-motivated candidate theory of quantum gravity. We will show that within this model, after imposing constraints on the speed of GWs arising from the detection of GW170817/GRB170817A, it is possible to mimic the $\Lambda$CDM evolutionary history wherein the Universe is filled with dark matter in agreement with observations.
This paper is organized as follows. In Sec.~\ref{sec:background} we define the action of the mimetic model we consider, and derive its equations of motion on a flat FLRW background. In Sec.~\ref{sec:perturbations}, we then perturb the FLRW line-element, first considering scalar perturbations (Subsec.~\ref{subsec:scalar}) which allow us to derive the sound speed $c_s$, and subsequently tensor perturbations (Subsec.~\ref{subsec:tensor}) which allow us to derive the gravitational wave speed $c_T$ and hence consider constraints on the parameters of the model from the joint GW170817/GRB170817A detection. In Subsec.\ \ref{subsec:stability} we address the problem of gradient and ghost instabilities of the theory. In Sec.~\ref{sec:late_time} we then consider late-time solutions which mimic dark matter and dark energy in agreement with observations, first in a simplified vacuum case (Subsec.~\ref{subsec:vacuum}), and then in a realistic setting adding radiation and baryonic matter (Subsec.~\ref{subsec:adding}), and finally calculate the resulting age of the Universe. We summarize our main findings and provide concluding remarks in Sec.~\ref{sec:conclusions}.
\section{Background equations}
\label{sec:background}
\noindent We consider the mimetic theory defined by the following action~\footnote{See also~\cite{Rinaldi:2016oqp,Diez-Tejedor:2018fue,Casalino:2018mna}, where related Horndeski and beyond Horndeski models, as well as their ability to mimic dark matter on cosmological and galactic scales, were studied.}:
\begin{eqnarray}\label{lagrangian}\nonumber
S&=& \int d^4 x \,\sqrt{-g} \Big[ R(1+2aX) -{c\over 2}(\square\phi)^{2}+{b\over 2}(\nabla_{\mu}\nabla_{\nu}\phi)^{2}-{\lambda\over 2}(2X+1) \\
&-&V+ \mathcal{L}_m \Big]\,,
\end{eqnarray}
where we set $16\pi G_N=1$ ($G_N$ is Newton's constant), $g$ is the determinant of the metric tensor $g_{\mu\nu}(x^i)$, $\mathcal{L}_m$ is the action of standard matter and radiation, $\phi$ is the mimetic field, $V\equiv V(\phi)$ is a potential for the mimetic field, and $X\equiv(1/2) g_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi$ is the kinetic term of the field. The Lagrange multiplier $\lambda$ is introduced to enforce the mimetic constraint Eq.~(\ref{mimeticconstraint}) on the mimetic field, while $a\,,b\,,c$ are constant parameters. Note that when $b=c=4a$ we recover a mimetic Horndeski model~\cite{Horn}. Breakage of the Horndeski structure of the action is necessary in order for scalar perturbations to propagate~\cite{Arroja:2015yvd,Cognola:2016gjy}. However, we will see that, on a Friedmann-Lema\^itre-Robertson-Walker (FLRW) background, the model preserves the solutions of the corresponding mimetic Horndeski Lagrangian up to a (constant) rescaling of the effective Planck mass of the theory.
The action in Eq.~(\ref{lagrangian}) was first considered in~\cite{Cognola:2016gjy}, by explicitly breaking the Horndeski structure of a starting mimetic Horndeski model. In~\cite{Cognola:2016gjy}, it was argued that the model is related to the low-energy limit of Ho\v{r}ava-Lifshitz gravity~\cite{Horava:2009uw}, a candidate theory of quantum gravity which achieves power-counting renormalizability by explicitly breaking diffeomorphism invariance. In fact, the motivation for studying the original mimetic Horndeski model in~\cite{Cognola:2016gjy} was related to attempts to achieve power-counting renormalizability \`{a} la Ho\v{r}ava, albeit via a dynamical rather than explicit breaking of diffeomorphism invariance, through a non-standard coupling of curvature to the energy-momentum tensor of an exotic fluid. In these ``covariant renormalizable gravity'' theories, issues of infrared strong-coupling in Ho\v{r}ava-Lifshitz gravity, due to the appearance of an unphysical extra mode related to the explicit breaking of diffeomorphism invariance~\cite{Blas:2009yd,Blas:2009qj,Blas:2009ck}, are circumvented.
Let us now consider the equations of motion of our mimetic model, which are obtained by varying the action with respect to the metric, the Lagrange multiplier, and the mimetic field. Varying the action with respect to the metric we get
\begin{eqnarray}\label{einstein_equations}\nonumber
&&(1+2aX)G_{\mu\nu}+\nabla_{\mu}\phi\nabla_{\nu}\phi\left(aR-{\lambda\over 2}\right)\\\nonumber
&&-\frac12 g_{\mu\nu}\left[ {b\over 2}\phi_{\alpha\beta}\phi^{\alpha\beta} -{c\over 2}(\square \phi)^{2}-{\lambda\over 2}(2X+1)-V \right]\\\nonumber
&&+2a(g_{\mu\nu}\square X-\nabla_{\mu}\nabla_{\nu}X)\\\nonumber
&&-{b\over 2}g^{\alpha\beta}\left[ \nabla_{\alpha}(\phi_{\mu\beta}\nabla_{\nu}\phi)+
\nabla_{\alpha}(\phi_{\nu\beta}\nabla_{\mu}\phi)-\nabla_{\alpha}(\phi_{\mu\nu}\nabla_{\beta}\phi) \right]+b\phi^{\alpha}_{\,\,\,\mu}\phi_{\alpha\nu}\\
&&+{c\over 2}\left[ \nabla_{\nu}\phi\nabla_{\mu}(\square \phi) +\nabla_{\mu}\phi\nabla_{\nu}(\square \phi)-g_{\mu\nu}g^{\alpha\beta}\nabla_{\alpha}(\square \phi\nabla_{\beta}\phi) \right]=\frac{1}{2} T_{\mu \nu}\,,
\end{eqnarray}
where $G_{\mu\nu}$ is Einstein's tensor and $T_{\mu\nu}$ represents the stress tensor of standard baryonic matter and radiation.
Here we set $\phi_{\mu\nu}\equiv\nabla_{\nu}\nabla_{\mu}\phi$. Variation of the action with respect to the field $\phi$ yields:
\begin{align}
& (\lambda-2 a R) (\nabla_\mu \nabla^\mu\phi) + (\nabla_\mu \phi)(\nabla^\mu \lambda) - 2 a (\nabla_\mu R) (\nabla^\mu \phi)\nonumber\\
&+ b (\nabla_\nu \nabla_\mu \nabla^\nu \nabla^\mu \phi) - c (\nabla_\mu \nabla^\mu \nabla_\nu \nabla^\nu \phi) - \frac{\partial V(\phi)}{\partial \phi}=0\,.
\label{phigen}
\end{align}
Finally, variation with respect to the Lagrange multiplier $\lambda$ enforces the mimetic constraint:
\begin{eqnarray}
X=-\frac12\,.
\end{eqnarray}
We choose to work within a flat FLRW space-time, whose line-element is given by:
\begin{equation}
ds^2=-dt^2+A(t)^2(dx^2+dy^2+dz^2)\,,
\end{equation}
In the above, $A\equiv A(t)$ is the cosmological scale factor and is a function of time only. We have chosen to denote the scale factor by $A$ instead of $a$ in order to avoid possible confusion with the Lagrangian parameter $a$ in Eq.~(\ref{lagrangian}). On this background, the mimetic constraint immediately allows for the identification
of the field with the cosmological time (up to a constant), namely:
\begin{eqnarray}\label{mim_phi}
\phi=t\,.
\end{eqnarray}
With this identification, from the $(0,0)$ and $(1,1)$ components of (\ref{einstein_equations}) we obtain the equations of motion (EOMs):
\begin{align}
&(-6c+24a)\dot H+(36a+9c+12-9b)H^2-2V-2\lambda -2\rho_{\rm m}=0\,,\label{ttcomp}\\
&3H^2+2\dot H={2V-2P_{\rm m}\over 4-4a-b+3c}\,,\label{final}
\end{align}
where $H\equiv H(t)=\dot A/A$ is the Hubble parameter. Here, we denote the time derivative with a dot, and $\rho_{\rm m}$ and $P_{\rm m}$ correspond to the combined energy density and pressure of baryonic matter and radiation.
The Klein-Gordon (KG) equation of the field, Eq.~(\ref{phigen}), taking into account Eq.~(\ref{mim_phi}), reads:
\begin{eqnarray}\label{eomlambda_equiv}
\frac{1}{A^3} \frac{d}{d t} \left[A^3 \left(\lambda + (3b - 24a) H^2 + (3c - 12a)\dot{H}\right)\right] = - \frac{dV}{dt}\,,
\end{eqnarray}
while the continuity equation $\nabla_{\mu}T^{\mu\nu}=0$ for baryonic matter and radiation assumes the standard form:%
\begin{equation}\label{continuity_matter}
\dot\rho_{\rm m}+3H(\rho_{\rm m}+P_{\rm m})=0\,.
\end{equation}
Note that when the Horndeski structure for the $\phi$ sector of the action Eq.~(\ref{lagrangian}) is recovered, i.e. when $c=4a$, the $\dot{H}$ term in Eq.~(\ref{eomlambda_equiv}) disappears, leaving a second order differential equation.
Given the equation of state of the matter fluid and its continuity equation (\ref{continuity_matter}), once the form of the potential $V$ is chosen, the system of equations (\ref{ttcomp},\ref{final}) can be solved for $A(t)$ and $\lambda$. Alternatively, one can use Eq.~(\ref{eomlambda_equiv}) together with one of Eqs.~(\ref{ttcomp}, \ref{final}). In this case, from Eq.~(\ref{eomlambda_equiv}) we get:
\begin{equation}
\rho_\text{df} (t) = \frac{C}{A(t)^3}+\frac{3}{A(t)^3}\int^t V(t') A(t')^3 H(t') dt'\,,\label{rhodf}
\end{equation}
where $C>0$ is an integration constant which sets the amount of mimetic dark matter, as the corresponding contribution to the energy density decays as $A^{-3}$, as expected for a pressureless component. In the above expression, $\rho_\text{df}$ is defined by:
\begin{eqnarray}
\rho_\text{df}:=\lambda+V+(3 b - 24 a) H^2 + (3 c - 12 a)\dot H\label{rhoP}\,,
\end{eqnarray}
%
and can be read as an effective energy density of an induced \textit{dark fluid} with effective pressure
\begin{eqnarray}
P_\text{df}:=-V\,.\label{PP}
\end{eqnarray}
It is then easy to verify that:
\begin{eqnarray}\label{continuity_df}
\dot\rho_{\text{df}} + 3 H (\rho_\text{df} + P_\text{df}) = 0\,.
\end{eqnarray}
Rearranging Eqs.~(\ref{ttcomp})-(\ref{final}) we obtain:
\begin{eqnarray}\label{friedmann}
6H^2 &=&\frac{4}{4-b + 3c - 4a}\, \left(\rho_\text{df} + \rho_\text{m}\right) \,,\nonumber\\
-4\dot H - 6H^2&=&\frac{4}{4-b + 3c - 4a} \left(P_\text{df} + P_\text{m}\right)\,.
\end{eqnarray}
We recognize the above as being Friedmann-like equations, with the Planck mass rescaled by a factor $(4-b+3c-4a)/4$. The quantity by which the Planck mass is rescaled determines the effective Newton constant. Enforcing that the rescaling is positive implies:
\begin{equation}
4-b+3c-4a>0\,.\label{condizione}
\end{equation}
Notice that for $a=b=0$ we recover the results of~\cite{m2}, which extended the original mimetic action by a term proportional to $(\Box \phi)^2$.
We immediately see that a constant potential $V$ in Eqs.~(\ref{rhoP},\ref{PP}) can be exploited to model dark matter and dark energy through the corresponding fluid. For more complex potentials, given in the action as functions of $\phi$ (and therefore of $t$ thanks to the mimetic constraint) and not as functions of the scale factor $A$, the dark fluid will model various types of fluids while leaving the dark matter sector unchanged.
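To make the last statement concrete, the short Python/sympy sketch below (purely illustrative, using nothing beyond the definitions of Eqs.~(\ref{rhodf})--(\ref{continuity_df})) verifies that, for a constant potential $V=2\Lambda$, the dark fluid is the sum of a pressureless component scaling as $A^{-3}$ and a cosmological-constant-like component, and that it obeys the dark-fluid continuity equation:
\begin{verbatim}
# Symbolic check: for V = 2*Lambda the dark fluid of Eq. (rhodf) is
# rho_df = C/A^3 + 2*Lambda, and it satisfies Eq. (continuity_df).
import sympy as sp

t, C, Lam = sp.symbols('t C Lambda', positive=True)
A = sp.Function('A', positive=True)(t)       # scale factor A(t)
H = sp.diff(A, t) / A                        # Hubble parameter
V = 2 * Lam                                  # constant potential

rho_df = C / A**3 + V                        # candidate dark-fluid density
P_df = -V                                    # Eq. (PP)

# Differential form of Eq. (rhodf): d/dt (A^3 rho_df) = 3 V A^3 H
print(sp.simplify(sp.diff(A**3 * rho_df, t) - 3 * V * A**3 * H))  # -> 0
# Dark-fluid continuity equation, Eq. (continuity_df)
print(sp.simplify(sp.diff(rho_df, t) + 3 * H * (rho_df + P_df)))  # -> 0
\end{verbatim}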
In the next sections we will explore the perturbations of this model and find physical constraints on the Lagrangian parameters. Then we will analyse the background solutions for different forms of the potential $V$.
\section{Perturbations on a FLRW background}
\label{sec:perturbations}
\noindent As mentioned above, one of the main problems in mimetic gravity is the vanishing sound speed, implying the non-propagation of scalar perturbations. The problem persists even in mimetic Horndeski gravity. We have seen that, by breaking the Horndeski form of the $\phi$ sector, one can obtain a non-vanishing sound speed~\cite{Arroja:2015yvd,Cognola:2016gjy}. However, this comes at the risk of modifying the speed of gravitational waves $c_T$, which in the original mimetic gravity model is identically equal to the speed of light, $c_T=1$. Enforcing that $c_T$ remains equal to the speed of light when considering the mimetic model of Eq.~(\ref{lagrangian}) will strongly constrain the Lagrangian parameters. We will now discuss these issues in detail, and begin by computing the sound speed $c_s$ and the gravitational wave speed $c_T$.
\subsection{Scalar perturbations}
\label{subsec:scalar}
We begin by considering scalar perturbations around a flat FLRW line-element. In Newtonian gauge, the perturbed metric reads:
\begin{eqnarray}
ds^{2}=-\left[1+2\Phi(t,x,y,z)\right]dt^{2}+A(t)^{2}\left[1-2\Psi(t,x,y,z)\right]\delta_{ij}dx^{i}dx^{j}\,.
\end{eqnarray}
We perturb the mimetic field and the Lagrange multiplier field as follows:
\begin{eqnarray}
\phi=t+\delta\phi(t,x,y,z)\,,\quad \lambda=\lambda_{0}(t)+\lambda_{1}(t,x,y,z)\,,
\end{eqnarray}
where $|\delta\phi/t|\,,|\lambda_1/\lambda|\ll 1$. The mimetic constraint yields $\Phi=\delta\dot\phi$ and the $i\neq j$ components of the perturbed field equations [Eq.~(\ref{einstein_equations})] give:
\begin{eqnarray}
(1-a)\Psi-\frac{b}{2}H\delta\phi+\left( a-\frac{b}{2}-1\right)\delta\dot\phi=0\,.
\end{eqnarray}
Substituting this result into any of the $tj$ components of Eq.~(\ref{einstein_equations}) leads to:
\begin{eqnarray}\label{spert}\nonumber
\delta\ddot\phi+H\delta\dot\phi+\left[\dot H + \frac{c_s^2 (\rho_{\rm m} + P_\text{m})}{b-c}\right]
\delta\phi-{c_{s}^{2}\over A^{2}}\nabla^{2}\delta\phi= -\frac{A(t) c_s^2}{b-c} (\rho_{\rm m} + P_\text{m}) v_{\rm m}\,,\\
\end{eqnarray}
where
\begin{eqnarray}\label{sound}
c_{s}^{2}={ 2(b-c)(a-1)\over (2a-b-2)(4-4a-b+3c)}\,,
\end{eqnarray}
is the squared speed of sound. In the expression above, $v_{\rm m}$ is the matter velocity.
Recall that we have defined $\rho_{\rm m}$ and $P_{\rm m}$ to include both the baryonic matter and radiation components, although in principle one could separate them in the above discussion: that is, the term $(P_\text{m} + \rho_\text{m}) v_\text{m}$ should really be considered as a sum over the baryonic matter and radiation contributions. Notice also that in the limit $a=0, b=0$ and $c=-2\gamma$ we recover the results of~\cite{m2}. Notice finally that these results are independent of the choice of the field potential $V$.
\subsection{Tensor perturbations}
\label{subsec:tensor}
Let us now turn our attention to tensor perturbations. The line-element is perturbed as:
\begin{eqnarray}\nonumber
ds^2 &=& -dt^2 +A(t)^2(1+h_{+})dx^2+2A(t)^2h_{\times}dxdy+A(t)^2(1-h_{+})dy^2\\
&+&A(t)^2dz^2\,,
\end{eqnarray}
where we have chosen the TT-gauge. We denote by $h_{\times,+}$ the two polarisation states of the linear tensor perturbations. By inserting this into the Einstein equations and by using the unperturbed equations, we find the perturbed equation at the first order, where $h=h_{\times}$ or $h=h_{+}$:
\begin{eqnarray}\label{tpert}
\ddot h+3H\dot h-{2(1-a)\over 2-2a+b}{1\over A(t)^{2}}{\partial^{2}h\over \partial z^{2}}=0\,.
\end{eqnarray}
From the above, we read off the squared gravitational wave speed
\begin{eqnarray}\label{tensor}
c_{T}^{2}={2(1-a)\over 2-2a+b}\,.
\end{eqnarray}
Clearly, the gravitational wave speed is in general different from the speed of light. Notice furthermore that the tensor perturbations are not affected by the presence of standard matter.
We clearly see that in order to satisfy the recent constraint from GW170817/GRB170817A, which enforces $c_T^2 \simeq 1$, we have to consider $\vert b \vert \ll 1$. In fact, the requirement that $c_T \equiv 1$ forces $b$ to be identically $0$. We will consider further implications of these findings in the next sections.
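As a rough, order-of-magnitude illustration of how this bound propagates onto $b$ through Eq.~(\ref{tensor}), one may use the following sketch; the value of $a$ is an arbitrary example (only $a<1$ is assumed), and $10^{-15}$ is taken as the order of magnitude of the bound on $\vert c_T - 1\vert$:
\begin{verbatim}
# c_T^2 from Eq. (tensor), and the induced order-of-magnitude bound on b.
def cT2(a, b):
    return 2.0 * (1.0 - a) / (2.0 - 2.0 * a + b)

a   = 0.3            # arbitrary example value (only a < 1 is assumed)
eps = 1e-15          # order of magnitude of the bound on |c_T - 1|

# Since c_T^2 - 1 = -b/(2 - 2a + b), to leading order |b| <~ 4 (1 - a) eps
b_max = 4.0 * (1.0 - a) * eps
print(b_max)                            # ~ 2.8e-15
print(abs(cT2(a, b_max)**0.5 - 1.0))    # indeed of order eps
\end{verbatim}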
\subsection{Ghost and gradient instabilities}
\label{subsec:stability}
The numerical results obtained below show that, for $V=$ const, we have $\rho_{\rm df}+P_{\rm df}=\rho_{\rm df}(1+\omega_{\rm df})>0$ (see Fig.~\ref{fig:ev_p1}). This implies that the dark fluid does not violate the null energy conditions, at least when $V$ is constant. Nevertheless, this does not guarantee that ghost and gradient instabilities are absent. As shown, for instance, in \cite{Ijjas:2016pad}, a similar mimetic model has unavoidable scalar gradient instabilities while ghost instabilities disappear in certain areas of the parameter space. Our case, however, is more complicated because of the non-minimal coupling to gravity of the kinetic term $X$, see Eq.~(\ref{lagrangian}). The effects of the non-minimal coupling can be spotted by inspecting Eq.~(\ref{spert}), where the speed of sound $c_{s}^{2}$ is modulated by the factor $(b-c)^{-1}$, but only in the matter sector, i.e. an effective sound speed $c_{s,{\rm eff}}^2=c_s^2/(b-c)$ appears. If $b=0$ and $c>0$ we see that $c_s^2$ and $c_{s,{\rm eff}}^2$ are always opposite in sign.
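The sign pattern just described can be checked directly from Eq.~(\ref{sound}); in the short sketch below the parameter values are arbitrary examples satisfying $a<1$, $b=0$, $c>0$:
\begin{verbatim}
# c_s^2 from Eq. (sound) and the matter-sector effective sound speed
# c_s^2/(b - c); example values only.
def cs2(a, b, c):
    return 2.0*(b - c)*(a - 1.0) / ((2.0*a - b - 2.0)*(4.0 - 4.0*a - b + 3.0*c))

a, b, c = 0.3, 0.0, 0.5          # a < 1, b = 0, c > 0
print(cs2(a, b, c))              # negative
print(cs2(a, b, c) / (b - c))    # positive: opposite sign, as stated above
\end{verbatim}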
To shed further light on the behaviour of perturbations, we consider the action to quadratic order in both scalar and tensor perturbations. For the scalar sector a straightforward calculation gives:
\begin{equation}
S^{(2)}_\text{S} =2 (1-a) \int d^4x A^4 H^2 \left[- \frac{1}{c_s^2} \dot{\delta \phi}^2 + \frac{1}{A^2}\left(\partial_k \delta \phi\right) \left(\partial^k \delta \phi\right) + \ldots \right]\,,
\label{quadraticscalar}
\end{equation}
where $\ldots$ stands for terms proportional to $\delta \phi \delta \phi$ and $\delta \phi \dot{\delta \phi}$ which are not important for the stability analysis. For the tensor sector we obtain:
\begin{equation}
S^{(2)}_\text{T} = \frac{1}{4}(1-a) \int d^4x A^3 \left[ \frac{\dot h^2}{c_T^2} - \frac{1}{A^2}\left(\frac{\partial h}{\partial z}\right)^2 \right]\,.
\label{quadratictensor}
\end{equation}
From Eq.~(\ref{quadratictensor}) one sees that in order for tensor perturbations to not suffer from instabilities, we must set $a<1$, so that both terms on the right-hand side of the quadratic action for tensor perturbations appear with the right sign. However, this choice leads, as can be straightforwardly seen from Eq.~(\ref{quadraticscalar}), to gradient instability in the scalar sector and, depending on the sign of $c_{s}^{2}$, also to ghost instability. When $b=a=0$, we recover the same result as~\cite{Ijjas:2016pad}, up to some irrelevant normalisation factors. Thus, as suggested also in this work, the only way to avoid ghost instabilities is to choose $c_{s}^{2}<0$\footnote{In \cite{Ijjas:2016pad} the quantity $(2-3\gamma)/\gamma$ corresponds to our $c_{s}^{-2}$. }. By combining Eqs.~(\ref{friedmann},\ref{sound}) we see that, for $b=0$, $c_{s}^{2}$ and $c$ must have opposite signs. Thus, the conditions $a<1$ and $c>0$ guarantee that the theory is free from ghost instabilities in both the scalar and tensor sectors, although gradient instabilities are still present in the scalar sector.
However, as discussed above, the instability might be tamed by the fact that $c_{s,{\rm eff}}^2=c_s^2/(b-c)>0$ when $c_s^2<0$ and $c>0$ in the limit where $b\rightarrow0$, i.e. the effective sound speed in the presence of matter non-minimally coupled to gravity might actually be positive. A thorough analysis of these perturbed equations goes beyond the scope of this paper but is certainly worth investigating in a follow-up work.
\section{Late-time cosmological evolution}
\label{sec:late_time}
\noindent The main goal of this section is to find solutions mimicking dark matter and/or dark energy in the late Universe, while respecting observational bounds on the speed of scalar and tensor perturbations. To simplify the discussion, we will force the gravitational wave speed $c_T$ to be identically equal to the speed of light, $c_T=1$: as we have seen previously, this implies setting the Lagrangian parameter $b$ to $0$. In other words, a term of the form $\nabla^{\mu}\nabla^{\nu}\phi\nabla_{\mu}\nabla_{\nu}\phi$ is forbidden from appearing in the action, Eq.~(\ref{lagrangian}).
\subsection{Vacuum case}
\label{subsec:vacuum}
Let us begin by considering the idealized case of vacuum, where no cosmological matter is present, so that the dark fluid density and pressure are by definition the effective ones. From the first equation in Eq.~(\ref{friedmann}), combined with Eq.~(\ref{rhoP}) and imposing $b=0$, we get:
\begin{equation}
6H(t)^2 =\frac{4}{4+ 3c - 4a}\left[\frac{C}{A(t)^3}+\frac{3}{A(t)^3}\int^t V(t') A(t')^3 H(t') dt'\right]\,,\label{UNO}
\end{equation}
while the effective Equation of State (EoS) parameter of the Universe, following Eqs.~(\ref{rhoP},\ref{PP}), is given by the following:
\begin{eqnarray}
\omega_\text{df}:=\frac{P_\text{df}}{\rho_\text{df}} = \frac{-V}{V+\lambda - 24 a H^2 + (3 c - 12 a)\dot H} \,.\label{eos}
\end{eqnarray}
Now, given a specific form for the scale factor $A(t)$, and therefore a specific form for the Hubble parameter $H(t)$, it is possible to reconstruct from Eq.~(\ref{UNO}) the potential $V$ as a function of $t$ and therefore of $\phi$. Moreover, the on-shell form of $\lambda$ can be inferred from Eq.~(\ref{eos}).
Observations reveal that the Universe today is dominated by dark matter and dark energy. Recent measurements of the CMB temperature and polarization anisotropies and their cross-correlations, in combination with geometrical measurements from Baryon Acoustic Oscillations and Supernovae Type-Ia luminosity distance measurements, suggest that the equation of state of dark energy is extremely close to the cosmological constant value $\omega=-1$, although small deviations from $-1$ either in the quintessence or the phantom region are still allowed by data~\cite{Planck2015}. In particular, the 68\% confidence level allowed region for the dark energy equation of state is given by~\cite{Planck2015}:
\begin{equation}
-1.0051<\omega_{\rm DE} < -0.961\,,
\label{omega_df_exp_constr}
\end{equation}
Since it is expected that the future evolution of the Universe be dominated by dark energy, let us consider the far-future evolutionary history in our model where we neglect the contribution of the dark matter, setting $C=0$ in Eq.~(\ref{UNO}). In order to describe all the three possible regimes of $\omega_\text{df}$ (cosmological constant-like, quintessence-like, and phantom-like), we will consider the following possibilities:
\begin{align}
A(t)&=\text{e}^{H (t-t_0)} & \omega_\text{df}=-1\,\label{sf1},\\
A(t)&=\left(\frac{t}{t_0}\right)^{\frac{2}{3(1+\omega_{\text{df}})}} & \omega_\text{df}>-1\,\label{sf2},\\
A(t)&=\left(\frac{t^*-t}{t^*-t_0}\right)^{\frac{2}{3(1+\omega_{\text{df}})}} & \omega_\text{df}<-1\,\label{sf3},
\end{align}
where $t_0$ is the present time, for which $A(t_0)= 1$, while $t^*$ (with $t<t^*$) is the time of the Big Rip~\cite{Caldwell:2003vq} (which emerges within phantom cosmologies, but which can be avoided if a de Sitter Universe is asymptotically reached~\cite{Frampton:2011sp}, or in certain modified gravity theories~\cite{Astashenok:2012tv}). Furthermore, we note that the Hubble parameter $H$ in the case $\omega_\text{df}=-1$ is a constant.
\subsubsection{Potential for $\omega_\text{df} = -1$}
Let us start with the scale factor Eq.~(\ref{sf1}), where $H$ is constant. By choosing the constant potential:
\begin{equation}\label{p1}
V(t) = 2 \Lambda\,,
\end{equation}
where $\Lambda$ is the cosmological constant, from Eq.~(\ref{UNO}) we obtain:
\begin{equation}
6 H^2 = \frac{4}{4+3c-4a}\,(2 \Lambda)\,,
\end{equation}
which is the solution one would expect for a $\Lambda$-dark energy dominated universe (up to the Planck mass rescaling factor). Notice that from Eq.~(\ref{eos}) a constant $\lambda = 24 a H^2$ is needed in order to have $\omega_\text{df} = -1$.
\subsubsection{Potentials for $\omega_\text{df} > -1$}
Let us now consider an equation of state in the quintessence region. Using the quintessence potential:
\begin{equation}\label{p2}
V(t) = \alpha \left(\frac{t_0}{t}\right)^2 = \alpha \left(\frac{\phi_0}{\phi}\right)^2\,,
\end{equation}
where $\phi_0 = \phi(t_0)$, we recover the scale factor evolution Eq.~(\ref{sf2}). The constant $\alpha$ follows from Eq.~(\ref{UNO}) and reads:
\begin{equation}
\alpha t_0^2 = -\frac{2 \, \omega_\text{df}}{3 (1+\omega_\text{df})^2} (4+3 c - 4 a)\,.
\end{equation}
We see that $\alpha$ is positive when $\omega_\text{df}<0$. More specifically, the constraint Eq.~(\ref{omega_df_exp_constr}) leads to:
\begin{equation}
420 \, (4 + 3 c - 4 a) \lesssim \alpha t_0^2 < + \infty\,,
\end{equation}
where $\omega_\text{df} = -1$ corresponds to the limit $\alpha \rightarrow + \infty$.
\subsubsection{Potentials for $\omega_\text{df} < -1$}
Finally, let us consider the case where the equation of state is phantom. Similarly to the previous case, we can use a potential of the form:
\begin{equation}\label{p3}
V(t) = \beta \left(\frac{t_*-t_0}{t_*-t}\right)^2 = \beta \left(\frac{\phi_*-\phi_0}{\phi_*-\phi}\right)^2\,,
\end{equation}
where $\phi_* = \phi(t_*)$ and $\phi_0 = \phi(t_0)$, in order to find the scale factor evolution Eq.~(\ref{sf3}). From Eq.~(\ref{UNO}) we find that the constant $\beta$ reads:
\begin{equation}
\beta (t_*-t_0)^2 = -\frac{2 \, \omega_\text{df}}{3 (1+\omega_\text{df})^2} (4+3 c - 4 a)\,,
\end{equation}
and therefore:
\begin{equation}
25800 \, (4 + 3 c - 4 a) \lesssim \beta (t_*-t_0)^2 < + \infty\,,
\end{equation}
where we have used Eq.~(\ref{omega_df_exp_constr}) and $\omega_\text{df} = -1$ corresponds to the limit $\beta \rightarrow + \infty$.
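The numerical prefactors quoted in the two bounds above follow from elementary arithmetic on Eq.~(\ref{omega_df_exp_constr}); as a check:
\begin{verbatim}
# alpha*t_0^2 and beta*(t_*-t_0)^2 in units of (4 + 3c - 4a),
# evaluated at the edges of the 68% C.L. interval on omega_df.
def normalization(omega_df):
    return -2.0 * omega_df / (3.0 * (1.0 + omega_df)**2)

print(normalization(-0.961))     # ~ 4.2e2  (quintessence edge)
print(normalization(-1.0051))    # ~ 2.6e4  (phantom edge)
\end{verbatim}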
\subsection{Adding radiation, baryons, and dark matter}
\label{subsec:adding}
In this section we numerically solve the system of equations given by the continuity equations for the cosmological matter [Eq.~(\ref{continuity_matter})] and the dark fluid [Eq.~(\ref{continuity_df})], as well as the second Friedmann equation in Eq.~(\ref{friedmann}), using the constant potential $V(t)=2\Lambda$, as in Eq.~(\ref{p1}).
We set the action parameter $b=0$ in order to ensure $c_T=1$ and hence agreement with GW170817/GRB170817A, and we take $a<1$ and $c>0$ (from the requirements on the stability for $b \approx 0$) as free parameters. We further impose that the Planck mass is rescaled to a positive quantity, Eq.~(\ref{condizione}).
We plot the fractional densities defined as:
\begin{equation}\label{frac_density}
\Omega_i(t) = \frac{4}{4+3c-4a}\,\frac{\rho_i(t)}{6 H(t)^2}\,,
\end{equation}
where the index $i$ can correspond to $r$ (radiation), $b$ (baryonic matter) or $\text{df}$ (dark fluid); the dark fluid includes both dark energy and dark matter, since in general $C\neq 0$ in Eq.~(\ref{rhodf}). The factor in front of the density is needed if we require the fractional densities of all the components (radiation, baryonic matter and dark fluid) to sum to one at any time $t$. With this definition we expect the $\Omega$'s not to depend on the action parameters $a$ and $c$.
Note that, in order to evolve the system and solve the differential equations, we need to consider some initial conditions. We set $\Omega_r (t_0) = 8 \times 10^{-5}$ and $\Omega_b (t_0) = 0.0486$ (values in agreement with~\cite{Planck2015}), and compute the remaining dark fluid density through $1 - \Omega_r (t_0) -\Omega_b (t_0)$. We evolve the system from a scale factor $A=10^{-5}$ (radiation era) to $A=1$ (present time).
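A minimal numerical sketch of this evolution is given below. With the constant potential and $b=0$ all components scale analytically ($\rho_r\propto A^{-4}$, $\rho_b\propto A^{-3}$, $\rho_\text{df}=C/A^3+2\Lambda$), and the common Planck-mass rescaling factor cancels in the fractional densities, which is why they do not depend on $a$ and $c$. The split of today's dark fluid into its $A^{-3}$ and constant parts (the value $0.6911$ below) is our own illustrative assumption, not a quantity fixed by the text:
\begin{verbatim}
import numpy as np

Omega_r0, Omega_b0 = 8e-5, 0.0486              # present-day fractions (text)
Omega_df0 = 1.0 - Omega_r0 - Omega_b0          # dark fluid today (CDM + Lambda)
Omega_L0  = 0.6911                             # assumed Lambda-like part
Omega_c0  = Omega_df0 - Omega_L0               # A^-3 (CDM-like) part, sets C

A = np.logspace(-5, 0, 500)                    # radiation era -> today
rho_r, rho_b = Omega_r0 * A**-4, Omega_b0 * A**-3
rho_df = Omega_c0 * A**-3 + Omega_L0           # C/A^3 + 2*Lambda piece
rho_tot = rho_r + rho_b + rho_df

w_df = -Omega_L0 / rho_df                      # dark-fluid EoS, -> -1 at late times
print("w_df today:", w_df[-1])                 # ~ -0.73
for name, rho in [("r", rho_r), ("b", rho_b), ("df", rho_df)]:
    print(name, (rho / rho_tot)[[0, -1]])      # fractions at A=1e-5 and A=1
\end{verbatim}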
The evolution we obtain is depicted in Fig.~\ref{fig:ev_p1}. The fractional densities behave exactly as the ones of $\Lambda$CDM, with the dark fluid corresponding to the cold dark matter and dark energy of $\Lambda$CDM. As already mentioned, we obtain the same fractional densities for every value of $c$ and $a$.
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.8\textwidth]{plot_const.pdf}
\end{center}
\caption{Plot of the evolution of the fractional densities $\Omega$, defined in equation (\ref{frac_density}), for the cosmological
matter components (baryonic matter and radiation) and the dark fluid, and the equation of state parameter $\omega_\text{df}$, as functions of the scale factor $A$.
}
\label{fig:ev_p1}
\end{figure}
Let us further analyse how varying $a$ and $c$ affects the age of the Universe, which is given by:
\begin{equation}
t_0 \equiv \int_0^1 \, \frac{dA}{A H(A)} = \sqrt{\frac{4+3c-4a}{4}}\int_0^1\, \frac{dA}{A H_0 \sqrt{\frac{1}{6}\sum_i \Omega_i(A)}}\,,
\end{equation}
where $H_0 = H(t_0)$, and we have written the integral as a function of the fractional densities since they do not depend on $a$ and $c$. The factor in front of the integral equals $1$ when the action parameters satisfy the condition $4a=3c$ (and therefore also in the GR case $a=c=0$); away from this condition, we expect the age of the Universe to differ from its $4a=3c$ value, with a dependence on the action parameters given by
\begin{equation}\label{gamma}
t_0 = \sqrt{\frac{4+3c-4a}{4}} \left.t_0\right|_{4a=3c}\equiv \gamma \left.t_0\right|_{4a=3c}\,.
\end{equation}
Notice that the parameter $\gamma$ is always real, because of the condition imposed in Eq.~(\ref{condizione}). We show some results in Tab.~\ref{tb:age}, where we confirm that Eq.~(\ref{gamma}) is obeyed by our numerical results. The conclusion is that, although the evolution of the fractional densities of the dark fluid corresponds to the one expected in $\Lambda$CDM, the age of the Universe we predict will in general be different unless $4a=3c$. In addition to the constraint on $b$ from the speed of gravitational waves, constraints on the age of the Universe can in principle be used to further constrain the parameters $a$ and $c$.
To get a rough feel for how constraints on the age of the Universe can restrict the viable $a$-$c$ parameter space, we note that the $\approx 0.3\%$ determination of the age of the Universe from the \textit{Planck} satellite~\cite{Planck2015} can in turn be used to constrain $\gamma$, leading to the approximate requirement $\gamma = 1.000 \pm 0.003$. This requirement can be used to set bounds on $a$ and $c$. Focusing for definiteness on the region of parameter space where $a\,,c \lesssim {\cal O}(1)$, in Fig.~\ref{fig:contourf} we show a contour-plot of $\gamma = \sqrt{(4+3c-4a)/4}$ as a function of $a$ and $c$, along with the contours corresponding to $\gamma=1.009$, $\gamma=1.000$, and $\gamma=0.991$. These values approximately correspond to the $3\sigma$ upper limit, best fit, and $3\sigma$ lower limit on $\gamma$ respectively, arising from the constraint $\gamma=1.000 \pm 0.003$. These contours lie along lines of constant values of the linear combination $4a-3c$. Moreover, as expected, we see that limits on the age of the Universe do not constrain the parameters $a$ and $c$ \textit{per se}, but rather the ``orthogonal" linear combination $4a-3c$ (in this sense, the combination $4a-3c$ can be thought of as a principal component of the system), as is clear from the functional form of $\gamma$.
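The arithmetic behind this statement is elementary; the following sketch maps the band on $\gamma$ onto the combination $4a-3c$:
\begin{verbatim}
# 4a - 3c = 4 - 4*gamma^2, evaluated at the edges of gamma = 1.000 +/- 0.003
# (approximately the 3-sigma band used in the text).
for gamma in (0.991, 1.000, 1.009):
    print(gamma, 4.0 - 4.0 * gamma**2)   # -> |4a - 3c| <~ 0.07
\end{verbatim}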
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{contourf.pdf}
\end{center}
\caption{Contour-plot of $\gamma=\sqrt{(4+3c-4a)/4}$ in the $a$-$c$ parameter space, focusing for definiteness on the region of parameter space where $a\,,c \lesssim {\cal O}(1)$. The black, red, and blue lines correspond to contours of constant $\gamma=1.009$, $\gamma=1.000$, and $\gamma=0.991$. These values approximately correspond to the $3\sigma$ upper limit, best fit, and $3\sigma$ lower limit on $\gamma$ respectively, given the constraint $\gamma=1.000 \pm 0.003$ which we derive from the $0.3\%$ determination of the age of the Universe from \textit{Planck}~\cite{Planck2015}. Notice that the contours lie along lines of constant $4a-3c$, as expected given the functional form of $\gamma$.}
\label{fig:contourf}
\end{figure}
\begin{table}[!h]
\centering
\begin{tabular}{|C{1.5cm}|C{1.5cm}||C{4.3cm}|C{4.3cm}|}
\hline
$a$&$c$&$t_0$ numerical [Gyr] & $\gamma$\\
\hline\hline
$0$&$0$&$13.818$&1\\
\hline
$0.75$&$1$&$13.818$&1\\
\hline
$0.075$&$0.1$&$13.818$&1\\
\hline
$0.1$&$0.1$&$13.644$&$0.99 = 13.644/13.818$\\
\hline
$0.5$&$0.1$&$10.478$&$0.76 = 10.478/13.818$\\
\hline
$0$&$1$&$18.280$& $1.32 = 18.280/13.818$\\
\hline
\end{tabular}
\caption{Table of the results for the age of the Universe, as a function of the action parameters $a$ and $c$. The factor $\gamma$ is defined in equation (\ref{gamma}).}\label{tb:age}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have studied a mimetic model constructed in~\cite{Cognola:2016gjy} by breaking the Horndeski structure of a starting mimetic Horndeski model in order to achieve a non-zero sound speed. We explored the model in light of the recent near-simultaneous detection of GW170817/GRB170817A, which implies that the speed of tensor perturbations $c_T$ is extremely close to the speed of light. In light of this constraint, we then showed how the model can closely mimic the evolution of dark matter and dark energy.
We have found that the stringent constraint on the speed of gravitational waves, equal to the speed of light up to deviations of order 1 part in $10^{15}$~\cite{Monitor:2017mdv}, severely constrains the Lagrangian parameter $b$, which controls the strength of a term of the form $\nabla^{\mu}\nabla^{\nu}\phi\nabla_{\mu}\nabla_{\nu}\phi$ in the action, Eq.~(\ref{lagrangian}). In the limit where we force $c_T$ to be identically equal to the speed of light, $b=0$ is required. We have found that the other two Lagrangian parameters $a$ and $c$ lead to a constant rescaling of the Planck mass, where the unscaled Planck mass of GR is recovered for $3c=4a$.
The smallness of the parameter $b$, which might lead to a fine-tuning problem, deserves a further comment. When integrated by parts, the relevant term in the action, $\nabla^{\mu}\nabla^{\nu}\phi\nabla_{\mu}\nabla_{\nu}\phi$, leads to a term proportional to $\phi\Box^2\phi$. In~\cite{Saltas:2016awg} (see also~\cite{Brouzakis:2013lla,Saltas:2016nkg}), it was argued that such a term appears when considering 1-loop corrections to the cubic Galileon action. While the analysis of~\cite{Saltas:2016awg} is not directly applicable to our model, the results tempt us to speculate that the smallness of $b$ might in fact be due to the relevant term being a quantum correction, with the bare parameter being $b=0$. A detailed analysis of the issue, however, is well beyond the scope of this paper, and hence we defer it to future work.
By considering the addition of radiation and baryonic matter, we have numerically solved the modified Friedmann equation and shown that the system can closely mimic an evolutionary history of the Universe consistent with the standard $\Lambda$CDM one at the background level: that is, the mimetic model in question with $b=0$ (in order to comply with constraints from GW170817/GRB170817A) can mimic, at the background level, dark matter and dark energy consistently with observations. We have calculated the age of the Universe within the model and find that the $\Lambda$CDM value for this quantity is recovered when $3c=4a$ (which leads to an unscaled Planck mass) as well as for the trivial case where $a=c=0$. Therefore, we expect the approximate relation $3c \simeq 4a$ to hold.\\
A final consideration concerning the stability of the theory is necessary. We have shown that, in order to avoid ghost instabilities, we need a negative squared sound speed $c_s^2$ together with $c>0$. However, gradient instabilities are still present in the scalar sector, although these might be mitigated by the matter field contributions. The stability issue has been studied in many recent papers, see e.g.~\cite{Chaichian:2014qba,Ijjas:2016pad,Firouzjahi:2017txv,Yoshida:2017swb,Hirano:2017zox,Zheng:2017qfs,Cai:2017dyi,
Cai:2017dxl,Takahashi:2017pje,Gorji:2017cai}. While definitive consensus on the matter is yet to be reached, we notice that these issues are likely to affect our model as well, thus undermining its theoretical viability. Nonetheless, solutions to these issues have been proposed, involving direct couplings between higher derivatives of the mimetic field and curvature, for instance of the form $f(\Box \phi)$ or $\nabla^{\mu}\nabla^{\nu}R_{\mu \nu}$~\cite{Hirano:2017zox,Zheng:2017qfs,Gorji:2017cai}. Of course, such terms would be expected to modify the prediction for $c_T$ we derived in Eq.~(\ref{tensor}), and could possibly be in conflict with the GW170817/GRB170817A detection. We defer a study of these issues to a separate work.
In conclusion, we have presented a mimetic model which is in perfect agreement with the recent multi-messenger detection of GW170817 and of GRB170817A, and mimics the evolutionary history of a Universe filled with dark matter and dark energy in agreement with observations at the background level. The model is somewhat appealing in that it appears in the low-energy limit of a well-known candidate theory of quantum gravity, namely Ho\v{r}ava-Lifshitz gravity. A detailed study of cosmological perturbations within the model would be necessary in order to confront it with measurements of the CMB temperature and polarization anisotropies, which constrain many modified gravity models, as well as measurements of the growth of structure, for instance from redshift-space distortions. We defer this issue to future work. In a companion paper~\cite{inprep}, we perform a detailed Bayesian statistical analysis constraining the Lagrangian parameters $a$, $b$, and $c$ in light of the GW170817/GRB170817A detection, therefore allowing for deviations of $c_T$ from the speed of light in agreement with experimental constraints: this represents the \textit{first time} a mimetic model is robustly confronted against observations.
\section*{References}
\section{Application to reserve price experiments}
\label{sec:application}
Online advertising exchanges provide an interface for bidders to participate in
a set of auctions for advertising online. These ads can appear within the
company's own content, in a social feed, below a search query, or on the webpage
of an affiliated publisher. These auctions provide the vast majority of revenue
to these platforms, and are thus the subject of experimentation and
optimization.
Platforms run experiments and monitor different metrics including of revenue
and estimates of bidders' welfare. One such welfare metric is the sum of the
bids of advertisers, and another metric is the sum of estimated utility of
bidders via another utility estimator.
One possible parameter subject to optimization is the method of determining reserve
prices. Online marketplaces can choose to implement a reserve price, which sets
the minimum bid required for a bid to be valid and compete with others. It may
vary from bidder to bidder, and from auction to auction. A higher reserve may
improve revenue, but if it is too high, then too many bids are discarded and ad
opportunities can go unsold.
Modifications to a reserve price rule are prime examples of experiments where
SUTVA does not hold. A change in reserve price to one bidder affects the
bidding problem facing another bidder, even when her reserve is unchanged
(e.g., reducing competition when the reserve to the first bidder is higher).
Although we ignore them here, budget constraints are another factor--- if a
budget-constrained bidder faces higher reserve prices, then she may adjust her
bids to re-optimize return on investment.
Working without budget constraints, we establish conditions under
which the resulting interference mechanism within reserve price
experiments is monotone, both in the case of a single-item second
price auction setting and in the Vickrey-Clarke-Groves auction setting
for positional ads. See~\citep{varian2014vcg} for a reference.
\subsection{Single-item second price auctions}
We consider a single-item second-price auction with $N$ bidders $B =
{\{B_i\}}_{i \in N}$: the highest bidder wins the auction and is
charged the maximum of her reserve price and the second-highest bid.
The second price auction is truthful (bidding true values is a
dominant-strategy equilibrium), and we will assume that the bidders
are rational.
Consider two reserve price mechanisms ${(r_i)}_{i \in B}$ (control) and
${(r'_i)}_{i\in B}$ (treatment). Suppose that the reserve price mechanism
corresponding to treatment always sets a higher reserve price than the reserve
price mechanism corresponding to control: $\forall i, r'_i > r_i$. By symmetry,
the following argument would also work if the treatment and control labels
were switched.
We suppose the bidders have unobserved values $(v_i)$ for winning the auction.
We randomly assign bidders to either the treatment or control reserve price
mechanism, with $\mathbf{Z}$ the resulting assignment. The chosen metric of interest is
a bidder's utility, denoted by $Y_i(\mathbf{Z})$. For a second-price auction, $Y_i = 0$
if bidder $i$ does not win the auction, and $Y_i = v_i - p$ when she wins the
auction and pays price $p$. The bidder welfare of an auction is the sum
of each bidder's utility, $\sum_i Y_i(\mathbf{Z})$,
and the estimand is given by:
\begin{equation*}
S = \sum_i Y_i(\vec 1) - \sum_i Y_i(\vec 0)
\end{equation*}
The reserve price experiment for second price auctions verifies the
self-excitation property (cf. Prop.~\ref{prop:more}). The idea is
that assigning a unit to the intervention can only make them less
competitive by discarding their bid from the auction. Thus, the higher
the number of treated units, the lower the competition for the
remaining bidders, and the higher their utility.
\begin{theorem}\label{thm:second_price}
Consider a set of rational agents with no budget-constraints. Let the outcome
of interest be each agent's welfare. The interference mechanism of a reserve
price experiment, assigning treated units to a higher personalized reserve
price, for a single-item second-price auction is self-exciting, and thus
monotone.
\end{theorem}
\begin{proof}
Consider bidder $i$'s outcome under $\mathbf{Z} = \vec 0$ and under any assignment $\mathbf{Z}'$
such that $Z_i = 0$. There are three possible cases:
\begin{itemize}
\item Bidder $i$ wins the auction in neither assignment. Her utility is
therefore constant.
\item Bidder $i$ wins the auction in only one assignment. It must be that
bidder $i$ wins under $\mathbf{Z}'$ but not $\mathbf{Z}$. Her utility is $0$ under $\mathbf{Z}$
and greater than $0$ under $\mathbf{Z}'$.
\item Bidder $i$ wins the auction under both assignments. If the second
highest bid is the same under both assignments, bidder $i$'s utility is
constant. Otherwise, the second highest bid under $\mathbf{Z}'$ can only be
lower than the second highest bid under $\mathbf{Z}$. Thus bidder $i$'s payment
is lower and her utility is higher under assignment $\mathbf{Z}'$ than under
assignment $\mathbf{Z}$.
\end{itemize}
By symmetry, we reach a similar conclusion when comparing assignments
$\mathbf{Z} = \vec 1$ and any assignment $\mathbf{Z}'$ such that $Z'_i = 1$.
\end{proof}
It follows that the reserve price experiment is $\mathcal{P}$-increasing,
and any cluster-based randomized design underestimates the
bidder welfare estimand.
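The following Monte Carlo sketch illustrates Theorem~\ref{thm:second_price}; the value and reserve distributions are arbitrary illustrative choices (control bidders face no reserve, as in the simulation of Section~\ref{sec:experimental}), and no violation of the weak inequality should be observed:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def utilities(values, reserves):
    """Bidder utilities in a single-item second-price auction with reserves."""
    util = np.zeros(len(values))
    valid = np.flatnonzero(values >= reserves)        # bids clearing the reserve
    if valid.size == 0:
        return util
    winner = valid[np.argmax(values[valid])]
    others = values[valid[valid != winner]]
    second = others.max() if others.size else 0.0     # second-highest valid bid
    util[winner] = values[winner] - max(reserves[winner], second)
    return util

n, trials, violations = 6, 2000, 0
for _ in range(trials):
    v  = rng.uniform(0.0, 1.0, n)                     # private values
    r  = np.zeros(n)                                  # control: no reserve
    rp = rng.uniform(0.0, 1.0, n)                     # treatment: higher reserve
    Z  = rng.integers(0, 2, n); Z[0] = 0              # bidder 0 kept in control
    y_none = utilities(v, r)[0]                       # all-control assignment
    y_some = utilities(v, np.where(Z == 1, rp, r))[0] # some competitors treated
    violations += y_some < y_none - 1e-12
print("monotonicity violations:", violations)         # expected: 0
\end{verbatim}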
\subsection{Positional ad auctions}
\begin{figure}
\centering
\includegraphics[scale=.5]{convexity_small_df}
\caption{The average click-through rate (CTR) observed in the \emph{Yahoo!
Search Auction} dataset, described in Section~\ref{sec:experimental}, can be
observed to be an approximately decreasing and convex function of the slot
rank. The confidence intervals were too small to be meaningfully reported in
the figure.}\label{fig:convexity}
\end{figure}
In practice, ad auctions are also multi-item, used for selling more
than one ad position on a user's view. We now extend the previous
results to a multi-item setting, with $m$ items (or ``slots''). We
assume the common positional ad setting, where each slot has an
inherent click-through rate $pos_j$, which we can suppose is ordered:
$pos_1 > pos_2 > \cdots > pos_m$~\citep{varian2007position}. Each
bidder $i$ is only ever allocated at most one item, with value $v_i$
for getting a \emph{click}. As a result, bidder $i$'s utility for
winning slot $j$ is $v_i \cdot pos_j - p_i$, where $p_i$ is the
required payment. We assume for simplicity that all bidders have the
same ad quality, and thus the same click-through rate for a given ad
slot.
The Vickrey-Clarke-Groves (VCG) auction takes place in two parts. First, a
value-maximising allocation is chosen (based on bids). Here, the highest bids
win the highest slots. Bidders are then charged the externality they impose on
all other bidders. In other words, assuming that bidder $k$ obtains the $k^{th}$
slot, bidder $k$ pays:
\begin{equation*}
p_k = \sum_{j=k+1}^m (pos_{j-1} - pos_j)\cdot v_j \cdot \mathbbm{1}_{[v_j \geq
r_j]}
\end{equation*}
where $r_j$ is the reserve imposed on bidder $j$ with value $v_j$. We can prove
that the self-excitation property holds under a convexity assumption.
\begin{theorem}\label{thm:vcg}
Consider a set of rational agents with no budget-constraints. Let the outcome
of interest be each agent's welfare. The interference mechanism of a reserve
price experiment, assigning treated units to a higher personalized reserve
price, for a VCG auction in the positional ad setting with no quality effects
is self-exciting, and thus monotone, provided the click-through rate function $pos: i
\mapsto pos_i$ is convex: \begin{equation*} \forall i > j,~pos_{i+1} - pos_i
\geq pos_{j+1} - pos_j\,. \end{equation*}
\end{theorem}
This convexity assumption is verified empirically in the literature and in the
Yahoo! auction dataset\footnote{Our own dataset could potentially suffer from
endogeneity, where weaker bidders are consistently assigned to lower slots. The
assumption is, however, supported elsewhere in the
literature~\citep{brooks2004atlas, richardson2007predicting}.}
introduced in Section~\ref{sec:experimental} (cf.
Figure~\ref{fig:convexity}). The intuition behind the proof is similar: the
greater the number of my competitors that are treated, the fewer of them are able to compete,
and thus the higher my utility. We prove this through a case analysis.
Let $r^\mathbf{Z}_k$ be the reserve that bidder $k$ faces under assignment vector $\mathbf{Z}$:
$r^\mathbf{Z}_k = r_k$ if $Z_k = 0$ and $r'_k$ otherwise.
\begin{proof}
Consider the outcomes of bidders $i$ and $j$ under $\mathbf{Z}$ and $\mathbf{Z}'$ such that
for all $k \neq j$, $Z_k = Z_k'$, $Z_i = Z_i' = 0$, and $Z_j = 0 < Z_j' =
1$. By transitivity, if we can show $Y_i(\mathbf{Z}) \leq Y_i(\mathbf{Z}')$, then it follows
that $Y_i(\vec 0) \leq \mathbb{E}_\mathcal{C}[Y_i(\mathbf{Z}) : Z_i = 0]$. There are three
possible cases:
\begin{itemize}
\item The allocation of bidders to slots does not change and thus prices
do not change. Bidder $i$'s utility is constant.
\item Bidder $i$ is allocated to slot $i$ for both $\mathbf{Z}$ and $\mathbf{Z}'$
assignments, but bidder $j$'s ($j < i$) bid is discarded when $j$ is
treated ($Z'$): $r_j' > v_j > r_j$. The difference of bidder $i$'s
outcome under the two treatment assignments is:
$Y_i(\mathbf{Z}) - Y_i(\mathbf{Z}') = - \sum_{k \geq j} (pos_{k -1} - pos_k) (v_k
\mathbbm{1}_{v_k > r^\mathbf{Z}_k} - v_{k+1} \mathbbm{1}_{v_{k+1} >
r^\mathbf{Z}_{k+1}})$.
%
This quantity is always negative, hence $Y_i(\mathbf{Z}) \leq Y_i(\mathbf{Z}')$.
\item Bidder $j$'s ($j < i$) bid is discarded when $j$ is treated and thus
bidder $i$ is allocated to slot $i-1$. In that case, bidder $i$'s
utility under $\mathbf{Z}$ is:
$Y_i(\mathbf{Z}) = pos_i v_i - \sum_{k \geq i+1} (pos_{k-1} - pos_k) v_k
\mathbbm{1}_{v_k > r^Z_k}$. The same bidder $i$'s utility under $\mathbf{Z}'$
is: $Y_i(\mathbf{Z}') = pos_{i-1} v_i - \sum_{k \geq i+1} (pos_{k-2} - pos_k)
v_k \mathbbm{1}_{v_k > r^Z_k}$.
It follows that the difference of bidder $i$'s outcomes is equal to:
\begin{align*}
Y_i(\mathbf{Z}) - Y_i(\mathbf{Z}') & = (pos_i - pos_{i-1}) v_i \\
& \quad - \sum_{k \geq i+1} (pos_{k-2} + pos_k - 2
pos_{k-1}) v_k,
\end{align*}
%
where the $\mathbbm{1}_{v_k > r^Z_k}$ terms are implicit. Note that each
individual term of the sum is positive by convexity, such that $Y_i(\mathbf{Z})
\leq Y_i(\mathbf{Z}')$.
\end{itemize}
\end{proof}
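The following Monte Carlo sketch illustrates Theorem~\ref{thm:vcg} under the same simplifying choices (no budgets, no quality effects, control bidders facing no reserve). The convex, decreasing CTR curve and the value and reserve distributions are arbitrary illustrative choices, and the payment rule is the externality formula displayed above, applied to the valid bids occupying the slots:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
pos = np.array([0.45, 0.25, 0.13, 0.07])        # decreasing, convex CTRs

def utilities(values, reserves, pos):
    """Bidder utilities in the positional VCG auction with reserves."""
    util = np.zeros(len(values))
    valid = np.flatnonzero(values >= reserves)
    ranked = valid[np.argsort(values[valid])[::-1]][:len(pos)]  # slot occupants
    slot_vals = values[ranked]
    for k, bidder in enumerate(ranked):
        pay = sum((pos[j-1] - pos[j]) * slot_vals[j]            # externality on
                  for j in range(k + 1, len(ranked)))           # lower slots
        util[bidder] = pos[k] * values[bidder] - pay
    return util

n, trials, violations = 8, 2000, 0
for _ in range(trials):
    v  = rng.uniform(0.0, 1.0, n)
    r  = np.zeros(n)                             # control: no reserve
    rp = rng.uniform(0.0, 1.0, n)                # treatment: higher reserve
    Z  = rng.integers(0, 2, n); Z[0] = 0         # bidder 0 kept in control
    y_none = utilities(v, r, pos)[0]
    y_some = utilities(v, np.where(Z == 1, rp, r), pos)[0]
    violations += y_some < y_none - 1e-12
print("monotonicity violations:", violations)    # expected: 0
\end{verbatim}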
\section{Discussion}
We showed that, under a certain monotonicity assumption, we can determine which
of two clusterings yields the least biased estimator by running an
experiment-of-experiments design. We noted that commonly-studied parametric
models of interference verify this monotonicity assumption. Moreover, we proved
that the interference mechanism resulting from the impact of a reserve price
experiment on social utility is monotone, and hence our framework applies.
Finally, we validated our framework on a simulated reserve price experiment,
grounded in a publicly-available Yahoo! search ad dataset. There are several
questions worth investigating that we did not tackle in this paper. Notably,
while we explored the case of rational bidders participating in positional ad
auctions without budget constraints or quality effects to establish
monotonicity, can these assumptions be relaxed or generalized? What other kinds
of experiments are monotone (or self-exciting)? Is it possible to generalize
Theorem~\ref{thm:vcg} to other Vickrey-Clarke-Groves auctions, \emph{Generalized
Second Price} auctions, or budgeted bidders? Finally, can the monotonicity
assumption be validated empirically, either through an experimental design or an
observational data study? It seems randomized saturation
designs~\citep{baird2014designing} would be a good place to start for testing
monotonicity experimentally. Lastly, our framework relied on the transitivity
of the experiment-of-experiment estimators: namely, that they conserved the
ordering of the expectation of the estimators under each clustering. Whilst we
validated this assumption either
theoretically~(cf.~Prop.~\ref{prop:linear_transitive}) or through
simulation~(cf.~Section.~\ref{subsec:validating}), can we characterize the
clustering-experiment pairs that are transitive and can the assumption be tested
empirically?
\section{Experimental Data and Validation}
\label{sec:experimental}
In this section, we validate, on an advertising auction dataset, our design
strategy for comparing two given graph partitions for the purpose of
experimentation under interference. For this purpose, we make use of the
publicly-available Yahoo! Search Auction dataset.
\subsection{The Yahoo! Search Auction dataset}
{\small
\begin{table}
\centering
\begin{tabular}{l r}
\begin{tabular}{lrl}
Per keyphrase \\
\midrule
nbr of bids & min & 1 \\
& median & 2 \\
& max & 7041 \\
bid value & min & $.3$\textcent\\
& median & $66$\textcent\\
& max & \$$320$\\
impressions & min & 1 \\
& median & 3 \\
& max & $5 \cdot 10^6$ \\
clicks & min & 0 \\
& $cdf(1)$ & $91.4$ \\
& max & 7041 \\
\end{tabular} &
\begin{tabular}{lrl}
Per bidder \\
\midrule
nbr of bids & min & 1 \\
& median & 9 \\
& max & $2.1 \cdot 10^4$ \\
bid value & min & $.5$\textcent\\
& median & $60$\textcent\\
& max & \$$4700$ \\
impressions & min & $1$ \\
& median & $31$ \\
& max & $1.4 \cdot 10^6$ \\
clicks & min & $0$ \\
& $cdf(1)$ & $93.3$ \\
& max & $1.1 \cdot 10^4$\\
\end{tabular}
\end{tabular}
\caption{Summary statistics for the Yahoo! dataset, aggregated by keyphrase
or by bidder, per day for the entire 4 month period. Bid values are given in
USD unless specified otherwise. $cdf(1)$ is the value of the cumulative
distribution function of impressions for a single
impression.}\label{tab:summary}
\end{table}
}
The \emph{Yahoo! Search Marketing Advertiser Bid-Impression-Click data on
competing Keywords} dataset is a publicly-available dataset released by
Yahoo!\footnote{Available for download at
\url{https://webscope.sandbox.yahoo.com/}}, containing bid, impression, click,
and revenue data between advertiser-keyphrase pairs over a period of 4 months.
The advertiser and keyphrase are anonymized, represented as a randomly-chosen
string. A sample line of the dataset is reproduced\footnote{The account ID and
keyword ID's have been shortened for the sake of exposition in this sample line.
The bid value is given in 1/100\textcent.} below:
\begin{table}[h]
\centering
\begin{tabular}{ccccccc}
day & id & rank & keyphrase & bid & impress. & clicks \\
1 & \texttt{a3d2}& 2 & \texttt{f3e4,j6r3,}\dots & 100.0 & 1.0 & 0.0
\end{tabular}
\caption{Sample line in the Yahoo! dataset}
\end{table}
The dataset contains $77,850,272$ bidding activities of $16,268$ different
bidders. There are a total of $75,359$ keywords represented, for a total of
unique $648,515$ keyphrases (or list of keywords). Table~\ref{tab:summary}
contains a series of summary statistics computed over keyphrase-day pairs and
bidder-day pairs, namely the total number of bids, the total bid value, the
total number of impressions, and the total number of clicks per keyword (or per
bidder) and per day.
We can represent the \emph{Yahoo!} dataset by a set of bipartite graphs between
bidders, identified by their \texttt{account\_id}, and the keyphrases. The
\emph{bid} bipartite graph on day $t$ draws a weighted edge of weight $w_{ij}$
between every bidder-keyphrase pair such that bidder $i$ bids $w_{ij}$ on
keyphrase $j$ on day $t$. We can aggregate these graphs over the entire time
period ($4$ months) by summing their edge weights together. We can also consider
the impression, rank, and clicks graphs, where the weight of the edge is given
by the number of impressions, the rank, or the number of clicks respectively
received by bidder $i$ on keyphrase $j$.
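As an illustration, the aggregated bid graph can be assembled along the following lines; this is a sketch only, and the file name and tab delimiter are our own assumptions (the column names follow the sample line above):
\begin{verbatim}
import pandas as pd
import networkx as nx

cols = ["day", "account_id", "rank", "keyphrase", "bid", "impressions", "clicks"]
# File name and tab delimiter are assumptions; adapt to the local copy.
df = pd.read_csv("ydata-ysm-advertiser-bids.txt", sep="\t", names=cols)

# Aggregate daily bid graphs over the full period by summing edge weights.
agg = df.groupby(["account_id", "keyphrase"], as_index=False)["bid"].sum()

G = nx.Graph()
G.add_nodes_from(agg["account_id"].unique(), side="bidder")
G.add_nodes_from(agg["keyphrase"].unique(), side="keyphrase")
G.add_weighted_edges_from(agg[["account_id", "keyphrase", "bid"]].itertuples(index=False))
\end{verbatim}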
The dataset only provides data aggregated at the granularity of a single day,
reporting the average bid and total number of impressions and clicks for each
bidder-keyphrase-day triplet. Hence, we define a keyphrase-day pair as
a single auction, where each bidder's bid is set to the reported average bid for
that keyphrase-day pair. For the sake of simplicity, we will only
consider a setting with the first $4$ ad positions, which account for the
majority of clicks.
\subsection{Simulating a reserve price experiment}
\label{sec:bipartite}
\begin{figure}
\centering
{\small
\begin{tabular}{cc}
\includegraphics[scale=.5]{new_50part_bid.pdf} &
\includegraphics[scale=.5]{new_400part_bid.pdf} \\
\end{tabular}
}
\caption{Weighted ratio of edges across partitions for successive runs of the
R-LDG algorithm on the weighted bid graph into $50$ partitions and $400$
partitions respectively.}\label{fig:cut_over_time}
\end{figure}
While the \emph{Yahoo! Search Auction} dataset provides us with a set of
bidders, keyphrases, and the bids, impressions, and clicks that link them, it
does not provide us with an actual intervention on the auction ecosystem. We
must therefore simulate the impact of a change in the reserve price given to
each bidder.
While many possible units of randomization exist for an auction experiment
(keyphrases, bidders, browsers, users, various pairings of these units, etc.),
the reserve price experiment we consider randomizes on bidders. On large auction
platforms, the reserve price might be set
through the application of machine learning methods. In our context, we choose
a random non-zero reserve price for each bidder, calibrating the spread of the
distribution such that some bidders will not always be able to match the reserve
price for all auctions. All bidders assigned to the intervention will face
their non-zero reserve price, fixed for every auction for simplicity. All
bidders assigned to the control bucket will not face a reserve price.
Within the same auction for a given
keyphrase, two participating bidders may face distinct reserves and be assigned
to different treatment buckets. A bidder-cluster-based randomized experiment is
thus used to mitigate the possible interference between bidders, our units of
randomization, within a single auction.
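One simple way of drawing the personalized reserves is sketched below. The text only requires a random, non-zero reserve per bidder whose spread leaves some bids below it; the lognormal calibration around each bidder's median observed bid is our own illustrative choice:
\begin{verbatim}
import numpy as np
import pandas as pd
rng = np.random.default_rng(7)

def draw_reserves(df, spread=0.5):
    """df: bid records with columns ['account_id', 'bid'] (cf. sample line)."""
    median_bid = df.groupby("account_id")["bid"].median()
    noise = rng.lognormal(mean=0.0, sigma=spread, size=len(median_bid))
    return pd.Series(median_bid.values * noise, index=median_bid.index)

# Treated bidders face their drawn reserve in every auction for the whole
# experiment; control bidders face no reserve.
\end{verbatim}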
To validate our experiment-of-experiments design, we must find candidate
balanced graph partitions to compare, a problem known to be NP-hard --- even
when we slightly relax the balancedness assumption~\citep{andreev2006balanced}.
In the last several years, there has been good progress in developing scalable
distributed balanced partitioning algorithms for graphs with billions of
edges~\citep{TGRV14,ABM16}. These algorithms have enabled practitioners to apply
these large-scale graph mining algorithms for large-scale randomized
experimental studies~\citep{ugander2013balanced,saveski2017detecting,RAKMN16}. Of
the numerous heuristic algorithms for finding such partitions, the {\em
Restreaming Linear Deterministic Greedy} (R-LDG)
algorithm~\citep{nishimura2013restreaming} is a popular choice. It consists of
repeatedly applying a greedy algorithm, originally proposed
in~\citep{stanton2012streaming}, which assigns each node $u$ to one of $k$
partitions according to the following objective:
\begin{equation*}
\argmax_{i \in \{1, \dots k\}} |P_i^t \cap N(u) | \left(1 -
\frac{|P_i^t|}{H_i} \right)
\end{equation*}
where $P_i^t$ is the set of nodes assigned to partition $i$ at step $t$ of the
algorithm, $H_i$ is the maximum capacity of partition $i \in \{1, \dots k\}$,
and $N(u)$ is the set of neighbors of node $u$ in the graph.
We can apply this clustering algorithm to any of the bipartite graphs introduced
in Section~\ref{sec:bipartite}, aggregated over the entire time period,
resulting in a set of mixed bidder-keyphrase clusters. The bidder-only clusters
are obtained from the previous clustering by simply removing the keyphrase
nodes from consideration. The algorithm's objective must be slightly modified to
accommodate weighted graphs, by replacing $|P_i^t \cap N(u)|$ with
$\sum_{v \in N(u) \cap P_i^t} w_{uv}$. Furthermore, we must
also modify the balance requirement, since only the bidder side of the bipartite
graph clustering is required to be balanced. We therefore replace $\left(1 -
|P_i^t|/H_i \right)$ with $\left(1 - |P_{i,c}^t|/H_{i,c} \right)$, where $P_{i,
c}^t$ is the set of bidder nodes in partition $P_i^t$ and $H_{i,c}$ is
the maximum number of allowed bidder nodes in partition $P_i^t$. The
final objective is given by:
\begin{equation*}
\argmax_{i \in \{1, \dots, k\}} \left( \sum_{v \in N(u) \cap P_i^t} w_{uv}
\right) \left(1 - \frac{|P_{i, c}^t|}{H_{i, c}} \right)
\end{equation*}
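To make the streaming step concrete, the following Python sketch performs one greedy assignment under the modified objective above; the adjacency representation, the capacity bookkeeping, and all identifier names are illustrative choices rather than part of the original R-LDG specification.
\begin{verbatim}
def assign_node(u, nbr_weights, partitions, bidder_counts, cap, is_bidder):
    """One greedy step of the weighted, bidder-balanced LDG objective.

    nbr_weights   : dict neighbor v -> edge weight w_{uv}
    partitions    : list of sets; partitions[i] holds the nodes of P_i^t
    bidder_counts : list of ints; bidder nodes currently in each partition
    cap           : maximum number of bidder nodes per partition (H_{i,c})
    is_bidder     : predicate telling whether a node is a bidder node
    """
    best, best_score = 0, float("-inf")
    for i, part in enumerate(partitions):
        weight_in = sum(w for v, w in nbr_weights.items() if v in part)
        score = weight_in * (1.0 - bidder_counts[i] / cap)
        if score > best_score:
            best, best_score = i, score
    partitions[best].add(u)
    if is_bidder(u):
        bidder_counts[best] += 1
    return best
\end{verbatim}
Restreaming then amounts to repeating this pass over all nodes, starting each new pass from the partition state left by the previous one.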
Figure~\ref{fig:cut_over_time} plots the proportion of edges cut, weighted by
the bid amount, over consecutive runs of the R-LDG algorithm for $50$ and $400$
clusters. We adopt three main vectors of comparison between
candidate partitions to determine the efficacy of our proposed
experiment-of-experiments design:
\begin{itemize}
\item \emph{Quality:} comparing partitions of the graph
that differ in their estimated quality, for example by looking at the number
of edges cut, for a fixed number of clusters. As an extreme
example, we will compare a random graph partitioning to a partitioning
obtained by running the R-LDG algorithm to convergence.
\item \emph{Number of partitions:} comparing two partitions of the
graph obtained by running the same clustering algorithm for a different
number of partitions. As an example, we will consider a R-LDG clustering
with $10$ clusters and a R-LDG clustering with $400$ clusters.
\item \emph{Metric:} comparing partitions of the graph that are
obtained by applying the same algorithm on different bipartite graphs. As an
example, we will compare a R-LDG clustering of the \emph{bid} graph with an
R-LDG clustering of the \emph{impressions} graph.
\end{itemize}
The dataset does not provide the budgets of the bidders or their
perceived ad quality, hence we will adopt the same simplifying
assumptions as Section~\ref{sec:application} of no quality effects
between bidders and no budget constraints. Furthermore, we assume bids are
unchanged as a result of the experiment (which would be valid for
rational, non budget-limited bidders).
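To make the simulation loop concrete, the following Python sketch resolves a single keyphrase-day auction under the bidder-level reserve intervention; the function, the variable names, and the simple ``drop treated bidders whose bid misses their reserve, then rank by bid'' allocation are our illustrative assumptions on top of the simplifications above, not a description of the platform's actual auction mechanism.
\begin{verbatim}
import random

def run_auction(bids, reserves, treated, n_positions=4):
    """Allocate the top ad positions for one keyphrase-day auction.

    bids     : dict bidder -> average reported bid for this keyphrase-day
    reserves : dict bidder -> the bidder's randomly drawn non-zero reserve
    treated  : set of bidders assigned to the intervention bucket
    Control bidders face no reserve; a treated bidder is dropped from the
    auction whenever her (unchanged) bid does not meet her reserve.
    """
    eligible = {b: v for b, v in bids.items()
                if b not in treated or v >= reserves[b]}
    ranking = sorted(eligible, key=eligible.get, reverse=True)
    return ranking[:n_positions]

# Toy usage: bidder "b" is treated and may fail to meet her reserve.
bids = {"a": 1.2, "b": 0.8, "c": 0.5, "d": 0.3, "e": 0.2}
reserves = {b: random.uniform(0.4, 1.5) for b in bids}
winners = run_auction(bids, reserves, treated={"b"})
\end{verbatim}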
\subsection{Validating the empirical optimization}
\label{subsec:validating}
\begin{figure*}
\centering
\begin{tabular}{c}
\includegraphics[scale=.3]{sub_auctions_random_vs_rldg.pdf} \\
\includegraphics[scale=.3]{sub_auctions_partition_comparison.pdf}
\\
\includegraphics[scale=.3]{sub_auctions_heuristic_comparison.pdf} \\
\end{tabular}
\caption{Distribution of the expectation of the HT estimator under $\mathcal{C}_1$
and $\mathcal{C}_2$, and the induced clusterings $\mathcal{C}_1^\mathbf{W}$ and $\mathcal{C}_2^\mathbf{W}$. The
red segment represents the total treatment effect estimand. \emph{(Top)}
$\mathcal{C}_1$ is a R-LDG clustering, $\mathcal{C}_2$ is a random clustering ($M_1 = M_2
= 50$). \emph{(Middle)} $\mathcal{C}_1$ is a R-LDG clustering into $10$ partitions,
$\mathcal{C}_2$ is a R-LDG clustering into $400$ partitions. \emph{(Bottom)}
$\mathcal{C}_1$ is a R-LDG clustering of the bid graph, whereas $\mathcal{C}_2$ is a R-LDG
clustering of the impressions graph. ($M_1 = M_2 = 50$)}\label{fig:results}
\end{figure*}
We first compare a partitioning of the graph obtained by running the modified
R-LDG algorithm (cf.~Section~\ref{sec:bipartite}) against a completely random
balanced partitioning of the graph. We fix a subset of auctions with few
bidders per auction, in order to showcase the framework and establish the
monotonicity and transitivity properties by allowing a setting for which there
is a clear difference between the two clusterings. The reduction in cut size
--- measured by the ratio of the weighted sum of inter-cluster edges to the
sum of all edge weights --- over the iterations of the algorithm is shown in
Figure~\ref{fig:cut_over_time}. While the weighted cut of the graph for a
random partition is around $98\%$, the partition obtained with the R-LDG
algorithm approaches $66\%$ within a few iterations.
We validate the monotonicity assumption, as well as the transitivity
assumption, for reserve price experiments. In Figure~\ref{fig:results} (top), we
plot four distributions as well as the Total Treatment Effect estimand (cf.
Eq.~\ref{eq:tte}), obtained by taking the difference between assigning all units
to a higher reserve price and assigning none. Namely, we plot the distribution
of the HT estimator's expectation (cf. Eq~\ref{eq:HT}) under each cluster-based
design: $\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_k}[\hat{\tau}]$ where $k = 1$ for the R-LDG
clustering and $k = 2$ for the random clustering. We also plot the distribution
of the expectation of the experiment-of-experiments (EoE) estimators: $\mathbb{E}_{\mathbf{W},
\mathbf{Z} \sim \mathcal{C}^\mathbf{W}_k}[\hat{\tau}_k^\mathbf{W}]$.
We find that they all under-estimate the true treatment effect,
as expected
from the $\mathcal{P}$-increasing property. Moreover, as anticipated, the HT estimator is more
biased under a random clustering than under the R-LDG clustering. Furthermore,
we find that the property of transitivity holds
(cf.~Property~\ref{prop:transitivity}), namely the EoE estimate of the ``random
estimator'' also under-estimates the total treatment effect more severely than
the EoE estimate of the ``R-LDG estimator''.
We repeat the experiment to compare a R-LDG clustering with $10$ partitions
with another R-LDG clustering with $400$ partitions (cf.
Figure~\ref{fig:results} (middle)). We find that the clustering with $10$ partitions
is less biased but exhibits higher variance, and that the transitivity property
holds. Finally, in Figure~\ref{fig:results}~(bottom), we compare a clustering of
the impressions bipartite graph with a clustering of the bid bipartite graph.
The transitivity property is again verified, and
moreover we see that clustering the bid bipartite
graph may be a better heuristic in this setting, but
the difference in the two
clusterings is very slight. The code is available for download at
\url{https://jean.pouget-abadie.com/kdd2018code}.
\section{Introduction}
Randomized experiments --- or A/B tests --- are at the core of many product
decisions at large technology companies. Under the commonly assumed Stable Unit
Treatment Value Assumption (SUTVA), these A/B tests serve to estimate unbiasedly
the effect of assigning all units to a particular intervention over an
alternative condition \citep{imbens2015causal}. The SUTVA assumption is one of no
interference between units: a unit's outcome in the experiment does not depend
on the treatment assignment of any other unit.
In many A/B tests however, this assumption is not tenable. Consider an
intervention on a user of a messaging platform: the (potential) resulting change
in her behavior (e.g. increase in time spent on the platform, in
number of messages sent, a decrease in response time) would affect the
friends on the platform she chooses to communicate with. The same cascading
phenomenon can also occur in more subtle ways in a social feed setting. Changes
to a feed ranking algorithm, and the resulting behavioral changes (e.g. a higher
click-through rate, feedback, or interaction time with the content on the feed)
will invariably affect the content on that unit's friends' social
feeds~\citep{eckles2016estimating, gui2015network}.
In particular, the same is true in an advertiser auction setting, where
modifications to the ecosystem can impact auctions and bidders not originally
assigned to the intervention~\citep{basse2016randomization}. Suppose that one
bidder changes her strategy as a result of being assigned to a higher reserve
price, or her usual bid no longer meets the reserve. The bidders she competes
with now face a different bid distribution --- the auction is now more
competitive if she increases her bid to meet the new reserve, or less
competitive if she fails to meet the reserve. These bidders might react to this
new bid distribution by updating their own bidding strategy, even though they
were not originally assigned to the intervention. This effect could potentially
affect the other auctions they participate in.
When SUTVA does not hold, we say there is \emph{interference} between units, and
many fundamental results of the causal inference literature no longer hold. For
example, the difference-in-means estimator under a completely randomized
assignment is no longer unbiased~\citep{imbens2015causal}. When the estimand is
the difference of outcomes under two extreme assignments --- one assigning all
units to the intervention, and the other assigning none --- a common approach to
mitigating the bias of standard estimators in the face of interference is to run
cluster-based randomized designs~\citep{ugander2013graph, walker2014design,
eckles2017design}. These randomized designs assign units
to treatment or control in groups to limit the amount of interaction between
different treatment buckets.
If it can be shown that there is no interaction across treatment buckets, we
recover many of the results stated under SUTVA. In practice, however, such a
grouping of units may not exist and A/B test practitioners often settle
for finding the best possible partitioning. The problem is often formulated as the
balanced partitioning of a weighted graph on the experimental
units, where an edge is drawn between two units that are liable to interfere
with one another. This is a challenging task, both algorithmically and
empirically: clustering a graph into balanced partitions is known to be NP-hard,
even if we tolerate some unevenness between
partitions~\citep{andreev2006balanced}; furthermore,
the correct graph representation of the interference mechanism is not always
clear.
While the literature on finding balanced partitioning of weighted graphs and
analysing cluster-based randomized designs is
extensive~\citep{middleton2011unbiased, donner2004pitfalls, eckles2017design},
there are relatively few prior works that tackle the following question: can we
determine which of two balanced partitionings produces less biased estimates of
the total treatment effect, without assuming the exact structure of interference
is known? The objective of this paper is to show we can in fact identify the
better of two clusterings through experimentation under an assumption on
the interference mechanism, which we call {\em monotonicity}.
Even when the exact structure of interference is not known, monotonicity can be
established under a theoretical model. For example, some interference
mechanisms are {\em self-exciting}: assigning any unit to the intervention
boosts the outcomes of neighboring units. Examples range from
vaccination campaigns to social feed ranking algorithms. In both cases, the
units in the vicinity of a unit assigned to the intervention tend to benefit
over those surrounded by units in the control bucket. Interference mechanisms
that exhibit this self-exciting property are a particular example of monotone
mechanisms (cf. Section~\ref{sec:monotonicity}). When monotonicity holds, we
show that it is feasible to compare two balanced partitionings of the
experimental units by running a straightforward modification of an
experiment-of-experiments design~\citep{saveski2017detecting,
pouget2017testing}.
We make the following contributions: we present an experiment-of-experiments
design for comparing cluster-based randomized designs. We define a monotonicity
assumption under which we can determine which clustering induces the least
biased estimates of the total treatment effect using this comparative design.
While our technique applies to the general problem of experimental design under
interference with a monotonicity assumption, we prove that pricing
experiments\footnote{While pricing experiments are done in the context of ad
exchanges~\citep{AdExchange}, we note that our paper is a theoretical study of
the subject and does not include any real treatments of ad campaigns.} in the
context of ad exchanges are monotone, and thus our framework applies to this
illustrative example. Finally, we report an empirical simulation study of our
algorithms for a publicly-available dataset for online ads.
In Section~\ref{sec:theory}, we establish the theoretical framework by defining
the monotonicity assumption, describing the suggested experiment-of-experiments
design, and proposing a test for interpreting its results. In
Section~\ref{sec:application}, we explain how this framework can be applied to
a real-world setting, by showing that reserve-price experiments on advertising
auctions are monotone. Finally, we validate these findings on a Yahoo! ad
auction dataset in Section~\ref{sec:experimental}.
\section{Proofs}
\label{sec:proofs}
\subsection{Proof of Proposition~\ref{prop:simple_linear_monotone}
and~\ref{prop:linear_monotone}}
Assume that $\forall \mathbf{Z},~Y_i(\mathbf{Z}) = \alpha_i + \beta_i \cdot Z_i + \gamma_i
\frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} Z_j + \epsilon_i$, where $\epsilon_i \sim
\mathcal{N}(0, \sigma^2)$. Recall the definition of the estimand:
$\tau = \frac{1}{N} \sum_i Y_i(\vec 1) - Y_i(\vec 0)$. Plugging in the expression
for $Y_i(\vec Z)$, we obtain: $\tau = \frac{1}{N} \sum_i \beta_i + \frac{1}{N}
\sum_i \gamma_i$.
The estimator is given by: $\hat \tau = \frac{M}{N} \sum_i
\frac{{(-1)}^{1 - Z_i}}{m_t^{Z_i} m_c^{(1 - Z_i)}} Y_i(\mathbf{Z})$, where $m_t$
(resp.~$m_c$) is the number of clusters in treatment (resp.~control). Plugging
in the expression for $Y_i(\vec Z)$, we obtain:
\begin{equation*}
\mathbb{E}_{Z \sim \mathcal{C}}[\hat \tau] = \frac{1}{N} \sum_i \beta_i + \frac{1}{N} \sum_i
\gamma_i \left( \frac{|\mathcal{N}_i \cap C(i)|}{|\mathcal{N}_i|} - \frac{1}{M-1} \frac{|\mathcal{N}_i
\backslash C(i)|}{|\mathcal{N}_i|} \right)
\end{equation*}
We obtain the desired result by taking the difference between these quantities.
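Explicitly, writing $\theta_{\mathcal{C},i}=\frac{|\mathcal{N}_i\cap C(i)|}{|\mathcal{N}_i|}$, the difference is
\begin{equation*}
\tau-\mathbb{E}_{Z \sim \mathcal{C}}[\hat \tau]
=\frac{1}{N}\sum_i\gamma_i\left(1-\theta_{\mathcal{C},i}+\frac{1-\theta_{\mathcal{C},i}}{M-1}\right)
=\frac{M}{N(M-1)}\sum_i\gamma_i\left(1-\theta_{\mathcal{C},i}\right),
\end{equation*}
which is the statement of Prop.~\ref{prop:linear_monotone}.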
Prop.~\ref{prop:simple_linear_monotone} follows by substituting $\gamma_i =
\gamma$.
\subsection{Proof of Proposition~\ref{prop:more}}
The proposition can be established by rewriting the definition of
$\mathcal{P}$-increasing interference mechanisms,
\begin{align*}
\tau - \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] =\frac{1}{N} \sum_i & \left( Y_i(\vec 1) -
\mathbb{E}_{\mathbf{Z}\sim\mathcal{C}}[Y_i(\mathbf{Z}) | z_{C(i)} = 1] \right) \\
& + \left( \mathbb{E}_{\mathbf{Z}\sim\mathcal{C}}[Y_i(\mathbf{Z}) | z_{C(i)} = 0] - Y_i(\vec 0) \right),
\end{align*}
such that a sufficient condition for the model to be $\mathcal{P}$-increasing is that $
Y_i(\vec 1) > \mathbb{E}_{\mathbf{Z}\sim\mathcal{C}}[Y_i(\mathbf{Z}) | z_{C(i)} = 1]$ and $Y_i(\vec 0) <
\mathbb{E}_{\mathbf{Z}\sim\mathcal{C}}[Y_i(\mathbf{Z}) | z_{C(i)} = 0]$. If increasing the number of treated
units in that unit's neighborhood increases that unit's outcome --- holding that
unit's treatment assignment constant --- then the two previous inequalities
hold.
\subsection{Proof of Proposition~\ref{prop:linear_transitive}}
Recall that for $k \in \{1, 2\}$, our estimator can be written as:
\begin{equation*}
\hat{\tau}^\mathbf{W}_k = \frac{M_k}{N_k} \sum_i W_i Y_i(\mathbf{Z}) \frac{{(-1)}^{1 -
Z_i}}{M^{Z_i}_{k,t} M^{1 - Z_i}_{k,c}},
\end{equation*}
where $M_{k,t}$ (resp.~$M_{k,c}$) is the number of treated (resp.~control)
clusters in design arm $k$ and $N_k$ is the number of units in design arm $k$.
We first consider the no-interference case.
We have that $\mathbb{E}_{Z \sim C_k^\mathbf{W}}[\hat \tau_k | \mathbf{W} ] = \frac{1}{N_k} \sum_i W_i
(Y_i(1) - Y_i(0))$. By the law of iterated expectations, we have $\mathbb{E}_{\mathbf{W}, Z \sim
C_k^\mathbf{W}}[\hat{\tau}^\mathbf{W}_k] = \tau$.
We now consider the linear model suggested in Eq.~\ref{eq:linear}, where
we assume heterogeneous network effects ($\gamma_i$).
From the proof of Proposition~\ref{prop:linear_monotone}, we have that
\begin{equation*}
\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_k^\mathbf{W}}[\hat{\tau}^\mathbf{W}_k | \mathbf{W}] = \bar \beta +
\frac{M_k}{M_k-1}\frac{1}{N_k} \sum_i W_i \gamma_i \left( \theta_{C_k^\mathbf{W},
i} - 1 \right)
\end{equation*}
Note that we have $\mathbb{E}_\mathbf{W}[W_i \theta_{C_k^\mathbf{W}, i}] = \frac{N_k (N_k - 1)}{N (N -
1)} \theta_{C_k, i}$. It follows that, if $M_1 \gg 1$,~$M_2 \gg 1$, and $N_1 =
N_2 = \frac{N}{2}$,
\begin{align*}
\mathbb{E}_{\mathbf{W}, \mathbf{Z} \sim \mathcal{C}_1^\mathbf{W}}[\hat{\tau}^\mathbf{W}_1]- \mathbb{E}_{\mathbf{W}, \mathbf{Z} \sim \mathcal{C}_2^\mathbf{W}}[
\hat{\tau}^\mathbf{W}_2] & \approx \frac{1}{2N} \sum_i \gamma_i \left( \theta_{\mathcal{C}_1, i} - \theta_{\mathcal{C}_2, i} \right)
\\
& \approx \frac{1}{2}\left(\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1}[\hat \tau]- \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_2}[\hat \tau]\right)
\end{align*}
We conclude that the linear model of interference is transitive.
\subsection{Discussion for Proposition~\ref{prop:statistical_test}}
Under unspecified models of interference, theoretical bounds on the power of
even the simplest randomized experiment are hard to come by. While the joint
assumption of monotonicity and transitivity allows us to design a sensible test
for detecting the better of two partitions, they are not sufficient to
bound its power without stronger assumptions. We thus rely on simulations, like
the ones run in Section~\ref{sec:experimental}, or theoretical approximations,
like the ones suggested in Prop.~\ref{prop:statistical_test}. It approximates
$\mathbb{E}_{\mathbf{W}, \mathbf{Z}}[\hat{\tau}_k^\mathbf{W}]$, for $k \in \{1, 2\}$ by two
independently-distributed Gaussian variables of mean $\hat{\tau}_k^\mathbf{W}$ and
variance $\hat{\sigma}_k^\mathbf{W}$, given in Eq.~\ref{eq:neymann}.
Their difference therefore has the distribution $\mathcal{N}(\hat{\tau}_1^\mathbf{W} -
\hat{\tau}_2^\mathbf{W}, \hat{\sigma}_1^\mathbf{W} + \hat{\sigma}_2^\mathbf{W})$. Recall that Neyman's
variance estimator is an upper-bound of the true variance, under SUTVA, in
expectation over the assignment $\mathbf{Z}$ (cf.~\citep{imbens2015causal}). We prove in
the lemma below that this still holds true for a hierarchical assignment.
\begin{lemma} Under SUTVA, Neyman's variance estimator is an upper bound in
expectation of the true variance of the HT estimator:
%
\begin{equation*}
\mathbb{E}_{\mathbf{W},\mathbf{Z}}[\hat{\sigma}_k^\mathbf{W}] \geq var_{\mathbf{W}, \mathbf{Z}}[\hat{\tau}_k^\mathbf{W}]
\end{equation*}
\end{lemma}
\vspace{-1.1em}
\begin{proof}
By Eve's law, $var_{\mathbf{W}, \mathbf{Z}}[\hat{\tau}_k^\mathbf{W}] = \mathbb{E}_\mathbf{W}[ var_{\mathbf{Z} \sim
\mathcal{C}_k^\mathbf{W}}[\hat{\tau_k^\mathbf{W}} | \mathbf{W}]] + var_\mathbf{W}[\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_k^\mathbf{W}}
[\hat{\tau}_k^\mathbf{W}]]$. From~\citep{imbens2015causal}, the first term is equal
to:
\begin{equation*}
\frac{M_k}{N_k} \left(\frac{var(Y'(1))}{M_{k,t}} +
\frac{var(Y'(0))}{M_{k,c}} - \frac{var(Y'(1) - Y'(0))}{M_k} \right),
\end{equation*}
where $Y'_j(Z) = \sum_{i \in \mathcal{C}_k^\mathbf{W}(j)} Y_i(Z)$, the cluster-level
outcomes. The second term can be shown to be equal to $\frac{var(Y(1) -
Y(0))}{N}$.
Since $\mathbb{E}_{\mathbf{W}, \mathbf{Z}}[\hat{\sigma}_k^\mathbf{W}] = \frac{M_k}{N_k}\left(
\frac{var(Y'(1))}{M_{k,t}} + \frac{var(Y'(0))}{M_{k,c}}\right)$, we must
prove: $\frac{var(Y'(1) - Y'(0))}{N_k} \geq \frac{var(Y(1) - Y(0))}{N}$. This
follows from an application of the Cauchy-Schwarz inequality for balanced
clusters: $\sum_j {(\sum_i Y_i)}^2 \leq \sum_j |C_j| \sum_i Y_i^2$, where
$C_j$ are the cluster sizes, equal to $\frac{N}{N_k}$ in the balanced case.
\end{proof}
In order to determine the better of two clusterings, we can perform two
one-sided t-tests. The Bayesian approach is to compute the posterior
distribution of the difference of the two estimates, using a conjugate Gaussian
prior. In order to assess the impact of assuming the two estimates are
independent Gaussians, we suggest running a sensitivity analysis, by considering
the result of the test for different values of the correlation coefficient.
\section{Theory}
\label{sec:theory}
In this section, we set the notation for the estimand, estimates, and
cluster-based randomized designs that we study. We then define the
monotonicity assumption, introduce our experiment-of-experiments design, and
suggest an approach to analysing its results.
\subsection{Cluster-based randomized designs}
Let $N$ be the number of experimental units, let vector $\mathbf{Y}$ denote the
outcome metric of interest, and let vector $\mathbf{Z}$ denote the assignment of
units to treatment $(Z_i = 1)$ or control $(Z_i = 0)$. Recall that under the
potential outcomes framework, $\mathbf{Y}(\mathbf{Z})$ denotes the potential outcomes of the $N$
units under assignment $\mathbf{Z}$. Under the Stable Unit Treatment Value Assumption
(SUTVA), this simplifies to ${(Y_i(Z_i))}_1^N$. The estimand of interest here
is the {\em Total Treatment Effect} (TTE), defined as the difference of outcomes
between one assignment assigning all units to treatment, and another assigning
none:
\begin{equation}
\label{eq:tte}
TTE = \frac{1}{N} \sum_{i = 1}^N Y_i(\mathbf{Z} = \vec 1) - Y_i(\mathbf{Z} = \vec 0)
\end{equation}
A completely randomized (CR) design assigns $N_T$ units chosen completely at
random to treatment and the remaining $N_C = N - N_T$ units to control. A
clustering $\mathcal{C}$ is a partition of the $N$ experimental units into $M$
clusters. A {\em cluster-based randomized} (CBR) design is a randomized assignment of
units to treatment and control at the cluster level: if cluster $j$ is assigned to
treatment (resp.~control), then all units in cluster $j$ are assigned to
treatment (resp.~control). We will use the notation $\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[X]$ to
denote the expected value of estimator $X$ under a $\mathcal{C}$-cluster-based
randomized design. Recall that $\mathbf{Z} \sim \mathcal{C}$ represents the assignment of
units to treatment and control, resulting from assigning the \emph{clusters} of
$\mathcal{C}$ uniformly at random to treatment or control.
Let $M_T$ (resp. $M_C$) be the number of clusters assigned to treatment
(resp.~control). Let $z \in {\{0, 1\}}^{M}$ be the assignment vector over
\emph{clusters}, where $M = M_T + M_C$. In practice, we will use the
Horvitz-Thompson (HT) estimator, defined below:
\begin{equation}
\label{eq:HT}
\hat \tau = \frac{M}{N} \left( \frac{1}{M_T} \sum_{j=1}^M z_j \sum_{i \in
\mathcal{C}_j} Y_i(\mathbf{Z}) - \frac{1}{M_C} \sum_{j=1}^M (1 - z_j) \sum_{i \in \mathcal{C}_j}
Y_i(\mathbf{Z}) \right)
\end{equation}
Under SUTVA, the HT estimator is an unbiased estimator of the total treatment
effect under any $\mathcal{C}$-CBR assignment~\citep{middleton2011unbiased}:
\begin{equation*}
\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] = TTE
\end{equation*}
When SUTVA does not hold, this property is no longer guaranteed, and $\hat{\tau}$
may be biased. Our objective is to minimize the bias, defined below, with
respect to the clustering, without assuming any explicit knowledge of the
interference mechanism or the value of the estimand $TTE$:
\begin{equation}
\label{eq:objective}
\min_{\mathcal{C}} | \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] - TTE|
\end{equation}
\subsection{A monotonicity assumption}
\label{sec:monotonicity}
Choosing the partitioning of our experimental units in a way that minimizes the
bias of our estimators~(cf.~Eq.~\ref{eq:objective}) when running a cluster-based
experiment is a difficult task: without the ground truth, we cannot observe the
bias directly. However, under a specific monotonicity property --- common to
many randomized experiments --- the task of choosing the better of two
clusterings becomes straightforward.
\begin{definition}
\label{def:one-sided}
Let $\mathcal{P}$ be the set of all possible clusterings of our $N$ units. For a subset
$\mathcal{P}' \subset \mathcal{P}$ of possible clusterings, we say that the interference
model is {\em $\mathcal{P}'$-increasing} if and only if
\begin{equation*}
\forall \mathcal{C} \in \mathcal{P}',~\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] \leq \tau,
\end{equation*}
and it is {\em $\mathcal{P}'$-decreasing} if and only if
\begin{equation*}
\forall \mathcal{C} \in \mathcal{P}',~\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] \geq \tau
\end{equation*}
A $\mathcal{P}'$-\emph{monotone} model is one that is either $\mathcal{P}'$-increasing or
$\mathcal{P}'$-decreasing.
\end{definition}
A monotone model is one for which the expectation of the HT
estimator $\hat \tau$ is either always a lower bound or always an
upper-bound of the estimand under any $\mathcal{C}$-CBR design for $\mathcal{C} \in \mathcal{P}'$.
It is sufficient for $\mathcal{P}'$ to contain the partitions we wish
to compare: we do not have to prove monotonicity beyond those partitions.
Before delving into examples of monotone interference mechanisms, we introduce
the following proposition, which highlights why monotonicity is useful
for reasoning about bias.
\begin{proposition}
\label{prop:usefulness}
If the interference model is $\mathcal{P}'$-increasing, then for all $(\mathcal{C}_1, \mathcal{C}_2)
\in \mathcal{P}'^2$, it holds that
\begin{equation*}
\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1}[\hat \tau] \leq \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_2}[\hat \tau]
\implies |\mathbb{E}_{\mathbf{Z}\sim \mathcal{C}_1}[\hat \tau] - \tau| \geq |\mathbb{E}_{\mathbf{Z} \sim
\mathcal{C}_2}[\hat \tau] - \tau|
\end{equation*}
If the interference model is $\mathcal{P}'$-decreasing, then for all $(\mathcal{C}_1,
\mathcal{C}_2) \in \mathcal{P}'^2$, it holds that
\begin{equation*}
\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1}[\hat \tau] \leq \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_2}[\hat \tau]
\implies |\mathbb{E}_{\mathbf{Z}\sim \mathcal{C}_1}[\hat \tau] - \tau| \leq |\mathbb{E}_{\mathbf{Z} \sim
\mathcal{C}_2}[\hat \tau] - \tau|
\end{equation*}
\end{proposition}
Proposition~\ref{prop:usefulness} is a simple consequence of
Definition~\ref{def:one-sided}: if we know that two cluster-based estimates are
both lower bounds of the estimand, then the greater of the two must be less
biased. The same reasoning applies if they both upper-bound the estimand. It is
sufficient to compare the expectation of our estimators to determine which is
less biased.
The crux of our framework therefore relies on reasoning about monotonicity.
Many commonly studied parametric models of interference are in fact monotone.
Consider the following {\em linear model of interference} (e.g. studied
in~\citep{eckles2017design}):
\begin{equation}
\label{eq:linear}
Y_i(\mathbf{Z}) = \alpha_i + \beta_i Z_i + \gamma \rho_i + \epsilon_i,
\end{equation}
where for all $i$, $(\alpha_i, \beta_i, \gamma) \in \mathbb{R}^3$, $\epsilon_i
\sim \mathcal{N}(0, 1)$ is independent of $\rho_i$, and $\rho_i =
\frac{1}{|\mathcal{N}_i|} \sum_{j \in \mathcal{N}_i} Z_j$ is the proportion of $i$'s
neighborhood $\mathcal{N}_i$ that is treated. This expresses each unit's outcome
as a linear function of a fixed effect, a heterogeneous treatment effect, and a
network effect proportional to the fraction of the unit's neighborhood that is
treated. As shown in the following proposition, this model is monotone.
\begin{proposition}\label{prop:simple_linear_monotone}
For all $\mathcal{C} \in \mathcal{P}$, let $\theta_\mathcal{C} = \frac{1}{N} \sum_i
\frac{|\mathcal{N}_i \cap \mathcal{C}(i)|}{|\mathcal{N}_i|}$ be the average proportion of a unit
$i$'s neighborhood $\mathcal{N}_i$ included in its assigned cluster $\mathcal{C}(i)$. Then,
%
\begin{equation*}
\tau - \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] = \frac{\gamma M}{M-1} (1 -
\theta_\mathcal{C})
\end{equation*}
It follows that if $\gamma \geq 0$, the interference model is
$\mathcal{P}$-increasing, otherwise it is $\mathcal{P}$-decreasing.
\end{proposition}
We can also extend the above for heterogeneous network effect
parameters $\gamma_i$. A proof can be found in Section~\ref{sec:proofs}.
\begin{proposition}\label{prop:linear_monotone}
Let $\theta_{\mathcal{C}, i} = \frac{|\mathcal{N}_i \cap \mathcal{C}(i)|}{|\mathcal{N}_i|}$. For all $\mathcal{C}
\in \mathcal{P}$,
\begin{equation*}
\tau - \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[\hat \tau] = \frac{M}{N (M-1)}\sum_i
\gamma_i (1 - \theta_{\mathcal{C}, i})
\end{equation*}
It follows that if $\sum_i \gamma_i(1- \theta_{\mathcal{C}, i}) \geq 0$, then the interference
model is $\mathcal{P}$-increasing, otherwise it is $\mathcal{P}$-decreasing.
\end{proposition}
It follows that if $\gamma_i \geq 0, \forall i$, then the interference mechanism
is $\mathcal{P}$-increasing, and if $\gamma_i \leq 0, \forall i$, then it is
$\mathcal{P}$-decreasing. If the sign of $\gamma_i$ is not consistent, then the
monotonicity depends on the clustering: if all units with a given
sign are perfectly clustered $(\theta_{C, i} = 1)$, e.g.~all units with
$\gamma_i \geq 0$, then the mechanism is once again monotone.
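As a numerical illustration of Prop.~\ref{prop:simple_linear_monotone}, the following Python sketch simulates the (noiseless) linear model on a toy ring graph with contiguous clusters and compares the Monte Carlo bias of the HT estimator with $\frac{\gamma M}{M-1}(1-\theta_\mathcal{C})$; the graph, parameter values, and helper names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, gamma = 60, 6, 2.0                                      # units, clusters, effect
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]    # ring graph
cluster_of = np.repeat(np.arange(M), N // M)                  # contiguous clusters
alpha, beta = rng.normal(size=N), rng.normal(loc=1.0, size=N)

def outcomes(z):
    rho = np.array([z[list(nb)].mean() for nb in neighbors])  # treated fraction
    return alpha + beta * z + gamma * rho                     # noiseless linear model

tau = outcomes(np.ones(N)).mean() - outcomes(np.zeros(N)).mean()

def ht(z_cluster):
    z = z_cluster[cluster_of]
    y = outcomes(z)
    m_t = z_cluster.sum()
    return (M / N) * (y[z == 1].sum() / m_t - y[z == 0].sum() / (M - m_t))

draws = [ht(rng.permutation([1] * (M // 2) + [0] * (M - M // 2)))
         for _ in range(20000)]
theta = np.mean([np.mean([cluster_of[j] == cluster_of[i] for j in neighbors[i]])
                 for i in range(N)])
print(tau - np.mean(draws), gamma * M / (M - 1) * (1 - theta))  # should agree
\end{verbatim}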
More sophisticated interference mechanisms, without an immediate parametric
form, are also monotone. For example, we show that the
interference mechanism present in reserve price experiments in an advertiser
auction setting is monotone (under certain conditions). See
Section~\ref{sec:application} for more details. For these complex interference
mechanisms, it can also be easier to establish the following sufficient (but not
necessary) condition:
\begin{proposition}
\label{prop:more}
We say an interference mechanism verifies the \emph{self-excitation property}
for a set of partitions~$\mathcal{P}'$, if for all units $i$ and partitions $\mathcal{C}
\in \mathcal{P}'$,
\begin{align*}
& \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[ Y_i(\mathbf{Z}) \mid Z_i = 0] \geq Y_i(\vec 0) \\
& \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}}[ Y_i(\mathbf{Z}) \mid Z_i = 1] \leq Y_i(\vec 1)
\end{align*}
A $\mathcal{P}'$-self-exciting process is $\mathcal{P}'$-increasing. A {\em self-deexciting
mechanism}, with flipped inequalities, is $\mathcal{P}'$-decreasing.
\end{proposition}
The proof is included in Section~\ref{sec:proofs}. The two
inequalities capture the following phenomenon: conditioned on a unit's treatment
status, if its outcome is greatest when its neighborhood is entirely in treatment,
and lowest when its neighborhood is entirely in control, then an experiment
always under-estimates the true treatment effect. This only needs to
be true in \emph{expectation} over the assignments $\mathbf{Z}$, even if, in practice,
we can show that the inequalities hold for all $\mathbf{Z}$ (cf.
Section~\ref{sec:application}).
We say the interference mechanism is self-exciting because these inequalities
are verified when units benefit from being surrounded by units in treatment. A
successful messaging feature launch is a straightforward example of a
self-exciting process, as is any pricing mechanism that penalizes any treated
bidders and boosts the utility of their competitors.
\subsection{An experiment-of-experiments design}
\begin{figure}
\centering
\includegraphics[scale=.6]{drawing_without_crosses.pdf}
\caption{A hierarchical experimental design, which assigns the experimental
units to one of two cluster-based randomized designs, $C_1$ and $C_2$,
completely at random (CR). $\hat \tau^\mathbf{W}_1$ and $\hat \tau^\mathbf{W}_2$ represent
the treatment effect estimates under each design respectively.}\label{fig:hier}
\end{figure}
Under monotonicity, Proposition~\ref{prop:usefulness} states that we can
determine the least-biased of two $\mathcal{P}$-increasing or $\mathcal{P}$-decreasing
cluster-based designs, without knowledge of the estimand, by comparing the
expectation of their estimates. However, only one cluster-based design can ever
be applied to the set of experimental units in its entirety, and the comparison
of $\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1} [\hat \tau]$ with $\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_2}[\hat \tau]$
cannot be done directly.
This resembles the fundamental problem of causal inference, which
states that units cannot be placed both in treatment and control buckets, and is
solved through randomization. Inspired by~\citep{saveski2017detecting,
pouget2017testing}, we suggest randomly assigning different units to either
clustering algorithm, resulting in a 2-step hierarchical randomized design. The
procedure, described in pseudo-code in Algorithm~\ref{alg:hier}, is as follows:
\begin{itemize}
\item Assign units completely at random to two design buckets, one for each
clustering algorithm. Let $\mathbf{W} \in {\{1, 2\}}^N$ be the vector representing
that assignment.
\item Within each design bucket, cluster the remaining units together
according to the appropriate partition: if $W_i = W_j = k$ \emph{and}
$C_k(i) = C_k(j)$, then $i$ and $j$ belong to the same cluster in design
bucket $k \in \{1, 2\}$. The resulting partitions are $C_1^\mathbf{W}$ and $C_2^\mathbf{W}$.
\item Within each design bucket, assign the resulting clusters to treatment
and control. Let $\mathbf{Z}$ be the resulting assignment vector. This is
possible because no unit belongs to both $\mathcal{C}_1^\mathbf{W}$ and $\mathcal{C}_2^\mathbf{W}$.
\end{itemize}
\begin{algorithm}
\SetAlgoNoLine%
\KwIn{Partitions $\mathcal{C}_1,~\mathcal{C}_2$ of the $N$ units into $M_1,~M_2$
clusters.}
\KwOut{$\mathbf{Z} \in {\{0,1\}}^N$ encoding the assignment of each unit to a
treatment or control bucket.}
\caption{Experiment of experiments design}
Choose $\mathbf{W} \in {\{1, 2\}}^N$ uniformly at random, encoding the
assignment of units to design arms $1$ and $2$\;
\For{$k \in \{1, 2\}$}{%
Let $C_k^\mathbf{W}$ be the clustering on $\{i\in [1,N]: W_i = k\}$ such that
$C_k^\mathbf{W}(i) = C_k^\mathbf{W}(j)~\text{iff}~C_k(i) = C_k(j)$\;
Assign units in treatment arm $k$ to treatment and
control with a $\mathcal{C}_k^\mathbf{W}$-cluster-based design\;}
\Return~the resulting assignment vector $\mathbf{Z}$\;
\label{alg:hier}
\end{algorithm}
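A minimal Python sketch of Algorithm~\ref{alg:hier} is given below; the array encoding and the balanced split of clusters into treatment and control are illustrative choices.
\begin{verbatim}
import numpy as np

def eoe_assignment(cluster_1, cluster_2, rng=np.random.default_rng()):
    """Experiment-of-experiments assignment (Algorithm 1).

    cluster_1, cluster_2 : length-N arrays of cluster labels under C_1 and C_2
    Returns (w, z): design-arm labels in {1, 2} and treatments in {0, 1}.
    """
    c1, c2 = np.asarray(cluster_1), np.asarray(cluster_2)
    n = len(c1)
    half = n // 2
    w = rng.permutation(np.repeat([1, 2], [n - half, half]))   # CR split into arms
    z = np.zeros(n, dtype=int)
    for k, clusters in ((1, c1), (2, c2)):
        arm = (w == k)
        labels = np.unique(clusters[arm])                      # clusters of C_k^W
        treated = rng.choice(labels, size=len(labels) // 2, replace=False)
        z[arm] = np.isin(clusters[arm], treated).astype(int)   # cluster-based design
    return w, z
\end{verbatim}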
Algorithm~\ref{alg:hier} provides us with two estimates, $\hat{\tau}^\mathbf{W}_1$ and
$\hat{\tau}^\mathbf{W}_2$, of the causal effect, one from each design arm. The
resulting clusterings $\mathcal{C}_1^\mathbf{W}$ and $\mathcal{C}_2^\mathbf{W}$ may be unbalanced. This is
of minor importance as the HT estimator (cf. Eq.~\ref{eq:HT}) is unbiased
(under SUTVA) for unbalanced clusterings, and balancedness is required only to
control its variance. In practice, $\mathcal{C}_1$ and $\mathcal{C}_2$ are not required to
have the same number of clusters, but we expect the cluster sizes to be large
enough for each cluster to have at least one unit in each design arm after the
first stage with high probability.
From the comparison of $\hat \tau^\mathbf{W}_1$ and $\hat{\tau}^\mathbf{W}_2$, we seek
to order $\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1}[\hat{\tau}_1]$ and $\mathbb{E}_{\mathbf{Z} \sim
\mathcal{C}_2}[\hat{\tau}_2]$. Under arbitrary interference structures,
these proxy estimates are not guaranteed to have the same ordering,
the key condition for Proposition~\ref{prop:usefulness}. Intuitively,
$\hat{\tau}^\mathbf{W}_1$ and $\hat{\tau}^\mathbf{W}_2$ represent the treatment effect
estimates for two ``weakened'' versions of each partitioning $\mathcal{C}_1$
and $\mathcal{C}_2$. This is where a completely randomized assignment helps. Because
the assignment of units to design arms is done completely at random, it affects
each partitioning in the same way, and we expect the ordering to stay the same.
We formalize this expectation with the following property:
\begin{property}
\label{prop:transitivity}
An interference mechanism is said to be $\mathcal{P}'$-transitive if $\forall
(\mathcal{C}_1, \mathcal{C}_2) \in \mathcal{P}'^2$,
%
\begin{align*}
\mathbb{E}_{\mathbf{W}, \mathbf{Z} \sim \mathcal{C}_1^\mathbf{W}}\left[ \hat \tau^\mathbf{W}_1 \right]
\leq \mathbb{E}_{\mathbf{W}, \mathbf{Z} \sim \mathcal{C}_2^\mathbf{W}} \left[ \hat \tau^\mathbf{W}_2\right]
\Leftrightarrow \mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_1}[ \hat \tau ] \leq \mathbb{E}_{\mathbf{Z} \sim
\mathcal{C}_2}[ \hat \tau ]
\end{align*}
%
\end{property}
As a sanity check, we can also confirm that the property holds for SUTVA\@.~The
property can also be shown for the linear interference mechanisms introduced in
Prop.~\ref{prop:linear_monotone}:
\begin{proposition}\label{prop:linear_transitive}
Under SUTVA, it holds that
\begin{equation*}
\mathbb{E}_{\mathbf{W}, \mathbf{Z} \sim \mathcal{C}_k^\mathbf{W}} \left[\hat \tau^\mathbf{W}_k \right] =
\mathbb{E}_{\mathbf{Z} \sim \mathcal{C}_k}[\hat \tau] = \tau.
\end{equation*}
%
Hence, the no-interference case is
trivially $\mathcal{P}$-transitive. Furthermore, the linear model of interference
in Prop.~\ref{prop:linear_monotone} is $\mathcal{P}$-transitive if the same number
of units is assigned to each design arm: $\sum_i \mathbbm{1}[W_i = 1] = \frac{N}{2}$.
\end{proposition}
A full proof can be found in Section~\ref{sec:proofs}. For more complex
mechanisms of interference, as is the case for reserve price experiments, we
use simulations to confirm the intuition that transitivity holds. See
Section~\ref{sec:experimental} for more details.
As is common with A/B tests, we do not have access to the expectation of our
estimators, only to a single realization, and must rely on approximations to the
variance, such as Neyman's variance estimator. In order to meaningfully compare the
estimates we obtain, we must apply our method of choice to determine when
their ordering is significant. For example, we can make a normal
approximation to the distribution of the estimates --- using Neyman's
estimator to upper-bound the variance --- to estimate the probability that one
estimate is greater than the other with a certain significance level:
\begin{proposition}\label{prop:statistical_test}
For $k \in \{1, 2\}$, recall the definition of the Neyman variance
estimator for cluster-based randomized designs:
%
\begin{equation}\label{eq:neymann}
\hat \sigma_k^\mathbf{W} = \frac{M_k}{N_k} \left(\frac{\hat S_{k, t}}{M_{k,t}} +
\frac{\hat S_{k, c}}{M_{k,c}} \right),
\end{equation}
%
where $M_k$ (resp. $N_k$) is the number of clusters (resp.~units) in
$\mathcal{C}^\mathbf{W}_k$, $\hat S_{k, t} = var\{Y'_{j,k} : z_j = 1\}$ and $\hat S_{k, c} =
var\{Y'_{j,k} : z_j = 0\}$, and $Y'_{j, k} = \sum_{\mathcal{C}^\mathbf{W}_k(i) = j} Y_i$.
Assume that the interference mechanism is transitive and $\mathcal{P}'$-increasing,
such that $(\mathcal{C}_1, \mathcal{C}_2) \in \mathcal{P}'^2$. If $\alpha$ is the level of
significance chosen, we state that $\mathcal{C}_1$ is a significantly better
clustering than $\mathcal{C}_2$ if and only if
%
\begin{equation*}
\Phi\left(\frac{\hat \tau_2^\mathbf{W} - \hat \tau_1^\mathbf{W}}{\sqrt{\hat \sigma_1^\mathbf{W} +
\hat \sigma_2^\mathbf{W}}}\right) < \alpha,
\end{equation*}
where $\Phi$ is the cdf of the normal distribution.
\end{proposition}
A similar reasoning applies to $\mathcal{P}'$-decreasing mechanisms. If the Gaussian
approximation is not appropriate, the distribution of the estimators can equally
be approximated by a bootstrap analysis, or a more sophisticated model-based
imputation method~\citep{imbens2015causal}. More details can be found in
Section~\ref{sec:proofs}.
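For concreteness, here is a minimal Python sketch of the decision rule of Prop.~\ref{prop:statistical_test} for a $\mathcal{P}'$-increasing mechanism, taking the two point estimates and their Neyman variance estimates as inputs; the function and variable names are illustrative.
\begin{verbatim}
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def c1_significantly_better(tau1, tau2, sigma1, sigma2, alpha=0.05):
    """Declare C_1 better than C_2 (P'-increasing case): both estimates are
    lower bounds of the estimand, so C_1 wins when tau1 exceeds tau2 by more
    than the normal approximation with variance sigma1 + sigma2 can explain."""
    return normal_cdf((tau2 - tau1) / sqrt(sigma1 + sigma2)) < alpha
\end{verbatim}
For a $\mathcal{P}'$-decreasing mechanism, the roles of the two estimates are swapped.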
\section{Introduction}
As a generalization of Riemannian product manifolds, Bishop and O'Neill \cite{BISHOP} introduced the notion of warped product manifolds, which
were later studied in (\cite{ATCEKEN1}, \cite{ATCEKEN2}, \cite{HUOM}, \cite{UD1}, \cite{UDDIN}-\cite{UKK}).
The existence or non-existence of warped product manifolds plays
an important role in differential geometry as well as in physics.\\
\indent As a generalization of LP-Sasakian manifolds, introduced independently by Matsumoto \cite{8} and by Mihai and
Rosca \cite{9}, Shaikh \cite{11} introduced the notion of Lorentzian concircular structure manifolds
(briefly, $(LCS)_n$-manifolds) with an example. Then Shaikh and Baishya (\cite{12}, \cite{13})
investigated the applications of $(LCS)_n$-manifolds to the general
theory of relativity and cosmology. The $(LCS)_n$-manifolds are also
studied in (\cite{SKH}, \cite{SHAIKH1}, \cite{SHAIKH2}-\cite{14}).\\
\indent Due to important applications in applied mathematics and theoretical physics, the geometry of submanifolds
has become a subject of growing interest. As in the case of almost Hermitian manifolds, the notions of invariant and anti-invariant
submanifolds depend on the behaviour of the almost contact metric structure $\phi$. The study of the differential geometry
of a contact CR-submanifold, as a generalization of invariant and anti-invariant submanifolds, was introduced by Bejancu \cite{BEJ1}.
In this connection we mention that different classes of submanifolds of $(LCS)_n$-manifolds are studied in (\cite{ATCE2}, \cite{HUI2}, \cite{HUI1},
\cite{HAP}, \cite{HP}, \cite{HPP}, \cite{SHAIKH9}). Recently Hui et al. (\cite{HAP}, \cite{HAN}) studied contact CR-warped product submanifolds and also warped product pseudo-slant submanifolds of a $(LCS)_n$-manifold $\bar{M}$. In this paper we study characterizations for both
these classes of warped product submanifolds. An example of a bi-slant submanifold of a $(LCS)_n$-manifold is
constructed. However, it is also shown that there does not exist any proper warped product bi-slant
submanifold of a $(LCS)_n$-manifold.
\section{Preliminaries}
Let $\bar{M}$ be an $n$-dimensional Lorentzian manifold \cite{NIL} admitting a unit
timelike concircular vector field $\xi$, called the characteristic
vector field of the manifold. Then we have
\begin{equation}
\label{2.1}
g(\xi, \xi)=-1.
\end{equation}
Since $\xi$ is a unit concircular vector field, it follows that
there exists a non-zero 1-form $\eta$ such that for
\begin{equation}
\label{2.2}
g(X,\xi)=\eta(X),
\end{equation}
the equation of the following form holds \cite{15}
\begin{equation}
\label{2.3}
(\bar\nabla _{X}\eta)(Y)=\alpha \{g(X,Y)+\eta(X)\eta(Y)\},
\ \ \ (\alpha\neq 0)
\end{equation}
\begin{equation}
\label{2.4}
\bar\nabla _{X}\xi = \alpha \{X +\eta(X)\xi\}, \ \ \ \alpha\neq 0
\end{equation}
for all vector fields $X$, $Y$, where $\bar{\nabla}$ denotes the
operator of covariant differentiation with respect to the Lorentzian
metric $g$ and $\alpha$ is a non-zero scalar function satisfying
\begin{equation}
\label{2.5}
{\bar\nabla}_{X}\alpha = (X\alpha) = d\alpha(X) = \rho\eta(X),
\end{equation}
$\rho$ being a certain scalar function given by $\rho=-(\xi\alpha)$.
If we put
\begin{equation}
\label{2.6}
\phi X=\frac{1}{\alpha}\bar\nabla_{X}\xi,
\end{equation}
then from (\ref{2.4}) and (\ref{2.6}) we have
\begin{equation}
\label{2.7}
\phi X = X+\eta(X)\xi,
\end{equation}
\begin{equation}
\label{2.8}
g(\phi X,Y) = g(X,\phi Y)
\end{equation}
from which it follows that $\phi$ is a symmetric (1,1) tensor, and it is
called the structure tensor of the manifold. Thus the Lorentzian
manifold $\bar{M}$ together with the unit timelike concircular vector
field $\xi$, its associated 1-form $\eta$ and an (1,1) tensor field
$\phi$ is said to be a Lorentzian concircular structure manifold
(briefly, $(LCS)_{n}$-manifold), \cite{11}. Especially, if we take
$\alpha=1$, then we can obtain the LP-Sasakian structure of
Matsumoto \cite{8}. In a $(LCS)_{n}$-manifold $(n>2)$ $\bar{M}$, the following
relations hold \cite{11}:
\begin{equation}
\label{2.9}
\eta(\xi)=-1,\ \ \phi \xi=0,\ \ \ \eta(\phi X)=0,\ \ \
g(\phi X, \phi Y)= g(X,Y)+\eta(X)\eta(Y),
\end{equation}
\begin{equation}
\label{2.10}
\phi^2 X= X+\eta(X)\xi,
\end{equation}
\begin{equation}
\label{2.11}
(\bar{\nabla}_{X}\phi)Y=\alpha\{g(X,Y)\xi+2\eta(X)\eta(Y)\xi+\eta(Y)X\},
\end{equation}
\begin{equation}
\label{2.12}
(X\rho)=d\rho(X)=\beta\eta(X)
\end{equation}
for all $X,\ Y,\ Z\in\Gamma(T\bar{M})$ and $\beta = -(\xi\rho)$ is a scalar function.\\
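For instance, (\ref{2.10}) follows directly from (\ref{2.7}) together with $\phi\xi=0$ from (\ref{2.9}):
\begin{equation*}
\phi^2X=\phi\big(X+\eta(X)\xi\big)=\phi X+\eta(X)\phi\xi=X+\eta(X)\xi.
\end{equation*}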
Let $M$ be a submanifold of $\bar{M}$ with
induced metric $g$. Also let $\nabla$ and $\nabla^{\perp}$ be the
induced connections on the tangent bundle $TM$ and the normal bundle
$T^{\perp}M$ of $M$ respectively. Then the Gauss and Weingarten
formulae are given by
\begin{equation}
\label{2.13} \bar{\nabla}_{X}Y = \nabla_{X}Y + h(X,Y)
\end{equation}
and
\begin{equation}
\label{2.14} \bar{\nabla}_{X}V = -A_{V}X + \nabla^{\perp}_{X}V
\end{equation}
for all $X,Y\in\Gamma(TM)$ and $V\in\Gamma(T^{\perp}M)$, where $h$
and $A_V$ are second fundamental form and the shape operator
(corresponding to the normal vector field $V$) respectively for the
immersion of $M$ into $\bar{M}$ and they are related by
$\label{2.15} g(h(X,Y),V) = g(A_{V}X,Y)$,
for any $X,Y\in\Gamma(TM)$ and $V\in\Gamma(T^{\perp}M)$.\\
For any $X\in \Gamma(TM)$ and $V\in \Gamma(T^\bot M)$, we can write
\begin{eqnarray}
\label{2.16}
\text{(a)} \ \phi X = PX+QX,\ \ \text{(b)}\ \phi V= bV+cV
\end{eqnarray}
where $PX,\ bV$ are the tangential components and $QX,\ cV$ are the normal components.\\
A submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be invariant if $\phi(T_pM)\subseteq T_pM$, for every $p\in M$ and
anti-invariant if $\phi T_pM\subseteq T^\bot_pM$, for every $p\in M$.\\
A submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be a CR-submanifold if there is a differentiable distribution
$\mathcal{D}:p\rightarrow \mathcal{D}_p \subseteq T_p M$ such that $\mathcal{D}$ is an invariant distribution and the orthogonal
complementary distribution $\mathcal{D}^\bot$ is anti-invariant.\\
The normal space of a CR-submanifold $M$ is decomposed as $T^\bot M =Q\mathcal{D^\bot}\oplus\nu$,
where $\nu$ is the invariant normal subbundle of $M$ with respect to $\phi$.\\
A submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be slant if for each non-zero vector $X\in T_pM$ the angle between $\phi X$
and $T_pM$ is a constant, i.e. it does not depend on the choice of $p\in M$.\\
A submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be a pseudo-slant submanifold if there exists a pair of
orthogonal distributions $\mathcal{D^\bot}$ and $\mathcal{D^\theta}$ such that \\
(i) TM admits the orthogonal direct decomposition $TM=\mathcal{D^\bot}\oplus\mathcal{D^\theta}$,\\
(ii) The distribution $\mathcal{D^\bot}$ is anti-invariant, \\
(iii) The distribution $\mathcal{D^\theta}$ is slant with slant angle $\theta\neq 0,\frac{\pi}{2}$.\\
From the definition it is clear that if $\theta=0$, then $M$ is a CR-submanifold. We say that a pseudo-slant submanifold is proper if
$\theta\neq 0,\frac{\pi}{2}$. The normal space of a pseudo-slant submanifold $M$ is decomposed as $T^\bot M =Q\mathcal{D^\theta}\oplus\phi\mathcal{D}^\bot\oplus\nu$.\\
On a slant submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$, we have \cite{HUI2}
\begin{eqnarray}
\label{2.17} P^2X &=&\cos^2\theta[X+\eta(X)\xi], \\
\label{2.18} Q^2X &=&\sin^2\theta[X+\eta(X)\xi],
\end{eqnarray}
where $\theta$ is the slant angle of $M$ in $\bar{M}$.\\
From (\ref{2.17}) and (\ref{2.18}), we get
\begin{eqnarray}
\label{2.19}g(PX,PY) &=& \cos^2\theta[g(X,Y)+\eta(X)\eta(Y)] ,\\
\label{2.20}g(QX,QY) &=& \sin^2\theta[g(X,Y)+\eta(X)\eta(Y)],
\end{eqnarray}
for any $X,\ Y\in\Gamma(TM)$.\\
Also for a slant submanifold from (\ref{2.16}) and (\ref{2.17}), we have
\begin{equation}\label{2.21}
bQX=\sin^2\theta(X+\eta(X)\xi)\ \ \text{and}\ \ cQX=-QPX
\end{equation}
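Indeed, applying (\ref{2.16}) twice and comparing with (\ref{2.10}) and (\ref{2.17}), we get
\begin{equation*}
X+\eta(X)\xi=\phi^2X=P^2X+QPX+bQX+cQX,
\end{equation*}
so the tangential part gives $bQX=X+\eta(X)\xi-P^2X=\sin^2\theta[X+\eta(X)\xi]$ and the normal part gives $cQX=-QPX$.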
\indent For a Riemannian manifold $\bar{M}$ of dimension $n$ and a smooth function $f$ on
$\bar{M}$, the gradient $\nabla f$ of $f$ is defined by
\begin{equation}
\label{2.22}
g(\nabla f,X)=X(f)=(Xf)
\end{equation}
for any $X\in \Gamma (T\bar{M})$.\\
\begin{definition}\cite{BISHOP}
Let $(N_1,g_1)$ and $(N_2,g_2)$ be two Riemannian manifolds with Riemannian metric $g_1$
and $g_2$ respectively and let $f$ be a positive smooth function on $N_1$. The warped product
of $N_1$ and $N_2$ is the Riemannian manifold $N_1\times_{f}N_2 = (N_1\times N_2,g)$, where
\begin{equation}
\label{2.23}
g=g_1+f^2g_2.
\end{equation}
\end{definition}
\noindent A warped product manifold $N_1\times_{f}N_2$ is said to be trivial if the warping function $f$ is constant.\\
\begin{prop}\cite{NIL}
Let $M=N_1\times_{f}N_2$ be a warped product manifold. Then
\begin{eqnarray*}
\nabla_UX = \nabla_XU = (X\ln f) U,
\end{eqnarray*}
for any $X\in\Gamma(TN_1)$ and $U\in\Gamma(TN_2)$.
\end{prop}
\begin{theorem}[Hiepko's Theorem]\cite{HIPKO}
Let $\mathcal{D}_1$ and $\mathcal{D}_2$ be two orthogonal complementary distributions on a Riemannian manifold $M$. Suppose that $\mathcal{D}_1$ and $\mathcal{D}_2$ are both involutive such that $\mathcal{D}_1$ is a totally geodesic foliation and $\mathcal{D}_2$ is a spherical foliation. Then $M$ is locally isometric to a non trivial warped product $M_1\times_f M_2$, where $M_1$ and $M_2$ are integral manifolds of $\mathcal{D}_1$ and $\mathcal{D}_2$,
respectively.
\end{theorem}
\section{Characterization for contact CR-warped product submanifolds}
In \cite{HAN} it is shown that contact CR-warped product submanifolds of $\bar{M}$ of the form
$N_\bot\times_fN_T$, where $N_T$ and $N_\bot$ are invariant and anti-invariant submanifolds of $\bar{M}$ respectively,
exist if $\xi \in \Gamma (TN_\bot)$ and do not exist if $\xi \in \Gamma (TN_T)$.
In this section we find a characterization for a submanifold $M$ of $\bar{M}$ to be a contact CR-warped product of the form
$N_\bot\times_fN_T$ such that $\xi \in \Gamma(TN_\bot)$. First we prove the following Lemma:
\begin{lemma}
Let $M=N_\bot\times_fN_T$ be a warped product submanifold of $\bar{M}$ such that
$\xi \in \Gamma(T N_\bot)$, then
\begin{eqnarray}
\label{3.1}
g(h(X,Y),\phi Z)=-\alpha \eta(Z)g(X,Y)-(Z\ln f)g(\phi X,Y)
\end{eqnarray}
for $X,\ Y\in \Gamma(TN_T)$ and $Z\in \Gamma(TN_\bot)$.
\end{lemma}
\begin{proof}
For $X,\ Y\in \Gamma(TN_T)$ and $Z,\xi\in \Gamma(TN_\bot)$ we have
from (\ref{2.11}) and (\ref{2.13}) that
\begin{eqnarray*}
g(h(X,Y),\phi Z) &=& g(\bar{\nabla}_XY,\phi Z )\\
&=&g(\bar{\nabla}_X\phi Y,Z)-g((\bar{\nabla}_X\phi)Y,Z) \\
&=&-g(\phi Y,\nabla_XZ)-\alpha g(X,Y)\eta(Z).
\end{eqnarray*}
By virtue of Proposition 2.1 from the above relation we get (\ref{3.1}).
\end{proof}
\par Now, replacing $X$ with $\phi X$, $Y$ with $\phi Y$, and then both, in (\ref{3.1}), we get the following respective relations
\begin{eqnarray}
\label{3.2} g(h(\phi X,Y),\phi Z) &=& -\alpha \eta(Z)g(\phi X,Y)-Z(\ln f)g(X,Y), \\
\label{3.3} g(h(X,\phi Y),\phi Z) &=& -\alpha \eta(Z)g(\phi X,Y)-Z(\ln f)g(X,Y), \\
\label{3.4} g(h(\phi X,\phi Y),\phi Z) &=& -\alpha \eta(Z)g(X,Y)-Z(\ln f)g(\phi X,Y).
\end{eqnarray}
\begin{corollary}
Let $M=N_\bot\times_fN_T$ be a warped product submanifold of $\bar{M}$ such that
$\xi \in \Gamma(T N_\bot)$. Then
\begin{eqnarray*}
g(h(\phi X,Y),\phi Z) &=& g(h(X,\phi Y),\phi Z) \\
\text{and}\ \ \ \ \ g(h(\phi X,\phi Y),\phi Z)&=&g(h(X,Y),\phi Z)
\end{eqnarray*}
for $X,\ Y\in \Gamma(TN_T)$ and $Z\in \Gamma(TN_\bot)$.
\end{corollary}
Now we have the following characterization theorem:
\begin{theorem}
Let $M$ be a contact CR-submanifold of a $(LCS)_n$-manifold $\bar{M}$ such that $\xi$ is
tangent to the anti-invariant distribution $\mathcal{D}^\bot$. Then $M$ is locally a warped product submanifold if and only if
\begin{equation}
\label{3.5}
A_{\phi Z}X=-\alpha \eta(Z)X-(Z\mu)\phi X
\end{equation}
for any $X\in\Gamma(\mathcal{D})$ and $Z\in \Gamma(\mathcal{D}^\bot)$ and also for some smooth function $\mu$ on $M$ such that
$(Y\mu)=0$ for any $Y\in \mathcal{D}$.
\end{theorem}
\begin{proof}
\indent If $M$ is a contact CR-warped product submanifold, then for any $X\in \Gamma(TN_T)$ and $Z,\ W\in \Gamma (TN_\bot)$,
we have
\begin{equation*}
g(A_{\phi Z}X,W)=g(h(X,W),\phi Z)=g(\bar{\nabla}_WX,\phi Z)=g(\phi\bar{\nabla}_WX,Z).
\end{equation*}
Using (\ref{2.11}) in the above equation we get
\begin{equation}\label{3.5a}
g(A_{\phi Z}X,W)=g(\bar{\nabla}_W\phi X,Z).
\end{equation}
Then, using (\ref{2.13}) and Proposition 2.1 in (\ref{3.5a}), we have that $g(A_{\phi Z}X,W)=0$ and therefore $A_{\phi Z}X$ has no
component in $\Gamma(TN_\bot)$. Hence by virtue of Lemma 3.1, the relation (\ref{3.5}) follows.\\
\par Conversely, let $M$ be a contact CR-submanifold of $\bar{M}$ with the invariant and anti-invariant distributions
$\mathcal{D}$ and $\mathcal{D}^\bot$ such that the relation (\ref{3.5}) holds. Then for any $X\in \Gamma(\mathcal{D})$ and $Z,\ W\in \Gamma(\mathcal{D}^\bot)$,
using (\ref{2.13}) we have
\begin{eqnarray*}
g(\nabla_ZW,\phi X)&=& g(\bar{\nabla}_Z\phi W,X)-g((\bar{\nabla}_Z\phi)W,X)\\
&=&-g(\phi W,\bar{\nabla}_ZX)\\
&=&-g(A_{\phi W}X,Z).
\end{eqnarray*}
Using (\ref{3.5}) in the above relation we get $$g(\nabla_ZW,\phi X)=0.$$
Similarly, we get $g(\nabla_WZ,\phi X)=0.$
Thus we obtain
\begin{equation}
\label{3.6}
g(\nabla_ZW+\nabla_WZ,\phi X)=0,
\end{equation}
which implies that $\nabla_ZW+\nabla_WZ\in \Gamma(\mathcal{D}^\bot)$, i.e., $\mathcal{D}^\bot$ is integrable and its leaves are totally geodesic in $M$.
Again for any $X,\ Y\in \Gamma(\mathcal{D})$ and $Z\in \Gamma(\mathcal{D}^\bot)$, we get
\begin{eqnarray}
\label{3.7}
g(\nabla_XY,Z)&=&g(\bar{\nabla}_X\phi Y,\phi Z)+\eta(Z)g(Y,\bar{\nabla}_X\xi).
\end{eqnarray}
Using (\ref{2.6}) in (\ref{3.7}), we get
\begin{equation}
\label{3.8}
g(\nabla_XY,Z)=g(h(X,\phi Y),\phi Z)+\alpha \eta(Z)g(Y,\phi X).\\
\end{equation}
Interchanging $X$ and $Y$ in (\ref{3.8}), we get
\begin{equation}\label{3.8a}
g(\nabla_YX,Z)=g(h(\phi X,Y),\phi Z)+\alpha \eta(Z)g(X,\phi Y).
\end{equation}
From (\ref{3.8}) and (\ref{3.8a}), we have
\begin{equation}
\label{3.9}
g([X,Y],Z)=g(h(X,\phi Y),\phi Z)-g(h(\phi X,Y),\phi Z).
\end{equation}
Using (\ref{3.5}) in (\ref{3.9}), we get $g([X,Y],Z)=0$ and therefore $\mathcal{D}$ is integrable on $M$.\\
Let us consider a leaf $N_T$ of $\mathcal{D}$ in $M$ and let $h^T$ be the second fundamental form
of $N_T$ in $M$, then we have
\begin{eqnarray}
\label{3.10}
g(h^T(X,Y),Z) &=& g(\phi\bar{\nabla}_YX,\phi Z)-\eta(Z)g(\bar{\nabla}_YX,\xi)\\
\nonumber&=&-g(\phi X,\bar{\nabla}_Y\phi Z)+\eta(Z)g(X,\bar{\nabla}_Y\xi).
\end{eqnarray}
Using (\ref{2.6}) and (\ref{2.14}), (\ref{3.10}) yields
\begin{equation}\label{3.12}
g(h^T(X,Y),Z)=g(\phi X,A_{\phi Z}Y)+\alpha \eta(Z)g(X,\phi Y).
\end{equation}
From (\ref{3.5}) and (\ref{3.12}), we obtain
\begin{equation}\label{3.13}
g(h^T(X,Y),Z)=-(Z\mu)g(X,Y).
\end{equation}
Using (\ref{2.22}) in (\ref{3.13}), we get
\begin{eqnarray}
h^T(X,Y) &=& -(\nabla\mu)g(X,Y),
\end{eqnarray}
where $\nabla\mu$ is the gradient of the function $\mu$ and therefore $N_T$ is totally umbilical in $M$ with
mean curvature $-\nabla\mu$. Moreover, the condition $(Y\mu)=0$, for any $Y\in \mathcal{D}$ implies that the leaves
of $\mathcal{D}$ are extrinsic spheres in $M$, i.e., the integral manifold $N_T$ of $\mathcal{D}$ is umbilical and its mean curvature
vector field is non-zero and parallel along $N_T$. Hence by Hiepko's theorem $M$ is locally a warped product $N_\bot\times_fN_T$,
where $N_T$ and $N_\bot$ denote the integral manifolds of the distributions $\mathcal{D}$ and $\mathcal{D}^\bot$ respectively and $f$ is the warping
function. Thus the theorem is proved completely.
\end{proof}
\section{Characterization for warped product pseudo-slant submanifolds}
Recently Hui et al. \cite{HAP} studied warped product pseudo-slant submanifolds of $(LCS)_n$-manifolds. In this section we
obtain a characterization for a submanifold $M$ of $\bar{M}$ to be a warped product pseudo-slant submanifold of the form
$N_\theta\times_fN_\bot$, where $N_\theta$ is a slant submanifold tangent to $\xi$ and
$N_\bot$ is an anti-invariant submanifold of $\bar{M}$.
\begin{lemma}
Let $M$ be a proper pseudo-slant submanifold of a $(LCS)_n$-manifold $\bar{M}$ with anti-invariant and proper slant distributions
$\mathcal{D}^\bot$ and $\mathcal{D}^\theta$, respectively such that $\xi\in\Gamma(\mathcal{D}^\theta)$. Then
\begin{equation}\label{4.1}
g(\nabla_XY,Z)=\sec^2\theta[g(h(X,PY),\phi Z)+g(h(X,Z),QPY)],
\end{equation}
for any $X,\ Y\in \Gamma(\mathcal{D}^\theta)$ and $Z\in \Gamma(\mathcal{D}^\bot)$.
\end{lemma}
\begin{proof}
For any $X,\ Y\in\Gamma(\mathcal{D}^\theta)$ and $Z\in\Gamma(\mathcal{D}^\bot)$, we have
\begin{eqnarray*}
g(\nabla_XY,Z) &=& g(\phi \bar{\nabla}_XY,\phi Z) \\
&=&g(\bar{\nabla}_X\phi Y,\phi Z)-g((\bar{\nabla}_X\phi)Y,\phi Z) \\
&=&g(\bar{\nabla}_XPY,\phi Z)+g(\bar{\nabla}_XQY,\phi Z) \\
&=&g(h(X,PY),\phi Z)+g(\bar{\nabla}_X\phi QY,Z)-g((\bar{\nabla}_X\phi)QY,Z) \\
&=&g(h(X,PY),\phi Z)+g(\bar{\nabla}_XbQY,Z)+g(\bar{\nabla}_XcQY,Z).
\end{eqnarray*}
Using (\ref{2.21}) in the above relation, we get
\begin{eqnarray*}
g(\nabla_XY,Z) &=&g(h(X,PY),\phi Z)+\sin^2\theta g(\bar{\nabla}_XY,Z)-g(\bar{\nabla}_XQPY,Z),
\end{eqnarray*}
from which (\ref{4.1}) follows.
\end{proof}
\begin{corollary}
Let $M$ be a proper pseudo-slant submanifold of a $(LCS)_n$-manifold $\bar{M}$ with anti-invariant and proper slant distributions
$\mathcal{D}^\bot$ and $\mathcal{D}^\theta$, respectively such that $\xi\in\Gamma(\mathcal{D}^\theta)$. Then the distribution $\mathcal{D}^\theta$
defines a totally geodesic foliation if and only if
\begin{equation*}
g(h(X,PY),\phi Z)+g(h(X,Z),QPY)=0
\end{equation*}
for every $X,\ Y\in \Gamma(\mathcal{D}^\theta)$ and $Z\in \Gamma(\mathcal{D}^\bot)$.
\end{corollary}
\begin{lemma}
Let $M=N_\theta\times_fN_\bot$ be a warped product submanifold of a $(LCS)_n$-manifold $\bar{M}$, where $N_\bot$ and $N_\theta$ are anti-invariant
and proper slant submanifolds of $\bar{M}$, respectively, such that $\xi\in \Gamma(TN_\theta)$. Then
\begin{eqnarray}
\label{4.2} g(h(X,Y),\phi Z)+g(h(X,Z),QY) &=&0, \\
\label{4.3} g(h(Z,W),QX)+g(h(X,Z),QW) &=& (\phi X\ln f)g(Z,W), \\
\label{4.4}\ \ \ g(h(Z,W),QPX)+g(h(PX,Z),QW) &=& \cos^2\theta[(X\ln f)+\alpha \eta(X)]g(Z,W).
\end{eqnarray}
\end{lemma}
\begin{proof}
For any $X,\ Y\in \Gamma(TN_\theta)$ and $Z\in \Gamma(TN_\bot)$, we have
\begin{eqnarray*}
g(h(X,Y),\phi Z) &=& g(\bar{\nabla}_XY,\phi Z) \\
&=&g(\phi \bar{\nabla}_XY,Z) \\
&=&g(\bar{\nabla}_X\phi Y,Z) \\
&=&g(\bar{\nabla}_XPY,Z)+g(\bar{\nabla}_XQY,Z).
\end{eqnarray*}
Using Proposition 2.1 in the above relation, we get
\begin{equation}\label{4.5}
g(h(X,Y),\phi Z)=(X\ln f)g(Z,PY)-g(h(X,Z),QY).
\end{equation}
Thus (\ref{4.2}) follows from (\ref{4.5}).
Also, for any $X\in\Gamma(TN_\theta)$ and $Z,\ W\in \Gamma(TN_\bot)$, we have
\begin{eqnarray*}
g(h(Z,W),QX)&=&g(\bar{\nabla}_ZW,\phi X)-g(\bar{\nabla}_ZW,PX) \\
&=&g(\bar{\nabla}_Z\phi W,X)-g((\bar{\nabla}_Z\phi)W,X)-g(\bar{\nabla}_ZW,PX) \\
&=&-g(h(X,Z),\phi W)+g(W,\bar{\nabla}_ZPX).
\end{eqnarray*}
Using Proposition 2.1 in the above relation, we get (\ref{4.3}).
Replacing $X$ by $PX$ in (\ref{4.3}), we get (\ref{4.4}).
\end{proof}
Now, we prove the following characterization theorem for warped product pseudo-slant submanifolds.
\begin{theorem}
Let $M$ be a proper pseudo-slant submanifold of a $(LCS)_n$-manifold $\bar{M}$ with anti-invariant distribution $\mathcal{D}^\bot$
and proper slant distribution $\mathcal{D}^\theta$ such that $\xi\in \Gamma(\mathcal{D}^\theta)$. Then $M$ is locally
a mixed-geodesic warped product submanifold of the form $N_\theta\times_fN_\bot$ if and only if
\begin{equation}\label{4.6}
A_{\phi Z}X=0\ \ \text{and}\ \ A_{QPX}Z=\cos^2\theta[(X\mu)+\alpha \eta(X)]Z,
\end{equation}
for any $X\in \Gamma(\mathcal{D}^\theta),\ Z\in \Gamma(\mathcal{D}^\bot)$ and for some function $\mu$ on $M$ satisfying $(Z\mu)=0$, for
any $Z\in \Gamma(\mathcal{D}^\bot)$.
\end{theorem}
\begin{proof}
Let $M=N_\theta\times_fN_\bot$ be a mixed geodesic warped product submanifold of a $(LCS)_n$-manifold $\bar{M}$.
Then for any $X,\ Y\in \Gamma(\mathcal{D}^\theta)$ and $Z\in\Gamma(\mathcal{D}^\bot)$, from (\ref{4.2}) and (\ref{4.4}), we get (\ref{4.6}).\\
Conversely, let $M$ be a proper pseudo-slant submanifold of a $(LCS)_n$-manifold $\bar{M}$ such that (\ref{4.6}) holds.\\
Then for any $X,\ Y\in \Gamma(\mathcal{D}^\theta)$ and $Z\in \Gamma(\mathcal{D}^\bot)$, from (\ref{4.1}) and (\ref{4.6}), we get
$g(\nabla_XY,Z)=0$ and hence the leaves of $\mathcal{D}^\theta$ are totally geodesic in $M$.\\ Also, for any $X\in\Gamma(\mathcal{D}^\theta)$ and
$Z,\ W\in\Gamma(\mathcal{D}^\bot)$, we have
\begin{eqnarray*}
g([Z,W],X) &=& g(\bar{\nabla}_ZW,X)-g(\bar{\nabla}_WZ,X) \\
&=& g(\phi\bar{\nabla}_ZW,\phi X)-g(\phi\bar{\nabla}_WZ,\phi X) \\
&=& g(\bar{\nabla}_Z\phi W,\phi X)-g(\bar{\nabla}_W\phi Z,\phi X) \\
&=& -g(\phi W,\bar{\nabla}_Z\phi X)+g(\phi Z,\bar{\nabla}_W\phi X) \\
&=& -g(\phi W,\bar{\nabla}_ZPX)-g(\phi W,\bar{\nabla}_ZQX)+g(\phi Z,\bar{\nabla}_WPX)+g(\phi Z,\bar{\nabla}_WQX) \\
&=& -g(\phi W,h(Z,PX))-g(W,\bar{\nabla}_ZbQX)-g(W,\bar{\nabla}_ZcQX)\\
&&+g(\phi Z,h(W,PX))+g(Z,\bar{\nabla}_WbQX)+g(Z,\bar{\nabla}_WcQX).
\end{eqnarray*}
Using (\ref{2.21}) in the above relation, we get
\begin{eqnarray}
\label{4.7}g([Z,W],X) &=& -g(A_{\phi W}PX,Z)+g(A_{\phi Z}PX,W)+\sin^2\theta g([Z,W],X) \\
\nonumber&&+g(A_{QPX}Z,W)-g(A_{QPX}W,Z).
\end{eqnarray}
Using (\ref{4.6}) in (\ref{4.7}), we get
\begin{equation}\label{4.8}
\cos^2\theta g([Z,W],X)=0.
\end{equation}
Since $\mathcal{D}^\theta$ is a proper slant distribution, $\theta\neq 0,\frac{\pi}{2}$. Therefore $g([Z,W],X)=0$, and hence the anti-invariant distribution
$\mathcal{D}^\bot$ is integrable.\\
Now, let $h^\bot$ be the second fundamental form of a leaf $N_\bot$ of $\mathcal{D}^\bot$ in $M$. Then for any $Z,\ W\in \Gamma(\mathcal{D}^\bot)$ and $X\in \Gamma(\mathcal{D}^\theta)$, we have
\begin{eqnarray*}
g(h^\bot(Z,W),X) &=& g(\phi \bar{\nabla}_ZW,\phi X) \\
&=& g(\bar{\nabla}_Z\phi W,\phi X) \\
&=& g(\bar{\nabla}_Z\phi W,PX)+g(\bar{\nabla}_Z\phi W,QX) \\
&=& -g(A_{\phi W}Z,PX)-g(W,\bar{\nabla}_Z\phi QX) \\
&=& -g(A_{\phi W}PX,Z)-g(W,\bar{\nabla}_ZbQX)-g(W,\bar{\nabla}_ZcQX) \\
&=& -g(A_{\phi W}PX,Z)+\sin^2\theta g(\bar{\nabla}_ZW,X)-g(A_{QPX}W,Z).
\end{eqnarray*}
Therefore
\begin{equation}\label{4.9}
\cos^2\theta g(h^\bot(Z,W),X)=-g(A_{\phi W}PX,Z)-g(A_{QPX}W,Z).
\end{equation}
Using (\ref{4.6}) in (\ref{4.9}), we get
\begin{equation}\label{4.10}
\cos^2\theta g(h^\bot(Z,W),X)=-\cos^2\theta[(X\mu)+\alpha \eta(X)]g(Z,W).
\end{equation}
Thus, we get
\begin{equation*}
h^\bot(Z,W)=-[\overrightarrow{\nabla}^\bot\mu+\alpha\xi]g(Z,W),
\end{equation*}
where $\overrightarrow{\nabla}^\bot\mu$ is the gradient of the function $\mu$.\\ Therefore $N_\bot$ is totally umbilical in $M$ with
the mean curvature $H^\bot=-(\overrightarrow{\nabla}^\bot\mu+\alpha\xi)$.\\ Now, let $\mathcal{D}^N$ be the normal connection of $N_\bot$ in $M$. Then
for any $Y\in \Gamma(\mathcal{D}^\theta)$ and $Z\in \Gamma(\mathcal{D}^\bot)$, we have
\begin{equation*}
g(\mathcal{D}_Z^N(\overrightarrow{\nabla}^\bot\mu+\alpha\xi),Y)=g(\nabla_Z\overrightarrow{\nabla}^\bot\mu,Y)+\alpha g(\nabla_Z\xi,Y).
\end{equation*}
Also, from (\ref{2.6}) and (\ref{2.13}) we get $\nabla_Z\xi=0$.\\
Therefore, $g(\mathcal{D}_Z^N(\overrightarrow{\nabla}^\bot\mu+\alpha\xi),Y)=g(\nabla_Z\overrightarrow{\nabla}^\bot\mu,Y)=0$, since $(Z\mu)=0$ for
every $Z\in \Gamma(\mathcal{D}^\bot)$, and hence the mean curvature of $N_\bot$ is parallel.\\ Thus the leaves of the distribution $\mathcal{D}^\bot$
are totally umbilical in $M$ with non-vanishing parallel mean curvature vector $H^\bot$, i.e., $N_\bot$ is an extrinsic sphere in $M$. Therefore by
Theorem 2.1, $M$ is a warped product submanifold.
\end{proof}
\section{warped product bi-slant submanifolds of $(LCS)_n$-manifolds}
\begin{definition}
A submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be a bi-slant submanifold if there exists a pair of orthogonal distributions
$\mathcal{D}_1$ and $\mathcal{D}_2$ of $M$ such that\\
$(\textit{i}) TM = \mathcal{D}_1\oplus\mathcal{D}_2 $\\
$(\textit{ii})\phi \mathcal{D}_1\bot \mathcal{D}_2 \ \ \text{and} \ \ \phi\mathcal{D}_2\bot \mathcal{D}_1$\\
$(\textit{iii})\ \mathcal{D}_1, \mathcal{D}_2 $ are slant distributions with slant angles $\theta_1$ and $\theta_2$, respectively.
\end{definition}
If we assume $\theta_1=0$ and $\theta_2=\frac{\pi}{2}$, then $M$ is a CR-submanifold and if $\theta_1=0$ and $\theta_2\neq0,\frac{\pi}{2}$, then
$M$ is a semi-slant submanifold. Also, if $\theta_1=\frac{\pi}{2}$ and $\theta_2\neq0,\frac{\pi}{2}$, then $M$ is a pseudo-slant submanifold.
A bi-slant submanifold $M$ of a $(LCS)_n$-manifold $\bar{M}$ is said to be proper if the slant distributions $\mathcal{D}_1$ and
$\mathcal{D}_2$ are of slant angles $\theta_1,\ \theta_2\neq0,\frac{\pi}{2}$.\\
For a proper bi-slant submanifold $M$ of a $(LCS)_n$-manifold, the normal bundle of $M$ is decomposed as $$T^\bot M=Q\mathcal{D}_1\oplus Q\mathcal{D}_2\oplus\nu,$$
where $\nu$ is the invariant normal subbundle of $M$.\\
Now we will construct a bi-slant submanifold of a $(LCS)_n$-manifold.
\begin{example}
\rm{Consider the semi-Euclidean space ${\mathbb{R}}^{7}$ with the Cartesian coordinates $(x_1,y_1,\cdots,x_3, y_3,\,t)$ and paracontact structure
\begin{equation*}
\phi\left(\frac{\partial}{\partial x_i}\right)=\frac{\partial}{\partial y_i},\,\,\,\,
\phi\left(\frac{\partial}{\partial y_j}\right)=\frac{\partial}{\partial x_j},\,\,\phi\left(\frac{\partial}{\partial t}\right)=0,\,\,\,\,1\leq i, j\leq3.
\end{equation*}
It is clear that ${\mathbb{R}}^{7}$ is a Lorentzian manifold with the usual semi-Euclidean metric tensor.
For any $\theta_1, \theta_2\in[0,\frac{\pi}{2}]$ let $M$ be a submanifold of ${\mathbb{R}}^{7}$ defined by
\begin{equation*}
\chi(u, v, w, s,\,t)=(w+u\cos \theta_1,\,u\sin \theta_1,\,s+v\cos \theta_2,\,v\sin \theta_2,\,0,\,0,\,t).
\end{equation*}
Then the tangent space of $M$ is spanned by the following vectors
\begin{eqnarray*}
Z_1 &=& \cos \theta_1\frac{\partial}{\partial x_1}+\sin \theta_1\frac{\partial}{\partial x_2}, \\
Z_2 &=& \cos \theta_2\frac{\partial}{\partial y_1}+\sin \theta_2\frac{\partial}{\partial y_2}, \\
Z_3 &=& \frac{\partial}{\partial x_1}, \\
Z_4 &=& \frac{\partial}{\partial y_1},\\
Z_5 &=&\frac{\partial}{\partial t}.
\end{eqnarray*}
Then we have
\begin{eqnarray*}
\phi Z_1 &=& \cos \theta_1\frac{\partial}{\partial y_1}+\sin \theta_1\frac{\partial}{\partial y_2}, \\
\phi Z_2 &=& \cos \theta_2\frac{\partial}{\partial x_1}+\sin \theta_2\frac{\partial}{\partial x_2}, \\
\phi Z_3 &=& \frac{\partial}{\partial y_1}, \\
\phi Z_4 &=&\frac{\partial}{\partial x_1},\\
\phi Z_5 &=& 0.
\end{eqnarray*}
We take ${\mathcal{D}_1}={\rm{Span}}\{Z_1,\,Z_4\}$ and ${\mathcal{D}_2}={\rm{Span}}\{Z_2,\ Z_3\}$, then $g(Z_1,\phi Z_4)=\cos \theta_1$
and $g(Z_2,\phi Z_3)=\cos\theta_2$. Thus the distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ are slant with slant angles $\theta_1$ and
$\theta_2$ respectively and hence $M$ is a bi-slant submanifold.
}
\end{example}
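As a quick check (a sketch, assuming the coordinate vector fields $\frac{\partial}{\partial x_i},\frac{\partial}{\partial y_i}$ are orthonormal and spacelike with respect to the semi-Euclidean metric), the slant angle of $\mathcal{D}_1$ can be read off directly: since $g(\phi Z_1,Z_1)=0$, the tangential projection of $\phi Z_1$ onto $\mathcal{D}_1$ is $g(\phi Z_1,Z_4)Z_4$, so that
\begin{equation*}
\cos\angle(\phi Z_1,\mathcal{D}_1)=\frac{|g(\phi Z_1,Z_4)|}{\|\phi Z_1\|\,\|Z_4\|}=\cos\theta_1,
\end{equation*}
and an analogous computation with $Z_2$ and $Z_3$ gives the slant angle $\theta_2$ of $\mathcal{D}_2$.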
\begin{lemma}
Let $M$ be a proper bi-slant submanifold of a $(LCS)_n$-manifold $\bar{M}$ with the slant distributions $\mathcal{D}_1$ and $\mathcal{D}_2$
such that $\xi\in \Gamma(\mathcal{D}_1)$. Then
\begin{eqnarray}
\label{5.1}\cos^2\theta_2g(\nabla_{X_1}X_2,Y_2) &=& g(\nabla_{X_1}PX_2,PY_2)+g(h(X_1,PX_2),QY_2)\\
\nonumber&& +g(h(X_1,Y_2),QPX_2),
\end{eqnarray}
for any $X_1\in\Gamma(\mathcal{D}_1)$ and $X_2,\ Y_2\in \Gamma(\mathcal{D}_2)$, where $\theta_1$ and $\theta_2$ are the slant angles
of slant distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ respectively.
\end{lemma}
\begin{proof}
For any $X_1\in\Gamma(\mathcal{D}_1)$ and $X_2,\ Y_2\in \Gamma(\mathcal{D}_2)$, we have
\begin{eqnarray*}
g(\nabla_{X_1}X_2,Y_2) &=& g(\phi \bar{\nabla}_{X_1}X_2,\phi Y_2) \\
&=& g(\bar{\nabla}_{X_1}\phi X_2,\phi Y_2) \\
&=& g(\bar{\nabla}_{X_1}PX_2,PY_2)+g(\bar{\nabla}_{X_1}PX_2,QY_2)+g(\bar{\nabla}_{X_1}QX_2,\phi Y_2) \\
&=& g(\nabla_{X_1}PX_2,PY_2)+g(h(X_1,PX_2),QY_2)\\
&&+g(\bar{\nabla}_{X_1}bQX_2,Y_2)+g(\bar{\nabla}_{X_1}cQX_2,Y_2).
\end{eqnarray*}
Using (\ref{2.21}) in the above relation we get (\ref{5.1}).
\end{proof}
\begin{theorem}
There does not exist a proper warped product bi-slant submanifold $M =M_{1}\times_fM_2$ of $\bar{M}$ such that $\xi\in\Gamma(TM_1)$.
\end{theorem}
\begin{proof}
Let $M=M_{1}\times_fM_2$ be a proper warped product bi-slant submanifold of $\bar{M}$. Then for $X_1\in\Gamma(TM_1)$ and
$X_2,\ Y_2\in \Gamma(TM_2)$, we have
\begin{eqnarray*}
g(h(X_1,PX_2),QY_2) &=& g(\bar{\nabla}_{X_1}PX_2,\phi Y_2)+g(PX_2,\bar{\nabla}_{X_1}PY_2) \\
&=& \cos^2\theta_2g(\bar{\nabla}_{X_1}X_2,Y_2)-g(h(X_1,Y_2),QPX_2)+g(PX_2,\bar{\nabla}_{X_1}PY_2).
\end{eqnarray*}
Using Proposition 2.1 in the above relation, we get
\begin{equation}\label{5.2}
g(h(X_1,PX_2),QY_2)+g(h(X_1,Y_2),QPX_2)=2\cos^2\theta_2(X_1 \ln f)g(X_2,Y_2).
\end{equation}
Again using Proposition 2.1 in (\ref{5.1}), we get
\begin{equation}\label{5.3}
g(h(X_1,PX_2),QY_2)+g(h(X_1,Y_2),QPX_2)=0.
\end{equation}
From (\ref{5.2}) and (\ref{5.3}), we get
\begin{equation}\label{5.4}
\cos^2\theta_2(X_1\ln f)=0.
\end{equation}
Since $M$ is a proper warped product bi-slant submanifold, $\theta_2\neq\frac{\pi}{2}$. Therefore $(X_1\ln f)=0$ for every
$X_1\in\Gamma(TM_1)$, so the warping function $f$ is constant and hence such a proper warped product $M$ does not exist.
\end{proof}
\section{Introduction}
\label{intro}
HII Galaxies are compact dwarf starburst galaxies with strong and narrow emission lines superposed on a weak blue continuum. The optical spectra of HII Galaxies are indistinguishable from those of Giant HII regions in local galaxies \citep{SS1970}. It is now widely accepted that they are not bona fide young galaxies forming their first generation of stars, as thought in the past, since they all show a population of old stars.
\cite{Westera2004} used the spectra of some 100 HII galaxies to assess their stellar population content and history by deriving absorption line indexes (based on H$\delta$, H+K(Ca), Mg$_b$ and D4000) and comparing with stellar population models of SB99 \citep{sb99} and BC03 \citep{bc03}.
The main conclusion from that work is that, mostly, we can parametrize the star formation history (SFH) of HII galaxies with these three main stellar populations.
Optical \citep{Telles1997a,Telles1997b} and near-IR imaging \citep[and references therein]{Lagos2011} have also convincingly shown that these dwarf starburst galaxies possess underlying old populations. Simulations also show the episodic nature of star forming galaxies, particularly at low masses \citep[see e.g][]{Pelupessy2004, Debsarma2016}, with three episodes being the simplest choice in this scenario.
The morphologies of HII galaxies remain, as first described by \cite{loose1986,kunth1988}, a mixed bag. The general properties of HII galaxies and Blue Compact Galaxies (BCG) broadly overlap \citep{kunthostl2000}. They have irregular shapes, typically small physical sizes, and no signs of ordered structures such as disks. Their starburst regions, consisting of ensembles of massive ionizing clusters and their respective giant HII regions, cover most of the extension of their optical images. More luminous HII galaxies seem to show some evidence of tails, fuzz in their outermost isophotes, and more disturbed overall morphologies, whereas the lower luminosity ones seem more compact \citep{Telles1997a}. Deeper optical imaging \citep{Lagos2007} and near-IR imaging \citep{Lagos2011} reveal the clumpy nature of their starburst regions.
The various sub-classification attempts, such as cometary \citep{papa2008}, local tadpoles \citep{debra2012}, green-peas \citep{carda2009}, etc, all fall within the mixed bag of clumpy morphology with no fundamental differences in their intrinsic properties. In any case, due to their low mass, low oxygen abundance, low dust content, and low density environments, HII galaxies constitute the simplest starbursts at galactic scales.
With the advent of large surveys, particularly the Sloan Digital Sky Survey \citep[][SDSS]{york2000}, star forming galaxies all fall back into a uniform spectroscopic class and are viewed in a more general common perspective. Total stellar mass seems to be the main driver of the properties of the star forming galaxies at low redshift \citep{brinch2004, tremon2004}. However, the locus where emission line galaxies fall in the BPT diagram \citep{bpt1981} determines some important general properties as well, since the star forming sequence is also a sequence of increasing excitation with the decrease of stellar mass and metallicity \citep[and references therein]{curti2017}.
HII Galaxies are particularly interesting as cosmological probes over a surprisingly large range of redshifts extending to redshifts of z=3-4 with present-day instrumentation. Their emission-line luminosities, in particular the Balmer lines, correlate extremely well with the velocity-widths of the same emission lines \citep{ter81,mel88,telles93,bor09,bor11,chavez2014}. This so-called L-$\sigma$ relation can be calibrated as a distance indicator using Giant HII regions in local galaxies, and can thus be used to determine cosmological parameters \citep{mel2000,plionis11,ter15,chavez2016,Arenas2017}. Since the L-$\sigma$ method is independent from other methods, a cross comparison of results helps us better understand the systematic uncertainties in these different methods, most notably the SNIa.
While it seems clear that the L-$\sigma$ relation stems from the natural relation between the ionizing flux of a starburst and the mass of its ionizing cluster, the relation is empirical and thus suffers from considerable intrinsic scatter. In \cite{mel2017} we have explored ways of reducing the scatter, but stumbled against the difficulty of accurately measuring the ages of the young component. The traditional age indicator, the equivalent width of H$\beta$ (EW(H$\beta$)), is biased by contamination of the continuum by older underlying populations and therefore age corrections using EW(H$\beta$) tend to increase the scatter rather than reduce it.
In this paper we make use of multi-wavelength stellar population
analysis by using the method of fitting the spectral energy
distribution (SED) from the UV to MIR in order to describe the
simplest star formation history for HII galaxies that accommodates
their general properties. We wish to investigate how efficient HII
galaxies are in forming stars in the present burst as compared to
their past. We also retrieve the true distribution of EW(H$\beta$) for
the young stellar component of our sample by applying a correction
factor $f_r$ that accounts for the contamination of the continuum by
the old stellar population derived from our stellar population
analysis. This will help us further understand, and possibly reduce,
the systematic errors related to the use of the L-$\sigma$ as a
powerful indicator of cosmological distances. Sec.~\ref{data}
presents our data selection of extreme star forming galaxies from the Sloan
Digital Sky Survey. Sec.~\ref{cigale} describes our SED fitting model, model choices, and procedure. In
Sec.~\ref{results} we present our results, and conclusions are given
in Sec.~\ref{conclusions}.
\section{Data and general spectral properties}\label{data}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{sample1.pdf}
\caption{\small BPT diagram of the selected objects. The blue points are the whole sample of 67000 starburst galaxies. The red points are the resulting spectroscopic sample of $\sim$4200 galaxies with the criteria given in Table~\ref{crit}. HII galaxies lie below and to the left of the \cite{kauff2003} classification line (solid red line) that distinguishes AGNs from star forming galaxies.}
\label{one}
\end{figure}
Our data are selected from the SDSS DR13 release \citep{dr13}
cross-matched with the emissionLinePort table\footnote{The
emissionLinePort table (Portsmouth stellar kinematics and
emission-line flux measurements; Thomas et al. 2013) is based on
adaptations of the publicly available codes Penalized PiXel Fitting
(pPXF; Cappellari \& Emsellem 2004) and Gas and Absorption Line
Fitting (GANDALF v1.5; Sarzi et al. 2006) to calculate stellar
kinematics and to derive emission-line properties.}, and contains
galaxies classified as subclass STARBURST, which implies ${\rm EW(H}\alpha)>50$\AA. These criteria yield over 67000 galaxies. From these
we selected only those with ${\rm EW(H}\beta)>30$\AA\ and those whose
line ratios fall within the upper-left region of the canonical locus of
star-forming galaxies in the BPT diagram \citep{bpt1981,kewley2006}. These choices aim at selecting extreme star-forming
galaxies with high excitation, low abundances and low masses, typical
of bona fide HII galaxies. So, our selection criteria are
more restrictive and do not include more luminous star forming
galaxies. These criteria allow the inclusion of the lowest
metallicity objects which have slightly lower [OIII]/H$\beta$ ratios
due to their low ionic abundances \citep{izo2017}. In order to avoid
including local giant HII regions in nearby galaxies we also
restricted the sample to $z > 0.005$, resulting in $\sim$4200 SDSS objects. Figure~\ref{one} shows the selected spectroscopic sample. A
summary of these criteria is given in Table~\ref{crit}.
\begin{table}
\centering
\caption{Summary of the selection criteria of our spectroscopic sample, resulting in our SDSS sample of $\sim$4200 objects.}
\begin{tabular}{c} \hline
EW(H$\alpha$) > 50\AA\\
EW(H$\beta$) > 30\AA\\
$0 < \log(\rm{[OIII]/H}\beta) < 1.2$ \\
$-2.5 < \log(\rm{[NII]/H}\alpha) < -0.8$ \\
$ 0.005 < z < 0.4$ \\ \hline
\end{tabular}
\label{crit}
\end{table}
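For reproducibility, the cuts of Table~\ref{crit} amount to a simple boolean filter on the catalogue. The sketch below assumes the relevant quantities are available as columns of a table with hypothetical names (the column names and the exact line-flux columns are illustrative only):
\begin{verbatim}
import numpy as np

def select_hii_candidates(cat):
    """Apply the spectroscopic selection cuts to a catalogue.

    `cat` behaves like a dict of numpy arrays with (hypothetical)
    columns: ew_halpha, ew_hbeta, flux_oiii_5007, flux_hbeta,
    flux_nii_6584, flux_halpha, z.
    """
    o3hb = np.log10(cat["flux_oiii_5007"] / cat["flux_hbeta"])
    n2ha = np.log10(cat["flux_nii_6584"] / cat["flux_halpha"])
    return ((cat["ew_halpha"] > 50.0) &      # STARBURST subclass
            (cat["ew_hbeta"] > 30.0) &       # extreme emission-line objects
            (o3hb > 0.0) & (o3hb < 1.2) &    # high excitation
            (n2ha > -2.5) & (n2ha < -0.8) &  # star-forming, low metallicity
            (cat["z"] > 0.005) & (cat["z"] < 0.4))
\end{verbatim}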
For our final sample of HII galaxies for our multiwavelength analysis from FUV to MIR, we chose from the SDSS sample only targets with GALEX unique
photometry in both FUV (0.1528$\mu$m) \& NUV (0.2271$\mu$m) bands \citep[and references therein]{bianchi2014}. The resulting GALEX+SDSS sample
of HII galaxies consists of 2728 objects.
For this sample, we cross-matched targets with the UKIRT Infrared Deep Sky Survey \citep[UKIDSS]{ukidss} Y (1.036$\mu$m), J (1.250$\mu$m), H (1.644$\mu$m), K (2.149$\mu$m) near-IR bands, and with the Wide-Field Infrared Survey Explorer \citep[WISE]{wise} W1 (3.4$\mu$m), W2 (4.6$\mu$m), W3 (12$\mu$m), W4 (22$\mu$m) \mbox{mid-IR} bands.
\begin{table}
\centering
\caption{Number of objects that have the corresponding photometric data from the given surveys.}
\begin{tabular}{lc} \hline
GALEX+SDSS & 2728\\
GALEX+SDSS+WISE & 2447 \\
GALEX+SDSS+UKIDSS & 950 \\
GALEX+SDSS+UKIDSS+WISE & 863 \\
\hline
\end{tabular}
\label{photsamp}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{petro.pdf}
\caption{\small Distribution of the Petrosian radius in the SDSS r band of our photometric sample of 2728 HII galaxies. The median value of the distribution is only 2.8\arcsec.}
\label{petro}
\end{figure}
In the end, we have chosen not to use the W3 \& W4 bands since we are mostly concerned with the stellar population properties resulting from our analysis, and these are the only bands significantly affected by the dust emission component at longer wavelengths. Table~\ref{photsamp} shows the resulting photometric sample of HII galaxies. The most restrictive photometric band is the near-infrared UKIDSS, with NIR data for only one-third of our sample. In any case, in our analysis we use all available photometric data for each individual galaxy.
To minimize systematic effects we used Petrosian magnitudes except for GALEX for which we used model magnitudes. Our choice of the Petrosian magnitudes ensures that in all bands we measured the fluxes in the same way to include the same percentage of the total flux. In any case, our objects are compact (see Figure~\ref{petro}), thus aperture effects are minimized since we are probably including all the flux in all bands.
We also correlated with VISTA surveys but found no additional targets.
\subsection{Basic properties}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{fig3.pdf}
\caption{\small Spectroscopic properties of our sample of intense emission line objects. (left) Distribution of the Balmer decrement derived extinction parameter. (middle left) Distribution of the equivalent width of H$\beta$. (middle right) Distribution of H$\beta$ luminosity. (right) Distribution of the oxygen abundance derived from the direct method for objects with the auroral [O III]$\lambda$4363\AA\ line detected. This line is detected in virtually all our galaxies and allows for the determination of the electron temperature (see text).}
\label{sample}
\end{figure*}
Figure~\ref{sample} shows the spectroscopic characterization of our cross-matched GALEX+SDSS sample of 2728 HII galaxies. The Figure shows histograms of the logarithmic extinction correction factor C(H$\beta$), the observed equivalent width of H$\beta$, the derived H$\beta$ luminosity\footnote{Throughout this paper we assume H$_0$ = 71 \thinspace\thinspace\hbox{$\hbox{km}\thinspace\hbox{s}^{-1}$}\thinspace\hbox{Mpc}$^{-1}$.}, and the derived oxygen abundance (see below), respectively. A typical galaxy in our sample has low extinction (E(B-V) $\sim$ 0.1), intense emission lines (\mbox{EW(H$\beta$) $\sim$ 50 \AA}, $\log\rm{L(H}\beta)\sim40.5$ \thinspace\hbox{erg}\thinspace\hbox{s}$^{-1}$), and low metal abundance (\mbox{$12+\log({\rm O/H}) \sim 8.2$}, Z$\sim 1/3$ \thinspace\hbox{$\hbox{Z}_{\odot}$}).
\subsubsection{Extinction corrections}
Our first step was to correct the fluxes for foreground extinction
using the maps of \cite{Schlafly2011} as reported in the SDSS data
base and the extinction law for the Galaxy from
\cite{cardelli1989}. This is relevant because in these starburst
galaxies the foreground extinction is substantial, and the extinction
laws for the internal extinction are very different in the UV.
As a second step, the intrinsic internal reddening is then derived from the resulting H$\alpha$/H$\beta$ ratio (corrected for Galactic extinction) for each HII galaxy, using either a Calzetti \citep{cal2000} or a Gordon \citep{gordon2003} extinction law.
The Gordon extinction curve is that of the SMC bar, which is the steeper curve in the UV. Their comparative results seem to indicate that there is a trend for the extinction curves to be steeper in the UV for systems of higher gas-to-dust ratios. The starburst galaxies in our sample have typically sub-solar oxygen abundances implying a low dust content, and hence this ratio will be large for our sample galaxies, favoring a SMC-bar like extinction curve. This agrees with previous findings by \cite{gordon1997} who found that the steeper UV extinction curve seems to be associated with enhanced star formation regions such as that of 30 Dor in the LMC, whose extinction curve differs from that of the rest of the LMC.
We then measured the line intensities for H$\alpha$, H$\beta$, and H$\gamma$ by optimizing the placement of the continuum and carefully adjusting the integration box to include all the line fluxes.
Inspection of the spectra showed that in general the higher Balmer lines ($\lambda\leq {\rm H}\delta4101$\AA) were embedded in a stellar absorption feature that appears to be significantly broader than the emission lines. For this reason we did not include these lines in our analysis. Even H$\gamma$ is in most cases somewhat affected by absorption although, considering that the equivalent widths of the absorption features in synthetic spectra for starbursts are similar for H$\beta$ and H$\gamma$, we did not correct the line ratios for this effect.
In Figure~\ref{two} we plot the Balmer decrements divided by the theoretical (Case B) recombination values for $T_e=11400$K, appropriate for the mean temperature of our sample, F(H$\alpha$)/F(H$\beta$)=2.855 and F(H$\gamma$)/F(H$\beta$)=0.467 \citep{Osterbrock1989}. Thus, in this log-log plot an object with zero internal extinction would be located at (0,0), which is indicated by dashed lines.
The colored solid lines in the figures show the reddening vectors for a range of 1.4 magnitudes in $A_V$ for four popular extinction laws as indicated in the captions.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{balmer.pdf}
\caption{\small Balmer decrements for our sample relative to the theoretical Case B recombination values for $T_e=11400$K. The solid lines represent four different extinction laws as shown for a range of $\Delta A_V$ = 1.4 mag. Points in gray are for the whole sample, while points in blue correspond to objects with errors in both axes $< 0.015$. The black cross on the left bottom represents the mean error of the whole sample.}
\label{two}
\end{figure}
While for the full sample of objects (gray points) the best fitting
extinction law appears to be unconstrained by our measurements, when
we restrict that sample to objects with errors $<0.015$ (blue
crosses), the best fit appears to disfavor the law of \cite{cal2000}
which produces much less reddening per unit absorption (it has
$R_V=4.88$). Therefore, we have corrected our spectroscopic
observations using the Balmer Decrement with a Gordon extinction law
as
\begin{equation}
\log\left[\frac{{\cal F}(\lambda)_0}{{\cal F}({\rm H\beta})_0}\right] =
\log\left[\frac{{\cal F}(\lambda)_{obs}}{{\cal F}({\rm H\beta})_{obs}}\right] -
{\rm C(H\beta)}\times f_\lambda
\end{equation}
\noindent
where ${f_\lambda}$ is derived from the extinction law. The resulting logarithmic extinction correction factor C(H$\beta$) for our sample is shown in a histogram of Figure~\ref{sample}.
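As an illustration of how this correction is applied in practice, the following sketch uses the sign convention of the equation above, with $f_\lambda$ normalized so that $f_{\rm H\beta}=0$ and $f_{\rm H\alpha}$ taken from the adopted Gordon law (function and variable names are illustrative only):
\begin{verbatim}
import numpy as np

CASE_B_HA_HB = 2.855  # theoretical Halpha/Hbeta for T_e = 11400 K

def c_hbeta(ha_hb_obs, f_halpha):
    """C(Hbeta) from the observed Balmer decrement, obtained by
    evaluating the correction equation at lambda = Halpha."""
    return (np.log10(ha_hb_obs) - np.log10(CASE_B_HA_HB)) / f_halpha

def deredden_ratio(ratio_obs, f_lambda, c_hb):
    """Line ratio relative to Hbeta corrected for internal extinction,
    following log(R_0) = log(R_obs) - C(Hbeta) * f_lambda."""
    return ratio_obs * 10.0 ** (-c_hb * f_lambda)
\end{verbatim}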
\subsubsection{Oxygen abundances}
We have determined the physical conditions from the emission line spectra by using the direct method as described by the prescriptions of \cite{pagel1992} and \cite{izotov2006}, since we were able to detect and measure the electron temperature sensitive emission line of [O III]$\lambda$4363 in virtually all of our spectra (see Figure~\ref{sample}, right panel). Electron densities were estimated by the ratio of the [SII] lines. This [SII]$\lambda$6717/[SII]$\lambda$6731 ratio has a strong peak at 1.3 for our sample, which implies that all HII regions are in the low density regime. Thus, we adopt the reasonable assumption of a constant n$_e$ of 100 cm$^{-3}$.
The ionic and total abundances determined by both prescriptions agree very well, with a small offset of less than 0.1 towards higher abundances for the Izotov prescription, which we adopt here since it uses more recently updated atomic data. The prescription is also better suited for sub-solar abundances. While the low abundance tail in the distribution is expected to be real and accurate, the few objects with super-solar abundance have larger uncertainties, due to the lower S/N ratio of the [OIII]$\lambda$4363 line in these cases.
\section{Modelling with CIGALE}\label{cigale}
\begin{table}
\caption{The CIGALE Module parameters and their ranges for SED modelling.}
\smallskip
\centering
\begin{tabular}{l c }
\hline
\noalign{\smallskip}
Parameter & Value \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\hline
\noalign{\smallskip}
SFH & 3 bursts \\
Age$_{young}$ [Myr]& 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 \\
Duration$_{young}$ [Myr]& 1, 2, 3, 4, 5 \\
Age$_{int}$ [Myr]& 100, 500, 1000 \\
Duration$_{int}$ [Myr]& 10,100 \\
Age$_{old}$ [Myr]& 10000 \\
Duration$_{old}$ [Myr]& 100, 500 \\
\hline
Stellar Population Models & BC03 (1) \\
IMF & Chabrier\\
Metallicity & 0.008\\
\hline
Nebular emission & \\
Ionization Parameter & logU = -2.0 \\
LyC escape & 0.0 \\
LyC absorbed & 0.0 \\
\hline
Dust attenuation & Calzetti (2) \\
E(B-V)$_{young}$ & 0., 0.05, 0.1, 0.15, 0.2, 0.3, 0.5 \\
E(B-V)[Young/Old] & 0.44, 1 \\
\hline
Dust template & dl2014 (3)\\
Mass fraction of PAH & 0.47, 1.12 \\
Minimum radiation field & 0.1 \\
IR power-law slope ($\alpha$) & 2.0 \\
\hline
AGN template & NONE \\
\hline
Radio & NONE \\ \hline
Number of models run per & \\
redshift bin ($\Delta$(z) = 0.01): & 621600\\
\noalign{\smallskip}
\hline
\end{tabular}
\label{cig:mod}
\tablebib{
(1)~\citet{bc03}; (2) \citet{cal2000} \& \citet{leit2002}; (3) updated models of \citet{dl2007}.
}
\end{table}
\begin{table}
\centering
\caption{Models: Input Data Set and Extinction Choices.}
\begin{tabular}{lll} \hline
Model & Dust emission & Extinction \\ \hline
0 & No & free fit \\
1 & dl2014 & free fit \\
2 & dl2014 & fixed H$\alpha$/H$\beta$ (Gordon $C=1.00$)\\
3 & dl2014 & fixed H$\alpha$/H$\beta$ (Calzetti $C=0.44$)\\
4 & No & fixed H$\alpha$/H$\beta$ (Gordon $C=1.00$)\\
5 & No & fixed H$\alpha$/H$\beta$ (Calzetti $C=0.44$)\\ \hline
\end{tabular}
\label{models}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{example_mod0.pdf}
\includegraphics[width=0.49\textwidth]{example_mod1.pdf}
\caption{\small Comparing the free fit without dust emission (left panel, MODEL 0) vs. with dust emission (right panel, MODEL 1). The green points are the observed photometry from UV to mid-IR. Only the first two WISE data points are shown and used in the fits. The solid lines are the modeled components: young stellar population (cyan), intermediate+old stellar population (orange), nebular continuum (magenta), and, for MODEL 1 (right panel), the dust emission (red). The red points represent the model fit results for each photometric band. The inset lists the respective ages and derived masses of the stellar populations, as well as the observed equivalent width of H$\beta$ (WH$\beta$) and the equivalent width of H$\beta$ corrected for the young stellar component only (W$_y$(H$\beta$); see text).}
\label{mol0mol1}
\end{figure*}
CIGALE\footnote{CIGALE software and documentation are available at: http://cigale.lam.fr/} (Code Investigating GALaxy Emission) is a package for SED modelling as well as for SED fitting. The code has been developed by Denis Burgarella and M\'ed\'eric Boquien at Laboratoire d’Astrophysique de Marseille. Some applications and descriptions of CIGALE modelling of galaxy properties can be found in \cite{noll2009,boquien2014,boquien2016,ciesla2015,ciesla2016,vika2017}.
CIGALE has a modular structure which allows great flexibility in {\it modelling } the various physical components and their possible parameters that contribute to produce the theoretical SED of galaxies. These components and the set of parameters used in our study (a subset of all possibilities) are given in Table~\ref{cig:mod}. Once the predicted theoretical SED are modelled, CIGALE is used for {\it fitting} the modelled SED to the observed SED. Best fit results, probed by the output $\chi^2$ , can be evaluated to provide the possible and most probable set of energy sources and their respective parameters that best represent the observed SED.
CIGALE allows for a number of star formation histories (SFH) such as exponentially declined, delayed or periodic (see CIGALE documentation$^3$). Our choice of the SFH consists of 3 episodes of star formation: a young ionizing population ($< 10$ Myr), an intermediate age population of (100-500 Myr), and an old stellar population (10 Gyr).
In CIGALE, this particular SFH module was not implemented by default, but was
developed by M. Boquien to fit our purpose. Once we chose the SFH, we also made a choice of the evolutionary stellar population synthesis models used to produce our Simple Stellar Populations (SSP). We used the models of \cite{bc03} for a Chabrier Initial Mass Function (IMF) and a fixed metallicity of Z=0.008.
The choice of metallicity is justified by the fact that we have derived, from the optical spectra, their low metal content with a mean of 12+log(O/H) = 8.2, as shown in our Figure~\ref{sample}. These are typical values for HII galaxies where the electron temperature sensitive line [OIII]$\lambda$4363 is detected \citep{Kehrig2004,Ter1991}. The oxygen abundances are expected to be low for our sample of high excitation HII galaxies, since they were selected to fall in the upper left locus of the star forming sequence in the BPT diagram, which is also a metallicity sequence \citep[see e.g.][]{curti2017}.
Nebular emission from the ultraviolet to the near-infrared was computed by the module Nebular, which includes both emission lines and the nebular continuum. Here, for the computation of the nebular emission, we considered that the escape fraction and the absorption by dust of the Lyman continuum were both zero. We may evaluate {\it a posteriori} whether these choices were appropriate.
In all CIGALE runs we have not included a possible AGN
emission component. Our targets are selected to be star forming
galaxies in the BPT diagram. It is true that AGN at low mass and low
metallicity may exist and may fall in the star forming region of the
BPT \citep{stasinka2006}, but their frequency in our sample is
expected to be low, if any at all.
Our choices of models differ simply in the way we considered the dust attenuation and in the inclusion (or not) of dust emission in order to fit the whole spectral range from the FUV to the W2 band. Table~\ref{models} shows these six runs of CIGALE. Column 1 is the model identification, and column 2 indicates whether or not a dust emission component was included in the run; models 1, 2 and 3, marked ``dl2014'', use the dust emission model of \cite{dl2014}. The inclusion of dust emission does not affect the stellar population results significantly since dust emission becomes important only at $\lambda > 15\,\mu$m.
The comparison between model 0 (free attenuation and no dust emission)
with model 1 (free attenuation with dust emission) allows us to
evaluate how much the inclusion of the dust emission interferes on the
full fit.
Figure~\ref{mol0mol1} shows a typical example of this case. On the left panel the best fit for model 0 and on the right panel the best fit for model 1.
We can note that model 1 has a smaller $\chi^2$, which shows it is a better fit to the full wavelength range, including the WISE W1 and W2 bands. In general, the routine tries to compensate for the absence of the dust emission component in model 0 by over-estimating the mass of the old stellar population. In this particular case the PAH 3.3$\mu$m emission also contributes to the flux in the W1 band. We conclude, then, that this dust emission component is necessary for any good fit in the mid-infrared bands, and contributes to a better fit in the near-infrared UKIDSS bands, as shown in this figure.
By applying analogous comparisons with the other models for which we had a choice of including or not a dust emission component, with all other parameters being the same, such as in the case of model 2 vs. model 4 and model 3 vs. model 5 (see Table~\ref{models}), we reach a similar conclusion, namely that models with dust emission are a better fit to the near-infrared data. From this analysis we have therefore favored the models where dust emission is included, and we will discard models 0, 4 and 5 in the analysis that follows.
Now, we have to compare models where the attenuation corrections were left as a free parameter (model 1) to the models where the attenuation corrections were applied prior to the SED fitting procedure (model 2 vs. model 3, see 3rd column of Table~\ref{models}). A choice of Gordon extinction law was used in all SED fitting runs.
As mentioned above, all data have previously been corrected for foreground extinction due to our Galaxy. CIGALE outputs the best-fitted extinction parameter $C$ ($C=\frac{E(B-V)_{star}}{E(B-V)_{gas}}$), which represents the ratio of the attenuation in the stellar continuum to the extinction in the nebular emission, and can be either the Calzetti differential extinction ($C=0.44$) or an equal extinction in the two components ($C=1$). Note, however, that CIGALE uses this parameter as the ratio of the attenuation in the stellar continuum of the {\it old} stellar population to that of the {\it young} stellar population, not to the nebular emission. Massive young stars are embedded in the ionized HII regions, but the distributions of the young population, the older stellar population and the dust may be related; it is not clear, however, how these relations behave. We therefore prefer to constrain our model with the information from the nebular emission, fixing the total amount of extinction from the spectral information. In addition, a comparison of the resulting $\chi^2$ for the different models reveals only marginal differences, with model 2 having a marginally better $\chi^2$ distribution than model 1. For these two reasons we have chosen to keep model 2 and discard model 1.
For the two remaining models (2, 3) we pre-corrected the internal extinction using the Balmer decrement (H$\alpha$/H$\beta$) with the assumption of a Gordon extinction law with $C=1$ (model 2) or a Calzetti extinction law with $C=0.44$ (model 3), as mentioned in Section~\ref{data}.
For these models we expect that CIGALE will output simply a residual extinction, since data were previously corrected for total extinction. The smaller this residual, then, will indicate a better assumption of the attenuation law and of $C$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fig6.pdf}
\caption{\small Comparison of models 2 and 3 in terms of the residual of the best-fitted extinction parameter $C$ ($C=\frac{E(B-V)_{star}}{E(B-V)_{gas}}$). Model 2 has been pre-corrected for a Gordon law with C=1.00, and model 3 has been pre-corrected for a Calzetti law with C=0.44. Ideally, a perfect pre-correction would result in zero residuals, which is not quite the case; model 2 (green) is nevertheless preferred because it has relatively more zero residuals than model 3 (blue).}
\label{C_comp}
\end{figure}
Figure~\ref{C_comp} shows a comparison of the output results of CIGALE for $C$ for models 2 (green histogram) and 3 (blue histogram). This represents the residuals of attenuation that CIGALE still manages to fit to find the best result. Model 2 has more zeros than model 3 and also more zeros than other residuals (either 0.44 or 1.0). This is not the case for model 3.
This is an indication that the best prior extinction correction is that of model 2, i.e., a Gordon attenuation law with no differential attenuation between gas and stars ($C=1$).
In summary, we have chosen model 2 as our best general model for the SED fitting procedure. Our results will then be derived from the SED fitting using model 2 only (see Table~\ref{models}). In addition, only best fits with $\chi^2 < 3$ will be considered for further analysis.
\section{Results and Discussion}\label{results}
As described in the previous Section, our results are based on CIGALE model \#2 (see Table~\ref{models}), which includes dust emission for a better fit to the near-IR and WISE bands. The photometry has been corrected for foreground (with a Galactic extinction law) and internal extinction (with a Gordon extinction law and E(B-V)$_{gas}$ = E(B-V)$_{star}$). CIGALE best-fit output parameters are the masses and ages (of the oldest stars) of the stellar components, the residual extinctions, the H$\beta$ equivalent width correction factor ($f_r$), and the best-fit $\chi^2$.
The reliability of the output parameters of CIGALE has been tested in previous studies with a method by \cite{giovannoli2011} \citep[see also][]{buat2011}, which consists of creating a mock catalogue of fluxes for each galaxy derived from a set of output parameters. After the addition of random noise, CIGALE is run a second time and the new results are compared with the input ones. \cite{vika2017} also applied this test to their analysis of Spitzer/IRS galaxies and showed that stellar masses, SFR, and luminosities are rather well constrained, whereas the age of the oldest stars is not. We have applied the same test here. Figure~\ref{mock} shows the comparison for the most relevant parameters of our present study: masses (left panel) and young stellar age (right panel). We find that the stellar masses are very well constrained and reproduced. In the right panel we show the comparison of the true vs. the estimated values of the age of the young stellar component only. The age of the intermediate stellar component shows a similar spread (not shown here). As a reminder, the age of the old stellar component is kept fixed at 10 Gyr. Note that the dynamic range in the plot for ages (right panel) is much smaller than in the plot for the masses (left panel), so the points look more spread out; in fact, the young stellar ages also seem rather well constrained and reproduced.
All the results from CIGALE, along with the derived spectroscopic properties from the SDSS spectra, are available as FITS and ASCII tables at our website\footnote{http://staff.on.br/etelles/SED.html}. The most important output parameters measured by CIGALE, and used in our study are the stellar masses and ages of the young and old populations, and the output best model SED with $\chi^2$, stellar, nebular and dust continua emission and respective attenuations. Plots (JPG images) of the resulting SED best-fit for individual objects (as in Figure~\ref{mol0mol1}) are also made available, as well as the best fit model SED FITS tables.
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth]{mock_mass.pdf}
\includegraphics[width=0.49\columnwidth]{mock_age.pdf}
\caption{\small Comparison between the input true parameters and the output estimated parameters from the mock catalogue created by CIGALE. Only the two main parameters of immediate interest are shown (mass and age). (left) Stellar masses: blue points are the masses of the young stellar component; red points are the masses of the old stellar component (in our case Mass$_{old}$ + Mass$_{intermediate}$). (right) Young stellar age. The solid line shows the 1:1 relation in both panels.}
\label{mock}
\end{figure}
\subsection{The Equivalent Width Correction (the $f_r$ factor)} \label{fr}
The observed equivalent width of H$\beta$, \rm EW(H$\beta$), has been shown to be an indication of the age of the burst \citep{dottori1981} for young stellar systems.
From a sample of HII galaxies, \cite{terl2004} showed that the observed distribution of \rm EW(H$\beta$)\ cannot be reproduced if the evolution of the starburst is represented by a simple stellar population (SSP) predicted by evolutionary population synthesis models such as Starburst 99 \citep{sb99}. The very high \rm EW(H$\beta$){$_{young}$} predicted for a very young stellar cluster is never observed, indicating that the observed \rm EW(H$\beta$){$_{obs}$} is actually a measure of the intensity of the H$\beta$ emission line (F(H$\beta$)) produced by the young burst relative to the total continuum, which includes the continuum emission of the young massive star cluster (C$_{young}$) plus the continuum emission produced by the previous episodes of star formation (C$_{old}$). Hence,
\begin{equation}
{\rm EW(H}\beta)_{obs} = \frac{\rm F(H\beta)}{C_{young} + C_{old}}
\end{equation}
We have thus defined the fraction $f_r$ as the ratio $f_r = \frac{C_{old}}{C_{young}}$, so that
\begin{equation}
{\rm EW(H}\beta)_{obs} = \frac{\rm EW(H\beta)_{young}}{1 + f_r}
\label{eq:fr}
\end{equation}
Figure~\ref{ewb_hist} (left panel) shows the derived equivalent width correction factor $(1 +f_r)$ from equation~\ref{eq:fr}. The resulting equivalent width distributions are given in Figure~\ref{ewb_hist} (right panel). The corrected ${\rm EW(H}\beta)_{young}$ (blue histogram) is then the true equivalent width to be assigned to determine the burst ages, using equation~\ref{eq:fr} where $f_r$ is derived from the SED fitting. The median value of the equivalent width correction factor from equation~\ref{eq:fr} is $1 +f_r=2.0$.
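In terms of the quantities returned by the SED fit, the correction is straightforward; the sketch below assumes the continuum values are those of the best-fit young and old components evaluated at H$\beta$:
\begin{verbatim}
def ew_hbeta_young(flux_hbeta, cont_young, cont_old):
    """EW(Hbeta) attributed to the young burst only.

    flux_hbeta : emission-line flux F(Hbeta)
    cont_young : continuum of the young population at Hbeta (C_young)
    cont_old   : continuum of the older populations at Hbeta (C_old)
    """
    f_r = cont_old / cont_young                    # contamination fraction
    ew_obs = flux_hbeta / (cont_young + cont_old)  # observed EW, as above
    return ew_obs * (1.0 + f_r)                    # equals F(Hbeta)/C_young
\end{verbatim}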
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fig8.pdf}
\caption{(left) Distribution of the ${\rm EW(H}\beta)$ correction factor $(1+f_r)$. (right) Distributions of the equivalent width of H$\beta$. The red histogram shows the observed values from the SDSS spectra. The blue histogram shows the EW(H$\beta$) corrected for the contribution of an underlying old stellar population using the results from the SED fitting, as described in the text.}
\label{ewb_hist}
\end{figure}
\subsection{The relation between luminosity and young stellar mass}
It is notoriously difficult to estimate the uncertainties of the parameters resulting from population synthesis models.
Statistical errors in CIGALE are estimated through Bayesian probability distribution functions and are given for each output parameter. Typical errors for the masses in our work are 20\%--30\%, and the SFR errors are estimated to be 30\%--40\%.
A straightforward way of verifying our results is to compare the mass of young stars ($M_{young}$) from CIGALE with the observed H$\beta$\ luminosities, L(H$\beta$). This comparison is presented in Figure~\ref{ML1}, which shows an excellent correlation between these parameters. Notice, however, that since mass and luminosity depend on the square of the distance, the slope of log-log plots such as this is expected to be close to unity even when the objects span a relatively small range of distances, so the interesting information is in the scatter and the zero point, but not necessarily in the slope.
The ionizing fluxes of single-age (simple) starbursts depend mostly on two parameters: the age and the mass of the ionizing stars, and for these objects the equivalent width of the Balmer lines provides a robust age indicator (\citealt{sb99}, henceforth SB99). Thus, we expect the scatter in the relation between $M_{young}$\ and L(H$\beta$)~\ to be correlated with \rm EW(H$\beta$). Figure~\ref{ML1} shows that this is indeed the case. Objects with \rm EW(H$\beta$)\ lower than average lie predominantly below the fit line, while objects with values above average are above the line. Notice that the ridge separating these two groups is tilted relative to the least-squares fit.
In the figure we used the equivalent widths corrected for contamination by old stars as described in Section~\ref{fr}, but the separation also occurs when the uncorrected \rm EW(H$\beta$)\ are used, albeit with larger scatter and more overlap between the two groups (cf. Table~\ref{fifi} below).
\begin{figure}[ht]
\centering
\vspace*{-0.1cm}
\includegraphics[width=0.45\textwidth]{master2.pdf}
\vspace*{-0.1cm}
\caption{\small Relation between young stellar mass from CIGALE ($M_{young}$) and the observed H$\beta$\ luminosity for our sample of 2234 HII Galaxies with
accurate SED fits ($\chi^2<3$). The sample was divided in two groups according to their equivalent widths, \rm EW(H$\beta$), corrected for contamination as described in Section~\ref{fr}.
Objects in red have \rm EW(H$\beta$)\ lower than the average of the sample (110\AA) while objects in blue have values above the average. The line shows a standard least-squares fit of slope very close to unity as indicated in the legend. Typical error in $M_{young}$ is $< 30\%$.}
\label{ML1}
\end{figure}
In \cite{mel2017} we showed how the SB99 models can be used to normalize the observed H$\beta$\ luminosities to a fiducial age. Here we have used a dense grid of SB99 models for the standard Geneva isochrones with a metallicity of Z=0.008 and a Kroupa IMF shown in Figure~\ref{SB99}. This allows us to directly interpolate the models to the observed \rm EW(H$\beta$)\ without recourse to fitting some analytic expression as done in \cite{mel2017} or \cite{Arenas2017}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{SB99_GVA008.pdf}
\vspace*{-0.1cm}
\caption{\small Relation between L(H$\beta$)~\ and \rm EW(H$\beta$)\ for a simple $10^6$M$_{\odot}$\ starburst of Kroupa IMF (from SB99). The age scale is shown at the top of the figure. }
\label{SB99}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{modi38a.pdf}
\vspace*{-0.1cm}
\caption{\small Relation between young stellar mass from CIGALE ($M_{young}$) and H$\beta$\ luminosity at a fiducial age of 3.8Myr.
As in Figure~\ref{ML1} the sample was split in two groups according to the corrected equivalent widths.
We show in red objects with less than the average of the sample (110\AA) and in blue objects with larger values. The coloured lines show the predictions of SB99 models for two different stellar models as indicated in the figure. The orange line corresponds to the Monte-Carlo sampling discussed in the text.}
\label{ML2}
\end{figure}
Thus, for each object in our sample we interpolate the SB99 models to retrieve the luminosity offset between the observed age and the fiducial age, for which we chose the mean age of the sample, and we scale the corrected luminosity to the actual mass ($M_{young}$) of the object. For the SB99 models that we are using here, the mean equivalent width of our sample corrected for contamination by old stars, $\langle EW(H\beta)\rangle=110.4$\AA, corresponds to a mean age of 3.8 Myr. Figure~\ref{ML2} shows the relation between young stellar mass and H$\beta$\ luminosity at a fiducial age of 3.8 Myr.
The scatter is significantly reduced while the stratification of luminosities as a function of \rm EW(H$\beta$)\ is gone. Interestingly, however, the figure shows a systematic trend of \rm EW(H$\beta$)\ with mass: massive objects tend to have lower equivalent widths. The figure also shows that simple SB99 models predict significantly larger luminosities than observed, and that the relation using corrected luminosities is steeper than in the uncorrected case (Figure~\ref{ML1}). We find, therefore, that single-age models provide the correct slopes ($d\log L/d{\rm EW}$) of the evolutionary corrections, but not the correct zero points.
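Schematically, the evolutionary correction applied to each object is an interpolation in the SB99 grid; in the sketch below \texttt{ew\_grid} and \texttt{logl\_grid} stand for the tabulated EW(H$\beta$) and $\log L({\rm H}\beta)$ of the $10^6\,$M$_{\odot}$ SB99 track (hypothetical array names):
\begin{verbatim}
import numpy as np

def luminosity_at_fiducial_ew(logl_obs, ew_obs, ew_grid, logl_grid,
                              ew_fiducial=110.4):
    """Shift an observed log L(Hbeta) to the fiducial age (EW = 110.4 A)
    using the shape of the SB99 track (the offset is mass independent)."""
    order = np.argsort(ew_grid)                   # np.interp needs increasing x
    ew_s, logl_s = ew_grid[order], logl_grid[order]
    offset = (np.interp(ew_fiducial, ew_s, logl_s)
              - np.interp(ew_obs, ew_s, logl_s))  # model evolution between ages
    return logl_obs + offset
\end{verbatim}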
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\textwidth]{fig10.pdf}
\caption{(left) Young stellar mass (blue) \& old stellar mass (red) histograms. (center) Burst strength ($f_{burst} = \frac{M_{young}}{M_{old}}$). (right) Birth rate parameter \citep[$b = \frac{SFR}{\langle SFR\rangle}$,][]{ken1983}, but in this case the SFRs are from the SED fitting results: $b_{burst} = \frac{M_{young}/Age_{young}}{M_{old}/Age_{old}}$. }
\label{b_hist}
\end{figure*}
Simple dynamical arguments indicate that very massive strictly coeval (simple) starburst clusters cannot exist. The characteristic time scales for the formation of such clusters would be too long compared to the main-sequence life-times of the ionizing stars. So it is reasonable to assume that the stars in massive starbursts span a range of ages that is significant relative to the ages of the ionizing stars. In fact, observations of nearby HII Galaxies show that these objects tend to be clumpy and have multiple emission-line profiles \citep{Lagos2011,mel2017}.
In order to test this multiplicity effect we performed simple Monte Carlo experiments where we split the young component of each galaxy with $M_{young}>3\times10^6 M_{\odot}$ into a set of clumps of masses $M_{cl}$ randomly sampled from a power-law mass distribution of slope $\alpha=-2$ in the range $3\times10^5<M_{cl}/M_{\odot}<3\times10^6$. We then assign to each clump a random age sampled from a Gaussian distribution centered at the mean age of the galaxy (from \rm EW(H$\beta$)$_{young}$) with a dispersion that is a function of total young stellar mass: $\sigma_{age}=3\times(M_{young}/15)^{0.2}$Myr, with $M_{young}$ in units of $10^6$M$_{\odot}$. This generates a 3D grid from which we can read the predicted luminosity for a given mass and equivalent width.
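A minimal sketch of one realization of this experiment, under the assumptions stated above (power-law index $\alpha=-2$, clump masses between $3\times10^5$ and $3\times10^6\,$M$_{\odot}$, and a Gaussian age spread $\sigma_{age}=3\,(M_{young}/15)^{0.2}$ Myr with $M_{young}$ in units of $10^6\,$M$_{\odot}$), could read:
\begin{verbatim}
import numpy as np

RNG = np.random.default_rng(0)
M_LO, M_HI = 3.0e5, 3.0e6            # clump mass range [Msun]

def sample_clump_mass():
    """Draw a mass from dN/dM ~ M^-2 on [M_LO, M_HI] (inverse CDF)."""
    u = RNG.uniform()
    return 1.0 / (1.0 / M_LO - u * (1.0 / M_LO - 1.0 / M_HI))

def split_into_clumps(m_young, mean_age_myr):
    """Split a young component (m_young in Msun) into clumps with a
    mass-dependent Gaussian age spread."""
    sigma_age = 3.0 * (m_young / 15.0e6) ** 0.2   # Myr
    clumps, total = [], 0.0
    while total < m_young:
        m = sample_clump_mass()
        age = max(RNG.normal(mean_age_myr, sigma_age), 0.0)
        clumps.append((m, age))
        total += m
    return clumps
\end{verbatim}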
The orange line in Figure~\ref{ML2} illustrates the predictions from our simple Monte-Carlo sampling. The prediction has the right slope, but is still offset by about 0.3 dex relative to the observations. It may be possible that more refined models could explain this offset, but a full sampling of parameter space is beyond the scope of this paper. For our immediate purposes, the important result is that simple SB99 models are adequate for estimating the evolutionary corrections to the H$\beta$\ luminosities of young starbursts.
In Table~\ref{fifi} we explore possible additional sources of systematic scatter in the L(H$\beta$)~-$M_{young}$\ relation. The evolutionary corrections parametrized by the raw equivalent widths is shown in the first line, and the corrected evolutionary corrections for contamination by old stars in the second line. The rest of the table explores the scatter of the age-corrected mass-luminosity relation discussed above (Fig.~\ref{ML2}).
We find a weak correlation with nebular excitation ([OIII]/[OII]), which is probably a residual from our evolutionary corrections. The decrease in scatter is misleading because, as shown in Figure~\ref{ML2}, when the luminosities are corrected for evolution using the SB99 models the scatter is rms = 0.22. Also, the 1965 galaxies with measured [OII]3727\AA\ tend to be those with the best S/N. We do not see any residual correlation with metallicity or [OIII]/H$\beta$. We conclude, therefore, that to a very good approximation the H$\beta$\ luminosities of HII Galaxies depend only on two parameters: the mass and the age of the starburst component. An immediate corollary of this conclusion is that the IMF of HII Galaxies is universal, at least for massive stars.
\begin{table}
\begin{threeparttable}
\tiny
\centering
\caption{Multi-parametric Fits.}
\tabcolsep 1.5mm
\tiny
\begin{tabular}{lcccc}
\hline\hline
& \multicolumn{4}{c}{ $ \rm log L(H\beta) = c_0 +c_{1}log M_{young} + c_2X$ } \\ \hline
\ \ \ \ \ \ $X$ & $c_0$ & $c_1$ & $c_2$ & rms \\ \hline
$\rm EW(H\beta)_{obs}$ & $33.34\pm0.060$ & $0.963\pm0.008$ & $6.245\pm0.226$ & 0.263 \\
$\rm EW(H\beta)_{corr}$ & $32.45\pm0.069$ & $1.091\pm0.009$ & $2.325\pm0.075$ & 0.255 \\
$\rm log [OIII]/[OII]^1$\ & $32.67\pm0.060$ &$1.108\pm0.008$ & $0.029\pm0.010$ & 0.213 \\
$\rm log [OIII]/H\beta^1$ & $32.40\pm0.007$ &$1.146\pm0.007$ & $0.005\pm0.035$ & 0.223 \\
$\rm 12+log(O/H)^1$ & $32.40\pm0.062$ &$1.147\pm0.008$ & $-0.001\pm0.003$ & 0.222 \\ \hline
\label{fifi}
\end{tabular}
\begin{tablenotes}
\tiny
\item $^1$Using L(H$\beta$)~\ corrected for evolution using the equivalent widths as in Fig.~\ref{ML2}.
\item To make the coefficients easier to read we scaled the equivalent widths by a factor of $10^3$.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsubsection{The most massive starbursts}
We remarked above that Figure~\ref{ML2} shows a clear systematic decrease of \rm EW(H$\beta$)\ with young stellar mass, in the sense that the most massive objects tend to have the lowest equivalent widths. The galaxies in our sample show a rather weak trend of metallicity with $M_{young}$\ for low-mass objects, while the most massive starbursts span the full range of metallicities. We were therefore puzzled by the fact that, even after correction for underlying older stellar populations, the most massive objects in our sample are still those with the lowest \rm EW(H$\beta$).
Visual inspection of the SDSS images revealed that most of these massive HII Galaxies show disturbed morphologies reminiscent of major mergers. Thus, the most luminous objects in our sample seem to be the low-mass equivalents of LIRGS and ULIRGS - the descendants of mergers of massive spiral galaxies.
It may be interesting to note in this context that the relation between age dispersion and mass ($\sigma_{age}\propto M_{young}^{0.2}$) from our Monte Carlo experiments is flatter than what we expect from the Virial theorem and the $L-\sigma$~\ relation, $\sigma_{age}\propto M_{young}^{0.25-0.4}$. This may be an indication that mergers, rather than monolithic collapse, control the age spread in the most massive objects. Rejuvenation of massive stars through binary interactions may also play a role, although clearly more elaborate models are required to address these issues properly.
\subsection{Stellar Masses and Star Formation}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{sfr_lhb.pdf}
\caption{SFR vs. LH$\beta$ calibration. The blue points show the present-day star formation rate derived from the SED fits, SFR$_{10My} = M_{young} / age_{young}$ (<age>$_{young}$ = 6.8 Myr).
The green line is the relation by \cite{ken98} for normal spiral galaxies. The purple hexagon is the giant HII region 30 Dor in the LMC \citep{doran2013,crowther2017}.}
\label{sfr_lhb}
\end{figure}
Figure~\ref{b_hist} shows histograms of indicators of the importance of the present-day star formation (SF) episode relative to the past history of star formation from our SED fitting. The left panel shows the histograms of the derived masses for the young stellar component (M$_{young} <$ 10 Myr) to be compared with the old stellar component (M$_{old}$ = M$_{int}$ + M$_{10Gy}$). Notice that in our three-burst CIGALE models we do not distinguish between the mass produced in the intermediate-age episode (M$_{int}$) and the mass of the first episode of star formation (M$_{10Gy}$), so we refer to these two components simply as M$_{old}$.
One can see that our galaxies have total masses of less than 10$^{10}$M$_{\odot}$, and typically 10$^{9}$M$_{\odot}$, putting them in the low mass tail of other studies of overall properties of star forming galaxies. This is of course due to our selection criteria as explained in Sec.~\ref{data}.
The middle panel shows the burst-strength parameter, defined as $f_{burst} = \frac{M_{young}}{M_{old}}$. It is clear from this histogram that the present episode of SF has contributed less than a few per cent to the total stellar mass production over the lifetime of these galaxies -- typically less than 2\%. Analogously, the right panel shows the $b$ parameter \citep{ken1983}, here defined as the ratio of the present-day SFR (SFR$_{10My}$) to the average past SFR (SFR$_{10Gy}$). The current episode is forming stars at high rates, typically more than $\sim$20 times the past average.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{mainsequence.pdf}
\includegraphics[width=0.49\textwidth]{sSFR.pdf}
\caption{(left) The main sequence of HII galaxies: current SFR, derived from the calibration of equation~\ref{sfr10my} between our SED-based SFR averaged over 10 Myr and LH$\beta$, as a function of Mass$_{young}$ (blue points and line) and of Mass$_{total}$ (red points and line). The green line is from \cite{brinch2004}, the cyan line comes from \cite{chang2015}, and the orange line comes from \cite{elbaz2007} (see text). (right) Specific SFR as a function of stellar mass, as in the left panel. In both plots the positions of the genuine starburst 30 Dor in the LMC are given (magenta). The black point in the right panel represents the position of the ULIRG Arp 220, with SFR over $\sim$ 200M$_{\odot}$/yr.}
\label{ms_gals}
\end{figure*}
Our SED fitting procedure allows us to isolate the mass of the young stellar component produced in the latest SF episode (M$_{young}$). Thus, the current SFR$_{10My}$ is simply M$_{young}$ / age$_{10My}$. We used the actual CIGALE best-fitting young ages of the individual objects, although little difference would result had we used the mean age of our sample, <log(age)$_{10My}$> = 6.8, or a more conservative maximum age for the ionizing population of log(age)$_{10My} = 7$. For the SFR averaged over the whole SF history of the galaxy, SFR$_{10Gy}$, we take M$_{old}/10^{10}$ yr.
Figure~\ref{sfr_lhb} shows the current SFR derived from our SED fitting procedure, $\log({\rm SFR}_{10My})$, plotted against our observed H$\beta$ luminosities corrected for extinction (see Sec.~\ref{data}). The resulting calibration, forcing the slope to be exactly one, is:
\begin{equation}
\log({\rm SFR}) = -40.15\pm0.31 + \log {\rm L(H}\beta) ~(rms=0.25)
\label{sfr10my}
\end{equation}
For comparison, the commonly used calibration of \cite{ken98}, shown as the green line in the figure, is $\log({\rm SFR})= -40.65 + \log {\rm L(H}\beta)$. The difference in zero point is only partially due to a slightly different IMF, stellar input in the synthesis model, or aperture effects, and mostly to the fact that our calibration isolates the SFR of the starburst component alone. Thus, the Kennicutt relation underestimates the present SFR of starbursts typically by a factor of 3.
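As a purely numerical illustration of this last point, the snippet below evaluates both zero points for an assumed extinction-corrected H$\beta$ luminosity; the $\sim0.5$ dex offset between the zero points corresponds to the factor $10^{0.5}\simeq3$ quoted above.
\begin{verbatim}
def sfr_this_work(log_lhb):
    """log SFR = -40.15 + log L(H-beta), the calibration of this work."""
    return 10 ** (-40.15 + log_lhb)

def sfr_kennicutt(log_lhb):
    """log SFR = -40.65 + log L(H-beta), the Kennicutt (1998) H-beta form."""
    return 10 ** (-40.65 + log_lhb)

log_lhb = 41.0                       # an assumed luminosity, log erg/s
print(sfr_this_work(log_lhb))        # ~ 7.1 M_sun/yr
print(sfr_kennicutt(log_lhb))        # ~ 2.2 M_sun/yr
print(sfr_this_work(log_lhb) / sfr_kennicutt(log_lhb))   # ~ 3.2, the factor above
\end{verbatim}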
The prototypical starburst 30~Dor in the LMC is seen in Figure~\ref{sfr_lhb} to fall, within the errors, exactly on the locus of HII galaxies, confirming that the starbursts in HII galaxies are similar to the Giant HII Regions in local spiral and irregular galaxies.
The relation between SFR and stellar mass is generally known as the ``main sequence'' of star-forming galaxies. There are extensive discussions of this relation in the literature and its use as a probe of galaxy evolution as a function of mass, environment, and galaxy types \citep{perez2003,brinch2004,noeske2007,salim2007,daddi2007,elbaz2007,peng2010,chang2015,salim2016}, as well as sub-sets of extremely metal-poor galaxies \citep[and references therein]{filho2016} and samples of BCDs \citep{sanchez2009,izo2014,janow2017}. Various SFR indicators have been used but Kennicutt's H$\alpha$ luminosity is the most common indicator for local star forming galaxies \citep{ken1983,brinch2004}.
Figure~\ref{ms_gals} (left panel) plots the relationship between the SFR derived from our calibration given by equation~\ref{sfr10my} and stellar masses for our sample. The relation for the total masses shown by the red points and red line is given by
\begin{equation}
\log({\rm SFR}) = -8.01\pm0.09 + 0.93\pm0.01 \times \log(M_{total})
\end{equation}with an rms=0.313. For comparison, we plot the relation derived from other commonly used samples of star-forming galaxies in the literature. The green and cyan lines are from \cite{chang2015}. The latter represents their calibration using the values from \cite{brinch2004} for stellar masses and SFRs. The orange line is from \cite{elbaz2007} for their sample of blue star forming galaxies. Even considering the scatter in these relations it is clear that the main sequence of HII galaxies (red points) is significantly steeper and stronger than the relation for more general samples of star-forming galaxies.
The blue points in the figure show the SFR for the starburst component alone. A linear fit (blue line) to this relation gives
\begin{equation}
\log({\rm SFR}) = -6.24\pm0.06 + 0.92\pm0.01 \times \log(M_{young})
\end{equation}with an rms=0.297.
This relation can be interpreted as the empirically derived main sequence for single starbursts and represents the maximum star formation rates for starbursts.
To illustrate this point Figure~\ref{ms_gals} (right panel) shows the relation between the so-called specific star formation rate (sSFR; star formation rate per unit mass) as a function of mass. As in the previous figure, the blue points represent the star formation per unit mass of stars formed in the current star formation episode ($M_{young}$) and the red points represent the current star formation rate over all stars ($M_{total}$) formed in the past history of the galaxy. HII Galaxies fall well above the overall average for star-forming galaxies of log(sSFR) $\sim$ -10 yr$^{-1}$ \citep{guo2015}.
Thus, averaged over their entire lifetimes, HII Galaxies have been forming stars at levels that approach those of the largest starburst galaxies such as the ULIRGS, represented in the figure by Arp 220 with log(sSFR) $\sim$ -8 yr$^{-1}$. However, if one considers the specific star formation rate of the present burst alone (SFR per unit {\it young} stellar mass), the current starbursts in HII Galaxies are producing new stars at a much higher rate of log(sSFR) $\sim$ -7 to -6.5 yr$^{-1}$. By the same token, the present-day sSFRs of ULIRGS are close to the upper limit set by HII galaxies when normalised by their young stellar masses.
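The effect of the normalisation can be made explicit with round, purely illustrative numbers (these are assumptions for the sake of the example, not measurements of any particular galaxy):
\begin{verbatim}
import numpy as np

m_young, m_total = 3e6, 3e8     # M_sun; illustrative round numbers
age_young = 10 ** 6.8           # yr, the mean young age from the SED fits

sfr = m_young / age_young                  # M_sun / yr
print(np.log10(sfr / m_young))             # ~ -6.8 : sSFR per unit young mass
print(np.log10(sfr / m_total))             # ~ -8.8 : sSFR per unit total mass
\end{verbatim}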
Figure~\ref{ms_gals} also shows the SFR (left panel) and sSFR (right panel) for the prototypical Giant HII Region 30 Doradus in the LMC. \cite{doran2013} derived a mass of $1.1\times10^5$M$_{\odot}$\ and a SFR of $0.073 \pm 0.04$ M$_{\odot}$$yr^{-1}$ for a Kroupa IMF from their stellar Lyman continuum census. Although star formation in 30 Dor has spread over 5 Myr \citep{selman1999}, this is the closest example of a genuine real-life simple young massive stellar population, and sets an upper limit to how fast star formation may occur in starbursts.
The purple dashed-line in Figure~\ref{ms_gals} (right panel) shows the "speed limit" for star formation that is set by 30Dor.
As discussed above, simple dynamical arguments imply that objects substantially more massive than 30 Doradus, which is the case for all the HII Galaxies in our sample, cannot form stars significantly faster than 30Dor. This also explains the declining tilt of the sSFR of our galaxies shown in the figure: only low-mass HII Galaxies can harbour the most intense starbursts.
\subsection{The $L-\sigma$~\ relation}
The strong relation between L(H$\beta$)~\ and $M_{young}$\ found in this paper confirms that the $L-\sigma$~\ relation is indeed a correlation between the mass of the starburst component and the turbulence of the ionized gas. However, the relation remains empirical because we do not yet fully understand the origin of the gas turbulence in HII Galaxies. It could be due to gravity, if the gas clouds are virialized, or to the direct injection of mechanical energy from massive stars via stellar winds, or a combination of both.
We used the $f_r$ factors derived from our SED fitting to correct the luminosities of the galaxies used in
\cite{mel2017} to study the scatter of the relation and found that the corrections actually increase the scatter quite substantially. This is probably due to the fact that the corrections expose the effect of a second parameter -- possibly the effective radius of the young component ($R$) as suggested by \cite{chavez2016} and expected if the gas is virialized. Unfortunately, however, good measurements of the effective radii
of these galaxies are not yet available to verify, for example, whether $R\sigma^2$ correlates with $M_{young}$\ as expected if the gas is in Virial equilibrium with the stellar potential.
\section{Concluding remarks}\label{conclusions}
We have studied a representative sample of the youngest \mbox{($<{\rm EW(H}\beta)>=50$\AA)}, highest-excitation and lowest-metallicity (<12+logO/H>=8.2) HII Galaxies in the local universe ($z<0.4$) and find that, as a class, they have the following properties:
\begin{enumerate}
\item The correction factor $(1 +f_r)$ is typically between 1.5 and 2.5. This factor is derived from our SED fitting procedure and is then applied on the observed ${\rm EW(H}\beta)$ in order to correct for the contribution of the underlying old stellar continuum and recover the true ${\rm EW(H}\beta)_{young}$.
\item The star formation histories of HII Galaxies can be reproduced remarkably well by three bursts of star formation: (a) the current young burst, a few Myr old, which dominates the luminosity at all wavelengths but contains only a few percent of the total mass; (b) an intermediate-age burst of a few hundred Myr; and (c) an old stellar component (10 Gyr); together, (b) and (c) contain most of the mass. Therefore, the past SF history is far more important in producing the bulk of the stellar mass in HII galaxies.
\item At a given age, the H$\beta$\ luminosity of HII Galaxies depends only on the mass of young stars. This implies that the IMF of the ionizing clusters must be a universal function at least for massive stars, and that only a relatively small fraction of Lyman continuum photons escape from the nebulae (case B photoionization).
\item The "main sequence" of star formation for HII Galaxies is significantly steeper and stronger than that of more massive star forming galaxies from the literature, while the present-day star formation rates of HII Galaxies are on average a factor of three larger than predicted by the H$\alpha$ Kennicutt relation. Therefore, extreme care must be exercised when combining starburst galaxies with more normal galaxies to construct the overall "main sequence" of star-forming galaxies.
\end{enumerate}
\section*{Acknowledgments}
We are grateful to Denis Burgarella, the father of CIGALE, and to M\'ed\'eric Boquien for guiding us through our first steps with the code and answering numerous questions. M\'ed\'eric kindly wrote the special module to fit three stellar populations that we used in this work. ET thanks Roderik Overzier for fruitful discussions, and Elena Terlevich for comments on the manuscript. JM acknowledges support from a CNPq {\it Ciencia Sem Fronteiras} grant at the Observatorio Nacional in Rio de Janeiro, and the hospitality of ON as a PVE visitor. Finally, we thank the anonymous referee for his/her comments and suggestions to improve the paper.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions
are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge,
Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the
Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle
Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National
Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States
Naval Observatory, and the University of Washington.
The entire GALEX Team gratefully acknowledges NASA's support for construction, operation, and science
analysis for the GALEX mission, developed in cooperation with the Centre National d'Etudes Spatiales of
France and the Korean Ministry of Science and Technology. We acknowledge the dedicated team of engineers,
technicians, and administrative staff from JPL/Caltech, Orbital Sciences Corporation, University of California,
Berkeley, Laboratoire d'Astrophysique Marseille, and the other institutions who made this mission possible.
The UKIDSS project is defined in Lawrence et al. (2007). UKIDSS uses the UKIRT Wide
Field Camera WFCAM (Casali et al. 2007). The photometric system is described in
Hewett et al. (2006), and the calibration is described in Hodgkin et al. (2008). The
pipeline processing and science archive are described in Irwin et al. (2009, in prep)
and Hambly et al. (2008).
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University
of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National
Aeronautics and Space Administration.
\bibpunct{(}{)}{;}{a}{}{,}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
Motivated by a wide range of applications, bipartite matching models can be thought of as a generalization of the usual skill-based queueing system in which customers and servers play symmetric roles: instead of being part of the 'hardware' of the system, the servers come and go exactly
as the customers. Service times are not taken into account, as customers and servers only use the system as an interface to be matched together, and leave the system in pairs as soon as a match is formed.
as soon as they form one. A bipartite graph named {\em matching graph}, specifies the possible matchings, i.e. the classes of servers and customers are represented by the nodes of the graph, the bipartition
consists of the sets ${\mathcal{C}}$ of "server" nodes and ${\mathcal S}$ of "customer" nodes, and the existence of an edge between customer node $i$ and server node $j$ means that customers of class $i$ can be attended by servers of class $j$.
As is easily seen, such models have a large variety of applications, from organ exchange programs \cite{BDPS11} to housing allocation \cite{TW08} or taxi platforms (see \cite{BC15,BC17}, in which the considered matching graphs are
complete - however, each cab/passenger match occurs with some specified probability).
Under general bipartite structures, this class of models was formalized in \cite{CKW09} and then \cite{AW11}, under the name {\em stochastic bipartite matching} (BM) model: assume that
arrivals occur by pairs in discrete time. At each time, a pair customer/server is drawn from a given probability measure $\mu:=\mu_{{\mathcal{C}}} \otimes \mu_{{\mathcal S}}$ on the set ${\mathcal{C}} \times {\mathcal S}$, independently of everything else.
Possible matches are specified by a fixed bipartite matching graph, and a {\em matching policy} decides which match to perform at any given time, in case of multiple choices.
The above seminal references addressed the stability problem of such models, under the First Come, First Served (FCFS) policy.
A general condition on $\mu$ is obtained (eq. (\ref{eq:Ncond}) below) that guarantees the existence of a perfect FCFS matching in the long run, using regeneration points.
Interestingly, this condition is also necessary and sufficient for complete resource pooling to hold for the fluid approximation of the corresponding skill-based service system under the FCFS-ALIS
(allocate to longest idle server) policy (see \cite{AW14}), and appears as a stochastic analog of Hall's necessary and sufficient condition for the existence of a perfect matching on a given bipartite graph, see \cite{Hall35}.
In \cite{ABMW17}, using an indirect reversibility argument, the stationary measure of the FCFS BM model is proven to have a remarkable product form under (\ref{eq:Ncond}).
Also, any initially empty system couples from the past to the above distribution, as shown using a backwards scheme {\em \`a la} Loynes \cite{Loynes62} (Section 3 in \cite{ABMW17}).
An extension of the BM model, termed {\em extended} bipartite matching (EBM) model was proposed in \cite{BGM13}, allowing the probability measure $\mu$ to have an arbitrary support on ${\mathcal{C}} \times {\mathcal S}$,
in a way that $\mu$ cannot necessarily be written as a tensor product of two measures on ${\mathcal{C}}$ and ${\mathcal S}$.
Customer/server couples in the support of $\mu$ are represented by the edges of a secondary graph on the bipartition ${\mathcal{C}}\cup {\mathcal S}$, termed the {\em arrival graph}.
An extensive stability study of EBM models was undertaken in \cite{BGM13}, in particular providing sufficient stability conditions on $\mu$ (eq. (\ref{eq:Scond}) below), and proving that the 'Match the longest' policy is always stable under (\ref{eq:Ncond}) - we say in such case that the considered matching policy has a {\em maximal} stability region.
To suit alternative areas of application, such as dating websites, collaborative economic architectures and assemble-to-order systems, another variation on matching systems was proposed in \cite{MaiMoy16}:
in the so-called stochastic {\em General matching} (GM) model, the matching graph is not necessarily bipartite (hence there are no such things as {\em customers} and {\em servers}), and items enter the system one by one
following a fixed probability distribution on the set of nodes. General stability results are given in \cite{MaiMoy16}, including a necessary stability condition related to (\ref{eq:Ncond}) (which cannot be satisfied
whenever the matching graph is bipartite - thereby justifying the assumption of pairwise arrivals in that case, to make the system stabilizable), the maximality of 'Match the Longest'
and the study of particular graphical structures. A variant of the GM model in continuous time was then proposed in \cite{MoyPer17}, showing, using fluid (in)stability arguments, that aside from a particular class of matching graphs, there always exists a matching policy of the strict priority type that is not maximal, and that the min-cost 'Uniform' policy, consisting in choosing the class of the match uniformly at random among the non-empty neighboring classes, is never maximal. The maximality of FCFS for GM models, and the product form of the stationary measure, were then shown in \cite{MBM17},
together with the coupling to the stationary state whenever the latter exists, in the strong backwards sense of Borovkov and Foss \cite{Bor84,Foss92}, for a wide class of matching policies including FCFS, LCFS, Match the Longest and 'Uniform'. Related models are studied in
\cite{GurWa}, \cite{BM14} and \cite{NS16}, proposing optimization schemes for models in which the matching structures are particular (bipartite) graphs or hypergraphs, and the matching schemes are associated to a cost or a reward.
Coupling-from-the-past convergence schemes, such as Loynes's construction, are a crucial tool for the explicit pathwise construction of the steady state. These techniques are a cornerstone of the qualitative comparison of discrete-event systems under general statistical assumptions (see e.g. Chapter 4 of \cite{BacBre02}) and of the perfect simulation of the stationary state (see \cite{PW96}).
Motivated by this practical interest, the present work consists of a generalization of Loynes's construction in Section 3 of \cite{ABMW17} to:
\begin{itemize}
\item {EBM models},
\item {a wider class of matching policies} (in fact, to most usual policies that do not allow delaying any possible match, thanks to a particular 'block-wise' sub-additive property that is specified below),
\item {stationary ergodic} - but not necessarily independent - {inputs},
\item {a wider class of initial conditions}.
\end{itemize}
For doing so, we adapt to the present context the coupling arguments developed in \cite{MBM17} for GM models, which mostly use the Borovkov and Foss theory of renovation \cite{Foss92,Foss94}. This paper is organized as follows: the EBM model is formally introduced in Section \ref{sec:model}.
A key sub-additive property of the model under most common matching policies is shown in Section \ref{sec:subadd}. After introducing abstract notions that will prove useful in the proofs to come
(namely, bi-separable graphs and erasing couples, respectively in Sections \ref{sec:bisep} and \ref{sec:erase}), we construct a backwards scheme, and then state and prove our main coupling results in Section \ref{sec:loynes},
making explicit the construction of perfect bi-infinite matchings under stability conditions, in sub-section \ref{subsec:matchings}.
\section{Formal definition of the models}
\label{sec:model}
\subsection{Notation}
We denote by ${\mathbb R}$ the set of real numbers, ${\mathbb Z}$ the set of integers, and by ${\mathbb N}$ the subset of
non-negative integers.
For any finite set $A$, we let $|A|$ denote the cardinality of $A$, $\varsigma(A)$ be the set of permutations of $A$, and $A^*$ be the free monoid generated by $A$.
Let $\emptyset$ denote the empty word of $A^*$. Words of $A^*$ will typically be denoted by bold symbols, and their letters in the corresponding regular symbol, e.g.
${\mathbf w}=w_1...w_{|{\mathbf w}|}$, where $|{\mathbf w}|$ denotes the length of the word ${\mathbf w}$.
For any ${\mathbf w} \in A^*$ and any $B \subset A$, set $|{\mathbf w}|_B = \# \{i \mid w_i \in B\}$, the number
of occurrences in ${\mathbf w}$ of letters from $B$. For $B = \{b\}$, we shorten
the notation to $|{\mathbf w}|_b$.
Furthermore, for any ${\mathbf w} \in A^*$ we set $[{\mathbf w}]:=(|{\mathbf w}|_a)_{a\in A}$, the commutative image of ${\mathbf w}$.
For a word ${\mathbf w} \in A^*$ of length $k$ and $i\in \{1,\dots, k\}$, we denote by
${\mathbf w}_{[i]}:= w_1\ldots w_{i-1} w_{i+1}\ldots w_k$, the sub-word of ${\mathbf w}$
obtained by deleting $w_i$. For positive integers $i,j$ such that $i\le j$, the $i$-th vector of
the canonical basis of $\mathbb R^j$ is denoted by $\gre_i$.
For two positive integers $a$ and $b$, we denote by $\|.\|$ the $\ell_1$-norm of
$\mathbb R^{a}\times \mathbb R^{b}$, {\em i.e.} for all $(x,y)
\in \mathbb R^{a}\times \mathbb R^{b}$,
\[\|(x,y) \| = \sum_{i=1}^{a} |x(i)|+\sum_{j=1}^{b} |y(j)|.\]
Finally, the commutative image of a couple $({\mathbf w},{\mathbf z}) \in A^*\times B^*$ is the following couple of
${\mathbb N}^{|A|}\times {\mathbb N}^{|B|}$,
\[\left[({\mathbf w},{\mathbf z})\right]:=\left([{\mathbf w}],[{\mathbf z}]\right).\]
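For instance, with $A=\{a,b\}$ and ${\mathbf w}=aba$, we get $|{\mathbf w}|=3$, $|{\mathbf w}|_a=2$, $[{\mathbf w}]=(2,1)$ and ${\mathbf w}_{[2]}=aa$.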
\subsection{Extended bipartite matching}
\label{subsec:defEBM}
We call {\em Extended Bipartite Matching} (EBM), the model introduced in \cite{BGM13}, from which
we keep the main terminology and notation. For clarity, let us recall hereafter the main definitions of Section 2 therein:
\begin{definition}
\rm
We call a {\em bipartite matching structure} a quadruple ${\mathcal{B}}:=({\mathcal{C}},{\mathcal S},E,F)$, where
\begin{itemi}
\item ${\mathcal{C}}$ (which we identify with $\{1,2,...,|{\mathcal{C}}|\}$) is the non-empty and finite set of customer classes;
\item ${\mathcal S}$ (identified with $\{1,2,...,|{\mathcal S}|\}$) is the non-empty and finite set of server classes;
\item $E \subset {\mathcal{C}}\times{\mathcal S}$ is the set of possible matchings;
\item $F \subset {\mathcal{C}}\times{\mathcal S}$ is the set of possible arrivals.
\end{itemi}
\end{definition}
Given a structure ${\mathcal{B}}$, we consider that customers and servers of various classes arrive in the system by pairs,
and let ${\mathcal{C}}$ and ${\mathcal S}$ denote the sets of customer and server classes, respectively.
The set of possible incoming pairs is given by $F$, and the set $E$ defines the pairs that may depart
from the system, aka the possible \emph{matchings}. We say that
${\mathcal{H}}:=({\mathcal{C}} \cup {\mathcal S},F)$ is the \emph{arrival graph} and that ${\mathcal{G}}:=({\mathcal{C}} \cup {\mathcal S},E)$ is the {\em matching graph} of the model.
We assume without loss of generality that
\begin{itemi}
\item ${\mathcal{G}}$ is connected;
\item ${\mathcal{H}}$ has no isolated vertices.
\end{itemi}
For a matching graph ${\mathcal{G}}=({\mathcal{C}} \cup {\mathcal S},E)$, we denote by ${\mathcal{C}}(s)$ the set of
customer classes that can be matched with an $s$-server, and by
${\mathcal S}(c)$ the set of server classes that can be matched with a
$c$-customer:
$${\mathcal S}(c) = \{s \in{\mathcal S} \; : \; (c,s) \in E\}, \quad {\mathcal{C}}(s) = \{c \in {\mathcal{C}} \; : \; (c,s) \in E\}.$$
For any subsets $A \subset {\mathcal{C}}$, and $B \subset{\mathcal S}$, we define
$${\mathcal S}(A) = \cup_{c\in A}{\mathcal S}(c), \quad {\mathcal{C}}(B) = \cup_{s\in B}{\mathcal{C}}(s).$$
Upon arrival of a new ordered pair $(c,s)\in F$, two situations may
occur: if neither $c$ nor $s$ match with the servers/customers
already present in the system, then $c$ and $s$ are simply added to
the buffer; if $c$, resp. $s$, can be matched then it departs the
system with its match, which leaves the buffer forever. If several matchings are possible for $c$,
resp. $s$, then it is the role of the matching policy to select one.
To properly define matching policies that may depend on (possibly random) choices
of matches, we will need to represent the orders of preferences of the customers and servers upon arrivals by elements of
the two following finite sets,
\begin{align*}
\mathbb S &= \varsigma({\mathcal S}(1)) \times ... \times \varsigma({\mathcal S}(|{\mathcal{C}}|));\\
\mathbb C &= \varsigma({\mathcal{C}}(1)) \times ... \times \varsigma({\mathcal{C}}(|{\mathcal S}|)),
\end{align*}
in other words, e.g. for any $\sigma:= \left(\sigma(1),...,\sigma\left(|{\mathcal{C}}|\right)\right) \in \mathbb S$ and $c \in {\mathcal{C}}$,
$\sigma(c)$ is a permutation of the classes of servers that are compatible with $c$. Then, identifying ${\mathcal S}(c)$ with $\llbracket 1,|{\mathcal S}(c)| \rrbracket$ we denote
for all $k \in \llbracket 1,|{\mathcal S}(c)| \rrbracket$, by $\sigma(c)[k]$ the $k$-th neighboring class of $c$ according to $\sigma$. Similarly for any $\gamma \in \mathbb C$.
Any such array of permutations $\sigma \in \mathbb S$ (resp. $\gamma \in \mathbb C$) is called {\em list of customer (resp. server) preferences},
and if the entering couple is $(c,s) \in F$, $\sigma(c)$ and $\gamma(s)$ are respectively understood as the order of preference of the entering customer and the entering server,
among the classes of their possible matches.
Then, a matching policy will be formalized as an operator mapping the system state to the next one, provided that the classes of the entering
couple are $(c,s)\in F$ and the orders of preferences of the classes are given by $(\sigma,\gamma)\in\mathbb S \times \mathbb C$.
The matching policies we consider are presented in detail in Section \ref{subsec:pol}.
\begin{definition}
\rm
We call an {\em extended bipartite matching} (EBM) model, a bipartite matching structure $({\mathcal{C}},{\mathcal S},E,F)$ together with a matching policy $\phi$, and a (finite or infinite)
family of ordered quadruples $(c_n,s_n,\sigma_n,\gamma_n)_{n\in \mathcal N} \in \left(F\times (\mathbb S \times \mathbb C)\right)^{\mathcal N}$.
The array $(c_n,s_n,\sigma_n,\gamma_n)_{n\in \mathcal N}$ is then called {\em input} of the EBM model.
\end{definition}
\medskip
Observe that two classes of systems studied in the literature can be seen as special cases of
EBM models:
\begin{itemize}
\item The Bipartite Matching (BM) model corresponding to the version introduced in \cite{calkapwei09} is an EBM with $F={\mathcal{C}}\times{\mathcal S}$.
\item The General Matching (GM) model, as introduced in \cite{MaiMoy16}, having a general matching graph $({\mathcal{C}},R)$ on the set of vertices ${\mathcal{C}}$, is an EBM with ${\mathcal S}= \tilde{{\mathcal{C}}}$, a disjoint copy of ${\mathcal{C}}$, $F=\{(c,\tilde{c}),\, c\in {\mathcal{C}}\}$ and
$({\mathcal{C}}\cup{\mathcal S}, E)$ is the bipartite double cover of $({\mathcal{C}},R)$.
\end{itemize}
\paragraph{An associated graph.} We consider the directed graph $({\mathcal{C}} \cup {\mathcal S},A)$ where the set of arcs $A$ is defined by:
\begin{itemi}
\item $(c,s) \in A$ if and only if $(c,s) \in E$ and
\item $(s,c) \in A$ if and only if $(c,s) \in F$.
\end{itemi}
As shown in Theorem 4.1 of \cite{BGM13}, irreducibility of the natural Markov representation of the system is closely related to the strong connectedness of
the directed graph $({\mathcal{C}} \cup {\mathcal S},A)$. Observe the following,
\begin{lemma}
\label{lemma:strongconnect}
In the BM model, if $({\mathcal{C}} \cup {\mathcal S},E)$ is connected then the graph $({\mathcal{C}} \cup {\mathcal S},A)$ is strongly connected. In the GM model, if the reduced graph $({\mathcal{C}},R)$ is connected, then $({\mathcal{C}} \cup {\mathcal S},A)$ is strongly connected.
\end{lemma}
\begin{proof}
First consider a BM model of connected matching graph $({\mathcal{C}} \cup {\mathcal S}, E)$, and let $c \in {\mathcal{C}}$ and $s \in{\mathcal S}$. There exists an alternating path
$c {\--} s_1 {\--} c_1 {\--} ... {\--} s_{p} {\--} c_p {\--} s$ connecting $c$ to $s$ in $({\mathcal{C}} \cup {\mathcal S}, E)$. Thus, as $F={\mathcal{C}} \times{\mathcal S}$,
$({\mathcal{C}} \cup {\mathcal S}, A)$ contains both the oriented path $c \to s_1 \to c_1 \to s_2 \to c_2 \to ... \to s_p \to c_p \to s$
and the oriented path $s \to c_p \to s_p \to c_{p-1} \to s_{p-1} \to ... \to c_1 \to s_1 \to c$. Thus the oriented graph $({\mathcal{C}} \cup {\mathcal S},A)$ is strongly connected.
We now consider a GM model of connected reduced graph $({\mathcal{C}},R)$. Let $i \in {\mathcal{C}}$ and $\tilde j \in{\mathcal S}=\tilde{{\mathcal{C}}}.$
By connectedness there exists a path $i {\--} k_1 {\--} ... {\--} k_p {\--} j$ between $i$ and $j$ in $({\mathcal{C}},R)$. As $(c, \tilde c') \in E$ for any edge $c {\--} c'$ of $R$, and $(c,\tilde c) \in F$ for any $c \in {\mathcal{C}}$,
$({\mathcal{C}} \cup {\mathcal S}, A)$ contains both paths $i \to \tilde k_1 \to k_1 \to \tilde k_2 \to k_2 \to ... \to \tilde k_p \to k_p \to \tilde j$
and $\tilde j \to j \to \tilde k_p \to k_p \to \tilde k_{p-1} \to ... \to k_1 \to \tilde i \to i$; so it is strongly connected.
\end{proof}
\subsection{State spaces}
\label{subsec:statespace}
Let $\mathcal N$ be a set of cardinality $N$, identified with $\llbracket 1,N \rrbracket$. Fix an EBM model of input $(c_n,s_n,\sigma_n,\gamma_n)_{n\in \mathcal N}$.
Let ${\mathbf c}=c_1...c_N$, and likewise for ${\mathbf s}$, ${\boldsymbol{\sigma}}$ and ${\boldsymbol{\gamma}}$.
In this case, the couple of words $({\mathbf c},{\mathbf s})$ will be called, for short, the {\em admissible input} of the EBM model.
Then, for a given admissible policy (to be properly defined in sub-section \ref{subsec:pol}), there exists a unique {\em matching} of the admissible input $({\mathbf c},{\mathbf s})$, that is, a bipartite graph having set of nodes
$\left\{c_1,...,c_{N}\right\}\cup \left\{s_1,...,s_{N}\right\}$, whose edges represent the matchings of the corresponding customers and servers.
This matching is denoted $M_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$. In turn, the {\em buffer detail} of $M_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$, denoted by $Q_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$,
is the couple of words
\[Q_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}}):=\Bigl(C_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\,,\,S_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\Bigl) \in {\mathcal{C}}^*\times {\mathcal S}^*,\]
such that $C_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$ (resp., $S_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$) is the sub-word of ${\mathbf c}$ (resp., ${\mathbf s}$)
whose letters are the classes of the unmatched customers in ${\mathbf c}$ (resp. of the unmatched servers in ${\mathbf s}$), in order of arrivals.
Observe that the definitions of $M_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$ and $Q_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$ can be extended to words ${\mathbf c}$ and ${\mathbf s}$ of different sizes, as follows: if ${\mathbf c}$ is of length $N$ and ${\mathbf s}$ is of length
$M$ for $N \ne M$ (say e.g. $N > M$), then we consider that $N-M$ customers first enter the system alone, and then $M$ arrivals occur by couples; in other words for all
$n \in \llbracket 0,M-1 \rrbracket$, the couple $\left(c_{N-n},s_{M-n}\right)$ enters the system contemporarily, with lists of preferences $\left(\sigma_{N-n},\gamma_{M-n}\right)$,
and for all $n \in \llbracket M,N-1 \rrbracket$, we let $c_{N-n}$ be the class of a customer entering alone, with an (irrelevant) arbitrary list of preferences.
Any admissible buffer detail belongs to the set
\begin{equation}
\mathcal{U} = \Bigl\{ ({\mathbf w},{\mathbf z}) \in {\mathcal{C}}^*\times {\mathcal S}^* \; : \;
\forall (i,j) \in E, \; |{\mathbf w}|_i|{\mathbf z}|_j = 0 \Bigr\}.\label{eq-ncss}
\end{equation}
We will denote shortly by $\mathbf{\emptyset}$ the state $(\emptyset,\emptyset) \in \mathcal{U}$, representing the empty buffers.
Observe that any state $({\mathbf w},{\mathbf z})$ corresponding to the input $(c_i,\sigma_i)_{i\le N}$ and $(s_j,\gamma_j)_{j\le M}$ as above,
is such that $|{\mathbf w}|=|{\mathbf z}|$ if $N=M$. However we will often work in greater generality, so any state of $\mathcal{U}$ is admissible.
We denote by $\mathcal{U}_0$, the subset of admissible states having equal numbers of customers and servers, i.e.
\begin{equation}
\mathcal{U}_0 = \Bigl\{ ({\mathbf w},{\mathbf z}) \in \mathcal{U} \; : \; |{\mathbf w}|=|{\mathbf z}|\Bigr\}.\label{eq-ncss0}
\end{equation}
Whenever the matching policy $\phi$ is such that the preference permutations are irrelevant, we will just drop this parameter from all notation, and write
e.g. $M_\phi({\mathbf c},{\mathbf s})$ and $Q_\phi({\mathbf c},{\mathbf s})$ for the matching of ${\mathbf c}$ and ${\mathbf s}$ and for the buffer detail of that matching, respectively.
As will be seen below, depending on the matching policy we can also restrict the available information on the state of the system, to a vector only keeping track of
the number of customers and servers of the various classes remaining unmatched after the matching of the finite words ${\mathbf c}$ and ${\mathbf s}$, that is, of the couple of
commutative images of $C_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$ and $S_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$. In such cases this restricted state, which will be called the {\em class detail} of the system, takes values in the set
\begin{equation}
\mathbb E = \Bigl\{(x,y)\in {\mathbb N}^{|{\mathcal{C}}|}\times
{\mathbb N}^{|{\mathcal S}|}\,:\,x(i)y(j)=0\mbox{ for any }(i,j)\in E\Bigl\}=\Bigl\{\left[({\mathbf w},{\mathbf z})\right];\,({\mathbf w},{\mathbf z}) \in \mathcal{U}\Bigl\}.\label{eq-css}
\end{equation}
\subsection{Matching policies}
\label{subsec:pol}
We now formally describe the main matching policies we consider.
In all cases, we make the following {\em buffer-first} assumption: an incoming couple $(c,s)$ is matched together if and only if
$(c,s) \in E$, $c$ found no match in the buffer of servers {\em and} $s$ found no match in the buffer of customers.
\begin{definition}
\rm
A matching policy $\phi$ is said {\em admissible} if the choice of matches of an incoming couple $(c,s)$
depends {\em only} on the buffer detail and the couple of lists of preferences $(\sigma,\gamma)$.
\end{definition}
In other words, if a matching policy $\phi$ is admissible there exists a mapping $\odot_{\phi}: \mathcal{U} \times (F \times (\mathbb S \times \mathbb C)) \rightarrow \mathcal{U}$ such that,
denoting by $({\mathbf w},{\mathbf z})$ the buffer detail corresponding to a given input, and by $({\mathbf w}',{\mathbf z}')$ the buffer detail if the latter input is augmented by the quadruple $(c,s,\sigma,\gamma)$, then
$({\mathbf w}',{\mathbf z}')$ and $({\mathbf w},{\mathbf z})$ are connected by the relation
\begin{equation}
\label{eq:defodot}
({\mathbf w}',{\mathbf z}')= ({\mathbf w}, {\mathbf z}) \odot_{\phi} (c,s,\sigma,\gamma).
\end{equation}
\paragraph{First Come, First Served.}
In First Come, First Served ({\sc fcfs}), the lists of preference are irrelevant, and are erased from all notation for short.
The map $\odot_{\textsc{fcfs}}$ is then given for all $({\mathbf w},{\mathbf z}) \in \mathcal{U}$ and all couples $(c,s)$ by
$$
({\mathbf w}, {\mathbf z}) \odot_{\textsc{fcfs}} (c,s) =
\left \{
\begin{array}{ll}
({\mathbf w} c, {\mathbf z} s), & \textrm{if } \; |{\mathbf w}|_{{\mathcal{C}}(s)} = 0, \;|{\mathbf z}|_{{\mathcal S}(c)} = 0, \;
(c,s) \not\in E \\
({\mathbf w}, {\mathbf z}), & \textrm{if } \; |{\mathbf w}|_{{\mathcal{C}}(s)} = 0, \; |{\mathbf z}|_{{\mathcal S}(c)} = 0, \; (c,s)\in E\\
({\mathbf w}_{\left [\Phi({\mathbf w},s)\right]}, {\mathbf z}_{\left [\Psi({\mathbf z},c)\right]}), & \textrm{if } \; |{\mathbf w}|_{{\mathcal{C}}(s)} \not= 0, \; |{\mathbf z}|_{{\mathcal S}(c)} \not= 0\\
({\mathbf w}_{\left [\Phi({\mathbf w},s)\right]}c, {\mathbf z}), & \textrm{if } \; |{\mathbf w}|_{{\mathcal{C}}(s)} \not= 0, \; |{\mathbf z}|_{{\mathcal S}(c)} = 0\\
({\mathbf w}, {\mathbf z}_{\left [\Psi({\mathbf z},c)\right]}s), & \textrm{if } \; |{\mathbf w}|_{{\mathcal{C}}(s)} = 0, \; |{\mathbf z}|_{{\mathcal S}(c)} \not= 0,
\end{array}
\right .
$$
where $\Phi({\mathbf w},s)$ (resp. $\Psi({\mathbf z},c)$) denotes the index of the oldest compatible customer (resp. server) present in the buffer:
$$\Phi({\mathbf w}, s) = \min \{k\,:\, w_k \in {\mathcal{C}}(s)\}, \quad \Psi({\mathbf z}, c) = \min \{k\,:\, z_k \in {\mathcal S}(c)\}.$$
\paragraph{Last Come, First Served.}The lists of preferences are again irrelevant; the map $\odot_{\textsc{lcfs}}$ is analogous to $\odot_{\textsc{fcfs}}$, with $\Phi$ and $\Psi$ now selecting the most recently arrived compatible item:
$$\Phi({\mathbf w},s) = \max \{k\,:\, w_k \in {\mathcal{C}}(s)\}, \quad \Psi({\mathbf z}, c) = \max \{k\,:\, z_k \in {\mathcal S}(c)\}.$$
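For illustration only, the following Python sketch implements the buffer-detail update $\odot_\phi$ for {\sc fcfs} and {\sc lcfs}; the buffers are kept as lists of class labels in order of arrival, the set \texttt{E} encodes the matching graph, and the function names are ours, not part of the formal model.
\begin{verbatim}
def update(w, z, c, s, E, lcfs=False):
    """One step of the buffer-detail map for FCFS (default) or LCFS.

    w, z : lists of unmatched customer / server classes, in arrival order
    c, s : classes of the entering couple; E : set of edges (c, s).
    Returns the new buffer detail, under the buffer-first rule of the text."""
    cand_w = [k for k, wk in enumerate(w) if (wk, s) in E]  # customers compatible with s
    cand_z = [k for k, zk in enumerate(z) if (c, zk) in E]  # servers compatible with c
    pick = (lambda idx: idx[-1]) if lcfs else (lambda idx: idx[0])

    w2, z2 = list(w), list(z)
    if cand_w:                       # s is matched with a buffered customer
        del w2[pick(cand_w)]
    if cand_z:                       # c is matched with a buffered server
        del z2[pick(cand_z)]
    if not cand_z:                   # c found no buffered match ...
        if cand_w or (c, s) not in E:
            w2.append(c)             # ... and is stored unless it matches s
    if not cand_w:                   # symmetrically for s
        if cand_z or (c, s) not in E:
            z2.append(s)
    return w2, z2
\end{verbatim}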
\begin{definition}
\rm
A matching policy $\phi$ will be said {\em class-admissible} if it is fully characterized by
two mappings $p_\phi$ from ${\mathbb N}^{|{\mathcal S}|} \times {\mathcal{C}} \times \mathbb S$ to ${\mathcal S}$ and $q_\phi$ from ${\mathbb N}^{|{\mathcal{C}}|} \times{\mathcal S} \times \mathbb C$ to ${\mathcal{C}}$, such that $p_\phi(y,c,\sigma)$ (resp. $q_\phi(x,s,\gamma)$)
determines the class of the match (if any) chosen by the entering $c$-customer (resp. $s$-server) under $\phi$ in a system of class detail $(x,y)$, for the lists of preferences $(\sigma,\gamma)$.
\end{definition}
Let us define for any $(c,s) \in F$ and
$(x,y) \in \mathbb E$,
\begin{align*}
\mathscr P(y,c) &=\Bigl\{j\in {\mathcal S}(c)\,:\,y\left(j\right) > 0\Bigl\};\\%\label{eq:setP1}\\
{\mathscr{Q}}(x,s) &=\Bigl\{i\in {\mathcal{C}}(s)\,:\,x\left(i\right) >
0\Bigl\},\label{eq:setP2}
\end{align*}
which represent the set of classes of available compatible servers
(resp. customers) with the entering customer $c$ (resp. server $s$),
if the class detail of the system is given by $(x,y)$.
As is easily seen, for a class-admissible $\phi$, the arrival of $(c,s) \in F$ and the draw of $(\sigma,\gamma)$ from $\nu^\phi$ correspond to the following action on
the class detail of the system,
\begin{equation}
\label{eq:defccc}
(x, y) \ccc_{\phi} (c,s,\sigma,\gamma) = \left \{
\begin{array}{ll}
(x+\gre_c,y+\gre_s), &\mbox{ if }{\mathscr{P}}(y,c)=\emptyset,\,
{\mathscr{Q}}(x,s)=\emptyset,\,(c,s)\not\in E,\\
(x,y), &\mbox{ if }{\mathscr{P}}(y,c)=\emptyset,\, {\mathscr{Q}}(x,s)=\emptyset,\,(c,s)\in E,\\
(x,y+\gre_s-\gre_{p_\phi(y,c,\sigma)}), &\mbox{ if }{\mathscr{P}}(y,c)\ne
\emptyset,\,{\mathscr{Q}}(x,s)=\emptyset,\\
(x+\gre_c-\gre_{q_\phi(x,s,\gamma)},y), &\mbox{ if }{\mathscr{P}}(y,c)=\emptyset,\,{\mathscr{Q}}(x,s)\ne
\emptyset,\\
(x-\gre_{q_\phi(x,s,\gamma)},y-\gre_{p_\phi(y,c,\sigma)}), &\mbox{ if
}{\mathscr{P}}(y,c)\ne \emptyset,\,{\mathscr{Q}}(x,s)\ne
\emptyset.
\end{array}
\right .
\end{equation}
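In the same illustrative spirit, the display above can be transcribed directly at the level of class details, with the policy passed in through Python counterparts of the choice mappings $p_\phi$ and $q_\phi$ (the code and its names are a sketch of ours, not part of the formal development):
\begin{verbatim}
def step(x, y, c, s, sigma, gamma, E, p_phi, q_phi):
    """One transition of the class detail (x, y), following the five cases above.

    x, y  : lists of counts of unmatched customers / servers, indexed by class
    c, s  : classes of the entering couple; E : set of edges (c, s)
    p_phi(y, c, sigma), q_phi(x, s, gamma) : class chosen for the match, if any."""
    P = [j for j in range(len(y)) if (c, j) in E and y[j] > 0]  # compatible servers
    Q = [i for i in range(len(x)) if (i, s) in E and x[i] > 0]  # compatible customers
    x2, y2 = list(x), list(y)
    if not P and not Q:
        if (c, s) not in E:          # no match at all: both items are stored
            x2[c] += 1
            y2[s] += 1
        # otherwise c and s are matched together and nothing is stored
    elif P and not Q:                # c is matched in the buffer, s is stored
        y2[p_phi(y, c, sigma)] -= 1
        y2[s] += 1
    elif Q and not P:                # s is matched in the buffer, c is stored
        x2[q_phi(x, s, gamma)] -= 1
        x2[c] += 1
    else:                            # both are matched in the buffers
        y2[p_phi(y, c, sigma)] -= 1
        x2[q_phi(x, s, gamma)] -= 1
    return x2, y2
\end{verbatim}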
\begin{remark}
\label{rem:equiv}
\rm
The same observation as in Remark 1 in \cite{MBM17} holds: to any class-admissible policy $\phi$ corresponds an admissible policy, e.g. by specifying that
the rule of choice within class is FCFS (i.e. the customer/server chosen is the {\em oldest} one in line within its class). Then any class-admissible policy $\phi$ is admissible, i.e. the mapping $\ccc_\phi$ from
$\mathbb E \times (F \times (\mathbb S \times \mathbb C))$ to $\mathbb E$ can be detailed into a map $\odot_{\phi}$ from
$\mathcal{U} \times (F \times (\mathbb S \times \mathbb C))$ to $\mathcal{U}$, as in (\ref{eq:defodot}), such that for any buffer detail $({\mathbf w},{\mathbf z})$ and any $(c,s,\sigma,\gamma)$,
denoting again $({\mathbf w}',{\mathbf z}')=({\mathbf w},{\mathbf z})\odot_\phi (c,s,\sigma,\gamma)$ we have that
\[\left([{\mathbf w}'],[{\mathbf z}']\right) = \left([{\mathbf w}],[{\mathbf z}]\right)\ccc_\phi (c,s,\sigma,\gamma).\]
\end{remark}
\paragraph{Random policies.}
Here the only information that is needed to determine the choice of matches for the incoming items, is whether their various compatible classes have an empty queue or not.
Specifically, the considered customer/server investigates its compatible classes
of servers/customers in the order induced by the lists of preferences upon arrival, until it finds one having a non-empty buffer, if any. The customer/server is then matched with a server/customer of the latter class.
In other words, we set
\begin{align*}
p_{\textsc{rand}}(y,c,\sigma) &=\sigma(c)[k],\mbox{ where }k=\min \Bigl\{i \in
\llbracket 1,|{\mathcal S}(c)| \rrbracket\,:\,\sigma(c)[i]\in {\mathscr{P}}(y,c)\Bigl\};\\
q_{\textsc{rand}}(x,s,\gamma) &=\gamma(s)[\ell],\mbox{ where }\ell=\min \Bigl\{j \in
\llbracket 1,|{\mathcal{C}}(s)| \rrbracket\,:\,\gamma(s)[j]\in {\mathscr{Q}}(x,s)\Bigl\}.
\end{align*}
We call such policies 'Random' (and denote {\sc rand}) since, as will be formalized in Section \ref{subsec:NcondScond}, the lists of preference may be random, drawn from a given distribution on $\mathbb S \times \mathbb C$.
The particular case where these lists are deterministic corresponds to a (strict) priority policy.
\paragraph{Match the Longest.}
In the 'Match the Longest' policy ({\sc ml}), the newly arrived customer/server chooses
a server/customer of the compatible class that has the longest line (not including the other incoming item
whenever it is compatible). Ties between classes having queues of the same length are broken using the lists of preferences drawn at that time.
Formally, set for all $(c,s)$
and $(x,y)$ such that ${\mathscr{P}}(y,c) \ne \emptyset$ and ${\mathscr{Q}}(x,s) \ne \emptyset$,
\begin{align*}
L(y,c) &=\max\left\{y(j)\,:\,j \in {\mathcal S}(c)\right\}\,\quad\mbox{ and }\quad\,
\mathcal L(y,c) =\left\{j\in {\mathcal S}(c)\,:\,y\left(j\right)=L(y,c)\right\}\subset {\mathscr{P}}(y,c),\\
M(x,s) &=\max\left\{x(i)\,:\,i \in {\mathcal{C}}(s)\right\}\,\quad\mbox{ and }\quad\,
\mathcal M(x,s) =\left\{i\in {\mathcal{C}}(s)\,:\,x\left(i\right)=M(x,s)\right\}\subset {\mathscr{Q}}(x,s).
\end{align*}
\begin{align*}
p_{\textsc{ml}}(y,c,\sigma) &=\sigma(c)[k],\mbox{ where }k=\min \Bigl\{i \in
\llbracket 1,|{\mathcal S}(c)| \rrbracket:\,\sigma(c)[i]\in \mathcal L(y,c)\Bigl\};\\
q_{\textsc{ml}}(x,s,\gamma) &=\gamma(s)[\ell],\mbox{ where }\ell=\min \Bigl\{j \in
\llbracket 1,|{\mathcal{C}}(s)| \rrbracket:\,\gamma(s)[j]\in \mathcal M(x,s)\Bigl\}.
\end{align*}
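Continuing the sketch above, hypothetical Python counterparts of $p_{\textsc{rand}}$ and $p_{\textsc{ml}}$ (the customer side; the server side $q_{\textsc{rand}}$, $q_{\textsc{ml}}$ is symmetric) could read as follows, assuming that \texttt{sigma[c]} lists the classes of ${\mathcal S}(c)$ in order of preference and that the functions are only called when ${\mathscr{P}}(y,c)\ne\emptyset$, as in the class-detail step above:
\begin{verbatim}
def p_rand(y, c, sigma):
    """RAND: first class of the preference list sigma[c] with a non-empty queue."""
    return next(j for j in sigma[c] if y[j] > 0)

def p_ml(y, c, sigma):
    """ML: a longest non-empty queue among S(c); ties broken by sigma[c]."""
    longest = max(y[j] for j in sigma[c])
    return next(j for j in sigma[c] if y[j] == longest)
\end{verbatim}
Plugging either function (together with its server-side analogue) into the \texttt{step} sketch above yields the corresponding class-detail dynamics.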
\paragraph{Match the Shortest.}
'Match the Shortest' ({\sc ms}) is defined analogously to {\sc ml}, except that the shortest non-empty compatible queue is chosen instead of the longest.
\section{Sub-additivity}
\label{sec:subadd}
In this section we show that, under most of the matching policies introduced above, the EBM model satisfies a sub-additivity property that will prove crucial in the construction of a backwards scheme.
\begin{definition}[Sub-additivity]
\label{def:subadd}
An admissible matching policy $\phi$ is said to be {\em sub-additive} if,
for all ${\mathbf c}',{\mathbf c}''\in {\mathcal{C}}^*$, ${\mathbf s}',{\mathbf s}''\in {\mathcal S}^*$, ${\boldsymbol{\sigma}}',{\boldsymbol{\sigma}}'' \in \mathbb S^*$ and ${\boldsymbol{\gamma}}',{\boldsymbol{\gamma}}'' \in \mathbb C^*$ such that $|{\mathbf c}'|=|{\mathbf s}'|=|{\boldsymbol{\sigma}}'|=|{\boldsymbol{\gamma}}'|$ and $|{\mathbf c}''|=|{\mathbf s}''|=|{\boldsymbol{\sigma}}''|=|{\boldsymbol{\gamma}}''|$,
we have that
\begin{align*}
\left|C_\phi\left({\mathbf c}'{\mathbf c}'',{\mathbf s}'{\mathbf s}'',{\boldsymbol{\sigma}}'{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'{\boldsymbol{\gamma}}''\right)\right| &\leq \left|C_\phi\left({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}'\right)\right| + \left|C_\phi\left({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}''\right)\right|;\\
\left|S_\phi\left({\mathbf c}'{\mathbf c}'',{\mathbf s}'{\mathbf s}'',{\boldsymbol{\sigma}}'{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'{\boldsymbol{\gamma}}''\right)\right| &\leq \left|S_\phi\left({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}'\right)\right| + \left|S_\phi\left({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}''\right)\right|.
\end{align*}
\end{definition}
It is proven in Lemma 4 of \cite{ABMW17} that FCFS is sub-additive for the BM model; as the sub-additivity is purely algebraic and does not depend on the statistics of the input,
this also clearly holds true for the EBM model. Also, Proposition 3 in
\cite{MBM17} shows that a similar sub-additive property is satisfied for the GM model for various matching policies.
The result below generalizes the former result to a larger class of matching policies, thereby providing an analog to the latter result in the bipartite case,
\begin{proposition}
\label{prop:sub}
The matching policies {\sc fcfs}, {\sc lcfs}, {\sc rand} and {\sc ml} are sub-additive.
\end{proposition}
Before proving Proposition \ref{prop:sub} let us observe that, similarly to Example 3 of \cite{MBM17},
\begin{ex}[{\sc ms} is not sub-additive]
\label{ex:MS}
\rm
Take as a matching graph, the graph of Figure \ref{Fig:NN}, and the arrival scenario depicted in Figure \ref{Fig:MS}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (1,4) -- (1,1);
\draw[-] (4,4) -- (4,1);
\draw[-] (4,4) -- (1,4);
\draw[-] (4,1) -- (1,1);
\draw[-] (1.5,3) -- (1.5,2);
\draw[-] (1.5,3) -- (2.5,2);
\draw[-] (2.5,3) -- (2.5,2);
\draw[-] (2.5,3) -- (3.5,2);
\draw[-] (3.5,2)-- (3.5,3);
\fill (1.5,3) circle (2.5pt) node[above] {\small{1}} ;
\fill (2.5,3) circle (2.5pt) node[above] {\small{2}} ;
\fill (3.5,3) circle (2.5pt) node[above] {\small{3}} ;
\fill (1.5,2) circle (2.5pt) node[below] {\small{$\bar 1$}} ;
\fill (2.5,2) circle (2.5pt) node[below] {\small{$\bar 2$}} ;
\fill (3.5,2) circle (2.5pt) node[below] {\small{$\bar 3$}} ;
\end{tikzpicture}
\caption[smallcaption]{The 'NN' graph.} \label{Fig:NN}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (-1,0.5) -- (6,0.5);
\draw[-] (-1,0.5) -- (-1,-4.5);
\draw[-] (-1,-2) -- (6,-2);
\draw[-] (-1,-4.5) -- (6,-4.5);
\draw[-] (6,-4.5) -- (6,0.5);
\fill (0,0) circle (2pt) node[above] {\small{3}} ;
\fill (1,0) circle (2pt) node[above] {\small{3}} ;
\draw[-,very thick] (1.45,0.5) -- (1.45,-1.5);
\draw[-,very thick] (1.55,0.5) -- (1.55,-1.5);
\fill (2,0) circle (2pt) node[above] {\small{3}} ;
\draw[-] (2,0)-- (4,-1);
\fill (3,0) circle (2pt) node[above] {\small{3}} ;
\fill (4,0) circle (2pt) node[above] {\small{1}} ;
\draw[-] (4,0)-- (2,-1);
\fill (5,0) circle (2pt) node[above] {\small{2}} ;
\draw[-] (5,0)-- (3,-1);
\fill (0,-1) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (1,-1) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (2,-1) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (3,-1) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (4,-1) circle (2pt) node[below] {\small{$\bar 3$}} ;
\fill (5,-1) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (0,-3) circle (2pt) node[above] {\small{3}} ;
\draw[-] (0,-3)-- (4,-4);
\fill (1,-3) circle (2pt) node[above] {\small{3}} ;
\fill (2,-3) circle (2pt) node[above] {\small{3}} ;
\fill (3,-3) circle (2pt) node[above] {\small{3}} ;
\fill (4,-3) circle (2pt) node[above] {\small{1}} ;
\draw[-] (4,-3)-- (3,-4);
\fill (5,-3) circle (2pt) node[above] {\small{2}} ;
\fill (0,-4) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (1,-4) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (2,-4) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (3,-4) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (4,-4) circle (2pt) node[below] {\small{$\bar 3$}} ;
\fill (5,-4) circle (2pt) node[below] {\small{$\bar 1$}} ;
\end{tikzpicture}
\caption[smallcaption]{A non sub-additive {\sc ms} matching on the 'NN' graph.} \label{Fig:MS}
\end{center}
\end{figure}
As illustrated in Figure \ref{Fig:MS} we have
\begin{align*}
\left|C_{\textsc{ms}}(333312,\bar 1\bar 1\bar 1\bar 2 \bar 3 \bar 1)\right| &= |3332| > |33| + |3| = \left|C_{\textsc{ms}}(33,\bar 1\bar 1)\right| + \left|C_{\textsc{ms}}(3312,\bar 1\bar 2 \bar 3 \bar 1)\right|;\\
\left|S_{\textsc{ms}}(333312,\bar 1\bar 1\bar 1\bar 2 \bar 3 \bar 1)\right| &= |\bar 1\bar 1\bar 1\bar 1| > |\bar 1\bar 1| + |\bar 1| = \left|S_{\textsc{ms}}(33,\bar 1\bar 1)\right| + \left|S_{\textsc{ms}}(3312,\bar 1\bar 2 \bar 3 \bar 1)\right|.
\end{align*}
\end{ex}
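As a quick numerical sanity check (not part of the proof), the following self-contained Python sketch replays both scenarios of Figure \ref{Fig:MS} at the level of class details; ties are broken towards the smaller class index, which reproduces the choices displayed in the figure.
\begin{verbatim}
from collections import Counter

E = {(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)}   # the 'NN' graph; server class j stands for "j bar"
S = {c: [s for s in (1, 2, 3) if (c, s) in E] for c in (1, 2, 3)}
C = {s: [c for c in (1, 2, 3) if (c, s) in E] for s in (1, 2, 3)}

def ms_leftover(pairs):
    """Run Match the Shortest on a list of (customer, server) arrivals and
    return the numbers of unmatched customers and servers.  Ties are broken
    towards the smaller class index, playing the role of the preference lists."""
    x, y = Counter(), Counter()
    for c, s in pairs:
        P = [j for j in S[c] if y[j] > 0]       # compatible non-empty server queues
        Q = [i for i in C[s] if x[i] > 0]       # compatible non-empty customer queues
        if P and Q:
            y[min(P, key=lambda j: (y[j], j))] -= 1
            x[min(Q, key=lambda i: (x[i], i))] -= 1
        elif P:                                  # c matched in the buffer, s stored
            y[min(P, key=lambda j: (y[j], j))] -= 1
            y[s] += 1
        elif Q:                                  # s matched in the buffer, c stored
            x[min(Q, key=lambda i: (x[i], i))] -= 1
            x[c] += 1
        elif (c, s) not in E:                    # no possible match: store both
            x[c] += 1
            y[s] += 1
    return sum(x.values()), sum(y.values())

whole = [(3, 1), (3, 1), (3, 1), (3, 2), (1, 3), (2, 1)]
print(ms_leftover(whole))          # (4, 4) : the whole input, bottom panel
print(ms_leftover(whole[:2]))      # (2, 2) : first block
print(ms_leftover(whole[2:]))      # (1, 1) : second block -- 4 > 2 + 1 on both sides
\end{verbatim}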
The remainder of this section is devoted to the proof of Proposition \ref{prop:sub}. The result is already known for $\phi=\textsc{fcfs}$ (this is Lemma 4 in \cite{ABMW17}).
The proof for non-expansive matching policies (including {\sc rand} and {\sc ml}) is given in Sub-section \ref{subsec:nonexp}, and that for {\sc lcfs} in Sub-section \ref{subsec:lcfs}.
\subsection{Non-expansiveness}
\label{subsec:nonexp}
The non-expansiveness of the class detail, as defined in \cite{MoyPer17} for GM models, is a
Lipschitz property of the driving map of the recursion that is of interest for constructing stochastic approximations of the model
under consideration (see Section 6 in \cite{MoyPer17}). As we show in Proposition \ref{prop:nonexp}, this property is in fact stronger than
sub-additivity for the buffer detail sequence,
\begin{definition}
A class-admissible policy $\phi$ is said {\em non-expansive} if
for any $(x,y)$ and $(x',y')$ in $\mathbb E$, and for any $(c,s) \in F$ and any $(\sigma,\gamma) \in \mathbb S\times \mathbb C$
that can be drawn by $(\nu_\phi,\rho_\phi)$,
\begin{equation}
\label{eq:defnonexp1}
\|(x',y')\ccc_{\phi}(c,s,\sigma,\gamma) - (x,y)\ccc_{\phi}(c,s,\sigma,\gamma)\| \le \|(x',y')-(x,y)\|.
\end{equation}
\end{definition}
The following results, transposing Lemma 7 in \cite{MoyPer17} and Propositions 4 and 5 in \cite{MBM17} to the EBM model, are proven in the Appendix,
\begin{proposition}
\label{prop:nonexp1}
Any {\sc rand} matching policy is non-expansive.
\end{proposition}
\begin{proposition}
\label{prop:nonexp2}
{\sc ml} is non-expansive.
\end{proposition}
As in Section 4.2.2 of \cite{MBM17},
\begin{proposition}
\label{prop:nonexp}
Any non-expansive matching policy is sub-additive.
\end{proposition}
\begin{proof}
Fix a non-expansive matching policy $\phi$. Keeping the
notations of Definition \ref{def:subadd}, write ${\mathbf c}:={\mathbf c}'{\mathbf c}''$, ${\mathbf s}:={\mathbf s}'{\mathbf s}''$, ${\boldsymbol{\sigma}}:={\boldsymbol{\sigma}}'{\boldsymbol{\sigma}}''$, ${\boldsymbol{\gamma}}:={\boldsymbol{\gamma}}'{\boldsymbol{\gamma}}''$, and set
$({\mathbf w}',{\mathbf z}'):=Q_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')$ and $({\mathbf w},{\mathbf z}):=Q_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})$.
Let us define the two sequences $\{(x_n,y_n)\}$ and $\{(x'_n,y'_n)\}$ to be the class
details of the system at arrival times, starting respectively from
an empty system and from a system of buffer detail $({\mathbf w}',{\mathbf z}')$, and
having the common input $\left({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}''\right)$.
In other words, we set
\[\left\{\begin{array}{ll}
(x_0,y_0) &= \left(\mathbf 0,\mathbf 0\right);\\
(x'_0,y'_0) &= \left[\left(C_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\,,\,S_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right)\right]\\
\end{array}\right.\]
and by induction,
\[\left\{\begin{array}{ll}
\left(x_{n+1},y_{n+1}\right) &= \left(x_{n},y_{n}\right)\ccc_{\phi} \left(c''_{n+1},s''_{n+1},\sigma''_{n+1},\gamma''_{n+1}\right),\,n\in\left\{0,\dots,|{\mathbf c}''|-1\right\};\\
\left(x'_{n+1},y'_{n+1}\right)&=\left(x'_{n},y'_{n}\right)\ccc_{\phi}\left(c''_{n+1},s''_{n+1},\sigma''_{n+1},\gamma''_{n+1}\right),\,n\in\left\{0,\dots,|{\mathbf c}''|-1\right\}.
\end{array}\right.\]
Applying (\ref{eq:defnonexp1}) at all $n$, we obtain by induction that for all $n
\in \left\{0,\dots,|{\mathbf c}''|\right\}$,
\begin{equation}
\|(x'_n,y'_n) -(x_n,y_n)\| \le \|(x'_0,y'_0)-(x_0,y_0)\|= |{\mathbf w}'|+|{\mathbf z}'|.\label{eq:nonexprec}
\end{equation}
Now observe that by construction, $\left(x_{|{\mathbf c}''|},y_{|{\mathbf c}''|}\right)=\left(\left[C_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right]\,,\,\left[S_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right]\right)$ which,
together with (\ref{eq:nonexprec}), implies that
\begin{align}
|{\mathbf w}|+|{\mathbf z}| &= \left\|\left(x'_{|{\mathbf c}''|},y'_{|{\mathbf c}''|}\right)\right\|\nonumber\\
&\le \left\|\left(x'_{|{\mathbf c}''|},y'_{|{\mathbf c}''|}\right) -\left(x_{|{\mathbf c}''|},y_{|{\mathbf c}''|}\right)\right\|+\left\|\left(x_{|{\mathbf c}''|},y_{|{\mathbf c}''|}\right)\right\|\nonumber\\
&\le \left|C_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right| + \left|S_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right| + \left|C_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|
+ \left|S_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|.\label{eq:Auster1}
\end{align}
Remember that the couples of words ${\mathbf c}'$ and ${\mathbf s}'$, and ${\mathbf c}''$ and
${\mathbf s}''$, are not necessarily of the same size, and let us
denote
\[q'= |{\mathbf s}'|-|{\mathbf c}'|\quad\mbox{ and }\quad q''= |{\mathbf s}''|-|{\mathbf c}''|.\]
By the very definition of a matching of ${\mathbf c}'$ and ${\mathbf s}'$ we have that
\[\left|S_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right| - \left|C_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right| = |{\mathbf s}'|-|{\mathbf c}'| = q'\] and likewise, that
$\left|S_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|-\left|C_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|=q''$ and $\left|S_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\right|-\left|C_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\right|=|v|-|u|=q'+q''$. All in all, we
obtain with (\ref{eq:Auster1}) that
\[\left\{\begin{array}{ll}
2\left|C_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\right|+q'+q'' &\le 2\left|C_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right|+q' + 2\left|C_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|+q'';\\
2\left|S_\phi({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}},{\boldsymbol{\gamma}})\right|-(q'+q'') &\le 2\left|S_\phi({\mathbf c}',{\mathbf s}',{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}')\right|-q' + 2\left|S_\phi({\mathbf c}'',{\mathbf s}'',{\boldsymbol{\sigma}}'',{\boldsymbol{\gamma}}'')\right|-q''.
\end{array}\right.\]
We conclude by canceling out $q'+q''$ on both sides of each of the two inequalities above.
\end{proof}
\subsection{Proof of Proposition \ref{prop:sub} for {\sc lcfs}}
\label{subsec:lcfs}
As is the case for {\sc fcfs}, the policy {\sc lcfs} does not satisfy a non-expansiveness property similar to
(\ref{eq:defnonexp1}). We resort to a direct argument, similar to that of Section 4.2.3 in \cite{MBM17}.
For short, we drop the dependence on $(\sigma,\gamma)$ in the notations $M_{\textsc{lcfs}}(.)$, $Q_{\textsc{lcfs}}(.)$, $C_{\textsc{lcfs}}(.)$ and $S_{\textsc{lcfs}}(.)$, as the {\sc lcfs} matchings do not depend on any list of preferences.
{\bf Step I:} First, suppose that $|{\mathbf c}'|=1$ and $|{\mathbf s}'|=0$.
We need to show that $|C_{\textsc{lcfs}}({\mathbf c},{\mathbf s})| \le |C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|+1$ and $|S_{\textsc{lcfs}}({\mathbf c},{\mathbf s})| \le |S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|$. There are three different cases:
\begin{itemize}
\item[(a)] If $c'_1$ is unmatched in $M_{\textsc{lcfs}}({\mathbf c},{\mathbf s})$, then $c'_1$ is incompatible with ${\mathbf s}''_1$, otherwise the two items would have been matched.
In turn, it follows from the definition of {\sc lcfs} that the presence of $c'_1$ does not influence the choice of match of any
server $s''_j$ that is matched in $M_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')$, even if $(c'_1,s''_j) \in E$. So $|C_{\textsc{lcfs}}({\mathbf c},{\mathbf s})|=|C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|+1$ and $|S_{\textsc{lcfs}}({\mathbf c},{\mathbf s})|= |S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|$.
\item[(b)] Whenever $c'_1$ is matched in $M_{\textsc{lcfs}}({\mathbf c},{\mathbf s})$ with a server $s''_{j_1}$ that was unmatched in $M_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}''),$ any
matched customer $c''_i$ in $M_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')$ that is compatible with $s''_{j_1}$ has found in ${\mathbf s}''$ a more recent compatible server $s''_j$.
The matching of $c''_i$ with $s''_j$ still occurs in $M_{\textsc{lcfs}}({\mathbf c},{\mathbf s})$. Thus, as above, the matching induced in $M_{\textsc{lcfs}}({\mathbf c},{\mathbf s})$ by the nodes of ${\mathbf c}''$ is not affected by the
match $(c'_1,s''_{j_1})$, so $|C_{\textsc{lcfs}}({\mathbf c},{\mathbf s})|=|C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|$ and $|S_{\textsc{lcfs}}({\mathbf c},{\mathbf s})|=|S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|-1$.
\item[(c)] Suppose now that $c'_1$ is matched with a server $s''_{j_1}$ that was matched in $M_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')$ to some customer $c''_{i_1}$.
This occurs if and only if $c'_1 {\--} s''_{j_1}$ and $i_1 \ge j_1$, since otherwise, under {\sc lcfs}, $s''_{j_1}$ would prioritize $c''_{i_1}$ over $c'_1$ upon arrival.
We thus need to find a new match for $c''_{i_1}$. Either there is none and we stop, or we find a match, say $s''_{j_2}$.
The new pair $(c''_{i_1}, s''_{j_2})$ may in turn break an old pair $(c''_{i_2}, s''_{j_2})$. We continue until either $c''_{i_k}$ cannot find a new match or $s''_{j_k}$ was not previously matched.
In the first case, we end up with $|C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|+1$ unmatched customers and $|S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|$ unmatched servers; and in the second case, with $|C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|$ unmatched customers and
$|S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')|-1$ unmatched servers.
\end{itemize}
The case with $|{\mathbf c}'|=0$ and $|{\mathbf s}'|=1$ is symmetrical: then $\left|C_{\textsc{lcfs}}({\mathbf c},{\mathbf s})\right| \le \left|C_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')\right|$ and
$\left|S_{\textsc{lcfs}}({\mathbf c},{\mathbf s})\right| \le \left|S_{\textsc{lcfs}}({\mathbf c}'',{\mathbf s}'')\right|+1$.
\bigskip
{\bf Step II:} Consider now arbitrary finite words ${\mathbf c}'$ and ${\mathbf s}'$. Note that if $(c'_i, s'_j) \in M_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')$ then
$(c'_i,s'_j) \in M_{\textsc{lcfs}}({\mathbf c},{\mathbf s})$, as is the case for any admissible policy. Thus we have
\begin{equation}
\label{eq:tahri0}
\left\{\begin{array}{ll}
C_{\textsc{lcfs}}({\mathbf c},{\mathbf s}) &=C_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}''\,,\,S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf s}''\right);\\
S_{\textsc{lcfs}}({\mathbf c},{\mathbf s}) &=S_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}''\,,\,S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf s}''\right).
\end{array}\right.
\end{equation}
We will consider one by one the customers in $C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')$ and servers in $S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')$, from right to left.
The order between customers and servers does not matter; we choose to consider customers first and then servers.
For all $1 \leq i \leq \left|C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right|$ (resp. $1 \leq i \leq \left|S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right|$), let $C^i$ (resp., $S^i$)
be the suffix of length $i$ of $C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')$ (resp., $S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')$).
By Step I, for $1 \leq i \leq \left|C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right|-1$ we have that
\begin{equation}
\label{eq:tahri1}
\left\{\begin{array}{ll}
\left|C_{\textsc{lcfs}}\left(C^{i+1}{\mathbf c}'', {\mathbf s}''\right)\right| &\leq 1+\left|C_{\textsc{lcfs}}\left(C^{i}{\mathbf c}'', {\mathbf s}''\right)\right| \\%\leq i+1 +\left|C_{\textsc{lcfs}}\left({\mathbf c}'', {\mathbf s}''\right)\right|;\\
\left|S_{\textsc{lcfs}}\left(C^{i+1}{\mathbf c}'', {\mathbf s}''\right)\right| &\leq \left|S_{\textsc{lcfs}}\left(C^{i}{\mathbf c}'', {\mathbf s}''\right)\right|.
\end{array}\right.
\end{equation}
Similarly, by considering then the servers from right to left we obtain that
for $1 \leq i \leq \left|S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right|-1$,
\begin{equation}
\label{eq:tahri2}
\left\{\begin{array}{ll}
\left|C_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'', S^{i+1}{\mathbf s}''\right)\right| &\leq \left|C_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'',S^{i}{\mathbf s}''\right)\right|\\
\left|S_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'', S^{i+1}{\mathbf s}''\right)\right| &\leq 1+\left|S_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'',S^{i}{\mathbf s}''\right)\right|.
\end{array}\right.
\end{equation}
Applying (\ref{eq:tahri1}) by induction, and then (\ref{eq:tahri2}) by induction, we obtain that
\begin{equation*}
\left\{\begin{array}{ll}
\left|C_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'', S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf s}''\right)\right| &\leq \left|C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right|+\left|C_{\textsc{lcfs}}\left({\mathbf c}'', {\mathbf s}''\right)\right|\\
\left|S_{\textsc{lcfs}}\left(C_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf c}'', S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}'){\mathbf s}''\right)\right| &\leq \left|S_{\textsc{lcfs}}({\mathbf c}',{\mathbf s}')\right| + \left|S_{\textsc{lcfs}}\left({\mathbf c}'', {\mathbf s}''\right)\right|,
\end{array}\right.
\end{equation*}
which, together with (\ref{eq:tahri0}), concludes the proof for {\sc lcfs}.
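To illustrate Proposition \ref{prop:sub} for {\sc lcfs}, the following Python sketch simulates one plausible implementation of the {\sc lcfs} matching map and tests the two sub-additivity inequalities on random inputs. The tie-breaking conventions in the code (buffer-first priority, with the incoming customer scanning the buffer before the incoming server), the BM-type arrival set and the choice of matching graph are assumptions made for the sake of the example, and are not prescribed by the text.
\begin{verbatim}
# Sketch: an LCFS matching map and an empirical check of sub-additivity.
# Assumptions: buffer-first priority, incoming customer scans the buffer first,
# matching graph = middle graph of Figure Fig:sep (i compatible with j iff i != j),
# all couples may arrive (BM-type arrival set).
import random

C, S = [1, 2, 3], [1, 2, 3]
def compatible(c, s):
    return c != s

def lcfs(couples):
    """Return the lists of unmatched customers and servers, in arrival order."""
    buf_c, buf_s = [], []
    for c, s in couples:
        # incoming customer scans buffered servers, most recent first
        j = next((k for k in reversed(range(len(buf_s))) if compatible(c, buf_s[k])), None)
        if j is not None:
            del buf_s[j]
            c = None
        # incoming server scans buffered customers, most recent first
        i = next((k for k in reversed(range(len(buf_c))) if compatible(buf_c[k], s)), None)
        if i is not None:
            del buf_c[i]
            s = None
        # the two incoming items are matched together if both are still free
        if c is not None and s is not None and compatible(c, s):
            c = s = None
        if c is not None:
            buf_c.append(c)
        if s is not None:
            buf_s.append(s)
    return buf_c, buf_s

random.seed(0)
violations = 0
for _ in range(10000):
    word = [(random.choice(C), random.choice(S)) for _ in range(random.randint(2, 12))]
    cut = random.randint(1, len(word) - 1)
    uc, us = lcfs(word)
    uc1, us1 = lcfs(word[:cut])
    uc2, us2 = lcfs(word[cut:])
    if len(uc) > len(uc1) + len(uc2) or len(us) > len(us1) + len(us2):
        violations += 1
print("violations of sub-additivity found:", violations)   # expected: 0
\end{verbatim}
On the runs we tried, no violation is reported, in accordance with Proposition \ref{prop:sub}.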
\section{Bi-separable graphs}
\label{sec:bisep}
We introduce a class of bipartite matching graphs which play an important role in the stability study that follows: the bi-separable graphs,
which can be seen as the analog, for bipartite graphs, of the separable graphs introduced in \cite{MaiMoy16}.
\begin{definition}
Let ${\mathcal{G}}=({\mathcal{C}} \cup {\mathcal S},E)$ be a matching graph. An {\em independent set} of ${\mathcal{G}}$ is a non-empty bipartite set $A \cup B$, where $A \subset {\mathcal{C}}$ and $B \subset{\mathcal S}$, such that
$A \times B \subset ({\mathcal{C}}\times{\mathcal S}) \setminus E.$
Denote for any independent set $I:=A\cup B$ of ${\mathcal{G}}$,
\begin{equation}
\label{eq:defCSnot}
{\mathcal{C}}_\circ (I) ={\mathcal{C}} \setminus \left(A \cup {\mathcal{C}}(B)\right)\quad\quad\mbox{ and }\quad\quad
{\mathcal S}_\circ (I) ={\mathcal S} \setminus \left(B \cup {\mathcal S}(A)\right).
\end{equation}
The independent set $I$ is then said to be {\em maximal} whenever ${\mathcal{C}}_\circ\left(I\right)=\emptyset\mbox{ and }{\mathcal S}_\circ\left(I\right)=\emptyset.$
\end{definition}
\begin{definition}
A bipartite matching graph ${\mathcal{G}}=\left({\mathcal{C}} \cup {\mathcal S},E\right)$ is said to be
{\em bi-separable} of order $p$, $p\ge 2$, whenever there exists a partition of ${\mathcal{C}}\cup{\mathcal S}$ into $p$ independent sets
$I_1=A_1 \cup B_1,\dots,I_p=A_p \cup B_p$ of ${\mathcal{G}}$ such that any $I_i$ that is not maximal (if any) is such that $A_i=\emptyset$ or $B_i=\emptyset$. In other words, the complement bipartite graph $\overline{{\mathcal{G}}}$ of ${\mathcal{G}}$ has $p$ disjoint bipartite-connected components $I_1,\dots,I_p$.
\end{definition}
\noindent Clearly,
\begin{proposition}
If the bipartite graph ${\mathcal{G}}=\left({\mathcal{C}} \cup {\mathcal S},E\right)$ is bi-separable, then any bipartite graph
$\hat{{\mathcal{G}}}=\left({\mathcal{C}} \cup {\mathcal S}, \hat E\right)$ with $E \subset \hat E$
is bi-separable.
\end{proposition}
\noindent Also,
\begin{lemma}
\label{lemma:separable}
Let ${\mathcal{G}}=\left({\mathcal{C}} \cup {\mathcal S},E\right)$ be bi-separable of order $p$, with maximal independent sets $I_1=A_1\cup B_1,\dots,I_p=A_p\cup B_p$. Then, for any $i \in \llbracket 1,p \rrbracket$ and
any $c,c' \in A_i$, $s,s' \in B_i$, we have \[{\mathcal S}(c)={\mathcal S}(c')={\mathcal S}\setminus B_i\quad\quad\mbox{ and }\quad\quad {\mathcal{C}}(s)={\mathcal{C}}(s')={\mathcal{C}}\setminus A_i.\]
\end{lemma}
\begin{proof}
Let $c,c' \in A_i$. Suppose that there exists $\ell \in{\mathcal S}$ such that $(c,\ell) \in E$ but $(c',\ell) \not\in E$.
Then, $\ell \in {\mathcal S}(A_i),$ and thus $\ell \not\in B_i$. But $(c',\ell) \in \bar E$, so there exists $j \in \llbracket 1,p \rrbracket$ such that $(c',\ell) \in A_j\times B_j$.
This implies in turn that $j=i$, an absurdity since $\ell \not\in B_i.$ Hence ${\mathcal S}(c)={\mathcal S}(c')$. The same argument shows that any $\ell\not\in B_i$ belongs to ${\mathcal S}(c)$, so that ${\mathcal S}(c)={\mathcal S}\setminus B_i$, and the statement for servers follows by exchanging the roles of customers and servers.
\end{proof}
Examples of bi-separable graphs are represented in Figure \ref{Fig:sep}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (1,3) -- (1.5,2);
\draw[-] (2,3) -- (1.5,2);
\draw[-] (2,3) -- (2.5,2);
\draw[-] (3,3) -- (2.5,2);
\fill (1,3) circle (2pt) node[above] {\small{1}} ;
\fill (1.5,2) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (2,3) circle (2pt) node[above] {\small{2}} ;
\fill (3,3) circle (2pt) node[above] {\small{3}} ;
\fill (2.5,2) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (5,2.5) node[] {$\longleftrightarrow$} ;
\draw[-] (6,3) -- (6,2);
\draw[-] (7,3) -- (7,2);
\fill (6,3) circle (2pt) node[above] {\small{1}} ;
\fill (6,2) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (7,3) circle (2pt) node[above] {\small{3}} ;
\fill (7,2) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (8,3) circle (2pt) node[above] {\small{2}} ;
\draw[-] (1,1) -- (1,0);
\draw[-] (1,1) -- (2,0);
\draw[-] (2,1) -- (1,0);
\draw[-] (2,1) -- (3,0);
\draw[-] (3,1) -- (2,0);
\draw[-] (3,1) -- (3,0);
\fill (1,1) circle (2pt) node[above] {\small{1}} ;
\fill (1,0) circle (2pt) node[below] {\small{$\bar 3$}} ;
\fill (2,1) circle (2pt) node[above] {\small{2}} ;
\fill (2,0) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (3,1) circle (2pt) node[above] {\small{3}} ;
\fill (3,0) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (5,0.5) node[] {$\longleftrightarrow$} ;
\draw[-] (6,1) -- (6,0);
\draw[-] (7,1) -- (7,0);
\draw[-] (8,1) -- (8,0);
\fill (6,1) circle (2pt) node[above] {\small{1}} ;
\fill (6,0) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (7,1) circle (2pt) node[above] {\small{2}} ;
\fill (7,0) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (8,1) circle (2pt) node[above] {\small{3}} ;
\fill (8,0) circle (2pt) node[below] {\small{$\bar 3$}} ;
\draw[-] (-0.5,-1) -- (0,-2);
\draw[-] (-0.5,-1) -- (1,-2);
\draw[-] (0.5,-1) -- (0,-2);
\draw[-] (0.5,-1) -- (1,-2);
\draw[-] (1.5,-1) -- (1,-2);
\draw[-] (1.5,-1) -- (2,-2);
\draw[-] (1.5,-1) -- (0,-2);
\draw[-] (1.5,-1) -- (3,-2);
\draw[-] (2.5,-1) -- (2,-2);
\draw[-] (2.5,-1) -- (3,-2);
\draw[-] (3.5,-1) -- (2,-2);
\draw[-] (3.5,-1) -- (3,-2);
\fill (-0.5,-1) circle (2pt) node[above] {\small{1}} ;
\fill (0.5,-1) circle (2pt) node[above] {\small{2}} ;
\fill (1.5,-1) circle (2pt) node[above] {\small{3}} ;
\fill (2.5,-1) circle (2pt) node[above] {\small{4}} ;
\fill (3.5,-1) circle (2pt) node[above] {\small{5}} ;
\fill (0,-2) circle (2pt) node[below] {\small{$\bar 4$}} ;
\fill (1,-2) circle (2pt) node[below] {\small{$\bar 3$}} ;
\fill (2,-2) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (3,-2) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (5,-1.5) node[]{$\longleftrightarrow$};
\draw[-] (6,-1) -- (6,-2);
\draw[-] (7,-1) -- (7,-2);
\draw[-] (6,-1) -- (7,-2);
\draw[-] (7,-1) -- (6,-2);
\draw[-] (8,-1) -- (8,-2);
\draw[-] (9,-1) -- (9,-2);
\draw[-] (8,-1) -- (9,-2);
\draw[-] (9,-1) -- (8,-2);
\fill (6,-1) circle (2pt) node[above] {\small{1}} ;
\fill (7,-1) circle (2pt) node[above] {\small{2}} ;
\fill (8,-1) circle (2pt) node[above] {\small{4}} ;
\fill (9,-1) circle (2pt) node[above] {\small{5}} ;
\fill (6,-2) circle (2pt) node[below] {\small{$\bar 2$}} ;
\fill (7,-2) circle (2pt) node[below] {\small{$\bar 1$}} ;
\fill (8,-2) circle (2pt) node[below] {\small{$\bar 4$}} ;
\fill (9,-2) circle (2pt) node[below] {\small{$\bar 3$}} ;
\fill (10,-1) circle (2pt) node[above] {\small{3}} ;
\end{tikzpicture}
\caption[smallcaption]{Three bi-separable graphs of order 3, together with their bipartite complement graphs.}
\label{Fig:sep}
\end{center}
\end{figure}
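The bi-separability of a given graph can be tested mechanically through the characterization by the bipartite complement: the following Python sketch computes the connected components of $\overline{{\mathcal{G}}}$ and checks independence and maximality of each two-sided component. The edge set below is our reading of the middle graph of Figure \ref{Fig:sep}, and is only an illustrative choice.
\begin{verbatim}
# Sketch: reading off the partition I_1,...,I_p from the bipartite complement.
from itertools import product

customers = {1, 2, 3}
servers = {1, 2, 3}                       # server i stands for \bar i in the text
E = {(c, s) for c, s in product(customers, servers) if c != s}
E_bar = set(product(customers, servers)) - E   # complement bipartite graph

def components(cust, serv, edges):
    """Connected components of the bipartite graph (cust + serv, edges)."""
    nodes = {('c', c) for c in cust} | {('s', s) for s in serv}
    adj = {v: set() for v in nodes}
    for c, s in edges:
        adj[('c', c)].add(('s', s))
        adj[('s', s)].add(('c', c))
    comps, seen = [], set()
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

parts = components(customers, servers, E_bar)
print("order of the candidate partition:", len(parts))
for comp in parts:
    A = {c for (t, c) in comp if t == 'c'}
    B = {s for (t, s) in comp if t == 's'}
    independent = all((c, s) not in E for c in A for s in B)
    C_circ = customers - (A | {c for c in customers for s in B if (c, s) in E})
    S_circ = servers - (B | {s for s in servers for c in A if (c, s) in E})
    print(sorted(A), sorted(B), "independent:", independent,
          "maximal:", not C_circ and not S_circ)
\end{verbatim}
For this graph the sketch returns three components $\{i\}\cup\{\bar i\}$, $i=1,2,3$, all independent and maximal, consistently with Figure \ref{Fig:sep}.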
\section{(Strong) erasing couples}
\label{sec:erase}
{\em Erasing couples} and {\em strong erasing couples} generalize to EBM models the definitions of erasing words and strong erasing words for GM models
(Definitions 4 and 6 in \cite{MBM17}). They will play an important role in the construction below.
\subsection{Definitions}
\label{subsec:deferasing}
\begin{definition}
Let ${\mathcal{B}} =({\mathcal{C}},{\mathcal S},E,F)$ be a bipartite matching structure, $\phi$ be an admissible matching policy and $({\mathbf w},{\mathbf z}) \in \mathcal{U}_0$.
An admissible input $({\mathbf c},{\mathbf s})$ is said to be an {\em erasing couple} of $({\mathbf w},{\mathbf z})$ for $({\mathcal{B}},\phi)$ if for any words ${\boldsymbol{\sigma}},{\boldsymbol{\sigma}}'\in \mathbb S^*$ and ${\boldsymbol{\gamma}},{\boldsymbol{\gamma}}'\in \mathbb C^*$
such that $|{\boldsymbol{\sigma}}|=|{\boldsymbol{\gamma}}|=|{\mathbf w}|$ and $|{\boldsymbol{\sigma}}'|=|{\boldsymbol{\gamma}}'|=|{\mathbf c}|$, we have that
\begin{equation}
\label{eq:deferase}
Q_\phi\left({\mathbf c},{\mathbf s},{\boldsymbol{\sigma}}',{\boldsymbol{\gamma}}'\right)=\emptyset\quad\mbox{ and }\quad
Q_\phi\left({\mathbf w}{\mathbf c},{\mathbf z}{\mathbf s},{\boldsymbol{\sigma}}\msg',{\boldsymbol{\gamma}}\mga'\right)=\emptyset.
\end{equation}
\end{definition}
Condition (\ref{eq:deferase}) means that the input $({\mathbf c},{\mathbf s})$ is completely matchable, alone and together with $({\mathbf w},{\mathbf z})$, for any fixed lists of preferences.
\begin{definition}
Let ${\mathcal{B}} =({\mathcal{C}},{\mathcal S},E,F)$ be a bipartite matching structure and $\phi$ be a matching policy.
An admissible input $({\mathbf c},{\mathbf s})$ is said to be a {\em strong erasing couple} for $({\mathcal{B}},\phi)$ if for any words ${\boldsymbol{\sigma}} \in \mathbb S^*$ and ${\boldsymbol{\gamma}} \in \mathbb C^*$ such that $|{\boldsymbol{\sigma}}|=|{\boldsymbol{\gamma}}|=|{\mathbf c}|$,
\begin{align}
\mbox{For any suffixes } \breve{{\mathbf c}},\breve{{\mathbf s}},\breve{{\boldsymbol{\sigma}}}\mbox{ and }\breve{{\boldsymbol{\gamma}}} \mbox{ of }{\mathbf c},{{\mathbf s}},{{\boldsymbol{\sigma}}}\mbox{ and }{{\boldsymbol{\gamma}}}\mbox{ having the same length},\quad\quad
Q_\phi\left(\breve{{\mathbf c}},\breve{{\mathbf s}},\breve{{\boldsymbol{\sigma}}},\breve{{\boldsymbol{\gamma}}}\right)=\emptyset,\; \label{eq:defstrongcouple1}\\
\mbox{For any }(i,j) \in E^c\mbox{ and any }(\alpha,\beta)\in \mathbb S\times \mathbb C,\,\quad\quad Q_\phi(i{\mathbf c},j{\mathbf s},\alpha{\boldsymbol{\sigma}},\beta{\boldsymbol{\gamma}})=\emptyset.\label{eq:defstrongcouple2}
\end{align}
\end{definition}
In other words, the words $({\mathbf c},{\mathbf s})$ are completely matchable, as well as any of their suffixes of equal sizes, and this input ``deletes'' any admissible customer/server couple present in the system.
As is easily seen, a strong erasing couple $({\mathbf c},{\mathbf s})$ for $({\mathcal{B}},\phi)$ is an erasing couple of any single-letter buffer detail $(i,j)$, for $(i,j)\in E^c$.
\begin{lemma}
\label{lemma:erasing}
Let $\phi$ be a sub-additive matching policy, and $({\mathbf c},{\mathbf s})$ be a strong erasing couple for ${\mathcal{B}}$ and $\phi$.
Then for any $({\mathbf w},{\mathbf z}) \in \mathcal{U}$, we have that $\left|Q_\phi({\mathbf w}{\mathbf c},{\mathbf z}{\mathbf s})\right| < \left|Q_\phi({\mathbf w},{\mathbf z})\right|$.
\end{lemma}
\begin{proof}
From the sub-additivity of $\phi$ and the very definition of a strong erasing couple,
\begin{equation*}
\left|Q_\phi({\mathbf w}{\mathbf c},{\mathbf z}{\mathbf s})\right| \le \left|Q_\phi\left(w_1...w_{|{\mathbf w}|-1},z_1...z_{|{\mathbf w}|-1}\right)\right|+\left|Q_\phi\left(w_{|{\mathbf w}|}{\mathbf c},z_{|{\mathbf w}|}{\mathbf s}\right)\right|
=\left|Q_\phi({\mathbf w},{\mathbf z})\right|-1.
\end{equation*}
\end{proof}
\subsection{Structures admitting strong erasing couples}
\label{subsec:structureerasing}
Observe the following,
\begin{proposition}
\label{pro:strongcouple}
There exists a strong erasing couple for the bipartite structure $\mathcal B=({\mathcal{C}},{\mathcal S},E,F)$ and any admissible policy $\phi$,
whenever there exist two subsets $\check{{\mathcal{C}}} \subseteq {\mathcal{C}}$ and $\check{{\mathcal S}} \subseteq {\mathcal S}$ such that
\begin{itemize}
\item[(i)] ${\mathcal S}(\check{{\mathcal{C}}})={\mathcal S}\quad\mbox{ and }\quad {\mathcal{C}}(\check{{\mathcal S}})={\mathcal{C}};$
\item[(ii)] Denoting by $\check{{\mathcal{G}}}$ the sub-graph induced by $\check{{\mathcal{C}}} \cup \check{{\mathcal S}}$ in
$({\mathcal{C}} \cup {\mathcal S},E)$, there exists a (not necessarily simple) alternating path $\mathscr P=i_1 {\--} j_1 {\--} i_2 {\--} j_2 {\--} ... {\--} i_q {\--} j_q$ of $\check{{\mathcal{G}}}$ spanning the whole set $\check{{\mathcal{C}}} \cup \check{{\mathcal S}}$, and such that
for all $\ell \in \llbracket 1,q \rrbracket,\,(i_\ell,j_\ell) \in F,$
\end{itemize}
and one of the following three conditions is satisfied:
\begin{itemize}
\item[(iiia)] The sub-graph $\check{{\mathcal{G}}}$ is bipartite complete, i.e. for all $k,\ell \in \llbracket 1,q \rrbracket$, $(i_k,j_\ell) \in E$;
\item[(iiib)]
For all $\ell \in \llbracket 2,q \rrbracket,\,(i_\ell,j_{\ell-1})\in F$, and ${\mathcal{C}}(j_q) \cap \check{{\mathcal{C}}} = \{i_q\}$;
\item[(iiic)] For all $\ell \in \llbracket 2,q \rrbracket,\,(i_\ell,j_{\ell-1})\in F$, and ${\mathcal S}(i_1) \cap \check{{\mathcal S}} = \{j_1\}$.
\end{itemize}
\end{proposition}
\begin{proof}
All the arguments below hold true regardless of the lists of preferences of the incoming items, so we drop these parameters from all notation for short.
Assume that (i) and (ii) hold, and fix a couple $(i,j) \in \left({\mathcal{C}}\times {\mathcal S}\right) \setminus E.$ We examine separately the three alternatives for the third condition:
\medskip
$\underline{\mbox{Case (iiia)}}$. Suppose that (i), (ii) and (iiia) are satisfied, and consider the couple
\[({\mathbf c},{\mathbf s}) = \left(i_1...i_q\,,\,j_1j_2...j_q\right)\] which, from assumption (ii), clearly satisfies (\ref{eq:defstrongcouple1}).
From (i), both $i$ and $j$ have a neighbor in $\check{{\mathcal{C}}} \cup \check{{\mathcal S}}$, and we let $j_p$ be the first letter in ${\mathbf s}$ compatible with $i$, and $i_r$ be the first letter in ${\mathbf c}$ that is compatible with $j$.
We have three different cases, according to whichever comes first:
\begin{itemize}
\item[(a1)] If $r=p$ (i.e. the respective neighbors of $i$ and $j$ arrive simultaneously), $i_\ell$ is matched with $j_\ell$ for any $\ell \in \llbracket 1,p-1 \rrbracket$
(if the latter set is non-empty). Then,
$i$ and $j$ are respectively matched with $j_p$ and $i_p$, and all subsequent couples are matched together on the fly, so $Q_\phi\left(i{\mathbf c},j{\mathbf s}\right)=\emptyset$ (see the left display of
Figure \ref{Fig:strongcouple0}) and (\ref{eq:defstrongcouple2}) holds true.
\item[(a2)] Suppose now that $r < p$. First, for all $\ell \in \llbracket 1,r-1 \rrbracket$ (if this set is non-empty), the incoming couples $(i_\ell,j_\ell)$ are matched together upon arrival.
Then, $j$ is matched with $i_r$, and for all $\ell \in \llbracket r,p-1 \rrbracket$, the incoming $i_{\ell+1}$ is matched with $j_{\ell}$. Finally,
$j_p$ is matched with $i$ and, for all $\ell \in \llbracket p+1,q \rrbracket$ (if the latter set is non-empty), the incoming couples $(i_\ell,j_\ell)$ are matched together (middle display of Figure \ref{Fig:strongcouple0}),
and (\ref{eq:defstrongcouple2}) again holds.
\item[(a3)] The case $p < r$ is symmetric to (a2) (right display of Figure \ref{Fig:strongcouple0}).
\end{itemize}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.75]
\fill (-1,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (-0.5,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (0,0) node[above] {\scriptsize{...}} ;
\fill (0.5,0) circle (2pt) node[above] {\,\scriptsize{$i_{r-1}$}} ;
\fill (1,0) circle (2pt) node[above] {\,\,\scriptsize{$\mathbf{i_r}$}} ;
\fill (1.5,0) circle (2pt) node[above] {\,\,\,\scriptsize{$i_{r+1}$}} ;
\fill (2,0) node[above] {\,\,\scriptsize{...}} ;
\fill (2.5,0) circle (2pt) node[above] {\,\,\scriptsize{$i_q$}} ;
%
\fill (-1,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (-0.5,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (0,-1) node[below] {\scriptsize{...}};
\fill (0.5,-1) circle (2pt) node[below] {\,\scriptsize{$j_{p-1}$}} ;
\fill (1,-1) circle (2pt) node[below] {\,\,\scriptsize{$\mathbf{j_p}$}} ;
\fill (1.5,-1) circle (2pt) node[below] {\,\,\,\scriptsize{$j_{p+1}$}} ;
\fill (2,-1) node[below] {\,\,\scriptsize{...}};
\fill (2.5,-1) circle (2pt) node[below] {\,\,\scriptsize{$j_q$}} ;
%
\draw[-] (-1,0)-- (1,-1);
\draw[-] (-1,-1)-- (1,0);
\draw[-] (-0.5,0)-- (-0.5,-1);
\draw[-] (0.5,0)-- (0.5,-1);
\draw[-] (1.5,0)-- (1.5,-1);
\draw[-] (2.5,0)-- (2.5,-1);
\fill (4,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (4.5,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (5,0) node[above] {\scriptsize{...}} ;
\fill (5.5,0) circle (2pt) node[above] {\,\scriptsize{$i_{r-1}$}} ;
\fill (6,0) circle (2pt) node[above] {\,\,\scriptsize{$\mathbf{i_r}$}} ;
\fill (6.5,0) circle (2pt) node[above] {\,\,\,\scriptsize{$i_{r+1}$}} ;
\fill (7,0) node[above] {\,\scriptsize{...}} ;
\fill (7.5,0) node[above] {\scriptsize{...}};
\fill (8,0) circle (2pt) node[above] {\scriptsize{$i_p$}} ;
\fill (8.5,0) circle (2pt) node[above] {\,\scriptsize{$i_{p+1}$}} ;
\fill (9,0) node[above] {\,\,\scriptsize{...}};
\fill (9.5,0) circle (2pt) node[above] {\scriptsize{$i_{q}$}} ;
%
\fill (4,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (4.5,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (5,-1) node[below] {\scriptsize{...}} ;
\fill (5.5,-1) circle (2pt) node[below] {\,\scriptsize{$j_{r-1}$}} ;
\fill (6,-1) circle (2pt) node[below] {\,\,\scriptsize{$j_r$}} ;
\fill (6.5,-1) node[below] {\,\scriptsize{...}};
\fill (7,-1) node[below] {\,\scriptsize{...}};
\fill (7.5,-1) circle (2pt) node[below] {\scriptsize{$j_{p-1}$}} ;
\fill (8,-1) circle (2pt) node[below] {\,\,\scriptsize{$\mathbf{j_p}$}} ;
\fill (8.5,-1) circle (2pt) node[below] {\,\,\,\scriptsize{$j_{p+1}$}} ;
\fill (9,-1) node[below] {\,\,\scriptsize{...}};
\fill (9.5,-1) circle (2pt) node[below] {\,\scriptsize{$j_q$}} ;
%
\draw[-] (4,0)-- (8,-1);
\draw[-] (4,-1)-- (6,0);
\draw[-] (4.5,-1)-- (4.5,0);
\draw[-] (5.5,0)-- (5.5,-1);
\draw[-] (6.5,0)-- (6,-1);
\draw[-] (8,0)-- (7.5,-1);
\draw[-] (8.5,0)-- (8.5,-1);
\draw[-] (9.5,0)-- (9.5,-1);
\fill (11,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (11.5,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (12,0) node[above] {\scriptsize{...}} ;
\fill (12.5,0) circle (2pt) node[above] {\,\scriptsize{$i_{p-1}$}} ;
\fill (13,0) circle (2pt) node[above] {\,\scriptsize{$i_p$}} ;
\fill (13.5,0) node[above] {\scriptsize{...}} ;
\fill (14,0) node[above] {\scriptsize{...}} ;
\fill (14.5,0) circle (2pt) node[above] {\scriptsize{$i_{r-1}$}} ;
\fill (15,0) circle (2pt) node[above] {\,\,\scriptsize{$\mathbf{i_{r}}$}} ;
\fill (15.5,0) circle (2pt) node[above] {\,\,\,\scriptsize{$i_{r+1}$}} ;
\fill (16,0) node[above] {\,\,\scriptsize{...}} ;
\fill (16.5,0) circle (2pt) node[above] {\scriptsize{$i_{q}$}} ;
%
\fill (11,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (11.5,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (12,-1) node[below] {\scriptsize{...}} ;
\fill (12.5,-1) circle (2pt) node[below] {\,\scriptsize{$j_{p-1}$}} ;
\fill (13,-1) circle (2pt) node[below] {\,\,\scriptsize{$\mathbf{j_p}$}} ;
\fill (13.5,-1) circle (2pt) node[below] {\,\,\,\scriptsize{$j_{p+1}$}} ;
\fill (14,-1) node[below] {\,\scriptsize{...}};
\fill (14.5,-1) node[below] {\scriptsize{...}};
\fill (15,-1) circle (2pt) node[below] {\scriptsize{$j_r$}} ;
\fill (15.5,-1) circle (2pt) node[below] {\,\scriptsize{$j_{r+1}$}} ;
\fill (16,-1) node[below] {\,\,\scriptsize{...}};
\fill (16.5,-1) circle (2pt) node[below] {\scriptsize{$j_{q}$}} ;
%
\draw[-] (11,0)-- (13,-1);
\draw[-] (11,-1)-- (15,0);
\draw[-] (11.5,0)-- (11.5,-1);
\draw[-] (12.5,0)-- (12.5,-1);
\draw[-] (13,0)-- (13.5,-1);
\draw[-] (14.5,0)-- (15,-1);
\draw[-] (15.5,0)-- (15.5,-1);
\draw[-] (16.5,0)-- (16.5,-1);
\end{tikzpicture}
\caption[smallcaption]{Perfect matchings of $(i{\mathbf c},j{\mathbf s})$, case (iiia).} \label{Fig:strongcouple0}
\end{center}
\end{figure}
\bigskip
$\underline{\mbox{Case (iiib)}}$. Suppose now that (i), (ii) and (iiib) hold, and let $({\mathbf c},{\mathbf s})$ be the following couple of length $2q-1$,
\[({\mathbf c},{\mathbf s}) = \left(i_1...i_qi_2i_3...i_q\,,\,j_1j_2...j_qj_1j_2...j_{q-1}\right),\]
which is admissible from the first assertion of assumption (iiib), and clearly satisfies (\ref{eq:defstrongcouple1}) by the very definition of ${\mathscr{P}}$.
Again from (i), both $i$ and $j$ have a neighbor in ${\mathscr{P}}$,
and we let again $j_p$ be the first server compatible with $i$ in ${\mathbf s}$ and $i_r$ be the first customer compatible with $j$ in ${\mathbf c}$. The three sub-cases are as above:
\begin{itemize}
\item[(b1)] $p=r$. This case is analogous to (a1);
\item[(b2)] $p>r$, that is, the first match of $j$ arrives before that of $i$. This case is analogous to (a2);
\item[(b3)] $p<r$. First, $i_\ell$ is matched with $j_\ell$ for any $\ell \in \llbracket 1,p-1 \rrbracket$, if the latter set is non-empty.
Then $i$ is matched with $j_p$, and we apply the following procedure:
\begin{itemize}
\item we set $p_0:= p;$
\item as long as the set \[{\mathcal{A}}_k:=\{\ell \in \llbracket p_{k}+1,r-1 \rrbracket \,:\,i_{p_{k}} {\--} j_\ell\}\]
is non-empty, we set $p_{k+1}=\min {\mathcal{A}}_k.$
\item we let $m$ be the smallest index $k$ such that ${\mathcal{A}}_k = \emptyset$.
\end{itemize}
Observe that $m \ge 1$. Then, for any $k \in \llbracket 0,m-1 \rrbracket$, $i_{p_k}$ is matched with $j_{p_{k+1}}$, and for any $\ell \in \llbracket p_k+1,p_{k+1}-1 \rrbracket$
(if the latter is non-empty), $i_\ell$ is matched with $j_\ell$. Finally, for any $\ell \in \llbracket p_m+1,r-1 \rrbracket$ (if the latter set is non-empty), $i_\ell$ is matched with $j_\ell$, and $j$ is matched with
$i_r$. Thus, after $r$ arrivals and whatever the matching policy is, we are in the following alternative: either $i_{p_m} {\--} j_r$, in which case these two items are matched together and the system is empty, or
$i_{p_m} {\not\!\!\--} j_r$, and then only the couple $(i_{p_m},j_r)$ remains to be matched. In the first case, all subsequent entering couples of $({\mathbf c},{\mathbf s})$ are clearly matched together upon arrival, so
$Q_\phi(i{\mathbf c},j{\mathbf s})=\emptyset.$ Only the second case remains to be treated; in other words, it remains to prove that
\begin{equation}
\label{eq:strongcouple3}
Q_\phi\left(i_{p_m}i_{r+1}...i_qi_2i_3...i_q,j_rj_{r+1}...j_qj_1...j_{q-1}\right)=\emptyset.
\end{equation}
Again, we have two sub-cases:
\begin{itemize}
\item[$\star$] There exists an index $k \in \llbracket r+1,q\rrbracket$ such that $i_{p_m} {\--} j_k$ (take the smallest such $k$). Then,
whatever $\phi$ is, from the buffer-first assumption, for all $\ell \in \llbracket r,k-1 \rrbracket$, $j_\ell$ is matched with $i_{\ell +1}$, and then $i_{p_m}$ is matched
with $j_k$. Thus the matching is complete after $k$ arrivals, and $\phi$ matches all incoming couples on the fly after that, so (\ref{eq:strongcouple3}) holds; see the top display of Figure \ref{Fig:strongcouple}.
\item[$\star\star$] $r=q$, or $i_{p_m}$ does not find a match in the set $\{j_{r+1},...,j_q\}$. Then, for all $\ell \in \llbracket r,q-1 \rrbracket$ (if the latter set is non-empty),
$j_\ell$ is matched with $i_{\ell+1}$, so after $q$ arrivals only the couple $(i_{p_m},j_q)$ remains to be matched. Now, observe that the set
\[{\mathcal{B}}:=\{\ell \in \llbracket 1,p_m \rrbracket \,:\,i_{p_{m}} {\--} j_\ell\}\]
is non-empty, since, by the definition of ${\mathscr{P}}$, $i_{p_m} {\--} j_1$ if $p_m=1$, and otherwise $i_{p_m} {\--} j_{p_m-1}$ and $i_{p_m} {\--} j_{p_m}$.
We let $n:=\min{\mathcal{B}}$. Then, from the second assertion of (iiib), $j_q$ necessarily waits for the next $i_q$ to find a compatible item. Therefore, before that,
$\phi$ necessarily matches, for any $\ell \in \llbracket 1,n-1 \rrbracket$ (if the latter is non-empty), the second incoming $j_\ell$ with the second incoming $i_{\ell+1}$.
After that, $i_{p_m}$ is matched with $j_n$, and then for all $\ell \in \llbracket n+1,q-1 \rrbracket$ (if the latter set is non-empty), $\phi$ matches the second $i_\ell$ with the second $j_\ell$.
Finally, the still unmatched $j_q$ is matched with the second entering $i_q$ (see the bottom display of Figure \ref{Fig:strongcouple}).
Again, (\ref{eq:strongcouple3}) holds true, which concludes the proof of (\ref{eq:defstrongcouple2}) in that case.
\end{itemize}
\end{itemize}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.57]
\fill (4,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (5,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (6,0) node[above] {...} ;
\fill (7,0) circle (2pt) node[above] {\scriptsize{$i_{p-1}$}} ;
\fill (8,0) circle (2pt) node[above] {\scriptsize{$i_{p_0}$}} ;
\fill (9,0) circle (2pt) node[above] {\scriptsize{$i_{p_0+1}$}} ;
\fill (10,0) node[above] {...} ;
\fill (11,0) circle (2pt) node[above] {\scriptsize{$i_{p_1-1}$}};
\fill (12,0) circle (2pt) node[above] {\scriptsize{$i_{p_1}$}};
\fill (13,0) node[above] {...} ;
\fill (14,0) node[above] {...} ;
\fill (15,0) circle (2pt) node[above] {\scriptsize{$i_{p_{m}}$}} ;
\fill (16,0) circle (2pt) node[above] {\,\,\scriptsize{$i_{p_{m}+1}$}} ;
\fill (17,0) node[above] {...} ;
\fill (18,0) circle (2pt) node[above] {\scriptsize{$i_{r-1}$}};
\fill (19,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i_r}$}};
\fill (20,0) circle (2pt) node[above] {\scriptsize{$i_{r+1}$}};
\fill (21,0) node[above] {...};
\fill (22,0) circle (2pt) node[above] {\scriptsize{$i_{k}$}};
\fill (23,0) circle (2pt) node[above] {\scriptsize{$i_{k+1}$}};
\fill (24,0) node[above] {...} ;
\fill (25,0) circle (2pt) node[above] {\scriptsize{$i_{q}$}};
\fill (26,0) circle (2pt) node[above] {\scriptsize{$i_{2}$}};
\fill (27,0) node[above] {...} ;
\fill (28,0) circle (2pt) node[above] {\scriptsize{$i_{q}$}};
%
\fill (4,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (5,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (6,-1) node[below] {...} ;
\fill (7,-1) circle (2pt) node[below] {\scriptsize{$j_{p-1}$}} ;
\fill (8,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j_{p}}$}} ;
\fill (9,-1) circle (2pt) node[below] {\scriptsize{$j_{p_0+1}$}} ;
\fill (10,-1) node[below] {...} ;
\fill (11,-1) circle (2pt) node[below] {\scriptsize{$j_{p_1-1}$}};
\fill (12,-1) circle (2pt) node[below] {\scriptsize{$j_{p_1}$}};
\fill (13,-1) node[below] {...} ;
\fill (14,-1) node[below] {...} ;
\fill (15,-1) circle (2pt) node[below] {\scriptsize{$j_{p_{m}}$}} ;
\fill (16,-1) circle (2pt) node[below] {\,\,\scriptsize{$j_{p_{m}+1}$}} ;
\fill (17,-1) node[below] {...} ;
\fill (18,-1) circle (2pt) node[below] {\scriptsize{$j_{r-1}$}};
\fill (19,-1) circle (2pt) node[below] {\scriptsize{$j_{r}$}};
\fill (20,-1) node[below] {...} ;
\fill (21,-1) circle (2pt) node[below] {\scriptsize{$j_{k-1}$}};
\fill (22,-1) circle (2pt) node[below] {\scriptsize{$j_{k}$}};
\fill (23,-1) circle (2pt) node[below] {\scriptsize{$j_{k+1}$}};
\fill (24,-1) node[below] {...} ;
\fill (25,-1) circle (2pt) node[below] {\scriptsize{$j_{q}$}};
\fill (26,-1) circle (2pt) node[below] {\scriptsize{$j_{1}$}};
\fill (27,-1) node[below] {...} ;
\fill (28,-1) circle (2pt) node[below] {\scriptsize{$j_{q-1}$}};
%
\draw[-] (4,0)-- (8,-1);
\draw[-] (4,-1)-- (19,0);
\draw[-] (5,-1)-- (5,0);
\draw[-] (7,-1)-- (7,0);
\draw[-] (12,-1)-- (8,0);
\draw[-] (9,0)-- (9,-1);
\draw[-] (11,0)-- (11,-1);
\draw[-] (12,0)-- (12.5,-0.3);
\draw[-,dotted] (12.5,-0.3)-- (12.75,-0.45);
\draw[-,dotted] (14.25,-0.1)-- (14.5,-0.4);
\draw[-] (14.5,-0.4)-- (15,-1);
\draw[-] (16,0)-- (16,-1);
\draw[-] (18,0)-- (18,-1);
\draw[-] (20,0)-- (19,-1);
\draw[-] (22,0)-- (21,-1);
\draw[-] (15,0)-- (22,-1);
\draw[-] (23,0)-- (23,-1);
\draw[-] (25,0)-- (25,-1);
\draw[-] (26,0)-- (26,-1);
\draw[-] (28,0)-- (28,-1);
%
%
\fill (4,-3) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (5,-3) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (6,-3) node[above] {...} ;
\fill (7,-3) circle (2pt) node[above] {\scriptsize{$i_{p-1}$}} ;
\fill (8,-3) circle (2pt) node[above] {\scriptsize{$i_{p_0}$}} ;
\fill (9,-3) circle (2pt) node[above] {\scriptsize{$i_{p_0+1}$}} ;
\fill (10,-3) node[above] {...} ;
\fill (11,-3) circle (2pt) node[above] {\scriptsize{$i_{p_1-1}$}};
\fill (12,-3) circle (2pt) node[above] {\scriptsize{$i_{p_1}$}};
\fill (13,-3) node[above] {...} ;
\fill (14,-3) node[above] {...} ;
\fill (15,-3) circle (2pt) node[above] {\scriptsize{$i_{p_{m}}$}} ;
\fill (16,-3) circle (2pt) node[above] {\,\,\scriptsize{$i_{p_{m}+1}$}} ;
\fill (17,-3) node[above] {...} ;
\fill (18,-3) circle (2pt) node[above] {\scriptsize{$i_{r-1}$}};
\fill (19,-3) circle (2pt) node[above] {\scriptsize{$\mathbf{i_r}$}};
\fill (20,-3) circle (2pt) node[above] {\scriptsize{$i_{r+1}$}};
\fill (21,-3) node[above] {...};
\fill (22,-3) circle (2pt) node[above] {\scriptsize{$i_{q}$}};
\fill (23,-3) circle (2pt) node[above] {\scriptsize{$i_{2}$}};
\fill (24,-3) node[above] {...} ;
\fill (25,-3) circle (2pt) node[above] {\scriptsize{$i_{n}$}};
\fill (26,-3) circle (2pt) node[above] {\scriptsize{$i_{n+1}$}};
\fill (27,-3) node[above] {...} ;
\fill (28,-3) node[above] {...} ;
\fill (29,-3) circle (2pt) node[above] {\scriptsize{$i_{q-1}$}};
\fill (30,-3) circle (2pt) node[above] {\scriptsize{$i_{q}$}};
%
\fill (4,-4) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (5,-4) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (6,-4) node[below] {...} ;
\fill (7,-4) circle (2pt) node[below] {\scriptsize{$j_{p-1}$}} ;
\fill (8,-4) circle (2pt) node[below] {\scriptsize{$\mathbf{j_{p}}$}} ;
\fill (9,-4) circle (2pt) node[below] {\scriptsize{$j_{p_0+1}$}} ;
\fill (10,-4) node[below] {...} ;
\fill (11,-4) circle (2pt) node[below] {\scriptsize{$j_{p_1-1}$}};
\fill (12,-4) circle (2pt) node[below] {\scriptsize{$j_{p_1}$}};
\fill (13,-4) node[below] {...} ;
\fill (14,-4) node[below] {...} ;
\fill (15,-4) circle (2pt) node[below] {\scriptsize{$j_{p_{m}}$}} ;
\fill (16,-4) circle (2pt) node[below] {\,\,\scriptsize{$j_{p_{m}+1}$}} ;
\fill (17,-4) node[below] {...} ;
\fill (18,-4) circle (2pt) node[below] {\scriptsize{$j_{r-1}$}};
\fill (19,-4) circle (2pt) node[below] {\scriptsize{$j_{r}$}};
\fill (20,-4) node[below] {...} ;
\fill (21,-4) circle (2pt) node[below] {\scriptsize{$j_{q-1}$}};
\fill (22,-4) circle (2pt) node[below] {\scriptsize{$j_{q}$}};
\fill (23,-4) circle (2pt) node[below] {\scriptsize{$j_{1}$}};
\fill (24,-4) node[below] {...} ;
\fill (25,-4) circle (2pt) node[below] {\scriptsize{$j_{n-1}$}};
\fill (26,-4) circle (2pt) node[below] {\scriptsize{$j_{n}$}};
\fill (27,-4) circle (2pt) node[below] {\scriptsize{$j_{n+1}$}};
\fill (28,-4) node[below] {...} ;
\fill (29,-4) node[below] {...} ;
\fill (30,-4) circle (2pt) node[below] {\scriptsize{$j_{q-1}$}};
%
\draw[-] (4,-3)-- (8,-4);
\draw[-] (4,-4)-- (19,-3);
\draw[-] (5,-4)-- (5,-3);
\draw[-] (7,-4)-- (7,-3);
\draw[-] (12,-4)-- (8,-3);
\draw[-] (9,-3)-- (9,-4);
\draw[-] (11,-3)-- (11,-4);
\draw[-] (12,-3)-- (12.5,-3.3);
\draw[-,dotted] (12.5,-3.3)-- (12.75,-3.45);
\draw[-,dotted] (14.25,-3.1)-- (14.5,-3.4);
\draw[-] (14.5,-3.4)-- (15,-4);
\draw[-] (16,-3)-- (16,-4);
\draw[-] (18,-3)-- (18,-4);
\draw[-] (20,-3)-- (19,-4);
\draw[-] (22,-3)-- (21,-4);
\draw[-] (15,-3)-- (26,-4);
\draw[-] (23,-3)-- (23,-4);
\draw[-] (25,-3)-- (25,-4);
\draw[-] (26,-3)-- (27,-4);
\draw[-] (29,-3)-- (30,-4);
\draw[-] (30,-3)-- (22,-4);
\end{tikzpicture}
\caption[smallcaption]{Perfect matchings of $(i{\mathbf c},j{\mathbf s})$ in case (b3): sub-case $\star$ (above) and sub-case $\star \star$ (below).} \label{Fig:strongcouple}
\end{center}
\end{figure}
\bigskip
$\underline{\mbox{Case (iiic)}}$. By symmetry between customers and servers, and reading the words in reverse order, we can apply exactly the argument of case (iiib) to the input
\[({\mathbf c},{\mathbf s}) = \left(i_q...i_1i_{q}...i_{2}\,,\,j_q...j_1j_{q-1}...j_{1}\right).\]
\end{proof}
With Proposition \ref{pro:strongcouple} in hand, we can now describe more precisely the classes of bipartite structures admitting strong erasing couples.
\paragraph{Bi-separable graphs.} Strong erasing couples are easily obtained for bipartite structures having a
bi-separable matching graph:
\begin{proposition}
\label{pro:strongerasesep}
Suppose that the matching graph ${\mathcal{G}}=({\mathcal{C}}\cup{\mathcal S},E)$ is bi-separable of order at least 3.
If there exist three maximal independent sets $I_1$, $I_2$ and $I_3$ and three couples
$(k_1,\ell_1)\in I_1,\,(k_2,\ell_2)\in I_2$ and $(k_3,\ell_3)\in I_3$ such that $F$ contains
$(k_1,\ell_2)$, $(k_2,\ell_3)$ and $(k_3,\ell_1)$, then $(\mathcal B,\phi)$ admits at least one strong erasing couple for any admissible policy $\phi$.
\end{proposition}
\begin{proof}
Fix an admissible matching policy. The proof does not depend on the lists of preferences, and we again drop this parameter from all notation for short.
A strong erasing couple is given by $({\mathbf c},{\mathbf s})=(k_1k_2k_3,\ell_2\ell_3\ell_1)$. To see this, it is first immediate to observe that
$Q_\phi({\mathbf c},{\mathbf s})=\emptyset$ by the definition of a bi-separable graph. Any single-letter buffer detail is of the form $(c,s)$, where $c\in A$, $s \in B$ and $I:=A\cup B$ is a maximal independent set of ${\mathcal{G}}$. Thus we have four alternatives:
\begin{itemize}
\item If $I=I_1$, then $M_\phi(c{\mathbf c},s{\mathbf s})$ contains the matches $(c,\ell_2)$, $(k_1,\ell_3)$, $(k_2,s)$ and $(k_3,\ell_1)$;
\item If $I=I_2$, then $M_\phi(c{\mathbf c},s{\mathbf s})$ contains $(c,\ell_3)$, $(k_1,s)$, $(k_2,\ell_1)$ and $(k_3,\ell_2)$;
\item If $I=I_3$, $M_\phi(c{\mathbf c},s{\mathbf s})$ contains $(c,\ell_2)$, $(k_1,s)$, $(k_2,\ell_3)$ and $(k_3,\ell_1)$;
\item If ${\mathcal{G}}$ is of order at least 4 and $I\not\in\{I_1,I_2,I_3\}$, then $M_\phi(c{\mathbf c},s{\mathbf s})$ contains $(c,\ell_2)$, $(k_1,s)$, $(k_2,\ell_3)$ and $(k_3,\ell_1)$.
\end{itemize}
In all cases we have $Q_\phi(c{\mathbf c},s{\mathbf s})=\emptyset,$ which concludes the proof.
\end{proof}
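As a sanity check of the construction, the following Python sketch verifies that the couple exhibited in the proof empties every single-letter buffer detail, on the middle graph of Figure \ref{Fig:sep} (a bi-separable graph of order 3, with $I_i=\{i\}\cup\{\bar i\}$, $k_i=i$ and $\ell_i=\bar i$, under a BM-type arrival set). The simulated dynamics implement one particular admissible, buffer-first policy with random tie-breaking; this is only a check of one policy, not a substitute for the argument above, which covers all admissible policies.
\begin{verbatim}
# Sketch: the couple (k1 k2 k3, l2 l3 l1) erases any single-letter buffer detail.
import random
from itertools import product

classes = (1, 2, 3)
E = {(c, s) for c, s in product(classes, classes) if c != s}
erasing = [(1, 2), (2, 3), (3, 1)]       # couples (k1,l2), (k2,l3), (k3,l1)

def run(buffer_c, buffer_s, couples, rng):
    """One admissible buffer-first policy with random tie-breaking (assumption)."""
    for c, s in couples:
        js = [k for k, b in enumerate(buffer_s) if (c, b) in E]
        if js:
            del buffer_s[rng.choice(js)]
            c = None
        ks = [k for k, b in enumerate(buffer_c) if (b, s) in E]
        if ks:
            del buffer_c[rng.choice(ks)]
            s = None
        if c is not None and s is not None and (c, s) in E:
            c = s = None
        if c is not None:
            buffer_c.append(c)
        if s is not None:
            buffer_s.append(s)
    return buffer_c, buffer_s

rng = random.Random(1)
for c0, s0 in set(product(classes, classes)) - E:   # all single-letter buffer details
    for _ in range(100):
        bc, bs = run([c0], [s0], erasing, rng)
        assert not bc and not bs
print("the couple empties every single-letter buffer detail")
\end{verbatim}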
\paragraph{GM model.} Observe that condition (ii) of Proposition \ref{pro:strongcouple} cannot hold for a GM model (i.e. $F=\left\{(c,\tilde c), c\in{\mathcal{C}}\right\}$), as we would then have
$c {\--} \tilde c$ for some $c \in{\mathcal{C}}$, an absurdity. Proposition 7 in \cite{MBM17} provides alternative specific conditions for the existence of
strong erasing words (the analog of strong erasing couples in that case) for GM models.
\paragraph{BM Model.} Consider now the case where $({\mathcal{C}} \cup {\mathcal S}, F)$ is complete, i.e. a BM model. It is then immediate to reformulate the assumptions
of Proposition \ref{pro:strongcouple}, to obtain the following:
\begin{proposition}
\label{pro:strongcoupleBM}
Suppose that ${\mathcal{B}}$ defines a BM model. If, for some subsets $\check{{\mathcal{C}}}$ and $\check{{\mathcal S}}$ such that ${\mathcal S}(\check{{\mathcal{C}}})={\mathcal S}$ and ${\mathcal{C}}(\check{{\mathcal S}})={\mathcal{C}},$ there exists an alternating path $\mathscr P=i_1 {\--} j_1 {\--} i_2 {\--} j_2 {\--} ... {\--} i_q {\--} j_q$ spanning $\check{{\mathcal{C}}} \cup \check{{\mathcal S}}$,
and such that one of the following holds:
\begin{itemize}
\item[(a)] the sub-graph $\check{{\mathcal{G}}}$ induced by $\check{{\mathcal{C}}} \cup \check{{\mathcal S}}$ in
$({\mathcal{C}} \cup {\mathcal S},E)$ is bipartite complete;
\item[(b)] ${\mathcal{C}}(j_q) \cap \check{{\mathcal{C}}} = \{i_q\}$;
\item[(c)] ${\mathcal S}(i_1) \cap \check{{\mathcal S}} = \{j_1\}$,
\end{itemize}
then $(\mathcal B,\phi)$ admits at least one strong erasing couple for any admissible policy $\phi$.
\end{proposition}
Clearly, condition (b) (respectively, (c)) above holds for $\check{{\mathcal{C}}} \equiv{\mathcal{C}}$ and $\check{{\mathcal S}} \equiv{\mathcal S}$ (as $({\mathcal{C}}\cup{\mathcal S},E)$ is connected), whenever $({\mathcal{C}} \cup {\mathcal S},E)$ contains a server (resp. customer) node of degree 1, by setting
$j_q$ (resp., $i_1$) as the latter node.
\subsection{Structures admitting erasing couples}
The following result provides sufficient conditions for the existence of erasing couples for any admissible buffer detail,
\begin{proposition}
\label{prop:existerasing}
Let $\phi$ be a sub-additive policy. Then, any admissible buffer detail $({\mathbf w},{\mathbf z})$ admits an erasing couple for $({\mathcal{B}},\phi)$ in the following cases:
\begin{enumerate}
\item ${\mathcal{B}}$ satisfies the assumptions of Proposition \ref{pro:strongcouple} or of Proposition \ref{pro:strongerasesep};
\item ${\mathcal{B}}$ defines a BM model (i.e. $F={\mathcal{C}}\times{\mathcal S}$);
\item ${\mathcal{B}}$ defines a GM model and $({\mathbf w},{\mathbf z})$ is of even length.
\end{enumerate}
\end{proposition}
\begin{proof}
The arguments in the proof are clearly independent of the lists of preferences, and we again drop this parameter from all notation for short.
\begin{enumerate}
\item Suppose that the assumptions of Proposition \ref{pro:strongcouple} or of Proposition \ref{pro:strongerasesep} are satisfied, and thereby a strong erasing couple $({\mathbf c},{\mathbf s})$ exists for $({\mathcal{B}},\phi)$.
Therefore, as was observed above, $({\mathbf c},{\mathbf s})$ is an erasing couple of any single-letter buffer detail $(i,j)\in\left({\mathcal{C}}\times{\mathcal S}\right) \setminus E$.
Let us now consider an admissible buffer detail $({\mathbf w},{\mathbf z})=\left(w_1...w_r\;,\;z_1...z_r\right)$ for $r\ge 1$.
First, as we just proved, there exists an erasing couple, say $({\mathbf c}^1,{\mathbf s}^1)$, for the single-letter buffer detail $(w_r,z_r)$.
Thus, the sub-additivity of $\phi$ entails that
\begin{align*}
\left|C_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)\right| &\le \left|C_\phi\left(w_1w_2...w_{r-1}\,,\,z_1z_2...z_{r-1}\right)\right|+\left|C_\phi\left(w_r{\mathbf c}^1\,,\,z_r{\mathbf s}^1\right)\right|
= \left|C_\phi({\mathbf w},{\mathbf z})\right|-1;\\
\left|S_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)\right| &\le \left|S_\phi\left(w_1w_2...w_{r-1}\,,\,z_1z_2...z_{r-1}\right)\right|+\left|S_\phi\left(w_r{\mathbf c}^1\,,\,z_r{\mathbf s}^1\right)\right|
= \left|S_\phi({\mathbf w},{\mathbf z})\right|-1.
\end{align*}
We can, in turn, apply the same argument to the admissible buffer detail $Q_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)$: there exists an erasing couple $({\mathbf c}^2,{\mathbf s}^2)$ for the single-letter
couple gathering the last letter of $C_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)$ and the last letter of $S_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)$. As above, the sub-additivity of $\phi$ entails that
\begin{align*}
\left|C_\phi\left({\mathbf w}{\mathbf c}^1{\mathbf c}^2,{\mathbf z}{\mathbf s}^1{\mathbf s}^2\right)\right| &= \left|C_\phi\left(C_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right){\mathbf c}^2\,,\,S_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right){\mathbf s}^2\right)\right|
\le \left|C_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)\right|-1;\\
\left|S_\phi\left({\mathbf w}{\mathbf c}^1{\mathbf c}^2,{\mathbf z}{\mathbf s}^1{\mathbf s}^2\right)\right| &= \left|S_\phi\left(C_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right){\mathbf c}^2\,,\,S_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right){\mathbf s}^2\right)\right|
\le \left|S_\phi\left({\mathbf w}{\mathbf c}^1,{\mathbf z}{\mathbf s}^1\right)\right|-1.
\end{align*}
By an immediate induction, we obtain that for some $p \le r$, there exist $p$ couples $({\mathbf c}^1,{\mathbf s}^1)$, ... , $({\mathbf c}^p,{\mathbf s}^p)$ such that
\begin{equation}
\label{eq:OK}
Q_\phi\left({\mathbf w}{\mathbf c}^1...{\mathbf c}^p\,,\,{\mathbf z}{\mathbf s}^1...{\mathbf s}^p\right) = \left(C_\phi\left({\mathbf w}{\mathbf c}^1...{\mathbf c}^p\,,\,{\mathbf z}{\mathbf s}^1...{\mathbf s}^p\right)\,,\,S_\phi\left({\mathbf w}{\mathbf c}^1...{\mathbf c}^p\,,\,{\mathbf z}{\mathbf s}^1...{\mathbf s}^p\right)\right)
=\emptyset.
\end{equation}
On the other hand, by the very definition of an erasing couple we have that $Q_\phi({\mathbf c}^1,{\mathbf s}^1)=...=Q_\phi({\mathbf c}^p,{\mathbf s}^p)=\emptyset.$
Thus $Q_\phi({\mathbf c}^1...{\mathbf c}^p,{\mathbf s}^1...{\mathbf s}^p)=\emptyset$, which shows, together with (\ref{eq:OK}), that $({\mathbf c}^1...{\mathbf c}^p,{\mathbf s}^1...{\mathbf s}^p)$ is an erasing couple for $({\mathbf w},{\mathbf z})$.
\item Let $(i,j)\in \left({\mathcal{C}}\times{\mathcal S}\right)\setminus E$ be a single-letter buffer detail. As ${\mathcal{G}}=({\mathcal{C}}\cup{\mathcal S},E)$ is connected, there exists an alternating path
\[i {\--} j_1 {\--} i_1 {\--} j_2 {\--} \,...\, i_{p-1} {\--} j_p {\--} i_p {\--} j\]
connecting $i$ to $j$. Then, as $F={\mathcal{C}}\times{\mathcal S}$, the input
\[({\mathbf c},{\mathbf s}) = \left(i_1i_2 \,...\,i_p\,,\,j_1j_2 \,...\,j_p\right)\]
is admissible, and is clearly such that $Q_\phi({\mathbf c},{\mathbf s})=\emptyset$ and $Q_\phi(i{\mathbf c},j{\mathbf s})=\emptyset$ for any $\phi$ and any lists of preferences, see Figure \ref{Fig:weakcouple}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1]
\fill (0,0) circle (2pt) node[above] {\scriptsize{$\mathbf{i}$}} ;
\fill (1,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (2,0) node[above] {...} ;
\fill (3,0) node[above] {...} ;
\fill (4,0) circle (2pt) node[above] {\scriptsize{$i_{p-1}$}} ;
\fill (5,0) circle (2pt) node[above] {\scriptsize{$i_{p}$}} ;
%
\fill (0,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{j}$}} ;
\fill (1,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (2,-1) circle (2pt) node[below] {\scriptsize{$j_2$}} ;
\fill (3,-1) node[below] {...} ;
\fill (4,-1) node[below] {...} ;
\fill (5,-1) circle (2pt) node[below] {\scriptsize{$j_{p}$}} ;
\draw[-] (0,0) -- (1,-1);
\draw[-] (0,-1)-- (5,0);
\draw[-] (1,0)-- (2,-1);
\draw[-] (4,0)-- (5,-1);
%
%
\fill (8,0) circle (2pt) node[above] {\scriptsize{$i_1$}} ;
\fill (9,0) circle (2pt) node[above] {\scriptsize{$i_2$}} ;
\fill (10,0) node[above] {...} ;
\fill (11,0) node[above] {...} ;
\fill (12,0) circle (2pt) node[above] {\scriptsize{$i_{p}$}} ;
%
\fill (8,-1) circle (2pt) node[below] {\scriptsize{$j_1$}} ;
\fill (9,-1) circle (2pt) node[below] {\scriptsize{$j_2$}} ;
\fill (10,-1) node[below] {...} ;
\fill (11,-1) node[below] {...} ;
\fill (12,-1) circle (2pt) node[below] {\scriptsize{$j_{p}$}} ;
\draw[-] (8,-1)-- (8,0);
\draw[-] (9,0)-- (9,-1);
\draw[-] (12,0)-- (12,-1);
\end{tikzpicture}
\caption[smallcaption]{Perfect matchings of $(i{\mathbf c},j{\mathbf s})$ (left) and $({\mathbf c},{\mathbf s})$ (right).} \label{Fig:weakcouple}
\end{center}
\end{figure}
The existence of an erasing couple for any arbitrary admissible buffer detail $({\mathbf w},{\mathbf z})$ then follows as in case 1.
\item In the case where ${\mathcal{B}}$ defines a GM model, the existence of erasing couples for all buffer details of even size is
proven in Proposition 6 of \cite{MBM17}, transposing to the present context the concept of {\em erasing word} therein.
\end{enumerate}
\end{proof}
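The construction of case 2 above is effective: for a single-letter buffer detail $(i,j)$ with $(i,j)\notin E$, an erasing couple can be read off any alternating path from $i$ to $j$, as in Figure \ref{Fig:weakcouple}. The following Python sketch does so by breadth-first search; the matching graph in the code is an illustrative choice, and the sketch assumes, as in case 2, that the graph is connected and that $F={\mathcal{C}}\times{\mathcal S}$.
\begin{verbatim}
# Sketch: erasing couple of a single-letter buffer detail (i,j) in a BM model,
# read off a shortest alternating path i -- j_1 -- i_1 -- ... -- i_p -- j.
from collections import deque
from itertools import product

customers = (1, 2, 3)
servers = (1, 2, 3)
E = {(c, s) for c, s in product(customers, servers) if c != s}

def erasing_couple(i, j):
    """Interior customers and servers of a shortest path from i to j."""
    start, goal = ('c', i), ('s', j)
    prev, queue = {start: None}, deque([start])
    while queue:
        t, x = v = queue.popleft()
        if v == goal:
            break
        if t == 'c':
            nbrs = [('s', s) for s in servers if (x, s) in E]
        else:
            nbrs = [('c', c) for c in customers if (c, x) in E]
        for w in nbrs:
            if w not in prev:
                prev[w] = v
                queue.append(w)
    path, v = [], goal
    while v is not None:          # backtrack from j to i
        path.append(v)
        v = prev[v]
    path.reverse()                # i, j_1, i_1, j_2, ..., i_p, j
    interior = path[1:-1]
    c_word = [x for t, x in interior if t == 'c']
    s_word = [x for t, x in interior if t == 's']
    return c_word, s_word

print(erasing_couple(1, 1))       # buffer detail (1, \bar 1), which is not an edge
\end{verbatim}
On this example the sketch returns the couple $(3,\bar 2)$, and one checks directly that $Q_\phi(13,\bar 1\bar 2)=Q_\phi(3,\bar 2)=\emptyset$ for any admissible $\phi$.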
\section{Loynes construction}
\label{sec:loynes}
\subsection{Probabilistic assumptions}
\label{subsec:NcondScond}
Fix a bipartite matching structure ${\mathcal{B}}=({\mathcal{C}},{\mathcal S},E,F)$ and a matching policy $\phi$. We suppose that the classes of the entering couples and their lists of preferences are {random}: on the reference probability space $(\Omega,\mathcal F, \mathbb P)$, we define the $F\times \mathbb S\times \mathbb C$-valued bi-infinite input sequence $\left(C_n,S_n,\Sigma_n,\Gamma_n\right)_{n\in {\mathbb Z}}$, and make the following general assumption:
\begin{enumerate}
\item[\textbf{(H1)}] The sequence $\left(C_n,S_n,\Sigma_n,\Gamma_n\right)_{n\in {\mathbb Z}}$ is stationary and ergodic, and is drawn at all $n$ from a distribution on $F\times \mathbb S\times \mathbb C$ whose $F$-marginal is denoted $\mu$ and whose $\mathbb S\times \mathbb C$-marginal is denoted $\nu^\phi$.
\end{enumerate}
In the above definition, we assume that the probability measure $\mu$ has full support $F$. We denote by $\mu_{{\mathcal{C}}}$ (respectively, $\mu_{{\mathcal S}}$) its ${\mathcal{C}}$-marginal (resp., ${\mathcal S}$-marginal) distribution.
On the other hand, we emphasize the dependence on the matching policy of the distribution of the lists of preferences on $\mathbb S \times \mathbb C$. For instance:
\begin{itemize}
\item if $\phi$ is a strict priority policy, i.e. the order of priority of any customer/server among the class of compatible servers/customers is fixed beforehand,
then we fix a list of customer preferences $\alpha$ in $\mathbb S$ and a list of server preferences $\beta$ in $\mathbb C$, and set $\nu^\phi=\delta_\alpha\otimes \delta_\beta$;
\item if $\nu^\phi$ is the uniform distribution on $\mathbb S\times\mathbb C$, we say that the matching policy is {\em uniform};
\item for $\phi=\textsc{ml}$ or {\sc ms}, we also fix $\nu^\phi$ as the uniform distribution on $\mathbb S\times\mathbb C$, i.e. ties are broken uniformly at random;
\item the drawn lists of preferences are irrelevant for policies such as {\sc fcfs} or {\sc lcfs}, and in that case we drop this parameter from all notation.
\end{itemize}
We will also consider the following statistical assumption which, from Birkhoff's ergodic Theorem, is strictly stronger than (H1),
\begin{enumerate}
\item[\textbf{(IID)}] The sequence $\left(C_n,S_n,\Sigma_n,\Gamma_n\right)_{n\in {\mathbb Z}}$ is i.i.d., drawn at all $n$ from a distribution on $F\times \mathbb S\times \mathbb C$ whose $F$-marginal is denoted $\mu$ and $\mathbb S\times \mathbb C$-marginal is denoted $\nu^\phi$.
\end{enumerate}
Assumption (IID), which allows a representation of the system by a discrete-time Markov chain, is typically made in all references on matching models except \cite{MBM17}.
Observe that the BM model as investigated in \cite{ABMW17} assumes, in addition to (IID), that the classes of the incoming customer and server are independent, i.e.
$\mu=\mu_{{\mathcal{C}}} \otimes \mu_{{\mathcal S}}$. We do not restrict to this case here.
\medskip
Consider the two following conditions on ${\mathcal{G}}$ and $\mu$,
\begin{equation}
\label{eq:Ncond}
\text{For any non-empty sets }A\subsetneq{\mathcal{C}}\mbox{ and }B\subsetneq{\mathcal S},\quad\quad \mu_{\mathcal{C}}(A) < \mu_{{\mathcal S}}\left({\mathcal S}(A)\right)\quad\quad\mbox{ and }\quad\quad\mu_{\mathcal S}(B) < \mu_{{\mathcal{C}}}\left({\mathcal{C}}(B)\right),
\end{equation}
and recalling (\ref{eq:defCSnot}),
\begin{equation}
\label{eq:Scond}
\text{For any independent set }I=A \cup B \ne \emptyset,\quad\mu_{\mathcal{C}}\left({\mathcal{C}}(B)\right)+\mu_{\mathcal S}({\mathcal S}(A)) >1-\mu\left(E \cap \left({\mathcal{C}}_\circ\left(I\right)\times {\mathcal S}_\circ(I)\right)\right).
\end{equation}
The condition (\ref{eq:Ncond}) was introduced in \cite{calkapwei09}, and shown to guarantee complete resource pooling in a BM system under the policy {\sc fcfs}, and for $\mu=\mu_{{\mathcal{C}}}\otimes \mu_{{\mathcal S}}.$
It was shown in Lemma 3.2 in \cite{BGM13} to be necessary for the positive recurrence of any EBM system under the (IID) assumption.
Condition (\ref{eq:Scond}) was also introduced in \cite{BGM13}, and shown to be sufficient for the positive recurrence of an EBM system under (IID) (Proposition 5.2 of \cite{BGM13}).
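As an illustration, conditions (\ref{eq:Ncond}) and (\ref{eq:Scond}) can be checked by brute-force enumeration, as in the following Python sketch. The matching graph, the arrival set $F$ and the measure $\mu$ in the code are illustrative choices (middle graph of Figure \ref{Fig:sep}, BM model, uniform $\mu$), and we read the independent sets appearing in (\ref{eq:Scond}) as having both $A$ and $B$ non-empty.
\begin{verbatim}
# Sketch: brute-force check of (eq:Ncond) and (eq:Scond) for a given graph and mu.
from itertools import chain, combinations, product

customers = (1, 2, 3)
servers = (1, 2, 3)
E = {(c, s) for c, s in product(customers, servers) if c != s}
F = set(product(customers, servers))          # BM model (assumption)
mu = {cs: 1 / len(F) for cs in F}             # uniform arrival measure (assumption)

def mu_set(pairs): return sum(mu.get(cs, 0.0) for cs in pairs)
def mu_C(A): return mu_set({(c, s) for c in A for s in servers})
def mu_S(B): return mu_set({(c, s) for s in B for c in customers})
def S_of(A): return {s for s in servers for c in A if (c, s) in E}
def C_of(B): return {c for c in customers for s in B if (c, s) in E}
def subsets(xs): return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

# (eq:Ncond): over non-empty proper subsets of customers and servers
ncond = all(mu_C(A) < mu_S(S_of(A)) for A in subsets(customers)
            if 0 < len(A) < len(customers)) \
    and all(mu_S(B) < mu_C(C_of(B)) for B in subsets(servers)
            if 0 < len(B) < len(servers))

# (eq:Scond): over independent sets A u B with both sides non-empty
scond = True
for A in map(set, subsets(customers)):
    for B in map(set, subsets(servers)):
        if not A or not B or any((c, s) in E for c in A for s in B):
            continue
        C_circ = set(customers) - (A | C_of(B))
        S_circ = set(servers) - (B | S_of(A))
        lhs = mu_C(C_of(B)) + mu_S(S_of(A))
        rhs = 1 - mu_set(E & set(product(C_circ, S_circ)))
        scond = scond and (lhs > rhs)

print("NCOND:", ncond, " SCOND:", scond)
\end{verbatim}
For this example both conditions hold; this is consistent with Proposition \ref{pro:equivalenceBGM} below, since $\mu(A_i\times B_i)=1/9<1/2$ for each of the three independent sets $I_i=\{i\}\cup\{\bar i\}$.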
\medskip
Let us now introduce the following condition for a bi-separable matching graph ${\mathcal{G}}=({\mathcal{C}} \cup {\mathcal S}, E)$ of order $p$, denoting by
$I_1=(A_1 \cup B_1),...,I_p=(A_p \cup B_p)$ the independent sets of the corresponding partition of ${\mathcal{C}} \cup {\mathcal S}$,
\begin{equation}
\label{eq:scondmonotone0}
\mbox{For all }i\in\llbracket 1,p \rrbracket,\, \mu\left(A_i\times B_i\right) < {1 \over 2}.
\end{equation}
Observe that (\ref{eq:scondmonotone}) is non-empty if and only if ${\mathcal{G}}$ is of order strictly greater than 2.
We have the following result,
\begin{proposition}
\label{pro:equivalenceBGM}
For a bi-separable graph ${\mathcal{G}}=({\mathcal{C}} \cup {\mathcal S}, E)$ of order $p$, both conditions (\ref{eq:Ncond}) and (\ref{eq:Scond}) are equivalent to (\ref{eq:scondmonotone0}).
\end{proposition}
\begin{proof}
Denote again $I_1=A_1 \cup B_1,...,I_p=A_p\cup B_p$ the independent sets of the partition of ${\mathcal{C}} \cup {\mathcal S}$. It is an immediate consequence of Lemma \ref{lemma:separable} that
(\ref{eq:scondmonotone0}) is equivalent to
\begin{equation}
\label{eq:scondmonotone}
\mbox{For all }i\in\llbracket 1,p \rrbracket,\, \mu\left(A_i\times B_i\right) < \mu\left({\mathcal{C}}(B_i)\times {\mathcal S}(A_i)\right).
\end{equation}
Thus, as (\ref{eq:Scond}) entails (\ref{eq:Ncond}), it suffices to check that
(\ref{eq:Ncond}) entails (\ref{eq:scondmonotone}) and (\ref{eq:scondmonotone}) entails (\ref{eq:Scond}).
\medskip
\noindent $\underline{(\ref{eq:Ncond}) \,\Rightarrow\, (\ref{eq:scondmonotone})}$:
Suppose that $A_i \ne \emptyset$ and $B_i \ne \emptyset$ (otherwise the result is trivial). We deduce from (\ref{eq:Ncond}) that
\begin{align*}
\mu\Bigl(A_i\times B_i\Bigl) =\mu_{\mathcal{C}}\left(A_i\right)-\mu\Bigl(A_i\times {\mathcal S}\left(A_i\right)\Bigl)
&<\mu_{\mathcal S}\left({\mathcal S}\left(A_i\right)\right)-\mu\Bigl(A_i\times {\mathcal S}\left(A_i\right)\Bigl)\\
&=\mu\Bigl({\mathcal{C}}\left(B_i\right)\times {\mathcal S}\left(A_i\right) \Bigl).
\end{align*}
\noindent $\underline{(\ref{eq:scondmonotone})\,\Rightarrow\,(\ref{eq:Scond})}$: let $I= A\cup B$ be a non-empty independent set of ${\mathcal{G}}$.
\begin{itemize}
\item[(i)] If $I=I_i$ for some $i\in\llbracket 1,p \rrbracket$,
\begin{align}
\mu\Bigl({\mathcal{C}}\left(B_i\right)\times {\mathcal S}\left(A_i\right) \Bigl) > \mu\left(A_i\times B_i\right)
&\Longleftrightarrow \mu_{\mathcal{C}}\left({\mathcal{C}}(B_i)\right)-\mu\Bigl({\mathcal{C}}(B_i)\times B_i\Bigl)> \mu\left(A_i\times B_i\right)\nonumber\\
&\Longleftrightarrow \mu_{\mathcal{C}}\left({\mathcal{C}}(B_i)\right)> \mu_{\mathcal S}\left(B_i\right)\nonumber\\
&\Longleftrightarrow \mu_{\mathcal{C}}\left({\mathcal{C}}(B_i)\right)> 1-\mu_{\mathcal S}\left({\mathcal S}\left(A_i\right)\right).\label{eq:equivalence1}
\end{align}
\item[(ii)] If not, we show that
\begin{equation}
\label{eq:equivalence2}
I \subseteq I_i\mbox{ for some }i\in\llbracket 1,p \rrbracket.
\end{equation}
For this, fix $c,c' \in A$, and suppose that $c\in A_i$ and $c'\in A_{i'}$.
Assume that $B \not\subseteq B_i$. Then, there exists a class of servers $s\in B$ such that $s\not\in B_i$, and thus, from Lemma \ref{lemma:separable},
\begin{equation*}
s \in {\mathcal S}\left(A_{i}\right)={\mathcal S}\left(\{c\}\right).
\end{equation*}
This implies that $(c,s)\in E$, so $I$ is not an independent set, an absurdity. Hence $B \subseteq B_i$ and similarly,
$B\subseteq B_{i'}$. As the latter sets are disjoint, this is true if and only if $i=i'$, which implies in turn that $\{c,c'\}\subseteq A_i$ for all couples $\{c,c'\}$ of elements
of $A$, so $A \subseteq A_i$, which concludes the proof of (\ref{eq:equivalence2}).
Finally, in view of Lemma \ref{lemma:separable}, (\ref{eq:equivalence2}) together with (\ref{eq:equivalence1}) applied to $I_i$ imply that
\begin{align*}
\mu_{\mathcal{C}}\left({\mathcal{C}}(B)\right)+\mu_{\mathcal S}\left({\mathcal S}(A)\right)
&=\mu_{\mathcal{C}}\left({\mathcal{C}}(B_i)\right)+\mu_{\mathcal S}\left({\mathcal S}(A_i)\right)\\
&>1 \ge 1- \mu\Bigl(E \cap \left({\mathcal{C}}_\circ\left(I\right)\times {\mathcal S}_\circ\left(I\right)\right)\Bigl),
\end{align*}
which concludes the proof.
\end{itemize}
\end{proof}
\subsection{Stationary ergodic framework}
\label{subsec:statergo}
Our coupling results will be more easily formulated in the ergodic theoretical framework. For this, we work on the canonical space $\Omega^0:=\left(F\times \mathbb S\times \mathbb C\right)^\mathbb Z$ of the input,
on which we define the bijective shift operator $\theta$ by $\theta\left((\omega_n)_{n\in\mathbb Z}\right)= (\omega_{n+1})_{n\in\mathbb Z}$ for all $(\omega_n)_{n\in \mathbb Z} \in \Omega^0$.
We denote by $\theta^{-1}$ the reciprocal operator of $\theta$, and by $\theta^n$ and $\theta^{-n}$ the $n$-th iterated of
$\theta$ and $\theta^{-1}$, respectively, for all $n\in{\mathbb N}$. We equip $\Omega^0$ with a sigma-field $\mathscr F^0$ and with the image probability measure
$\mathbb P$ of the sequence $\left(C_n,S_n,\Sigma_n,\Gamma_n\right)_{n\in {\mathbb Z}}$ on $\Omega^0$. Observe that, under assumption (H1), $\mathbb P$ is \emph{compatible} with the shift, i.e. for any ${\mathcal{A}} \in \mathscr F^0$, $\bpr{{\mathcal{A}}}=\bpr{\theta^{-1}{\mathcal{A}}}$, and any $\theta$-invariant event is either $\mathbb P$-negligible or almost sure.
Then the quadruple $\mathscr Q:=\left(\Omega^0,\mathscr F^0,\mathbb P,\theta\right)$, termed {\em Palm space} of the input, is stationary ergodic.
For more details about this framework, we refer the reader to the monographs \cite{BranFranLis90}, \cite{BacBre02} (Sections 2.1 and 2.5) and \cite{Rob03} (Chapter 7).
Let the random variable (r.v. for short) $(C,S,\Sigma,\Gamma)$ be the projection of sample paths over their 0-coordinate. Thus $(C,S,\Sigma,\Gamma)$ can be interpreted as the input brought to the system at time 0, that is,
at 0 a couple $(C,S)$ enters the system, in which the customer $C$ has a list of preference $\Sigma$ over ${\mathcal S}$ and the server $S$ has a list of preference $\Gamma$ over ${\mathcal{C}}$.
Then for any $n\in \mathbb Z$, the r.v. $(C,S,\Sigma,\Gamma)\circ\theta^n$ corresponds to the input brought to the system at time $n$.
In what follows, for any $\mathcal{U}$-valued r.v. $V$, we let $\suite{U^{[V]}_n}$ be the $\mathcal{U}$-valued buffer detail sequence of the model, whenever the buffer detail at time 0 equals $V$.
From (\ref{eq:defodot}), for any fixed bipartite matching structure and any fixed matching policy $\phi$, the sequence $\suite{U^{[V]}_n}$ is stochastic recursive, in that it obeys the recurrence relation
\begin{equation}
\label{eq:recur}
\left\{\begin{array}{ll}
U^{[V]}_0 &= V\\
U^{[V]}_{n+1} &= U^{[V]}_{n}\odot_{\phi} (C,S,\Sigma,\Gamma)\circ\theta^n,\,n\in{\mathbb N}
\end{array}\right.\quad ,\,\mathbb P-\mbox{ a.s.}
\end{equation}
It follows from the stationarity of $\mathscr Q$ that a stationary buffer detail, if any, is a $\mathcal{U}$-valued sequence $\suite{U_n}$ that is such that
$U_n=U\circ\theta^n$, $\mathbb P$-a.s. for all $n\in {\mathbb N}$, where $U$ is a $\mathcal{U}$-valued r.v.. In turn, with (\ref{eq:recur}) this amounts to saying that $U$ is a solution to the equation
\begin{equation}
\label{eq:recurstat}
U\circ\theta = U\odot_{\phi} (C,S,\Sigma,\Gamma),\,\mathbb P-\mbox{ a.s.}
\end{equation}
In other words, finding a stationary version of the buffer detail sequence amounts to solving the almost sure equation (\ref{eq:recurstat}).
Further, such a solution corresponds uniquely to a stationary distribution of the buffer detail on the original probability space (see again the aforementioned references for details).
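For illustration, the recursion (\ref{eq:recur}) can be simulated directly. The following Python sketch implements a simplified {\sc fcfs} dynamic on a small matching graph: the incoming customer is matched to the oldest compatible server of the buffer, the incoming server to the oldest compatible customer, and the two incoming items are matched together whenever both remain unmatched and are compatible with each other. The graph, the class encoding and these simplified dynamics are illustrative assumptions, and only approximate the operator $\odot_{\textsc{fcfs}}$ defined by (\ref{eq:defodot}).
\begin{verbatim}
# Minimal sketch of the buffer-detail recursion U_{n+1} = U_n (.)_phi (C_n, S_n)
# under a *simplified* FCFS dynamic; names and dynamics are illustrative only.

import random

# a small matching graph: customer classes 1,2,3; server classes 'b1','b2','b3'
E = {(1, 'b1'), (1, 'b2'), (2, 'b2'), (2, 'b3'), (3, 'b3')}

def fcfs_step(buffer, arrival):
    """One step of the recursion: buffer = (customer word, server word), arrival = (c, s)."""
    customers, servers = list(buffer[0]), list(buffer[1])
    c, s = arrival
    # the incoming customer c looks for the oldest compatible server in the buffer
    k = next((k for k, t in enumerate(servers) if (c, t) in E), None)
    if k is not None:
        servers.pop(k)
        c = None
    # the incoming server s looks for the oldest compatible customer in the buffer
    k = next((k for k, t in enumerate(customers) if (t, s) in E), None)
    if k is not None:
        customers.pop(k)
        s = None
    # if both are still unmatched, they may be matched together (EBM-type assumption)
    if c is not None and s is not None and (c, s) in E:
        c = s = None
    if c is not None:
        customers.append(c)
    if s is not None:
        servers.append(s)
    return (tuple(customers), tuple(servers))

def simulate(arrivals, initial=((), ())):
    """Iterate the recursion from the buffer detail `initial` along a list of arrivals."""
    U, history = initial, [initial]
    for a in arrivals:
        U = fcfs_step(U, a)
        history.append(U)
    return history

if __name__ == '__main__':
    mu = {(1, 'b2'): 3/9, (2, 'b3'): 3/9, (2, 'b1'): 1/9, (3, 'b2'): 1/9, (3, 'b1'): 1/9}
    arrivals = random.choices(list(mu), weights=list(mu.values()), k=20)
    for n, U in enumerate(simulate(arrivals)):
        print(n, U)
\end{verbatim}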
By applying the very argument of the proof of Lemma 3.2 in \cite{BGM13}, with Birkhoff's Theorem in place of the SLLN, it is immediate that (\ref{eq:Ncond}) is also necessary for the stability of the system
under (H1). Specifically, there clearly cannot exist a proper solution $U$ to (\ref{eq:recurstat}) such that $\bpr{U=\emptyset}>0$, unless (\ref{eq:Ncond}) holds.
\medskip
To derive sufficient stability conditions and explicitly construct the equilibrium state of the system, the {\em backwards scheme} {\em \`a la Loynes} associated to the present recursion is defined as follows:
for any $n\in {\mathbb N}$, consider the r.v. $U^{[\emptyset]}_n \circ\theta^{-n}$, which can be interpreted as the buffer detail at time 0,
starting from an empty system at time $-n$. The typical setting of Loynes's Theorem (see \cite{Loynes62}) is the case when the random map $x \mapsto x \odot_{\phi} (C,S,\Sigma,\Gamma)$
is almost surely continuous and monotonic, for a given metric and a partial ordering on $\mathcal{U}$ such that $\mathcal{U}$ admits a minimal point and all monotonic sequences
of $\mathcal{U}$ converge. Then an explicit solution of (\ref{eq:recurstat}) is obtained by taking the almost sure limit of the sequence $\suite{U^{[\emptyset]}_n \circ\theta^{-n}}$ -
and the coupling of any sequence $\suite{U^{[V]}_n}$ to the stationary version, follows easily.
In the present case, the recursion does not exhibit any particular monotonicity property.
However, we can use the sub-additivity of the matching policy under consideration to obtain a coupling result in the strong backwards sense. For this we use Borovkov's theory of renovating events.
Following \cite{Bor84}, we say that the buffer detail sequence $\suite{U^{[V]}_n}$ converges with {\em strong backwards coupling} to the stationary buffer detail sequence
$\suite{U\circ\theta^n}$ if, $\mathbb P$-almost surely, there exists $N^*\ge 0$ such that for all $n \ge N^*$, $U^{[V]}_n\circ\theta^{-n}=U$.
Note that strong backwards coupling implies the forward coupling between $\suite{U^{[V]}_n}$ and $\suite{U\circ\theta^n}$, that is, there exists $\mathbb P$-a.s. an integer $N\ge 0$ such that $U^{[V]}_{n}=U\circ\theta^n$
for all $n \ge N$. In particular the distribution of $U^{[V]}_{n}$ converges in total variation to that of $U$; see e.g. Section 2.4 of \cite{BacBre02} for details.
\subsection{Renovating events}
\label{subsec:renove}
In this Section we adapt the coupling results in section 4.4 of \cite{MBM17} to the case of EBM models (instead of GM models).
For this, we define the following family of events for any $\mathcal{U}_0$-valued r.v. $V$,
\begin{equation*}
{\mathscr{A}}_l(V) =\left\{U^{[V]}_{l}=\emptyset\right\},\,l\in{\mathbb N}.
\end{equation*}
For any $k\in{\mathbb N}$ and any $l > -k$, the event
\[\theta^k {\mathscr{A}}_{l+k}(V)= \left\{U^{[V]}_{l+k}\circ\theta^{-k}=\emptyset\right\}\]
has the following interpretation: a model started in state $V$ at time $-k$ is empty at time $l$.
Thus if we denote $V=(W,Z)$ we have that
\[\theta^k {\mathscr{A}}_{l+k}\left((W,Z)\right) = \left\{Q_\phi\Bigl(WC\circ\theta^{-k}... \,\,C\circ\theta^{l-1}\,,\,ZS\circ\theta^{-k}...\,\,S\circ\theta^{l-1} \Bigl)=\emptyset\right\}.\]
Clearly, $\suite{{\mathscr{A}}_n(V)}$ form a sequence of renovating events of length 1 for the recursion $\suite{U^{[V]}_n}$, for any such initial condition $V$ (see \cite{Foss92,Foss94}).
Thus the following result is a consequence e.g. of Theorem 2.5.3 of \cite{BacBre02}, and is proven similarly to Proposition 8 in
\cite{MBM17},
\begin{proposition}
\label{pro:renov1}
Let $G=({\mathcal{V}},{\mathcal{E}})$ be a matching graph, $\phi$ be an admissible policy and
$V$ be a $\mathcal{U}$-valued random variable. Suppose that assumption (H1) holds, and that
\begin{equation}
\label{eq:renov0}
\lim_{n\to\infty} \bpr{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_{l}(V) \cap \theta^k{\mathscr{A}}_{l+k}(V)}=1.
\end{equation}
Then, there exists a stationary buffer detail sequence $\suite{U\circ\theta^n}$, to which
$\suite{U^{[V]}_{n}}$ converges with strong backwards coupling. Moreover we have $\bpr{U=\emptyset}>0$.
\end{proposition}
Define the following sets of random variables:
\begin{equation*}
\mathscr V^r=\Bigl\{\mathcal{U}_0-\mbox{ valued r.v. $(W,Z)$:\, $|W|=|Z| \le r$ a.s.}\Bigl\},\,r\in {\mathbb N}_+,
\end{equation*}
and let
\[\mathscr V^{\infty}:= \bigcup_{r=1}^{+\infty} \mathscr V^r.\]
Define the following event for any admissible input $({\mathbf c},{\mathbf s}) \in {\mathcal{C}}^*\times {\mathcal S}^*$ of length $m$,
\begin{align}
\mathscr B\left({\mathbf c}\,,\,{\mathbf s}\right)
&=\left\{\left(C\,C\circ\theta \,... \,C\circ\theta^{m-1}\;,\;S\,S\circ\theta \,... \,S \circ\theta^{m-1}\right)=\left({\mathbf c}\,,\,{\mathbf s}\right)\right\}.\label{eq:defB}
\end{align}
We have the following analog to Theorem 3 in \cite{MBM17},
\begin{theorem}
\label{thm:main}
Let $\phi$ be a sub-additive policy and $\mathcal B$ be a bipartite matching structure such that $(\mathcal B,\phi)$ admit at least one
strong erasing couple. Suppose that, for some $r \in \mathbb N_+$,
\begin{equation}
\label{eq:renov1}
\lim_{n\to\infty} \bpr{\bigcap_{k=0}^{\infty} \bigcup_{l=0}^n {\mathscr{A}}_l(\emptyset) \cap \theta^k{\mathscr{A}}_{l+k}(\emptyset) \cap \theta^{-l}\mathscr B\left({\mathbf c}^1...{\mathbf c}^r\,,\,{\mathbf s}^1\,...\,{\mathbf s}^r\right)}=1,
\end{equation}
where $({\mathbf c}^1,{\mathbf s}^1),\,...,\,({\mathbf c}^r,{\mathbf s}^r)$ are $r$ (possibly identical) strong erasing couples for $\mathcal B$ and $\phi$.
Then, there exists a solution $U^r$ to (\ref{eq:recurstat}) in $\mathscr V^{\infty}$ such that $\bpr{U^r=\emptyset}>0$, and
to which all sequences $\suite{U^{[V]}_{n}}$, for $V\in \mathscr V^{r}$, converge with strong backwards coupling.
If the above is true for any $r \in {\mathbb N}_+$, then there exists a solution $U^*$ to (\ref{eq:recurstat}) such that $\bpr{U^*=\emptyset}>0$,
and to which all sequences $\suite{U^{[V]}_{n}}$, for $V\in \mathscr V^{\infty}$, converge with strong backwards coupling.
\end{theorem}
\begin{proof}
Understanding (strong) erasing couples as the analog of (strong) erasing words in \cite{MBM17}, we can prove the first statement for all $r$ similarly to Proposition 9 in \cite{MBM17}.
This follows, as in Lemma 5 in \cite{MBM17}, from the fact that
$\left\{{\mathscr{A}}_n\left(\emptyset\right) \cap \theta^{-n}\mathscr B\left({\mathbf c}^1...{\mathbf c}^r\,,\,{\mathbf s}^1\,...\,{\mathbf s}^r\right)\right\}_{n\in{\mathbb N}}$ is a sequence of renovating events of length $m=\sum_{i=1}^r |{\mathbf c}^i|$
for the recursion $\suite{U^{[V]}_{n}}$, for any $V\in \mathscr V^{r}$. The uniqueness statement follows by induction on $r$,
exactly as in the proof of Theorem 3 in \cite{MBM17}.
\end{proof}
\begin{ex}
\label{ex:NNbis}
\rm
We address the toy example of the `NN' graph, for which we explicitly construct the unique solution to (\ref{eq:recurstat}) for $\phi=\textsc{fcfs}$ and {\sc lcfs}.
Specifically, consider the structure ${\mathcal{B}}=\left(\{1,2,3\},\{\bar 1,\bar 2,\bar 3\},E,F\right)$, where the matching graph
$E$ and the arrival graph $F$ are respectively given by the left and right graphs in Figure \ref{Fig:NNarrival}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw[-] (-3.5,3) -- (-2.5,2);
\draw[-] (-3.5,3) -- (-3.5,2);
\draw[-] (-2.5,3) -- (-1.5,2);
\draw[-] (-2.5,3)-- (-2.5,2);
\draw[-] (-1.5,3)-- (-1.5,2);
\fill (-3.5,3) circle (2.5pt) node[above] {\small{1}} ;
\fill (-2.5,3) circle (2.5pt) node[above] {\small{2}} ;
\fill (-1.5,3) circle (2.5pt) node[above] {\small{3}} ;
\fill (-3.5,2) circle (2.5pt) node[below] {\small{$\bar 1$}} ;
\fill (-2.5,2) circle (2.5pt) node[below] {\small{$\bar 2$}} ;
\fill (-1.5,2) circle (2.5pt) node[below] {\small{$\bar 3$}} ;
\draw[-] (1.5,3) -- (2.5,2);
\draw[-] (2.5,3) -- (1.5,2);
\draw[-] (2.5,3) -- (3.5,2);
\draw[-] (3.5,3)-- (2.5,2);
\draw[-] (3.5,3)-- (1.5,2);
\fill (1.5,3) circle (2.5pt) node[above] {\small{1}} ;
\fill (2.5,3) circle (2.5pt) node[above] {\small{2}} ;
\fill (3.5,3) circle (2.5pt) node[above] {\small{3}} ;
\fill (1.5,2) circle (2.5pt) node[below] {\small{$\bar 1$}} ;
\fill (2.5,2) circle (2.5pt) node[below] {\small{$\bar 2$}} ;
\fill (3.5,2) circle (2.5pt) node[below] {\small{$\bar 3$}} ;
\end{tikzpicture}
\caption[smallcaption]{Matching graph (left) and arrival graph (right) of Example \ref{ex:NNbis}.} \label{Fig:NNarrival}
\end{center}
\end{figure}
The matching structure ${\mathcal{B}}$ satisfies case (iiia) of Proposition \ref{pro:strongcouple}, for $\check{{\mathcal{C}}}=\{1,2\}$, $\check{{\mathcal S}}=\{\bar 2,\bar 3\}$ and
$\mathscr P=1{\--} \bar 2 {\--} 2 {\--} \bar 3$. Thus there necessarily exist strong erasing couples for both {\sc fcfs} and {\sc lcfs}.
The construction below is independent of the lists of preferences, and we drop again these parameters from all notation.
Set $\Omega^0:=\left\{\om,\,\theta\om,\,...\,,\theta^8\om\right\}$, where
\begin{equation}
\label{eq:om}
\small{\om=...(1,\bar 2)(2,\bar 1)(1,\bar 2)(2,\bar 3)(1,\bar 2)(2,\bar 3)(2,\bar 3)(3,\bar 1)(3,\bar 2)\underline{\mathbf{(1,\bar 2)}}(2,\bar 1)(1,\bar 2)(2,\bar 3)(1,\bar 2)(2,\bar 3)(2,\bar 3)(3,\bar 1)(3,\bar 2)...,}
\end{equation}
in which the underlined couple is the $0$-coordinate (i.e. the origin of time). Setting $\mathscr F^0$ as the power-set of $\Omega^0$
and $\mathbb P$ the uniform probability on $\Omega^0$, it is immediate that $\mathscr Q^0=\left(\Omega^0,\mathscr F^0,\mathbb P,\theta\right)$ is a stationary ergodic quadruple.
Observe that the image measure $\mathbb P$ corresponds to the following probability measure $\mu$ on $F$:
\[\mu(1,\bar 2)={1 \over 3},\quad\mu(2,\bar 3)={1 \over 3},\quad\mu(2,\bar 1)={1 \over 9},\quad\mu(3,\bar 2)={1 \over 9},\quad\mu(3,\bar 1)={1 \over 9},\]
which clearly satisfies (\ref{eq:Ncond}). (Observe however, taking the independent set $I=\{3\}\cup\{\bar 1\}$, that (\ref{eq:Scond}) does not hold.)
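The condition (\ref{eq:Ncond}) can also be checked mechanically for this example. The following Python sketch (the class encoding and the function names are ours, purely for illustration, and the subsets are restricted to non-empty proper subsets, for which the condition is intended) enumerates all such subsets of customer and server classes and verifies the two inequalities of (\ref{eq:Ncond}) for the measure $\mu$ above and the matching graph of Figure \ref{Fig:NNarrival}.
\begin{verbatim}
# Sketch: verifying condition (N_cond) for the measure mu of Example NNbis
# on the 'NN' matching graph; naming conventions are illustrative assumptions.

from itertools import combinations

C = [1, 2, 3]                       # customer classes
S = ['b1', 'b2', 'b3']              # server classes (b = "bar")
E = {(1, 'b1'), (1, 'b2'), (2, 'b2'), (2, 'b3'), (3, 'b3')}   # matching graph
mu = {(1, 'b2'): 3/9, (2, 'b3'): 3/9, (2, 'b1'): 1/9, (3, 'b2'): 1/9, (3, 'b1'): 1/9}

mu_C = {c: sum(p for (a, b), p in mu.items() if a == c) for c in C}
mu_S = {s: sum(p for (a, b), p in mu.items() if b == s) for s in S}

def neighbors_S(A):                 # S(A): server classes compatible with some class of A
    return {s for s in S if any((c, s) in E for c in A)}

def neighbors_C(B):                 # C(B): customer classes compatible with some class of B
    return {c for c in C if any((c, s) in E for s in B)}

def proper_subsets(X):              # non-empty proper subsets of X
    return (set(sub) for r in range(1, len(X)) for sub in combinations(X, r))

ncond = all(sum(mu_C[c] for c in A) < sum(mu_S[s] for s in neighbors_S(A))
            for A in proper_subsets(C)) and \
        all(sum(mu_S[s] for s in B) < sum(mu_C[c] for c in neighbors_C(B))
            for B in proper_subsets(S))
print('(N_cond) holds:', ncond)     # expected: True
\end{verbatim}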
Consider the couple $({\mathbf c},{\mathbf s})=(331212122,\bar 1\bar 2\bar 2\bar 1 \bar 2\bar 3\bar 2\bar 3\bar 3)$. It can be checked that
$Q_{\textsc{fcfs}}\left(\breve{{\mathbf c}},\breve{{\mathbf s}}\right)=\emptyset$ and $Q_{\textsc{lcfs}}\left(\breve{{\mathbf c}},\breve{{\mathbf s}}\right)=\emptyset$ for any respective suffixes $\breve{{\mathbf c}}$ and $\breve{{\mathbf s}}$ of ${\mathbf c}$ and ${\mathbf s}$.
Moreover, we obtain on the one hand that
\[Q_{\textsc{fcfs}}\left(3{{\mathbf c}},\bar 1{{\mathbf s}}\right)=Q_{\textsc{fcfs}}\left(3{{\mathbf c}},\bar 2{{\mathbf s}}\right)=Q_{\textsc{fcfs}}\left(2{{\mathbf c}},\bar 1{{\mathbf s}}\right)
=Q_{\textsc{fcfs}}\left(1{{\mathbf c}},\bar 3{{\mathbf s}}\right)=\emptyset,\]
and on the other hand,
\[Q_{\textsc{lcfs}}\left(3{{\mathbf c}}{\mathbf c}\,,\,\bar 1{{\mathbf s}}{\mathbf s}\right)=Q_{\textsc{lcfs}}\left(3{{\mathbf c}}{\mathbf c},\bar 2{{\mathbf s}}{\mathbf s}\right)=Q_{\textsc{lcfs}}\left(2{{\mathbf c}}{\mathbf c},\bar 1{{\mathbf s}}{\mathbf s}\right)
=Q_{\textsc{lcfs}}\left(1{{\mathbf c}}{\mathbf c},\bar 3{{\mathbf s}}{\mathbf s}\right)=\emptyset.\]
Thus $({\mathbf c},{\mathbf s})$ is a strong erasing couple for $({\mathcal{B}},\textsc{fcfs})$, whereas $({\mathbf c}\mc,{\mathbf s}\ms)$ is a strong erasing couple for $({\mathcal{B}},\textsc{lcfs})$.
By the very definition of $\Omega^0$, condition (\ref{eq:renov1}) is trivially satisfied for all $r$ for both {\sc fcfs} and {\sc lcfs}. Therefore there exists a unique stationary
solution $U^{\scriptsize{\mbox{\textsc{f}}}}$ (resp., $U^{\scriptsize{\mbox{\textsc{l}}}}$) of (\ref{eq:recurstat}) for {\sc fcfs} (resp., {\sc lcfs}). They are respectively given by
\begin{equation}
\label{eq:solNNfcfs}\left\{\begin{array}{c}
U^{\scriptsize{\mbox{\textsc{f}}}}(\om)=(33,\bar 1\bar 2),\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta\om)=(33,\bar 2\bar 2),\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^2\om)=(33,\bar 2\bar 1),\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^3\om) =(33,\bar 1\bar 2),\\U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^4\om)=(3,\bar 1),\,
U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^5\om)=(3,\bar 2),\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^6\om)=\emptyset,\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^7\om)=\emptyset,\,U^{\scriptsize{\mbox{\textsc{f}}}}(\theta^8\om)=(3,\bar 1);
\end{array}\right.
\end{equation}
\begin{equation}
\label{eq:solNNlcfs}\left\{\begin{array}{c}
U^{\scriptsize{\mbox{\textsc{l}}}}(\om)=(33,\bar 1\bar 2),\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta\om)=(33,\bar 1\bar 2),\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^2\om)=(33,\bar 1\bar 1),\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^3\om) =(33,\bar 1\bar 2),\\U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^4\om)=(3,\bar 1),\,
U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^5\om)=(3,\bar 2),\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^6\om)=\emptyset,\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^7\om)=\emptyset,\,U^{\scriptsize{\mbox{\textsc{l}}}}(\theta^8\om)=(3,\bar 1).
\end{array}\right.
\end{equation}
\end{ex}
\subsection{Independent Case}
\label{subsec:iid}
We show in this Section that the conditions (\ref{eq:renov0}) and (\ref{eq:renov1}) take a simpler form
if we assume additionally that the input sequence is mutually independent, i.e. under assumption (IID).
Denote, for any $\mathcal{U}_0$-valued r.v. $V=(W,Z)$ and any $j\in\mathbb N^*$, by $\tau_j(V)$ the $j$-th visit time to $\emptyset$ for the process $\left(U_n^{[V]}\right)_n$:
\[\tau_0(V) :=0, \quad \tau_j(V) := \inf \{n > \tau_{j-1}(V), U_{n}^{[V]} = \emptyset \}, \; j\geq 1.\]
We define the following stability condition depending on the initial condition $V$,
\begin{itemize}
\item[\textbf{(H2)}] The stopping time $\tau_1(V)$ is integrable.
\end{itemize}
A quick survey of the literature on the stability of matching models gives the following,
\begin{proposition}
\label{prop:H2}
Assumption (H2) holds true for any $\mathcal{U}_0$-valued r.v. $V$, in the following cases:
\begin{enumerate}
\item $({\mathcal{C}}\cup{\mathcal S},A)$ is strongly connected and $({\mathcal{B}},\mu)$ satisfies (\ref{eq:Scond});
\item $({\mathcal{C}}\cup{\mathcal S},A)$ is strongly connected, $({\mathcal{B}},\mu)$ satisfies (\ref{eq:Ncond}) and $\phi=\textsc{ml}$;
\item $({\mathcal{C}}\cup{\mathcal S},A)$ is strongly connected, ${\mathcal{G}}$ is bi-separable and $({\mathcal{B}},\mu)$ satisfies (\ref{eq:scondmonotone});
\item ${\mathcal{B}}$ defines a BM model and $\phi=\textsc{fcfs}$;
\item ${\mathcal{B}}$ defines a GM model and $\phi=\textsc{fcfs}$;
\item ${\mathcal{B}}$ defines a GM model and $\phi=${\sc ml}.
\end{enumerate}
\end{proposition}
\begin{proof}
Assertions 1 and 2 follow from Theorem 4.2 in \cite{BGM13}, and respectively from Proposition 5.2 and Theorem 7.1 in \cite{BGM13}. Assertion
4 follows from Theorem 3 in \cite{AW11} (revisited in Theorem 2 of \cite{ABMW17}). Items 5 and 6 are precisely assertions 1 and 2 of Proposition 11 in \cite{MBM17}.
To prove assertion 3, just notice that (\ref{eq:scondmonotone}) entails (\ref{eq:Scond}) from Proposition \ref{pro:equivalenceBGM}, and we conclude by applying assertion 1.
\end{proof}
We have the following result,
\begin{theorem}
\label{thm:mainiid}
Suppose that assumptions (IID) and (H2) hold. If $\phi$ is sub-additive and $(\mathcal B,\phi)$ are such that any admissible buffer detail $({\mathbf w},{\mathbf z})$ admits an erasing couple, then
there exists a unique solution $U^*$ to (\ref{eq:recurstat}) in $\mathscr V^{\infty}$ such that $\bpr{U^*=\emptyset}>0$, and to which all sequences
$\suite{U^{[V]}_n}$, for $V \in \mathscr V^{\infty}$, converge with strong backwards coupling.
\end{theorem}
\begin{proof}
This result is analogous to Theorem 3 in \cite{MBM17}, and we skip the technical details of the proof. The main steps, which can be checked using similar arguments to
those in \cite{MBM17}, are as follows:
\begin{enumerate}
\item Under these assumptions, an argument analogous to Proposition 13 in \cite{MBM17} shows that there is forward coupling between $\suite{U^{[V]}_n}$ and $\suite{U^{[V^*]}_n}$ for any $V$ and $V^*$ in $\mathscr V^\infty$;
\item Fix a r.v. $V\in\mathscr V^{\infty}$, and let $r$ be such that $V \in \mathscr V^r$. From the independence of the input, the sub-additivity of $\phi$ and the existence of erasing couples of any $({\mathbf w},{\mathbf z})$
for $(\mathcal B,\phi)$, one can show similarly to Proposition 12 in \cite{MBM17} that assumption (H2) entails (\ref{eq:renov0});
\item As (H1) holds true under assumption (IID), we can apply Proposition \ref{pro:renov1}: the sequence $\suite{U^{[V]}_n}$ converges with strong backwards coupling, and thereby also in the forward sense, to a stationary sequence
$\suite{U\circ\theta^n}$, where $U\in\mathscr V^{\infty}$. But from 1., any pair of such stationary sequences $\suite{U\circ\theta^n}$ and $\suite{U^*\circ\theta^n}$ couple, and therefore coincide almost surely.
Thus there exists a unique solution $U$ to (\ref{eq:recurstat}) in $\mathscr V^{\infty}$. This completes the proof.
\end{enumerate}
\end{proof}
\paragraph{Summarizing the results.}
To summarize Theorems \ref{thm:main} and \ref{thm:mainiid}, for a given bipartite matching structure $\mathcal B$ and a sub-additive matching policy $\phi$ (such as
{\sc ml}, random, priorities, {\sc lcfs} and {\sc fcfs} - Section \ref{sec:subadd}), there exists a unique proper stationary buffer detail on the Palm space $\mathscr Q$ in the two following general cases:
\begin{itemize}
\item The input is stationary ergodic, $(\mathcal B,\phi)$ admits a strong erasing couple (which is true if Proposition \ref{pro:strongcouple} - or Proposition \ref{pro:strongcoupleBM} for a BM model - holds true), and the renovation condition (\ref{eq:renov1}) (which entails (\ref{eq:Ncond})) holds true for any $r \ge 1$.
\item The input is iid, any admissible state admits an erasing couple for $(\mathcal B,\phi)$ (which is true in particular under either of the conditions of Proposition \ref{prop:existerasing}),
and the regeneration condition (H2) (satisfied under either of the conditions of Proposition \ref{prop:H2}) holds true.
\end{itemize}
\subsection{Constructing bi-infinite perfect matchings}
\label{subsec:matchings}
Fix a bipartite matching structure $\mathcal B$ and a sub-additive matching policy $\phi$. Suppose that we are under either one of the above conditions. Then there exists, on the Palm space $\mathscr Q$ of the input, a unique $\theta$-compatible buffer detail sequence $\suite{U\circ\theta^n}$ such that $\bpr{U=\emptyset}>0$. By the ergodicity of the shift, this readily entails that the set
\[\left\{n \in \mathbb Z\,:\,U\circ\theta^n = \emptyset\right\}=:\left\{\mbox{Construction points of the system on }\mathbb Z\right\}\]
is infinite on both sides of the origin. As is done for stationary queuing systems e.g. in \cite{Nev83} or Chapter 2 of \cite{BacBre02}, and for stationary BM models in Section 4.6 of \cite{MBM17},
we can easily use these construction points to build a unique stationary {\em matching} of the bi-infinite input $\left((C,S,\Sigma,\Gamma)\circ\theta^n\right)_{n\in\mathbb Z}$, by simply constructing the unique matching of the incoming customers and servers between each pair of successive construction points.
\begin{ex}[Example \ref{ex:NNbis}, continued]
\rm
From the unique respective stationary solutions of (\ref{eq:recurstat}) for $\phi=\textsc{fcfs}$ and {\sc lcfs}, given in (\ref{eq:solNNfcfs}) and (\ref{eq:solNNlcfs}), we can construct
the unique {\sc fcfs} and the unique {\sc lcfs} matchings of the input considered in Example \ref{ex:NNbis}. They are respectively represented (for the sample $\om$ defined by (\ref{eq:om})) in Figure
\ref{Fig:matchingNN}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.66]
\draw[-] (0.5,1) -- (23.5,1);
\draw[-] (0.5,1) -- (0.5,-5);
\draw[-] (0.5,-2) -- (23.5,-2);
\draw[-] (0.5,-5) -- (23.5,-5);
\draw[-] (23.5,-5) -- (23.5,1);
\fill (1,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (2,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (3,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (4,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (5,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (6,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (7,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (8,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (9,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (10,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (11,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (12,0) circle (2.5pt) node[above] {\scriptsize{$\mathbf{1}$}} ;
\fill (13,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (14,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (15,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (16,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (17,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (18,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (19,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (20,0) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (21,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (22,0) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (23,0) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (1,-1) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (2,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (3,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (4,-1) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (5,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (6,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (7,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (8,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (9,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (10,-1) circle (2pt) node[below] {\scriptsize{${\bar 1}$}} ;
\fill (11,-1) circle (2.5pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (12,-1) circle (2pt) node[below] {\scriptsize{$\mathbf{\bar 2}$}} ;
\fill (13,-1) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (14,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (15,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (16,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (17,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (18,-1) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (19,-1) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (20,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (21,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (22,-1) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (23,-1) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\draw[-] (1,0) -- (6,-1);
\draw[-] (2,0) -- (8,-1);
\draw[-] (3,0) -- (1,-1);
\draw[-] (4,0) -- (2,-1);
\draw[-] (5,0) -- (3,-1);
\draw[-] (6,0) -- (5,-1);
\draw[-] (7,0) -- (4,-1);
\draw[-] (8,0) -- (7,-1);
\draw[-] (9,0) -- (9,-1);
\draw[-] (10,0) -- (15,-1);
\draw[-] (11,0) -- (17,-1);
\draw[-] (12,0) -- (10,-1);
\draw[-] (13,0) -- (11,-1);
\draw[-] (14,0) -- (12,-1);
\draw[-] (15,0) -- (14,-1);
\draw[-] (16,0) -- (13,-1);
\draw[-] (17,0) -- (16,-1);
\draw[-] (18,0) -- (18,-1);
\draw[-] (19,0) -- (23.5,-0.9);
\draw[-] (20,0) -- (23.5,-0.58);
\draw[-] (21,0) -- (19,-1);
\draw[-] (22,0) -- (20,-1);
\draw[-] (23,0) -- (21,-1);
\draw[-] (22,-1) -- (23.5,-0.5);
\draw[-] (23,-1) -- (23.5,-0.5);
\fill (1,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (2,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (3,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (4,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (5,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (6,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (7,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (8,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (9,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (10,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (11,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (12,-3) circle (2.5pt) node[above] {\scriptsize{$\mathbf{1}$}} ;
\fill (13,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (14,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (15,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (16,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (17,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (18,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (19,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (20,-3) circle (2pt) node[above] {\scriptsize{3}} ;
\fill (21,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (22,-3) circle (2pt) node[above] {\scriptsize{2}} ;
\fill (23,-3) circle (2pt) node[above] {\scriptsize{1}} ;
\fill (1,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (2,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (3,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (4,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (5,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (6,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (7,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (8,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (9,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (10,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (11,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (12,-4) circle (2.5pt) node[below] {\scriptsize{$\mathbf{\bar 2}$}} ;
\fill (13,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (14,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (15,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (16,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (17,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (18,-4) circle (2pt) node[below] {\scriptsize{$\bar 3$}} ;
\fill (19,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (20,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (21,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\fill (22,-4) circle (2pt) node[below] {\scriptsize{$\bar 1$}} ;
\fill (23,-4) circle (2pt) node[below] {\scriptsize{$\bar 2$}} ;
\draw[-] (1,-3) -- (6,-4);
\draw[-] (2,-3) -- (8,-4);
\draw[-] (3,-3) -- (2,-4);
\draw[-] (4,-3) -- (3,-4);
\draw[-] (5,-3) -- (4,-4);
\draw[-] (6,-3) -- (5,-4);
\draw[-] (7,-3) -- (1,-4);
\draw[-] (8,-3) -- (7,-4);
\draw[-] (9,-3) -- (9,-4);
\draw[-] (10,-3) -- (15,-4);
\draw[-] (11,-3) -- (17,-4);
\draw[-] (12,-3) -- (11,-4);
\draw[-] (13,-3) -- (12,-4);
\draw[-] (14,-3) -- (13,-4);
\draw[-] (15,-3) -- (14,-4);
\draw[-] (16,-3) -- (10,-4);
\draw[-] (17,-3) -- (16,-4);
\draw[-] (18,-3) -- (18,-4);
\draw[-] (19,-3) -- (19.5,-3.1);
\draw[-] (19,-3) -- (23.5,-3.9);
\draw[-] (20,-3) -- (23.5,-3.58);
\draw[-] (21,-3) -- (20,-4);
\draw[-] (22,-3) -- (21,-4);
\draw[-] (23,-3) -- (22,-4);
\draw[-] (19,-4) -- (23.5,-3.25);
\draw[-] (23,-4) -- (23.5,-3.5);
\end{tikzpicture}
\caption[smallcaption]{Unique bi-infinite {\sc fcfs} matching (top) and {\sc lcfs} matching (bottom) for the matching and arrival graphs of Figure \ref{Fig:NNarrival} and the sample $\om$ in (\ref{eq:om}).
(The coordinate in bold is the origin of time.)} \label{Fig:matchingNN}
\end{center}
\end{figure}
\end{ex}
\section{Introduction}
Depth estimation is one of the fundamental problems in computer vision. It finds important applications in a large number of areas such as robotics, augmented reality, 3D reconstruction and self-driving cars, etc. This problem is heavily studied in the literature and is mainly tackled with two types of technical methodologies, namely active stereo vision such as structured light \cite{ScharsteinS03light}, time-of-flight \cite{ZhuWYDP11TOF}, and passive stereo vision including stereo matching \cite{lad2015stereo,luo2016stereo}, structure from motion \cite{SturmT96sfm}, photometric stereo \cite{EstebanVC08} and depth cue fusion \cite{Saxena09make3D}, etc. Among passive stereo vision methods, stereo matching is arguably the most widely applicable technique because it is accurate and it poses few assumptions on the sensors and the imaging procedure. Recent advances in this field show that the quality of stereo matching can be significantly improved by deep models trained with synthetic data and finetuned with a limited amount of real data \cite{mayer2016disp,pang2017cascade}.
\begin{figure}[t!]
\includegraphics[width=8.2cm,height=3.4cm]{figures/fig1.pdf}
\vspace{-5pt}
\caption{Pipeline of our approach on monocular depth estimation. We decompose the task into two parts: view synthesis and stereo matching. Both networks enforce the geometric reasoning capacity. With this new formulation, our approach is able to achieve state-of-the-art performance.}
\vspace{-13pt}
\label{figure1}
\end{figure}
On the other hand, the applicability of monocular depth estimation is greatly limited by its accuracy, even though the single-camera setting is much preferred in practice in order to avoid the calibration errors and synchronization problems that occur in a stereo camera setting. Estimating depth from a single view is difficult because it is an ill-posed and geometrically ambiguous problem. Advances in monocular depth estimation have recently been made by deep learning methods \cite{eigen2014depth,laina2016deeper,li2015depth,liu2015deep}. However, compared to the aforementioned passive stereo vision methods, which are grounded in geometric correctness, the formulation of the current state-of-the-art monocular methods is problematic. The reasons are twofold. First, current deep learning approaches to this problem rely almost completely on high-level semantic information and directly relate it to the absolute depth value. Because the operations in the network are general and do not have any prior knowledge of the function they need to approximate, learning such semantic information is difficult even if special constraints are imposed in the loss function. Second, even if effective learning can be achieved, the relationship between scene understanding and depth needs to be established from a huge number of real images with ground truth depth. Not only is such data very expensive to obtain at scale, but collecting high-quality dense labels is also difficult and time consuming, if not entirely impossible. This significantly limits the potential of the current formulation.
In this paper, we take a novel perspective and show for the first time that the monocular depth estimation problem can be formulated as a stereo matching problem in which the right view is automatically generated by a high-quality view synthesis network. The whole pipeline is shown in figure \ref{figure1}. The key insights here are that i) both view synthesis and stereo matching respect the underlying geometric principles; ii) both of them can be trained without using expensive real depth data and thus generalize well; iii) the whole pipeline can be collectively trained in an end-to-end fashion that optimizes geometrically correct objectives. Our method shares a similar idea with the Spatial Transformation Network \cite{Jaderberg2015STN}: although deep models can learn the necessary transformations by themselves, it might be more beneficial to explicitly model such transformations. We discover that the resulting model is able to outperform all the previous methods on the challenging KITTI dataset \cite{Geiger2012CVPR} by using only a small amount of real training data. The model also generalizes well to other monocular depth estimation datasets.
Our contributions can be summarized as follows.
\begin{itemize}
\item First, we discover that the monocular depth estimation problem can be effectively decoupled into two sub-problems with geometrical soundness. It forms a new foundation in advancing the performance in this field.
\item Second, we show that the whole pipeline can be trained end-to-end, and that it outperforms all the previous monocular methods by a large margin using only a fraction of the training data. Notably, this is the first monocular method to outperform the stereo block matching algorithm in terms of overall accuracy.
\end{itemize}
\section{Related Works}
There exists a large body of literature on depth estimation from images, either using a single view~\cite{saxena20083}, stereo views~\cite{scharstein2002taxonomy}, several overlapping images from different viewpoints~\cite{furukawa2015multi}, or temporal sequences~\cite{ranftl2016dense}.
For monocular depth estimation, Saxena \etal~\cite{saxena20083} propose one of the first supervised learning-based approaches to single image depth map prediction. They model depth prediction in a Markov random field and use hand-crafted multi-scale texture features. Recently, deep learning has proven its ability in many computer vision tasks, including single image depth estimation. Eigen \etal~\cite{eigen2014depth} propose the first CNN framework that predicts the depth in a coarse-to-fine manner. Laina \etal~\cite{laina2016deeper} employ a deeper ResNet~\cite{He2015res} structure with an efficient up-sampling design and achieve a boosted performance. Liu \etal~\cite{liu2015deep} also propose a deep structured learning approach that allows for training CNN features of unary and pairwise potentials in an end-to-end way. Chen \etal~\cite{chen2016wild} provide a novel insight by incorporating pair-wise depth relations into CNN training. Compared with metric depth, such pixel-level rankings are much easier to obtain. Further lines of research in supervised training of depth map prediction use the idea of depth transfer from example images~\cite{karsch2012depth, konrad20122d, liu2014discrete}, or combine it with semantic segmentation~\cite{eigen2015predicting,ladicky2014pulling,li2010towards,liu2010single,wang2015towards}. However, a large amount of high-quality labels is needed to establish the transformation from image space to depth space, and such data are not easy to collect at scale in real life.
Recently, a small number of deep-network-based methods attempt to estimate depth in an unsupervised way. Garg \etal~\cite{garg2016unsupervised} first introduce an unsupervised method supervised only by an image alignment loss. However, their loss is not fully differentiable, so they apply a first-order Taylor expansion to linearize it for back-propagation. Godard \etal~\cite{godard2016unsupervised} also propose an unsupervised deep learning framework, and they employ a novel loss function to enforce consistency between the predicted depth maps from each camera view. Kuznietsov \etal~\cite{kuznietsov2017semi} adopt a semi-supervised deep method to predict depths from single images. Sparse depth from LiDAR sensors is used for supervised learning, while a direct image alignment loss is integrated to produce photoconsistent dense depth maps in a stereo setup. Zhou \etal~\cite{zhou2017unsupervised} jointly estimate depth and camera pose in an unsupervised manner.
Although those unsupervised methods reduce the demand for expensive depth ground truth, their mechanisms are still inherently problematic, since they attempt to regress depth/disparity directly from a single image. The network architecture itself does not assume any geometric constraints and acts like a black box. In our work, we propose a novel strategy that decomposes this task into two separate procedures, namely synthesizing a corresponding right view followed by a stereo matching procedure. This idea is similar to the Spatial Transformation Network \cite{Jaderberg2015STN}, which learns a transformation within the network before conducting visual tasks like recognition.
To synthesize a novel view, DeepStereo ~\cite{flynn2015deepStereo} first proposes to render an unseen view by taking pixels from other views, and \cite{zhou2016view} predicts the appearance flow to reconstruct the target view. The Deep3D network of Xie \etal~\cite{xie2016deep3d} addresses the problem of generating the corresponding right view from an input left image. Their method produces a distribution over all the possible disparities for each pixel, which is used to generate the right image.
\begin{figure*}[t!]
\includegraphics[width=17.4cm,height=6.5cm]{figures/fig2.pdf}
\vspace{-20pt}
\caption{Details of our single view stereo matching network. Upper part is the view synthesis network. The input image is first processed by a CNN. It results in probabilistic disparity maps that help to reconstruct a synthetic right view by selectively taking pixels from nearby locations on the original left image. A stereo matching network, which is shown on the lower part of the figure, then takes both the original left image and synthetic right image to calculate an accurate disparity, which can be transformed into a corresponding depth map given the camera settings.}
\label{figure2}
\vspace{-15pt}
\end{figure*}
Conducting stereo matching on the original left input and the synthetic right view is now a 1D matching problem. The vast majority of works on stereo matching focus on learning a matching function that searches for corresponding pixels in the two images \cite{lad2015stereo,luo2016stereo}. Mayer \etal~\cite{mayer2016disp} introduce their fully convolutional DispNet to directly regress the disparity from the stereo pair. Later, Pang \etal~\cite{pang2017cascade} adopt a multi-scale residual network developed from DispNet and obtain refined results. These methods still rely on a large amount of labelled disparity maps as ground truth. Instead of using data from the real world, training on synthetic data \cite{mayer2016disp} becomes a more feasible solution for these approaches.
\section{Analysis and our approach}
In this section, we demonstrate how we decompose the task of monocular depth estimation into two separate tasks, and we illustrate our model designs for view synthesis and stereo matching separately.
\subsection{Analysis of the whole pipeline}
In our pipeline, we decompose the task of monocular depth estimation into two tasks, namely view synthesis and stereo matching. The whole pipeline is shown in figure \ref{figure2}. By tackling this problem in two separate steps, we find that both procedures obey primary geometric principles and can be trained without an expensive data supply. After that, the two networks can be collectively trained in an end-to-end manner. We further hypothesize that, when the whole pipeline is trained end-to-end, both components will retain their capacity of constraining geometric correctness, and the performance of the whole pipeline will be promoted thanks to joint training. Therefore, for both stages we choose methods that explicitly model the geometric transformation in the network design.
The first stage is view synthesis. For a stereo pair, binocular views are rendered by well synchronized and calibrated cameras, resulting in a strong correspondence between pixels in the horizontal direction. Unlike previous warp-based methods that generally require an accurate estimation of the underlying geometry, Deep3D~\cite{xie2016deep3d} proposes a new probabilistic scheme to transfer pixels from the original image. By this means, it directly formulates the transformation from the left image to the right image using a differentiable selection layer. We adopt its design and develop our view synthesis network based on it. Other reconstruction schemes \cite{garg2016unsupervised,godard2016unsupervised,kuznietsov2017semi} are also viable alternatives, but the choice of the specific view synthesis method is independent of the main insight of the paper.
After generating a high-quality novel view, our stereo matching network transforms the high-level scene understanding problem into a 1D matching problem, which results in lower computational complexity. In order to better utilize the geometric relation between the two views, we take the idea of 1D correlation employed in DispNetC~\cite{mayer2016disp}. We further adopt the DispFullNet structure mentioned in \cite{pang2017cascade} to achieve full resolution prediction.
\subsection{View synthesis network}
Our view synthesis network is shown in the upper part of figure \ref{figure2}. We develop this network based on the Deep3D~\cite{xie2016deep3d} model. Here we briefly introduce its structure. At the very beginning, an input left image $I_{l}$ is processed by a baseline network. We then upsample the features from different intermediate levels to the same resolution, in order to incorporate low-level features into the final prediction. Those features are then summed up to produce a probabilistic disparity map. Through a selection operation, pixels on the original $I_{l}$ are then selectively combined to form each pixel of the synthesized right image.
The selection operation is the core component of this network. This module is also illustrated in figure \ref{figure2}. Denoting by $I_{l}$ the input left image, previous Depth Image-Based Rendering (DIBR) techniques directly warp the left image, based on an estimated disparity, into a corresponding right image. Supposing $D$ is the predicted disparity aligned with the left image, the procedure can be formulated as
\vspace{-8pt}
\begin{equation}
\label{DIBR}
\begin{aligned}
\widetilde{I}_{r}(i,j - D_{i,j})&=I_{l}(i,j),&\quad (i,j)\in{\Omega_{l}},&\\
\end{aligned}
\end{equation}
\vspace{-13pt}
where $\Omega_{l}$ is the image space of $I_{l}$ and $i$, $j$ refer to the row and column on $I_{l}$ respectively. Though this function captures the geometric correspondence between images in a stereo setup, it requires an accurate disparity map to reconstruct the right view. At the same time, the function is not fully differentiable with respect to $D$, which limits the opportunity of training it with a deep neural network. The selection module, instead, formulates the reconstruction as a probabilistic summation. Denote by $D\in\mathbb{R}^{W\times H\times C}$ the probabilistic disparity result, where $W$ and $H$ are the width and height of the left image and
$C$ indicates the number of possible disparity shifts; the reconstruction can then be formulated as
\vspace{-11pt}
\begin{equation}
\label{selection}
\begin{aligned}
\widetilde{I}_{r}&=\sum_d I_{l}^{(d)}D^d.
\end{aligned}
\end{equation}
\vspace{-9pt}
Here, $I_{l}^{(d)}(i,j) = I_{l}(i,j+d)$ is the left image shifted by the candidate disparity value $d$, and $D^d$ is the corresponding channel of $D$. This operation sums up the stacked shifted inputs with learned weights and ensures the differentiability of the whole system.
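For concreteness, the selection operation of (\ref{selection}) can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the exact layer of our network; in particular, the boundary handling (replication of the right border) and the array layout are our own assumptions.
\begin{verbatim}
# Sketch of the probabilistic selection layer of Eq. (2): I_r = sum_d I_l^{(d)} * D^d.
# Boundary handling (edge replication) and array layouts are our own assumptions.

import numpy as np

def select_right_view(I_l, D):
    """I_l: left image, shape (H, W) or (H, W, 3); D: probabilistic disparities, (H, W, C)."""
    H, W = I_l.shape[:2]
    C = D.shape[2]
    I_r = np.zeros_like(I_l, dtype=np.float64)
    for d in range(C):
        # shifted left image: I_l^{(d)}(i, j) = I_l(i, j + d), replicating the right border
        cols = np.minimum(np.arange(W) + d, W - 1)
        shifted = I_l[:, cols]
        weight = D[:, :, d] if I_l.ndim == 2 else D[:, :, d][..., None]
        I_r += shifted * weight          # weighted sum over candidate disparities
    return I_r

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    I_l = rng.random((192, 640, 3))
    logits = rng.random((192, 640, 65))
    D = logits / logits.sum(axis=2, keepdims=True)   # normalized weights over 65 shifts
    print(select_right_view(I_l, D).shape)            # (192, 640, 3)
\end{verbatim}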
To supervise the reconstruction quality, we do not propose any special loss function. We find that a simple L1 loss supervising on the reconstructed appearance is sufficient for the task of view synthesis:
\vspace{-8pt}
\begin{equation}
\label{L1Loss}
\begin{aligned}
L_{view} = \frac{1}{N}\sum_{i,j}\left| \widetilde{I}_{r}(i,j) - I_{r}(i,j)\right|
\end{aligned}
\end{equation}
\subsection{Stereo matching network}
There exists a large body of literature tackling the problem of stereo matching. Recent advancements are achieved by deep learning models. Not only do deep networks help to effectively find similar pixel pairs, research also shows that these networks can be trained on a large amount of synthetic data and still generalize well to real images \cite{mayer2016disp}. In our pipeline, we select the state-of-the-art DispNetC \cite{mayer2016disp} structure as the network for the stereo matching task. We further follow the modifications made in \cite{pang2017cascade} and adopt a DispFullNet structure for full-resolution output. The structure of this method can be seen in the lower part of figure \ref{figure2}. We briefly illustrate the method here; the detailed settings can be found in the corresponding papers.
After the two views are processed by several convolutional layers, a 1D correlation is calculated on the resulting features. This correlation layer is very useful in the stereo matching problem since it explicitly encodes the geometric relationship into the model design, and the horizontal correlation is indeed an effective cue for finding the most similar pairs. The correlation features are further concatenated with higher-level features from the left image $I_{l}$. An encoder-decoder network then processes the concatenated features and produces disparities at different scales. These intermediate and final results are supervised by the ground truth disparity using an L1 loss.
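As a rough illustration of this component, a 1D horizontal correlation between two feature maps can be sketched as follows. The displacement range, the zero padding and the normalization below are assumptions made for this sketch and may differ from the exact correlation layer of DispNetC.
\begin{verbatim}
# Sketch of a 1D (horizontal) correlation layer between left/right feature maps.
# Displacement range, zero padding and normalization are illustrative assumptions.

import numpy as np

def correlation_1d(f_left, f_right, max_disp=40):
    """f_left, f_right: feature maps of shape (H, W, F); returns (H, W, max_disp + 1)."""
    H, W, F = f_left.shape
    corr = np.zeros((H, W, max_disp + 1))
    for d in range(max_disp + 1):
        # compare f_left(i, j) with f_right(i, j - d); out-of-range columns stay zero
        if d < W:
            corr[:, d:, d] = (f_left[:, d:, :] * f_right[:, :W - d, :]).sum(axis=2) / F
    return corr

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    fl = rng.random((48, 160, 64))       # e.g. features at a reduced resolution
    fr = rng.random((48, 160, 64))
    print(correlation_1d(fl, fr).shape)  # (48, 160, 41)
\end{verbatim}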
\subsection{End-to-end training of the whole pipeline}
Once the two networks have been trained separately to acquire geometric reasoning abilities for view synthesis and stereo matching respectively, they can be combined for joint training. End-to-end training of the whole pipeline can thus be performed to enforce the collaboration of the two sub-networks.
\section{Experiments}
In this section, we present our experiments and results. Our method achieves state-of-the-art monocular depth estimation results on the widely used KITTI dataset \cite{Geiger2012CVPR}. We highlight the key insights of this method and validate our methodology. We also make the first attempt to run our single view approach on the challenging KITTI Stereo 2015 benchmark \cite{Menze2015CVPR}.
\subsection{Dataset and Evaluation Metrics}
We evaluate our approach on the publicly available KITTI benchmark \cite{Geiger2012CVPR}. In order to fairly compare with other methods on monocular depth estimation, we use the raw sequences of KITTI and employ the split scheme proposed by Eigen \etal~\cite{eigen2014depth}. This split results in a test set with 697 images. The remaining data is used for training and validation. Overall we have 22600 stereo pairs for training our view synthesis network. Besides stereo image pairs, the dataset also contains sparse 3D laser measurements taken from a Velodyne laser sensor. They can be projected onto the image space and serve as depth labels. Parameters of the stereo setup and the camera intrinsics are also provided; therefore we can convert depth into disparity as ground truth during end-to-end training and recover depth from disparity during inference.
Evaluation metrics are as follows; they indicate the error and the accuracy of the predicted monocular depth:
\begin{itemize}
\item ARD: $\frac{1}{N}\sum_{i=1}^N{\left|Dep_i - Dep_i^{g.t.}\right|}/{Dep_i^{g.t.}}$
\item SRD: $\frac{1}{N}\sum_{i=1}^N{\lVert Dep_i - Dep_i^{g.t.}\rVert^2}/{Dep_i^{g.t.}}$
\item RMSE: $\sqrt{\frac{1}{N}\sum_{i=1}^N\lVert Dep_i - Dep_i^{g.t.} \rVert^2}$
\item RMSE(log): $\sqrt{\frac{1}{N}\sum_{i=1}^N\lVert \log(Dep_i) - \log(Dep_i^{g.t.}) \rVert^2}$
\item Accuracy: \% of $Dep_i$ such that $\max(\frac{Dep_i}{Dep_i^{g.t.}},\frac{Dep_i^{g.t.}}{Dep_i})=\delta < thr$
\end{itemize}
Here $N$ is the number of pixels with valid (non-empty) depth ground truth.
To compare with other works in a consistent manner, we only evaluate on the cropped region proposed by Eigen \etal~\cite{eigen2014depth}. Also, since previous methods restrict the depth to different ranges for evaluation, we provide our results using both the cap of 0-80m (following Eigen \etal~\cite{eigen2014depth}) and that of 1-50m (following Garg \etal~\cite{garg2016unsupervised}). This requires discarding the pixels whose ground-truth depth is outside the proposed range.
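The metrics can be computed directly from the formulas above. The following Python sketch summarizes the evaluation procedure; the masking of empty ground-truth pixels and the depth-cap convention are our own assumptions.
\begin{verbatim}
# Sketch of the evaluation metrics on valid (non-empty) ground-truth pixels.
# The masking and depth-cap conventions are our assumptions.

import numpy as np

def evaluate_depth(pred, gt, min_depth=1e-3, max_depth=80.0):
    """pred, gt: depth maps in meters (same shape); gt == 0 marks empty pixels."""
    mask = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[mask], gt[mask]
    pred = np.clip(pred, min_depth, max_depth)

    ard = np.mean(np.abs(pred - gt) / gt)
    srd = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    delta = np.maximum(pred / gt, gt / pred)
    acc = {thr: np.mean(delta < thr) for thr in (1.25, 1.25 ** 2, 1.25 ** 3)}
    return ard, srd, rmse, rmse_log, acc

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    gt = rng.uniform(0.0, 80.0, size=(375, 1242))     # sparse g.t.: many zeros in practice
    pred = gt * rng.uniform(0.8, 1.2, size=gt.shape)
    print(evaluate_depth(pred, gt))
\end{verbatim}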
\subsection{Implementation Details}
\label{secion:impl_details}
The training of the model is divided into two stages. First, we train the two networks, which serve different purposes, separately. In the second stage, we combine the two parts and further finetune the whole pipeline in an end-to-end fashion. The training is conducted using the Caffe framework \cite{jia14caffe}.
In the first stage, the networks are trained separately. For the training of the view synthesis network, 22600 stereo pairs from KITTI are used. We select VGG16 as the baseline network and initialize its weights using the model pre-trained on ImageNet \cite{Simonyan14c}. All other weights are initialized following the same scheme as in \cite{xie2016deep3d}. Compared with the original Deep3D model \cite{xie2016deep3d}, we make some modifications to make it suitable for the view synthesis task on the KITTI dataset. First, the input size is larger and is selected to be $640\times192$, which retains the aspect ratio of the original KITTI images. Second, one more convolution layer is employed before the deconvolution at each branch. Third, since the disparity ranges differ between KITTI and the 3D movie dataset, we change the possible disparity range: a 65-channel probabilistic map representing possible disparities from 0 to 64 now becomes the final features. Last, to accommodate the larger inputs and the deeper network structure, we decrease the batch size to 2 and remove the original BatchNorm layers in the Deep3D model. The model is trained for 200K iterations with an initial learning rate of 0.0002. For the training of the DispFullNet used for stereo matching, we follow the training scheme specified in \cite{pang2017cascade}. The model is trained mainly on the synthetic FlyingThings3D dataset \cite{mayer2016disp} and optionally finetuned on the KITTI stereo training set \cite{Menze2015CVPR}. This KITTI stereo training set contains 200 stereo pairs with relatively high-quality disparity labels, and it has no overlap with the test data from the KITTI Eigen test set. The detailed settings can be found in the paper of Pang \etal~\cite{pang2017cascade}.
In the second stage, the two networks with pre-trained weights are trained end-to-end. A small amount of data from the KITTI Eigen training set with ground truth disparity labels is used to finetune the whole pipeline. Since the input to the stereo matching network has a larger dimension, upsampling is performed inside the network to enlarge the synthetic right view produced by the first stage.
Data augmentation is optionally performed in both stages. The input is randomly resized to a dimension slightly greater than the desired input size, then cropped to the desired size and fed into the network. The color intensity is also multiplied by a factor between 0.8 and 1.2.
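A minimal sketch of this augmentation step is given below; the resize range, the nearest-neighbour interpolation and the ordering of the operations are illustrative assumptions (for stereo pairs, the same crop has to be applied to both views).
\begin{verbatim}
# Sketch of the random resize / crop / color augmentation described above.
# Resize range, interpolation and ordering are illustrative assumptions.

import numpy as np

def augment(image, out_h=192, out_w=640, rng=None):
    """image: float array (H, W, 3) in [0, 1]; returns an augmented (out_h, out_w, 3) crop."""
    if rng is None:
        rng = np.random.default_rng()
    H, W = image.shape[:2]
    # 1. random resize to a size slightly larger than the target (nearest-neighbour here)
    scale = rng.uniform(1.0, 1.2)
    new_h, new_w = int(out_h * scale), int(out_w * scale)
    rows = (np.arange(new_h) * H / new_h).astype(int)
    cols = (np.arange(new_w) * W / new_w).astype(int)
    resized = image[rows][:, cols]
    # 2. random crop to the desired input size
    top = rng.integers(0, new_h - out_h + 1)
    left = rng.integers(0, new_w - out_w + 1)
    crop = resized[top:top + out_h, left:left + out_w]
    # 3. random color intensity factor in [0.8, 1.2]
    return np.clip(crop * rng.uniform(0.8, 1.2), 0.0, 1.0)

if __name__ == '__main__':
    img = np.random.default_rng(0).random((375, 1242, 3))
    print(augment(img).shape)   # (192, 640, 3)
\end{verbatim}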
\begin{table*}[tp]
\setlength{\tabcolsep}{10pt}
\footnotesize
\centering
\begin{adjustbox}{max width=1.0\textwidth}
\begin{tabular}{@{}l|c|cccc|ccc@{}}
\toprule
\multicolumn{1}{l|}{Approach} & \multicolumn{1}{c|}{cap} & ARD & SRD & RMSE & RMSE(log) & $\delta < 1.25$ & $\delta < 1.25^2$ & $\delta < 1.25^3$ \\ \cline{3-9}
& & \multicolumn{4}{c|}{lower is better} & \multicolumn{3}{c}{higher is better} \\ \midrule
Stereo\_gt\_right & $0-80$ m & 0.062 & 0.424 & 3.677 & 0.164 & 0.939 & 0.968 & 0.981 \\ \midrule
Eigen \etal~\cite{eigen2014depth} & $0-80$ m & 0.215 & 1.515 & 7.156 & 0.270 & 0.692 & 0.899 & 0.967 \\
Liu \etal~\cite{liu2014discrete} & $0-80$ m & 0.217 & 1.841 & 6.986 & 0.289 & 0.647 & 0.882 & 0.961 \\
Zhou \etal~\cite{zhou2017unsupervised} & $0-80$ m & 0.183 & 1.595 & 6.709 & 0.270 & 0.734 & 0.902 & 0.959 \\
Godard \etal~\cite{godard2016unsupervised} & $0-80$ m & 0.114 & 0.898 & 4.935 & 0.206 & 0.861 & 0.949 & 0.976 \\
Kuznietsov \etal~\cite{kuznietsov2017semi} & $0-80$ m & 0.113 & 0.741 & 4.621 & 0.189 & 0.862 & 0.960 & \textbf{0.986} \\
Ours, w/o end-to-end finetuned & $0-80$ m& 0.102 & 0.700 & 4.681 & 0.200 & 0.872 & 0.954 & 0.978 \\
Ours & $0-80$ m& \textbf{0.094} & \textbf{0.626} & \textbf{4.252} & \textbf{0.177} & \textbf{0.891} & \textbf{0.965} & 0.984 \\ \midrule
Stereo\_gt\_right & $1-50$ m & 0.058 & 0.316 & 2.675 & 0.152 & 0.947 & 0.971 & 0.983 \\ \midrule
Zhou \etal~\cite{zhou2017unsupervised} & $1-50$ m & 0.190 & 1.436 & 4.975 & 0.258 & 0.735 & 0.915 & 0.968 \\
Garg \etal~\cite{garg2016unsupervised} & $1-50$ m & 0.169 & 1.080 & 5.104 & 0.273 & 0.740 & 0.904 & 0.962 \\
Godard \etal~\cite{godard2016unsupervised} & $1-50$ m & 0.108 & 0.657 & 3.729 & 0.194 & 0.873 & 0.954 & 0.979 \\
Kuznietsov \etal~\cite{kuznietsov2017semi} & $1-50$ m & 0.108 & 0.595 & 3.518 & 0.179 & 0.875 & 0.964 & \textbf{0.988} \\
Ours, w/o end-to-end finetuned & $1-50$ m & 0.097 & 0.539 & 3.503 & 0.187 & 0.885 & 0.960 & 0.981 \\
Ours & $1-50$ m & \textbf{0.090} & \textbf{0.499} & \textbf{3.266} & \textbf{0.167} & \textbf{0.902} & \textbf{0.968} & 0.986 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-6pt}
\caption{Quantitative results of our method and approaches reported in the literature on the test set of the KITTI Raw dataset used by Eigen \etal~\cite{eigen2014depth} for different caps on ground-truth and/or predicted depth. Best results are shown in bold. Our proposed method achieves improvement over all compared state-of-the-art approaches.}
\label{table:kitti_res}
\vspace{-8pt}
\end{table*}
\subsection{Depth Estimation by the Stereo Matching Method}
First, we evaluate the depth estimation of the stereo matching network given perfect right images. The result, denoted as ``Stereo\_gt\_right", is shown in Table \ref{table:kitti_res}. The stereo matching network clearly outperforms state-of-the-art methods for single image depth estimation, even though it is mainly trained on a rendered dataset~\cite{mayer2016disp}.
The intuition here is that predicting depth from stereo images is much more accurate than predicting depth with any of the previous monocular methods. This means we can achieve much higher performance if we provide a sufficiently good view synthesis module.
\begin{table*}[tp]
\centering
\begin{adjustbox}{max width=1.0\textwidth}
\begin{tabular}{@{}l|c|c|c|cccc|ccc@{}}
\toprule
\multicolumn{1}{l|}{Approach} & \multicolumn{1}{c|}{FT VSN} & \multicolumn{1}{c|}{FT SMN}& \multicolumn{1}{c|}{cap} & ARD & SRD & RMSE & RMSE(log) & $\delta < 1.25$ & $\delta < 1.25^2$ & $\delta < 1.25^3$ \\ \cline{5-11}
& & & & \multicolumn{4}{c|}{lower is better} & \multicolumn{3}{c}{higher is better} \\ \midrule
Finetune-0 & \xmark & \xmark & $0-80$ m &0.102 & 0.700 & 4.681 & 0.200 & 0.872 & 0.954 & 0.978 \\
Finetuned\_synthesis\_200 & \checkmark & \xmark & $0-80$ m & 0.100 & 0.682 & 4.515 & 0.195 & 0.879 & 0.957 & 0.979 \\
Finetuned\_synthesis\_700 & \checkmark & \xmark & $0-80$ m & 0.099 & 0.672 & 4.593 & 0.194 & 0.879 & 0.957 & 0.979 \\ \midrule
Finetuned\_stereo\_gt\_right\_0 & \xmark & \xmark & $0-80$ m & 0.062 & 0.424 & 3.677 & 0.164 & 0.939 & 0.968 & 0.981 \\
Finetuned\_stereo\_gt\_right\_200 & \xmark & \checkmark & $0-80$ m & 0.065 & 0.452 & 3.844 & 0.168 & 0.933 & 0.967 & 0.981 \\
Finetuned\_stereo\_gt\_right\_700 & \xmark & \checkmark & $0-80$ m & 0.053 & 0.382 & 3.400 & 0.144 & 0.947 & 0.975 & 0.986 \\ \midrule
Finetune-200 & \checkmark & \checkmark & $0-80$ m & 0.100 & 0.670 & 4.437 & 0.192 & 0.882 & 0.958 & 0.979 \\
Finetune-500 & \checkmark & \checkmark & $0-80$ m & 0.094 & 0.635 & 4.275 & 0.179 & 0.889 & 0.964 & 0.984 \\
Finetune-700 & \checkmark & \checkmark & $0-80$ m & 0.094 & 0.626 & 4.252 & 0.177 & 0.891 & 0.965 & 0.984 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-5pt}
\caption{Quantitative results of different variants of our proposed method on the test set of the KITTI Raw dataset used by Eigen \etal~\cite{eigen2014depth} at the cap of 80m. ``FT VSN'' denotes whether the view synthesis network has been finetuned in an end-to-end fashion, and ``FT SMN'' denotes whether the stereo matching network has been finetuned in an end-to-end fashion. Top three rows: comparisons of different view synthesis network settings. Middle three rows: comparisons of different stereo matching network settings. Bottom three rows: empirical comparisons with different numbers of training samples. The number in each method name is the number of samples used to finetune the network.}
\label{table:kitti_ablation}
\vspace{-10pt}
\end{table*}
\subsection{Comparisons with state-of-the-art methods}
Next, we report results on the KITTI Eigen split when the right images are predicted by our view synthesis network. We compare against six recent baseline methods, as shown in Table~\ref{table:kitti_res}: \cite{eigen2014depth,liu2014discrete} are supervised methods, \cite{kuznietsov2017semi} is a semi-supervised method, and \cite{godard2016unsupervised,zhou2017unsupervised,garg2016unsupervised} are unsupervised methods. Our proposed method is also semi-supervised.
\textbf{Result without end-to-end finetuning:}
After the training of both networks has converged, we directly feed the right image synthesized by the view synthesis network to the stereo matching network to predict the depth of the given left image. The result is reported in Table \ref{table:kitti_res}.
As one can see, even without finetuning the whole network on the KITTI dataset, our method performs better than the unsupervised method of~\cite{godard2016unsupervised} and achieves performance comparable to the state-of-the-art semi-supervised method~\cite{kuznietsov2017semi}. This demonstrates that decoupling monocular depth estimation into two separate sub-problems, with geometric constraints explicitly enforced, is simple yet effective; such constraints are critical for estimating depth from images.
\textbf{Result with end-to-end finetuning:}
We further finetune the whole system with a small amount of training data from the KITTI Eigen split training set, \ie 700 training samples. The left and right images together with the depth maps are used as training samples for our proposed method.
The results are reported in Table~\ref{table:kitti_res}. As one can see, our method outperforms all compared methods, with the ARD metric reduced by \textbf{17.5\%} compared with Godard \etal~\cite{godard2016unsupervised} and by \textbf{16.8\%} compared with Kuznietsov \etal~\cite{kuznietsov2017semi} at the cap of 80 m. Our proposed method performs the best on almost all metrics, which shows that end-to-end training further improves the collaboration of the two sub-networks and leads to the state-of-the-art result. Qualitative comparisons are shown in Figure~\ref{fig:qualitative-reults}; our method also produces visually more accurate estimations than the compared methods.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/comparision_with_others.png}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Input}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Ground-truth}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Eigen \etal~\cite{eigen2014depth}}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Garg \etal~\cite{garg2016unsupervised}}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Godard \etal~\cite{godard2016unsupervised}}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\caption{Ours}
\end{subfigure}
\vspace{-12pt}
\caption{Qualitative results on the KITTI Eigen test set. Sparse ground-truth labels have been interpolated for visualization.
Note that our predictions better separate the background from the foreground, as well as different entities close to each other, and our results are crisper. In addition, we do better on objects such as trees, poles, traffic signs and pedestrians, whose depth is generally hard to infer accurately.}
\label{fig:qualitative-reults}
\vspace{-10pt}
\end{figure*}
\subsection{Analyzing the function of two sub-networks after end-to-end training}
In this section, we analyze the function of the two sub-networks after end-to-end training. If end-to-end training broke the original functionality of the two sub-networks while the overall performance increased, the whole network would be overfitting to the KITTI dataset, which would make it hard to generalize to other datasets or scenes. To examine the function of the two sub-networks, we conduct the following two groups of experiments.
\textbf{Analyzing function of view synthesis sub-network:}
We replace the stereo matching sub-network in the finetuned pipeline with the one before finetuning. Since the pre-trained stereo matching sub-network was trained only to perform stereo matching on real left-right pairs, if the whole network still performs well on single image depth estimation after this replacement, then the view synthesis network has retained its original functionality after the finetuning process.
The results are reported in the top three rows of Table~\ref{table:kitti_ablation}, denoted as ``Finetuned\_synthesis\_K'', where K is the number of training samples. As one can see from Table~\ref{table:kitti_ablation}, ``Finetuned\_synthesis\_K'' outperforms the method without finetuning. From another perspective, the average PSNR between synthesized views and ground-truth views on the test set increases from 21.29dB to 21.32dB after finetuning. The functionality may be preserved because, during finetuning, the stereo matching sub-network acts as an additional loss that constrains the view synthesis sub-network to generate geometrically reasonable right images.
\vspace{-4pt}
\begin{table}[!htp]
\centering
\footnotesize
\begin{adjustbox}{max width=1.0\textwidth}
\renewcommand{\arraystretch}{0}
\begin{tabular}{@{}l|c|cccc@{}}
\toprule
\multicolumn{1}{l|}{Experiment} & \multicolumn{1}{c|}{cap} & ARD & SRD & RMSE & RMSE(log) \\ \midrule
Our\_Best & $0-80$ m &0.094 & 0.626 & 4.252 & 0.177 \\
Kuznietsov \cite{kuznietsov2017semi} & $0-80$ m &0.113 & 0.741 & 4.621 & 0.189 \\ \midrule
Prob\_Disp & $0-80$ m & 0.212 & 2.075 & 6.314 & 0.294 \\ \midrule
NoKitti200\_BF & $0-80$ m &0.119 & 0.969 & 5.079 & 0.207 \\
NoKitti200\_AF & $0-80$ m &0.101 & 0.673 & 4.425 & 0.176 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-4pt}
\caption{Additional experimental results. The upper part shows our best result and the previous state-of-the-art result. The middle part shows the result computed directly from the probabilistic disparity map obtained in our view synthesis network. The lower part shows the results before and after finetuning without the 200 high-quality KITTI labels.}
\label{table:addResults}
\vspace{-6pt}
\end{table}
\textbf{Analyzing function of stereo matching sub-network:}
To validate the function of the stereo matching sub-network after end-to-end training, we test the stereo matching performance of the finetuned stereo matching sub-network by providing the true left and right images as inputs to predict the depth.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\linewidth]{figures/compared_all_v2.png}
\begin{subfigure}[b]{0.24\linewidth}
\caption{Input image}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\caption{Godard \etal~\cite{godard2016unsupervised}}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\caption{OCV-BM}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\caption{Ours}
\end{subfigure}
\vspace{-14pt}
\caption{Empirical qualitative comparisons on the KITTI 2015 Stereo test set. From left to right: the input left images, the estimated disparity maps and error maps of Godard \etal~\cite{godard2016unsupervised}, block matching, and our method. The second and fourth rows show the error maps, with the corresponding estimated disparity maps plotted above them; the synthesized right views are also presented in the first column. The error maps use the log-color scale described in~\cite{Menze2015CVPR}, depicting correct estimates in \textcolor{blue}{blue} and wrong estimates in \textcolor{red}{red} color tones. Best viewed in color.}
\label{fig:stereo-comparison}
\vspace{-12pt}
\end{figure*}
The results are provided in the middle three rows of Table~\ref{table:kitti_ablation}, denoted as ``Finetuned\_stereo\_gt\_right\_K''. As shown in Table~\ref{table:kitti_ablation}, ``Finetuned\_stereo\_gt\_right\_200'' performs slightly worse than ``Finetuned\_stereo\_gt\_right\_0''; this may be because the finetuning process has forced the stereo matching sub-network to better fit the imperfect synthesized right images. However, ``Finetuned\_stereo\_gt\_right\_700'' outperforms the pre-trained stereo matching sub-network. The strong stereo matching results clearly demonstrate that the stereo matching network maintains its functionality after end-to-end finetuning.
Combining the above two groups of experiments, we conclude that after end-to-end training, the two sub-modules collaborate more effectively while preserving their individual functionalities. This suggests that our proposed method can generalize well to other datasets. Qualitative results on the Cityscapes dataset~\cite{Cordts2016Cityscapes} and the Make3D dataset~\cite{Saxena09make3D}, estimated by our method finetuned on the KITTI dataset, are shown in Figure~\ref{fig:generalization}. The results demonstrate the generalization ability of our proposed method on unseen scenes.
\subsection{Primitive disparity obtained in the view synthesis network}
Our view synthesis network produces a primitive disparity in order to perform the rendering. The middle part of Table \ref{table:addResults} shows the estimation accuracy computed directly from this probabilistic disparity map. The result is clearly inferior to the final result of our proposed method, showing that our approach indeed improves substantially over the primitive disparity.
\subsection{Analyzing the effect of training sample number}
To study the effectiveness of our proposed method, we also evaluate it when finetuned with different numbers of samples, \ie, 0, 200, 500, and 700, denoted as ``Finetune-K''. Note that when K equals 0, no finetuning is performed on the whole network.
The results are reported in the bottom three rows of Table~\ref{table:kitti_ablation}. As one can see, more end-to-end finetuning samples lead to higher performance, and our proposed method outperforms previous state-of-the-art methods by a clear margin using only 700 samples to finetune the whole network.
\subsection{Use of 200 high-quality KITTI labels}
As described before, we use 200 high-quality KITTI labels to optionally finetune the stereo matching network. In the lower part of Table \ref{table:addResults}, we present the results without these labels, before and after finetuning (\_BF and \_AF). Without seeing any real disparity from KITTI, our method already obtains promising results, and after finetuning without those high-quality labels it still beats the current state-of-the-art method. These high-quality labels increase the capacity of the model to a certain extent, but even without them our method still improves under the same conditions.
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Method & D1-bg & D1-fg & D1-all \\ \hline
Godard \etal~\cite{godard2016unsupervised} & 27.00 & 28.24 & 27.21 \\ \hline
OCV-BM & \textbf{24.29} & 30.13 & 25.27 \\ \hline
Ours & 25.18 & \textbf{20.77} & \textbf{24.44} \\ \hline
\end{tabular}
\vspace{-5pt}
\caption{Quantitative results on the test set of the KITTI 2015 Stereo Benchmark~\cite{Menze2015CVPR}. Best results are shown in bold. The numbers are percentages of erroneous pixels; a pixel is considered correctly estimated if its disparity is within 3px of the ground-truth disparity. Our method surpasses the traditional stereo matching baseline, \ie the block matching method (OCV-BM).}
\label{table:kitti-stereo}
\vspace{-10pt}
\end{table}
\begin{figure}[]
\centering
\includegraphics[width=0.95\linewidth, height=0.5\linewidth]{figures/generalize_img_v2.png}
\vspace{-5pt}
\caption{Qualitative results on Make3D dataset \cite{Saxena09make3D} (top two rows) and Cityscapes dataset \cite{Cordts2016Cityscapes} (bottom two rows).}
\label{fig:generalization}
\vspace{-15pt}
\end{figure}
\subsection{Comparison with stereo matching method}
In this section, we compare our proposed approach, which estimates depth from a single image, with a stereo matching method that estimates depth from stereo pairs.
The results are summarized in Table~\ref{table:kitti-stereo}. As one can see, our method is the first single image depth estimation approach that surpasses a traditional stereo matching method, \ie the block matching method denoted as ``OCV-BM'' in the table. Exemplar visual results are shown in Fig.~\ref{fig:stereo-comparison}. Because block matching directly uses low-level image features to search for matching pixels between the left and right images, its predicted disparity maps are usually noisy, which greatly degrades its performance, although the results remain geometrically correct. Our network builds in the geometric reasoning capacity and processes high-level image features, and these two factors enable our method to outperform the stereo matching method. Due to the lack of explicit geometric constraints, the method of Godard \etal~\cite{godard2016unsupervised} obtains sub-optimal results. The better performance of our method can be seen in the boxed regions of the figure.
\section{Conclusion}
In this work, we propose a novel perspective on the problem of monocular depth estimation. We show for the first time that this problem can be decomposed into two sub-problems, namely a view synthesis problem and a stereo matching problem. We explicitly encode the geometric transformation within both networks to better tackle each sub-problem. Training the whole pipeline collectively results in an overall boost, and we show that both networks preserve their original functionality after end-to-end training. Without using a large amount of expensive ground-truth labels, we outperform all previous methods on a monocular depth estimation benchmark. Remarkably, we are the first to outperform the stereo block matching algorithm on a stereo matching benchmark using a monocular method.
{\small
\bibliographystyle{ieee}
\section{Introduction}
We show in this work that high dimensional expanders imply lattices with good distance. There are constructions of good error-correcting codes from expanders~\cite{SS96,Spi96,LMSS01}, and since error-correcting codes and lattices are of a similar flavor, it is natural to expect that good lattices can be constructed from expanders as well. However, prior to our work, no such construction was known. We provide a new framework for constructing lattices from high dimensional expanders, and show that a certain family of high dimensional expanders can be used in order to construct lattices with good distance.
\paragraph{Error-correcting codes.}
An error-correcting code is a subset of $n$-bit strings $\mathcal{C} \subseteq \{0,1\}^n$ called codewords. In coding theory, a good code has the following two (conflicting) properties: First, any two codewords are far from each other, i.e., many bit flips are required in order to transform one codeword into another. And second, there are many codewords, i.e., $\mathcal{C}$ is dense in $\{0,1\}^n$.
The fact that error-correcting codes and expander graphs are related is well known by now. The idea of constructing codes from graphs was initiated by Gallager~\cite{Gal63} already in 1963. Gallager suggested using a randomly chosen sparse bipartite graph, as explicit expanders did not exist at that time. Sipser and Spielman~\cite{SS96}, in their celebrated result, used expander graphs for explicit constructions of asymptotically good error-correcting codes, and this idea was taken further by others (for example~\cite{Spi96} and~\cite{LMSS01}).
\paragraph{Lattices.}
Given a real vector space $W$ with a basis $B = \{w_1,\dotsc,w_n\}$, the lattice $\mathcal{L} \subset W$ generated by $B$ is the subgroup of all integer linear combinations of $B$, i.e.,
$$\mathcal{L} = \left\{\sum_{i=1}^{n}a_iw_i \;\Big|\; a_i \in \mathbb{Z}, w_i \in B \right\}.$$
In a similar sense to error-correcting codes, a good lattice has the following two (conflicting) properties: First, any two points in the lattice are far from each other. And second, there are many lattice points, i.e., $\mathcal{L}$ is dense in $W$. Lattices and error-correcting codes not only sound similar, but have also been proven to be related; see~\cite{CS13} for constructions of lattices from error-correcting codes.
In this work we initiate the study of the following question:\\[5pt]
\textbf{Question.} \emph{Is it possible to construct a good lattice directly from an expander?}\vspace{5pt}
We show that high dimensional expanders can be used in order to construct lattices with large distance which have the potential to be dense. We then show the existence of such expanders, proving the following theorem.
\begin{theorem}[Main]\label{thm:main-informal}
There exists an infinite family of high dimensional expanders which give rise to lattices with good distance.
\end{theorem}
Let us start by illustrating the strategy we use for constructing a lattice from expanders. Let $G=(V,E)$ be a graph with $k$ connected components, each containing $l$ vertices. For each connected component $S \subset V$, define its characteristic vector $\mathbf{1}_S$, which is $1$ on every vertex $v \in S$ and $0$ on every vertex $v \notin S$. We measure the size of each vector by its Hamming weight, i.e., the number of entries which are not $0$. Now, consider the lattice generated by the $\mathbb{Z}$-span of these vectors. This lattice has dimension $k$ and distance $l$. Of course $k\cdot l \le |V|$, so we cannot hope to have both dimension and distance linear in $|V|$. Surprisingly, when moving to higher dimensions we can have both at the same time. So we are looking for higher dimensional analogs of graphs, all of whose (high dimensional) connected components are large. This would give us lattices with large distance and the potential to also have large dimension.
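As a toy illustration of this strategy (ours, for illustration only), the characteristic vectors of the connected components of a small graph can be computed with a union-find pass; their $\mathbb{Z}$-span is exactly the lattice described above.
\begin{verbatim}
import numpy as np

def component_lattice_basis(n, edges):
    # Characteristic vectors of the connected components of a graph on vertices 0..n-1.
    # Their Z-span is a lattice of dimension (#components) and distance (smallest component).
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    roots = sorted({find(v) for v in range(n)})
    return [np.array([1 if find(v) == r else 0 for v in range(n)]) for r in roots]
\end{verbatim}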
The high dimensional analogs of graphs are called \emph{simplicial complexes}. A $d$-dimensional simplicial complex is a $(d+1)$-hypergraph with a closure property, namely, for any $(d+1)$-hyperedge in the complex, all of its subsets are also in the complex. A hyperedge is called a \emph{face} of the complex, and its dimension is one less than its cardinality. For a complex $X$, we denote by $X(0)$ the set of $0$-dimensional faces, which are the vertices, by $X(1)$ the $1$-dimensional faces, which are the edges, and so on up to $X(d)$, which are the top dimensional faces. As an example, a $1$-dimensional complex is just a graph, and a $2$-dimensional complex also contains triangles in addition to vertices and edges. Let us introduce two more definitions regarding high dimensional complexes.
\begin{enumerate}
\item For any $0 \le k \le d-1$, the \emph{$k$-skeleton} of $X$ is the complex obtained by taking only faces of dimension $\le k$ in $X$. In particular, the $1$-skeleton of $X$ is its underlying graph (ignoring the higher dimensional faces).
\item For any face $\sigma \in X$, the \emph{link} of $\sigma$ is the subcomplex obtained by taking all faces in $X$ which contain $\sigma$ and removing $\sigma$ from all of them, formally defined as $X_\sigma = \{\tau \setminus \sigma \;|\; \sigma \subseteq \tau \in X \}$. Note that $X_\sigma$ is a subcomplex of dimension $d-|\sigma|$.
\end{enumerate}
\subsection{Cohomology of complexes}
The high dimensional analogs of connected components are captured by the \emph{cohomology groups} of the complex. Let us consider the simple case of $d=1$, so $X=(V,E)$ is a graph. In this case, there is only one cohomology group, which corresponds to the connected components of the graph. A connected component in $X$ is a subset of vertices $S \subseteq V$ such that every edge lies either inside $S$ or outside of $S$. The graph is connected if the only subsets $S$ which satisfy this criterion are the trivial ones, i.e., $S=\emptyset$ or $S=V$. Instead of thinking of subsets of vertices, we can consider functions which assign an integer value to each vertex, namely, $f:V\to\mathbb{Z}$. Saying that an edge $\{u,v\}$ lies inside or outside $S$ is equivalent to saying that $f(u) - f(v) = 0$ for $f = \mathbf{1}_S$; in this case we say that $f$ vanishes on the edge $\{u,v\}$. Thus, the graph is connected if the only functions which vanish on all edges are the constant functions. The $0$-cohomology of $X$, denoted by $H^0(X;\mathbb{Z})$, is the group of functions which vanish on all edges, where we identify functions that differ by a constant function as equivalent. If $X$ is connected, then $H^0(X;\mathbb{Z})$ is trivial, since any function that vanishes on all edges is constant and hence equivalent to the $\mathbf{0}$ function. If $X$ has more than one connected component, then its $0$-cohomology is an abelian group generated by the functions $\mathbf{1}_S$ for the connected components $S \subset V$.
Let us now move to dimension $2$, so $X = (V,E,T)$ is a $2$-dimensional simplicial complex. Now $X$ has two cohomology groups, where its $0$-cohomology is the same as before, and its $1$-cohomology corresponds to functions which vanish on all triangles. For technical reasons, we consider the faces in $X$ with orientation and say that $X$ has all possible orientations of its faces, i.e., if $(u,v) \in E$ then also $(v,u) \in E$. Consider the set of all antisymmetric functions $f:E\to \mathbb{Z}$ which assign an integer value to each edge in the complex (antisymmetric means that $f((u,v)) = -f((v,u))$ for any $(u,v) \in E$). We say that $f$ vanishes on all triangles if for any triangle $(u,v,w) \in T$, $f((u,v)) + f((v,w)) - f((u,w)) = 0$. In this case we also have functions which trivially vanish on all triangles: Take any subset of vertices $S \subseteq V$ and define the function $f$ by $f((u,v)) = \mathbf{1}_S(v) - \mathbf{1}_S(u)$. Then for any triangle $(u,v,w) \in T$,
$$f((u,v)) + f((v,w)) - f((u,w)) =
\mathbf{1}_S(v) - \mathbf{1}_S(u) + \mathbf{1}_S(w) - \mathbf{1}_S(v) - (\mathbf{1}_S(w) - \mathbf{1}_S(u)) = 0.$$
The $1$-cohomology of $X$, denoted by $H^1(X;\mathbb{Z})$, is the group of functions $f:E\to\mathbb{Z}$ which vanish on all triangles, where we identify functions that differ by a trivially vanishing function as equivalent.
In general, the $k$-cohomology captures the number of functions $f:X(k) \to \Z$ which vanish on all $(k+1)$-dimensional faces. For any $k$ we have functions which trivially vanish on all $(k+1)$-dimensional faces, so again we consider two functions as equivalent if they differ by a trivially vanishing function.
As could be understood from the description in the above paragraphs, the cohomology groups are actually \emph{quotient spaces}. We start with the space of all functions $f:X(k) \to \Z$. Out of that we take the subspace of functions which vanish on all $(k+1)$-dimensional faces. Then we take a quotient space by identifying two functions as equivalent if they differ by a trivially vanishing function. When we construct a lattice from this quotient space, we take as a basis for the lattice a minimal representative from each equivalence class, and take the $\Z$-span of these basis elements.
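To make the $d=2$ discussion above concrete, the following small sketch (ours, for illustration; edge functions are stored as dictionaries over ordered pairs) checks the cocycle condition on triangles and produces the trivially vanishing functions $\delta(g)$ arising from vertex functions $g$:
\begin{verbatim}
def delta0(g, edges):
    # Trivially vanishing edge function: f(u, v) = g(v) - g(u) for a vertex function g.
    return {(u, v): g[v] - g[u] for (u, v) in edges}

def vanishes_on_triangles(f, triangles):
    # 1-cocycle condition: f(u,v) + f(v,w) - f(u,w) = 0 for every triangle (u, v, w).
    return all(f[(u, v)] + f[(v, w)] - f[(u, w)] == 0 for (u, v, w) in triangles)
\end{verbatim}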
In the case of graphs we could not have many connected components which are all large, so the $0$-cohomology group could not have many large elements. But for high dimensional complexes, it could be the case that for some $k > 0$, the $k$-cohomology would contain many elements, where all of them are large. Then the question we address is which complexes have only large elements in their cohomology groups. The way we answer this question is through local considerations. Roughly speaking, if every local piece of the complex is expanding, then all the elements in its cohomology groups are large. In the following we explain this criterion.
\subsection{High dimensional expanders}
Our aim in this section is to introduce briefly the notion of expansion in higher dimensions. In recent years, several definitions for high dimensional expansion have been studied. Before presenting them, let us recall expansion in graphs.
\subsubsection{Graph expansion}
\paragraph{Combinatorial expansion.}
Expander graphs have been defined explicitly by Pinsker~\cite{Pin73} as bounded degree graphs which are strongly connected. The strong connectivity of a graph is measured by its Cheeger constant, defined as follows. Let $G=(V,E)$ be a $k$-regular graph. For any subset of vertices $S \subseteq V$, denote by $E(S,\bar{S})$ the set of edges with one endpoint inside $S$ and one endpoint outside of $S$. Note that $G$ is connected if and only if $E(S,\bar{S}) \ne \emptyset$ for any $S \subseteq V$ which is not $\emptyset$ or $V$. The Cheeger constant of $G$ is defined as
$$h(G) = \min_{\emptyset \ne S \subsetneq V}\frac{|E(S,\bar{S})|}{\dist(S,\{\emptyset,V\})},$$
where $\dist(S,\{\emptyset,V\})$ is measured in Hamming distance, so $\dist(S,\{\emptyset,V\}) = \min\{|S|,|V\setminus S| \}$. The graph $G$ is said to be an $\varepsilon$-combinatorial expander if $h(G) \ge \varepsilon k$ for some constant $\varepsilon > 0$.
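For small graphs, the Cheeger constant can be brute-forced directly from this definition; a minimal sketch (ours; exponential in $|V|$, so for illustration only):
\begin{verbatim}
from itertools import combinations

def cheeger_constant(vertices, edges):
    # h(G) = min over nontrivial S of |E(S, complement of S)| / min(|S|, |V \ S|).
    V = list(vertices)
    best = float('inf')
    for size in range(1, len(V)):
        for S in combinations(V, size):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, cut / min(len(S), len(V) - len(S)))
    return best
\end{verbatim}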
\paragraph{Spectral expansion.}
Another notion of expansion of graphs is captured by their spectral gap. Let $A = A(G)$ be the graph's adjacency matrix, and denote by $\lambda_1 \ge \lambda_2 \ge \dotsb \ge \lambda_{|V|}$ the eigenvalues of $A$. Note that since $G$ is $k$-regular then $\lambda_1 = k$. We say that $G$ is an $\varepsilon$-spectral expander if $\lambda_2/k \le \varepsilon$ for some constant $\varepsilon > 0$. As it turns out, spectral expansion controls the graph's pseudorandom behavior. This is demonstrated by the following mixing lemma.
\begin{lemma}[Expander mixing lemma]\label{lem:expander-mixing-lemma}
Let $G=(V,E)$ be an $\varepsilon$-spectral expander. Then for any subset of vertices $S \subseteq V$,
$$\frac{|E(S)|}{|E|} \le \left(\frac{|S|}{|V|} \right)^2 + \varepsilon\frac{|S|}{|V|},$$
where $E(S)$ denotes the set of edges with both endpoints in $S$.
\end{lemma}
Note that the expected fraction of edges inside $S$ in a random graph is $(|S|/|V|)^2$, so spectral expansion measures how close $G$ is to the behavior of a random graph.
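Numerically, the spectral expansion of a given $k$-regular graph can be read off its adjacency spectrum; a minimal NumPy sketch (ours):
\begin{verbatim}
import numpy as np

def second_eigenvalue_ratio(adj):
    # adj: symmetric adjacency matrix of a k-regular graph; returns lambda_2 / k.
    eigvals = np.sort(np.linalg.eigvalsh(adj))[::-1]   # descending order
    k = eigvals[0]                                      # lambda_1 = k for a k-regular graph
    return eigvals[1] / k
\end{verbatim}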
\subsubsection{High dimensional expansion}
There are several different ways to extend the notion of expansion from graphs to simplicial complexes. In the following we provide informal definitions just for dimension $2$; for formal definitions see~\secref{sec:preliminaries}.
\paragraph{Coboundary expansion.}
The notion of \emph{coboundary expansion} has been introduced by Linial and Meshulam~\cite{LM06} in their work on homological connectivity of random complexes, and independently by Gromov~\cite{Gro10} in his work on the topological overlapping property. Coboundary expansion is a natural extension of graph's combinatorial expansion from a homological point of view.
Let $X=(V,E,T)$ be a $2$-dimensional simplicial complex and assume that every vertex is contained in $k_1$ edges and every edge is contained in $k_2$ triangles. For any subset of vertices $S \subseteq V$, let $\delta(S) \subseteq E$ be the coboundary of $S$, defined as the set of edges which $S$ touches an odd number of times. Note that $\delta(S) = E(S,\bar{S})$, so this is exactly the set of outgoing edges of $S$. Recall that for $S \in \{\emptyset,V\}$, $\delta(S)$ is trivially empty. For any subset of edges $F \subseteq E$, let $\delta(F) \subseteq T$ be the coboundary of $F$, defined as the set of triangles which $F$ touches an odd number of times. For subsets of edges there are also sets whose coboundary is trivially empty: consider the set of edges $F$ between $S$ and $V\setminus S$ for some subset $S \subseteq V$. Such a set is called a cut in a graph, and for any $F$ which is a cut, $\delta(F) = \emptyset$. We say that $X$ is an $\varepsilon$-coboundary expander if:
\begin{enumerate}
\item For any $S \subset V$, $S \notin \{\emptyset, V \}$,
$$\frac{|\delta(S)|}{\dist(S,\{\emptyset, V \})} \ge \varepsilon k_1.$$
\item For any $F \subset E$, $F \notin \{\mbox{cuts} \}$,
$$\frac{|\delta(F)|}{\dist(F,\{\mbox{cuts}\})} \ge \varepsilon k_2.$$
\end{enumerate}
Condition 1 in the definition is exactly the Cheeger constant of the underlying graph of $X$, and condition 2 is its high dimensional analog for the edges of $X$.
\paragraph{Cosystolic expansion.}
Coboundary expansion is a very strong requirement and as of now it is not known if bounded degree coboundary expanders of dimension greater than $1$ even exist. A relaxation of coboundary expansion, called \emph{cosystolic expansion}, has been defined by~\cite{EK16}, and the existence of bounded degree cosystolic expanders of any dimension has been proven in~\cite{KKL14,EK16}.
In cosystolic expansion, we allow non-trivial sets to have coboundary $0$ as long as they are large. We call the sets which have coboundary $0$ \emph{cocycles}. Then $X$ is said to be an $(\varepsilon,\mu)$-cosystolic expander if:
\begin{enumerate}
\item For any $S \subseteq V$, $S \notin \{\emptyset, V\}$:
\begin{enumerate}
\item If $|\delta(S)| = 0$ then $|S| \ge \mu|V|$.
\item Otherwise, $$\frac{|\delta(S)|}{\dist(S,\{\mbox{cocycles}\})} \ge \varepsilon k_1.$$
\end{enumerate}
\item For any $F \subseteq E$, $F \notin \{\mbox{cuts}\}$:
\begin{enumerate}
\item If $|\delta(F)| = 0$ then $|F| \ge \mu|E|$.
\item Otherwise, $$\frac{|\delta(F)|}{\dist(F,\{\mbox{cocycles}\})} \ge \varepsilon k_2.$$
\end{enumerate}
\end{enumerate}
Condition 1 in the definition is like saying that the underlying graph of $X$ is composed of many large connected components, where each of them is an $\varepsilon$-combinatorial expander, and condition 2 is its high dimensional analog for the edges of $X$.
\subsection{Constructing lattices from high dimensional expanders}
The idea of constructing good distance lattices from high dimensional expanders is the following. We take a complex which is a cosystolic expander, so we know that it has only large non-trivial cocycles (this is condition (a) in the definition above). We consider its cohomology group, which is a quotient space of non-trivial cocycles, where we identify two cocycles as equivalent if they differ by a trivial cocycle. Now we take a minimal representative from each equivalence class as a basis for the lattice and consider their $\Z$-span. But for that we need to know that the complex is a cosystolic expander over $\Z$. Let us explain what that means.
\subsubsection{Cosystolic expansion over $\Z$}
Note that both of the above definitions of coboundary and cosystolic expansion relate to subsets of faces. This is identical to considering functions from the vertices to $\F_2$ and from the edges to $\F_2$. These definitions extend naturally to functions over any ring (with a small modification to the coboundary operator, see~\secref{sec:preliminaries}).
The authors of~\cite{EK16} showed the existence of cosystolic expanders for functions over $\F_2$. This is not enough for us, as we need cosystolic expansion for functions over $\Z$: if $X$ is a cosystolic expander with respect to functions over $\Z$, then any element in the $\Z$-span of non-trivial cocycles is large (this is part of the definition of cosystolic expansion). Therefore, for our lattice construction we need to prove the existence of cosystolic expanders over $\Z$.
We generalize the proof of~\cite{EK16} so that it works over any ring. First, we translate their proof to the language of probabilities, which makes the proof simpler even though the main ideas remain the same. Second, when working over general rings and not only over $\F_2$, the orientations of faces need to be taken care of. Previous works did not worry about orientations as they worked only over $\F_2$, where addition and subtraction are the same. We work over general rings, hence we have to cope with orientations of faces; this was not done in previous works.
The key point for proving cosystolic expansion over any ring is to show that any cochain can be decomposed into local parts, so its global expansion would be implied by the expansion of its local parts. Our main technical contribution is the following theorem.
\begin{theorem}[Existence of good dimension, informal, for formal see~\ref{thm:existence-of-good-dimension}]\label{thm:existence-of-good-dimension-informal}
If the underlying graph of every link in $X$ is a good enough spectral expander, then for any function $f:X(k) \to R$, for any ring $R$, there exists a dimension $0 \le i \le k$ such that $f$ can be decomposed into local parts over $i$-dimensional faces, and most of the expansion of $f$ is implied by local expansion in the links of $i$-dimensional faces.
\end{theorem}
The above theorem tells us that the global expansion of a complex can be deduced from the expansion of its links. Thus, in order to get cosystolic expansion over any ring we only need to show that the links are good, i.e., their underlying graph is a spectral expander and they are coboundary expanders over any ring. We show that Ramanujan complexes have this property.
\subsubsection{Ramanujan complexes and their links}
Ramanujan complexes are the high dimensional analogs of the celebrated LPS graphs~\cite{LPS88}. LPS graphs are constructed by taking quotients of the infinite tree, which is the best expander possible. The infinite tree has a high dimensional analog, called the Bruhat-Tits building. This led~\cite{LSV05.1} to study its quotients as a generalization of LPS graphs. By taking quotients of the Bruhat-Tits building,~\cite{LSV05.2} achieved an explicit construction of bounded degree simplicial complexes which locally look like the infinite object. These complexes are called \emph{Ramanujan complexes}. (For more on Ramanujan complexes see~\cite{Lub14}.)
Every link of a Ramanujan complex is a very symmetric complex called the \emph{spherical building} (more details on the spherical building are presented in \secref{sec:spherical-building}). The spherical building by itself is of unbounded degree, since the number of faces incident to any vertex grows with the number of vertices in the complex; however, when it appears as a link of a Ramanujan complex, the global complex is of bounded degree. In~\cite{EK16}, the authors showed that the $1$-skeleton of the spherical building is an excellent spectral expander (its expansion quality is controlled by a parameter called the \emph{thickness} of the building), so it is left to show that the spherical building is a coboundary expander over any ring.
In~\cite{LMM16}, the authors showed that the spherical building is a coboundary expander over $\F_2$. This is not enough for us, as we need expansion over $\Z$. We generalize the work of~\cite{LMM16} by taking care of orientations of faces (which was not necessary in their work since they proved the result only over $\F_2$). We show that with some modifications, which take orientations into account, the proof of~\cite{LMM16} works over any ring. We prove the following theorem.
\begin{theorem}[The spherical building is a coboundary expander]
The spherical building is a coboundary expander over any ring.
\end{theorem}
Since we got that the links of (thick enough) Ramanujan complexes are spectral and coboundary expanders over any ring, we achieve the following theorem.
\begin{theorem}[Ramanujan complexes are cosystolic expanders over any ring]\label{thm:ramanujan-complexes-are-cosystolic-expanders}
For a thick enough $d$-dimensional Ramanujan complex, its $(d-1)$-skeleton is a cosystolic expander over any ring.
\end{theorem}
\subsubsection{The dimension of the lattice}
So far we have established that Ramanujan complexes are cosystolic expanders over $\Z$, and thus can be used to construct lattices with large distance. It remains to consider the dimension of these lattices. Clearly, the dimension of the lattice is controlled by the number of elements in the cohomology groups over $\Z$. However, it is usually easier to understand the cohomology groups over $\F_2$, while understanding them over $\Z$ can be a very hard task. Moreover, as proven in~\cite{KKL14}, we already know that Ramanujan complexes have non-trivial cohomology groups over $\F_2$ (in fact, the number of elements in $H^1(X;\F_2)$ is logarithmic in the size of the complex~\cite{Lub}). Luckily, there is a way to relate cohomology groups over $\Z$ to cohomology groups with other coefficients. In a sense, the cohomology groups over $\Z$ are universal, as they determine the cohomology groups with any other coefficients. This is captured by \emph{the universal coefficient theorem}, which we introduce next.
\paragraph{The universal coefficient theorem.}
Since understanding this theorem requires a lot of background in algebraic topology, we introduce it informally. Any finitely generated abelian group $H$ has a decomposition into its free part and torsion part. The torsion part contains all elements of finite order (i.e., all $h \in H$ for which there exists $n \in \mathbb{N}$ such that $nh = 0$). Thus $H$ can be decomposed as
$$H \cong \Z^k \oplus T(H),$$
where $k$ is the number of free generators in $H$, and $T(H)$ is its torsion subgroup.
The universal coefficient theorem gives us information about this decomposition for the cohomology groups. In particular, it tells us that there exist $k,l \ge 0$ and $n_1, n_2, \dotsc, n_l \ge 0$, such that:
\begin{enumerate}
\item $H^1(X;\F_2) \cong \Z^k \oplus \bigoplus_{i=1}^l \F_{2^{n_i}}$.
\item $H^1(X;\Z) \cong \Z^k$.
\item $H^2(X;\Z)$ contains $\bigoplus_{i=1}^l \F_{2^{n_i}}$ in its decomposition.
\end{enumerate}
Therefore we have the following corollary.
\begin{corollary}
If $H^1(X;\F_2)$ is large then either $H^1(X;\Z)$ is large or $H^2(X;\Z)$ is large.
\end{corollary}
A good situation for us would be that the dominant part of $H^1(X;\F_2)$ comes from the free part of the decomposition and not from the torsion part. This would imply that the lattice we construct has a large dimension, as it has many free generators. However, it is still an open question whether this is the case for Ramanujan complexes, and this theorem gives us a lead on how to approach the study of the cohomology groups over $\Z$.
\subsection{The lattice construction}
Overall, the links of Ramanujan complexes are good spectral expanders and coboundary expanders over $\Z$, and by Theorem~\ref{thm:existence-of-good-dimension-informal} this implies that Ramanujan complexes are cosystolic expanders over $\Z$. So taking the $\Z$-span of minimal representatives of their cohomology groups yields lattices with good distance.
The advantage of this construction is that it allows both the distance and the dimension to be large. Recall that for graphs there is a trivial tradeoff, as the product of the distance and the dimension of the lattice is bounded by $n$, the number of vertices in the graph. When using this framework for high dimensional expanders, both distance and dimension can potentially be large. In particular, we show that the distance of lattices constructed from Ramanujan complexes is linear in $n$, and by the universal coefficient theorem, their dimension might be logarithmic in $n$, so the product could potentially be of order $n \log n$. In general, we expect that there are bounded degree cosystolic expanders over suitable rings that would result in good distance lattices which are sufficiently dense.
\subsection{Discussion and future work}
Recent works showed that cosystolic expansion over $\mathbb{F}_2$ implies Gromov's topological overlapping property~\cite{EK16}. We show here that cosystolic expansion over $\Z$ implies the construction of lattices with good distance. We believe that cosystolic expansion over general rings should have far reaching applications that are beyond one's imagination. For instance, it could lead to new constructions of locally testable codes.
\subsection{Organization}
We start with a preliminaries section that contains the basics of cochains with norms in high dimensional complexes. In section~\ref{sec:cosystolic-expansion} we prove theorem~\ref{thm:existence-of-good-dimension-informal} and show how it implies cosystolic expansion over any ring. In section~\ref{sec:spherical-building} we introduce the links of Ramanujan complexes, which are called spherical buildings, and prove that they are coboundary expanders over any ring.
\section{Preliminaries}\label{sec:preliminaries}
Let $X$ be a $d$-dimensional simplicial complex. For any $-1 \le k \le d$, denote by $X(k)$ the set of $k$-dimensional faces of $X$ (where $X(-1) = \{\emptyset\}$ contains the only $-1$-dimensional face, which is the face with $0$ vertices). An ordered set $\vec{\sigma} = (v_0,v_1, \dotsc,v_k)$ is an \emph{ordered face} of $X$ if the unordered set $\sigma = \{v_0,v_1,\dotsc,v_k \}$ is a face of $X$. Denote by $\vec{X}(k)$ the set of ordered $k$-dimensional faces of $X$. The space of \emph{$k$-cochains} over a ring $R$ is defined as
$$C^k = C^k(X;R) = \{f : \vec{X}(k) \to R \;|\; f \mbox{ is antisymmetric} \},$$
where $f$ is antisymmetric if for any permutation $\pi \in Sym(k+1)$,
$$f((v_{\pi(0)},v_{\pi(1)},\dotsc,v_{\pi(k)})) = sgn(\pi)f((v_0,v_1,\dotsc,v_k)).$$
Note that for $R = \mathbb{F}_2$, the $k$-cochains are just subsets of $X(k)$ (where we identify a subset of faces with its characteristic function). In the works of~\cite{LMM16} and~\cite{EK16}, which we generalize in this paper, the authors worked only with cochains over $\mathbb{F}_2$, so they did not have to worry about ordered faces and changes of sign. We let the cochains be over any ring, so we need to take these considerations into account.
We measure the size of a cochain by a weighted Hamming norm, defined with respect to the top dimension of the complex as follows. Let $r_d,r_{d-1},\dotsc,r_{-1}$ be a sequence of random faces of $X$, where $r_d$ is distributed uniformly on $X(d)$, and for any $k<d$, $r_k$ is obtained by removing a uniformly random vertex from $r_{k+1}$. All the probabilities we measure in this work are over this distribution of random faces. For any $k$-cochain $f \in C^k$, we denote its support by $A = \supp(f) = \{\sigma \in X(k) \;|\; f(\sigma) \ne 0\}$, and define its norm to be $\norm{f} = \norm{A} = \Pr[r_k \in A]$. (Note that the support of $f$ is a set of unordered faces, and it is well defined even though the cochain is defined on ordered faces, since whether the value is nonzero does not depend on the ordering.)
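Concretely, $\Pr[r_k = \sigma]$ equals the number of top faces containing $\sigma$ divided by $|X(d)|\binom{d+1}{k+1}$, so the norm of a cochain on an explicit complex can be computed directly; a small sketch (ours, for illustration only):
\begin{verbatim}
from math import comb

def cochain_norm(support, top_faces, k):
    # ||f|| = Pr[r_k lies in supp(f)] under the top-down random face distribution.
    d = len(next(iter(top_faces))) - 1
    total = 0.0
    for sigma in support:
        s = frozenset(sigma)
        containing = sum(1 for tau in top_faces if s <= frozenset(tau))
        total += containing / (len(top_faces) * comb(d + 1, k + 1))
    return total
\end{verbatim}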
For any $\vec{\sigma} = (v_0, v_1, \dotsc, v_k)$ we denote by $\vec{\sigma}\setminus \{v_i\} = (v_0,\dotsc, v_{i-1},v_{i+1},\dotsc,v_k)$ the ordered $(k-1)$-face obtained by removing $v_i$ from $\sigma$. The $k$-coboundary operator $\delta = \delta^k: C^k \to C^{k+1}$ is defined as
$$\delta(f)(\vec{\sigma}) = \sum_{i=0}^{k+1}(-1)^i f(\vec{\sigma}\setminus \{v_i\}).$$
Denote by $B^k = \mbox{Im}(\delta^{k-1}) = \{\delta^{k-1}(f) \;|\; f \in C^{k-1}\}$ the $k$-coboundaries of $X$, and by $Z^k = \ker(\delta^k) = \{f \in C^k \;|\; \delta^k(f) = 0\}$ the $k$-cocycles of $X$. The $k$-cohomology group is the quotient space $H^k = Z^k/B^k$. The distance of a $k$-cochain $f \in C^k$ from the $k$-coboundaries is defined as $\dist(f,B^k) = \min\{\norm{f - b} \;|\; b \in B^k \}$. Similarly, the distance from the $k$-cocycles is defined as $\dist(f,Z^k) = \min\{\norm{f - z} \;|\; z \in Z^k \}$.
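For explicit complexes, the coboundary operator above translates directly into code; a sketch (ours), where a cochain is stored as a dictionary from ordered faces (tuples of vertices) to ring elements and is assumed to be defined on the orderings that appear:
\begin{verbatim}
def coboundary(f, ordered_faces_k_plus_1):
    # (delta f)(sigma) = sum_i (-1)^i f(sigma with its i-th vertex removed).
    df = {}
    for sigma in ordered_faces_k_plus_1:
        df[sigma] = sum(((-1) ** i) * f[sigma[:i] + sigma[i + 1:]]
                        for i in range(len(sigma)))
    return df
\end{verbatim}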
We can now present the notion of coboundary expansion as was introduced by Linial-Meshulam~\cite{LM06} and by Gromov~\cite{Gro10}.
\begin{definitoin}[Coboundary expansion]\label{def:coboundary-expansion}
Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. $X$ is called an \emph{$\varepsilon$-coboundary expander} over $R$, if for any $k$-cochain which is not a $k$-coboundary $f \in C^k(X;R) \setminus B^k(X;R)$, $0 \le k \le d-1$,
$$\frac{\norm{\delta(f)}}{\dist(f, B^k(X;R))} \ge \varepsilon.$$
\end{definitoin}
As it turns out, coboundary expansion is a very strong requirement. Currently it is not known whether bounded degree coboundary expanders of dimension $\ge 2$ even exist. This leads to the relaxation of coboundary expansion, called cosystolic expansion, which was introduced by~\cite{EK16}, and is defined as follows.
\begin{definitoin}[Cosystolic expansion]\label{def:cosystolic-expansion}
Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. $X$ is called an \emph{$(\varepsilon, \mu)$-cosystolic expander} over $R$, if:
\begin{enumerate}
\item For any $f \in C^k(X;R) \setminus Z^k(X;R)$, $0 \le k \le d-1$,
$$\frac{\norm{\delta(f)}}{\dist(f, Z^k(X;R))} \ge \varepsilon.$$
\item For any $z \in Z^k(X;R) \setminus B^k(X;R)$, $0 \le k \le d-1$,
$$\norm{z} \ge \mu.$$
\end{enumerate}
\end{definitoin}
Recall that for any $\sigma \in X$, its \emph{link} is the subcomplex obtained by taking all the faces in $X$ which contain $\sigma$, and removing $\sigma$ from all of them. Since the link of $\sigma$ is a complex by itself, we can talk about cochains and norms in the link. Consider a $(k-|\sigma|)$-cochain in the link of $\sigma$, $f\in C^{k-|\sigma|}(X_\sigma;R)$. Its norm in the link is the probability that a random face would fall in $\supp(f)$ when the top face is distributed uniformly over the top faces in $X_\sigma$. Thus,
$$\norm{f} = \Pr[r_k \setminus \sigma \in \supp(f) \;|\; r_{|\sigma|-1} = \sigma],$$
where $\norm{f}$ is the norm in the link, and $r_k,r_{|\sigma|-1}$ are the random faces chosen in $X$.
From now on we fix for any face in the complex an arbitrary choice of ordering, so for any $\sigma \in X$ there is one fixed ordered face $\vec{\sigma}$ which corresponds to it. The choice of ordering does not matter, it just has to be consistent. For any $k$-cochain $f \in C^k$ and any face $\sigma \in X$, we define the \emph{localization} of $f$ to the link of $\sigma$, denoted by $f_\sigma$, as follows. For any ordered $(k-|\sigma|)$-face $\vec{\tau} \in \vec{X}_\sigma(k-|\sigma|)$, we define $f_\sigma(\vec{\tau})=f(\vec{\sigma\tau})$, where $\vec{\sigma\tau} \in \vec{X}(k)$ is the ordered $k$-face obtained by concatenating $\vec{\tau}$ to $\vec{\sigma}$. We say that a cochain $f \in C^k$ is \emph{minimal} if $\norm{f} = \dist(f, B^k)$. We say that $f$ is \emph{locally minimal} if its localization to any link is minimal, i.e., if $f_\sigma$ is minimal in $X_\sigma$ for any $\emptyset \ne \sigma \in X$.
The following two lemmas regarding minimal and locally minimal cochains will be necessary later.
\begin{lemma}[Minimal cochains are closed under inclusion]\label{lem:minimal-cochains-are-closed-under-inclusion}
Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. For any $f,g\in C^k(X;R)$, $0 \le k \le d$, if $f$ is a minimal cochain and $g(\vec{\sigma}) = f(\vec{\sigma})$ for any $\sigma \in \supp(g)$, then $g$ is a minimal cochain.
\end{lemma}
\begin{proof}
Note that for any $k$-cochain $h\in C^k$,
\begin{equation}\label{eq:subset-of-minimal-cochain-is-minimal-1}
\norm{f-h} - \norm{g-h} \le \norm{f-g} = \norm{f} - \norm{g},
\end{equation}
where the equality follows by the fact that $g(\vec{\sigma}) = f(\vec{\sigma})$ for any $\sigma \in \supp(g)$. Then for any $k$-coboundary $b\in B^k$,
$$\norm{g} = \norm{g} + \norm{f} - \norm{f} \le
\norm{g} + \norm{f-b} - \norm{f} \le \norm{g-b},$$
where the first inequality follows by the fact that $f$ is a minimal cochain, and the second inequality follows by~\eqref{eq:subset-of-minimal-cochain-is-minimal-1}.
\end{proof}
\begin{lemma}[Minimal cochain is also locally minimal]\label{lem:minimal-cochain-is-also-locally-minimal}
Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. For any $f \in C^k(X;R)$, $0 \le k \le d$, if $f$ is minimal, then $f$ is also locally minimal.
\end{lemma}
\begin{proof}
Let $f \in C^k(X;R)$ be a minimal cochain. Assume towards contradiction that $f$ is not locally minimal. There exists a face $\emptyset \ne \sigma \in X$ and a cochain $h \in C^{k-|\sigma|-1}(X_\sigma;R)$ such that
\begin{equation}\label{eq:minimal-cochain-is-also-locally-minimal-1}
\norm{f_\sigma - \delta(h)} < \norm{f_\sigma}.
\end{equation}
Define $g \in C^{k-1}(X;R)$ by $g(\sigma\tau) = h(\tau)$ for any $\tau \in X_\sigma$, and for any other face $g(\tau) = 0$. Note that $g_\sigma = h$, then by~\eqref{eq:minimal-cochain-is-also-locally-minimal-1},
$$\norm{f - \delta(g)} < \norm{f},$$
in contradiction to the minimality of $f$. It follows that $f$ is locally minimal.
\end{proof}
We have one more definition we want to present in this section.
\begin{definitoin}[Skeleton expansion]\label{def:skeleton-expansion}
Let $X$ be a $d$-dimensional simplicial complex. $X$ is called an \emph{$\alpha$-skeleton expander}, if for any subset of vertices $S \subseteq X(0)$,
$$\norm{E(S)} \le \norm{S}^2 + \alpha\norm{S},$$
where $E(S)$ denotes the set of edges with both endpoints in $S$.
\end{definitoin}
\section{Cosystolic expansion}\label{sec:cosystolic-expansion}
Our aim in this section is to show that good links imply cosystolic expansion over any ring. In~\cite{KKL14}, the authors showed that for bounded degree complexes, cosystolic expansion of the $(d-1)$-skeleton is implied by the expansion of small cochains which are locally minimal. Let us define this sort of small-set expansion.
\begin{definitoin}[Small-set expander]
Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. $X$ is called an \emph{$(\varepsilon,\mu)$-small-set expander} over $R$, if for any $f \in C^k(X;R)$, $0 \le k \le d-1$,
$$f \mbox{ is locally minimal and } \norm{f} \le \mu \quad\Rightarrow\quad \norm{\delta(f)} \ge \varepsilon\norm{f}.$$
\end{definitoin}
We start by showing that small-set expansion implies cosystolic expansion, and then we will show that good links imply small-set expansion.
\subsection{Small-set expansion implies cosystolic expansion}
The criterion of having only large non-trivial cocycles is immediate from the small-set expansion, and this actually holds for unbounded degree complexes as well.
\begin{proposition}[Small-set expansion implies large non-trivial cocycles]\label{pro:small-set-expansion-implies-large-non-trivial-cocycles}
Let $X$ be a $d$-dimensional $(\varepsilon,\mu)$-small-set expander over a ring $R$. For any $z \in Z^k(X;R) \setminus B^k(X;R)$, $0 \le k \le d-1$, it holds that $\norm{z} \ge \mu$.
\end{proposition}
\begin{proof}
Note that it is enough to show that it holds for minimal non-trivial cocycles, since if $z \in Z^k(X;R) \setminus B^k(X;R)$ is not minimal, then there exists a coboundary $b \in B^k(X;R)$ such that $\norm{z} \ge \norm{z - b}$ and $z - b \in Z^k(X;R) \setminus B^k(X;R)$ is minimal.
Let $z \in Z^k(X;R)$ be a minimal cocycle. We show that if $\norm{z} < \mu$, then $z \in B^k(X;R)$. Since $z$ is minimal, by lemma~\ref{lem:minimal-cochain-is-also-locally-minimal} $z$ is also locally minimal, so by the small-set expansion, $\norm{\delta(z)} \ge \varepsilon\norm{z}$. But on the other hand $z \in Z^k(X;R)$ so $\norm{\delta(z)} = 0$. It follows that $\norm{z} = 0$, so $z \in B^k(X;R)$ as required.
\end{proof}
We say that a complex $X$ is \emph{$Q$-bounded degree} if for any $v \in X(0)$, $|X_v| \le Q$. In order to show that small-set expansion implies that any cochain in a bounded degree complex expands with respect to the cocycles (first criterion in the definition of cosystolic expansion), we need the following lemma.
\begin{lemma}\label{lem:fix-cochain-to-locally-minimal-is-small}
Let $X$ be a $d$-dimensional $Q$-bounded degree simplicial complex and $R$ a ring. For any $f \in C^k(X;R)$, $0 \le k \le d-1$, there exists $g \in C^{k-1}(X;R)$ such that:
\begin{enumerate}
\item $\norm{g} \le Q^2\norm{f}$.
\item $f - \delta(g)$ is locally minimal.
\item $\norm{f - \delta(g)} \le \norm{f}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove by induction on $\norm{f}$. For the base case, $\norm{f} = 0$, the claim holds trivially for $g = 0$. Assume the claim holds for any cochain $f'$ with $\norm{f'} < \norm{f}$. Now, if $f$ is locally minimal, the claim holds for $g = 0$. Otherwise, there exists some $\sigma \in X$ such that $f_\sigma$ is not minimal in $X_\sigma$. So there exists a cochain in the link of $\sigma$, $h \in C^{k-1-|\sigma|}(X_\sigma;R)$, such that $\norm{f_\sigma + \delta(h)} < \norm{f_\sigma}$. Define $g' \in C^{k-1}(X;R)$ by $g'(\sigma\tau) = h(\tau)$ for any $\tau \in X_\sigma$ and for any other face $g'(\tau) = 0$. It follows that $\norm{f + \delta(g')} < \norm{f}$. By the induction assumption, there exists $g'' \in C^{k-1}(X;R)$ such that:
\begin{enumerate}
\item $\norm{g''} \le Q^2\norm{f + \delta(g')}$.
\item $f + \delta(g') + \delta(g'') = f + \delta(g' + g'')$ is locally minimal.
\item $\norm{f + \delta(g') + \delta(g'')} = \norm{f + \delta(g' + g'')} \le \norm{f + \delta(g')} < \norm{f}$.
\end{enumerate}
Denote $g = g' + g''$, and note that conditions 2 and 3 are satisfied. As for condition 1, note that
$$\norm{f} =
\sum_{\sigma \in \text{supp}(f)}\Pr[r_k = \sigma] =
\sum_{\sigma \in \text{supp}(f)}\sum_{\substack{\tau \in X(d)\\\tau \supset \sigma}}\Pr[r_d = \tau \wedge r_k = \sigma] =
\sum_{\sigma \in \text{supp}(f)}\sum_{\substack{\tau \in X(d)\\\tau \supset \sigma}}\frac{1}{|X(d)|\binom{d+1}{k+1}},$$
i.e., the norm of any $k$-cochain is an integer multiple of $\frac{1}{|X(d)|\binom{d+1}{k+1}}$. Since $\norm{f + \delta(g')} < \norm{f}$, it follows that
\begin{equation}\label{eq:fix-cochain-to-locally-minimal-is-small-1}
\norm{f + \delta(g')} \le \norm{f} - \frac{1}{|X(d)|\binom{d+1}{k+1}}.
\end{equation}
Moreover, since $\supp(g')$ contains only faces which contain $\sigma$,
\begin{equation}\label{eq:fix-cochain-to-locally-minimal-is-small-2}
\begin{aligned}
\norm{g'} &\le
\sum_{\substack{\tau \in X(k) \\ \tau \supset \sigma}}\Pr[r_k = \tau] =
\sum_{\substack{\tau \in X(k) \\ \tau \supset \sigma}}\sum_{\substack{\rho \in X(d) \\ \rho \supset \tau}}\Pr[r_d = \rho \wedge r_k = \tau] \\&=
\sum_{\substack{\tau \in X(k) \\ \tau \supset \sigma}}\sum_{\substack{\rho \in X(d) \\ \rho \supset \tau}}\frac{1}{|X(d)|\binom{d+1}{k+1}} \le
\frac{Q^2}{|X(d)|\binom{d+1}{k+1}}.
\end{aligned}
\end{equation}
Combining~\eqref{eq:fix-cochain-to-locally-minimal-is-small-1} and~\eqref{eq:fix-cochain-to-locally-minimal-is-small-2} yields
$$\norm{g} \le
\norm{g'} + \norm{g''} \le
\frac{Q^2}{|X(d)|\binom{d+1}{k+1}} +
Q^2\left(\norm{f} - \frac{1}{|X(d)|\binom{d+1}{k+1}}\right) =
Q^2\norm{f},$$
and condition 1 is satisfied as well.
\end{proof}
We can now show that any cochain (up to one dimension less) expands with respect to the cocycles.
\begin{proposition}[Small-set expansion implies cocycle expansion for one dimension less]\label{pro:small-set-expansion-implies-cocyle-expansion-for-one-dimension-less}
Let $X$ be a $d$-dimensional $Q$-bounded degree $(\varepsilon,\mu)$-small-set expander over a ring $R$. For any $f \in C^k(X;R) \setminus Z^k(X;R)$, $0 \le k \le d-2$, it holds that $$\frac{\norm{\delta(f)}}{\dist(f, Z^k(X;R))} \ge \min\{\mu,Q^{-2}\}.$$
\end{proposition}
\begin{proof}
Let $f \in C^k(X;R) \setminus Z^k(X;R)$, $0 \le k \le d-2$. If $\norm{\delta(f)} \ge \mu$ we are done, so assume that $\norm{\delta(f)} < \mu$. Let $g \in C^k(X;R)$ be the $k$-cochain promised by lemma~\ref{lem:fix-cochain-to-locally-minimal-is-small} when applied on $\delta(f)$. By properties 2 and 3 of lemma~\ref{lem:fix-cochain-to-locally-minimal-is-small}, $\delta(f) - \delta(g) = \delta(f - g)$ is a $(k+1)$-cochain which is locally minimal and $\norm{\delta(f - g)} \le \mu$. By the small-set expansion,
$$0 = \norm{\delta(\delta(f - g))} \ge \varepsilon\norm{\delta(f - g)},$$
so $\delta(f - g) = 0$, which means that $f - g \in Z^k(X;R)$. By property 1 of lemma~\ref{lem:fix-cochain-to-locally-minimal-is-small} we know that $\norm{g} \le Q^2\norm{\delta(f)}$, which yields
$$\norm{\delta(f)} \ge
Q^{-2}\norm{g} =
Q^{-2}\norm{f - (f - g)} \ge
Q^{-2}\cdot \dist(f, Z^k(X;R)).$$
\end{proof}
The following is an immediate corollary of propositions~\ref{pro:small-set-expansion-implies-large-non-trivial-cocycles} and~\ref{pro:small-set-expansion-implies-cocyle-expansion-for-one-dimension-less}.
\begin{corollary}[Small-set expansion implies cosystolic expansion for one dimension less]
If $X$ is a $d$-dimensional $Q$-bounded degree $(\varepsilon,\mu)$-small-set expander over a ring $R$, then the $(d-1)$-skeleton of $X$ is a $(\min\{\mu,Q^{-2}\}, \mu)$-cosystolic expander over $R$.
\end{corollary}
\subsection{Good links imply small-set expansion}
We follow the strategy of~\cite{EK16} in order to show that good links imply small-set expansion. The following theorem is what we observe as the key point of the proof. It shows that if all the links are skeleton expanders, then for every small cochain there exists a dimension such that most of the global expansion is determined by the local expansion in the links of that dimension.
\begin{theorem}[Existence of good dimension]\label{thm:existence-of-good-dimension}
Let $X$ be a $d$-dimensional simplicial complex, $R$ a ring and $0 \le k \le d-1$. If for any $\sigma \in X$, the link $X_\sigma$ is an $\alpha$-skeleton expander, then for any constants $0 = c_{-1} \le c_0 \le \dotsb \le c_k \le 1$ and $k$-cochain $f \in C^k(X;R)$, if $f$ is locally minimal and $\norm{f} \le \alpha$, then there exists $0 \le i \le k$ such that
$$\norm{\delta(f)} \ge
\left(\beta_ic_i - (k+1-i)(i+1)c_{i-1} - \alpha^{2^{-d}}(k+1)(k+2)2^{k+2}\right)\norm{f},$$
where
$$\beta_i = \min\left\{\frac{\norm{\delta(g)}}{\dist(g, B^{k-|\sigma|}(X_\sigma;R))} : \sigma \in X(i),\; g \in C^{k-|\sigma|}(X_\sigma;R) \setminus B^{k-|\sigma|}(X_\sigma;R) \right\}.$$
\end{theorem}
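Note that $\beta_i$ is simply the worst coboundary-expansion constant, in the relevant dimension, among the links of $i$-dimensional faces; in particular, if every such link is a $\beta$-coboundary expander over $R$, then $\beta_i \ge \beta$. This is how the theorem is applied in the proof of theorem~\ref{thm:good-links-imply-small-set-expansion} below.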
This theorem is enough to prove that good links imply small-set expansion as follows.
\begin{theorem}[Good links imply small-set expansion]\label{thm:good-links-imply-small-set-expansion}
Let $X$ be a $d$-dimensional simplicial complex, $R$ a ring and $\beta > 0$. There exist $\varepsilon = \varepsilon(d,\beta)$ and $\alpha = \alpha(d,\beta)$, such that if for any $\sigma \in X$ the link $X_\sigma$ is an $\alpha$-skeleton expander and for any $\emptyset \ne \sigma \in X$ the link $X_\sigma$ is a $\beta$-coboundary expander over $R$, then $X$ is an $(\varepsilon,\alpha)$-small-set expander over $R$.
\end{theorem}
\begin{proof}
Let $0 < \rho < 1$,
$$
\varepsilon = (1-\rho)\left(1 + \frac{d(d-1)^{2(d-1)}}{\beta^{d-1}(1-\beta)} \right)^{\!-1},\quad\quad
\alpha = \left(\frac{\rho}{1-\rho}\cdot\frac{\varepsilon}{d(d+1)2^{d+1}}\right)^{2^d},$$
and define the following constants:
\begin{align*}
&\bullet\quad c_{-1} = 0, \\[9pt]
&\bullet\quad c_0 = \frac{\varepsilon}{(1-\rho)\beta},\\
&\bullet\quad c_i = c_0 + \frac{k^2}{\beta}c_{i-1}\quad\quad \forall i \in \{1,\dotsc,k-1\},\\[8pt]
&\bullet\quad c_k = \beta c_0 + (k+1)c_{k-1}.
\end{align*}
Note that
$$c_k = c_0\left(\beta + (k+1)\sum_{i=0}^{k-1}\left(\frac{k^2}{\beta}\right)^{\!i}\right) \le
c_0\left(\beta + \frac{(k+1)k^{2k}}{\beta^{k-1}(k^2-\beta)}\right) =
\frac{\varepsilon}{1-\rho} \left(1+\frac{(k+1)k^{2k}}{\beta^k(k^2-\beta)}\right) \le 1,
$$
so the conditions of theorem~\ref{thm:existence-of-good-dimension} are satisfied.
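For later use, observe that the constants were chosen exactly so that, by the defining recursions,
$$c_k - (k+1)c_{k-1} = \beta c_0 = \frac{\varepsilon}{1-\rho}
\quad\mbox{and}\quad
\beta c_i - k^2c_{i-1} = \beta c_0 = \frac{\varepsilon}{1-\rho} \quad\mbox{for } 0 \le i \le k-1,$$
and, since $k \le d-1$ implies $(k+1)(k+2)2^{k+2} \le d(d+1)2^{d+1}$,
$$\alpha^{2^{-d}}(k+1)(k+2)2^{k+2} \le \frac{\rho}{1-\rho}\varepsilon.$$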
Let $f \in C^k(X;R)$ be a locally minimal $k$-cochain with $\norm{f} \le \alpha$, and let $0 \le i \le k$ be the good dimension promised by theorem~\ref{thm:existence-of-good-dimension}.
\begin{enumerate}
\item If $i=k$, note that for any $\sigma \in \supp(f)$, $\norm{\delta(f_\sigma)} \ge \norm{f_\sigma}$, so theorem~\ref{thm:existence-of-good-dimension} yields
$$\norm{\delta(f)} \ge \left(c_k - (k+1)c_{k-1} - \frac{\rho}{1-\rho}\varepsilon\right)\norm{f} \ge \varepsilon\norm{f}.$$
\item Otherwise, by the $\beta$-coboundary expansion of the links, theorem~\ref{thm:existence-of-good-dimension} yields
$$\norm{\delta(f)} \ge \left(\beta c_i - k^2c_{i-1} - \frac{\rho}{1-\rho}\varepsilon\right) \norm{f} \ge \varepsilon\norm{f}.$$
\end{enumerate}
\end{proof}
The rest of this section is dedicated to proving theorem~\ref{thm:existence-of-good-dimension}. We need to show that any small cochain can be decomposed into local parts in such a way that the expansion of the local parts implies the global expansion. In the following lemma we show that whenever all the information of a cochain is seen in a link, its local coboundaries coincide with global coboundaries.
\begin{lemma}[Local-to-global coboundaries]\label{lem:when-link-sees-everything-coboundaries-are-global}
Let $X$ be a $d$-dimensional simplicial complex, $R$ a ring, $f\in C^k(X;R)$, $0 \le k \le d-1$, and $\sigma \in X(i)$, $i<k$. For any $\vec{\tau} \in \vec{X}_\sigma(k-i)$, if $\tau \in \supp(\delta(f_\sigma))$ and $\sigma\cup\tau\setminus\{v\} \notin \supp(f)$ for every $v\in\sigma$, then $\sigma\cup\tau\in \supp(\delta(f))$.
\end{lemma}
\begin{proof}
Let us denote $\vec{\sigma}=(v_0,\dotsc,v_i)$ and $\vec{\tau}=(v_{i+1},\dotsc,v_{k+1})$ (where $\vec{\sigma}$ is the fixed ordered face corresponding to $\sigma$). Then
\begin{align*}
\delta(f)(\vec{\sigma\tau}) &=
\sum_{j=0}^{k+1}(-1)^jf(\vec{\sigma\tau}\setminus\{v_j\}) = \sum_{j=i+1}^{k+1}(-1)^jf(\vec{\sigma\tau}\setminus\{v_j\}) \\&= (-1)^{i+1}\sum_{j=0}^{k-i}(-1)^jf_\sigma(\vec{\tau}\setminus\{v_{j+i+1}\}) =
(-1)^{i+1}\delta(f_\sigma)(\vec{\tau}) \ne 0.
\end{align*}
\end{proof}
We now define a machinery of fat faces, which essentially lets us move calculations down the dimensions. Let $\eta>0$ be a fatness constant. For any subset of $k$-faces $A \subseteq X(k)$ we define the sets of \emph{fat faces} as follows. The set of fat $k$-faces is defined as $A_k = A$, and for any $-1 \le i \le k-1$ we define the set of fat $i$-faces $A_i \subseteq X(i)$ by $$A_i = \{\sigma \in X(i) \;|\; \Pr[r_{i+1} \in A_{i+1} \;|\; r_i = \sigma] \ge \eta^{2^{k-i-1}} \}.$$
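For example, unravelling the definition one dimension at a time: a $(k-1)$-face $\sigma$ is fat if $\Pr[r_k \in A \;|\; r_{k-1} = \sigma] \ge \eta$, i.e., if at least an $\eta$-fraction (in the weighted sense) of the $k$-faces containing $\sigma$ lie in $A$; for $(k-2)$-faces the threshold drops to $\eta^2$, then to $\eta^4$, and so on, which is what allows the fattening to be iterated down the dimensions.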
The following lemma shows that for any $-1 \le i \le k-1$, the size of $A_i$ cannot be much larger than the size of $A$.
\begin{lemma}\label{lem:upper-bound-on-fat-faces}
Let $X$ be a $d$-dimensional simplicial complex and $\eta>0$ a fatness constant. For any subset of $k$-faces $A \subseteq X(k)$, $0 \le k \le d-1$, and $-1 \le i \le k-1$,
$$\norm{A_i} \le \eta^{1-2^{k-i}}\norm{A}.$$
\end{lemma}
\begin{proof}
By laws of probability, for any $-1\le j \le k-1$,
\begin{equation}\label{eq:upper-bound-on-fat-faces-1}
\Pr[r_j \in A_j] =
\frac{\Pr[r_{j+1} \in A_{j+1} \wedge r_j \in A_j]}{\Pr[r_{j+1} \in A_{j+1} \;|\; r_j \in A_j]} \le \eta^{-2^{k-j-1}}\Pr[r_{j+1} \in A_{j+1}].
\end{equation}
Applying \eqref{eq:upper-bound-on-fat-faces-1} iteratively for $j = i,i+1,\dotsc,k-1$, and noting that $\sum_{j=i}^{k-1}2^{k-j-1} = 2^{k-i}-1$, finishes the proof.
\end{proof}
For any $\sigma \in X(i)$, $-1 \le i \le k$, we denote by $A\down \sigma\subseteq A$ the set of faces in $A$ which have a sequence of containments (in~\cite{EK16} it is called a ladder) of fat faces down to $\sigma$, formally,
$$A\down \sigma = \{\tau \in A \;|\; \exists \tau_{k-1}\in A_{k-1},\dotsc,\tau_{i+1}\in A_{i+1} \mbox{ s.t. }\tau \supset \tau_{k-1} \supset \dotsb \supset \tau_{i+1} \supset \sigma \}.$$
Recall that for a $k$-cochain $f \in C^k$, we denote its support by $A = \supp(f)$. So we also define $f\down\sigma$ to be the restriction of $f$ to $A\down\sigma$, formally,
$$
(f\down\sigma)(\vec{\tau}) =
\begin{cases}
f(\vec{\tau}) & \tau \in A\down \sigma, \\
0 & \mbox{otherwise}.
\end{cases}
$$
A good situation for us is that for any two fat faces which intersect on a codimension $1$ face, their intersection is a fat face. This essentially allows us to move calculations down the dimensions. We denote by $\Upsilon \subseteq X(k+1)$ the set of bad $(k+1)$-faces, for which a bad situation exists, formally,
$$\Upsilon = \{\tau \in X(k+1) \;|\; \exists \sigma,\sigma' \subset \tau \mbox{ s.t. } \sigma,\sigma' \in A_i \mbox{ and } \sigma \cap \sigma' \in X(i-1)\setminus A_{i-1} \mbox{ for some } 0 \le i \le k \}.$$
In the following proposition we show how we use this machinery of fat faces. The idea is that either we get a lot of expansion from a certain dimension, or we can move down one dimension.
\begin{proposition}\label{pro:coboundaries-of-links}
Let $X$ be a $d$-dimensional simplicial complex, $R$ a ring and $\eta>0$ a fatness constant. For any $f \in C^k(X;R)$, $0 \le k \le d-1$, and $0 \le i \le k$,
\begin{align*}
\norm{\delta(f)} \ge{} &\min_{\sigma \in A_i}\left\{\frac{\norm{\delta((f\down\sigma)_{\sigma})}}{\norm{(f\down\sigma)_{\sigma}}}\right\} \Pr[r_k \in A\down r_i \wedge r_i \in A_i] -{} \\[5pt] &(k+1-i)(i+1)\Pr[r_k \in A\down r_{i-1} \wedge r_{i-1} \in A_{i-1}] - \norm{\Upsilon}.
\end{align*}
\end{proposition}
\begin{proof}
By lemma~\ref{lem:when-link-sees-everything-coboundaries-are-global} we know that every local coboundary of $(f\down\sigma)_\sigma$ is also a global coboundary, i.e., $\tau \in \supp(\delta((f\down\sigma)_\sigma)) \;\Rightarrow\; \sigma\cup\tau \in \supp(\delta(f\down\sigma))$. Thus,
\begin{equation}\label{eq:coboundaries-of-links-1}
\norm{\delta((f\down\sigma)_\sigma)} \le
\norm{(\delta(f\down\sigma))_\sigma} =
\Pr[r_{k+1} \in \supp(\delta(f\down\sigma)) \;|\; r_i = \sigma].
\end{equation}
Consider a face $\tau \in \supp(\delta(f\down\sigma))$. By definition, it contains at least one $k$-face $\tau^* \subset \tau$, such that $\tau^* \in A\down\sigma$. We claim that one of the following cases must occur:
\begin{enumerate}
\item $\tau$ is a bad face.
\item $\sigma$ contains a fat $(i-1)$-face $\sigma^* \in A_{i-1}$, such that $\tau^* \in A\down\sigma^*$.
\item $\tau \in \supp(\delta(f))$.
\end{enumerate}
If $\tau$ is a bad face, the claim holds, so assume that $\tau$ is not a bad face. By definition, there exists a sequence of fat faces $\tau_{k-1} \in A_{k-1}, \tau_{k-2} \in A_{k-2},\dotsc, \tau_{i+1} \in A_{i+1}$, such that $\tau \supset \tau^* \supset \tau_{k-1} \supset \dotsb \supset \tau_{i+1} \supset \sigma$. Let us denote $\tau = \{v_0,v_1,\dotsc,v_{k+1}\}$, $\tau^* = \tau \setminus \{v_{k+1}\}$, $\tau_{k-1} = \tau^* \setminus \{v_k\}$, and so on down to $\sigma = \tau_{i+1} \setminus \{v_{i+1}\}$. Now, if $\tau \setminus \{v_j\} \in A$ for some $j \in \{0,\dotsc, i\}$, then $\tau^* \setminus \{v_j\} \in A_{k-1}$ since it is the intersection of two fat $k$-faces, and then $\tau_{k-1} \setminus \{v_j\} \in A_{k-2}$, and so on down to $\sigma^* = \sigma \setminus \{v_j\} \in A_{i-1}$, and case 2 holds. Otherwise, for any $j \in \{i+1,\dotsc,k\}$, a similar argument shows that if $\tau \setminus \{v_j\} \in A$ then $\tau \setminus \{v_j\} \in A\down\sigma$. It follows that $f$ and $f\down\sigma$ agree on all $k$-faces that are contained in $\tau$, and case 3 holds. Thus,
\begin{equation}\label{eq:coboundaries-of-links-3}
\tau \in \supp(\delta(f\down\sigma)) \quad\Rightarrow\quad
(\tau \in \Upsilon) \vee (\tau^* \in A\down\sigma^*) \vee (\tau \in \supp(\delta(f))).
\end{equation}
Using~\eqref{eq:coboundaries-of-links-3} and summing over all $\tau \in \supp(\delta(f\down\sigma))$ yields
\begin{equation}\label{eq:coboundaries-of-links-4}
\begin{aligned}
\Pr[r_{k+1} \in \supp(\delta(f\down\sigma)) \;|\; &r_i = \sigma] \le{}\\[5pt]
&\Pr[r_{k+1} \in \Upsilon \;|\; r_i = \sigma] +{} \\[5pt]
&(k+1-i)(i+1)\Pr[r_k \in A\down r_{i-1} \wedge r_{i-1} \in A_{i-1} \;|\; r_i = \sigma] +{} \\[5pt]
&\Pr[r_{k+1} \in \supp(\delta(f)) \;|\; r_i = \sigma],
\end{aligned}
\end{equation}
where the $(k+1-i)(i+1)$ factor accounts for the probability that $r_k = \tau^*$ and $r_{i-1} = \sigma^*$ given that $r_{k+1} = \tau \supset \tau^*$ and $r_i = \sigma \supset \sigma^*$. Substituting~\eqref{eq:coboundaries-of-links-1} in~\eqref{eq:coboundaries-of-links-4}, and multiplying and dividing by $\norm{(f\down\sigma)_\sigma} = \Pr[r_k \in A\down\sigma \;|\; r_i = \sigma]$ yields
\begin{equation}\label{eq:coboundaries-of-links-6}
\begin{aligned}
\frac{\norm{\delta((f\down\sigma)_\sigma)}}{\norm{(f\down\sigma)_\sigma}}\Pr[r_k \in A\down\sigma \;|\; &r_i = \sigma] \le{}\\
&\Pr[r_{k+1} \in \Upsilon \;|\; r_i = \sigma] +{} \\[6pt]
&(k+1-i)(i+1)\Pr[r_k \in A\down r_{i-1} \wedge r_{i-1} \in A_{i-1} \;|\; r_i = \sigma] +{} \\[6pt]
&\Pr[r_{k+1} \in \supp(\delta(f)) \;|\; r_i = \sigma].
\end{aligned}
\end{equation}
Multiplying~\eqref{eq:coboundaries-of-links-6} by $\Pr[r_i = \sigma]$, summing over all $\sigma \in A_i$, and applying the law of total probability to the right-hand side yields
\begin{align*}
\sum_{\sigma \in A_i}\frac{\norm{\delta((f\down\sigma)_\sigma)}}{\norm{(f\down\sigma)_\sigma}}\Pr[r_k \in A\down\sigma \wedge{} &r_i = \sigma] \le{}\\[-7pt]
&\Pr[r_{k+1} \in \Upsilon] +{} \\[6pt]
&(k+1-i)(i+1)\Pr[r_k \in A\down r_{i-1} \wedge r_{i-1} \in A_{i-1}] +{} \\[6pt]
&\Pr[r_{k+1} \in \supp(\delta(f))].
\end{align*}
Taking the minimum over all $\sigma \in A_i$ and rearranging completes the proof.
\end{proof}
It remains to bound the size of the set of bad faces. The following proposition shows that it is controlled by the skeleton expansion of the links.
\begin{proposition}[Skeleton expansion implies small set of bad faces]\label{pro:skeleton-expansion-implies-small-set-of-bad-faces}
Let $X$ be a $d$-dimensional simplicial complex, $\eta>0$ a fatness constant and $0 < \alpha \le \eta^{2^{d-1}}$. If for any $\sigma \in X$, the link $X_\sigma$ is an $\alpha$-skeleton expander, then for any subset of $k$-faces $A \subseteq X(k)$, $0 \le k \le d-1$,
$$\norm{\Upsilon} \le \eta(k+1)(k+2)2^{k+2}\norm{A}.$$
\end{proposition}
\begin{proof}
By definition, any bad face $\tau \in \Upsilon$ contains at least one pair of faces $\sigma,\sigma' \subset \tau$ such that $\sigma,\sigma' \in A_i$, $\sigma \cup \sigma' \in X(i+1)$, and $\sigma \cap \sigma' \in X(i-1) \setminus A_{i-1}$ for some $0 \le i \le k$. For any $\tau \in \Upsilon$, choose one such pair $\sigma,\sigma' \subset \tau$ and denote by $\widehat{\tau} = \sigma \cup \sigma'$ and by $\widecheck{\tau} = \sigma \cap \sigma'$. Note that $\widehat{\tau}$ is seen in the link of $\widecheck{\tau}$ as an edge between two fat vertices. Denote by $\Upsilon_i = \{\tau \in \Upsilon \;|\; \widehat{\tau} \in X(i) \}$, so the set of bad faces can be decomposed to $\Upsilon = \bigsqcup_{i=1}^{k+1}\Upsilon_i$. Now,
\begin{align*}
\Pr[r_{k+1} \in \Upsilon] &=
\sum_{i=1}^{k+1}\sum_{\tau \in \Upsilon_i}\Pr[r_{k+1} = \tau] =
\sum_{i=1}^{k+1}\sum_{\tau \in \Upsilon_i}\frac{\Pr[r_{k+1} = \tau \wedge r_i = \widehat{\tau} \wedge r_{i-2} = \widecheck{\tau}]}{\Pr[r_i = \widehat{\tau} \wedge r_{i-2} = \widecheck{\tau} \;|\; r_{k+1} = \tau]} \\&\le
\sum_{i=1}^{k+1}\sum_{\tau \in \Upsilon_i}\binom{k+2}{i+1}\binom{i+1}{i-1}\Pr[r_i = \widehat{\tau} \wedge r_{i-2} = \widecheck{\tau}] \\&\le
\sum_{i=1}^{k+1}\sum_{\tau \in \Upsilon_i}\binom{k+2}{i+1}\binom{i+1}{i-1}(\eta^{2^{k+1-i}} + \alpha)\Pr[r_{i-1} \in A_{i-1} \wedge r_{i-2} = \widecheck{\tau}] \\&\le
\sum_{i=1}^{k+1}\binom{k+2}{i+1}\binom{i+1}{i-1}\frac{2\eta^{2^{k+1-i}}}{\eta^{2^{k+1-i}-1}}\Pr[r_k \in A] \\&\le
(k+2)(k+1)\eta\Pr[r_k \in A]\sum_{i=1}^{k+1}\binom{k+2}{i+1},
\end{align*}
where the second inequality follows by the $\alpha$-skeleton expansion of the links, and the third inequality follows by the law of total probability and by lemma~\ref{lem:upper-bound-on-fat-faces}. Bounding $\sum_{i=1}^{k+1}\binom{k+2}{i+1} \le 2^{k+2}$ completes the proof.
\end{proof}
We can now prove theorem~\ref{thm:existence-of-good-dimension}.
\begin{proof}[Proof of theorem~\ref{thm:existence-of-good-dimension}]
Define the fatness constant $\eta = \alpha^{2^{-d}}$. Now, let $f \in C^k(X;R)$ be a locally minimal $k$-cochain with $\norm{f} \le \alpha \le \eta^{2^{k+1}}$. By lemma~\ref{lem:upper-bound-on-fat-faces} it follows that
$$\norm{A_{-1}} \le \eta^{1-2^{k+1}}\norm{f} \le \eta < 1.$$
But $X(-1)$ contains only one face, so $\norm{A_{-1}} \in \{0,1\}$, and therefore $\norm{A_{-1}} = 0$. In other words, the empty set is not a fat face, thus $\Pr[r_k \in A\down r_{-1} \wedge r_{-1} \in A_{-1}] = 0$. Also note that $\Pr[r_k \in A\down r_k \wedge r_k \in A_k] = \norm{f} \ge c_k\norm{f}$.
Now, if $\Pr[r_k \in A\down r_i \wedge r_i \in A_i] \ge c_i\norm{f}$ for all $0 \le i \le k$, then applying proposition~\ref{pro:coboundaries-of-links} on $i=0$ yields
\begin{equation}\label{eq:existence-of-good-dimension-1}
\norm{\delta(f)} \ge \min_{\sigma \in A_0}\left\{\frac{\norm{\delta((f\down \sigma)_\sigma)}}{\norm{(f\down\sigma)_\sigma}}\right\}c_0\norm{f} - \norm{\Upsilon}.
\end{equation}
Otherwise, let $0 \le j \le k-1$ be the maximal for which $\Pr[r_k \in A\down r_j \wedge r_j \in A_j] < c_j\norm{f}$. Applying proposition~\ref{pro:coboundaries-of-links} on $i = j + 1$ yields
\begin{equation}\label{eq:existence-of-good-dimension-2}
\norm{\delta(f)} \ge \min_{\sigma \in A_i}\left\{\frac{\norm{\delta((f\down \sigma)_\sigma)}}{\norm{(f\down\sigma)_\sigma}}\right\}c_i\norm{f} - (k+1-i)(i+1)c_{i-1}\norm{f} - \norm{\Upsilon}.
\end{equation}
Since $f$ is locally minimal, by lemma~\ref{lem:minimal-cochains-are-closed-under-inclusion}, $(f\down\sigma)_\sigma$ is minimal in $X_\sigma$ for any $\emptyset \ne \sigma \in X$. Thus,
\begin{equation}\label{eq:existence-of-good-dimension-3}
\norm{(f\down\sigma)_\sigma} = \dist((f\down\sigma)_\sigma, B^{k-|\sigma|}(X_\sigma;R)).
\end{equation}
By proposition~\ref{pro:skeleton-expansion-implies-small-set-of-bad-faces} we know that
\begin{equation}\label{eq:existence-of-good-dimension-4}
\norm{\Upsilon} \le \alpha^{2^{-d}}(k+1)(k+2)2^{k+2}\norm{f}.
\end{equation}
Substituting~\eqref{eq:existence-of-good-dimension-3} and~\eqref{eq:existence-of-good-dimension-4} in~\eqref{eq:existence-of-good-dimension-1} or~\eqref{eq:existence-of-good-dimension-2} completes the proof.
\end{proof}
\section{Spherical buildings}\label{sec:spherical-building}
Spherical buildings are very symmetrical complexes with a nice geometrical structure. An example of a spherical building is the following complex. Let $d \in \mathbb{N}$ and $q$ a prime power. Denote by $V = \F_q^d$ the $d$-dimensional vector space over $\F_q$. The vertices of the complex are proper subspaces of $V$ (i.e., not $\{0\}$ and $V$), and its faces are flags of subspaces. The resulting complex is a $(d-2)$-dimensional spherical building (since maximal flags have $d-1$ vertices). For $d=3$ this is the famous ``lines versus planes'' graph, which is known to be an excellent expander.
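Concretely, for $d=3$ the vertices are the $q^2+q+1$ lines and the $q^2+q+1$ planes of $\F_q^3$, the faces are the incident line-plane pairs, and each line lies on exactly $q+1$ planes, so the resulting $1$-dimensional complex is a $(q+1)$-regular bipartite graph.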
Any $d$-dimensional spherical building $X$ comes with a collection of $d$-dimensional subcomplexes, called \emph{apartments}, such that all the apartments are isomorphic to each other and for any two faces in the complex there exists an apartment containing both of them. An important fact is that the size of each apartment is bounded by a constant $\theta_d$ which depends only on $d$ (and not on the number of vertices). Also, there exists a group of automorphisms $G \le Aut(X)$ which acts transitively on $X$, i.e., for any $\sigma,\sigma' \in X(k)$, $0 \le k \le d$, there exists $g \in G$ such that $g\sigma = \sigma'$.
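For instance, in the example above an apartment can be obtained by fixing a basis $e_1,\dotsc,e_d$ of $V$ and taking all the subspaces spanned by proper non-empty subsets of the basis vectors, together with the flags among them; such a subcomplex has $2^d-2$ vertices, which illustrates why the bound $\theta_d$ depends only on $d$.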
In~\cite{EK16}, the authors showed that the spherical building is an $\alpha$-skeleton expander for $\alpha>0$ as small as we want (it is controlled by a parameter called the \emph{thickness} of the building). In~\cite{LMM16}, the authors showed that the spherical building is a coboundary expander, but only over $\F_2$. This is not enough for us as we need coboundary expansion over $\Z$. We follow their strategy and with some modifications we prove the following theorem.
\begin{theorem}\label{thm:the-spherical-building-is-coboundary-expander}
The $d$-dimensional spherical building is a $\beta$-coboundary expander over any ring for $$\beta = \big(2^d\theta_d\big)^{-1}.$$
\end{theorem}
The proof of theorem~\ref{thm:the-spherical-building-is-coboundary-expander} is essentially composed of two propositions. We use the geometrical structure of the building in order to relate the coboundary of a cochain to its distance from the coboundaries; this relation over-counts each face in the coboundary many times. We then use the symmetrical structure of the building in order to bound these over-counts.
For any $-1 \le k \le d-1$, we denote by $\mathcal{F}_k = X(d) \times X(k)$ the set of all pairs of top faces and $k$-dimensional faces. For any $(\sigma, \tau) \in \mathcal{F}_k$, let $A_{\sigma,\tau}$ be the complex obtained by the intersection of all the apartments in $X$ which contain both $\sigma$ and $\tau$. Note that if $\tau \subset \tau'$, then $A_{\sigma,\tau} \subset A_{\sigma,\tau'}$.
The following proposition is implied by the geometrical structure of the spherical building. Each apartment of the spherical building is a sphere, and any piece of it, as defined above, is either a sphere or contractible. This allows us to relate the coboundary of a cochain to its distance from the coboundaries, over-counting each face according to the number of subcomplexes $A_{\sigma,\tau}$ containing it.
\begin{proposition}\label{pro:spherical-building-homological}
Let $X$ be a $d$-dimensional spherical building and $R$ a ring. For any $f \in C^k(X;R)$, $-1 \le k \le d-1$, and $\sigma \in X(d)$,
$$\dist(f, B^k(X;R)) \le
\sum_{\tau \in X(k)}\norm{\tau} \cdot |\supp(\delta(f)) \cap A_{\sigma,\tau}|.$$
\end{proposition}
The next proposition is implied by the symmetrical structure of the spherical building. Since the building possesses so many symmetries, the apartments are spread around it evenly. This implies that no face can be contained in too many of the subcomplexes $A_{\sigma,\tau}$, so we can bound the number of times we over-count each face.
\begin{proposition}\label{pro:spherical-building-symmetrical}
Let $X$ be a $d$-dimensional spherical building. For any $-1 \le k \le d-1$ and $\rho \in X$,
$$\sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau} \le \theta_d \cdot |\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|.$$
\end{proposition}
We first show how theorem~\ref{thm:the-spherical-building-is-coboundary-expander} follows from the above two propositions.
\begin{proof}[Proof of theorem~\ref{thm:the-spherical-building-is-coboundary-expander}]
Let $f \in C^k(X;R)$, $-1 \le k \le d-1$. By proposition~\ref{pro:spherical-building-homological},
\begin{equation}\label{eq:the-spherical-building-is-coboundary-expander-1}
\begin{aligned}
|X(d)|\cdot\dist(f,B^k(X;R)) &=
\sum_{\sigma \in X(d)} \dist(f,B^k(X;R)) \\&\le
\sum_{\sigma \in X(d)} \sum_{\tau \in X(k)} \norm{\tau} \cdot |\supp(\delta(f)) \cap A_{\sigma,\tau}| \\&=
\sum_{(\sigma,\tau) \in \mathcal{F}_k} \norm{\tau} \cdot |\supp(\delta(f)) \cap A_{\sigma,\tau}| \\&=
\sum_{\rho \in \text{supp}(\delta(f))} \sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau}.
\end{aligned}
\end{equation}
By proposition~\ref{pro:spherical-building-symmetrical},
\begin{equation}\label{eq:the-spherical-building-is-coboundary-expander-2}
\sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau} \le
\theta_d \cdot |\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}| =
\theta_d |X(d)|\binom{d+1}{k+2} \norm{\rho}
\end{equation}
Combining~\eqref{eq:the-spherical-building-is-coboundary-expander-1} and~\eqref{eq:the-spherical-building-is-coboundary-expander-2} yields
$$
\dist(f,B^k(X;R)) \le \sum_{\rho \in \text{supp}(\delta(f))} \theta_d \binom{d+1}{k+2} \norm{\rho} = \theta_d \binom{d+1}{k+2} \norm{\delta(f)},
$$
where rearranging, together with the bound $\binom{d+1}{k+2} \le 2^d$, completes the proof.
\end{proof}
\subsection{Proof of proposition~\ref{pro:spherical-building-homological}}
We recall some basic definitions of simplicial complexes. Let $X$ be a $d$-dimensional simplicial complex and $R$ a ring. For any $-1 \le k \le d$, a \emph{$k$-chain} is a linear combination of the $k$-dimensional faces with coefficients in $R$. Denote the space of $k$-chains by
$$C_k(X;R) = \left\{\sum_{\sigma \in X(k)}a_\sigma \!\cdot\! \sigma \;|\; \forall \sigma, a_\sigma \in R\right\}.$$
We fix some arbitrary orientations of the faces in $X$, so when considering a face, there is one fixed ordering of its vertices. Then, the boundary of a $k$-face $(v_0,v_1,\dotsc,v_k) \in X(k)$ is
$$\partial((v_0,\dotsc,v_k)) = \sum_{i=0}^{k} (-1)^i(v_0,\dotsc,v_{i-1},v_{i+1},\dotsc,v_k),$$
and the boundary of a $k$-chain $c = \sum_{\sigma \in X(k)}a_\sigma\!\cdot\!\sigma \in C_k(X;R)$ is
$$\partial(c) = \sum_{\sigma \in X(k)} a_\sigma \!\cdot\! \partial(\sigma).$$
For ease of notation, for a face $\sigma = (v_0,\dotsc,v_j)$ we denote $\sigma_i = (v_0,\dotsc,v_{i-1},v_{i+1},\dotsc,v_j)$. Note that the coboundary operator defined in~\secref{sec:preliminaries} is dual to the boundary operator, i.e., for any $k$-cochain $f \in C^k(X;R)$ and a $(k+1)$-face $\sigma \in X(k+1)$,
$$\delta(f)(\sigma) = \sum_{i=0}^{k+1} (-1)^i f(\sigma_i) = f(\partial(\sigma)).$$
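For instance, for a $0$-cochain $f$ and an edge $\sigma = (v_0,v_1)$, both expressions equal $f(v_1) - f(v_0)$.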
The following lemma from~\cite{LMM16} shows a nice filling property of the complexes $A_{\sigma,\tau}$ defined above.
\begin{lemma}\label{lem:filling-propoerty-of-apartments}\cite[Claim~3.5]{LMM16}
Let $X$ be a $d$-dimensional spherical building and $R$ a ring. For any $(\sigma,\tau) \in \mathcal{F}_k$, $-1 \le k \le d-1$, and an $i$-chain $c \in C_i(A_{\sigma,\tau};R)$, $0 \le i \le d-1$, if $\partial(c) = 0$ then there exists an $(i+1)$-chain $c' \in C_{i+1}(A_{\sigma,\tau};R)$ such that $\partial(c') = c$.
\end{lemma}
We use this filling property in order to define a family of chains such that each two consecutive chains are related by the boundary operator.
\begin{lemma}
Let $X$ be a $d$-dimensional spherical building and $R$ a ring. There exists a family of chains $$\mathcal{C} = \{c_{\sigma,\tau} \in C_{k+1}(A_{\sigma,\tau};R) \;|\; -1 \le k \le d-1,\; (\sigma,\tau) \in \mathcal{F}_k\},$$ such that $$\partial(c_{\sigma,\tau}) = (-1)^{k+1}\tau + \sum_{i=0}^k(-1)^ic_{\sigma,\tau_i}.$$
\end{lemma}
\begin{proof}
We define $\mathcal{C}$ inductively. For $k=-1$, we have only the empty set $\emptyset \in X(-1)$. For any $\sigma \in X(d)$, choose an arbitrary vertex $v_\sigma \in A_{\sigma,\emptyset}(0)$ and define $c_{\sigma,\emptyset} = v_\sigma$. Then it holds that
$$\partial(c_{\sigma,\emptyset}) = \partial(v_\sigma) = (-1)^0\emptyset,$$
as required. Assume now that $\mathcal{C}$ is defined for any $-1 \le i \le k-1$. For any $(\sigma,\tau) \in \mathcal{F}_k$ define $c_{\sigma,\tau}$ as follows. Consider the $k$-chain $c = (-1)^{k+1}\tau + \sum_{i=0}^{k}(-1)^ic_{\sigma,\tau_i}$, and note that $\partial(c) = 0$ since
\begin{align*}
\partial(c) &= (-1)^{k+1}\partial(\tau) + \sum_{i=0}^k(-1)^i \partial(c_{\sigma,\tau_i}) \\&=
(-1)^{k+1}\partial(\tau) + \sum_{i=0}^k(-1)^i \left((-1)^k\tau_i + \sum_{j=0}^{k-1}(-1)^jc_{\sigma,\tau_{ij}} \right) \\[7pt]&=
(-1)^{k+1}\partial(\tau) + (-1)^k\partial(\tau) + \sum_{i>j}(-1)^{i+j}c_{\sigma,\tau_{ij}} + \sum_{i<j}(-1)^{i+j-1}c_{\sigma,\tau_{ij}} = 0.
\end{align*}
By lemma~\ref{lem:filling-propoerty-of-apartments} it follows that there exists a $(k+1)$-chain $c' \in C_{k+1}(A_{\sigma,\tau};R)$ such that $\partial(c') = c$, so define $c_{\sigma,\tau} = c'$.
\end{proof}
For any $\sigma \in X(d)$ and $0 \le k \le d$, we define the contraction operator
$\iota_\sigma = \iota_{\sigma,k}:C^k(X;R) \to C^{k-1}(X;R)$ as follows. For any $f \in C^k(X;R)$ and $\tau \in X(k-1)$,
$$\iota_\sigma(f)(\tau) = (-1)^kf(c_{\sigma,\tau}).$$
This contraction operator allows us to relate the coboundary of a cochain to its distance from the coboundaries, as shown in the next lemma.
\begin{lemma}\label{lem:coboundary-plus-contraction}
Let $X$ be a $d$-dimensional spherical building and $R$ a ring. For any $f \in C^k(X;R)$, $0 \le k \le d-1$, and $\sigma \in X(d)$,
$$\delta(\iota_\sigma(f)) + \iota_\sigma(\delta(f)) = f.$$
\end{lemma}
\begin{proof}
For any $\tau \in X(k)$,
\begin{align*}
\delta(\iota_\sigma(f))&(\tau) + \iota_\sigma(\delta(f))(\tau) \\&=
\sum_{i=0}^k(-1)^i(\iota_\sigma(f))(\tau_i) + (-1)^{k+1}(\delta(f))(c_{\sigma,\tau}) \\&=
\sum_{i=0}^k(-1)^i(-1)^k f(c_{\sigma,\tau_i}) + (-1)^{k+1}f(\partial (c_{\sigma,\tau})) \\&=
(-1)^k\sum_{i=0}^k(-1)^i f(c_{\sigma,\tau_i}) + (-1)^{k+1}\left((-1)^{k+1}f(\tau) + \sum_{i=0}^k(-1)^if(c_{\sigma,\tau_i})\right) =
f(\tau).
\end{align*}
\end{proof}
We can now prove proposition~\ref{pro:spherical-building-homological}.
\begin{proof}[Proof of proposition~\ref{pro:spherical-building-homological}]
Let $f \in C^k(X;R)$, $-1 \le k \le d-1$. By lemma~\ref{lem:coboundary-plus-contraction}, for any $\sigma \in X(d)$,
\begin{equation}\label{eq:spherical-building-homological-1}
\norm{\iota_\sigma(\delta(f))} = \norm{f - \delta(\iota_\sigma(f))} \ge \dist(f, B^k(X;R)).
\end{equation}
Note that for any $\tau \in X(k)$,
\begin{align*}
\iota_\sigma(\delta(f))(\tau) \ne 0 \quad&\Rightarrow\quad
\delta(f)(c_{\sigma,\tau}) \ne 0 \\[5pt]&\Rightarrow\quad
\exists \rho \in \supp(\delta(f)) \cap \supp(c_{\sigma,\tau}) \\[5pt]&\Rightarrow\quad
\exists \rho \in \supp(\delta(f)) \cap A_{\sigma,\tau},
\end{align*}
which yields that
\begin{equation}\label{eq:spherical-building-homological-2}
\norm{\iota_\sigma(\delta(f))} =
\sum_{\tau \in \text{supp}(\iota_\sigma(\delta(f)))}\norm{\tau} \le
\sum_{\tau \in X(k)} \norm{\tau} \cdot |\supp(\delta(f)) \cap A_{\sigma,\tau}|.
\end{equation}
Combining~\eqref{eq:spherical-building-homological-1} and~\eqref{eq:spherical-building-homological-2} finishes the proof.
\end{proof}
\subsection{Proof of proposition~\ref{pro:spherical-building-symmetrical}}
The key point of the proof is that the spherical building possesses so many symmetries that, for any face, only a small portion of the apartments contains it. We make this formal in the following lemma.
\begin{lemma}\label{lem:spherical-building-many-symmetries}
Let $X$ be a $d$-dimensional spherical building and $G \le Aut(X)$ the group that acts transitively on $X$. For any $\rho \in X$ and $(\sigma,\tau) \in \mathcal{F}_k$, $-1 \le k \le d-1$,
$$\frac{|\{g \in G \;|\; g\rho \in A_{\sigma,\tau}\}|}{|G|} \le
\theta_d \frac{|\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|}{|X(d)|}.$$
\end{lemma}
\begin{proof}
For any $\rho \in X$, denote by $G_\rho = \{g \in G \;|\; g\rho = \rho\}$ the stabilizer of $\rho$, and consider the quotient $G/ G_\rho$. The elements in $G/ G_\rho$ are equivalence classes of the form $gG_\rho = \{gh \;|\; h \in G_\rho\}$. Consider a face $\sigma \in X(d)$ such that $\rho \subseteq \sigma$. The elements in $G_\rho$ can move $\sigma$ only to other $d$-dimensional faces which contain $\rho$. It follows that for any equivalence class $gG_\rho$, the number of $d$-dimensional faces to which the elements in $gG_\rho$ can move $\sigma$ is bounded by the number of $d$-dimensional faces which contain $\rho$. Since $G$ is transitive, there must be enough equivalence classes to cover all of $X(d)$. Thus,
\begin{equation}\label{eq:spherical-building-many-symmetries-1}
\frac{|G|}{|G_\rho|} =
\left|\faktor{G}{G_\rho}\right| \ge
\frac{|X(d)|}{|\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|},
\end{equation}
where the equality follows by Lagrange's theorem. Next, note that for any $\rho' \in X$, there are at most $|G_\rho|$ elements $g \in G$ for which $g\rho = \rho'$. Therefore, for any $(\sigma,\tau) \in \mathcal{F}_k$,
$$
|\{g \in G \;|\; g\rho \in A_{\sigma,\tau}\}| \le |A_{\sigma,\tau}|\cdot|G_\rho| \le
\theta_d \frac{|\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|}{|X(d)|} |G|,
$$
where the second inequality follows by~\eqref{eq:spherical-building-many-symmetries-1}.
\end{proof}
We can now prove proposition~\ref{pro:spherical-building-symmetrical}.
\begin{proof}[Proof of proposition~\ref{pro:spherical-building-symmetrical}]
Note that for any $\rho \in X$ and $g \in G$,
\begin{equation}
\sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}}\norm{\tau} =
\sum_{\substack{(g\sigma,g\tau) \in \mathcal{F}_k:\\g\rho \in gA_{\sigma,\tau}}}\norm{\tau} =
\sum_{\substack{(g\sigma,g\tau) \in \mathcal{F}_k:\\g\rho \in A_{g\sigma,g\tau}}}\norm{\tau} =
\sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\g\rho \in A_{\sigma,\tau}}}\norm{\tau}.
\end{equation}
Thus, it is possible to change the order of summation, i.e.,
$$
\sum_{g \in G} \sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau} =
\sum_{(\sigma,\tau) \in \mathcal{F}_k} \sum_{\substack{g \in G:\\g\rho \in A_{\sigma,\tau}}} \norm{\tau}.
$$
It follows that for any $\rho \in X$,
\begin{align*}
\sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau} &=
\frac{1}{|G|} \sum_{g \in G} \sum_{\substack{(\sigma,\tau) \in \mathcal{F}_k:\\\rho \in A_{\sigma,\tau}}} \norm{\tau} =
\frac{1}{|G|} \sum_{(\sigma,\tau) \in \mathcal{F}_k} \sum_{\substack{g \in G:\\g\rho \in A_{\sigma,\tau}}} \norm{\tau} \\&=
\sum_{(\sigma,\tau) \in \mathcal{F}_k} \norm{\tau} \frac{|\{g \in G \;|\; g\rho \in A_{\sigma,\tau}\}|}{|G|} \\&\le
\theta_d \frac{|\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|}{|X(d)|} \sum_{(\sigma,\tau) \in \mathcal{F}_k} \norm{\tau} \\[5pt]&=
\theta_d |\{\sigma \in X(d) \;|\; \rho \subseteq \sigma\}|,
\end{align*}
where the inequality follows by lemma~\ref{lem:spherical-building-many-symmetries}.
\end{proof}
\bibliographystyle{alpha}
\section{Notations and preliminaries}\label{sec:notations}
The main aim of this section is to provide the setup for the rest of the paper. After establishing the definitions we need about points with multiplicities and linear systems, we recall two important techniques to bound the dimension of a given system, namely restriction and specialization. Classically, the latter roughly consists of moving some of the imposed points to a special position. A variation of the standard specialization is presented in Construction \ref{cons:collapse}, where the points are allowed not only to be moved to a special position, but also to collapse. Proposition \ref{pro:molteplicità_limite} computes the multiplicity of the resulting limit scheme and points out the connection with interpolation theory. Lemma \ref{lem:grassisugrasso} is the prototype of a collision of fat points, and will be useful in the rest of the paper.
We work over the complex field $\mathbb{C}$. Every scheme will be projective, unless we specify it is not. For a scheme $X$ and a closed subscheme $Y\subset X$, we will write $\mathcal{I}_{Y,X}$ to denote the ideal sheaf of $Y$ in $X$.
If no ambiguity is likely to arise, we will write simply $\mathcal{I}_Y$ instead of $\mathcal{I}_{Y,X}$. If $\mathcal{F}$ is a coherent sheaf on $X$ and $i\in\mathbb{N}$, we will write $H^i\mathcal{F}$ to denote the cohomology group $H^i(X,\mathcal{F})$ and $\mathop{\rm h}\nolimits^i\mathcal{F}$ for its dimension.
\begin{dfn} Let $X$ be a $0$-dimensional scheme. The \textit{degree}, or \textit{length}, of $X$, denoted by $\deg X$, is the dimension of its ring of regular functions as a complex vector space. If $X$ is supported on a point $p$, we define the \textit{multiplicity} of $X$, denoted by $\mathop{\rm mult}\nolimits X$, to be the largest $k\in\mathbb{N}$ such that $X$ contains the $k$-tuple point supported on $p$.
\end{dfn}
There is a more general definition of multiplicity. If $X$ is a scheme of any dimension and $Y$ is an irreducible component of $X_{\mathrm{red}}$, one can define the multiplicity of $X$ at $Y$ as the length of the local ring $\mathcal{O}_{X,Y}$ (see \cite[Section 1.2.1]{EisenbudHarris}).
If $X\subset\p^n$ is a 0-dimensional subscheme, then $\deg X$ is the limit value of the Hilbert function of $X$. In other words, if $d$ is large enough, then $X$ imposes $\deg X$ independent linear conditions on degree $d$ divisors of $\p^n$.
Let us recall a basic fact about 0-dimensional schemes.
\begin{lem}\label{stessogrado} \label{basic} Let $X$ be a $0$-dimensional scheme supported at a point, and let $Y$ be a subscheme of $X$. If $\deg Y=\deg X$, then $Y = X$.
\end{lem}
Since we will deal with linear systems with assigned singularities, we introduce the notation we are going to use.
\begin{dfn} Let $V$ be a smooth quasi-projective variety, let $p_1,\ldots, p_r\in V$.
The \textit{linear system}
$$ L_{V,d}(m_1,\dots,m_r)(p_1,\ldots,p_r)\subset H^0\mathcal{O}_{V}(d)$$
is the vector space of divisors of $V$ having multiplicities at least $m_i$ at the point $p_i$.
In other words, if $X=m_1p_1+\ldots+m_rp_r\subset V$ is a fat point subscheme, then $$L_{V,d}(m_1,\dots,m_r)(p_1,\ldots,p_r)=H^0\mathcal{I}_{X,V}(d).$$
We will write $\mathcal{L}_{V,d}(m_1,\dots,m_r)(p_1,\ldots,p_r)$ to denote the associated ideal sheaf, that is,
$$\mathcal{L}_{V,d}(m_1,\dots,m_r)(p_1,\ldots,p_r):=\mathcal{I}_{X,V}(d).$$
If either the points $p_1,\ldots, p_r$ are in general position, or no confusion is likely to arise, then we set
$$ L_{V,d}(m_1,\dots,m_r):= L_{V,d}(m_1,\dots,m_r)(p_1,\ldots,p_r).$$
Moreover, if $m_1=\ldots=m_s=m$, then we write
$$ L_{V,d}(m^s,m_{s+1},\dots,m_r):= L_{V,d}(m_1,\dots,m_r).$$
We will write $ L_{n,d}(m_1,\dots,m_r)$ instead of $ L_{\p^n,d}(m_1,\dots,m_r)$. Finally, we will use $ L_{\p^1\times\p^1,(a,b)}(m_1,\dots,m_r)$
to denote the system of bidegree $(a,b)$ curves on $\p^1\times\p^1$ with the prescribed singularities.
\end{dfn}
Some authors in the literature consider a linear system as a projective space, so they work with $\p(L_{n,d}(m_1,\dots,m_r))$. The two approaches are equivalent. We only have to be aware that in this paper we work with affine dimensions, rather than projective dimensions.
\begin{dfn}
Let $n:=\dim V$. The \textit{virtual dimension} of such a linear system is
$$\mathop{\rm vdim}\nolimits L_{V,d}(m_1,\dots,m_r):=\mathop{\rm h}\nolimits^0\mathcal{O}_{V}(d)-\sum_{i=1}^r\binom{m_i-1+n}{n}.$$
The \textit{expected dimension} is defined as
$$\mathop{\rm edim}\nolimits L_{V,d}(m_1,\dots,m_r):=\max
\left\{\mathop{\rm vdim}\nolimits L_{V,d}(m_1,\dots,m_r),0\right\},$$
where expected dimension $0$ indicates that the linear system is expected to be empty.
Note that
$$\dim L_{V,d}(m_1,\dots,m_r)\ge\mathop{\rm edim}\nolimits
L_{V,d}(m_1,\dots,m_r).$$
When the linear conditions imposed by the base points are dependent, the previous inequality is strict, and the linear system is said to be \textit{special}. On the other hand, if the conditions are independent, then $\dim L_{V,d}(m_1,\dots,m_r)=\mathop{\rm edim}\nolimits L_{V,d}(m_1,\dots,m_r)$ and the system is called \textit{non-special}.
\end{dfn}
Not much is known about the classification of special linear systems $ L_{n,d}(m_1,\dots,m_r)$ for an arbitrary $n$. The most important result in this direction is the celebrated Alexander-Hirschowitz theorem, proven in \cite{AH}, that solves the problem for systems with general double points.
\begin{thm}[Alexander-Hirschowitz] \label{thm:AH} The linear system $ L_{n,d}(2^h)$ is special if and only if $(n,d,h)$ is one of the following:
\begin{itemize}
\item[i)] $(n,2,h)$ with $2\leq h\leq n$,
\item[ii)] $(2,4,5)$, $(3,4,9)$, $(4,3,7)$, $(4,4,14)$.
\end{itemize}
\end{thm}
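To illustrate the first case in item ii), consider $ L_{2,4}(2^5)$: its virtual dimension is $\binom{6}{2}-5\cdot 3=0$, so the system is expected to be empty; however, the double of the unique conic through the five points is a quartic which is singular at all of them, hence $\dim L_{2,4}(2^5)\ge 1$ and the system is special.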
Let us introduce two classical tools to deal with the computation of the dimension of a linear system.
The first one is a useful exact sequence that will help us later.
\begin{dfn}
\label{dfn:castelnuovo}
Let $S\subset V$ be a smooth hypersurface and $L$ a linear system on $V$. Let $\rho: L\to L_{|S}$ be the restriction map. Let $ L-S:=\ker(\rho)$, that is $ L-S=\{0\}\cup\{D\in L\mid D\supset S\}$. Denote by $\mathcal{L}-S$ the associated sheaf and by $\mathcal{L}_{|S}$ the sheaf associated to $L_{|S}$. There is a short exact sequence of sheaves on $V$
\[0\to \mathcal{L}-S\to \mathcal{L}\to \mathcal{L}_{|S}\to 0,\]
called \textit{restriction sequence} or
\textit{Castelnuovo sequence}.
\end{dfn}
By the Castelnuovo sequence, if both $ L-S$ and $ L_{|S}$ are non-special of non-negative virtual dimension, then $ L$ is non-special; indeed, taking global sections in the restriction sequence gives $\dim  L \le \dim( L-S) + \dim  L_{|S}$.
Another thing we can do with a linear system $ L:= L_{V,d}(m_1,\dots,m_r)$ is to degenerate it, namely we can pick $q_1,\dots,q_r\in V$ and move the singularities of $ L$ from general position to the points we choose. In this way we have to deal with
\[ L_0:= L_{V,d}(m_1,\dots,m_r)(q_1,\dots,q_r)\]
instead of $ L$. If we choose the points $q_i$ wisely, hopefully we can say something about $ L_0$ (for instance, about its dimension) and use semicontinuity to get information about $ L$. Now we want to make this intuitive notion more precise. The next definitions are based on \cite{CM1}.
\begin{dfn} Let $Y$ be a smooth variety. A \textit{degeneration} is a proper and flat morphism $\pi: Y\to\Delta$, where $\Delta\ni 0,1$ is a complex disk. For any $t\in\Delta$, we denote by $Y_t$ the fiber of $\pi$ over $t$. Let $\sigma_i:\Delta\to Y$ be sections of $\pi$ and let $Z$ be a scheme supported on $\bigcup_i\sigma_i(\Delta)$. For $t\in\Delta$, define $Z_t:=Z_{|Y_t}$, so that $Z_0$ is the flat limit of the schemes $Z_t$.
We say that $Z_0$ is a \textit{specialization} of $Z_t$.
\end{dfn}
For the sake of simplicity, sometimes we will say that $Z_0$ is a specialization of $Z_1$, instead of $Z_t$, meaning that $1$ stands for any general point of $\Delta$.
\begin{construction}[Specialization without collisions]\label{cons:notationlinearsystem}
Let $m_1,\dots,m_r\in\mathbb{N}$ and let $V$ be a smooth variety. Let $Y:=V\times\Delta$ and let $\pi:Y\to\Delta$ be the projection. Fix $r$ disjoint sections $\sigma_1,\ldots,\sigma_{r}:\Delta\to Y$.
Let
\[Z:=\bigcup_{i=1}^r \sigma_i(\Delta)^{m_i}\subset Y\]
be the scheme supported on the sections with multiplicity $m_i$ along $\sigma_i(\Delta)$.
Let
$$\mathbb{L}:=H^0\mathcal{I}_{Z,Y}(d)$$
be the linear system on $Y$ associated to degree $d$ divisors having multiplicities at least $m_i$ along $\sigma_i(\Delta)$.
Then, for a general $t\in \Delta$, the linear system $\mathbb{L}_{|Y_t}$ coincides with
$$ L_t:= L_{V,d}(m_1,\dots,m_r)(\sigma_1(t),\ldots,\sigma_r(t)).$$
\label{rem:specialize_to_get_non_speciality}
By semicontinuity, we have $$\dim L_0\geq \dim L_t.$$
Therefore, in order to prove that $ L_t$ is non-special, it is enough to produce a degeneration such that $ L_0$ is non-special.
\end{construction}
In this paper we are interested in a different kind of degeneration, namely we want to drop the hypothesis that $\sigma_1,\dots,\sigma_r$ are disjoint. We now modify Construction \ref{cons:notationlinearsystem} in order to allow the specialized points to collapse. Since taking the limit is a local construction, we can work with the affine space instead of a variety $V$. This idea is based on \cite{CM3}.
\begin{construction}[Specialization with $h$ collapsing points]\label{cons:collapse}
Let $Y:=\mathbb{A}^n\times\Delta$, with second projection
$\pi:Y\to\Delta$ and fibers $Y_t:=\mathbb{A}^n\times\{t\}$.
Fix a point $q\in Y_0$ and $h$ general sections $\sigma_1,\ldots,\sigma_{h}:\Delta\to Y$ of $\pi$ such that $\sigma_i(0)=q$.
Define $$Z:=\bigcup_i\sigma_i(\Delta)^{m_i}\subset Y.$$
Let $X\to Y$ be the blow-up of $Y$ at the point $q$, with exceptional divisor $W$. Then we have
a degeneration $\pi_{X}:X\to\Delta$ and sections $\sigma_{X,i}:\Delta\to X$. The fiber $X_0$ is reducible, and it is given by $W\cup \tilde{Y_0}$, where $W\cong\p^n$ and $\tilde{Y_0}=\mathop{\rm Bl}\nolimits_q Y_0$ is $\mathbb{A}^n$ blown up at one point. Let $R=W\cap \tilde{Y_0}\cong\p^{n-1}$ be the exceptional divisor of this blow-up. We want to stress that, since the sections $\sigma_i$ are general, $\sigma_{X,1}(0),\dots,\sigma_{X,h}(0)$ are general points of $W$. With these notations, we say that $Z_0=Z_{|Y_0}$ is the flat limit of $h$ collapsing points of multiplicities $m_1,\dots,m_h$.
\end{construction}
Our goal will be to describe $Z_0$. Once we understand the limit, we may study the dimension of a linear system via its specializations with collapsing points, using the same technique described in Construction~\ref{rem:specialize_to_get_non_speciality}.
\begin{rmk} \label{rmk:possiamo assumere di essere nell'affine}\begin{enumerate}
\item Since a collision is a local construction, our results about collisions on $\mathbb{A}^n$ hold on any smooth variety.
\item When we consider degenerations as in Construction \ref{cons:notationlinearsystem} or \ref{cons:collapse}, by flatness we know that the length is preserved, so $\deg Z_0=\deg Z_1$.
\item One could give the same definitions without requiring that the sections $\sigma_i$ are general, but in this case the theory becomes more involved and less interesting for applications.
\end{enumerate}
\end{rmk}
As a warm-up, we start with an easy result that describes collisions of fat points on smooth curves.
\begin{pro}\label{pro:sullaretta}
Let $m_1,\dots,m_h\in\mathbb{N}$ and let $m=m_1+\ldots+m_h$. The limit of $h$ collapsing points of multiplicities $m_1,\dots,m_h$ in $\mathbb{A}^1$ is an $m$-tuple point.
\begin{proof}
It is enough to observe that the only length $m$ subscheme of $\mathbb{A}^1$ supported at a point is the $m$-tuple point.
\end{proof}
\end{pro}
Now that the case $n=1$ is settled, for the rest of this paper we assume $n\ge 2$ and move on to some more interesting cases in higher dimension. In order to understand what $Z_0$ is, the first problem to tackle is to compute its multiplicity.
In \cite[Proposition 4]{EvainIdea}, the author solves the problem for $n=2$, under the assumption that the sections $\sigma_i$ are given by a homothetic transformation. It is proved that the multiplicity of the limit scheme $Z_0$ is the minimum integer $j$ such that the linear system $H^0\mathcal{I}_{Z_1,\mathbb{A}^2}(j)=L_{2,j}(m_1,\dots,m_h)$ is nonzero. In \cite[Theorem 2.6]{Ne}, the assumption on the sections is dropped, but only a bound is provided. In \cite{GM}, a different proof allows the authors to generalize this bound to any dimension. Now we want to improve this result, and show that the estimated value is actually attained.
\begin{pro}\label{pro:molteplicità_limite}
Let $k:=\min\{j\in\mathbb{N}\mid H^0\mathcal{I}_{Z_1,\mathbb{A}^n}(j)\neq 0\}$. Then $\mathop{\rm mult}\nolimits Z_0=k$. In particular, the multiplicity of the limit scheme does not depend on the sections $\sigma_i$, as long as they are general.
\begin{proof}
Thanks to \cite[Lemma 20]{GM}, it suffices to prove that $\mathop{\rm mult}\nolimits Z_0\le k$, so we only have to show that $Z_0$ is contained in a degree $k$ divisor. For $t\neq 0$, set $l=\mathop{\rm h}\nolimits^0\mathcal{I}_{Z_t}(k)$. Since the points of $Z_t$ are in general position, $l$ does not depend on $t$, and by hypothesis $l\ge 1$. Let $P\subset Y_t=\mathbb{A}^n$ be a set of $l-1$ general points, and define $Z'_t=Z_t\cup P$.
Observe that $Z'_t\supset Z_t$ for every $t$, and there is a unique degree $k$ divisor $D_t\subset Y_t$ such that $D_t\supset Z'_t$. Let $f_t$ be the polynomial defining $D_t$ as a divisor in $Y_t$.
Then $f_0$ defines a divisor $D_0$ of $Y_0$ which is the flat limit of the $D_t$'s. Hence $\deg f_0\le \deg f_t=k$
and $D_0\supset Z'_0\supset Z_0$, so $\mathop{\rm mult}\nolimits Z_0\le k$.
\end{proof}
\end{pro}
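As an example, for two double points in $\mathbb{A}^2$ the minimal degree of a curve which is singular at both of them is $2$ (the double line through the two points), so the limit of $2$ colliding double points has multiplicity $2$; this case will reappear at the end of this section.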
In some cases the multiplicity is enough to compute the limit scheme.
\begin{lem}\label{lem:grassisugrasso} Let $m_1,\dots,m_h,m,n\in\mathbb{N}$ be such that
\[\binom{m_1+n-1}{n}+\ldots+\binom{m_h+n-1}{n}=\binom{m+n}{n}.\]
If
$ L_{n,m}(m_1,\dots,m_h)$ is non-special, then the limit of $h$ colliding points of multiplicities $m_1,\dots,m_h$ in $\mathbb{A}^n$ is an $(m+1)$-tuple point.
\begin{proof} Consider the scheme $Z_1\subset\mathbb{A}^n$ made of $h$ general fat points of multiplicities $m_1,\dots,m_h$. By hypothesis, $L_{n,m}(m_1,\dots,m_h)=H^0\mathcal{I}_{Z_1,\mathbb{A}^n}(m)$ is non-special and it has expected dimension $0$, so it is empty. On the other hand, $ L_{n,m+1}(m_1,\dots,m_h)$ has positive expected dimension, so it is not empty. By Proposition \ref{pro:molteplicità_limite}, the limit scheme $Z_0$ contains an $(m+1)$-tuple point. We know that $\deg Z_0=\deg Z_1$ by flatness, and by hypothesis the length of $Z_1$ coincides with that of an $(m+1)$-tuple point. We conclude by Lemma \ref{stessogrado}.
\end{proof}\end{lem}
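For instance, three general simple points in $\mathbb{A}^2$ satisfy the hypotheses with $m=1$: indeed $1+1+1=\binom{3}{2}$ and $ L_{2,1}(1^3)$ is non-special (three general points impose independent conditions on lines), so the limit of three collapsing simple points in the plane is a double point, of degree $3$.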
The previous lemma will be very useful for our purposes. Indeed, when we use limits to specialize a linear system, the most effective result would be a description of $Z_0$ as a fat point of some multiplicity. However, this can happen only if the hypotheses of Lemma \ref{lem:grassisugrasso} are satisfied. When the scheme we are specializing does not have the degree of a multiple point, this analysis is not enough to determine the limit scheme.
In the notations of Construction \ref{cons:collapse},
let \[\Sigma:=\bigcup_{i=1}^h\sigma_{X,i}(\Delta)\subset X\] be the smooth scheme associated to the strict transform $Z_X$ of $Z$ on $X$.
Let $\mathcal{X}\to X$ be the blow-up of the ideal sheaf $\mathcal{I}_{\Sigma}$ with exceptional divisors $\mathcal{E}_1,\dots,\mathcal{E}_h$, and let $\varphi:\mathcal{X}\to\Delta$ be the degeneration onto $\Delta$. Note that this blow-up is an isomorphism in a neighbourhood of $Y_0$. The central fiber is
$$\mathcal{X}_0:=\varphi^{-1}(0)=P\cup \tilde{Y}_0,$$
where $P$ is the blow-up of $W\cong\p^n$ at $h$ general points. With abuse of notation, we identify $R\subset X$ with its strict transform $R=P\cap \tilde{Y}_0$. The linear systems we are interested in are $ L:=H^0\mathcal{O}_\mathcal{X}(-\sum_i m_i\mathcal{E}_i-\mathop{\rm mult}\nolimits(Z_0)P)$ and its restrictions $ L_P$, $ L_R$, to $P$ and $R$.
The linear system $ L$ is complete. However, the following example (\cite[Example 2.10]{Ne}) shows that in general $ L_R$ is not complete.
\begin{es}\label{3su3plo}
In the case of 3 colliding double points in $\mathbb{A}^2$, the limit has multiplicity 3. On the other hand, a triple point has degree $6$, while $\deg Z_1=9$, therefore the limit is not only the triple point.
In order to better understand the first infinitesimal neighborhood of the limit, we look at the system $ L_P\cong L_{2,3}(2^3)$. A plane cubic with 3 double points in general position is the union of the 3 lines joining the pairs of points, and these lines intersect $R$ in 3 points. Hence $ L_R\subsetneq H^0\mathcal{O}_R(3)$ is not complete, but rather it has only one nonzero section, whose divisor consists of the 3 intersection points. Those 3 intersections are base points for $ L_P$, so they are tangent directions (infinitely near points) in the limit scheme. We proved that $Z_0$ contains a triple point with 3 infinitely near simple points. Since these tangent directions impose independent conditions on cubics of $R$, that is, $ L_R$ is non-special, this subscheme of $Z_0$ has degree $6+3=9=\deg Z_0$. By Lemma \ref{stessogrado}, $Z_0$ is a triple point with 3 infinitely near simple points.
\end{es}
It is worth mentioning that, unlike $ L_R$, the system $ L_P$ is always complete, as proven in \cite[Lemma 24]{GM}. Observe that in Example \ref{3su3plo} we computed the degree of our candidate by checking that $ L_R$ is non-special. This is an important and often nontrivial step, as we will see in Section \ref{sec:doublepoints}. In order to make this precise, we need a lemma.
\begin{lem}
\label{lem:degree of a fat point with infinitely near simple points}
Let $q\in\mathbb{A}^n$. Let $E$ be the exceptional divisor of $\mathop{\rm Bl}\nolimits_q\mathbb{A}^n$ and $B:=\{p_1,\dots,p_t\}\subset E$ a set of $t$ simple points. Let $X$ be the scheme supported at $q$ consisting of an $m$-ple point with $t$ infinitely near points $p_1,\dots,p_t$. Then
\[\deg X=\binom{n+m-1}{n}+\binom{n+m-1}{n-1}-\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m)=\binom{n+m}{n}-\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m).\]
\begin{proof}
We argue by induction on $t$. If $t=0$, then $X$ is just an $m$-ple point in $\mathbb{A}^n$, so $\deg X=\binom{n+m-1}{n}$. On the other hand, $B=\varnothing$, so $\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m)=\mathop{\rm h}\nolimits^0\mathcal{O}_{\p^{n-1}}(m)=\binom{n+m-1}{n-1}$ and the statement holds. Assume then $t\ge 1$. Let $B':=B\setminus\{p_t\}$ and let $X'\subset X$ be the subscheme consisting of an $m$-ple point with $t-1$ infinitely near points $p_1,\dots,p_{t-1}$. By induction hypothesis,
\[\deg X'=\binom{n+m}{n}-\mathop{\rm h}\nolimits^0\mathcal{I}_{B',E}(m).\]
There are two possibilities. If $p_t$ is a base point for the linear system $H^0\mathcal{I}_{B',E}(m)$, then
$H^0\mathcal{I}_{B',E}(m)=H^0\mathcal{I}_{B,E}(m)$, so
\[\deg X=\deg X'=\binom{n+m}{n}-\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m).\]
If $p_t$ is not a base point for $H^0\mathcal{I}_{B',E}(m)$, then $\mathop{\rm h}\nolimits^0\mathcal{I}_{B',E}(m)=1+\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m)$ and $X'$ is a proper subscheme of $X$, so $\deg X'<\deg X$. Since $p_t$ is a simple point, the difference of the degrees cannot be more than 1, so
\begin{align*}
\deg X&=1+\deg X'=1+\binom{n+m}{n}-\mathop{\rm h}\nolimits^0\mathcal{I}_{B',E}(m)\\
&=1+\binom{n+m}{n}-(1+\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(m))
.\qedhere
\end{align*}
\end{proof}
\end{lem}
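As a sanity check of the formula, take $n=2$, $m=3$ and $B$ a set of $3$ distinct points of $E\cong\p^1$: then $\mathop{\rm h}\nolimits^0\mathcal{I}_{B,E}(3)=4-3=1$, so $\deg X=\binom{5}{2}-1=9$, in agreement with Example \ref{3su3plo}.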
Notice that Lemma \ref{lem:degree of a fat point with infinitely near simple points} need not hold when the infinitely near points have multiplicities greater than 1. An explicit computation shows that the subscheme of $\mathbb{A}^3$ consisting of a triple point with an infinitely near double point has degree 12. Nonetheless, we will often encounter infinitely near simple points. In those cases, the lemma allows us to get information on the limit scheme.
\begin{cor}\label{cor:condizioni imposte dai punti infinitamente vicini}
In the notation of Construction \ref{cons:collapse}, let $m=\mathop{\rm mult}\nolimits Z_0$ and let $B:=(\mathop{\rm Bs}\nolimits( L_P))_{|R}$. If $B$ is a set of $t$ simple points, then $\deg Z_0\ge\binom{n+m}{n}-\dim( L_R)$. In particular, if $ L_R$ is non-special then $\deg Z_0\ge\binom{n+m-1}{n}+t$.
\begin{proof}
By hypothesis $Z_0$ contains a subscheme $S$ consisting of a $m$-ple point with some infinitely near simple points. We conclude by Lemma \ref{lem:degree of a fat point with infinitely near simple points}.
\end{proof}
\end{cor}
Before we move to the first results on limits, it is important to have a clear idea of what kind of characterization we want. In general it will be too complicated to determine the limit up to isomorphism. For instance, let $Z_1$ consist of 14 general simple points in $\mathbb{A}^2$ and let $Z_0$ be their collision. By Proposition \ref{pro:molteplicità_limite}, $\mathop{\rm mult}\nolimits Z_0=4$. However, just as in Example \ref{3su3plo}, the scheme cannot be just a $4$-tuple point, whose degree is $10<14$. In order to find more information, we look at the linear system $ L_P\cong L_{2,4}(1^{14})$. This system has only one nonzero section $C$ up to scalar, so its restriction to the exceptional line $R$ consists of 4 simple points. Thus the candidate limit is a $4$-tuple point with 4 tangent directions, and since $ L_R$ is non-special it has degree at least
\[\binom{2+4-1}{2}+4=14=\deg Z_1\]
by Corollary \ref{cor:condizioni imposte dai punti infinitamente vicini}. Therefore, our candidate actually coincides with the limit. However, notice that if we change the sections $\sigma_1,\dots,\sigma_{14}$, then we will have different tangent directions in the limit. Recall that two 4-tuples of points in $\p^1$ are in general not projectively equivalent, so the limits need not be isomorphic. Nonetheless, we will be content with saying that the limit is a $4$-tuple point with 4 infinitely near simple points.
We also want to stress that our analysis works as long as we make all points collide at once. If we collide some of them to a limit scheme $\tilde{Z_1}$ and then we collide the others together with $\tilde{Z_1}$, we are not guaranteed to obtain the same limit scheme as if we collide all of them at once.
As an example, let $Z_1$ be the scheme consisting of a double point and 3 simple points in $\mathbb{A}^2$. If we make them collide, the multiplicity of the limit scheme $Z_0$ is 3 by Proposition \ref{pro:molteplicità_limite}. On the other hand, we could first collide the 3 simple points to a double point, but the limit of 2 colliding double points has multiplicity 2. However, this kind of multi-staged collision provides new legitimate ways to degenerate a linear system.
It is time to move to the description of the limit $Z_0$, and we start with the limit of a bunch of colliding double points. While in some sense it is simpler, the study of collisions of simple points requires a different and peculiar treatment, and will be addressed in another paper.
\section{Double points}\label{sec:doublepoints}
In this section we assume that all the collapsing points have multiplicity 2. First we review some of the well-understood cases. In Definition \ref{dfn:candidate} we explicitly construct a scheme consisting of one triple point with some tangent directions; we prove that it is a subscheme of the limit and we conjecture that they coincide. This is equivalent to computing the number of independent conditions imposed by a set of simple points in a given special position, see Conjecture \ref{con:n+k}. Lemma \ref{lem:quantecodizionidanno} allows us to prove that the conjecture holds for several small values of $k$. Conversely, in Theorem \ref{thm:lift} we prove that the general triple point with the suitable number of tangent directions can be obtained as a collision of double points.
When dealing with linear systems with double points, we will repeatedly use Theorem \ref{thm:AH}. As a warm-up, we deal with the cases satisfying the hypothesis of Lemma \ref{lem:grassisugrasso}.
\begin{pro}\label{pro:doppisumultiplo}
Let $m\ge 2$ and $(n,m)\notin\{(2,3),(2,5),(4,4),(4,5)\}$. Define $h:=\frac{\binom{n+m-1}{n}}{n+1}$. If $h\in\mathbb{N}$, then the limit of $h$ colliding double points in $\p^n$ is an $m$-tuple point.
\begin{proof}
By our numerical assumption, Theorem \ref{thm:AH} implies that $ L_{n,m-1}(2^h)$ is non-special. Now the statement follows by Lemma \ref{lem:grassisugrasso}.
\end{proof}
\end{pro}
As we already noticed, in most cases the limit is not just a point with multiplicity. As Example \ref{3su3plo} shows, once we understand the minimum degree of a divisor containing $Z_1$, we need information on the base locus of such divisors.
When we deal with double points, it is convenient to work in the case $h>n$. Indeed, $h\le n$ yields $\mathop{\rm mult}\nolimits Z_0=2$, and $ L_{n,2}(2^h)$ has a nonreduced base locus, so it is difficult to describe the conditions imposed on the limit linear system. On the other hand, if $h>n$ then we have $\mathop{\rm mult}\nolimits Z_0=3$, at least for $n$ big enough, and the base locus of cubics with assigned double points is very well understood. We start with a technical result.
\begin{lem}\label{lem:indip} Let $n\geq 2$ and $l\le n+2$. Let $A:=\{a_1,\dots,a_{l}\}$ be a set of $l$ general points in $\p^n$ and let $R$ be a hyperplane such that $A\cap R=\varnothing$. Let $p_{ij}:=\langle a_i,a_j\rangle\cap R$ and
$$B:=\{p_{ij}\mid 1\le i<j\le l\}.$$
Then $ L_{n-1,2}(B)$ and $ L_{n-1,3}(B)$ are non-special, that is, the points of $B$ impose independent conditions to quadrics and cubics of $R$.
\begin{proof}
It is enough to prove the claim for $l=n+1$ for quadrics and $l=n+2$ for cubics. $ L_{n-1,2}(B)$ is non-special by \cite[Lemma 25]{GM}.
Now assume that $l=n+2$. We prove that the points of $B$ are general for cubics by induction on $n$. It is easy to check that the statement holds for $n=2$, so we assume $n\ge 3$. Specialize $a_1,\dots,a_{n+1}$ on a general hyperplane $W\subset\p^{n}$. Define
$$B_1:=\{p_{ij}\mid 1\le i<j\le n+1\}\mbox{ and }B_2:=\{p_{1,n+2},\dots,p_{n+1,n+2}\}.$$
Observe that the points of $B_2$ are in general position on $R$, and $B=B_1\cup B_2$. Let $H:=W\cap R=\p^{n-2}$. The Castelnuovo exact sequence reads
\[0\to\mathcal{I}_{B_2,R}(2)\to\mathcal{I}_{B,R}(3)\to\mathcal{I}_{B_1,H}(3)\to 0.\]
Since $B_2$ is a set of general points of $R$, $\mathop{\rm h}\nolimits^1\mathcal{I}_{B_2,R}(2)=0$. If we set
$$A_1:=\{a_1,\dots,a_{n+1}\},$$
then $A_1$ is a set of general points in $W$ and $H$ is a hyperplane of $W$ such that $A_1\cap H=\varnothing$. By the induction hypothesis, $\mathop{\rm h}\nolimits^1\mathcal{I}_{B_1,H}(3)=0$. Hence $\mathop{\rm h}\nolimits^1\mathcal{I}_{B,R}(3)=0$ and so $B$ imposes independent conditions on cubics of $R$. Since the conditions are independent when $A$ and $B$ are in this specialized configuration, the statement holds by semicontinuity.
\end{proof}
\end{lem}
\begin{rmk}\label{rmk:nongenerici}
Even though $B$ imposes independent conditions, the points of $B$ are not in general position.
For every choice of $t$ points of $A$, their span is a $\p^{t-1}$, so the corresponding $\binom{t}{2}$ points of $B$ lie on a $\p^{t-2}$.
\end{rmk}
The next two Propositions, proven in \cite{GM}, solve the cases $h=n+1$ and $h=n+2$.
\begin{pro} \label{collision1} If $n\ge 2$, then the limit of $n+1$ collapsing double points in $\mathbb{A}^n$ is a triple point with $\binom{n+1}{2}$ tangent directions. The infinitely near simple points are in the special position described by Remark \ref{rmk:nongenerici}.
\end{pro}
\begin{pro} \label{collision2} Let $Z_0$ be the limit of $n+2$ collapsing double points in $\mathbb{A}^n$.
\begin{enumerate}
\item If $n=2$, then $Z_0$ is a $4$-tuple point, together with the involution described in \cite[Proposition 3.1]{CM3}.
\item If $n=3$, then $Z_0$ is a $4$-tuple point.
\item If $n\ge 4$, then $Z_0$ is a triple point with $\binom{n+2}{2}$ tangent directions. In this case the infinitely near simple points are in the special position described by Remark \ref{rmk:nongenerici}.
\end{enumerate}
\end{pro}
The proofs rely on Proposition \ref{pro:molteplicità_limite} to compute the multiplicity. Then we determine the base locus of the linear system and we apply Lemma \ref{lem:indip} to check that $ L_R$ is non-special, so that we can conclude by Corollary \ref{cor:condizioni imposte dai punti infinitamente vicini} and Lemma \ref{stessogrado}.
Despite the previous results, the limit scheme can be more complicated than a fat point with a bunch of infinitely near points.
For instance, the limit of 5 colliding double points in the plane is described in \cite[Proposition 3.1]{CM3} as a $4$-tuple point with a pair of infinitely near tacnodal points.
We could try to apply the argument of Propositions \ref{collision1} and \ref{collision2} to a higher number of colliding double points. However, we cannot expect the same proof to work, because Lemma \ref{lem:indip} does not hold for $l\ge n+3$. As an example, let us work out one of the exceptions of Theorem \ref{thm:AH}.
\begin{es}\label{es:7doppiinp4}
Consider a set of general points $A:=\{a_1,\dots,a_7\}\subset\p^4$. As in Lemma \ref{lem:indip}, let $R$ be a hyperplane such that $A\cap R=\varnothing$ and $p_{ij}:=\langle a_i,a_j\rangle\cap R$. Then $B:=\{p_{ij}\mid 1\le i<j\le 7\}$ has 21 points, while $\mathop{\rm h}\nolimits^0\mathcal{O}_R(3)=20$, so $L_{R,3}(B)=H^0\mathcal{I}_{B,R}(3)$ is expected to be empty. However, we know that there is a cubic $C\subset\p^4$ singular at $a_1,\dots,a_7$. $C$ contains all the lines joining pairs of points of $A$, so in particular $C_{|R}\supset B$. Consider the Castelnuovo exact sequence
\[0\to \mathcal{L}_{4,2}(2^7)\to\mathcal{L}_{4,3}(2^7)\to\mathcal{I}_{B,R}(3)\to 0.
\]
Observe that $L_{4,2}(2^7)=0$, so the restriction $ L_{4,3}(2^7)\to H^0\mathcal{I}_{B,R}(3)$ is injective and therefore
$C_{|R}$ is a nonzero element of $H^0\mathcal{I}_{B,R}(3)$. Since $\mathop{\rm h}\nolimits^0\mathcal{O}_{\p^3}(3)=20$, the 21 points of $B$ impose at most 19 independent conditions on cubics of $R$. A software computation shows that $B$ actually imposes exactly 19 independent conditions.
\end{es}
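The ``software computation'' mentioned in the example can be reproduced with any computer algebra system. The following is a minimal sketch in Python with SymPy (not the computation originally used; random rational points stand in for general ones): it builds the $21$ points $p_{ij}$ from $7$ random points of $\p^4$ and computes the rank of the evaluation matrix of the $20$ cubic monomials of $R\cong\p^3$ at these points. For a generic choice of the points one expects the output $19$, that is, exactly $19$ independent conditions.
\begin{verbatim}
from itertools import combinations
from random import randint
from sympy import Matrix, Rational, symbols

n, N = 4, 7      # ambient P^n and number of general points
# random points of P^n with last coordinate 1, so none lies on R = {x_n = 0}
A = [[Rational(randint(-30, 30)) for _ in range(n)] + [Rational(1)]
     for _ in range(N)]

def meet_R(a, b):
    # the line s*a + t*b meets {x_n = 0} for (s, t) = (b[n], -a[n]);
    # return the n coordinates of the intersection point, seen in R = P^{n-1}
    return [b[n] * a[k] - a[n] * b[k] for k in range(n)]

B = [meet_R(A[i], A[j]) for i, j in combinations(range(N), 2)]

x = symbols('x0:%d' % n)
cubics = [x[i] * x[j] * x[k]      # all cubic monomials of P^{n-1}
          for i in range(n) for j in range(i, n) for k in range(j, n)]
M = Matrix([[c.subs(dict(zip(x, p))) for c in cubics] for p in B])
print(len(B), len(cubics), M.rank())      # expected output: 21 20 19
\end{verbatim}
The same script, run with other values of \texttt{n} and \texttt{N}, should also cover the analogous verifications needed later in this section.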
More generally, let $Z_1$ be a scheme of $n+3$ double points, with $n\ge 5$. Observe that $\deg Z_1=(n+1)(n+3)$ and $\mathop{\rm mult}\nolimits Z_0=3$. It is easy to see that $\mathop{\rm Bs}\nolimits L_{n,3}(2^{n+3})$ consists of the double points and of the $\binom{n+3}{2}$ lines joining pairs of points. Then we have $\binom{n+3}{2}$ simple points infinitely near to the limit triple point. However, these simple points do not impose independent conditions on cubics. Indeed, if they did, then Corollary \ref{cor:condizioni imposte dai punti infinitamente vicini} would imply
\[\deg Z_0\ge\binom{n+2}{2}+\binom{n+3}{2}=n^2+4n+4=1+\deg Z_1.\]
Hence those $\binom{n+3}{2}$ simple points impose dependent conditions on cubics of $R$. On the other hand, at least $\binom{n+2}{2}$ of them are independent by Lemma \ref{lem:indip}. How can we give a description of the limit in these cases?
\begin{rmk}\label{rmk:graziekris}
Let $Z$ be an $m$-tuple point supported at $q\in\p^n$, with an infinitely near simple point, and let $l$ be the line containing $q$ corresponding to the infinitely near point. The restriction of $Z$ to a general line through $q$ is an $m$-tuple point, while $Z_{|l}$ has multiplicity $m+1$. This suggests a possible description of the limit of $n+k$ collapsing double points. Assume that $\mathop{\rm mult}\nolimits Z_0=3$, and let $l_1,\dots,l_{\binom{n+k}{2}}$ be the base lines, all passing through the limit point $q$. Let $S^4_i$ be the multiplicity 4 subscheme of $l_i$ supported at $q$. We know that $Z_0$ contains the union of the $S^4_i$'s, and we conjecture that they coincide. Now we want to precisely formulate the problem and to provide a solution for small $k$.
\end{rmk}
\begin{dfn}\label{dfn:candidate}
Let $n,m\ge 2$, and let $l_1,\dots,l_t\subset\mathbb{A}^n$ be lines meeting at the origin. Let $S^m_i$ be the 0-dimensional degree $m$ subscheme of $l_i$ supported at the origin, and let $ I_{S^m_i,\mathbb{A}^n}$ be the ideal defining $S^m_i$ in $\mathbb{A}^n$. Define $Z_n(l_1,\dots,l_t)$ to be the union scheme associated to the ideal $$ I_n(l_1,\dots,l_t):= I_{S^m_1,\mathbb{A}^n}\cap\ldots\cap I_{S^m_t,\mathbb{A}^n}.$$
If $l_1,\dots,l_t$ are general lines through the origin and $m=4$, then we define $$Z_{n,t}:=Z_n(l_1,\dots,l_t)\mbox{ and }\ I_{n,t}:= I_n(l_1,\dots,l_t).$$
When $\mathop{\rm mult}\nolimits Z_{n,t}=3$, we can think of this scheme as a triple point with $t$ infinitely near simple points, representing the directions corresponding to $l_1,\dots,l_t$.
\end{dfn}
\begin{rmk}\label{rmk:limit_description}
Consider $n+k$ colliding double points in $\mathbb{A}^n$ and assume the limit has multiplicity 3. Then the limit triple point has $\binom{n+k}{2}$ infinitely near simple points, in special position, giving possibly dependent conditions on cubics. Nevertheless, the restriction of the limit scheme to one of the $\binom{n+k}{2}$ corresponding lines $l_1,\dots,l_{\binom{n+k}{2}}$ has degree strictly greater than 3. In particular the limit scheme contains $Z_n\left( l_1,\dots,l_{\binom{n+k}{2}}\right) $. So if we prove that they have the same degree, then we get an explicit description of the limit scheme.
\end{rmk}
If we want to identify the limit of a bunch of colliding double points with some $Z_n(l_1,\dots,l_t)$, our next task is to study such schemes. First we compute the multiplicity.
\begin{lem}\label{lem:molteplicità _unione}
Let $R=\mathbb{A}^{n-1}$ be a general hyperplane in $\mathbb{A}^n$, and $p_i:=l_i\cap R$. Define $B:=\{p_1,\dots,p_t\}$ and set
\[k:=\min\{m\in\mathbb{N}\mid H^0\mathcal{I}_{B,R}(m)\neq 0\}.\]
Then $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)=\min(4,k)$.
\begin{proof}
First note that $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)$ is nondecreasing with respect to $t$. Moreover, $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)\le 4$ by construction. Indeed, once multiplicity 4 is reached, the restriction to any line has degree at least 4, so by adding another $S^4_i$ we do not change anything. Now let $D\subset R$ be a degree $m$ divisor containing $p_1,\dots,p_t$. The cone $C$ over $D$ with vertex the origin is a degree $m$ divisor in $\mathbb{A}^n$ containing $l_1,\dots,l_t$ and therefore $C\supset S^4_1\cup\ldots\cup S^4_t$. Hence the ideal of $Z_n(l_1,\dots,l_t)$ contains a generator of degree $m$ and so $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)\le m$. This implies $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)\le\min(4,k)$.
On the other hand, if $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)=4\ge \min(4,k)$, then there is nothing else to prove. Suppose that $m:=\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)\in\{1,2,3\}$. Then $Z_n(l_1,\dots,l_t)$ is contained in a degree $m$ divisor $C\subset\mathbb{A}^n$. Since $C$ has an $m$-tuple point, it is a cone. Moreover the restriction of $Z_n(l_1,\dots,l_t)$ to each $l_i$ has degree $4>m$, so $C$ contains each $l_i$, and in particular $C_{|R}$ is a degree $m$ divisor in $R$ containing $p_1,\dots,p_t$. Hence $k\le m$, and so $\mathop{\rm mult}\nolimits Z_n(l_1,\dots,l_t)\ge\min(4,k)$.
\end{proof}
\end{lem}
\begin{cor}\label{cor:molteplicità _unione}
Let $t\in\mathbb{N}$ and let $R=\mathbb{A}^{n-1}$ be a general hyperplane in $\mathbb{A}^n$. Set
\[k:=\min\{m\in\mathbb{N}\mid \mathop{\rm h}\nolimits^0\mathcal{O}_R(m)>t\}.\]
If $l_1,\dots,l_t$ are general lines, then $\mathop{\rm mult}\nolimits Z_{n,t}=\min(4,k)$.
\begin{proof}
Apply Lemma \ref{lem:molteplicità _unione} in the case $p_1,\dots,p_t\in R$ are general.
\end{proof}
\end{cor}
Now we want to determine the length of $Z_n(l_1,\dots,l_t)$. The next Lemma provides a way to compute it inductively.
\begin{lem}\label{lem:grado_unione} Let $n\ge 2$. Then
\begin{enumerate}
\item $\deg Z_n(l_1)=4$,
\item $\deg Z_n( l_1,\dots,l_t,l_{t+1}) =\deg Z_n(l_1,\dots,l_t)+4-\deg ( Z_n(l_1,\dots,l_t)_{|l_{t+1}}) $,
\item $\deg Z_{n,t+1}=\deg Z_{n,t}+4-\mathop{\rm mult}\nolimits Z_{n,t}$.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item The degree of $Z_n(l_1)=S^4_1$ does not depend on its embedding. Regarding $S^4_1$ as a divisor in $l_1=\p^1$, it has degree 4 by construction.
\item Let $\mu=\deg ( Z_n(l_1,\dots,l_t)_{|l_{t+1}})$. Of course $Z_n(l_1,\dots,l_t)\supset S_{t+1}^\mu$, so
\[Z_n(l_1,\ldots,l_t)=S_1^4\cup\ldots\cup S_t^4=S_1^4\cup\ldots\cup S_t^4\cup S_{t+1}^\mu.\]
Hence the difference $\deg Z_n( l_1,\dots,l_t,l_{t+1})-\deg Z_n(l_1,\dots,l_t)$ coincides with the difference $\deg S_{t+1}^4-\deg S_{t+1}^\mu=4-\mu$.
\item When $l_1,\dots,l_t,l_{t+1}$ are general, the restriction of $Z_{n,t}$ to $l_{t+1}$ has degree equal to $\mathop{\rm mult}\nolimits Z_{n,t}$, so it is enough to apply (2).\qedhere
\end{enumerate}
\end{proof}
\end{lem}
Corollary \ref{cor:molteplicità _unione} and Lemma \ref{lem:grado_unione} allow us to compute multiplicity and degree of the scheme $Z_{n,t}$ for every $n$ and $t$.
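As an illustration of how the two results combine, take $n=3$ and general lines: Corollary \ref{cor:molteplicità _unione} gives $\mathop{\rm mult}\nolimits Z_{3,t}=1$ for $t\le 2$, $2$ for $3\le t\le 5$, $3$ for $6\le t\le 9$ and $4$ for $t\ge 10$, and Lemma \ref{lem:grado_unione} then yields
\[\deg Z_{3,t}=4,\ 7,\ 10,\ 12,\ 14,\ 16,\ 17,\ 18,\ 19,\ 20 \qquad\mbox{for }t=1,\dots,10,\]
after which the degree stabilizes at $20$, the degree of the full $4$-tuple point of $\mathbb{A}^3$.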
Now we consider what happens when the lines are not general. Namely, we are interested in the configuration described in Remark \ref{rmk:nongenerici}.
\begin{dfn}
Let $\{l_{ij}\mid 1\le i<j\le m\}$ be a set of $\binom{m}{2}$ lines in $\mathbb{A}^n$ meeting at the origin, such that $l_{ab}$, $l_{bc}$ and $l_{ac}$ lie on the same plane for every $1\le a<b<c\le m$. Define $$\tilde{Z}_{n,\binom{m}{2}}:=Z_n(l_{ij}\mid 1\le i<j\le m).$$
\end{dfn}
\begin{rmk}\label{rmk:trivialtilda} Let $n,m\ge 2$. We start with the following simple observations.
\begin{enumerate}
\item $Z_{2,\binom{m}{2}}=\tilde{Z}_{2,\binom{m}{2}}$.
\item $\tilde{Z}_{n,1}=Z_{2,1}$ and $\tilde{Z}_{n,3}=Z_{2,3}$.
\item More generally, if $n\ge m$ then $\left\langle l_1,\dots,l_{\binom{m}{2}}\right\rangle\subseteq\mathbb{A}^{m-1}$. Thus $\mathop{\rm mult}\nolimits \tilde{Z}_{n,\binom{m}{2}}=1$ and $\tilde{Z}_{n,\binom{m}{2}}=\tilde{Z}_{m-1,\binom{m}{2}}$.
\end{enumerate}
\end{rmk}
We are now ready to compute multiplicity and degree of $\tilde{Z}_{n,\binom{m}{2}}$. By Remark \ref{rmk:trivialtilda}, we know multiplicity and degree of $\tilde{Z}_{2,\binom{m}{2}}$ from Lemma \ref{lem:grado_unione}. Now we tackle the cases $n=3$ and $n=4$.
\begin{es}
The next table shows the values of $\deg\tilde{Z}_{3,\binom{m}{2}}$ and $\mathop{\rm mult}\nolimits\tilde{Z}_{3,\binom{m}{2}}$.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$m$ & $t$ & $\deg \tilde{Z}_{3,t}$ & $\mathop{\rm mult}\nolimits\tilde{Z}_{3,t}$ \bigstrut\\
\hline
2 & 1 & 4 & 1\\
3 & 3 & 9 & 1\\
4 & 6 & 16 & 3\\
$m\ge 5$ & $t\ge 10$ & 20& 4 \\
\hline
\end{tabular}
\end{center}
Degrees and multiplicities of $\tilde{Z}_{4,\binom{m}{2}}$ are presented in the following one.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$m$ & $t$ & $\deg \tilde{Z}_{4,t}$ & $\mathop{\rm mult}\nolimits\tilde{Z}_{4,t}$ \bigstrut\\
\hline
2 & 1 & 4 & 1\\
3 & 3 & 9 & 1\\
4 & 6 & 16 & 1\\
5 & 10 & 25 & 3\\
6 & 15 & 30 & 3\\
7 & 21 & 34 & 3\\
$m\ge 8$ & $t\ge 28$ & 35 & 4\\
\hline
\end{tabular}
\end{center}
In order to compute the multiplicities, it is enough to apply Remark \ref{rmk:trivialtilda} and Lemma \ref{lem:molteplicità _unione}, together with Lemma \ref{lem:indip}. After that, Lemma \ref{lem:grado_unione} allows us to compute the degree. We only have to pay attention to the case $(n,m)=(4,7)$. Indeed,
this is an exception of Theorem \ref{thm:AH}, and we already considered it in Example \ref{es:7doppiinp4}.
\end{es}
If we look at $\tilde{Z}_{3,6}$ and $\tilde{Z}_{3,10}$, we see that their multiplicities and degrees are consistent with the cases of 4 and 5 collapsing double points in $\mathbb{A}^3$. In the same way, the numbers we found about $\tilde{Z}_{4,10}$ and $\tilde{Z}_{4,15}$ are consistent with the case of 5 and 6 colliding double points in $\mathbb{A}^4$.
We now try to find a general statement about the degree and the multiplicity of $\tilde{Z}_{n,\binom{m}{2}}$. The situation is easy when $m\le n$.
\begin{pro}\label{pro:emmebasso} If $3\le m\le n$, then $\mathop{\rm mult}\nolimits\tilde{Z}_{n,\binom{m}{2}}=1$ and $\deg\tilde{Z}_{n,\binom{m}{2}}=m^2.$
\begin{proof}
If $m\le n$, then $\mathop{\rm mult}\nolimits\tilde{Z}_{n,\binom{m}{2}}=1$ by Remark \ref{rmk:trivialtilda}.
We prove the statement about the degree by induction on $m$. We saw that $\deg\tilde{Z}_{n,3}=9$. Let us assume $\deg\tilde{Z}_{n,\binom{m}{2}}=m^2$ and let us compute $\deg\tilde{Z}_{n,\binom{m+1}{2}}$. $\tilde{Z}_{n,\binom{m+1}{2}}$ is obtained from $\tilde{Z}_{n,\binom{m}{2}}\subset\mathbb{A}^m=H$ by adding $S^4_{1,m+1},\dots,S^4_{m,m+1}$. Observe that $S^4_{1,m+1}\not\subset H$, so it increases the degree by 3; the resulting scheme is contained in some $W=\mathbb{A}^{m+1}$, and by adding the remaining $S^4_{2,m+1},\dots,S^4_{m,m+1}$ we remain inside $W$. As a subscheme of $W$, $\tilde{Z}_{n,\binom{m+1}{2}}$ has multiplicity 2, because there are only $\binom{m}{2}+m-1<\mathop{\rm h}\nolimits^0\mathcal{O}_{\mathbb{A}^{n-1}}(2)$ lines. Even though they are in special position, they are general for quadrics by Lemma \ref{lem:indip}, so each new addition of $S^4_{2,m+1},\dots,S^4_{m,m+1}$ increases the degree by $4-2=2$. Hence
\[\deg\tilde{Z}_{n,\binom{m+1}{2}}=\deg\tilde{Z}_{n,\binom{m}{2}}+3+2(m-1)=m^2+2m+1=(m+1)^2.
\qedhere\]
\end{proof}\end{pro}
Before we move to the more interesting case $m>n\ge 5$, we need some technical results. We already observed that Lemma \ref{lem:indip} does not hold in the case of more than $n+2$ points in $\p^n$, so our next goal is to understand what happens with larger numbers of points.
\begin{lem} \label{lem:quantecodizionidanno} For $k\in\mathbb{N}$, define
$$n_k:=\min\left\lbrace t\ge 2\mid \frac{\binom{t+3}{3}}{t+1}-t>k\right\rbrace.$$
For every $n\ge n_k$ and every $r\in\mathbb{N}$, let $A_r:=\{a_1,\dots,a_{r}\}\subset\p^{n}$ be a set of $r$ general points, and let $R\subset\p^n$ be a hyperplane such that $A_r\cap R=\varnothing$. Let $p_{ij}:=\langle a_i,a_j\rangle\cap R$ and
$$B_r:=\{p_{ij}\mid 1\le i<j\le r\}.$$
Assume that $B_{n_k+k}$ imposes $\binom{n_k+k}{2}-\binom{k-1}{2}$ independent conditions on cubics of $R$. Then $B_{n+k}$ imposes exactly $\binom{n+k}{2}-\binom{k-1}{2}$ independent conditions on cubics of $R$ for every $n\ge n_k$.
\begin{proof}
We prove the statement by induction on $n\ge n_k$. The first step of induction is granted by hypothesis, so we suppose that $n>n_k$. In order to lighten the notation, throughout this proof we will write $A$ and $B$ instead of $A_{n+k}$ and $B_{n+k}$.
Specialize $a_1,\dots,a_{n+k-1}$ on a hyperplane $W\subset\p^{n}$. Define $$B_1:=\{p_{ij}\mid 1\le i<j\le n+k-1\}\mbox{ and } B_2:=\{p_{1,n+k},\dots,p_{n+k-1,n+k}\}.$$ Let $H:=W\cap R=\p^{n-2}$. The Castelnuovo exact sequence reads
\[0\to\mathcal{I}_{B_2,R}(2)\to\mathcal{I}_{B,R}(3)\to\mathcal{I}_{B_1,H}(3)\to 0.\]
First observe that the points of $B_2$ are general on $R$, so $$\mathop{\rm h}\nolimits^0\mathcal{I}_{B_2,R}(2)=\binom{n+1}{2}-(n+k-1)\mbox{ and }\mathop{\rm h}\nolimits^1\mathcal{I}_{B_2,R}(2)=0.$$
Now we want to compute the dimension of the right hand side of the sequence.
Note that $A_1:=\{a_1,\dots,a_{n+k-1}\}$ is a set of general points in $W=\p^{n-1}$, $H$ is a hyperplane of $W$ with $A_1\cap H=\varnothing$ and $B_1=\{\langle a_i,a_j\rangle\cap H\mid 1\le i<j\le n+k-1\}$, so by induction hypothesis $$\mathop{\rm h}\nolimits^0\mathcal{I}_{B_1,H}(3)=\binom{n+1}{3}-\binom{n+k-1}{2}+\binom{k-1}{2}.$$
Therefore
\begin{align}
\mathop{\rm h}\nolimits^0\mathcal{I}_{B,R}(3)&=\mathop{\rm h}\nolimits^0\mathcal{I}_{B_2,R}(2)+\mathop{\rm h}\nolimits^0\mathcal{I}_{B_1,H}(3)\nonumber\\
&=\binom{n+1}{2}-(n+k-1)+\binom{n+1}{3}-\binom{n+k-1}{2}+\binom{k-1}{2}\nonumber\\
&=\binom{n+2}{3}-\binom{n+k}{2}+\binom{k-1}{2}.\nonumber
\end{align}
Since the points of $B$ impose $\binom{n+k}{2}-\binom{k-1}{2}$ conditions in this specialized configuration, they impose at least $\binom{n+k}{2}-\binom{k-1}{2}$ conditions in the original configuration. We already noticed they cannot impose more than $\binom{n+k}{2}-\binom{k-1}{2}$ conditions.
\end{proof}
\end{lem}
Lemma \ref{lem:quantecodizionidanno} provides an inductive way to prove that $B$ imposes the suitable number of conditions on cubics of $R$. However, in order to apply it we need the first step of induction for every $k$. While we are not able to prove this first step in general, we believe this is the right way to compute the number of independent conditions imposed by $B$.
\begin{con}\label{con:n+k}
Assume $0\le k<\frac{\binom{n+3}{3}}{n+1}-n$. Let $A:=\{a_1,\dots,a_{n+k}\}$ be a set
of $n+k$ general points in $\p^n$ and $R$ a hyperplane such that $A\cap R=\varnothing$. Let
$$B:=\{\langle a_i,a_j\rangle\cap R\mid 1\le i<j\le n+k\}.$$
Then the points of $B$ impose exactly $\binom{n+k}{2}-\binom{k-1}{2}$ independent conditions to cubics of $R$.
\end{con}
By applying Lemma \ref{lem:quantecodizionidanno}, it is easy to prove that Conjecture \ref{con:n+k} holds for $k\in\{0,1,2\}$, and in this way we recover some of the results of Lemma \ref{lem:indip}. Moreover, a software computation allows us to prove the first step for $k\le 4$ as well.
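For concreteness, the defining formula gives
\[\frac{\binom{t+3}{3}}{t+1}-t=\frac{(t+3)(t+2)}{6}-t,\]
so that $n_1=2$, $n_2=4$ and $n_3=n_4=5$; the base cases needed for $k\le 4$ therefore involve at most $9$ points of $\p^5$.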
\begin{rmk}
Assume Conjecture \ref{con:n+k} is true. Then we have a way to compute degree and multiplicity of $\tilde{Z}_{n,\binom{m}{2}}$. Indeed, assume that
\begin{equation}\label{eq:range k}
1\le k<\frac{\binom{n+3}{3}}{n+1}-n\mbox{ and } (n,k)\neq (4,3).
\end{equation}
On one hand, $\mathop{\rm mult}\nolimits\tilde{Z}_{n,\binom{n+k}{2}}\ge 3$ by Lemma \ref{lem:molteplicità _unione}. On the other hand, we observed in Remark \ref{rmk:graziekris} that $\tilde{Z}_{n,\binom{n+k}{2}}$ is a subscheme of the limit of $n+k$ collapsing double points, which has multiplicity 3 because $k$ is in the range (\ref{eq:range k}). Hence $\mathop{\rm mult}\nolimits\tilde{Z}_{n,\binom{n+k}{2}}=3$. Then, by Lemma \ref{lem:degree of a fat point with infinitely near simple points}, its degree is
\[\deg\tilde{Z}_{n,\binom{n+k}{2}}=\binom{n+2}{n}+\binom{n+k}{2}-\binom{k-1}{2}=(n+1)(n+k),\]
where the last equality can be proven by induction on $k$.
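Alternatively, the identity follows from a direct expansion:
\[\binom{n+2}{2}+\binom{n+k}{2}-\binom{k-1}{2}=\frac{(n^2+3n+2)+(n+k)(n+k-1)-(k-1)(k-2)}{2}=(n+1)(n+k).\]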
Therefore, under this assumption, the limit of $n+k$ collapsing double points in $\mathbb{A}^n$ is $\tilde{Z}_{n,\binom{n+k}{2}}$. Since we know that Conjecture \ref{con:n+k} holds for small values of $k$, this improves Propositions \ref{collision1} and \ref{collision2}.
However, this approach only works in the range (\ref{eq:range k}). When $k\le 0$, the limit scheme has multiplicity 2. As we already pointed out, the linear system $ L_{n,2}(2^{n+k})$ has nonreduced base locus, and this makes it difficult to understand the first order neighbourhood of the limit point. On the other hand, when $k+n\ge\frac{\binom{n+3}{3}}{n+1}$, the limit scheme has multiplicity at least 4 and the base locus may not give us information. It is enough to consider $(n,k)=(3,3)$ to bump into the linear system $ L_{3,4}(2^6)$, which has no base locus outside the imposed singularities. Our work on infinitely near points gives us no clue in these cases.
\end{rmk}
One could argue in a similar way with higher multiplicities and hope to find other cases in which there are base lines. For instance, we could work with triple points, and we know that the lines joining pairs of triple points are in the base locus of quintics.
\begin{es}\label{es:sporadici1}
Consider 5 collapsing 5-ple points in $\mathbb{A}^3$. Since $ L_{3,8}(5^5)$ is empty, the limit has multiplicity 9 by Proposition \ref{pro:molteplicità_limite}. The base locus of $ L_{3,9}(5^5)$ consists of 10 lines which cut 10 simple points on the exceptional divisor $R$. They are in the special position described by Remark \ref{rmk:nongenerici}, but still they impose independent conditions on 9-ics of $R$ by Lemma \ref{lem:indip}. The limit scheme has degree 175 and contains a 9-tuple point with 10 infinitely near simple points. Since the latter has degree 175 by Lemma \ref{lem:degree of a fat point with infinitely near simple points}, they coincide by Lemma \ref{stessogrado}.
\end{es}
Unfortunately, this strategy works only if we know the degree of the linear system we are dealing with. By Proposition \ref{pro:molteplicità_limite}, this is equivalent to computing the smallest degree of a divisor in $\p^n$ containing a bunch of general multiple points. The answer is unknown in general.
Moreover, Corollary \ref{cor:condizioni imposte dai punti infinitamente vicini} does not work when the infinitely near points have multiplicity greater than 1.
It is also worth mentioning that, given a scheme $X\subset\mathbb{A}^n$ consisting of a triple point with $t$ tangent directions, in general we cannot produce $X$ as a limit of double points. Indeed, first we need that $t=\binom{n+k}{2}$ for some $k$ in the range (\ref{eq:range k}). Moreover, the tangent directions have to be in the special position described in Remark \ref{rmk:nongenerici}. It is legitimate to wonder if there are more conditions to be met in order to express $X$ as a limit of double points. In other words, can we lift $X$ to a bunch of double points in such a way that $X$ is the limit of those colliding points, under the previous assumptions?
We will now give a positive answer to this question.
Remark \ref{rmk:nongenerici} describes the configurations of the points in the exceptional divisor and suggests the following definition.
\begin{dfn} Let $n\ge 2$ and $t\ge 3$. Define
\[W_{n,t}=\left\lbrace (x_{ij})_{1\le i<j\le t}\in (\p^n)^{\binom{t}{2}}\mid x_{bc}\in\langle x_{ab}, x_{ac}\rangle\ \forall\ 1\le a<b<c\le t\right\rbrace .\]
If we look at $R=\p^n$ as a general hyperplane in $\p^{n+1}$, there is a rational map
\[\pi_{n,t}:\left( \p^{n+1}\right) ^t\dashrightarrow W_{n,t}\subset (\p^n)^{\binom{t}{2}}\]
defined by sending $(p_1,\dots,p_t)$ to $(x_{ij})_{1\le i<j\le t}$, where $x_{ij}$ is the intersection of the line $\langle p_i,p_j\rangle$ with $R$.
\end{dfn}
For $1\le k\le 4$, we know that the limit of $n+k$ double points in $\p^n$ is a triple point with $\binom{n+k}{2}$ infinitely near simple points. The simple points form a $\binom{n+k}{2}$-tuple $(x_{ij})_{1\le i<j\le n+k}\in W_{n,n+k}$. We want to understand whether such schemes can be obtained as limits of double points. This is equivalent to ask if $\pi_{n,n+k}$ is dominant. We will prove the following result.
\begin{thm}\label{thm:lift}
$\pi_{n,t}$ is dominant for every $n\ge 2$ and every $t\ge 3$. The general fiber has dimension $n+2$.
\end{thm}
Let us start with some simple observations.
\begin{obs}\label{obs:configuraz}
\begin{enumerate}
\item We have $\dim W_{n,t}=n(t-1)+t-2$. Indeed, one can choose freely $t-1$ general points $x_{12},\dots,x_{1t}\in \p^n$. Then, for $i\in\{3,\dots,t\}$, it is possible to choose the $t-2$ points $x_{2i}$ general on $\langle x_{12},x_{1i}\rangle$. After that, for $3\le j<k\le t$, the other points $x_{jk}$ are defined by $\langle x_{1j},x_{1k}\rangle\cap\langle x_{2j},x_{2k}\rangle$.
\item Assume that $t\ge 4$ and let $(x_{ij})_{1\le i<j\le t}\in W_{n,t}$. For $1\le a<b<c\le t$, let $l_{abc}$ be the line containing $x_{ab},x_{ac},x_{bc}$. Note that $l_{abc}$ and $l_{bcd}$ meet at $x_{bc}$, so they span a plane containing $l_{acd}$ and $l_{abd}$ as well. This plane therefore passes through the 6 points $\{x_{ij}\mid i,j\in\{a,b,c,d\},i<j\}$. By the same argument, if $t\ge m$ then for every choice of $m$ indexes $1\le i_1<\ldots<i_m\le t$ the $\binom{m}{2}$ points $\{x_{ij}\mid i,j\in\{i_1,\dots,i_m\}\}$ lie on the same $\p^{m-2}$.
\item In particular, if $t\le n+1$ then $p_1,\dots,p_t\in\p^{n+1}$ lie on a linear subspace $L=\p^{t-1}$. Hence the $\binom{t}{2}$ points $\langle p_i,p_j\rangle\cap R$ all lie on $L\cap R=\p^{t-2}$.
Then $W_{n,t}=W_{t-2,t}$, and $\pi_{n,t}$ restricts to $\pi_{t-2,t}:L^t=(\p^{t-1})^t\dashrightarrow W_{t-2,t}$. For this reason, from now on we will assume $t\ge n+2$.
\end{enumerate}
\end{obs}
The next Lemma is the first step towards the proof of Theorem \ref{thm:lift}.
\begin{lem}\label{lem:sollevamento1}
$\pi_{n,n+2}:(\p^{n+1})^{n+2}\dashrightarrow W_{n,n+2}$ is dominant for every $n\ge 2$. The general fiber has dimension $n+2$.
\begin{proof}
Let $x=(x_{ij})_{1\le i<j\le n+2}\in W_{n,n+2}$ be general.
For $i\in\{1,\dots,n+2\}$, let $L_i=\Span{x_{jk}\mid j,k\neq i}$ be the dimension $n-1$ linear subspace of $R=\p^n$ obtained by choosing all indexes except $i$.
Let $\Pi_i\subset\p^{n+1}$ be a general hyperplane containing $L_i$. For $j\in\{1,\dots,n+2\}$, define the point
\[p_j:=\bigcap_{i\neq j}\Pi_i.\]
If $k,h\in\{1,\dots,n+2\}$ and $h\neq k$, then $p_{k}$ and $p_{h}$ are distinct points of the line $\bigcap_{i\neq k,h}\Pi_i$, so \[\langle p_h,p_k\rangle\cap R=\bigcap_{i\neq k,h}\Pi_i\cap R=\bigcap_{i\neq k,h}L_i,\]
which is one of the $x_{ij}$'s. Then, up to reordering, $(p_1,\dots,p_{n+2})$ is a preimage of $(x_{ij})_{1\le i<j\le n+2}$.
To determine the dimension of the general fiber, we can either note that for each of the $n+2$ points $p_i$ we chose a hyperplane $\Pi_i$ in the pencil of those containing $L_i$, or we can compute the difference $\dim (\p^{n+1})^{n+2}-\dim W_{n,n+2}$.
\end{proof}
\end{lem}
One could give the definition of $W_{1,t}$ and $\pi_{1,t}$ as well. However, we are computing limits under the assumption that $n\ge 2$. Moreover $W_{1,t}=(\p^1)^{\binom{t}{2}}$, so the case $n=1$ is not very interesting for us.
We are now ready to prove the result we claimed.
\begin{proof}[Proof of Theorem \ref{thm:lift}]
As we noticed in Observation \ref{obs:configuraz}, we may assume that $t\ge n+2$. We argue by induction on $t$.
The case $t=n+2$ is the content of Lemma \ref{lem:sollevamento1}, so we focus on the case $t>n+2$.
Let $(x_{ij})_{1\le i<j\le t}\in W_{n,t}$ be general. By the induction hypothesis there exist $t-1$ general points $p_1,\dots,p_{t-1}\in\p^{n+1}$ such that $\langle p_i,p_j\rangle\cap R=x_{ij}$. Define
$$p_t:=\langle p_1,x_{1t}\rangle\cap\langle p_2,x_{2t}\rangle.$$ We have to make sure that $\langle p_i,p_t\rangle$ meets $R$ at $x_{it}$ for every $i\in\{3,\dots,t-1\}$. Observe that
\[\langle x_{1i},x_{1t}\rangle=\langle p_1,x_{1i},x_{1t}\rangle \cap R=\langle p_1,p_i,p_t\rangle\cap R,\]
because $p_t\in\langle p_1,x_{1t}\rangle$ by construction. Hence
\begin{align}
\langle p_i,p_t\rangle \cap R&=\left( \langle p_1,p_i,p_t\rangle\cap \langle p_2,p_i,p_t\rangle\right) \cap R\nonumber\\
&=\left( \langle p_1,p_i,p_t\rangle\cap R\right) \cap\left( \langle p_2,p_i,p_t\rangle\cap R\right)\nonumber\\
&=\langle x_{1i},x_{1t}\rangle\cap \langle x_{2i},x_{2t}\rangle=x_{it}.\nonumber
\end{align}
The general fiber has dimension $\dim\left( \p^{n+1}\right) ^t-\dim W_{n,t}=n+2$.
\end{proof}
In terms of collisions, this means that if $t\in\{n+1,\dots,n+4\}$, then every subscheme of $\p^n$ consisting of a triple point with $\binom{t}{2}$ infinitely near simple points $x_{ij}$, such that $(x_{ij})_{1\le i<j\le t}$ is a general point of $W_{n,t}$, can be obtained as a limit of $t$ collapsing double points in $\p^{n+1}$. If Conjecture \ref{con:n+k} is true, the same holds for the collision of $t$ double points, where
\[n+1\le t<\frac{\binom{n+3}{3}}{n+1}.\]
\section{Higher multiplicities}\label{sec:othercollisions}\label{sec:lowdimension}
Degenerations are widely used in interpolation theory to compute the dimension of linear systems. The most studied cases are dimension 2 and 3, where there are conjectures about the reasons why a linear system is special. For $n\in\{2,3\}$, all known special linear systems $L_{n,d}(m_1,\dots,m_r)$ have a base locus containing a particular variety. Roughly speaking, what those conjectures state is that the only geometric reason for a linear system to be special is the existence of such a \textit{special effect variety} in its base locus. The precise definition of special effect varieties can be found in
\cite{Boc1}. Some examples of special effect varieties are known (see \cite{BDP1} and \cite{BDP2}) and the hard problem is to classify all of them.
We will not look into special effect varieties, but we will apply some of the known results in interpolation theory to describe limits of colliding multiple points.
As a first example, we can easily extend Proposition \ref{pro:doppisumultiplo} to higher multiplicity.
\begin{pro}\label{pro:n-pli_su_m-plo}\label{omogenei_su_multiplo_P^3} Let $l,m\in\mathbb{N}$, let $n\in\{2,3\}$ and define
$$h:=\frac{\binom{m+n-1}{n}}{\binom{l+n-1}{n}}.$$
Assume that $h\in\mathbb{N}$.
\begin{enumerate}
\item Consider $n=2$. If $l,m\le 42$ and $h\ge 10$, then the limit of $h$ collapsing $l$-tuple points in $\mathbb{A}^2$ is an $m$-tuple point.
\item Consider $n=3$. If $l\le 5$ and $m\ge 2l+1$, then the limit of $h$ collapsing $l$-tuple points in $\mathbb{A}^3$ is an $m$-tuple point.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item Observe that $ L_{2,m-1}(l^h)$ is a system of plane curves with $h\ge 10$ fixed points of the same multiplicity, hence \cite[Conjecture 5.10]{Cil} predicts that it is non-special. For $l,m\le 42$ the conjecture is proven true in \cite[Theorem 32]{Du}. Now we conclude by Lemma \ref{lem:grassisugrasso}.
\item By Lemma \ref{lem:grassisugrasso}, it is enough to prove that $ L_{3,m-1}(l^h)$ is non-special. If $m\ge 12$, we conclude by \cite[Theorem 1]{BBCS}. Let us check the remaining cases.
If $l=5$, we only have to check $m=11$, but in this case $h\notin\mathbb{N}$. If $l=4$, we have to consider $m\in\{9,10,11\}$. Only $m=10$ gives an integer $h$, and $L_{3,9}(4^{11})$ is non-special by \cite[Theorem 14]{BB}. Finally, if $l=3$ we have to check $7\le m\le 11$. If $m\in\{7,9,11\}$ then $h\notin\mathbb{N}$. A software computation shows that $L_{3,7}(3^{12})$ and $L_{3,9}(3^{22})$ are non-special.\qedhere
\end{enumerate} \end{proof}
\end{pro}
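The non-specialty checks delegated to software in the previous proof can be carried out by a rank computation. The sketch below (again in Python with SymPy, and not the authors' original computation) verifies that $12$ random triple points impose independent conditions on septics of $\p^3$, so that $ L_{3,7}(3^{12})$ is empty and hence non-special; full rank modulo a prime certifies full rank over $\mathbb{Q}$, and independence for a random configuration implies independence for a general one. Changing the parameters to degree $9$ and $22$ points gives the corresponding, slower, check for $ L_{3,9}(3^{22})$.
\begin{verbatim}
from itertools import combinations_with_replacement as cwr
from random import randint
from sympy import Mul, symbols, diff

p, d, npts = 10007, 7, 12     # checks L_{3,7}(3^12); use 9, 22 for L_{3,9}(3^22)
x = symbols('x0:4')
monos = [Mul(*c) for c in cwr(x, d)]                  # degree-d monomials of P^3
pts = [[randint(0, p - 1) for _ in range(3)] + [1] for _ in range(npts)]

# one row per point and per second-order partial derivative (10 rows per point)
derivs = [[diff(m, x[i], x[j]) for m in monos] for i, j in cwr(range(4), 2)]
rows = [[int(e.subs(dict(zip(x, q)))) % p for e in row]
        for q in pts for row in derivs]

def rank_mod_p(mat, p):                               # Gaussian elimination over F_p
    mat, rank = [r[:] for r in mat], 0
    for col in range(len(mat[0])):
        piv = next((r for r in range(rank, len(mat)) if mat[r][col]), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][col], -1, p)
        mat[rank] = [v * inv % p for v in mat[rank]]
        for r in range(len(mat)):
            if r != rank and mat[r][col]:
                f = mat[r][col]
                mat[r] = [(a - f * b) % p for a, b in zip(mat[r], mat[rank])]
        rank += 1
    return rank

print(len(rows), len(monos), rank_mod_p(rows, p))     # expected output: 120 120 120
\end{verbatim}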
Observe that in point (2) the assumption $m\ge 2l+1$ is necessary. For instance, consider $l=4$ and $m=8$. Then $h=6$ and we deal with $ L_{3,7}(4^6)$. We can study it by applying a standard Cremona transformation in $\p^3$. By \cite[Proposition 2.1]{LU},
$$ L_{3,7}(4^6)\cong L_{3,5}(4^2,2^4)\cong L_{3,3}(2^4)$$
is not empty, so $\mathop{\rm mult}\nolimits Z_0=7<m$.
Up to now, we mostly considered collisions of points with the same multiplicities, but of course there are many other cases in which we can determine the limit $Z_0$.
The next two Propositions will deal with the case of a fat point colliding together with a bunch of low multiplicity points.
\begin{pro}\label{pro:grossoesemplici}
Let $m,n\ge 2$ and let $h:=\binom{n+m-1}{n-1}$. Then the limit of $h$ simple points and a point of multiplicity $m$ colliding in $\mathbb{A}^n$ is an $(m+1)$-tuple point.
\begin{proof}
Since $m\ge 2$, $ L_{n,m}(m,1^h)=0$. On the other hand, $ L_{n,m+1}(m,1^h)\neq 0$, so $\mathop{\rm mult}\nolimits Z_0=m+1$ by Proposition \ref{pro:molteplicità_limite}. To conclude, observe that an $(m+1)$-tuple point has degree $\binom{n+m}{m}=\deg Z_1$, so $Z_0$ is an $(m+1)$-tuple point by Lemma \ref{stessogrado}.
\end{proof}
\end{pro}
\begin{pro}
Let $m,n\ge 3$ and $(m,n)\notin\{(4,3),(3,5)\}$. Suppose that $h:=\frac{\binom{m+n-1}{n-1}}{n}\in\mathbb{N}$. Then the limit of $h$ double points and a point of multiplicity $m$ colliding in $\mathbb{A}^n$ is a $(m+1)$-tuple point with $h$ infinitely near general simple points.
\begin{proof}
By hypothesis \begin{align}
\mathop{\rm vdim}\nolimits L_{n,m+1}(m,2^h) &=\binom{n+m+1}{n}-\binom{n+m-1}{n}-h(n+1)\nonumber\\
&=\binom{n+m+1}{n}-\binom{n+m-1}{n}-\frac{\binom{m+n-1}{n-1}}{n}(n+1)\nonumber\\
&=\binom{n+m+1}{n}-\binom{n+m-1}{n}-\binom{m+n-1}{n-1}-\frac{\binom{m+n-1}{n-1}}{n}\nonumber\\
&=\frac{(n+m+1)!}{n!(m+1)!}-\frac{(n+m-1)!}{n!(m-1)!}-\frac{(m+n-1)!}{(n-1)!m!}-\frac{(m+n-1)!}{m!n!}\nonumber\\
&=\frac{(n+m-1)!}{(n-1)!(m-1)!}\left[\frac{(n+m+1)(n+m)}{(m+1)mn}-\frac{1}{n}-\frac{1}{m}-\frac{1}{mn}\right]\nonumber\\
&=\frac{(n+m-1)!}{(n-1)!(m-1)!}\left[\frac{n^2 + mn - m - 1}{(m^2 + m)n}\right]>0,\nonumber
\end{align}
hence $ L_{n,m+1}(m,2^h)$ is not empty. On the other hand,
\[\binom{n+m-1}{n-1}-hn=\binom{n+m-1}{n-1}-\binom{m+n-1}{n-1}=0,\]
so $ L_{n,m}(m,2^h)\cong L_{n-1,m}(2^h)$ is expected to be empty. The latter is non-special by Theorem \ref{thm:AH}, so $\mathop{\rm mult}\nolimits Z_0=m+1$ by Proposition \ref{pro:molteplicità_limite}. The $h$ general lines joining the $m$-tuple point and one of the double points are contained in the base locus of $ L_{n,m+1}(m,2^h)$, and they cut $h$ simple points on $R$. The candidate limit scheme is an $(m+1)$-tuple point with $h$ infinitely near general simple points, which by Lemma \ref{lem:degree of a fat point with infinitely near simple points} has length $\binom{n+m}{n}+h=\deg Z_1$, so we conclude by Lemma \ref{stessogrado}.
\end{proof}
\end{pro}
We now focus on $n=3$. Recall that 8 is the maximum $r$ such that we know the full classification of special linear systems $ L_{3,d}(m_1,\dots,m_r)$, see \cite[Theorem 5.3]{DL}. The following Proposition will be useful to get some results beyond this bound.
\begin{pro}\label{pro:limitediottograssi}
The limit of the collision of $8$ $m$-tuple points and $m+1$ simple points in $\mathbb{A}^3$ is a point of multiplicity $2m+1$.
\begin{proof}
First we check that
\[8\binom{m+2}{3}+m+1=\binom{2m+3}{3}.\]
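Indeed, expanding both sides,
\[8\binom{m+2}{3}+m+1=\frac{(m+1)\bigl(8m(m+2)+6\bigr)}{6}=\frac{2(m+1)(2m+1)(2m+3)}{6}=\binom{2m+3}{3}.\]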
By Lemma \ref{lem:grassisugrasso}, it is enough to prove that $ L_{3,2m}(m^8,1^{m+1})$ is non-special. Since general simple points always give independent conditions, it suffices to show that $ L_{3,2m}(m^8)$ is non-special. The latter is true by \cite[Theorem 5.3]{DL}.
\end{proof}
\end{pro}
In this Section, our arguments to describe limits of collisions of fat points rely on known results on non-special linear systems of $\p^2$ and $\p^3$. With a similar approach, other results can be applied in the same way to prove more statements about collisions. Examples on $\p^3$ include \cite[Theorem 5.8]{BDP2} and the aforementioned \cite[Theorem 5.3]{DL}. On $\p^2$ there is \cite[Theorem 34]{DJ}, as well as the results contained in \cite{Roé} and in the survey \cite{Cil}.
While most of our knowledge of special linear systems is concentrated in low dimensional varieties, there is also something we can say about any $\p^n$. As an example, there are the results contained in \cite{BDP1}.
\begin{pro}\label{pro:collisionetripliinp4}
The limit of the collision of $6$ triple points and $36$ simple points in $\mathbb{A}^4$ is a point of multiplicity $6$.
\begin{proof} The proof works as in Proposition \ref{pro:limitediottograssi}. We check that
\[6\binom{6}{4}+36=\binom{9}{4}.\]
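Indeed, $6\binom{6}{4}+36=6\cdot 15+36=126=\binom{9}{4}$.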
By Lemma \ref{lem:grassisugrasso}, it is enough to prove that $ L_{4,5}(3^6,1^{36})$ is non-special. Again, general simple points always give independent conditions, so we only have to show that $ L_{4,5}(3^6)$ is non-special. The latter is true by \cite[Corollary 4.8]{BDP1}.
\end{proof}
\end{pro}
The next Proposition has a slightly different flavour. It states that, up to adding a bunch of simple points, we can always turn two fat points into a single fat point.
\begin{pro}\label{pro:duesumultiplo}
Let $m_1,m_2\in\mathbb{N}$. Then there exist $h,m\in\mathbb{N}$, depending on $n,m_1,m_2$, such that the limit of two points of multiplicity $m_1$ and $m_2$ and $h$ simple points colliding in $\mathbb{A}^n$ is an $m$-tuple point.
\begin{proof}
Define $m:=m(n,m_1,m_2):=m_1+m_2+1$ and
\begin{equation*}\label{eq:numeropuntisemplici}
h:=h(n,m_1,m_2):=\binom{m+n-1}{n}-\binom{m_1+n-1}{n}-\binom{m_2+n-1}{n}.
\end{equation*}
By construction $\mathop{\rm vdim}\nolimits L_{n,m}(m_1,m_2,1^h)> 0$, hence $ L_{n,m}(m_1,m_2,1^h)$ is not empty. Since an $m$-tuple point has degree equal to the length of the starting scheme, it is enough to show that $ L_{n,m-1}(m_1,m_2,1^h)$ is non-special, and therefore empty. Since the $h$ simple points always give independent conditions, it suffices to prove that $ L_{n,m-1}(m_1,m_2)$ is non-special. By \cite[Corollary 4.8]{BDP1}, such a system is linearly non-special, so in order to conclude we just need to observe that there are no base linear cycles.
\end{proof}
\end{pro}
\section{Applications to interpolation theory}\label{section:interpolation}
A first important application of collisions to interpolation theory is \cite{GM}, where a collision of double points allows the authors to solve a long-standing problem about Waring decompositions of polynomials.
Another application is \cite{EvainSHGH}, where a suitable collision of fat points in $\p^2$ is used to prove that Segre's conjecture (see \cite[Conjecture 4.1]{Cil}) holds for an infinite family of linear systems of plane curves.
This approach can be pushed further. In Section \ref{sec:lowdimension} we used known results in interpolation theory to provide clues about what the limit is. It is just fair to try to return the favour, using the limits we constructed as tools to specialize linear systems and prove that they are non-special.
We begin with our contribution to the Laface-Ugaglia conjecture (see \cite[Conjecture 4.1]{LU} and \cite[Conjecture 5.1]{BDP2}). When all multiplicities are the same, the conjecture predicts that $ L_{3,d}(m^r)$ is non-special whenever
\[(d+1)^2>9\binom{m+1}{2}\mbox{ and }d\ge 2m-1.\]
The most restrictive of these two conditions is the first one, which reads
\[d>-1+3\sqrt{\frac{m^2+m}{2}}=\frac{3}{\sqrt{2}}m+o(m).\]
As we stated in Theorem \ref{thm:LUcondgrande}, for systems with at most 15 base points we are able to prove that the conjecture holds under the stronger assumption $d\ge 3m$.
\begin{proof}[Proof of Theorem \ref{thm:LUcondgrande}]
Consider $d\ge 3m$. We only have to prove that $L_{3,d}(m^{15})$ is non-special. Since $\mathop{\rm vdim}\nolimits L_{3,d}(m^{15})>m+1$, it is enough to prove that $ L_{3,d}(m^{15},1^{m+1})$ is non-special. We apply Proposition \ref{pro:limitediottograssi} to degenerate $ L_{3,d}(m^{15},1^{m+1})$ to $ L_{3,d}(2m+1,m^7)$, and the latter is non-special by \cite[Theorem 5.3]{DL}.
\end{proof}
Proposition \ref{pro:limitediottograssi} allows us to confirm Laface-Ugaglia conjecture for another family of linear systems.
\begin{pro}
Let $m_1\ge\ldots\ge m_8$ be non-negative integers. Assume that
\begin{enumerate}
\item $\mathop{\rm vdim}\nolimits L_{3,d}(m_1^8,\dots,m_8^8)>8+m_1+\ldots+m_8$;
\item $d\ge 2(m_1+m_2)+1$.
\end{enumerate}
Then $ L_{3,d}(m_1^8,\dots,m_8^8)$ is non-special.
\begin{proof}
By assumption (1), it is enough to prove that $ L_{3,d}(m_1^8,\dots,m_8^8,1^{8+m_1+\ldots+m_8})$ is non-special. We apply Proposition \ref{pro:limitediottograssi} to degenerate $ L_{3,d}(m_1^8,\dots,m_8^8,1^{8+m_1+\ldots+m_8})$ to $ L_{3,d}(2m_1+1,\dots,2m_8+1)$. The latter is non-special by assumption (2) and \cite[Theorem 5.3]{DL}.
\end{proof}
\end{pro}
We now aim to provide further examples of non-special linear systems.
\begin{pro}\label{pro:interpolazionesuperfici}
Let $l,m\le 42$. Set
\[h:=\frac{\binom{m+1}{2}}{\binom{l+1}{2}}.\]
Assume that $h\in\mathbb{N}$ and $h\ge 10$.
\begin{enumerate}
\item Let $d\in\mathbb{N}$. Then $ L_{2,d}(l^h)$ is non-special and $ L_{2,d}(l^t)$ is non-special whenever $\mathop{\rm vdim}\nolimits L_{2,d}(l^t)\ge (h-t)\binom{l+1}{2}$.
\item Let $a,b\in\mathbb{N}$ such that $a,b\ge m$. Then $ L_{\p^1\times\p^1,(a,b)}(l^h)$ is non-special and $ L_{\p^1\times\p^1,(a,b)}(l^t)$ is non-special whenever $\mathop{\rm vdim}\nolimits L_{\p^1\times\p^1,(a,b)}(l^t)\ge (h-t)\binom{l+1}{2}$.
\end{enumerate}
\begin{proof} We apply Proposition \ref{pro:n-pli_su_m-plo} to specialize our linear system.
\begin{enumerate}
\item We degenerate $ L_{2,d}(l^h)$ to $ L_{2,d}(m)$, which is always non-special. Moreover $ L_{2,d}(l^h)$ is not empty, hence $ L_{2,d}( l^t) $ is non-special as well.
\item We degenerate $ L_{\p^1\times\p^1,(a,b)}(l^h)$ to $ L_{\p^1\times\p^1,(a,b)}(m)$. By \cite[Theorem 1.5]{CGG}, the latter is isomorphic to $ L_{2,a+b}(a,b,m)$. Since $a+m, b+m\le a+b$, the base locus does not contain double lines, so the system is non-special by \cite[Theorem 5.1]{Cil}. Moreover $ L_{\p^1\times\p^1,(a,b)}(l^h)$ is not empty, hence $ L_{\p^1\times\p^1,(a,b)}( l^t) $ is non-special as well.\qedhere
\end{enumerate}
\end{proof}
\end{pro}
\begin{pro}
Let $d,l,m\in\mathbb{N}$ such that $l\le 5$ and $m\ge 2l+1$. Set
\[h:=\frac{\binom{m+2}{3}}{\binom{l+2}{3}}\]
and assume that $h\in\mathbb{N}$.
Then $ L_{3,d}(l^h)$ is non-special and $ L_{3,d}(l^t)$ is non-special whenever $\mathop{\rm vdim}\nolimits L_{3,d}(l^t)\ge (h-t)\binom{l+2}{3}$. Furthermore, take $m_1,\dots,m_7\in\mathbb{N}$. Then $ L_{3,d}(m_1,\dots,m_7,l^h)$ is non-special under the assumption that $d>i+j$ for every $i,j\in\{m_1,\dots,m_7,m\}$.
\begin{proof}
We apply Proposition \ref{omogenei_su_multiplo_P^3} to degenerate the system. For the first two statements we argue as in Proposition \ref{pro:interpolazionesuperfici}. For the last part, we degenerate $ L_{3,d}(m_1,\dots,m_7,l^h)$ to $ L_{3,d}(m_1,\dots,m_7,m)$, which is non-special by \cite[Theorem 5.3]{DL}.
\end{proof}
\end{pro}
Let us point out that there are cases in which we can compute the limit with weaker assumptions, and therefore we can still apply this degeneration. For instance, on surfaces the hypothesis $h\ge 10$ can be relaxed.
\begin{es}
Pick $l=5$, $m=14$, and $h=7$. By a sequence of Cremona transformations, it is easy to check that $ L_{2,13}( 5^7) $ is empty and therefore non-special, so the limit of 7 collapsing $5$-tuple points in $\mathbb{A}^2$ is a $14$-tuple point by Lemma \ref{lem:grassisugrasso}.
\end{es}
With some extra effort, we can employ a sequence of collisions to show that other linear systems are non-special.
\begin{pro}
Let $m,n_1,\dots,n_s\in\mathbb{N}$. For $i\in\{1,\dots,s\}$, set $h_i:=\frac{m(m+1)}{n_i(n_i+1)}$. Assume $h_i\in\mathbb{N}$ and $h_i\ge 10$ for every $i\in\{1,\dots,s\}$. If $m,n_1,\dots,n_s\le 42$, then $$ L_{2,d}(m^k,n_1^{t_1h_1},\dots,n_s^{t_sh_s})$$ is non-special for every $k,t_1,\dots,t_s\in\mathbb{N}$.
\begin{proof}
By Proposition \ref{pro:n-pli_su_m-plo}, we can collapse $h_1$ of the $n_1$-tuple points into an $m$-tuple point, thereby degenerating $ L_{2,d}(m^k,n_1^{t_1h_1},\dots,n_s^{t_sh_s})$ to
$$ L_{2,d}(m^{k+1},n_1^{(t_1-1)h_1},\dots,n_s^{t_sh_s}).$$
By performing $t_1$ of these collisions, we obtain the system $$ L_{2,d}( m^{k+t_1},n_2^{t_2h_2},\dots,n_s^{t_sh_s}) .$$
Then we apply Proposition \ref{pro:n-pli_su_m-plo} again to collapse $h_2$ of the $n_2$-tuple points into an $m$-tuple point. By performing $t_2$ of these collisions, we specialize the system to
\[ L_{2,d}(m^{k+t_1+t_2},n_3^{t_3h_3},\dots,n_s^{t_sh_s}).\]
We iterate the argument until the $s$-th step. At the end we are dealing with the specialized system $ L_{2,d}( m^{k+t_1+\ldots+ t_s}) $. The latter is non-special by \cite[Theorem 32]{Du}, and this implies that $ L_{2,d}(m^k,n_1^{t_1h_1},\dots,n_s^{t_sh_s})$ is non-special.
\end{proof}
\end{pro}
Up to now, we could benefit from known results about non-special systems on $\p^2$ and $\p^3$. In these two cases, there are very precise conjectural classifications of special systems, and such conjectures are known to hold in many cases. However, for $n\ge 4$ not even a conjectural solution of the problem is known. For this reason, our results on $\p^4$ are limited to triple points, but they still provide hints to understand an almost unexplored topic.
\begin{pro}\label{pro:interpolazioneinp4}
If $d\ge 8$, then $ L_{4,d}(3^r)$ is non-special for every $r\le 11$. Moreover, if $d\ge 11$ then $ L_{4,d}(3^r)$ is non-special for every $r\le 66$.
\begin{proof}
For the first part we only have to prove that $ L_{4,d}(3^{11})$ is non-special. Since $\mathop{\rm vdim}\nolimits L_{4,d}(3^{11})>36$, it is enough to prove that $ L_{4,d}(3^{11},1^{36})$ is non-special. We apply Proposition \ref{pro:collisionetripliinp4} to degenerate $ L_{4,d}(3^{11},1^{36})$ to $ L_{4,d}(6,3^5)$. By using reducible divisors, it is easy to show that $ L_{4,d}(6,3^5)$ has a 0-dimensional base locus, and therefore it is non-special by \cite[Corollary 4.8]{BDP1}.
For the second part, assume that $d\ge 11$. We only have to prove the case $r=66$. Since $\mathop{\rm vdim}\nolimits L_{4,d}(3^{66})>216$, it is enough to prove that $ L_{4,d}(3^{66},1^{216})$ is non-special. Again, we use Proposition \ref{pro:collisionetripliinp4} to degenerate $ L_{4,d}(3^{66},1^{216})$ to $ L_{4,d}(6^6)$. By using reducible divisors, we see that $ L_{4,d}(6^6)$ has a 0-dimensional base locus, and therefore it is non-special by \cite[Corollary 4.8]{BDP1}.
\end{proof}
\end{pro}
Actually, something stronger holds. Proposition \ref{pro:collisionetripliinp4} can be generalized, by proving that the collision of $n+2$ triple points and a bunch of simple points in $\p^n$ gives a point of multiplicity 6. Thus we can repeat the argument of Proposition \ref{pro:interpolazioneinp4} to show that $ L_{n,8}(3^{2n+3})$ is non-special. In a similar fashion, $ L_{4,11}(4^{11})$ is non-special. However, these linear systems have a large virtual dimension, so we feel that the most interesting results are the ones stated in Proposition \ref{pro:interpolazioneinp4}.
\section{Introduction}
For quantum information applications, it is often more interesting to learn whether multipartite quantum states are entangled than to identify the quantum states themselves, e.g., \cite{intro1, intro2, intro3}. This is in fact what {\it direct detection of entanglement} does: it aims to determine whether quantum states are entangled even before identifying the states. Entanglement witnesses (EWs), which work with individual measurements followed by post-processing of the outcomes \cite{terhal}, provide an experimentally feasible approach for this purpose in general \cite{ewreport, ewreview}. Entanglement detection under fewer assumptions, for instance when detectors are not trusted \cite{mdiew1, mdiew2,mdiew3} or dimensions are unknown \cite{semiew}, is of practical significance for cryptographic applications.
For the practical usefulness of entanglement detection, it is worth examining the experimental resources involved. If {\it a priori} information about a quantum state is given, a set of EWs may be constructed accordingly and exploited for entanglement detection. With no {\it a priori} information, multiple EWs may be required. One possible method is quantum state tomography (QST), which identifies a $d$-dimensional quantum state with $O(d^2)$ measurements. Then, theoretical tools such as positive maps \cite{pmap}, e.g. partial transpose, or numerical tests involving semidefinite programming \cite{sdp1, sdp2, sdp3} can be applied. For EWs, however, little is known about the minimal number of measurements required for their realization. In fact, it may happen that repeating experiments for multiple EWs is less cost effective than QST \cite{laflamme}, and it is quite possible that no useful information is obtained, neither for entanglement detection nor for quantum state identification. This raises questions about the usefulness of EWs, in particular when {\it a priori} information about a particular state is not available.
A useful experimental setup for entanglement detection should distinguish the largest possible collection of entangled states with as few measurements as possible.
It is noteworthy that a tomographically complete measurement can ultimately identify a quantum state, so that theoretical tools may completely determine whether it is entangled or separable. From a practical point of view, it would therefore be highly desirable that measurements for entanglement detection are constructive, i.e., that they can be extended to a tomographically complete set by adding more detectors.
In this work we establish a feasible and practical framework of entanglement detection by applying a subset of measurements taken from a quantum $2$-design, namely mutually unbiased bases (MUBs)~\cite{ref:schwinger} and symmetric informationally complete states (SICs) \cite{ref:renes}. The connections between entanglement detection, MUBs, and quantum $2$-designs have first been explored in Refs. \cite{ref:spengler, ref:bae}, and subsequent results were found in, e.g.~\cite{ref:chen, ref:li, ref:graydon}.
Let us emphasize here that detection via MUBs is in some cases more powerful than the Peres-Horodecki criterion, since it also detects bound entangled states, i.e., those mixed entangled states from which no entanglement can be distilled. Furthermore, measurement setups with MUBs are very experimentally friendly; indeed, the MUB criterion~\cite{ref:spengler} resulted in the first experimental demonstration of bipartite bound entanglement~\cite{ref:hiesmayr2, ref:hiesmayr1}, predicted in 1998~\cite{ref:Horodecki}. Here we present a unifying approach to these connections with a three-fold advantage. First, by using incomplete sets of MUBs and SICs, the entanglement detection scheme extends naturally to an optimal reconstruction of the quantum state \cite{mubd2,scott2006}: once direct detection of entanglement fails, additional detectors are applied in the measurement scheme to distinguish a larger set of entangled states, and can ultimately be utilised to decide separability of the state via tomography. This demonstrates in a natural framework that larger sets of detectors are more useful for distinguishing entangled states. Next, our results have {\it twice} the efficiency of standard EWs, in the sense that both a lower and an upper bound for separable states exist, whereas EWs have only the zero-valued lower bound. Finally, the scheme can readily be applied to a measurement-device-independent (MDI) scenario, in which the assumptions on the detectors are relaxed. This can be achieved by converting the measurement into the preparation of a quantum $2$-design.
Let us begin with a brief summary of how EWs are implemented in practice. EWs correspond to observables that have non-negative expectation values for all separable states and negative expectation values for some entangled states. In general, they can be factorized into local observables, which are in turn decomposed into positive-operator-valued-measure (POVM) elements. A witness $W$ can be written in terms of POVMs $\{M_{i}^{(X)} \}$ for parties $X=A,B$, where each measurement is complete, i.e., $\sum_{i} M_{i}^{(X)} = \mathrm{I}_{X}$ with $\mathrm{I}_X$ the identity operator on system $X$, as
\begin{eqnarray}
W = \sum_{i} c_i~ M_i,~~\mathrm{where}~~ M_i = M_{i}^{(A)} \otimes M_{i}^{(B)}, \label{eq:1}
\end{eqnarray}
with constants $\{ c_i \}$. In implementation, a POVM element can be realized by projective measurements with ancillary systems, see e.g., \cite{naimark}. For a state $\rho$, the probabilities $\mathrm{Pr}[M_i | \rho] = \mbox{tr}[\rho M_i]$ are estimated experimentally by the detectors $\{ M_i \}$. Then, the expectation value of $W$ for a state $\rho$ is obtained by computing the linear combination, $\sum_{i}c_i \mathrm{Pr} [M_i | \rho ]$, which equals $\mbox{tr}[W\rho]$.
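For concreteness, the following minimal Python sketch (not part of the experimental procedure above; the witness $W=\tfrac{1}{2}\,\mathrm{I}_A\otimes\mathrm{I}_B-|\Phi^+\rangle\langle\Phi^+|$ and its Pauli-projector decomposition are one standard choice used purely for illustration) shows how $\mbox{tr}[W\rho]$ is recovered from the linear combination $\sum_i c_i \mathrm{Pr}[M_i|\rho]$ of locally measured probabilities.
\begin{verbatim}
import numpy as np
import itertools

# Pauli matrices and their eigenprojectors (outcomes +1 / -1).
I2 = np.eye(2)
paulis = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
          "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
          "z": np.array([[1, 0], [0, -1]], dtype=complex)}
proj = {a: [(I2 + s * p) / 2 for s in (+1, -1)] for a, p in paulis.items()}

# Example witness W = I/2 - |Phi+><Phi+|, written as in Eq. (1) as
# W = sum_i c_i M_i^A (x) M_i^B with local projective POVMs; the weights
# follow from the Pauli decomposition W = (II - XX + YY - ZZ)/4, with the
# II/4 term absorbed into the (complete) z-basis terms.
signs = {"x": -1, "y": +1, "z": -1}
terms = []
for a in ("x", "y", "z"):
    for i, j in itertools.product(range(2), repeat=2):
        parity = (+1 if i == 0 else -1) * (+1 if j == 0 else -1)
        c = (1 if a == "z" else 0) / 4 + signs[a] * parity / 4
        terms.append((c, proj[a][i], proj[a][j]))

def witness_value(rho):
    """Estimate tr[W rho] as the linear combination sum_i c_i Pr[M_i|rho]."""
    return sum(c * np.real(np.trace(rho @ np.kron(MA, MB)))
               for c, MA, MB in terms)

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_ent = np.outer(phi, phi.conj())                       # |Phi+><Phi+|
rho_sep = np.kron(np.diag([1.0, 0]), np.diag([1.0, 0]))   # |00><00|
print(witness_value(rho_sep))   # ~ 0.0  (non-negative for separable states)
print(witness_value(rho_ent))   # ~ -0.5 (negative: entanglement detected)
\end{verbatim}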
Although the factorization with local measurements in Eq.~(\ref{eq:1}) is not necessary to realize EWs, it provides a natural framework for converting standard EWs to the MDI scenario that closes all loopholes arising from detectors. In such a scenario two parties Alice and Bob, who want to learn if an unknown quantum state $\rho_{AB}$ is entangled, prepare a set of quantum states, after which a measurement is performed by untrusted parties. A standard witness in Eq. (\ref{eq:1}) can be used to construct an MDI-EW as follows,
\begin{eqnarray}
W_{\mathrm{MDI}} = \sum_{i} c_i~ M_{i}^{(A)\top } \otimes M_{i}^{(B)\top}, \label{eq:2}
\end{eqnarray}
where the transpose $\top$ is performed in a chosen basis of $\mathcal{H}_Y$ for $Y=A,B$ \cite{mdiew2}. The separable decomposition in Eq. (\ref{eq:2}) shows which quantum states the two parties must prepare, $\{ \widetilde{M}_{i}^{(A) } \}$ and $\{ \widetilde{M}_{i}^{(B) } \}$, where $\widetilde{M}_{i}^{( Y) } = M_{i}^{(Y) } / \mbox{tr}[M_{i}^{(Y) }]$ correspond to the quantum states.
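Continuing the illustrative sketch above, the conversion to Eq.~(\ref{eq:2}) amounts to transposing each local POVM element and normalizing it to obtain the state to be prepared; a minimal version, reusing the hypothetical decomposition in \texttt{terms}, could read as follows.
\begin{verbatim}
# Convert the local decomposition {(c_i, M_i^A, M_i^B)} above into its MDI
# counterpart of Eq. (2): transpose each local element (here in the
# computational basis) and normalize it to obtain the state to prepare.
def mdi_witness(terms):
    W_mdi = np.zeros((4, 4), dtype=complex)
    preparations = []   # (state for Alice, state for Bob), one pair per term
    for c, MA, MB in terms:
        W_mdi += c * np.kron(MA.T, MB.T)
        preparations.append((MA / np.trace(MA), MB / np.trace(MB)))
    return W_mdi, preparations

W_mdi, preparations = mdi_witness(terms)
\end{verbatim}
For rank-one projective elements, the normalized operator is the projector itself, so the preparations are simply the measurement basis states.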
Let us reiterate that EWs with local measurements in Eq. (\ref{eq:1}) are readily converted to their counterparts in an MDI scenario, where entangled states are detected with fewer assumptions. We also note that, to the best of our knowledge, there is no general and systematic way of finding the factorization with a minimal number of local measurements. A decomposition with a minimal number of POVMs is essential, as mentioned, to take advantage of EWs, which can detect entangled states without QST.
We now introduce a particular set of POVMs called a quantum $2$-design. A set of quantum states $\{|\psi_i\rangle \}_k$ in a $d$-dimensional Hilbert space, $|\psi_i \rangle \in \mathcal{H}_d$, or the corresponding rank-one operators, is called a quantum $2$-design if the average value of any second-order polynomial $f(\psi)$ over the set $\{|\psi_i\rangle \}_k$ equals the average of $f(\psi)$ over all normalized states with respect to a suitable measure, such as the Haar measure. This holds true if and only if the average of $| \psi_i \rangle \langle \psi_i |^{\otimes 2}$ over the entire $2$-design is proportional to the projection onto the symmetric subspace of $\mathcal{H}_d \otimes \mathcal{H}_d$. A complete set of $(d+1)$ MUBs, and a SIC-POVM containing $d^2$ elements, are both quantum $2$-designs. In fact, the existence of $(d+1)$ MUBs and of $d^2$ SIC states in all dimensions remains a long-standing open problem in quantum information theory \cite{ref:openmub, ref:opensic}. For instance, complete sets of MUBs are known to exist in prime-power dimensions~\cite{mubd1, mubd2, mubd3, mubd4, mubd5, mubd6,mubd7} but have not been found in any other composite dimension. For example, when $d=6$, it is conjectured that only $3$ MUBs exist~\cite{mubdp1, mubdp2, mubdp3, mubdp4,mubdp5}, but there is no proof. While it is conjectured that a SIC-POVM exists for every $d$, the largest dimension for which an example has been found is $d=323$ \cite{scott2017}.
Let $\mathcal{B}_k = \{|b_{i}^{k} \rangle \}_{i=1}^{d} $ denote a set of MUBs in the Hilbert space $\mathcal{H}_d$, and let $S_d = \{ |s_k\rangle \}_{k=1}^{d^2}$ denote a SIC-POVM in the same Hilbert space. The two sets satisfy the equations
\begin{eqnarray}
|\langle b_{i}^{l} | b_{j}^{k}\rangle |^2 = d^{-1},~\mathrm{and}~ |\langle s_k | s_l \rangle |^2 = (d+1)^{-1}, ~~\label{mubsic}
\end{eqnarray}
respectively, for all $k\neq l$ (and, in the MUB case, for all $i,j$). It is well known that a full set of $(d+1)$ MUBs and a SIC-POVM are tomographically complete: measurements from either set determine a quantum state uniquely. Furthermore, both sets are optimal and simple for QST, in that they minimize the error of the estimated statistics while at the same time having exceptionally simple state reconstruction formulas \cite{mubd2,scott2006}. Note that both MUBs and SIC-POVMs are experimentally feasible, and have been implemented for the purpose of QST; a recent demonstration is given in \cite{ref:sicexp1}.
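As a small self-contained illustration, the three MUBs of $d=2$ (the Pauli eigenbases) and one standard tetrahedron SIC-POVM can be written down explicitly and checked against Eq.~(\ref{mubsic}); the following Python sketch, intended only as an aid to the reader, does exactly this.
\begin{verbatim}
import numpy as np
import itertools

# The complete set of d+1 = 3 MUBs for d = 2 (Pauli Z, X, Y eigenbases).
mubs = [[np.array([1, 0]), np.array([0, 1])],
        [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)],
        [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]]

# One standard SIC-POVM for d = 2: four "tetrahedron" states.
w = np.exp(2j * np.pi / 3)
sic = [np.array([1, 0], dtype=complex)] + \
      [np.array([1, np.sqrt(2) * w**k]) / np.sqrt(3) for k in range(3)]

# Check the defining overlaps: 1/d between states of different MUBs and
# 1/(d+1) between different SIC elements.
for Bk, Bl in itertools.combinations(mubs, 2):
    for b, c in itertools.product(Bk, Bl):
        assert np.isclose(abs(np.vdot(b, c))**2, 1 / 2)
for s, t in itertools.combinations(sic, 2):
    assert np.isclose(abs(np.vdot(s, t))**2, 1 / 3)
print("MUB and SIC overlap conditions verified for d = 2")
\end{verbatim}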
\begin{figure}
\begin{center}
\includegraphics[scale=0.14]{pic2}
\caption{ Our strategy for detecting entangled states via MUBs and SICs is illustrated, where $X = \mathrm{M}, \mathrm{S}$ and $n=m,\widetilde{m}$, see inequalities in Eqs. (\ref{eq:mubinq}) and (\ref{eq:sicinq}) satisfied by all separable states. Violation of the bounds implies detection of entangled states. Once the measurement outcomes are collected, they are exploited {\it twice} to find if the upper or lower bound is violated, in which case entangled states are detected. }
\label{fig:illustration}
\end{center}
\end{figure}
We now consider tomographically incomplete sets of MUBs and SICs for detecting entangled states. We denote by $I_{m,d}^{( \mathrm{M})}$ and $I_{\widetilde{m},d}^{( \mathrm{S})}$ the collections of probabilities when the measurements are applied in MUBs and SICs, respectively,
\begin{eqnarray}
I_{m,d}^{(\mathrm{M})} (\rho: \{ \mathcal{B}_k\}_{k=1}^{m}) & = & \sum_{k=1}^{m} \sum_{i=1}^{d} \mathrm{Pr} ( i , i | \mathcal{B}_k, \mathcal{B}_k),~~ \label{eq:imub} \\
I_{\widetilde{m},d}^{(\mathrm{S})} (\rho:S_{\widetilde{m}} ) & = & \sum_{j=1}^{\widetilde{m}} \mathrm{Pr} ( j , j | S_{\widetilde{m}} ,S_{\widetilde{m}}),~~ \label{eq:isic}
\end{eqnarray}
where $S_{\widetilde{m}}$ denotes a collection of $\widetilde{m}$ states out of the $d^2$ SIC elements, and $\mathrm{Pr} ( \alpha, \beta | A,B)$ denotes the probability of obtaining the outcome pair $(\alpha, \beta)$ given measurements $A$ and $B$ on the two parties. To be explicit, for a state $\rho$, $\mathrm{Pr} ( i , i | \mathcal{B}_k, \mathcal{B}_k) = \mbox{tr}[ |b_{i}^{k} \rangle \langle b_{i}^{k} | \otimes |b_{i}^{k} \rangle \langle b_{i}^{k} | ~ \rho ]$ and $\mathrm{Pr} ( j,j | S_{\widetilde{m}}, S_{\widetilde{m}}) = \mbox{tr}[ | s_{j} \rangle \langle s_{j} | \otimes | s_{j} \rangle \langle s_{j} | ~ \rho ]$ \cite{footnote}. These probabilities can be obtained simply by performing local measurements in MUBs or SICs. Note that $m\leq d+1$ and $\widetilde{m}\leq d^2$, where equality corresponds to the case in which the measurement setting is tomographically complete. In that case, one can reconstruct the quantum state from the measurements and apply all theoretically known criteria to detect entanglement.
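A direct numerical transcription of Eqs.~(\ref{eq:imub}) and (\ref{eq:isic}) may help to fix the conventions; the sketch below reuses the $d=2$ bases constructed above and evaluates both quantities for a product state.
\begin{verbatim}
def I_mub(rho, bases):
    """Sum of diagonal correlation probabilities over the given MUBs."""
    return sum(np.real(np.trace(rho @ np.kron(np.outer(b, b.conj()),
                                              np.outer(b, b.conj()))))
               for B in bases for b in B)

def I_sic(rho, states):
    """Same quantity for a subset of SIC states."""
    return sum(np.real(np.trace(rho @ np.kron(np.outer(s, s.conj()),
                                              np.outer(s, s.conj()))))
               for s in states)

# The product state |0>|0> with all three MUBs in d = 2 gives
# 1 + 1/2 + 1/2 = 2 (the separable upper bound), and 4/3 for the
# full tetrahedron SIC.
rho00 = np.kron(np.diag([1.0, 0]), np.diag([1.0, 0]))
print(I_mub(rho00, mubs), I_sic(rho00, sic))   # 2.0  1.333...
\end{verbatim}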
Since the set of all separable states is convex, the quantities $I_{m,d}^{(\mathrm{M})}$ and $I_{\widetilde{m},d}^{(\mathrm{S})}$ defined in Eqs. (\ref{eq:imub}) and (\ref{eq:isic}) have nontrivial upper and lower bounds satisfied by all separable states. In what follows, the bounds for selections of $m$ MUBs and $\widetilde{m}$ SICs are presented explicitly. We minimize and maximize each of the bounds with respect to the set of MUBs and SICs; e.g., minimizing (maximizing) the lower bound over all MUBs gives $\mathrm{L}_{m,d}^{- (\mathrm{M})} $ ($\mathrm{L}_{m,d}^{+(\mathrm{M})} $). The former (latter) gives a bound which is independent of (dependent on) the choice of MUBs. Consequently, $\mathrm{L}_{m,d}^{+(\mathrm{M})} $ detects a larger set of entangled states but applies only to a specific collection of MUBs.
\setlength{\arrayrulewidth}{0.5pt}
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c|c|c|c|c||c|c|c||}
\hhline{|t=====:t:===t|}
&\multicolumn{4}{c||}{Lower Bounds}&\multicolumn{3}{c||}{Upper Bounds}\\
\hhline{||-|-|-|-|-||-|-|-||}
&$\cellcolor{blue!50}d=2$&$\cellcolor{red!50}d=3$&\multicolumn{2}{c||}{\cellcolor{green!50}$d=4$}& \cellcolor{blue!50}$d=2$&\cellcolor{red!50}$d=3$&\cellcolor{green!50}$d=4$\\
\hhline{||-|-|-|-|-||-|-|-||}
$m$&$\vphantom{\biggl\lbrace} \mathrm{L}_{m,2}^{(\mathrm{M})}$ ~&~ $\mathrm{L}_{m,3}^{(\mathrm{M})}$ ~&~ $\mathrm{L}_{m,4}^{- (\mathrm{M})}$ ~&~ $\mathrm{L}_{m,4}^{+ (\mathrm{M})}$ ~&~ $\mathrm{U}_{m,2}^{(\mathrm{M})}$ ~&~ $\mathrm{U}_{m,3}^{(\mathrm{M})}$
~&~ $\mathrm{U}_{m,4}^{(\mathrm{M})}$ \\
\hhline{|:=====||===:|}
$2$ & 1/2& 0.211& 0 & 0 & 3/2 & 4/3 & 5/4 \\ \hhline{||-|-|-|-|-|-|-|-||}
$3$ & 1& 1/2& 1/4 & 1/2 & 2 & 5/3 & 6/4 \\ \hhline{||-|-|-|-|-|-|-|-||}
$4$ & & 1& 1/2 & 1/2 & & 2 & 7/4 \\ \hhline{||-|-|-|-|-|-|-|-||}
$5$ & & & 1 & 1 & & & 2 \\
\hhline{|b:=====:b:===:b|}
\end{tabular}
\end{center}
\caption{Lower and upper bounds on MUBs, $\mathrm{L}_{m,d}^{\pm (\mathrm{M})}$ and $\mathrm{U}_{m,d}^{(\mathrm{M})}$, see Eqs. (\ref{eq:mubl1}), (\ref{eq:mubl2}) and (\ref{eq:mubu}), are summarized for $m$ MUBs in $\mathcal{H} = \mathbb{C}^d$, for $d=2,3,4$. For $d=2,3$, different full sets of MUBs are unitarily equivalent, hence we have $\mathrm{L}_{m,d}^{+ (\mathrm{M})} = \mathrm{L}_{m,d}^{ - (\mathrm{M})}$. }
\label{tab:MUB}
\end{table}
When the measurements are taken from a set of MUBs, the minimal and maximal lower bounds, $\mathrm{L}_{m,d}^{- (\mathrm{M})} $ and $\mathrm{L}_{m,d}^{+(\mathrm{M})} $, respectively, are given by
\begin{eqnarray}
\mathrm{L}_{m,d}^{- (\mathrm{M})} &=& \min_{\{ \mathcal{B}_{k} \}_{k=1}^m} \min_{\sigma_{\mathrm{sep}}} ~ I_{m,d}^{(\mathrm{M})} (\sigma_{\mathrm{sep}} : \{ \mathcal{B}_{k} \}_{k=1}^m ), ~~\label{eq:mubl1} \\
\mathrm{L}_{m,d}^{+ (\mathrm{M})} &=& \max_{ \{ \mathcal{B}_{k} \}_{k=1}^m } \min_{\sigma_{\mathrm{sep}}} ~ I_{m,d}^{(\mathrm{M})} (\sigma_{\mathrm{sep}} : \{ \mathcal{B}_{k} \}_{k=1}^m ), ~~ \label{eq:mubl2}
\end{eqnarray}
where the optimisation is taken over all separable states $\sigma_{\text{sep}}$ and all possible collections of $m$ MUBs, $\{ \mathcal{B}_{k} \}_{k=1}^m$, that exist in dimension $d$. It is clear that $\mathrm{L}_{m,d}^{+ (\mathrm{M})} \geq \mathrm{L}_{m,d}^{- (\mathrm{M})} $, and the gap between the bounds is due to different sets of $m$ MUBs having different overlaps with the set of separable states.
Unfortunately, we have not found a systematic and general method of obtaining these bounds, and instead considered all possible sets of $m$ MUBs, minimizing $I_{m,d}^{ (\mathrm{M})}$ over all separable states for each. In Table \ref{tab:MUB}, the lower bounds, obtained analytically, are shown for $d=2,3,4$. It turns out that $\mathrm{L}_{m,d}^{- (\mathrm{M})} = \mathrm{L}_{m,d}^{ + (\mathrm{M})}$ for $d=2,3$, whereas for $d=4$ we find a strict gap, $\mathrm{L}_{3,4}^{+ (\mathrm{M})} > \mathrm{L}_{3,4}^{ - (\mathrm{M})}$. The difference here is due to the existence of an infinite family of 3 MUBs in $d=4$, resulting in unitarily inequivalent triples. The triple which gives $ \mathrm{L}_{3,4}^{ - (\mathrm{M})}=1/4$ is the only extendible set of 3 MUBs, in the sense that no other triple extends to a complete set of 5 MUBs. For $d=2,3,$ all subsets of $m$ MUBs are equivalent and extendible.
In Ref.~\cite{ref:spengler}, it has been shown that the upper bound does not depend on the selection of MUBs and is given by
\begin{eqnarray}
\mathrm{U}_{m,d}^{(\mathrm{M})} & = & \max_{\sigma_{\mathrm{sep}}} ~ I_{m,d}^{(\mathrm{M})} ( \sigma_{\mathrm{sep}}: \{ \mathcal{B}_{k}\}_{ k=1}^{m} ) = 1 + \frac{m-1}{d}, ~~~\label{eq:mubu}
\end{eqnarray}
for any $m$ MUBs $\{ \mathcal{B}_{k } \}_{k =1}^{m}$. Note that in the case of a quantum $2$-design with $m=d+1$, the upper bound satisfies $\mathrm{U}_{d+1,d}^{(\mathrm{M})} =2$, independently of the dimension $d$. Notice also that removing a single basis from $I_{m,d}^{( \mathrm{M})}$ decreases the upper bound by exactly $1/d$, i.e.,
\begin{eqnarray}
\mathrm{U}_{m+1 ,d }^{(\mathrm{M})} - \mathrm{U}_{m ,d }^{(\mathrm{M})} ~= ~d^{-1} \nonumber
\end{eqnarray}
for all $m$.
As our first main result, using Table~\ref{tab:MUB} and Eq.~(\ref{eq:mubu}), we construct inequalities for the quantity in Eq.~(\ref{eq:imub}), optimized over the choice of $m$ MUBs, as
\begin{eqnarray}
\mathrm{L}_{m,d}^{-(\mathrm{M})} \leq I_{m,d}^{(\mathrm{M})} (\sigma_{\mathrm{sep}} ) \leq \mathrm{U}_{m,d}^{(\mathrm{M})}\,, \label{eq:mubinq}
\end{eqnarray}
that are satisfied by all separable states in $\mathcal{H}_d \otimes \mathcal{H}_d$. A quantum state must be entangled if it violates one of the inequalities above, see also Fig. \ref{fig:illustration}. It is also worth mentioning that these inequalities detect bound entangled states when $m=d+1$, as shown in \cite{ref:hiesmayr2, ref:hiesmayr1}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{||c | c | c | c || c | c | c||}
\hhline{|t====:t:===t|}
&\multicolumn{3}{c||}{Lower Bounds}&\multicolumn{3}{c||}{Upper Bounds}\\
\hhline{||-|-|-|-||-|-|-||}
&\cellcolor{blue!50}$d=2$&\multicolumn{2}{c||}{\cellcolor{red!50}$d=3$}& \cellcolor{blue!50}$d=2$&\multicolumn{2}{c||}{\cellcolor{red!50}$d=3$}\\
\hhline{||-|-|-|-||-|-|-||}
$\widetilde{m}~ $ & $\vphantom{\biggl\lbrace} ~\mathrm{L}_{\widetilde{m},2}^{(\mathrm{S})}$ ~ & ~$ \mathrm{L}_{\widetilde{m},3}^{ - (\mathrm{S} ) } $ ~ &~ $\mathrm{L}_{\widetilde{m}, 3}^{+(\mathrm{S})} $ ~& ~ $\mathrm{U}_{\widetilde{m} ,2}^{(\mathrm{S})}$ ~ &~ $ \mathrm{U}_{\widetilde{m}, 3}^{+ ( \mathrm{S} ) } $ ~ & ~ $\mathrm{U}_{\widetilde{m}, 3}^{- (\mathrm{S})}$ \\
\hhline{|:====||===:|}
3 ~& 0 & 0 & 0 & 1.244 & 1.254 & 9/8 \\ \hhline{||----||---||}
4 ~& 4/15 & 0 & 0 & 4/3 &1.400 & 1.25 \\ \hhline{||----||---||}
5 ~& 2/3 & 0 & 0 & 4/3 &1.463 & 1.400 \\\hhline{||----||---||}
6 ~& & 0 & 0.112& & 3/2 & 1.482 \\ \hhline{||----||---||}
7 ~& & 3/20 & 3/20 & & 3/2 & 3/2 \\ \hhline{||----||---||}
8 ~& & 3/8 & 3/8 & & 3/2 & 3/2 \\ \hhline{||----||---||}
9 ~& & 3/4 & 3/4 & & 3/2 & 3/2 \\
\hhline{|b====:b:===b|}
\end{tabular}
\end{center}
\caption{The lower and upper bounds via SICs, $\mathrm{L}_{\widetilde{m},d}^{\pm (\mathrm{S})}$ and $\mathrm{U}_{\widetilde{m},d}^{\pm (\mathrm{S})}$, are shown for $d=2,3$. For $d=2$ there is only one SIC-POVM while for $d=3$ we use the Hesse SIC defined in the Appendix. Note that $\mathrm{L}_{\widetilde{m},2}^{ + (\mathrm{S})} = \mathrm{L}_{\widetilde{m},2}^{ - (\mathrm{S})}$ and $\mathrm{U}_{\widetilde{m},2}^{ + (\mathrm{S})} = \mathrm{U}_{\widetilde{m},2}^{ - (\mathrm{S})}$. In contrast to MUBs, we find that $\mathrm{U}_{\widetilde{m},d}^{ + (\mathrm{S})} \geq \mathrm{U}_{\widetilde{m},d}^{ - (\mathrm{S})}$. }
\label{tab:SICs}
\end{table}
In a similar way, lower and upper bounds for SICs are denoted as follows, with $g=\pm$, and $\mathrm{opt^{+}} = \max$ and $\mathrm{opt^{-}} = \min$,
\begin{eqnarray}
\mathrm{L}_{\widetilde{m},d}^{g ~(\mathrm{S })} &=& \mathrm{opt^g}_{ S_{\widetilde{m}}\subseteq S_{d^2} } \min_{\sigma_{\mathrm{sep}}} ~ I_{\widetilde{m},d}^{(\mathrm{S })} (\sigma_{\mathrm{sep}} : S_{\widetilde{m}}) ~\mathrm{and}~~\label{eq:sicl} \\
\mathrm{U}_{\widetilde{m},d}^{g~ (\mathrm{S })} &=& \mathrm{opt^g}_{ S_{\widetilde{m}}\subseteq S_{d^2} } \max_{\sigma_{\mathrm{sep}}} ~I_{\widetilde{m},d}^{(\mathrm{S })} (\sigma_{\mathrm{sep}} : S_{\widetilde{m}}), ~~\label{eq:sicu}
\end{eqnarray}
where $S_{\widetilde{m}}$ is a set of $\widetilde{m}$ SIC elements, and the full SIC-POVM is denoted by $S_{d^2}$. Again, we have not found a systematic and general method of computing the upper and lower bounds. However, having explored all possible subsets of a given SIC-POVM in $d=2,3$, we present these bounds in Table \ref{tab:SICs}. Suboptimal bounds for $d=4$ are also presented in the Appendix. We observe that $\mathrm{U}_{\widetilde{m},d}^{ + (\mathrm{S})} \geq \mathrm{U}_{\widetilde{m},d}^{ - (\mathrm{S})}$, i.e., the choice of subset of SIC elements gives rise to a gap between these upper bounds. Our second main result is then the following inequalities, satisfied by all separable states,
\begin{eqnarray}
\mathrm{L}_{\widetilde{m},d}^{-(\mathrm{S})} \leq I_{\widetilde{m},d}^{(\mathrm{S})} (\sigma_{\mathrm{sep}} ) \leq \mathrm{U}_{\widetilde{m},d}^{+(\mathrm{S})}\,, \label{eq:sicinq}
\end{eqnarray}
where $\mathrm{L}_{\widetilde{m},d}^{-(\mathrm{S})}$ and $\mathrm{U}_{\widetilde{m},d}^{+(\mathrm{S})}$ are found in Table \ref{tab:SICs}. Even tighter inequalities with $\mathrm{L}_{\widetilde{m},d}^{+(\mathrm{S})}$ and $\mathrm{U}_{\widetilde{m},d}^{-(\mathrm{S})}$ can be derived by specifying the corresponding subset of $\widetilde{m}$ SICs. We note that for large $\widetilde{m}$ the upper bounds become independent of the choice of SICs, e.g., $U^{+(S)}_{\widetilde{m},3}=U^{-(S)}_{\widetilde{m},3}=3/2$ for $\widetilde{m}=7,8,9$.
\begin{figure}
\includegraphics[scale=0.13]{picmub}
\caption{The quantities $I_{2,3}^{(\mathrm{M})}$, $I_{3,3}^{(\mathrm{M})}$, and $I_{4,3}^{(\mathrm{M})}$ are applied to detect entangled states. Once $I_{m,d}^{(\mathrm{M})}$ is obtained for an unknown quantum state, it can be utilized twice for entanglement detection, with both the upper and the lower bounds. E.g., the upper bounds are violated by entangled isotropic states and the lower bounds by entangled Werner states.}
\label{fig:picmub}
\end{figure}
While these inequalities have been obtained by extensively considering all sets of MUBs and SICs, analytic expressions for the upper and lower bounds can be derived for a quantum $2$-design,
\begin{eqnarray}
1\leq I_{d+1,d}^{(\mathrm{M})}(\sigma_{\mathrm{sep}})\leq 2,~~\frac{d}{d+1} \leq I_{d^2,d}^{(\mathrm{S})}(\sigma_{\mathrm{sep}})\leq \frac{2d}{d+1},~~~~ \label{eq:inqd}
\end{eqnarray}
as shown in the Appendix. The upper bounds on $I_{d+1,d}^{(\mathrm{M})}$ and $I_{d^2,d}^{(\mathrm{S})}$ are proven in Refs. \cite{ref:spengler} and \cite{ref:li}, respectively. The lower bounds are shown in Ref. \cite{ref:bae} and later in Ref. \cite{ref:graydon}. As mentioned earlier, when the full measurement set of a quantum $2$-design is used, it is more efficient to exploit the measurements for QST and to apply theoretical tools to the separability problem, which is known to be $NP$-hard.
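These bounds can also be spot-checked numerically: since both quantities are linear in $\rho$ and the separable set is the convex hull of pure product states, it suffices to sample product states. A minimal Monte Carlo sketch for $d=2$, reusing the functions defined earlier, is the following.
\begin{verbatim}
rng = np.random.default_rng(0)

def random_pure_state(d=2):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Every pure product state (and hence, by convexity and linearity, every
# separable state) should satisfy 1 <= I^(M) <= 2 and 2/3 <= I^(S) <= 4/3
# for d = 2 with the full MUB set and the full SIC.
for _ in range(2000):
    a, b = random_pure_state(), random_pure_state()
    rho = np.kron(np.outer(a, a.conj()), np.outer(b, b.conj()))
    assert 1 - 1e-9 <= I_mub(rho, mubs) <= 2 + 1e-9
    assert 2/3 - 1e-9 <= I_sic(rho, sic) <= 4/3 + 1e-9
print("2-design bounds hold on all sampled product states")
\end{verbatim}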
To illustrate the effectiveness of the inequalities in Eqs. (\ref{eq:mubinq}) and (\ref{eq:sicinq}), consider the isotropic and Werner states,
\begin{eqnarray}
&& \mathrm{Werner ~state}: ~\rho_{\mathrm{W}} ( p ) = p\; \widetilde{\Pi }_{\mathrm{sym} } + (1-p)\; \widetilde{\Pi }_{\mathrm{asym}} \label{eq:werner} \\
&& \mathrm{isotropic ~ state}: ~\rho_{\mathrm{iso} } (q) = q | \Phi^+ \rangle \langle \Phi^{+} | + (1-q ) \mathbbm{1}_d \otimes \mathbbm{1}_d~~~~~~\label{eq:iso}
\end{eqnarray}
where $\widetilde{\Pi}_{\mathrm{sym}}$ and $\widetilde{\Pi}_{\mathrm{asym}}$ denote the normalized projections onto the symmetric and anti-symmetric subspaces, respectively, and $\mathbbm{1}_d = \mathbbm{1}/d$, the normalized identity operator in dimension $d$. It is known that $\rho_{\mathrm{W}}$ is entangled iff $p<1/2$ and $\rho_{\mathrm{iso}}$ iff $q>(d+1)^{-1}$. In Fig. \ref{fig:picmub}, the capability of entanglement detection with $I_{m,3}^{(\mathrm{M})}$ is shown for $m=2,3,4$. The capability of entanglement detection via SICs is given in the Appendix.
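As a worked example in the lowest dimension (complementing Fig.~\ref{fig:picmub}, which shows $d=3$), the Werner family in $d=2$ gives $I_{3,2}^{(\mathrm{M})}(\rho_{\mathrm{W}}(p))=2p$ for a complete set of MUBs, so the lower bound $\mathrm{L}_{3,2}^{(\mathrm{M})}=1$ of Table~\ref{tab:MUB} is violated exactly when $p<1/2$, i.e., exactly when the state is entangled. The following sketch, reusing the earlier functions, makes this explicit.
\begin{verbatim}
# Werner states in d = 2 against the MUB lower bound for m = 3, d = 2.
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
Pi_asym = np.outer(psi_minus, psi_minus.conj())      # normalized (rank 1)
Pi_sym = (np.eye(4) - Pi_asym) / 3                   # normalized (rank 3)

def werner(p):
    return p * Pi_sym + (1 - p) * Pi_asym

for p in (0.2, 0.4, 0.5, 0.8):
    val = I_mub(werner(p), mubs)      # equals 2p for this family
    print(p, round(val, 3), "entanglement detected:", bool(val < 1))
# Violation of the lower bound occurs exactly for p < 1/2, which is the
# known separability threshold of the two-qubit Werner family.
\end{verbatim}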
Due to the linearity of Eqs. (\ref{eq:imub}) and (\ref{eq:isic}), with respect to the state $\rho$, one may expect that the inequalities in Eq. (\ref{eq:inqd}) are closely connected to standard EWs. Here we point out the equivalence between the lower bounds in Eq. (\ref{eq:inqd}) and the partial transpose criterion, by considering the so-called structural physical approximation (SPA) \cite{intro1}. For recent reviews on the SPA see \cite{intro2, intro3}, as well as the Appendix for further details. The Choi-Jamiolkowski (CJ) operator for the transpose map corresponds to an EW, denoted by $W$, i.e., $\mbox{tr}[\sigma_{\mathrm{sep} }W] \geq 0$, and $\mbox{tr}[\rho W]<0$ for some entangled states $\rho$ which include the entangled Werner states in Eq. (\ref{eq:werner}). By applying the SPA to the transpose map, the resulting CJ operator denoted by $\widetilde{W}$ is given by $\widetilde{W} = \widetilde{\Pi}_{\mathrm{sym}}$. The condition $\mbox{tr}[\sigma_{\mathrm{sep} }W] \geq 0$ then translates to $\mbox{tr}[\sigma_{\mathrm{sep} } \widetilde{W}] \geq [d(d+1)]^{-1}$, see Ref. \cite{ref:bae}, which is equivalent to the lower bounds in Eq. (\ref{eq:inqd}).
Finally, we can see that $I_{m,d}^{(\mathrm{M})} (\rho) = \mbox{tr}[ W_{m,d}^{(\mathrm{M})} \rho]$ and $I_{\widetilde{m},d}^{(\mathrm{S})} (\rho) = \mbox{tr}[ W_{\widetilde{m},d}^{(\mathrm{S})} \rho]$ are readily converted for entanglement detection in an MDI scenario, where
\begin{eqnarray}
W_{m,d}^{( \mathrm{M})} (\{ \mathcal{B}_{k}\}_{k=1}^m ) &=& \sum_{k=1}^{m} \sum_{i=1}^d | b_{i}^{k} \rangle \langle b_{i}^{k}| \otimes | b_{i}^{k} \rangle \langle b_{i}^{k}|, \nonumber \\
W_{\widetilde{m},d}^{( \mathrm{S})} (S_{\widetilde{m}} ) &=& \sum_{j = 1}^{\widetilde{m}} | s_{j} \rangle \langle s_{j} | \otimes | s_{j} \rangle \langle s_{j} |. \nonumber
\end{eqnarray}
As described in Eq. (\ref{eq:2}), both $I_{m,d}^{(\mathrm{M})}$ and $I_{\widetilde{m},d}^{(\mathrm{S})}$ can be obtained in an MDI manner with $W_{m,d}^{( \mathrm{M}) \top} (\{ \mathcal{B}_{k}\}_{k=1}^m )$ and $W_{\widetilde{m},d}^{( \mathrm{S}) \top} (S_{\widetilde{m}} ) $, respectively, by preparing the sets of quantum states $\{ \mathcal{B}_{ k} \}_{k=1}^m$ and $S_{\widetilde{m}}$ instead of performing measurements in these bases. Note also that this provides both upper and lower MDI bounds, as opposed to standard MDI-EWs.
To conclude, let us recall the problem addressed at the outset. How can we learn efficiently whether an unknown quantum state is entangled, with a measurement that is tomographically incomplete? We also assume that, for practical purposes, the setup is constructive, in that it can easily be extended to that of QST. While EWs are useful for direct detection of entanglement, it is highly non-trivial to compare and connect their measurements to those which are useful for QST. However, this is a crucial requirement when experimentalists decide whether to perform direct detection of entanglement or to ultimately add more detectors and settle the separability question via state reconstruction. Our results achieve this objective with a measurement setup which can detect entangled states with cost-effective measurements, and which extends naturally to the tomographically complete setup of a quantum 2-design, allowing for optimal state reconstruction. Furthermore, they offer {\it double} the efficiency of standard and non-linear EWs by providing both upper and lower bounds. One consequence of our analysis is that certain sets of MUBs are more `useful' for entanglement detection than others. For instance, in dimension $d=4$, the set of 3 MUBs which extends to a complete set provides the minimal (weakest) lower bound and therefore detects a smaller set of entangled states than unextendible MUBs. Thus, one might expect that unextendible MUBs are more useful in other dimensions too. We also note that the results can be generalized to weighted $2$-designs \cite{roy}, which would allow for entanglement detection and QST in dimensions where the existence of MUBs and SICs is not yet known.
We envisage directions in entanglement detection beyond standard EWs and towards related problems in quantum information theory. While we have already shown some links between standard EWs and the MUB-inequality~(\ref{eq:mubinq}) and the SIC-inequality~(\ref{eq:sicinq}), we expect further connections to also hold true. For example, recently it has been shown that MUBs can be used to construct positive but not completely positive maps, which lead to a class of EWs \cite{darek}. Further relations in this direction may reveal additional capabilities of EWs at an even deeper level. It would also be interesting to consider nonlinearity, e.g., in Ref. \cite{nl2}, to improve the inequalities. We also hope that the presented framework of entanglement detection may offer insightful hints towards a solution of the existence problem for MUBs and SICs from an entanglement perspective \cite{ref:openmub, ref:opensic}. In addition, MUBs and SICs have quite recently been generalized by relaxing the rank-$1$ condition to so-called mutually unbiased measurements (MUMs) and symmetric informationally complete measurements (SIMs), which exist in all finite dimensions \cite{ref:kalevgourmub, ref:kalevgoursic}. Both MUMs and SIMs, as well as other similar measurements, could be applied to our framework in similar ways, leading to more experimentally feasible entanglement detection methods in arbitrary dimensions.
\section*{Acknowledgement}
J.B. is supported by the Institute for Information \& communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (R0190-17-2028), the National Research Foundation of Korea (NRF-2017R1E1A1A03069961), the KIST Institutional Program (2E26680-17-P025), and the People Programme (Marie Curie Actions) of the European Union Seventh Framework Programme (FP7/2007-2013) under REA grant agreement N. 609305. B.C.H gratefully acknowledges the Austrian Science Fund FWF-P26783. D.M. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 663830.
\section{Introduction}
\label{sec:intro}
The rapid emergence and deployment of Internet of Things (IoT) is causing millions of devices and sensors to come online as part of public and private networks~\cite{iot-scale}. This marks a convergence of cheap sensing hardware, pervasive wireless and wired communication networks, and the democratization of computing capacity through Clouds. It also reflects the growing need to leverage automation to enhance the efficiency of public systems and quality of life for society. While consumer devices such as smart-phones and fitness bands highlight the ubiquity of IoT in a digitally immersed lifestyle, of equal (arguably, greater) importance is the role of IoT in managing infrastructure such as city utilities and industrial manufacturing~\cite{simmhan:cise:2013,iiot}. Smart Cities and Industrial IoT deploy sensing and actuation capabilities as part of physical systems such as power and water grids, road networks, manufacturing equipment, etc. This enables the use of data-driven approaches to efficiently and reliably manage the operations of such vital \emph{Cyber Physical Systems (CPS)}~\cite{spe/JaraGB15,ibm,cps}.
As the number of IoT devices soon reach the billions, it is essential to have a distributed software architecture that facilitates the sustainable \emph{management} of these physical devices and communication networks, and \emph{access} to their data streams and controls for developing innovative IoT applications. Three synergistic concepts come together to enable this.
\emph{Service-Oriented Architectures (SOA)}~\cite{soa,Bouguettaya:2017:SCM} offer standard mechanisms and protocols for discovery, addressing, access control, invocation and composition of services that are available on the World Wide Web (WWW), by leveraging and extending web-based protocols such as HTTP and open representation models like XML~\cite{soa-ws}.
\emph{Cloud computing} is a manifestation of this paradigm where infrastructure, platform and software resources are available ``as a service'' (IaaS, PaaS and SaaS), often served from geographically distributed data centers world-wide. These offer economies of scale and enable access to elastic resources using a pay-as-you-go model~\cite{botta2016integration}. Such commodity clusters on the Cloud have also enabled the growth of \emph{Big Data platforms} that allow data-driven applications to be composed and scaled on tens or hundreds of Virtual Machines (VMs), and deal with both data volume and velocity, among other dimensions~\cite{vilajosana2013bootstrapping}.
Unlike traditional enterprise or scientific applications, however, the IoT domain is distinct in the way these technologies converge to support emerging applications.
(1) IoT integrates hardware, communication, software and analytics, and links the physical and the digital world. Hence \emph{infrastructure management}, including Cloud, Fog and Edge devices, is an intrinsic part of the software architecture~\cite{prateeksha:icfec:2017}. (2) These devices and services may not always be on the WWW, and instead connect within private networks or the public Internet (not WWW). Hence, \emph{network heterogeneity} is also a concern. (3) The communication connectivity of these devices, and indeed even their hardware availability, \emph{may not be reliable}, with transient network and hardware faults being common in wide-area deployments. (4) The \emph{scale} of the IoT infrastructure services (and micro-services) is likely to be orders of magnitude more than traditional business and eCommerce services, eventually reaching billions. (5) Lastly, the \emph{potential applications} that will be built on top of IoT are not yet well-defined and the scope of innovation is vast -- provided that the software architecture is open and extensible.
These necessitate a software architecture that encompasses a \emph{management fabric} and a \emph{data-driven middleware} that can leverage SOA, Clouds and Big Data platforms in a meaningful manner to support the needs of IoT applications.
One can envision convergence onto a \emph{core set of interoperable, open standards} -- an approach that contributed to the success of the WWW using HTTP, HTML, URL, etc. specifications from IETF, or they may fragment into vertical silos pushed by \emph{proprietary consortia}, such as seen in public Clouds from Amazon AWS and Microsoft Azure (who themselves are evolving for IoT). Both can prove to be successful, but we argue the need for the former. \modc{The few major public Cloud providers have large customer bases. Hence, custom APIs that they offer based on web standards will have a captive market.} On the other hand, IoT will need to leverage open web and Internet standards, both existing and emerging, to allow interoperability and reuse of existing tools and software stacks. This is particularly of concern to developing countries with mega-cities that are transitioning to Smart Cities. Such an open-approach will also catalyze the development of novel applications for consuming the data and application services exposed by the city utility.
\textbf{Contributions.} In this article, we propose a service-oriented and data-driven software architecture for Smart City utilities. This is motivated by representative applications for smart water management and validated for managing the infrastructure and applications within a Smart Campus IoT testbed at the Indian Institute of Science (IISc), Bangalore. We make the following specific contributions. (1) We characterize the requirements of an IoT fabric and application middleware to support innovative Smart City applications. (2) We develop a service-oriented software architecture, based on open protocols, standards and software, to meet these requirements while leveraging Cloud and Big Data platforms. This includes a novel bootstrapping mechanism to on-board new devices, and support for streaming synchronous and batch asynchronous analytics. (3) We integrate these technology blocks together within a real IoT field deployment at the IISc campus testbed, which spans sensing, communication, data integration and analytics, to validate our design.
\textbf{Organization.} The rest of the article is organized as follows. First, in \S~\ref{sec:bg}, we offer a background of the IISc Smart Campus project and highlight the unique requirements of Smart City deployments in emerging nations like India. In \S~\ref{sec:comm}--\ref{sec:analyze}, we discuss different aspects of our proposed scalable, data-centric, service-oriented software architecture for the Smart Campus. This includes sensing and communication (\S~\ref{sec:comm}); management fabric for the devices and the network (\S~\ref{sec:fabric}); data platforms for data acquisition (\S~\ref{sec:acquire}); and Cloud and edge-based analytics for decision-making (\S~\ref{sec:analyze}).
We contrast our work against related efforts globally in \S~\ref{sec:related}, and finally offer our conclusions and discuss future directions in \S~\ref{sec:conclusion}.
\section{Background}
\label{sec:bg}
We present an overview of the Smart Campus project at IISc that is developing an IoT management fabric and application platform for smart utility management. We use this, as well as our prior experience with the Los Angeles Smart Grid project~\cite{simmhan:cise:2013}, to motivate the unique needs of a Smart City software architecture.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\columnwidth]{figures/campus-water.pdf}
\caption{\modc{IISc Campus Map and Water Infrastructure}}
\label{fig:campus}
\end{figure}
\subsection{IISc Smart Campus project}
The Government of India is undertaking a mission to upgrade $100$ cities into \emph{Smart Cities}~\footnote{Smart Cities Mission, Government of India, \url{http://smartcities.gov.in/}} over the next several years, at a cost of about USD~$14~billion$. While the exact characteristics of a ``Smart City'' are loosely defined, smart energy, water and waste management, urban mobility, and digital services for citizens are some of the thematic areas. Several township-scale and community-scale research and deployment projects have been initiated to understand the unique aspects of smart city management in a developing country like India, and the role of open technology in realizing this vision.
The \emph{Smart Campus project}~\footnote{IISc Smart Campus Project, \url{http://smartx.cds.iisc.ac.in}} at the Indian Institute of Science, the top graduate school in India, is one such effort to design, develop and validate a campus-wide \emph{IoT fabric}. This ``living laboratory'' will offer a platform to try novel IoT technologies and Smart City services, with a captive base of about $10,000$ students, faculty, staff and family who largely reside on campus. The gated campus spread across $1.8~km^2$ has over $50$ departments and centers, and about $100$ buildings which host offices, lecture halls, research labs, supercomputing facility, hostels, staff housing, restaurants, health center, grocery stores, and so on (Fig.~\ref{fig:campus}). This is representative of large communities and towns in India, and offers a unique real-world ecosystem to validate IoT technologies for Smart Cities.
The project aims to design, develop and deploy a \emph{reference IoT architecture} as a horizontal platform that can support various vertical utilities such as smart power, water and transportation management, with smart water management serving as the initial domain for validation of the fabric. In effect, the effort for this project is in sifting through and selecting the best-practices and standards in the public domain across various layers of the IoT stack, integrating them to work seamlessly, and validating them for one canonical domain at the campus scale. By its very nature, this limits \emph{de novo} blue-sky architectures that work in a lab setup but are infeasible, impractical, costly or do not scale. At the same time, the architecture also offers an open platform for research into sensing, networking, Cloud and Big Data platforms, and analytics.
IISc owns and manages the water distribution network within the campus, and in Bangalore, like other cities in India, water supply from the city utility is not $24\times7$ but rather periodic. As a result, there are under-ground reservoirs (ground-level reservoirs, GLR) that buffer the water from the city's inlets, large overhead tanks (OHT) to which water is pumped from the GLRs, and rooftop tanks at the building level to which water is routed from these OHTs by gravity. About $4$ city inlets, $13$ GLRs, $8$ OHTs, and over $50$ rooftop tanks form the campus water distribution network, and support an average daily consumption of $4~Million~cm^3$ of water. Fig.~\ref{fig:campus} shows these inlets, GLRs and OHTs. The campus also consumes $10~MW$ of electricity, a tangible fraction of which goes to moving water between these storages.
The goal for \emph{smart water management} is to leverage the IoT stack to: (1) assure the quality of water that is distributed, (2) ensure the reliability of supply, (3) avoid wastage of water, (4) pro-actively and responsively maintain the water infrastructure, (5) reduce the costs of water and electricity used for pumping, and (6) engage consumers in water conservation. All of these will be achieved through \emph{domain-driven analytics} over the rich and real-time data that will be available on the water network from the IoT infrastructure.
\modc{The campus has $14$ water distribution zones that are grouped into $4$ logical regions for deploying and managing the network operations, as shown in Fig.~\ref{fig:campus}. Each region requires approximately $30$ wireless motes that transmit values sampled from sensors they are connected to. A gateway connects clusters of these nodes, and transmits the data to the Cloud through the campus network back-haul.}
A combination of water level and quality sensors, flowmeters, and smart power meters is used to sense the water network, with actuators for valve and pump controls planned. \modc{As we discuss later, the design of the \emph{ad hoc} wireless network is a key operational challenge.} At the same time, we need to ease the deployment, monitoring and management overheads of the IoT infrastructure.
These make for a unique validation environment for smart urban utility systems, with distinctive local challenges for observation, analytics and actuation, compared to developed nations. In contrast, a similar smart campus effort by the lead author at the University of Southern California (USC), Los Angeles, addressed challenges of demand-response optimization for Smart Grids~\cite{simmhan:cise:2013}. There, power was assured $24\times7$ by the Los Angeles Department of Water and Power (LA~DWP) but the goal was to change the campus energy use profile, on-demand, to reduce the load on the city power grid as more intermittent renewable sources are included within their energy portfolio. Also, the entire campus was instrumented using proprietary Smart Meters from Honeywell that worked off reliable wired LAN, could be centrally monitored and controlled using their custom software, had adequate bandwidth to push all data and analytics to the Cloud, and also carried a comparably high price tag for the solution \modc{-- such high-cost and proprietary solutions are impractical for emerging nations.}
\subsection{Desiderata}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/functional.pdf}
\caption{Functional IoT Architecture}
\label{fig:func}
\end{figure}
Fig.~\ref{fig:func} illustrates the functional layers of an IoT software architecture, spanning sensors and communication, to data acquisition and analytics. We distinguish two parts to the IoT architecture. One, the \emph{IoT fabric} that manages the hardware and communication infrastructure and offers core resource management and networking. The other is the \emph{application platform} that acquires the data, and enables analytics and decision making that is fed-back to the infrastructure.
There are several guiding principles and high-level requirements for the software architecture~\cite{mishra:iotn:2015}.
\begin{enumerate}
\item \textbf{\modc{Scalability.}} Scalability of the architecture is paramount. The design should not have any intrinsic limitations or assumptions that prevent it from scaling city-wide, even though the validation is at township scale. The system should \emph{weakly scale} with the number of sensors, devices and motes that are part of the IoT infrastructure, the rate at which data is acquired, the number of domains and analytics, and the number of users of the system.
This recognizes the need to validate the design at small and medium scales to de-risk it before expanding to large scale urban environments, without fundamentally changing the model.
\item \textbf{\modc{Generalizability.}} The design should be generalizable enough to include additional utility domains such as smart power or waste management. While the sensors and analytics themselves can be domain dependent (or optionally shared across domains), the enabling fabric and platform layers must remain common across domains -- either conceptually or using different implementations or configurations.
\item \textbf{\modc{Modular Manageability.}} The architecture should allow new sensors, devices, data sources and applications to be included over time with limited overheads. The interface boundaries should be clearly defined to allow minimal configuration overheads. Support must be present for both static and transient devices, edge and Cloud resources, and for physical and crowd-sourced data collection and actuation.
\item \textbf{Reliability and Accessibility.} The architecture should monitor and ensure the health of the sensing, communication and computation layers, with autonomous actions where possible. Depending on the application domain, the QoS for data collection, decision making and enactment may be mission-critical or a best effort. Resource usage should be sensitive to current computing, networking and energy capacities.
\item \textbf{Open Standards.} The architecture should use open protocols and standards, supported by standardization bodies and community efforts, as opposed to proprietary technologies or closed consortia. It will leverage existing open source Big Data platforms and tooling, and contribute to them to facilitate their growth. It should balance the benefits of emerging IoT standards, and the reliability of mature ones, even if repurposed from other fields. It should be extensible and incorporate standards as they evolve.
\item \textbf{\modc{Cost Effectiveness.}} The design process will consider the costs for purchasing, developing, integrating, deploying and managing the architecture. These include hardware, software and service costs, as well as human capital to configure and maintain the IoT fabric, in the context of emerging nations (where human cost may be lower but technology costs higher). It should leverage commodity and open source technologies where possible. This recognizes that technology is a means to a sustainable end, rather than the end in itself. Designing such low-cost, innovative and sustainable technologies is locally termed as \emph{Jugaad}~\cite{jugaad}.
\item \textbf{Security and Auditing.} The access to devices, data, analytics, and actuation services should be secured to prevent unauthorized access over the network, or even if the physical device is compromised. An audit trail must be established, and provenance must ensure data ownership and trust. Mechanisms for open data sharing and crowd-sourcing should be exposed, with a possible micro-payment model.
\end{enumerate}
\section{Sensing and Communication}
\label{sec:comm}
Sensing, actuation and communication are integral to the physical IoT fabric. The service-oriented software platform must be cognizant of their characteristics to allow for fabric management. Here, we discuss the capabilities and constraints of the edge and networking devices in the Smart Campus project for the water domain, which can be generalized to other utilities.
\subsection{Sensing and Actuation}
There are several types of physical sensors that are deployed for collecting real-time observations on the state of the water distribution network within the campus, and to perform demand-supply water balance studies. \emph{Flow meters} use electromagnetic induction to measure the volume of water flowing through the pipes in the distribution network, and \emph{pressure pads} observe the water pressure at various points in the network. These help us understand the flow of water through the major distribution lines across campus, and ensure sufficient pressure is available to deliver water. They are typically placed between the city inlet, the GLR and the OHT.
\emph{Smart power meters} at pumping stations let us know the energy usage for actively moving water between the various tanks, and can be correlated with the flow meters and pressure pads. In addition, \emph{water level meters} measure the depth of water in the OHT, GLR and the rooftop tanks continuously using ultrasonic signals to estimate the range from the top of these tanks to the water surface~\cite{verma2015towards}. By knowing the dimensions of the water tanks and when the pumps are operating, we can estimate the supply and the demand of water in individual buildings. These meters also record ambient temperature.
\begin{figure}[t]
\centering
\begin{minipage}[c]{0.6\columnwidth}
\centering
\subfloat[Water Quality Card]{
\includegraphics[width=1.0\columnwidth]{figures/quality_card.pdf}
\label{fig:quality}
}
\end{minipage}
~~~~
\begin{minipage}[c]{0.3\columnwidth}
\centering
\subfloat[SmartWater Reporting App]{
~~~~\includegraphics[width=0.9\columnwidth]{figures/app.png}~~~~
\label{fig:app}
}
\end{minipage}
\caption{Crowd-sourced data collection of water quality}
\label{fig:crowd}
\end{figure}
The water level sensors can also serve as \emph{actuators} that control the pumps and the valves in the future. Physical actuators will automate the enactment of pumping and distribution decisions, in the absence of which, an SMS sent to a cell phone present with the pump operator can serve as a manual feedback control. Another form of actuation is to control the fabric itself. For example, the duty cycles of wireless motes and sampling rates for the various sensors and observation types can be controlled on the fly based on decision made by the management and analytics layers using information on the network, energy and computation resources, and the current application requirements.
Another important class of sensing and actuation within IoT is through \emph{crowd-sourcing} to supplement physical devices~\cite{cardone2013fostering}. Typically, crowd-sourcing can be used when the costs for deploying physical devices is high, or to engage the community through citizen science. Physical water quality sensors that can measure chemical properties are costly, and the number of potable water dispensers on campus is large. So we leverage the IISc residents in collecting quality measurements from dispensers that are distributed across buildings. Reagent strips available for $US\$0.25$ can be dipped in the water sample, placed against a water quality color card (Fig.~\ref{fig:quality}), and our Android smart phone app (Fig.~\ref{fig:app}) used to photograph and capture the color changes to the strip after normalizing for ambient light using the quality card~\cite{mit:little}. This reports water quality parameters such as nitrates, chlorine, hardness, pH, etc. The app can also be used to report maintenance issues such as water leakage and drips, water overflow or underflow in buildings, etc. Such participatory sensing engages the campus users in their own health, and instills a community value.
\subsection{Networking and Communication}
\label{sec:nw}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/nw-arch.pdf}
\caption{Wireless Sensor Network Deployment at IISc}
\label{fig:nw-arch:wsn}
\end{figure}
\subsubsection{Network Protocols and Infrastructure} Communication networks are required to evacuate data from the sensors to the data platform or to trigger the actuators based on control decisions. Gateway devices and backend computing infrastructure hosting the platform, such as Cloud VMs, are on public or private infrastructure networks such as wired or wireless LAN. Accessing the sensors and edge devices becomes less challenging if such infrastructure networks are available within their vicinity, or if 2G/3G/4G connectivity can be made use of. However, field deployments may not be within range of LAN or WLAN, cellular connectivity may be costly, or devices that use these communication protocols may consume higher energy, which will be a constraint if they are powered by battery or solar renewable.
As an alternative, \emph{ad hoc} and \emph{peer to peer (P2P)} network protocols
are popular for IoT deployments. There are multiple standards that can be leveraged here. \emph{Bluetooth Low Energy (BLE)} has gained popularity for Personal Area Networks (PAN) due to its ubiquity in smart phones. It is designed for P2P communication between proximate devices, such as smart phones and IoT beacons, within tens of feet of each other, and supports bandwidths of tens of kbytes/sec.
Alternatively, \emph{IEEE 802.15.4} specifies the physical (PHY) and media access control (MAC) protocol layers for PANs~\cite{802.15.4}. It operates in the unlicensed Industrial, Scientific and Medical (ISM) radio bands, typically $2.4~GHz$, and forms the basis for \emph{ZigBee}. It has been extended specifically for IoT usage as well. \emph{IEEE 802.15.4g} was proposed for P2P communications and for smart utility networks like gas, water and power metering. The Thread Group, including consortium members Samsung, Google Nest, Qualcomm and ARM, also use this standard for an IPv6-addressable \emph{Thread protocol} for smart home automation.
More broadly, IETF's IPv6 over Low power Wireless Personal Area Networks \emph{(6LoWPAN)} extends IPv6 support for IEEE 802.15.4 on low power devices~\cite{6lowpan}. A single IPv6 packet has a Maximum Transmission Unit (MTU) of $1280~bytes$ which fits in traditional Ethernet links having an MTU of $1500~bytes$. But IEEE 802.15.4 only has an MTU of $127~bytes$, and 6LoWPAN acts as an adaptation layer to allow IPv6 packets to be fragmented and reassembled at this data link layer. It also enables IPv6 link-local auto-addressing and provides datagram compression.
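As a rough back-of-the-envelope illustration of why this adaptation layer is needed (ignoring the 6LoWPAN fragmentation headers and header compression, which change the exact count):
\begin{verbatim}
import math
# A full-size IPv6 packet (MTU 1280 B) carried over IEEE 802.15.4 frames
# (MTU 127 B) needs at least ceil(1280/127) fragments; real 6LoWPAN adds
# FRAG1/FRAGN headers and applies header compression, so the actual count
# differs. This only illustrates the order of magnitude.
print(math.ceil(1280 / 127))   # -> 11
\end{verbatim}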
The range and bandwidth of wireless networks depend on the
transmission power,
size of antenna,
and the terrain. Typically, two of the three dimensions -- high bandwidth, low power, and long range -- are achievable.
PANs choose a lower range in favor of higher speed and lower power. E.g., ZigBee with $2.4~GHz$ offers a range of $10-100~meters$, line of sight, and a bandwidth of $\approx30~kbytes/sec$. Using the \emph{sub-GHz} spectrum offers a longer range for Wide Area Networks (WAN), due to low attenuation of the low-frequency waves, but also a lower speed of $\approx5~kbytes/sec$. While IEEE 802.15.4g supports this frequency, \emph{LoRaWAN} technology has been developed specifically for such long ranges of a kilometer using, say, $868~MHz$ sub-GHz radio in the IoT context~\cite{lora}. LoRa uses a star-of-stars topology, and is well suited for applications with low data-rate of $0.25\sim5~kbytes/sec$,
but the current implementation lacks support for the IP stack and uses a proprietary chipset.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\columnwidth]{figures/network-stack.pdf}
\caption{Network Protocol Stack of the IoT Fabric}
\label{fig:nw-arch:stack}
\end{figure}
For the Smart Campus network fabric, the buildings have W/LAN access, and devices within WiFi range can use the backbone network. However, many of the OHT, GLR and pump houses are not in WiFi range. Hence, we deploy an \emph{ad hoc} 6LoWPAN Wireless Sensor Network (WSN) for such field devices (Fig.~\ref{fig:nw-arch:wsn} and~\ref{fig:nw-arch:stack}). We use Zolertia's \emph{RE-Mote}~\cite{remote} wireless hardware platform which has a dual-radio of a $2.4~GHz$ IEEE 802.15.4 and a sub-GHz $868/915~MHz$ RF transceiver. It runs an
ARM Cortex-M3 CPU
at $32~MHz$ clock speed, with $512~KB$ of programmable flash and $32~KB$ of RAM.
These motes connect to the sensors to acquire data and pass control signals, act as WSN relays, or are the border router connected to the gateway device.
A \emph{Raspberry Pi 3} serves as the gateway that connects to the border router through a USB interface. Besides connecting the WSN to the campus backbone network, the Pi also acts as a proxy between the IPv6 WSN and the IPv4 campus network using the \texttt{tunslip} utility.
Thus, all motes are IP addressable, with end-to-end IP-based connectivity across the campus.
The use of such diverse network protocols coordinated through a gateway is generalized as an \emph{Area Sensor Network (ASN)} that serve as a bridging layer for composable regions of sensor networks that can scale to a city in a federated manner~\cite{molina:csmc:2014}.
A novel use of crowd-sourcing uses people as \emph{data sherpas} when sensors require many WSN hops to reach a building W/LAN but where human footfall is high~\cite{mishra:iotn:2015}. Here, data from the sensor is broadcast using a BLE beacon, which is picked up by the Smart Campus app on users' phones and pushed to our data platform through 3G/WiFi.
Lastly, small-scale experiments using LoRaWAN are also being conducted. While LoRaWAN may be adequate for periodic water level or flow meter data, its bandwidth will limit the reuse of the network fabric for other data-heavy IoT domains.
\subsubsection{Network Deployment Design}
The WSN needs to be designed and deployed across regions of the campus to ensure robust quality of service (QoS), and to avoid data loss due to packet collisions and the scattering of waves by dense buildings.
\emph{SmartConnect}~\cite{smartconnect} is an in-house tool for designing IEEE 802.15.4 networks. When given the sensor locations, their expected data traffic, the required QoS, and possible locations for relays, it identifies the lowest-cost relay placement with a given path redundancy in the multi-hop WSN.
SmartConnect uses two field measurements for pairwise placement of the motes at each candidate relay location: (1) the minimum \emph{Received Signal Strength Indicator (RSSI)} for which the \emph{Packet Error Rate (PER)} is consistently $\le 2\%$, and (2) the maximum radio reception distance, $R_{max}$, for which the packet delivery rate is $\ge 95\%$.
\modc{Fig.~\ref{fig:nw:stats} captures the results of several experiments with the Sub-GHz WSN deployed at different regions of campus to plan the deployment. Fig.~\ref{fig:nw:stats:per} shows the result of conducting wired back-to-back testing of motes to determine the optimal operation characteristics under ideal conditions. This offers a best-case baseline on the PER as we increase the signal strength, i.e., when inter-mote distance is not a concern. After calibrating the devices with the minimum RSSI, controlled experiments were conducted to obtain the practical operating distance range between motes for the required QoS as shown in Fig.~\ref{fig:nw:stats:range}. Here, $P_{out}$ indicates the upper bound of PER while $P_{bad}$ is the probability of a link having a PER worse than $P_{out}$, as the link distance is varied. Based on this, a minimum RSSI of $-97~dBm$ and a maximum range of $R_{max}=400~m$ were chosen for the field deployments. These were further validated on the field to capture the effect of topological characteristics on the network range, such as open spaces, buildings, tree cover, etc.~\cite{rathod:2015}. Fig.~\ref{fig:nw:heatmap} shows the heatmap of the signal strength and ranges. Here, R3 is in a wooded area and R4 is near dormitory buildings, and both show higher signal attenuation. R5 is measured near the recreational center with open spaces, and has a higher signal strength.}
Based on these experiments, for a QoS delay of $200~msec$, potential locations were suggested by SmartConnect for relay placement. These targeted experiments and analytical planning avoid having to actually deploy different permutations of the relays at every possible field location to determine the optimal placement for a reliable WSN.
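For illustration, the two link-qualification rules can be expressed as a small post-processing step over field measurement logs; the log format and sample values below are hypothetical and merely mirror the thresholds reported above.
\begin{verbatim}
# Link qualification from (hypothetical) field logs, mirroring the two
# thresholds used above: PER <= 2% and packet delivery ratio >= 95%.
def min_usable_rssi(logs, per_limit=0.02):
    ok = [r["rssi_dbm"] for r in logs
          if 1 - r["received"] / r["sent"] <= per_limit]
    return min(ok) if ok else None

def max_usable_range(logs, pdr_limit=0.95):
    ok = [r["distance_m"] for r in logs
          if r["received"] / r["sent"] >= pdr_limit]
    return max(ok) if ok else None

logs = [{"rssi_dbm": -92,  "distance_m": 250, "sent": 1000, "received": 990},
        {"rssi_dbm": -97,  "distance_m": 400, "sent": 1000, "received": 985},
        {"rssi_dbm": -101, "distance_m": 520, "sent": 1000, "received": 910}]
print(min_usable_rssi(logs), max_usable_range(logs))   # -97 400
\end{verbatim}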
\begin{figure}[t]
\centering
\subfloat[PER vs. RSSI]{
\includegraphics[width=0.45\textwidth]{figures/per-rssi.pdf}
\label{fig:nw:stats:per}
}
\subfloat[Fraction of links $P_{bad}$ with $\text{\emph{outage probability}}>P_{out}$ at different distance ranges.]{
\includegraphics[width=0.50\textwidth]{figures/pbad-range.pdf}
\label{fig:nw:stats:range}
}\\
\subfloat[\modc{Heatmap of RSSI Range at different locations on campus}]{
\includegraphics[width=0.55\columnwidth]{figures/campus-nw.pdf}
\label{fig:nw:heatmap}
}
\caption{Network characteristics for RE-Mote on the field.}
\label{fig:nw:stats}
\end{figure}
Once the motes are deployed, we implement the \emph{Routing Protocol for Low-power and Lossy Networks (RPL)}~\cite{rpl} for the formation and maintenance of the WSN (Fig.~\ref{fig:nw-arch:stack}). RPL maintains a Destination Oriented Directed Acyclic Graph (DODAG) among the motes, with every node having one or more multi-hop path(s) to the root, which is the border router. This supports multipoint-to-point (MP2P), point-to-multipoint (P2MP), and point-to-point (P2P) communication patterns. Packets traversing through such a multi-hop Low-power and Lossy Network (LLN) may get lost in transit due to various link outages at intermediate relay nodes. To ensure a high Packet Delivery Ratio (PDR) in the LLNs running RPL, we add a lightweight functionality, \emph{LinkPeek}~\cite{linkpeek}, to the network layer's packet forwarding task. Here, the forwarding node iteratively retransmits the packet to its next best parent in the same DODAG whenever a preset MAC layer retransmission count for the current best parent is exceeded.
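The fallback behaviour of LinkPeek can be summarised in a few lines. The sketch below is a hypothetical Python rendering of the logic described above (the actual implementation lives in the ContikiOS forwarding path); the retry limit and the \texttt{send} callback are placeholders.

\begin{verbatim}
# Hypothetical sketch of the LinkPeek fallback, not the ContikiOS code:
# when MAC-layer retries to the preferred parent are exhausted, the
# packet is retried via the next-best RPL parent in the same DODAG.
MAX_MAC_RETRIES = 3   # assumed per-parent retransmission limit

def forward_with_linkpeek(packet, parents, send):
    """parents: RPL parent candidates, ordered by routing preference.
    send(packet, parent) -> True if a link-layer ACK was received."""
    for parent in parents:                   # try next-best parents in turn
        for _ in range(MAX_MAC_RETRIES):     # per-parent MAC retries
            if send(packet, parent):
                return parent                # delivered via this parent
    return None                              # every parent failed; drop
\end{verbatim}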
\section{IoT Fabric Management}
\label{sec:fabric}
A high level protocol stack for the entire software architecture is shown in Fig.~\ref{fig:proto}. In this, \emph{fabric management} deals with the health and life-cycle of devices present in the IoT deployment. The primary devices that require this management are the sensors, actuators, motes and gateway devices that are physically deployed in the field. The fabric also ensures that endpoints are available to manage the devices and to acquire data or send signals. Here, we describe the service-oriented fabric management architecture for the IISc Smart Campus.
\subsection{Service Protocols for Lifecycle and Discovery}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/protocol.pdf}
\caption{Protocols and standards used in the IoT architecture}
\label{fig:proto}
\end{figure}
IETF's \emph{Constrained RESTful Environments} working group (CoRE WG)~\cite{core} is developing standards for frameworks that manage \emph{resource-oriented} applications in constrained environments such as IoT. It is intended to align with existing web standards like REST and HTTP, as well as emerging IoT network standards for IPv6. This makes it well suited for designing a standards-compliant service-oriented IoT architecture, and we leverage several specifications from CoRE.
Fig.~\ref{fig:fabric} shows an interaction diagram of various service components that enable fabric management (orange boxes and arrows). We adopt a \emph{stateful resource} model, similar to REST, for managing devices as services in the IoT deployment. These go beyond just the domain sensors and actuators, and also include network devices and gateways. Each device exposes one or more resources through a \emph{service endpoint}, each of which is either an \emph{observable} entity that can be sensed, or a \emph{controllable} entity whose state can be changed to update the setup. E.g., a resource can represent domain observations, such as the water level or pump state, fabric telemetry, such as the battery level of a mote, or a device setup state, such as the sampling interval.
Two key services for the lifecycle management and discovery are the \emph{Light\-weight Directory Access Protocol (LDAP)}~\cite{ldap} and the \emph{CoRE Resource Directory (RD)}~\cite{rd}. Both of these are standards-compliant directory services, but play distinct roles in our design. LDAP is used to store \emph{static metadata} about various devices and resources that are, or can be, present in the IoT fabric. We use it as a \emph{bootstrapping} mechanism for devices to update their initial state during deployment. This reduces the overhead of deployment and configuration of the devices on the field, which may be done by a non-technical person, and instead have the device pull its configuration from the LDAP once it is online. RD, on the other hand, is responsible for maintaining the state and endpoint of resources that are currently active, and is used for \emph{dynamic discovery} of resources and interacting with them. RD supports frequent updates, and importantly, a \emph{lifetime} capability that automatically removes a service entry if it does not renew its lease within the interval specified when registering. This allows an eventually consistent set of active devices to be maintained in the RD, even if the devices do not cleanly de-register. Both these services are hosted on Cloud VMs to allow discovery by external clients, and sharing across private networks.
We adopt \emph{CoAP (Constrained Application Protocol)}~\cite{coap}, part of the CoRE specifications, as our service invocation protocol. CoAP is designed as the equivalent of REST over HTTP for constrained devices, and is well-suited for our 6LoWPAN network. CoAP has a compact specification of service messages, uses UDP by default, has direct mappings to/from the stateless HTTP protocol, and supports Datagram TLS (DTLS) security. It has both request/response and observe/notify models of interaction, and offers differential reliability using confirmable/non-confirmable message types. We use CoAP as the default service protocol for all our devices on campus, including motes on the WSN and gateways like the Pi on the LAN.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/smartx.pdf}
\caption{Interactions between architecture components for fabric management and data acquisition}
\label{fig:fabric}
\end{figure}
CoAP is an asynchronous protocol where requests and responses are sent as independent messages correlated by a token. It requires both the service and the client to be network addressable and accessible to each other -- this may not be possible for devices that are behind a firewall on a private network or data center. At the same time, while CoAP's use of UDP makes it light-weight within the private network, it can be lossy when operating over the public Internet. To address these two limitations, we switch from CoAP to traditional REST/HTTP over TCP/IP when interacting with services and clients on the public Internet from the campus LAN. Two proxy services present at the campus DMZ, \emph{CoAP to HTTP (C2H)} and \emph{HTTP to CoAP (H2C)}, enable this translation. Similarly, within the Cloud data center, we use an H2C proxy to switch back to CoAP in the private network to access the RD that is based on CoAP.
While devices in the WSN are IP addressable and their CoAP service endpoints accessible by clients, they operate as an IPv6 network on 6LoWPAN. Hence, yet another proxy is present at the gateway device to translate between IPv4 used in the campus and the public network to IPv6 used within the WSN.
One of the advantages of leveraging emerging IoT standards from IETF and IEEE is that these protocol translations are well-defined, transparent and seamless.
These various services are shown in Fig.~\ref{fig:fabric}, and implemented using open source software, either used as is or extended by us to reflect recent evolutions of the specifications. We use the \emph{Eclipse Californium (Cf)} CoAP framework~\cite{californium} for the CoRE services such as Resource Directory, CoAP clients and services, and the C2H and H2C proxies on non-constrained devices that can run Java, such as the Pi and Cloud VMs. We also use the \emph{Erbium (Er)} CoAP service and client implementation for the ContikiOS running on the embedded mote platforms~\cite{ebrium}. The \emph{Eclipse Copper (Cu)} plugin for Firefox provides an interactive client to invoke CoAP services and browse the RD. \emph{Apache Directory Service} serves as our LDAP implementation.
\subsection{Device Bootstrapping and Discovery}
Each IoT device that comes online needs to determine its endpoint, the resources it hosts, and their metadata. Some of these are static to the device, while others depend on the device's spatial placement. This device configuration during on-boarding has to be \emph{autonomic} to allow manual deployment of the last-mile field devices by non-technical personnel. We propose such an automated process for the bootstrapping using the LDAP for device initialization, and the RD for device discovery.
Fig.~\ref{fig:bootstrap} shows the sequence diagram of messages for a device that comes online and connects to its gateway as part of the WSN -- a subset of these messages hold for devices not part of a WSN. Fig.~\ref{fig:fabric} shows the corresponding high level interactions. All IP-addressable devices in the deployment are considered as \emph{endpoints} that contain resources which are logically \emph{grouped}. These have to be auto-discovered based on minimal \emph{a priori} information. Each device is assigned, and will be aware of, just a globally unique UUID, and a ``well-known'' LDAP URL. Separately, an administrator registers the UUID and its metadata for all devices that will be deployed on the field in the LDAP directory information tree (DIT). The DIT is organized by domain, location, sensor type, etc. to allow group updates (\emph{(1)} in Fig.~\ref{fig:fabric}).
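As an illustration of the directory lookup that backs this bootstrapping step, the sketch below uses the Python \texttt{ldap3} library to fetch a device entry by UUID, much as the gateway's lookup service later does on behalf of constrained motes (described below). The server address, bind credentials, base DN and attribute names are assumptions for illustration, not the deployed schema.

\begin{verbatim}
# Illustrative LDAP lookup of a device entry by UUID using ldap3.
# Hostname, bind DN, password, base DN and attribute names are assumed.
from ldap3 import Server, Connection, ALL

def lookup_device(uuid):
    server = Server("ldap.smartx.example.org", get_info=ALL)
    conn = Connection(server, user="cn=reader,dc=iisc,dc=in",
                      password="secret", auto_bind=True)
    conn.search(search_base="dc=iisc,dc=in",
                search_filter="(deviceUUID=%s)" % uuid,
                attributes=["deviceType", "grp", "location",
                            "rdUrl", "mqttBrokerUrl"])
    return conn.entries[0] if conn.entries else None

print(lookup_device("123e4567-e89b-12d3-a456-426614174000"))
\end{verbatim}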
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/bootstrap_seq.pdf}
\caption{Sequence to bootstrap a device from LDAP.}
\label{fig:bootstrap}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/rd_seq.pdf}
\caption{Sequence to register resources with RD \& renew lifetime.}
\label{fig:rd}
\end{figure}
When a device connects to the campus IoT network, it does an HTTP query by UUID to the LDAP service for its metadata. Constrained motes, instead, perform a CoAP \texttt{GET} on an LDAP lookup service running on the gateway Pi, whose IP address matches the border gateway of the WSN. The Pi lookup service translates this to an LDAP HTTP query (Fig.~\ref{fig:bootstrap}; \emph{(2)} in Fig.~\ref{fig:fabric}). The response, optionally mapped from HTTP/LDIF to CoAP/JSON at the Pi, returns the entity type, its group(s), Distinguished Name (DN), spatial location, etc., and global URLs for the proxy services, RD, MQTT broker, etc. (Fig.~\ref{fig:bootstrap}). We use a well-defined rule to generate \emph{unique URI paths} for resources at this endpoint based on their metadata, which combines the spatial location, device and sensor type, and observation type, as shown below.
After a device is bootstrapped, it needs to register the resources available at its endpoint (ep) with the RD so that users or Machine-to-Machine (M2M) clients can discover their existence. The RD uses the \emph{CoRE link format}~\cite{link}, based on HTTP Web Linking standard, for this resource metadata. Each CoRE link contains the URI of the resource -- the optional endpoint hostname/IP:port, and the URI path -- along with the \emph{resource type (rt)}, the \emph{interface type (if)}, and the \emph{maximum size (sz)} of a \texttt{GET} response on this resource. Further, the RD also allows specifying the \emph{content type (ct)} such as JSON, the \emph{groups (gp)} the resource belongs to, and if the resource is \emph{observable (obs)}, i.e., can be subscribed to for notifications~\cite{observe}. Lastly, we use the extensibility of CoRE links to include an \emph{MQTT topic (mt)} parameter for observable resources which will publish their state changes to this topic at a publish-subscribe broker (\S~\ref{sec:mqtt}).
Below is a sample CoRE link for an endpoint path \texttt{`grid/43p/mote1/sensor2/waterlevel'} with an observable \texttt{`waterlevel'} resource from \texttt{`sensor2'} that is attached to \texttt{`mote1'} placed at UTM grid location \texttt{`43p'} and returning JSON content type (\texttt{`ct=50'}).
{\textcolor{BLUE}{
\centering\texttt{<grid/43p/mote1/sensor2/waterlevel>;ct=50;rt="waterlevel";\\
if="sensor";obs;gp="ap/water~sp/43p~fn/waterlevel";\\
mt="water/43p/waterlevel"\\
}
}
}
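For concreteness, a hypothetical reconstruction of this naming rule is sketched below in Python; it simply concatenates the metadata fields so that the generated URI path and MQTT topic match the sample link above. The exact production rule may differ in detail.

\begin{verbatim}
# Hypothetical reconstruction of the URI-path and MQTT-topic naming rule,
# matching the sample CoRE link above; the real rule may differ in detail.
def resource_uri_path(grid, mote, sensor, observation):
    return "grid/%s/%s/%s/%s" % (grid, mote, sensor, observation)

def mqtt_topic(domain, grid, observation):
    return "%s/%s/%s" % (domain, grid, observation)

print(resource_uri_path("43p", "mote1", "sensor2", "waterlevel"))
# -> grid/43p/mote1/sensor2/waterlevel
print(mqtt_topic("water", "43p", "waterlevel"))
# -> water/43p/waterlevel
\end{verbatim}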
Fig.~\ref{fig:rd} shows the sequence of operations for the device to register its resource(s) with the RD. Note the use of the C2H and H2C proxies to translate from CoAP within campus to HTTP on the public Internet, and back to CoAP within the VM hosting the RD. Registrations with the RD should also include a lifetime (\emph{lt}) for the entry in seconds, with the default being $24~hours$. If the resource does not renew its registration within this lifetime, the RD removes the entry and the resources are presumed to be unavailable. Clients can browse the RD (Fig.~\ref{fig:rdviz}), or query it using its CoAP or HTTP REST proxy API to discover resources of interest, and subsequently interact with the resource endpoint using CoAP.
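A minimal sketch of such a registration, assuming the CoRE RD registration interface (a \texttt{POST} of the link-format payload with endpoint name \texttt{ep} and lifetime \texttt{lt} query parameters) and the Python \texttt{aiocoap} library, is shown below. The RD base URI, endpoint name and lifetime are illustrative, and the deployed RD draft may expose a slightly different interface.

\begin{verbatim}
# Sketch of an RD registration with a lifetime, assuming the CoRE RD
# interface (POST of a link-format payload with ep and lt query parameters)
# and the aiocoap library; URIs and names are illustrative.
import asyncio
import aiocoap

LINKS = ('</grid/43p/mote1/sensor2/waterlevel>;ct=50;rt="waterlevel";'
         'if="sensor";obs')

async def register(rd="coap://rd.smartx.example.org", ep="mote1", lt=86400):
    ctx = await aiocoap.Context.create_client_context()
    req = aiocoap.Message(code=aiocoap.POST,
                          uri="%s/rd?ep=%s&lt=%d" % (rd, ep, lt),
                          payload=LINKS.encode("utf-8"))
    req.opt.content_format = 40          # application/link-format
    resp = await ctx.request(req).response
    return resp.code, resp.opt.location_path   # re-POST here to renew lease

asyncio.run(register())
\end{verbatim}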
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figures/res-dir.png}
\caption{Portal displaying RD entries for the ECE building. 1 Pi and 3 mote endpoints each have multiple resources.}
\label{fig:rdviz}
\end{figure}
\subsection{Monitoring and Control}
We make use of service endpoints to monitor the health and manage the configuration of devices such as motes and gateways as well. All motes expose CoAP resources to monitor their telemetry such as battery voltage, link cost with parent, and frames dropped, while gateway Pis report their CPU, memory and network usage statistics. These go beyond the liveness that the RD reports, and are available in real time. They help monitor the health of the network and device fabric, and take preventive or corrective actions, say, if a mote exhibits sustained packet drops or a Pi's memory usage becomes high. While some issues may require personnel on the field to fix things, others may be resolved remotely using control endpoints, such as restarting a mote or changing the sampling interval to reduce battery usage cost or packet drops. The analytics platforms, introduced later, that support the domain applications are also leveraged for such decision-making to optimize the IoT infrastructure.
\section{Data Acquisition and Storage}
\label{sec:acquire}
One of the characteristics of IoT applications is the need to acquire data about the system in real time and make decisions. Given an operational IoT deployment and the ability to discover resources for observable and controllable devices, the next step is to acquire data about the utility infrastructure, and pre-process and persist them for downstream analytics. Data acquisition from hundreds to thousands of sensors has to happen at scale and with low latency, and from constrained devices. Once acquired, these streams of observations have to be transformed and validated at fast rates to ensure data quality. We make a design choice to integrate all observation streams in the Cloud to allow us to utilize scalable VMs and platform services, and collocate real-time data with historic data in the data-center on which analytics are performed. Next, we discuss our approach of using publish-subscribe mechanisms and fast data platforms for these needs.
\subsection{Asynchronous Access to Publish-Subscribe Observations}
The transient nature of sensor resources and the diverse applications and clients that may be interested in their observations mean that using a synchronous request-response model to poll the resource state will not scale. Further, the rate at which the observations change may be infrequent for many sensors (e.g., the water level changes slowly and the battery level drains gradually) and repetitive polling is inefficient. Rather, an asynchronous service invocation based on a subscription pattern is better suited. Here, the client registers interest in a resource, and is notified when its state changes.
We explore two mechanisms for such asynchronous observations of sensors, leveraging the native capabilities of CoAP and the scalable features of MQTT message brokers that are designed for IoT.
\subsubsection{CoAP's Observe Pattern}
CoAP services have an intrinsic ability to transmit data by subscription to clients interested in changes to the resource state~\cite{observe}. CoAP resources that indicate in their CoRE link format as being \emph{observable} allow this capability, and it complements the request-response model. Clients (\emph{observers}) can register interest in a resource (\emph{subject}), and the service then notifies the client of their updated state when it changes. The resource can also be parameterized to offer flexibility in terms of what constitutes a ``change'', say, by passing a query that observes changes to a moving average of the resource's state, or when a certain time goes by since the last update. The service maintains a list of observers and notifies them of their state change, but is designed to be eventually consistent rather than perfectly up to date. This ensures that the CoAP service is not frequently polled, making it amenable to the compute and network constrained environments like 6LoWPAN.
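The essence of the observe pattern, where a subject keeps a list of observers and pushes state changes to them, is illustrated by the minimal, library-free Python sketch below; it is only a conceptual analogue of what the Erbium CoAP service does on the mote.

\begin{verbatim}
# Conceptual sketch of the CoAP observe pattern (not Erbium/Californium
# code): the subject keeps its observers and notifies them on change.
class ObservableResource:
    def __init__(self, name):
        self.name, self.state, self.observers = name, None, []

    def register(self, notify):
        self.observers.append(notify)        # observer registers interest

    def update(self, new_state):
        if new_state != self.state:          # notify only on a state change
            self.state = new_state
            for notify in self.observers:
                notify(self.name, new_state)

level = ObservableResource("waterlevel")
level.register(lambda n, v: print("notify", n, "=", v))
level.update(1.25)                           # -> notify waterlevel = 1.25
\end{verbatim}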
All our motes expose this capability for their fabric resources and the sensor resources \modc{that} they are connected to. This model, however, does have its limitations. It requires the service to maintain the list of observers, which can grow large and unmanageable for constrained devices. Further, this is a point-to-point model and each observer has to be individually invoked to send the notification, duplicating the overhead. Also, current open-source software support for \modc{CoAP} is limited to only resource state changes without any parameterization, though this is expected to change.
\subsubsection{MQTT Broker}
\label{sec:mqtt}
Publish-subscribe (or pub-sub)~\cite{pubsub} is a messaging pattern that is asynchronous, and \modc{uses a hub-and-spoke rather than point-to-point communication.} Here, the source of the message (\emph{publisher}) is not directly accessed by the message consumer(s) (\emph{subscriber(s)}). Instead, the messages are sent by the publisher(s) to an intermediate \emph{broker} service, which forwards a copy of the message to interested subscribers. The message routing may be based on topics (like a shared mailbox), or the type or content of the message.
The pub-sub pattern is highly scalable for IoT since the publishers and subscribers are agnostic to each other. This ensures loose coupling in the distributed environment while reducing their management overheads. Also, we drop from $m \times n$ point-to-point message paths between $m$ publishers and $n$ subscribers to $m + n$ paths through the broker, avoiding duplication. The publishers and subscribers can also be on different private networks, and use the public broker for message exchange.
We use the \emph{Message Queue Telemetry Transport (MQTT)} ISO standard which was developed as a light-weight pub-sub protocol for IoT~\cite{mqtt}. Publishers can publish messages to a \emph{topic} in the broker, and subscribers can subscribe to one or more topics, including wildcards, to receive the messages. The topics have a hierarchical structure, allowing us to embed semantics into the topic names. Clients initiate the connection to the broker and keep it alive, allowing them to stay behind firewalls as long as the broker is accessible. The control payload is light-weight. The last published message to a topic may optionally be retained for future subscribers to access. It also supports a ``last will and testament'' message that is published to the \emph{will topic} if the client connection is killed, letting subscribers know of departing publishers. Three different delivery QoS (and costs) are supported -- at most once (best effort), at least once, and exactly once.
We use the \emph{Apache Apollo} MQTT broker implementation hosted in a VM in the Cloud as part of our IoT platform stack. It supports client authentication and TLS security. Topics are created for observable resources in the Smart Campus based on a production rule over the resource metadata, including the domain, spatial location, device and observation types, and the UUID for the device. This allows wild-card subscriptions, say, to all \texttt{waterlevel} messages or all messages from the \texttt{ECE} building. The MQTT topic is present in the CoRE link registered with the RD, allowing the discovery and subscription to these topics.
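As an example of how a client might consume these topics, the sketch below uses the helper interface of the Python \texttt{paho-mqtt} client to subscribe with wildcards; the broker hostname and topic strings are illustrative and follow the naming rule above rather than the exact production values.

\begin{verbatim}
# Illustrative wildcard subscription with the paho-mqtt helper module;
# broker hostname and topics are assumptions following the naming rule.
import paho.mqtt.subscribe as subscribe

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())   # SenML/JSON payload

# Blocks and invokes on_message for every matching publication.
subscribe.callback(on_message,
                   ["water/+/waterlevel",    # waterlevel at every location
                    "water/43p/#"],          # every water topic at grid 43p
                   hostname="mqtt.smartx.example.org")
\end{verbatim}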
Non-constrained devices like the Pi gateways and devices on the public network, such as the Android App, publish their resource state changes and observations to the MQTT broker. For reasons we explain next, constrained devices do not \emph{directly} publish to the broker. We adopt IETF's \emph{Sensor Markup Language (SenML)} for publishing observations to topics~\cite{senml}. This offers a self-descriptive format for time-series observations, single and multiple data points, delta values, simple aggregations like sum, and built-in SI units. It also has well-defined serializations to JSON, CBOR, XML and EXI. Clients interested in the real-time sensor streams, such as our data acquisition platform, Smart Campus portal (Fig.~\ref{fig:streamviz}), and the Smart Phone app, subscribe to these topics and can visualize or process the SenML observations.
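For reference, a single observation serialised in SenML/JSON might look like the sketch below; the field names follow the published SenML specification, while the base-name URN is a made-up example rather than an actual device identifier from the deployment.

\begin{verbatim}
# Illustrative SenML/JSON observation; the base-name URN is made up and
# the field names follow the SenML specification (bn, bt, n, u, v).
import json, time

record = [{
    "bn": "urn:dev:mote1:sensor2:",   # base name of the source device
    "bt": time.time(),                # base time, seconds since the epoch
    "n": "waterlevel",                # measurement name
    "u": "m",                         # SI unit (metres)
    "v": 1.25                         # numeric value
}]
payload = json.dumps(record)          # published to the MQTT topic
\end{verbatim}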
\subsubsection{Automated Observe and Publish from Gateway}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/observe.pdf}
\caption{Sequence for data acquisition from sensors \modc{using AOP}. The gateway initiates a CoAP \texttt{observe} and auto-publishes SenML values to MQTT. Clients can subscribe to the MQTT topic.}
\label{fig:observe}
\end{figure}
Publishing \modc{directly} to the MQTT broker is still heavyweight for our constrained devices and WSN for several reasons. First, there is the overhead of initiating and keeping the network connection to the broker open. Second, the MQTT client library has a memory footprint on these embedded platforms, in addition to the CoAP service. Third, our choice to publish SenML results in a payload much larger than the native observations.
In order to offer the transparency of the pub-sub architecture while keeping with the limitations of the devices and WSN, we develop an \emph{Automated Observe and Publish (AOP)} service at the Pi gateway that couples the CoAP Observe capability with the MQTT publisher design. This is illustrated in Fig.~\ref{fig:fabric}, and the sequence of operations \modc{is} shown in Fig.~\ref{fig:observe}. This service on the Pi periodically queries the RD for new resources registered in the WSN group it belongs to (\emph{(4)} in Fig.~\ref{fig:fabric}; Fig.~\ref{fig:observe}). If discovered, the AOP service registers an \texttt{observe} request with the service endpoint for all new resources (\emph{(5)} in Fig.~\ref{fig:fabric}). When the endpoint notifies AOP of an updated resource state (\emph{(6)} in Fig.~\ref{fig:fabric}), AOP maps them to SenML/JSON and, as a data proxy, publishes them to the MQTT topic for that resource as listed in its CoRE link in the RD (\emph{(7)} in Fig.~\ref{fig:fabric}).
This design has the additional benefit of allowing clients that are interested in the observation to subscribe to the MQTT broker on the Cloud VM rather than the CoAP service on the constrained device. Consumers in the private network that are latency sensitive can always use the CoAP observe feature, or poll the service directly, and avoid the round trip time to the MQTT broker. \modc{E.g., Figs.~\ref{fig:perf:nw:lat} and~\ref{fig:perf:nw:bw} show the round trip latency and the bandwidth of pairs of Pi's within the campus backbone network, and between the Pi gateway devices on campus and the Azure VMs at Microsoft's Singapore Cloud data center. These violin plot distributions are sampled over a $24~hour$ period, and indicate the Edge-to-Edge and Edge-to-Cloud network profiles~\cite{ghosh:tcps:2017}. We see substantial latency benefits in subscribing to the event streams from within the campus network, which has a median value of $10~ms$ (green bar), compared to $153~ms$ when publishing to the Cloud. However, some regions of the campus have to go through multiple network switches and their latencies approach that of moving to the Cloud, as shown by the higher mean value (red bar). The bandwidth within the campus is also about $50\%$ higher, with a tighter distribution, than between the campus and the Cloud. Our IoT middleware offers multiple means of accessing the observation streams to allow applications to choose the most appropriate one based on their presence in the network topology.}
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/pipelines.pdf}
\caption{Interactions between streaming data acquisition and analytics dataflows in the data platform hosted on the Cloud}
\label{fig:pipelines}
\end{figure}
\subsection{Fast Data Processing and Persistence}
Once data is published to the MQTT broker in the Cloud, there is a multitude of Big Data platforms that can be leveraged for processing the sensor streams in the Cloud data center. We take an approach similar to our earlier work~\cite{simmhan:cise:2013}, but with contemporary data platforms and updated domain logic relevant to the IISc Smart Campus.
Data published to MQTT needs to be subscribed to and persisted as otherwise these transient sensors streams are lost forever. At the same time, the data arriving from heterogeneous sensors have to be validated before they are used for analytics and decision making, such as turning off pumps or notifying users of a water quality issue. Hence, the cleaned observations should be available with limited delay. \emph{Distributed Stream Processing Systems (DSPS)} are Big Data platforms tailored for applications that need to process continuous data streams at high velocity within low latency on commodity cluster and Cloud VMs~\cite{dsps}. DSPS allow users to compose persistent applications as a dataflow graph, where task vertices have user logic, and edges stream messages between the tasks.
\modc{There are several contemporary DSPS such as Apache Storm, Flink, Spark Streaming, Azure HDInsight, etc. We choose to use the \emph{Apache Storm} DSPS~\cite{storm} from Twitter due to its maturity and active open-source support, and its ability to compose a Directed Acyclic Graph (DAG) of modular user-defined tasks, rather than just higher order primitives.} Storm is used as our data acquisition platform for executing several streaming dataflow pipelines on sensor observations published to the MQTT broker (Fig.~\ref{fig:pipelines}). Two important and common classes of dataflows are \emph{Extract-Transform-Load (ETL)} and \emph{Statistical Summarization (STATS)}~\cite{shukla:riot:2017}.
The ETL pipeline helps address data format changes and quality issues before storing the observations. The input to ETL is by subscribing to wild-card topics in the MQTT broker by sensor type, which allows all observation types supported by this pipeline to be acquired. Care is taken to cover all relevant topics so that no observation stream is lost; alternatively, it can query the RD to subscribe to specific topics in the broker, or use a special advertisement topic when new devices are on-boarded. The incoming messages may arrive from heterogeneous sources in different measurement units and formats, though SenML is preferred. Tasks like parsing, format and unit conversion help normalize these observations. There can also be missing or invalid values, say, due to network packet drop or sensor error. For example, we see the water level sensor report incorrect depths due to perturbation in the water surface or sunlight reflecting into the ultra-sonic detector. Range filters, smoothing and interpolation tasks perform such basic validation, quality checks and corrections. Lastly, the raw and validated data will need to be stored for future reference and batch analytics. We use \emph{Hadoop Distributed File System (HDFS)} to store the raw observations from MQTT and the \emph{HBase NoSQL database}~\cite{hbase} to store the cleaned time-series data for batch analytics.
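The validation tasks in the ETL pipeline are conceptually simple; the toy Python functions below sketch a range filter and a moving-average smoother of the kind described above, with illustrative thresholds (the deployed tasks are Storm bolts drawn from RIoTBench).

\begin{verbatim}
# Toy versions of two ETL validation tasks: a range filter that drops
# physically impossible water levels, and a moving-average smoother that
# damps jitter from surface perturbation or sensor noise. Thresholds are
# illustrative only.
def range_filter(values, lo=0.0, hi=5.0):
    return [v for v in values if lo <= v <= hi]

def moving_average(values, window=5):
    out = []
    for i in range(len(values)):
        w = values[max(0, i - window + 1): i + 1]
        out.append(sum(w) / len(w))
    return out

raw = [1.20, 1.30, 9.80, 1.25, 1.28]       # 9.80 m is an outlier here
clean = moving_average(range_filter(raw))
print(clean)
\end{verbatim}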
The ETL dataflow also publishes the resulting cleaned sensor event stream to an MQTT topic which then can be subscribed to by downstream applications.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{figures/smart-campus-01-trim.png}
\caption{Real-time visualization of published observations}
\label{fig:streamviz}
\end{figure}
Basic statistical analyses are performed over the cleaned data to offer a summarized view of the state of the IoT system. These are used for monitoring the domain or the IoT fabric, information dissemination across campus users, or for human decision making. The STATS streaming dataflow (Fig.~\ref{fig:pipelines}) performs operations like statistical aggregation, moving window averages, probability distributions, and basic plotting using libraries like XChart. Our STATS pipeline subscribes to the MQTT topic to which ETL publishes the cleaned observation streams. The statistical aggregates generated by STATS are likewise published to MQTT from which, e.g., the portal can plot realtime visualizations like Fig.~\ref{fig:streamviz}, while the plotted files are pushed to file store which can then be embedded in static webpages or reports.
Earlier, we have developed the \emph{RIoTBench benchmark} that has composable IoT logic blocks and generic IoT dataflows that are used for evaluating DSPS platforms~\cite{shukla:riot:2017}. We customize and configure these dataflow pipelines for the Smart Campus and the water management domain. As a validation of the scalability of the proposed solution, we have shown that Apache Storm can support event rates of over $1000/sec$ for many classes of tasks, as illustrated in Fig.~\ref{fig:stream:perf:storm}, when operating on an Azure Cloud VM. The tasks that were benchmarked span different IoT business logic categories such as parsing sensor payloads, filtering and quality checks, statistical and predictive analytics, and Cloud I/O operations. These are then assembled together and customized for the domain processing analytics, such as smart water management.
\modc{While we use Storm as our preferred DSPS in our software stack, it can be transparently replaced by any other DSPS that can compose these dataflow pipelines. The tasks we leverage from RIoTBench are designed as Java libraries, and hence many stream processing systems can incorporate them directly for modular composition. Since the interaction between these pipelines is through the MQTT pub-sub broker, it offers loose coupling between the dataflows and the platform components. In fact, multiple DSPS can co-exist if need be, say, to support higher-order queries using Spark or a lambda-architecture over streaming and static data using Flink. Even our Apollo MQTT broker can be replaced by Cloud-based pub-sub platforms like Azure IoT Hub that uses the MQTT protocol. Likewise, our choice of HBase can be replaced by other NoSQL platforms or Cloud Storage like Azure Tables as well. As we note next in \S~\ref{sec:analyze}, the HDFS or NoSQL store plays a similar role of loose coupling between Big Data batch processing platforms that need to operate on archived data.}
\section{Data Analytics and Decision Making}
\label{sec:analyze}
\begin{figure}[t]
\centering
\subfloat[\modc{Round Trip Latency}]{
\includegraphics[width=0.3\columnwidth]{figures/latency.pdf}
\label{fig:perf:nw:lat}
}~
\subfloat[\modc{Network Bandwidth}]{
\includegraphics[width=0.3\columnwidth]{figures/bandwidth.pdf}
\label{fig:perf:nw:bw}
}\\
\subfloat[Peak task input rate on an Azure VM~\cite{shukla:riot:2017}.]{
\includegraphics[width=0.6\columnwidth]{figures/peakRateBarplot_updated.pdf}
\label{fig:stream:perf:storm}
}
\caption{(a) Network latency and (b) Bandwidth distribution within Campus edge LAN (E2E) and from Campus to Cloud WAN (E2C). (c) Peak input stream rate supported for each Apache Storm DSPS task.}
\label{fig:stream:perf}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[Peak Query Throughput on Pi]{
\includegraphics[width=0.45\columnwidth]{figures/cep-pi-throughput.pdf}
\label{fig:cep:thruput:pi}
}~~
\subfloat[Peak Query Throughput on Azure]{
\includegraphics[width=0.45\columnwidth]{figures/cep-azure-throughput.pdf}
\label{fig:cep:thruput:az}
}\\
\subfloat[\modc{Query Latency on Pi}]{
\includegraphics[width=0.45\columnwidth]{figures/cep-pi-latency.pdf}
\label{fig:cep:lat:pi}
}~~\subfloat[\modc{Query Latency on Azure}]{
\includegraphics[width=0.45\columnwidth]{figures/cep-azure-latency.pdf}
\label{fig:cep:lat:az}
}
\caption{\emph{Peak Throughput} and respective \emph{Query Latency} for various CEP queries on Pi and Azure VM~\cite{ghosh:tcps:2017}}
\label{fig:stream:perf:cep}
\end{figure}
There are several types of analytics that can help with manual and automated decision-making about the water domain, and the IoT fabric management as well. Similar to the stream processing pipelines for data acquisition above, streaming dataflows can perform analytics and decision making as well. Fig.~\ref{fig:pipelines} shows such \emph{online analytics and decision making pipeline (DECIDE)} that consumes cleaned observation streams from MQTT and can perform time-series analysis using auto-regressive models for, say, water demand prediction. Feature-based analytics, such as decision tree, can be embedded to correlate environmental observations with specific outcomes, such as days of the week with the water footprint in buildings. The figure also shows how such predictive models can be trained using streaming or batch dataflows from historic data (\emph{TRAIN}), and the updated models feed into the online predictions, periodically. We use logic blocks from the \emph{Weka} library~\cite{weka} within the Apache Storm dataflow for such online \emph{predictive analytics}, and several are made available as part of RIoTBench.
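As a toy illustration of the kind of auto-regressive prediction embedded in the DECIDE pipeline, the sketch below fits a simple AR(1)-style model with NumPy; the production pipeline uses Weka models, and the demand values are made up.

\begin{verbatim}
# Toy AR(1)-style predictor for daily water demand; the DECIDE pipeline
# uses Weka models instead, and the demand values below are made up.
import numpy as np

def fit_ar1(series):
    x, y = np.array(series[:-1]), np.array(series[1:])
    a, b = np.polyfit(x, y, 1)       # y_t ~ a * y_{t-1} + b
    return a, b

def predict_next(series, a, b):
    return a * series[-1] + b

demand = [12.0, 13.1, 12.8, 13.5, 14.0, 13.9]   # e.g. kilolitres per day
a, b = fit_ar1(demand)
print(predict_next(demand, a, b))
\end{verbatim}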
One of the most intuitive analytics for utility management is through the detection of event patterns. \emph{Complex Event Processing (CEP)} enables a form of \emph{reactive analytics} by allowing us to specify patterns over event streams, and identify situations of interest~\cite{cep}. It uses a query model similar to SQL that executes continuously over the event stream, and specifically allows window aggregations and sequence matching. The former applies an aggregation function over count or time windows, in a batch or sliding manner, while the latter allows a sequence of events matching specific predicates to be detected. E.g., these queries can detect when the moving average of water pressure goes above a certain threshold, or when the water level in a tank drops by $>5\%$ over successive events spread over $10~mins$. The former may indicate a blockage in the water distribution network, while the latter may identify rapid water leakage in a building~\cite{govindarajan:comad:2014}.
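The semantics of the second query can be illustrated with a few lines of plain Python; this is only a conceptual rendering, since the deployed queries are written in the Siddhi query language, and the gap and drop thresholds are taken from the example above.

\begin{verbatim}
# Conceptual rendering of the leak-detection pattern: flag a possible leak
# when the level drops by more than 5% between successive events that are
# at most 10 minutes apart. The actual queries are written in Siddhi.
def detect_rapid_drop(events, max_gap_s=600, drop_frac=0.05):
    """events: time-ordered list of (timestamp_seconds, level) tuples."""
    alerts = []
    for (t0, v0), (t1, v1) in zip(events, events[1:]):
        if t1 - t0 <= max_gap_s and v0 > 0 and (v0 - v1) / v0 > drop_frac:
            alerts.append((t1, v0, v1))
    return alerts

events = [(0, 2.00), (300, 1.98), (600, 1.80)]   # ~9% drop in 5 minutes
print(detect_rapid_drop(events))                 # -> [(600, 1.98, 1.8)]
\end{verbatim}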
We use \emph{WSO2 Siddhi}~\cite{siddhi} as our CEP engine for such ``fast data'' event-analytics and validate its scalability both on gateway devices such as a Raspberry Pi 2 for edge-computing, as well as on an Azure VM for Cloud computing~\cite{ghosh:tcps:2017}. Fig.~\ref{fig:stream:perf:cep} shows prior results for $21$ representative queries that perform sequence and pattern matching, filtering and aggregation, etc. over water level streams on the Pi. As we can see, these event queries are light-weight and can support rates of over $25,000~events/sec$ even on a Pi, with the corresponding Azure benchmarks showing a $3\times$ improvement (Figs.~\ref{fig:cep:thruput:pi} and ~\ref{fig:cep:thruput:az}). \modc{We can also infer the per-event query latency from these peak throughputs (Figs.~\ref{fig:cep:lat:pi} and ~\ref{fig:cep:lat:az}), and most execute in $\le 0.04~ms$ on the Pi and in $\le 0.005~ms$ on the Cloud. There is limited variability in the execution latency or the throughput. While the execution on the Cloud is much faster, when coupled with the Edge-to-Cloud latency for transferring the event from a sensor on campus to the Cloud (Fig.~\ref{fig:perf:nw:lat}), execution on the Pi has a lower makespan.} These validate the use of event analytics for both edge and Cloud computing.
These analytics can provide trends, classifications, patterns, etc. that can then be used by humans to manually take decisions, or for rule-based systems to automatically enact controls. These actions can include
automatically turning water-pumps on and off based on the water level, notifying users of contamination in a spatial water network region, reporting leaking pipes and taps to maintenance crew, etc. These strategies are currently being investigated as a meaningful corpus of water distribution and usage data within the campus is accumulated. Computational and network models that leverage these sensed data are being developed by our collaborators as well~\cite{msmk}.
In addition, these data streams and analytics help more immediately with understanding and managing the IoT fabric, particularly during the development and deployment phase of the infrastructure. They help identify, say, when the WSN is unable to form a tree or has high packet drops, when the sensors and motes are going to drain their battery, or when gateways go offline (e.g., due to \emph{wild monkeys} fiddling with the devices, as we have seen!). It also helps validate the performance of network algorithms like RPL, and build a repository of network signal strengths at different parts of campus, over time.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/batchanalytics.pdf}
\caption{Workflow for Asynchronous Batch Analytics Service}
\label{fig:batch}
\end{figure}
Often, these exploratory analyses are performed on historic data collected over days and months within our data platform.
We leverage the \emph{Apache Spark}~\cite{spark} distributed data processing engine for such batch analytics. Spark allows fast, in-memory iterative computations and has been shown to out-perform traditional Hadoop MapReduce platforms. It also offers intuitive programming models such as SparkQL for easy specification of analytics requirements. Spark uses HBase, where we archive the cleansed sensor data, as its distributed data source. It can also be used to train predictive models in batch using its Machine Learning libraries (MLLib). While we currently perform periodic model training using a Storm dataflow for convenience (\emph{TRAIN} in Fig.~\ref{fig:pipelines}), we propose to switch to Spark in the near future.
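A minimal PySpark sketch of the kind of batch aggregation run on the archived data is shown below. For simplicity it reads a CSV export rather than HBase (the deployment uses an HBase connector), and the paths and column names are assumptions.

\begin{verbatim}
# Minimal PySpark batch aggregation sketch; reads a CSV export instead of
# HBase, and the HDFS paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-water-level").getOrCreate()
obs = spark.read.csv("hdfs:///smartx/clean/waterlevel.csv",
                     header=True, inferSchema=True)
daily = (obs.withColumn("day", F.to_date("timestamp"))
            .groupBy("building", "day")
            .agg(F.avg("level").alias("avg_level"),
                 F.min("level").alias("min_level")))
daily.write.mode("overwrite").parquet("hdfs:///smartx/summary/daily_level")
\end{verbatim}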
We expose a \emph{Batch Analytics REST Service} wrapper around Spark to ease the execution of simple analytics from the Smart Campus web portal. This allows temporal and sensor-based filtering, and aggregation and transformation operations over the observational datasets to be mapped as parameterized Spark jobs that run on Cloud VMs. The Spark jobs can run for several minutes to hours, depending on the complexity of the analysis and source data size, and generate KB to GB of data. Hence, a synchronous REST call from the portal will time out. Instead, we define an asynchronous service pattern based on the existing architectural components, as shown in Fig.~\ref{fig:batch}.
When the user submits an analytics query from their browser, the REST service first creates a unique MQTT topic for this session, and then invokes a Spark job by populating its parameters, including this topic. The REST service returns this topic to the browser, which subscribes to the topic with the broker. The Spark engine fetches the source data from HBase, runs the analysis, and writes the output to a Cloud file storage. It then publishes the URL of this result file to the unique topic in the broker. The browser gets notified of this URL and can use it to either stream and visualize the results, or allow the user to download it. This exhibits the flexibility of our service-oriented architecture to easily compose complex data management and analytics operations. \modc{In future, this REST API and asynchronous execution pattern can be easily extended to allow \emph{ad hoc} Spark SQL queries to be directly submitted for execution. This will allow developers to construct more powerful exploratory analytics, besides the user-oriented query template that is currently supported.}
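The result-notification step of this asynchronous pattern can be sketched as below: a unique per-session topic is generated, and the finished job publishes the result-file URL to it as a retained message so that the subscribed browser (or a late subscriber) still receives it. The broker host and result URL are illustrative, and the helper shown uses the \texttt{paho-mqtt} publish module.

\begin{verbatim}
# Sketch of the asynchronous result notification: generate a per-session
# topic and publish the result-file URL to it as a retained message.
# Broker host and result URL are illustrative.
import uuid
import paho.mqtt.publish as publish

def new_session_topic():
    return "analytics/results/" + uuid.uuid4().hex

topic = new_session_topic()   # returned to the browser, which subscribes to it
publish.single(topic, "https://store.example.org/results/run-42.csv",
               qos=1, retain=True, hostname="mqtt.smartx.example.org")
\end{verbatim}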
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{figures/campus-map-iot.png}
\caption{Geo-spatial visualization of Smart Campus water infrastructure, motes and sensors}
\label{fig:viz}
\end{figure}
Lastly, we also support several types of \emph{visual analytics} \modc{that are
exposed through the \emph{Smart Campus portal}. The portal itself was developed as part of this project, and includes a} dashboard for displaying real-time and temporal analytics (Fig.~\ref{fig:streamviz}) using JavaScript plugins like \texttt{D3.js} and \texttt{Rickshaw}, and also multi-layered geo-spatial visualizations of the IoT network on the IISc Campus using Open Street Maps (Fig.~\ref{fig:viz}). \modc{These leverage the self-describing SenML format used by the sensors for publishing observation streams, allowing plots to be automatically formatted for arbitrary sensors.}
These help with information dissemination to the end-users on campus, as well as simple visualization for resource managers. \modc{The portal also serves as a way for the campus managers to monitor the state of the IoT infrastructure using the RD, and potentially initiate actuation signals for enactment.}
\modc{As before for the choice of DSPS, we can also replace Siddhi with other CEP engines like Apache Edgent, and Spark with platforms like Apache Pig or Hadoop. Our architectural design is agnostic to the specific platform, and the presence of pub-sub brokers and NoSQL data stores enables loose coupling between diverse platforms that interface through them. Our selection of these specific platforms is indicative of what is adequate for the needs of the Smart Campus, and bounded by the scalability experiments that we have performed and reported. Other deployments may pick contemporary alternatives that are appropriate for their needs.}
\section{Related Work}
\label{sec:related}
There has been heightened interest recently in designing software fabrics and data platforms to manage IoT infrastructure, and data and applications within them, with even a special issue dedicated to such software systems~\cite{spe/ChenWZ17}. These are emerging from standards bodies \emph{(IETF CoRE, W3C Web of Things, ITU-T, ISO)}, industry and consortia \emph{(Azure IoT, AWS Greengrass, Threads Group, OneM2M, AllSeen Alliance, FIWARE, LoRa)}, and academia \emph{(IERC, IoT-A, OpenIoT)}, with implementations by the open source community \emph{(Californium, Kura, Sentilo, Kaa)}. While some of these, like MQTT, have gained traction, others are competing for mind-share and market share. However, we are at an early evolutionary stage and there is a lack of clarity on what would be the most suitable technical solutions, and what would gain popular acceptance (these being two different factors). In this context, having a practical implementation and validation of an integrated IoT architecture on the field using these functional designs and protocols, as we have presented in this article, will better inform these conceptual exercises and reference designs. While we make specific service-oriented design, protocol and implementation choices for the Smart Campus project, driven by Smart Utility needs in India, there are other numerous relevant efforts and alternatives, and we discuss a representative sample here.
\subsection{\modc{Community Specifications}}
The concept of a \emph{Web of Things (WoT)} was proposed several years back by W3C but did not translate to proactive standardization efforts like IETF's~\cite{guinard2011internet}. Recently, the W3C WoT working group has begun developing a formal WoT architecture for IoT~\cite{wot}. It leverages simplified forms of Web standards like HTTP, REST and JSON to support use-cases on Smart Homes, Smart Factory and Connected Cars. In this evolving draft,
device, gateway (edge) and cloud are seen as first-class building blocks, similar to our own differentiation, and supported environments include browser, smart phones, edge hubs and cloud VMs. They also propose a \emph{servient} software stack to design and deploy applications built using a scripting framework, and protocol bindings to more popular IoT standards such as MQTT and CoAP. These bindings make it likely that our own design, which leverages existing standards, will be able to interface with a WoT stack in the future.
The COMPOSE API for IoT~\cite{compose} takes a similar WoT view and defines REST operations and JSON payloads on Service Objects that wrap physical things. Applications can be composed across multiple physical devices using these APIs.
While these are still early days for WoT, it is likely to find wider industry support given the past history of W3C standards.
\emph{OneM2M} is a broad-based effort to develop open specifications for a horizontal IoT middleware that will enable inter-operability for M2M communications. It proposes comprehensive service specifications for device identification, RESTful resource and container management, and synchronous and asynchronous information flows, with mappings to open protocols like CoAP, MQTT and HTTP~\cite{onem2m}. This is targeted at large-scale IoT deployments with complex devices and use-cases, and multiple vendors. This effort is driven by major telecom providers and government agencies such as US NIST and India's Department of Telecommunication, and is expected to gain traction once the standards are formalized. Our goal in this article is much more modest, and we validate a slice of these complex interactions within the campus-scale IoT deployment, using similar functional layers and open protocols.
\subsection{\modc{Open Source Efforts}}
\modc{\emph{FIWARE}~\cite{fiware}~\footnote{https://www.fiware.org} is an open IoT software standard and a platform that is gaining recent attention, and whose features overlap with our middleware requirements. Like us, it uses MQTT and CoAP protocols for accessing observations that are coordinated through a context broker, supports CEP processing using IBM PROTON for alerts, and uses HDFS and Hive for archival storage and querying. It pays particular attention to capturing the device context, data models, dashboarding and security, making it a holistic solution. However, our proposed architecture pushes this abstraction down to the network layer, with patterns for capturing data across public and private networks, and across embedded and gateway devices, as discussed in \S~\ref{sec:acquire}. We also place emphasis on the post-processing of captured event streams by DSPS to make them ready for analytics. These are practical needs from the field. That said, FIWARE can be used as a base implementation that is complemented with these mechanisms we propose.}
\emph{WSO2} has proposed a reference architecture for an IoT platform, much like IoT-A, as a starting point for software architects~\cite{wso2arch}. However, they focus primarily on the data and analytics platform rather than the networking and fabric management, which are essential on the field. They too leverage CoAP, MQTT and HTTP for communications, but unlike us abstract away device and communication concerns. They have their custom device management interface, targeted more at smart phones, and identity management for users using LDAP. They offer an open implementation of their WSO2 IoT software stack that supports MQTT message brokering, event analytics using Siddhi engine (which we also use), and enterprise dashboarding. Commercial software support is also offered.
\emph{Sentilo}~\cite{sentilo}~\footnote{https://www.sentilo.org/} is an open source platform for managing sensors and actuators in a smart city environment, supported by the Barcelona City Council. In their stack, devices need to be added to a catalog using a dashboard and a pre-defined data model to get an authentication token. Registered devices can then use their token to publish data and alerts to a Redis in-memory data store, which also has a pub-sub interface. Applications can register for these alerts and data changes and perform actions, but neither data pre-processing and analytics platforms nor their application logic is explicitly proposed. They also make no distinction between registered and online resources, unlike our LDAP and RD, and this makes it difficult to know the state of the devices without querying. They offer protocol adapters for SCADA and Smart Meters, but their design is not inherently suited for constrained devices. While it has architectural goals and functional elements similar to our design, it is not as grounded in standards compliance and inter-operability other than using RESTful APIs. However, they have deployed the stack at multiple city locations, giving it practical validation.
\subsection{\modc{Research Activities}}
There are multiple efforts in the European Union (EU) on defining reference models for IoT, including IERC and AIOTI. \emph{Internet of Things-Architecture (IoT-A)} is one such EU FP7 project that proposes an application independent model that can then be mapped to a concrete architecture and platform-specific implementation~\cite{iota}. They offer a comprehensive survey of design requirements, and their reference architecture spans the device, communication, IoT service, virtual entity and business process layers, with service management and security serving as orthogonal layers. They also have a structured information model. This has a close correlation with our functional model, with device shadows (virtual entities) and security being gaps we need to address in the future. Further, rather than stop at a functional design, we also make specific platform and protocol choices for these functional entities, and deploy it in practice within the IISc campus.
\modc{One of the key challenges of IoT networks and platforms is the plethora of co-existing and overlapping standards, and the need to interface across them. \emph{Aloi, et al.}~\cite{aloi2017enabling} highlight the need to operate over diverse communications technologies and network protocols as requirements for opportunistic IoT scenarios. Specifically, they examine the use of smart phones as mobile gateways to act as a bridge between communication protocols like ZigBee, Bluetooth, WiFi and 3G/4G. This abstracts the data access by the applications and user interface from the underlying technologies. Such a model is well suited for generalizing our crowd-sourced data collection using mobile apps, and offers a parallel with the sensor data management in our Pi gateways.}
\modc{Yet another dimension of large scale IoT deployments is the ability to plan the deployment ahead of time, and with limited field explorations. Here, modeling and simulation environments are useful design tools~\cite{chernyshev2017internet}. While our SmartConnect tool~\cite{smartconnect} helps with WSN design planning, more comprehensive tools exist to allow one to span sensing, networking, device management and data management design within the IoT ecosystem~\cite{fortino2017modeling}. Large scale deployments will benefit from mapping the proposed solutions to such simulation environments to evaluate specific technologies.}
A recent special journal issue focused on software systems to manage smart city applications that deal with large datasets~\cite{spe/ChenWZ17}. However, these articles fail to take a holistic view of the entire software stack and limit themselves to specific Big Data platforms such as Spark, or analytics techniques like Support Vector Machines (SVM). We instead investigate the fundamental software architecture design to support a wide variety of domain applications and analytics techniques.
\subsection{\modc{Smart City Deployments}}
In this regard, other EU projects like \emph{OpenIoT} translate the IERC reference architecture into practical implementations~\cite{openiot}. However, they do not pay adequate attention to protocol choices for constrained devices and compatibility with emerging standards like CoRE, and offer just a proof-of-concept validation. The \emph{Ahab} framework goes further by examining the analytics stack that is necessitated by the use of both streaming and static smart city data through a lambda architecture~\cite{spe/VoglerSID17}. However, key aspects such as interaction models for device and sensor registration and the impact of network protocols are not considered.
\modc{The \emph{SmartSantander} testbed is one of the more progressive Smart City deployments, and it offers insights on traffic and human mobility from Spain~\cite{lanza2016managing,sanchez2014smartsantander}. They offer their design requirements, and a software architecture for managing the testbed. This includes gateway and server runtimes, registry services and resource management. Authentication, Authorization and Accounting (AAA) services, and sensor, actuator and application deployment through a service interface is provided as well. They offer examples of the potential data sources and analytics, such as environment monitoring, landscape irrigation, traffic and parking management. Many of our requirements and architectural design exhibit similarities.}
\section{Conclusion}
\label{sec:conclusion}
In this article, we have set out the design goals for an IoT fabric and data management platform in the context of Smart Utilities, with the IISc Campus serving as a testbed for validation and smart water management being the motivating domain. Our \emph{functional architecture} is similar to other IoT reference models, with layers for communication, data acquisition, analytics and decision making, and resource and device management. We also make \emph{specific protocol and software platform choices} that advance a data-driven, service-oriented design that integrates Big Data platforms and edge and Cloud computing.
We also identify \emph{interaction patterns} for the integrated usage of these disparate standards, protocols and services that are evolving independently. At the same time, our design is \emph{generic to support other domains} such as smart power grids or intelligent transportation, and such a translation is underway as part of a ``lightpole computing'' effort within the Bangalore city~\cite{amrutur:iotdi:2017}. The experiences from the project will help in understanding the distinctive needs of Smart City utilities in developing countries like India.
Our performance results for the network design, as well as the Cloud-based stream pre-processing using Storm and edge-based event-analytics using Siddhi, \emph{validate the scalability} of the software stack at the IISc campus. In particular, the platform is shown to scale to thousands of events per second for real IoT application logic on single VMs and Pi devices. These are inherently designed to scale weakly, thus allowing the supported rates to increase further for city-wide deployments by adding more VMs and edge devices. The software stack is also available online as an open source contribution, allowing the open architecture design and implementation to be replicated at other campuses and communities as well.
\modc{Having a service API and standards-based IoT middleware enables the rapid development of novel and practical applications, both for our intended goal of smart water management and beyond. Some such applications include mobile apps for crowd-sourced water quality reporting and user notification, with linkages to trouble-ticket management by the campus maintenance crew. These data sources are also helping with water balance study and leak detection applications within campus, such as ones done by our collaborators~\cite{amrutur:2016:cpsweek,msmk}. The key distinction is the ability to perform such studies on-demand and incorporate outcomes in real-time, rather than require custom time-consuming field experiments, as was the norm. This accelerates the translation of science into operational benefits. Further, the same IoT stack was used for crowd-sourced collection of WiFi signal strengths for use by the campus Information Technology team and for an IoT Summer School hackathon, as part of campus outreach programs~\cite{msr:school}.}
\modc{The initial field trials using hundreds of sensors and devices are underway across campus. However, to ensure that the scope of the project was kept manageable, several additional aspects were deferred for future exploration.} Key among them are security and policy frameworks which are essential in a public utility infrastructure~\cite{Bertino:2016:toit}. Several authentication and authorization standards already exist for the web, with billions of mobile devices and web application utilizing them. Utilities however have a higher threat perception and end-to-end security mechanisms will need to be enforced. Similarly, auditing and provenance will be essential to identify the operational decision making chain, especially with automation of mission-critical systems~\cite{Simmhan:ijwsr:2008}. Trust mechanisms have to be established for using crowd-sourced data for operations, and privacy within pervasive sensing is a concern. From a platform perspective, we are also investigating the use of edge and fog devices to complement a Cloud-centric data platform~\cite{prateeksha:icfec:2017,ghosh:tcps:2017,pushkar:icsoc:2017}. Energy aware computing and mobility of devices also needs attention. These will find place in our future work.
\bibliographystyle{plain}
\section{Acknowledgments}
This work was supported by grants from the \emph{Ministry of Electronics and Information Technology (MeitY), Government of India}; the \emph{Robert Bosch Center for Cyber Physical Systems (RBCCPS) at IISc}; Microsoft's \emph{Azure for Research} program; and \emph{VMWare}.
The authors acknowledge the contributions of other project investigators, M.S. Mohankumar, B. Amrutur and R. Sundaresan, to the design discussions and deployment activities.
We also recognize the design and development efforts of other staff and students during the course of this project, including Abhilash K., Akshay P.M., Anand S.V.R., Anshu S., Arun V., Ashish J., Ashutosh S., Jay W., Jayanth K., Lovelesh P., Nithin J., Nithya G., Parama P., Prasant M., Ranjitha P., Rajrup G., Sieglinde P., Shashank S., Siva Prakash K.R., Tejus D.H., Vasanth R., Vikas H., Vyshak G., among others.
\section{Introduction}
\paragraph{\emph{Expressiveness of metric temporal logics}}
One of the most prominent specification formalisms used in
verification is \emph{Linear Temporal Logic} (\textup{\textmd{\textsf{LTL}}}{}), which is
typically interpreted over the non-negative integers or reals. A
celebrated result of Kamp~\cite{Kamp1968} states that, in either case,
\textup{\textmd{\textsf{LTL}}}{} has precisely the same expressive power as the \emph{Monadic
First-Order Logic of Order} ($\mathsf{FO[<]}$). These logics,
however, are inadequate to express specifications for systems whose
correct behaviour depends on quantitative timing requirements. Over
the last three decades, much work has therefore gone into lifting
classical verification formalisms and results to the real-time
setting. \emph{Metric Temporal Logic} (\textup{\textmd{\textsf{MTL}}}{}),\footnote{In this article, we refer to the logic with
constrained `Until' and `Since' modalities exclusively as `\textup{\textmd{\textsf{MTL}}}{}',
and use the term `metric temporal logics'
in a broader sense to refer to temporal logics with modalities definable in \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} (see below).}
which extends \textup{\textmd{\textsf{LTL}}}{} by constraining the modalities by time intervals, was introduced
by Koymans~\cite{Koymans1990} in 1990 and has emerged as a central
real-time specification formalism.
\textup{\textmd{\textsf{MTL}}}{} enjoys two main semantics, depending intuitively on whether
atomic formulas are interpreted as \emph{state predicates} or as
(instantaneous) \emph{events}. In the former, the system is assumed to
be under observation at every instant in time, leading to a
`continuous' semantics based on \emph{signals},
whereas in the latter, observations of the system are taken to be
(finite or infinite) sequences of timestamped snapshots, leading to a
`pointwise' semantics based on \emph{timed words}---this is the prevalent
interpretation for systems modelled as timed automata~\cite{Alur1994}.
In both cases, the time domain is usually taken to be the non-negative
real numbers. Both semantics have been extensively studied; see,
e.g.,~\cite{Ouaknine2008} for a historical account.
Alongside these developments, researchers proposed the \emph{Monadic
First-Order Logic of Order and Metric \emph{($\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}$)}} as a natural quantitative extension of $\mathsf{FO[<]}$.
Like \textup{\textmd{\textsf{MTL}}}{}, \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} can be interpreted over signals~\cite{Hirshfeld2004} or timed words~\cite{Wilke1994}.
An obvious question to ask is whether
\textup{\textmd{\textsf{MTL}}}{} has the same expressive power as \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}, i.e.,~whether an analogue of Kamp's theorem holds in the real-time setting.
Unfortunately, Hirshfeld
and Rabinovich~\cite{Hirshfeld2007} showed that no `finitary'
extension of \textup{\textmd{\textsf{MTL}}}{}---and \emph{a fortiori} \textup{\textmd{\textsf{MTL}}}{} itself---could have
the same expressive power as $\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}$ over the reals.\footnote{Hirshfeld and Rabinovich's result was only stated and
proved for the continuous semantics, but we believe that their
approach would also carry through for the pointwise semantics.
In any case, using different techniques Prabhakar and D'Souza~\cite{Prabhakar2006}
and Pandya and Shah~\cite{Pandya2011}
independently showed that \textup{\textmd{\textsf{MTL}}}{} is
strictly weaker than \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} in the pointwise semantics.}
Still, in the continuous semantics, \textup{\textmd{\textsf{MTL}}}{} can be made expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}
by extending the logic with an infinite family of `\emph{counting modalities}'~\cite{Hunter2013}
or considering only \emph{bounded} time domains~\cite{Ouaknine2009,Ouaknine2010}.
Nonetheless, and rather surprisingly, \textup{\textmd{\textsf{MTL}}}{} with counting modalities
remains strictly less expressive than $\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}$ over bounded time domains in the pointwise semantics, i.e.,~over timed words of
bounded duration.
\paragraph{\emph{Monitoring of real-time specifications}}
In recent years, \emph{runtime verification} (see~\cite{Leucker2009, Sokolsky2011} for surveys) has emerged as a light-weight complementary technique
to \emph{model checking}~\cite{Clarke1981, Queille1982}.
It is particularly useful for systems whose
internal details are either too complex to be modelled faithfully
or simply not accessible.
Roughly speaking, while in model checking one considers all behaviours of the model,
in runtime verification one focusses on one particular behaviour---the current one.
Given a specification $\varphi$ and a finite timed word $\rho$ (which we call a finite \emph{trace} in this context),
the \emph{prefix} problem asks whether all infinite traces
extending $\rho$ satisfy $\varphi$.
The \emph{monitoring} problem, as far as we are concerned here,
can be seen as an \emph{online}
version of the prefix problem where $\rho$ grows incrementally (i.e.,~one event at a time):
the monitoring procedure (\emph{monitor}) for $\varphi$ is executed in parallel with the system under scrutiny,
and it is required to output an answer when either (i)~all infinite extensions of the current trace satisfy $\varphi$, or (ii)~no infinite extension of the
current trace can satisfy $\varphi$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[->,>=stealth', auto, node distance=0.8cm,
semithick, bend angle=25, every state/.style={fill=none,draw=black,text=black,shape=rectangle}]
\node[state] (v2) [very thick, minimum width=3cm, minimum height=2cm, rounded corners] {\textsf{System}};
\node[state] (vr) [fill=lightgray, left=0.3cm of v2, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize r};
\node[state] (vu) [fill=lightgray, left=0.5cm of vr, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize u};
\node[state] (vu') [draw=none, above of=vu] {$\longleftarrow$};
\node[state] (vu') [draw=none, below of=vu] {$\longleftarrow$};
\node[state] (vo) [fill=lightgray, left=0.5cm of vu, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize o};
\node[state] (vi) [fill=lightgray, left=0.5cm of vo, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize i};
\node[state] (vi') [draw=none, above of=vi] {$\longleftarrow$};
\node[state] (vi') [draw=none, below of=vi] {$\longleftarrow$};
\node[state] (vv) [fill=lightgray, left=0.5cm of vi, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize v};
\node[state] (va) [fill=lightgray, left=0.5cm of vv, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize a};
\node[state] (va') [draw=none, above of=va] {$\longleftarrow$};
\node[state] (va') [draw=none, below of=va] {$\longleftarrow$};
\node[state] (vh) [fill=lightgray, left=0.5cm of va, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize h};
\node[state] (ve) [fill=lightgray, left=0.5cm of vh, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize e};
\node[state] (ve') [draw=none, above of=ve] {$\longleftarrow$};
\node[state] (ve') [draw=none, below of=ve] {$\longleftarrow$};
\node[state] (vb) [fill=lightgray, left=0.5cm of ve, minimum width=0.5cm, minimum height=0.5cm] {\scriptsize b};
\node[state] (v1) [very thick, left=0cm of vb, rounded corners] {\textsf{Monitor}};
\end{tikzpicture}
\caption{A monitor receives the trace incrementally (one event at a time).}%
\label{fig:mon-example}
\end{figure}
Ideally, we would also like to require a monitoring procedure to be \emph{trace-length independent}~\cite{Rosu2012, Bauer2013} in the sense
that its time and space requirements should not depend on the length of
the input trace (this is important since input traces in practical applications tend to be very long; cf., e.g.,~\cite{Basin2014}).
In the untimed case, this is not difficult to achieve: one can translate
\textup{\textmd{\textsf{LTL}}}{} formulas into B\"uchi automata~\cite{Gastin2003, Dax2010a} and
turn them into efficient trace-length independent monitors~\cite{Arafat2005}.
Unfortunately, a number of obstacles hinder the application of this methodology
to the real-time setting:
it is known that \textup{\textmd{\textsf{MTL}}}{} is expressively incomparable with timed automata;
even though certain fragments of \textup{\textmd{\textsf{MTL}}}{} can be translated into timed automata,
the latter are not always determinisable as required for the purpose of monitoring~\cite{Baier2009}.
For this reason, researchers proposed automata-free monitoring procedures that work
directly with metric temporal logic formulas (e.g.,~\cite{Maler2004, Basin2011}).
However, it proved difficult to maintain trace-length independence while allowing
\textup{\textmd{\textsf{MTL}}}{} in its full generality, i.e.,~with unbounded intervals and nesting of future and past modalities.
Almost all monitoring procedures for metric temporal logics in the
literature have certain syntactic or semantic limitations, e.g.,
only allowing bounded future modalities or assuming integer time.
A notable exception is~\cite{Baldor2012} which handles full \textup{\textmd{\textsf{MTL}}}{} over
signals, but which unfortunately fails to be trace-length independent.
\paragraph{\emph{Contributions}}
We study the expressiveness of various fragments and extensions of \textup{\textmd{\textsf{MTL}}}{}
over timed words.
In particular, we highlight a fundamental deficiency in the pointwise interpretation of \textup{\textmd{\textsf{MTL}}}{}.
To amend this, we propose new (first-order definable) modalities \emph{generalised `Until'} ($\mathbin{\mathfrak{U}}$) and \emph{generalised `Since'} ($\mathbin{\mathfrak{S}}$). With these new modalities and the techniques developed in~\cite{Pandya2011, Ouaknine2009, Hunter2012},
we establish the following results:
\begin{enumerate}[label=(\roman*).]
\item There is a strict hierarchy of metric temporal logics based on their expressiveness over bounded timed words (see Figure~\ref{fig:expsummary}
where the arrows indicate `strictly more expressive than'
and the edges indicate `equally expressive'). Note that this hierarchy collapses in the continuous semantics.
\item The metric temporal logic with the new modalities $\mathbin{\mathfrak{U}}$ and $\mathbin{\mathfrak{S}}$ (denoted \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}) is expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}
over bounded timed words.
\item The time-bounded satisfiability and model-checking problems for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} are $\mathrm{EXPSPACE}$-complete, the same as that of \textup{\textmd{\textsf{MTL}}}{}.
\item Any \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula is equivalent to a \emph{syntactically separated} one.
\item \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} is expressively complete for \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} (the rational variant of \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}) over unbounded (i.e.,~infinite non-Zeno) timed words
if we allow the use of rational constants in modalities.
\end{enumerate}
\begin{figure}[ht]
\begin{minipage}[b]{0.45\linewidth}
\centering
\begin{tikzpicture}[>=to, scale=.8, transform shape]
\node (v0) at (0, 0) {\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}};
\node (v1c) at (0.5, -2) {\raisebox{5.5pt}{\mtlpastcnt{}}};
\node (v1) at (-2, -2) {\textup{\textmd{\textsf{MTL}}}{}};
\node (v2) at (-4, -4) {\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}};
\path (v0) edge [->] (v1)
(v0) edge [->] (v1c)
(v1) edge [->] (v2);
\end{tikzpicture}
\caption*{known results}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centering
\begin{tikzpicture}[>=to, scale=.8, transform shape]
\node (v) at (2.5, 0) {\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}};
\node (v0) at (0, 0) {\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}};
\node (v0') at (-1, -1) {\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}};
\node (v1) at (-2, -2) {\textup{\textmd{\textsf{MTL}}}{}};
\node (v1c) at (0.8, -2) {\raisebox{5.5pt}{\mtlpastcnt{}}};
\node (v1') at (-3, -3) {\textup{\textmd{$\textsf{MTL}_\textsf{fut}$[$\Phi_{\textit{int}}$]}}{}};
\node (v2) at (-4, -4) {\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}};
\path (v0) edge [->] (v0')
(v0') edge [->] (v1)
(v1) edge [->] (v1')
(v1) edge [-] (v1c)
(v1') edge [->] (v2)
(v0) edge [-] (v);
\end{tikzpicture}
\caption*{our results}
\end{minipage}
\caption{Expressiveness results over bounded timed words.}%
\label{fig:expsummary}
\end{figure}
\noindent
For monitoring, we focus on a restricted version of the monitoring problem of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}, based on the
notion of \emph{informative prefixes}~\cite{Kupferman2001a}.
The main idea of our approach is to work with \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas of a special form:
\textup{\textmd{\textsf{LTL}}}{} formulas over atomic formulas comprised of bounded \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas.\footnote{It follows
from the syntactic separation result that
no expressiveness is sacrificed in restricting to this fragment.}
The truth values of bounded \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas can be computed and stored efficiently with a dynamic programming algorithm; these values are then used as input to
deterministic finite automata obtained from `backbone' \textup{\textmd{\textsf{LTL}}}{} formulas.
As a result, we obtain the first trace-length independent
monitoring procedure for a metric temporal logic that subsumes \textup{\textmd{\textsf{MTL}}}{}.
The procedure is free of dynamic memory allocations, linked lists, etc., and hence can be
implemented efficiently (the \emph{amortised} running time per event is linear
in the number of subformulas in all bounded formulas). To be more precise:
\begin{enumerate}[label=(\roman*)., start=6]
\item We give a trace-length independent monitoring procedure (which detects informative good/bad prefixes) for \textup{\textmd{\textsf{LTL}}}{} formulas
over atomic formulas comprised of bounded \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas.
\item For an arbitrary \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula, we show that its informative good/bad prefixes are preserved
by the syntactic rewriting rules (and thus can be monitored in a
trace-length independent manner).
\end{enumerate}
\paragraph{\emph{Related work}}
Bouyer, Chevalier, and Markey~\cite{Bouyer2005} showed that \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} (the future-only fragment of \textup{\textmd{\textsf{MTL}}}{})
is strictly less expressive than \textup{\textmd{\textsf{MTL}}}{} in both the continuous and pointwise semantics.
This, together with the aforementioned results~\cite{Hirshfeld2007, Pandya2011}, forms a strict hierarchy
of expressiveness that holds in both semantics:
\[
\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} \subsetneq \textup{\textmd{\textsf{MTL}}}{} \subsetneq \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} \,.
\]
Ouaknine, Rabinovich, and Worrell~\cite{Ouaknine2009} showed that the hierarchy collapses in the
continuous semantics when one considers bounded time domains of the form $\ropen{0, N}$.
Our results show that this is not the case in the pointwise semantics.
Another route to expressive completeness is by allowing rational constants.
In particular, counting modalities become expressible in \textup{\textmd{\textsf{MTL}}}{}~\cite{Bouyer2005}.
Exploiting this observation, Hunter, Ouaknine, and Worrell~\cite{Hunter2012} showed that \textup{\textmd{\textsf{MTL}}}{} with rational constants
is expressively complete for \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} in the continuous semantics.
However, as can be immediately derived from a result of Prabhakar and D'Souza~\cite{Prabhakar2006},
this pleasant result does not hold in the pointwise semantics.
On the other hand, D'Souza and Tabareau~\cite{DSouza2004} showed that
\textup{\textmd{\textsf{MTL}}}{} with rational constants is expressively complete for \textmd{\textsf{rec-TFO[$\LTLdiamond, \LTLdiamondminus$]}} (an `input-determined' fragment of \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{}) in the pointwise semantics.
We complement these results by extending \textup{\textmd{\textsf{MTL}}}{} with the new modalities
$\mathbin{\mathfrak{U}}$ and $\mathbin{\mathfrak{S}}$ to make it expressively complete for \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} in the pointwise semantics.
In a pioneering work, Thati and Ro{\c{s}}u~\cite{Thati2005} proposed a rewriting-based
monitoring procedure for \textup{\textmd{\textsf{MTL}}}{} over integer-timed traces.
Their procedure is trace-length independent and amenable to efficient implementations.
However, trace-length independent monitoring of \textup{\textmd{\textsf{MTL}}}{} is not possible in dense real-time settings: a monitor would have to `remember' an
infinite number of timestamps. For this reason, researchers often impose a
\emph{bounded-variability} assumption on input traces, i.e.,~only a bounded number of events may occur in any time unit.
Under such an assumption, Nickovic and Piterman~\cite{Nickovic2010a} showed that \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formulas can be
translated into deterministic timed automata.
Unfortunately, their approach does not easily extend to full \textup{\textmd{\textsf{MTL}}}{}.
It is known that the non-punctual fragment of \textup{\textmd{\textsf{MTL}}}{}, called \textup{\textmd{\textsf{MITL}}}{}, can be
translated into timed automata. Since the standard constructions~\cite{Alur1992, Alur1996}
are notoriously complicated, there have been some proposals for simplified or
improved constructions~\cite{Maler2006, Kini2011, DSouza2013, Brihaye2014, Brihaye2017}.
The difficulty in using these constructions
for monitoring, again, lies in the fact that timed automata cannot be
determinised in general. In principle one can carry out on-the-fly
determinisation for input traces of bounded variability (cf., e.g.,~\cite{Tripakis2002, Baier2009});
however, it is not clear that this approach can yield an efficient procedure.
\section{Preliminaries}
\subsection{Automata and logics for real-time}
\paragraph{\emph{Timed words}}
Let the \emph{time domain} $\mathbb{T}$ be a subinterval of $\mathbb{R}_{\geq 0}$ that contains $0$.
A \emph{time sequence} $\tau = \tau_0\tau_1\dots$ is a non-empty finite or infinite sequence over
$\mathbb{T}$ (\emph{timestamps}) that satisfies the requirements below (we
denote the length of $\tau$ by $|\tau|$):
\begin{itemize}
\item \emph{Initialisation}: $\tau_0 = 0$
\item \emph{Strict monotonicity}: For all $i$, $0 \leq i < |\tau| - 1$, we
have $\tau_i < \tau_{i+1}$.\footnote{This requirement is chosen to simplify
the presentation; all the results
still hold (with some minor modifications) in the case of weakly-monotonic time,
i.e.,~requiring instead $\tau_i \leq \tau_{i+1}$ for all $i$, $0 \leq i < |\tau| - 1$.}
\end{itemize}
If $\tau$ is infinite we require it to be unbounded, i.e.,~we disallow
so-called Zeno sequences.
Given a finite alphabet $\Sigma$, a \emph{$\mathbb{T}$-timed word} over
$\Sigma$ is a pair $\rho = (\sigma, \tau)$ where $\sigma =
\sigma_0\sigma_1\dots$ is a non-empty finite or infinite word over
$\Sigma$ and $\tau$ is a time sequence over $\mathbb{T}$ of the same length.
We refer to a $\mathbb{T}$-timed word simply as a \emph{timed word} when $\mathbb{T} = \mathbb{R}_{\geq 0}$.\footnote{By the non-Zeno requirement, if $\mathbb{T}$ is bounded then a $\mathbb{T}$-timed word must be a finite timed word.}
We refer to the pair $(\sigma_i, \tau_i)$ as the \emph{$i^{th}$ event} in $\rho$,
and define the \emph{distance} between $i^{th}$ and $j^{th}$ ($i \leq j$) events to
be $\tau_j - \tau_i$.
In this way, a timed word can be equivalently regarded as a sequence of events.
We denote by $|\rho|$ the number of events in $\rho$.
A \emph{position} in $\rho$ is a number $i$ such that $0 \leq i < |\rho|$.
The \emph{duration} of $\rho$ is defined as $\tau_{|\rho| - 1}$ if $\rho$ is finite.
We write $t \in \rho$ if $t$ is equal to one of the timestamps in $\rho$.
For a finite alphabet $\Sigma$, we write $T\Sigma^*$ and $T\Sigma^\omega$ for the respective sets of
finite and infinite timed words over $\Sigma$.
A \emph{timed (finite-word) language} over $\Sigma$ is a subset of $T\Sigma^\omega$ ($T\Sigma^*$).
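To make these definitions concrete, the following short Python sketch (our illustration only; the representation of a timed word as a list of \texttt{(symbol, timestamp)} pairs and the function names are assumptions of the sketch, not conventions of this article) checks the initialisation and strict-monotonicity requirements and computes the duration of a finite timed word.
\begin{verbatim}
# Sketch: a finite timed word as a list of (symbol, timestamp) pairs.

def is_timed_word(rho):
    """Check non-emptiness, initialisation (tau_0 = 0) and strict monotonicity."""
    if not rho:
        return False
    stamps = [t for (_, t) in rho]
    return stamps[0] == 0 and all(s < t for s, t in zip(stamps, stamps[1:]))

def duration(rho):
    """Duration of a finite timed word: the timestamp of its last event."""
    return rho[-1][1]

rho = [('a', 0.0), ('b', 0.7), ('a', 1.3)]
assert is_timed_word(rho) and duration(rho) == 1.3
assert not is_timed_word([('a', 0.0), ('b', 0.7), ('a', 0.7)])  # not strictly increasing
\end{verbatim}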
\paragraph{\emph{Timed automata}}
The most popular models for real-time systems are \emph{timed automata}~\cite{Alur1994}, introduced by Alur and Dill in the early 1990s. Timed automata extend finite automata with real-valued variables (called \emph{clocks}).
\begin{defi}
Given a set of clocks $X$, the set $\mathcal{G}(X)$ of clock constraints $g$ is
defined inductively by
\[
g ::= \mathbf{true} \,\mid\, x \bowtie c \,\mid\, g_1 \wedge g_2
\]
where $x \in X$, $c \in \mathbb{N}$ and $\bowtie \; \in \{<, \leq, >, \geq\}$.
\end{defi}
\begin{defi}
A (non-deterministic) timed automaton $\mathcal{A}$ is a tuple $\langle \Sigma, S, S_0, X, I, E, F \rangle$ where
\begin{itemize}
\item $\Sigma$ is a finite alphabet
\item $S$ is a finite set of locations
\item $S_0 \subseteq S$ is the set of initial locations
\item $X$ is a finite set of clocks
\item $I: S \mapsto \mathcal{G}(X)$ is a mapping that labels each location in $S$ with a clock constraint in $\mathcal{G}(X)$ (an `invariant')
\item $E \subseteq S \times \Sigma \times 2^X \times \mathcal{G}(X) \times S$ is the set of edges.
An edge $\langle s, a, \lambda, g, s' \rangle$
denotes an $a$-labelled edge from location $s$ to location $s'$ where $g$ (a `guard') specifies
when the edge is enabled and $\lambda \subseteq X$ is the set of clocks to be reset with this edge
\item $F$ is the set of accepting locations.
\end{itemize}
\end{defi}
\noindent
We say that $\mathcal{A}$ is \emph{deterministic} if it (i) has only one initial location and
(ii) for each $s \in S$, $a \in \Sigma$ and every pair of edges $\langle s, a, \lambda_1, g_1, s_1 \rangle$,
$\langle s, a, \lambda_2, g_2, s_2 \rangle$, $g_1$ and $g_2$ are mutually exclusive (i.e.,~$g_1 \wedge g_2$ is unsatisfiable).
We say that $\mathcal{A}$ is \emph{complete} if for each $s \in S$ and $a \in \Sigma$,
the disjunction of the clock constraints of the $a$-labelled edges starting
at $s$ is a valid formula.
Assume that $\mathcal{A}$ has $n$ clocks. We define its set of clock values as
$\textsf{Val} = [0, c_{\maxit}] \cup \{\top\}$ where $c_{\maxit}$ is the maximum constant
appearing in $\mathcal{A}$. A \emph{state} of $\mathcal{A}$ is a pair $(s, \mathbf{v})$ where
$s \in S$ is a location and $\mathbf{v} \in \textsf{Val}^n$ is a \emph{clock valuation}. Write $\mathbf{v}(x)$ for
the value of clock $x$ in $\mathbf{v}$. We denote by $Q = S \times \textsf{Val}^n$ the set of all states of $\mathcal{A}$.
A \emph{run} of $\mathcal{A}$ on a timed word can be seen as follows: the automaton takes some edge when an event arrives, otherwise it stays in the same location as time elapses.
More precisely, $\mathcal{A}$ induces a labelled transition system $\mathcal{T_A} = \langle Q, \leadsto, \rightarrow \rangle$
where $\leadsto \; \subseteq Q \times \mathbb{R}_{> 0} \times Q$ is the \emph{delay-step relation} and $\rightarrow \; \subseteq
Q \times \Sigma \times Q$ is the \emph{discrete-step relation}. In these steps, corresponding invariants and guards must be met (define $\top > c$ for all constants $c$):
\begin{itemize}
\item For $(s, \mathbf{v}) \overset{t}{\leadsto} (s', \mathbf{v'})$, $s' = s$, $\mathbf{v'} = \mathbf{v} + t$
and $\mathbf{v} + t' \models I(s)$ for all $0 \leq t' \leq t$.
\item For $(s, \mathbf{v}) \overset{a}{\rightarrow} (s', \mathbf{v'})$, there is an edge $\langle s, a, \lambda, g, s' \rangle \in E$
such that $\mathbf{v'} = \mathbf{v}[\lambda := 0]$ and $\mathbf{v} \models g$.
\end{itemize}
The clock valuation $\mathbf{v} + t$ maps each clock $x$ to $\mathbf{v}(x) + t$ if $\mathbf{v}(x) + t \leq c_{\maxit}$, otherwise $\top$.
$\mathbf{v}[\lambda := 0]$ maps $x$ to $\mathbf{v}(x)$ if $x \notin \lambda$, otherwise $0$.
Formally, a run of $\mathcal{A}$ on $\rho = (\sigma, \tau)$ is an alternating
sequence of delay steps and discrete steps
\[
(s_0, \mathbf{v}_0) \overset{\sigma_0}{\rightarrow} (s_1, \mathbf{v}_1) \overset{d_0}{\leadsto} (s_2, \mathbf{v}_2)
\overset{\sigma_1}{\rightarrow} (s_3, \mathbf{v}_3) \overset{d_1}{\leadsto} (s_4, \mathbf{v}_4) \overset{\sigma_2}{\rightarrow} \dots
\]
where $d_i = \tau_{i+1} - \tau_{i}$ for $i \geq 0$, $s_0 \in S_0$ and $\mathbf{v}_0 = 0^n$.
A finite timed word $\rho'$ is \emph{accepted} by $\mathcal{A}$ if there is an \emph{accepting} run (i.e.,~ending in an
accepting location) of $\mathcal{A}$ on $\rho'$.
We can also equip $\mathcal{A}$ with a B\"uchi acceptance condition; in this case,
a run is \emph{accepting} if it visits an accepting location infinitely often,
and an infinite timed word $\rho$ is \emph{accepted} by $\mathcal{A}$ if there is
such a run of $\mathcal{A}$ on $\rho$.
The \emph{timed (finite-word) language} defined by $\mathcal{A}$ is the set of (finite) timed words
accepted by $\mathcal{A}$.
Note that timed automata are not closed under complementation; for example,
the complement of the timed language accepted by the timed automaton in the
example below cannot be recognised by any timed automaton.
\begin{exaC}[\cite{Alur2004}]
Consider the timed automaton with $\Sigma = \{a, b\}$ in Figure~\ref{fig:taexample}.
The automaton accepts timed words containing an $a$ event at some time $t$ such that no
event occurs at time $t + 1$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[->, >=stealth', shorten >=1pt, auto, node distance=6cm, transform shape, scale=0.8,
semithick, bend angle=25, every state/.style={fill=none,draw=black,text=black,shape=circle}]
\node[state] (l0) [initial left, initial text={}] {\large $l_0$};
\node[state, accepting] (l1) [right=4cm of l0] {\large $l_1$};
\path
(l0) edge node [align=center] {\large $a$ \\ $x := 0$} (l1)
(l1) edge [loop above] node [align=center] {\large $a$, $b$ \\ $x \neq 1$} (l1)
(l0) edge [loop above] node [align=center] {\large $a$, $b$} (l0);
\end{tikzpicture}
\caption{A timed automaton.}%
\label{fig:taexample}
\end{figure}
\end{exaC}
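As a concrete, purely illustrative companion to the run semantics above, the following Python sketch simulates a (possibly non-deterministic) timed automaton on a finite timed word by tracking the set of reachable states. For brevity it keeps exact clock values rather than the capped set $\textsf{Val}$, checks invariants only at the end of each delay step, and models guards and invariants as arbitrary predicates (so the guard $x \neq 1$ of Figure~\ref{fig:taexample}, which is not generated by the grammar for $\mathcal{G}(X)$, can also be written down). The encoding and all names are ours.
\begin{verbatim}
# Sketch: finite-word acceptance for a (non-deterministic) timed automaton.
# A state is (location, clock valuation); an edge is
# (source, letter, resets, guard, target); guards/invariants are predicates.

def accepts(edges, initial, invariant, final, clocks, rho):
    states = {(l, tuple((x, 0.0) for x in clocks)) for l in initial}
    prev_t = 0.0
    for letter, t in rho:
        # delay step: let t - prev_t time units elapse in every state
        delayed = {(l, tuple((x, val + (t - prev_t)) for x, val in v))
                   for (l, v) in states}
        delayed = {(l, v) for (l, v) in delayed if invariant(l, dict(v))}
        # discrete step: take every enabled `letter`-labelled edge
        states = set()
        for (l, v) in delayed:
            for (src, a, resets, guard, dst) in edges:
                if src == l and a == letter and guard(dict(v)):
                    states.add((dst, tuple((x, 0.0 if x in resets else val)
                                           for x, val in v)))
        prev_t = t
    return any(l in final for (l, _) in states)

# The example automaton above (locations l0, l1; one clock x; invariants true):
edges = [('l0', 'a', {'x'},  lambda v: True,        'l1'),   # a, x := 0
         ('l0', 'a', set(),  lambda v: True,        'l0'),   # a,b loop on l0
         ('l0', 'b', set(),  lambda v: True,        'l0'),
         ('l1', 'a', set(),  lambda v: v['x'] != 1, 'l1'),   # a,b loop, x != 1
         ('l1', 'b', set(),  lambda v: v['x'] != 1, 'l1')]
inv = lambda loc, v: True
print(accepts(edges, {'l0'}, inv, {'l1'}, ['x'],
              [('a', 0.0), ('b', 0.5), ('a', 2.0)]))   # True:  no event at time 1
print(accepts(edges, {'l0'}, inv, {'l1'}, ['x'],
              [('a', 0.0), ('b', 1.0)]))               # False: an event at time 1
\end{verbatim}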
\paragraph{\emph{Monadic First-Order Logic of Order and Metric}}
We now define the \emph{Monadic First-Order Logic of Order and Metric} (\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{})~\cite{Wilke1994}
which subsumes all the other logics discussed in this article.
\begin{defi}
Given a set of monadic predicates $\mathbf{P}$,
the set of \emph{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} formulas is generated by the grammar
\[
\begin{array}{rcl}
\vartheta & ::= & \mathbf{true} \,\mid\, P(x) \,\mid\, x < x' \,\mid\, d(x, x') \sim c \,\mid\, \vartheta_1 \wedge \vartheta_2 \,\mid\, \neg \vartheta \,\mid\,
\exists x \, \vartheta \,,
\end{array}
\]
where $P \in \mathbf{P}$, $x, x'$ are variables, $\sim \; \in \{ <, > \}$ and
$c \in \mathbb{N}$.\footnote{Note that whilst we refer to the logic as \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}, we adopt an equivalent definition where binary distance predicates $d(x, x') \sim c$ (as in~\cite{Wilke1994}) are used in place of the usual $+1$ function symbol.}
\end{defi}
The fragment where $d(x, x') \sim c$ is absent is called the
\emph{Monadic First-Order Logic of Order} (\textup{\textmd{\textsf{FO[$<$]}}}).
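For instance, the following \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} sentence (a simple illustration of ours) asserts that some two distinct points satisfying $P$ lie less than one time unit apart; dropping the conjunct $d(x, x') < 1$ yields an \textup{\textmd{\textsf{FO[$<$]}}} sentence:
\[
\exists x \, \exists x' \, \bigl( P(x) \wedge P(x') \wedge x < x' \wedge d(x, x') < 1 \bigr) \,.
\]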
\paragraph{\emph{Metric temporal logics}}
Formulas of metric temporal logics are
\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas (with a single free variable)
built from monadic predicates using Boolean connectives and \emph{modalities} (or \emph{operators}).
A $k$-ary modality is defined by an \mbox{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} formula $\varphi(x, X_1, \dots, X_k)$
with a single free variable $x$ and $k$ free monadic predicates $X_1, \dots, X_k$.
For example, the \textup{\textmd{\textsf{MTL}}}{}~\cite{Koymans1990} modality $\mathbin{\mathcal{U}}_{(0, 5)}$
is defined by the \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula
\[
\arraycolsep=0.5ex
\begin{array}{rcll}
\mathbin{\mathcal{U}}_{(0, 5)}(x, X_1, X_2) & = & \exists x' \, \Bigl( & x < x' \wedge d(x, x') < 5 \wedge X_2(x') \\
& & & {} \wedge \forall x'' \, \bigl( x < x'' \wedge x'' < x' \implies X_1(x'') \bigr) \Bigr) \,.
\end{array}
\]
The \textup{\textmd{\textsf{MTL}}}{} formula $\varphi_1 \mathbin{\mathcal{U}}_{(0, 5)} \varphi_2$ (usually written
in infix notation) is obtained by substituting \textup{\textmd{\textsf{MTL}}}{} formulas $\varphi_1, \varphi_2$ for $X_1, X_2$, respectively.
\begin{defi}
Given a set of monadic predicates $\mathbf{P}$, the set of \emph{\textup{\textmd{\textsf{MTL}}}{}} formulas is generated by the grammar
\[
\begin{array}{rcl}
\varphi & ::= & \mathbf{true} \,\mid\, P \,\mid\, \varphi_1
\wedge \varphi_2 \,\mid\, \neg \varphi \,\mid\, \varphi_1 \mathbin{\mathcal{U}}_I
\varphi_2 \,\mid\, \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2 \,,
\end{array}
\]
where $P \in \mathbf{P}$ and $I \subseteq (0, \infty)$ is an interval
with endpoints in $\mathbb{N} \cup \{\infty\}$.
\end{defi}
The (future-only) fragment \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} is obtained by disallowing subformulas
of the form $\varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$.
We write $|I|$ for $\sup(I) - \inf(I)$.
If $I$ is not present as a subscript then it is assumed to be $(0, \infty)$.
We sometimes use pseudo-arithmetic expressions to denote intervals, e.g.,~`$\geq 1$' denotes $\ropen{1, \infty}$
and `$= 1$' denotes the singleton $\{1\}$.
We also employ the usual syntactic sugar,
e.g.,~$\mathbf{false} \equiv \neg \mathbf{true}$, $\LTLdiamond_I \varphi \equiv \mathbf{true} \, \mathbin{\mathcal{U}}_I \varphi$,
$\LTLdiamondminus_I \varphi \equiv \mathbf{true} \mathbin{\mathcal{S}}_I \varphi$,
$\LTLsquare_I \varphi \equiv \neg \LTLdiamond_I \neg
\varphi$ and $\LTLcircle_I \varphi \equiv \mathbf{false} \mathbin{\mathcal{U}}_I \varphi$, etc.
For convenience, we also use `weak' temporal operators as syntactic sugar, e.g.,~$\varphi_1 \mathbin{\mathcal{U}}^w_I \varphi_2 \equiv \varphi_1
\wedge (\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2)$ if $0 \notin I$ and $\varphi_1
\mathbin{\mathcal{U}}^w_I \varphi_2 \equiv \varphi_2 \vee \left( \varphi_1 \wedge
(\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2) \right)$ if $0 \in I$ (we allow $0 \in I$ in the case of weak temporal operators).
We denote by $|\varphi|$ the number of subformulas in $\varphi$.
\paragraph{\emph{The pointwise semantics}}
With each $\mathbb{T}$-timed word $\rho = (\sigma, \tau)$ over $\Sigma_{\mathbf{P}} = 2^\mathbf{P}$ we associate a structure $M_\rho$.
Its universe $U_\rho$ is the subset $\{ \tau_i \mid 0 \leq i < |\rho|\}$ of $\mathbb{T}$.
The order relation $<$ and monadic predicates in $\mathbf{P}$ are interpreted in the expected way, e.g.,~$P(\tau_i)$ holds in $M_\rho$ iff $P \in \sigma_i$.
The binary \emph{distance predicate} $d(x, x') \sim c$ holds iff $|x - x'| \sim c$.
The satisfaction relation is defined inductively as usual.
We write $M_\rho, t_0, \dots, t_{n-1} \models \vartheta(x_0, \dots, x_{n-1})$
(or $\rho, t_0, \dots, t_{n-1} \models \vartheta(x_0, \dots, x_{n-1})$)
if $t_0, \dots, t_{n-1} \in U_\rho$
and $\vartheta(t_0, \dots, t_{n-1})$ holds in $M_\rho$.
We say that two \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas $\vartheta_1(x)$ and $\vartheta_2(x)$ are \emph{equivalent} over
$\mathbb{T}$-timed words if for all $\mathbb{T}$-timed words $\rho$ and $t \in U_\rho$,
\[
\rho, t \models \vartheta_1(x) \iff \rho, t \models \vartheta_2(x) \,.
\]
We say that a metric logic $L'$ is \emph{expressively complete} for metric logic $L$
over $\mathbb{T}$-timed words iff for any formula $\vartheta(x) \in L$,
there is an equivalent formula $\varphi(x) \in L'$ over $\mathbb{T}$-timed words.
We say that $L'$ is \emph{at least as expressive as}
(or \emph{more expressive than}) $L$ over $\mathbb{T}$-timed words (written $L \subseteq L'$)
iff for any formula $\vartheta \in L$, there is an \emph{initially equivalent} formula $\varphi \in L'$ over $\mathbb{T}$-timed words
(i.e.,~$\vartheta$ and $\varphi$ evaluate to the same truth value at the beginning of any $\mathbb{T}$-timed word).
If $L \subseteq L'$ but $L' \nsubseteq L$ then we say that $L'$ is \emph{strictly more expressive than} $L$
(or $L$ is \emph{strictly less expressive than} $L'$) over $\mathbb{T}$-timed words.
As we have seen earlier, each \textup{\textmd{\textsf{MTL}}}{} formula can be defined as an \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}
formula with a single free variable. Here, for the sake of completeness we
give an (equivalent) traditional inductive definition of the satisfaction relation for \textup{\textmd{\textsf{MTL}}}{}
over timed words. We write $\rho \models \varphi$ if $\rho, 0 \models \varphi$.
\begin{defi}
The satisfaction relation $\rho, i \models \varphi$ for an \emph{\textup{\textmd{\textsf{MTL}}}{}}
formula $\varphi$, a timed word $\rho = (\sigma, \tau)$ and a position $i$ in $\rho$ is defined as follows:
\begin{itemize}
\item $\rho, i \models \mathbf{true}$
\item $\rho, i \models P$ iff $P(\tau_i)$ holds in $M_\rho$
\item $\rho, i \models \varphi_1 \wedge \varphi_2$ iff $\rho, i
\models \varphi_1$ and $\rho, i \models \varphi_2$
\item $\rho, i \models \neg \varphi$ iff $\rho, i \not \models
\varphi$
\item $\rho, i \models \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ iff there exists $j$, $i < j < |\rho|$
such that $\rho, j \models \varphi_2$, $\tau_j - \tau_i \in I$
and $\rho, k \models \varphi_1$ for all $k$ with $i < k < j$
\item $\rho, i \models \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ iff there exists
$j$, $0 \leq j < i$ such that $\rho, j \models \varphi_2$, $\tau_i - \tau_j \in I$
and $\rho, k \models \varphi_1$ for all $k$ with $j < k < i$.
\end{itemize}
\end{defi}
\begin{exa}
The \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formula
\begin{equation} \label{for:example}
\varphi = \LTLsquare (P \implies \LTLdiamond_{< 3} Q)
\end{equation}
is satisfied by a timed word $\rho$ if and only if every $P$-event in $\rho$ (say at time $t$) is followed by
a $Q$-event in $\rho$ with timestamp in $(t, t+3)$.
\end{exa}
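The pointwise satisfaction relation lends itself to a direct, if naive, implementation. The following Python sketch (ours, purely for illustration; the tuple-based formula representation and the helper names are assumptions of the sketch, and the recursion only makes sense over \emph{finite} timed words) evaluates \textup{\textmd{\textsf{MTL}}}{} formulas by structural recursion and re-checks formula~(\ref{for:example}) on two small traces.
\begin{verbatim}
# Sketch: naive recursive evaluation of MTL over a *finite* timed word.
# A timed word is a list of (set_of_predicates, timestamp) pairs; a formula
# is a nested tuple; intervals are membership predicates on real numbers.

TRUE = ('true',)
def ATOM(p):            return ('atom', p)
def NOT(p):             return ('not', p)
def AND(p, q):          return ('and', p, q)
def IMPLIES(p, q):      return NOT(AND(p, NOT(q)))
def U(p, q, I=lambda d: d > 0):  return ('U', p, q, I)      # p Until_I q
def S(p, q, I=lambda d: d > 0):  return ('S', p, q, I)      # p Since_I q
def F(q, I=lambda d: d > 0):     return U(TRUE, q, I)       # eventually_I
def G(q, I=lambda d: d > 0):     return NOT(F(NOT(q), I))   # globally_I

def holds(rho, i, phi):
    sigma_i, tau_i = rho[i]
    kind = phi[0]
    if kind == 'true':  return True
    if kind == 'atom':  return phi[1] in sigma_i
    if kind == 'not':   return not holds(rho, i, phi[1])
    if kind == 'and':   return holds(rho, i, phi[1]) and holds(rho, i, phi[2])
    if kind == 'U':     # exists j > i with tau_j - tau_i in I, q at j, p in between
        _, p, q, inI = phi
        return any(inI(rho[j][1] - tau_i) and holds(rho, j, q)
                   and all(holds(rho, k, p) for k in range(i + 1, j))
                   for j in range(i + 1, len(rho)))
    if kind == 'S':     # exists j < i with tau_i - tau_j in I, q at j, p in between
        _, p, q, inI = phi
        return any(inI(tau_i - rho[j][1]) and holds(rho, j, q)
                   and all(holds(rho, k, p) for k in range(j + 1, i))
                   for j in range(i))
    raise ValueError(kind)

# The example formula above: [] (P -> <>_{<3} Q), with the interval (0, 3).
phi = G(IMPLIES(ATOM('P'), F(ATOM('Q'), lambda d: 0 < d < 3)))
rho1 = [(set(), 0.0), ({'P'}, 0.4), ({'Q'}, 1.2), ({'P'}, 2.0), ({'Q'}, 4.5)]
rho2 = [(set(), 0.0), ({'P'}, 0.4), ({'Q'}, 1.2), ({'P'}, 2.0), ({'Q'}, 5.5)]
print(holds(rho1, 0, phi), holds(rho2, 0, phi))   # True False
\end{verbatim}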
\paragraph{\emph{Safety relative to the divergence of time}}
Recall that we require the timestamps of any infinite timed word to
be a strictly-increasing divergent sequence. Based upon this assumption, we
define \emph{safety properties} in exactly the same way as in the qualitative case~\cite{Alpern1987}; for example, (\ref{for:example}) is a safety property as any infinite timed word $u'$ violating $\varphi$ must have a prefix $u$
such that there is a $P$-event in $u$ with no $Q$-event in the following three time units.
On the other hand, had we allowed Zeno timed words, $\varphi$ would not be safety as
\[
(\{P\}, 0)(\{P\}, 1)(\{P\}, 1 + \frac{1}{2})(\{P\}, 1 + \frac{1}{2} + \frac{1}{4}) \dots
\]
violates $\varphi$ without having a prefix that cannot be extended into an infinite timed word
satisfying $\varphi$.
The notion we adopt here is called
\emph{safety relative to the divergence of time} in the literature~\cite{Henzinger1992}.
\paragraph{\emph{The continuous semantics}}
Another way to interpret metric logics is to regard time as a continuous entity;
a behaviour of a system can thus be viewed as a continuous function.
Formally, a $\mathbb{T}$-\emph{signal} over finite alphabet $\Sigma$ is a function $f: \mathbb{T} \mapsto \Sigma$ that
is \emph{finitely variable}, i.e.,~the restriction of $f$ to a subinterval of $\mathbb{T}$ of finite length
has only a finite number of discontinuities.
We refer to a $\mathbb{T}$-signal simply as a \emph{signal} when $\mathbb{T} = \mathbb{R}_{\geq 0}$.
With each signal $f$ over $\Sigma_{\mathbf{P}}$ we associate a structure $M_f$.
Its universe $U_f$ is $\mathbb{T}$.
The order relation $<$ and monadic predicates in $\mathbf{P}$ are interpreted in the expected way,
e.g., $P(x)$ holds in $M_f$ iff $P \in f(x)$.
The binary \emph{distance predicate} $d(x, x') \sim c$ holds iff $|x - x'| \sim c$.
We write $M_f, t_0, \ldots, t_{n-1} \models \vartheta(x_0, \ldots, x_{n-1})$
(or $f, t_0, \ldots, t_{n-1} \models \vartheta(x_0, \ldots, x_{n-1})$)
if $t_0, \ldots, t_{n-1} \in U_f$
and $\vartheta(t_0, \ldots, t_{n-1})$ holds in $M_f$.
The notions of equivalence of formulas, expressiveness of
metric logics, etc.\ are defined as in the case of timed words.
The satisfaction relation for \textup{\textmd{\textsf{MTL}}}{} over signals is defined as follows.
We write $f \models \varphi$ if $f, 0 \models \varphi$.
\begin{defi}
The satisfaction relation $f, t \models \varphi$ for an \emph{\textup{\textmd{\textsf{MTL}}}{}}
formula $\varphi$, a signal $f$ and $t \in U_f$ is defined as follows:
\begin{itemize}
\item $f, t \models P$ iff $P(t)$ holds in $M_f$
\item $f, t \models \mathbf{true}$
\item $f, t \models \varphi_1 \wedge \varphi_2$ iff $f, t \models \varphi_1$ and $f, t \models \varphi_2$
\item $f, t \models \neg \varphi$ iff $f, t \not \models \varphi$
\item $f, t \models \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ iff there exists $t' > t$, $t' \in \mathbb{T}$
such that $f, t' \models \varphi_2$, $t' - t \in I$
and $f, t'' \models \varphi_1$ for all $t''$ with $t < t'' < t'$
\item $f, t \models \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ iff there exists
$t' < t$, $t' \in \mathbb{T}$ such that $f, t' \models \varphi_2$, $t - t' \in I$
and $f, t'' \models \varphi_1$ for all $t''$ with $t' < t'' < t$.
\end{itemize}
\end{defi}
\paragraph{\emph{Relating the two semantics}}
Note that timed words can be regarded as a particular kind of signal:
for a given $\mathbb{T}$-timed word $\rho$ over $\Sigma_{\mathbf{P}}$,
we can introduce a `silent' monadic predicate $P_\epsilon$ and construct the corresponding $\mathbb{T}$-signal $f^{\rho}$ over $\Sigma_{\mathbf{P}'}$,
where $\mathbf{P}' = \mathbf{P} \cup \{ P_\epsilon \}$, as follows:
\begin{itemize}
\item $f^{\rho}(\tau_i) = \sigma_i$ for all $i$, $0 \leq i < |\rho|$
\item $f^{\rho}(t) = \{P_\epsilon\}$ for every other $t \in \mathbb{T}$, i.e.,~for all $t$ with $t \notin \rho$.
\end{itemize}
This enables us to interpret metric logics over timed words `continuously'.
We can thus compare the expressiveness of metric logics in both semantics by restricting the models of the continuous interpretations of metric logics to signals of this form (i.e.,~$f^\rho$ for some timed word $\rho$).
For example, we say that continuous \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} is at least as
expressive as pointwise \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} since for each \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula $\vartheta_{\mathit{pw}}(x)$,
there is an `equivalent' \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula $\vartheta_{\mathit{cont}}(x)$ such that
$\rho, t \models \vartheta_{\mathit{pw}}(x)$ iff $f^{\rho}, t \models \vartheta_{\mathit{cont}}(x)$.
\begin{exa}
Consider the timed word $\rho$ illustrated in Figure~\ref{fig:semanticsexample}
where the red boxes denote $P$-events.
The \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formula
\[
\varphi = \LTLdiamond (\LTLdiamond_{=1} P)
\]
does not hold at the beginning of $\rho$ in the pointwise semantics (i.e.,~$\rho \not \models \varphi$) since
there is no event at exactly one time unit before the second event in $\rho$.
On the other hand, $\varphi$ holds
at the beginning of $\rho$ in the continuous semantics
(i.e.,~$f^\rho \models \varphi$) since
there is a point (at which $P_\epsilon$ holds) at exactly one time unit before
the second event in $f^\rho$.
We can, however, simulate the pointwise semantics with
\[
\varphi' = \LTLdiamond \Bigl(\neg P_\epsilon \wedge \bigl(\LTLdiamond_{=1} (\neg P_\epsilon \wedge P) \bigr)\Bigr)\,,
\]
for which we have $\pi \models \varphi$ iff $f^{\pi} \models \varphi'$
for all timed words $\pi$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[|-|, loosely dashed] (0pt,0pt) -- (280pt,0pt) node[at start, left] {$\rho$};
\draw[loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start,below] {$0$};
\draw[loosely dashed] (70pt,-10pt) -- (70pt,10pt) node[at start,below] {$0.5$};
\draw[loosely dashed] (140pt,-10pt) -- (140pt,10pt) node[at start,below] {$1$};
\draw[loosely dashed] (210pt,-10pt) -- (210pt,10pt) node[at start,below] {$1.5$};
\draw[loosely dashed] (280pt,-10pt) -- (280pt,10pt) node[at start,below] {$2$};
\draw[draw=black, fill=red] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (209pt, -4pt) rectangle (211pt, 4pt);
\end{scope}
\end{tikzpicture}
\caption{The timed word $\rho$.}%
\label{fig:semanticsexample}
\end{figure}
\end{exa}
As we see in the example above, the pointwise and continuous interpretations
of metric logics differ in the range of first-order quantifiers.
While the ability to quantify over time points \emph{between} events appears to increase the expressiveness
of metric logics, this is not the case for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} as both interpretations
are indeed equally expressive (when one considers only signals of the form $f^\rho$)~\cite{DSouza2007}.\footnote{The translation in~\cite{DSouza2007} also holds in a time-bounded setting with trivial modifications.}
By contrast, \textup{\textmd{\textsf{MTL}}}{} is strictly more expressive in the continuous semantics
than in the pointwise semantics~\cite{Prabhakar2006}.
\subsection{Model checking}
A key advantage in using \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} (or \textup{\textmd{\textsf{LTL}}}{}) in verification is that its \emph{model-checking}
problem is $\mathrm{PSPACE}$-complete~\cite{Sistla1985}, much better than the complexity of the same problem for \textmd{\textsf{FO[$<$]}} (non-elementary~\cite{Stockmeyer1974}).
Given a B\"uchi automaton $\mathcal{A}$ that models the system and a specification written as an \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} formula $\Phi$, the corresponding model-checking problem
asks whether the language defined by $\mathcal{A}$ is included in
the language defined by $\Phi$. By a fundamental result in verification---\textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} formulas
can be translated into B\"uchi automata~\cite{Wolper1983}---this reduces to the
\emph{emptiness} problem on the product B\"uchi automaton
of $\mathcal{A}$ and the B\"uchi automaton $\mathcal{B}_{\neg \Phi}$ translated from $\neg \Phi$.
The latter problem can be solved, e.g., by a standard fixed-point algorithm~\cite{Emerson1986}.
This is sometimes called the \emph{automata-theoretic approach} to \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} model checking.
In the real-time setting, given a timed (B\"uchi) automaton $\mathcal{A}$ and a specification $\varphi$
(e.g., a formula of some metric logic), the corresponding model-checking problem asks
whether the timed (finite-word) language defined by $\mathcal{A}$ is included in the timed (finite-word) language
defined by $\varphi$. By analogy with the untimed case, one may solve this problem
by first translating $\neg \varphi$ into a timed automaton $\mathcal{A}_{\neg \varphi}$
and then checking the emptiness of the product of $\mathcal{A}$ and $\mathcal{A}_{\neg \varphi}$.
This methodology works for certain metric logics;
for example, each formula of \textup{\textmd{$\textsf{MITL}_\textsf{fut}$}}{} (the non-punctual fragment of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{})
can be translated into a timed automaton, and the model-checking problem for timed automata
against \textup{\textmd{$\textsf{MITL}_\textsf{fut}$}}{} is $\mathrm{EXPSPACE}$-complete~\cite{Alur1996}.
However, this does not apply to \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} as
\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formulas, in general, cannot be translated into timed automata.
\subsection{Monitoring}
The \emph{prefix} problem~\cite{Bauer2013} asks the following:
given a specification $\Phi$ and a finite word $u$, do all infinite extensions
of $u$ satisfy $\Phi$? If the answer is `yes', then we say that $u$ is a \emph{good prefix} for $\Phi$.
Similarly, $u$ is a \emph{bad prefix} for $\Phi$ if the answer to the dual problem is `yes', i.e.,~none of its infinite extensions satisfies $\Phi$.
The \emph{monitoring} problem takes instead a specification $\Phi$ and
an infinite word $u'$ as inputs. In contrast to standard decision problems,
the latter input is given \emph{incrementally}, i.e.,~one symbol at a time;
a monitor (a procedure that `solves' the monitoring problem) is required to
continuously check whether the currently accumulated finite word $u$
(a prefix of $u'$) is a good/bad prefix for $\Phi$ and report as necessary.
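As a toy illustration of bad-prefix detection, and emphatically not the monitoring procedure developed later in this article, the following Python sketch watches the safety formula $\LTLsquare (P \implies \LTLdiamond_{< 3} Q)$ of (\ref{for:example}) online: it keeps the timestamps of $P$-events whose $Q$-obligation is still open and reports a bad prefix as soon as one of these obligations can no longer be met, since every later event must carry a timestamp beyond the deadline. All names are ours; for this safety formula no finite trace is a good prefix, so no `good' verdict is ever produced.
\begin{verbatim}
# Sketch: online bad-prefix detection for  [] (P -> <>_{<3} Q)  over a timed
# word received one event at a time. Verdicts: 'bad' or 'inconclusive'.

class BadPrefixMonitor:
    def __init__(self, bound=3.0):
        self.bound = bound
        self.pending = []        # timestamps of P-events awaiting a Q
        self.position = 0

    def step(self, predicates, timestamp):
        i, self.position = self.position, self.position + 1
        if 'Q' in predicates:    # a Q discharges every P with 0 < delay < bound
            self.pending = [t for t in self.pending
                            if not (0 < timestamp - t < self.bound)]
        if any(timestamp - t >= self.bound for t in self.pending):
            return 'bad'         # this obligation can never be met any more
        if 'P' in predicates and i > 0:
            # position 0 is unconstrained by the strict-future modalities
            self.pending.append(timestamp)
        return 'inconclusive'

mon = BadPrefixMonitor()
trace = [(set(), 0.0), ({'P'}, 0.4), ({'Q'}, 1.2), ({'P'}, 2.0), (set(), 5.5)]
print([mon.step(*ev) for ev in trace])
# ['inconclusive', 'inconclusive', 'inconclusive', 'inconclusive', 'bad']
\end{verbatim}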
\section{Expressive completeness of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} over bounded timed words}
In this section, we study the expressiveness of \textup{\textmd{\textsf{MTL}}}{} (and its various fragments and extensions) in
a time-bounded pointwise setting, i.e.,~all timed words are assumed to have durations less than a positive integer $N$.
We first recall \textup{\textmd{\textsf{MTL}}}{} EF games~\cite{Pandya2011},
which serve as our main tool in proving expressiveness results.
Then we demonstrate a strict hierarchy of metric temporal logics (based on their expressiveness
over bounded timed words) as we extend \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} incrementally towards \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}.
Finally, we show that \textup{\textmd{\textsf{MTL}}}{}, equipped with both
the forwards and backwards temporal modalities generalised `Until' ($\mathbin{\mathfrak{U}}_I^c$) and
generalised `Since' ($\mathbin{\mathfrak{S}}_I^c$),
has precisely the same expressive
power as \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} over bounded time domains in the pointwise
semantics.
For the time-bounded satisfiability and model-checking problems,
we show that the relevant constructions (and hence the complexity bounds) for \textup{\textmd{\textsf{MTL}}}{} in~\cite{Ouaknine2009}
carry over to our new logic \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}.
\subsection{\textup{\textmd{\textsf{MTL}}}{} EF games}
Ehrenfeucht-Fra\"{\i}ss\'{e} games are handy tools in proving
the inexpressibility of certain properties in first-order logics.
In many proofs in this section,
we resort to (extended versions of) Pandya and Shah's \textup{\textmd{\textsf{MTL}}}{} \emph{EF games} on timed words~\cite{Pandya2011}, which
are themselves a timed generalisation of Etessami and Wilke's \textup{\textmd{\textsf{LTL}}}{} EF games~\cite{Etessami1996}.
An $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game starts with round $0$ and ends with round $m$.
The game is played by two players (\emph{Spoiler} and \emph{Duplicator}) on a pair of timed words $\rho$ and~$\rho'$.\footnote{We follow the convention
that \emph{Spoiler} is male and \emph{Duplicator} is female.}
A \emph{configuration} is a pair of positions $(i, j)$, respectively in $\rho$ and~$\rho'$.
In each round $r$ ($0 \leq r \leq m$), the game proceeds as follows.
\emph{Spoiler} first checks whether the two events that correspond to the current configuration $(i_r, j_r)$
in $\rho$ and $\rho'$ satisfy the same set of monadic predicates.
If this is not the case then he wins the game. Otherwise if $r < m$, \emph{Spoiler}
chooses an interval $I \subseteq (0, \infty)$ with endpoints in $\mathbb{N} \cup \{ \infty \}$
and plays either of the following moves:
\begin{itemize}
\item \emph{$\mathbin{\mathcal{U}}_{I}$-move}: \emph{Spoiler} chooses one of the two timed words (say $\rho$).
He then picks $i_r'$ such that $i_r < i_r'$ and $\tau_{i_r'} - \tau_{i_r} \in I$ where $\tau_{i_r'}$ and $\tau_{i_r}$ are the corresponding timestamps in $\rho$ (if there is no such $i_r'$ then \emph{Duplicator} wins the game).
\emph{Duplicator} must choose a position $j_r' > j_r$ in $\rho'$ such that the difference of
the corresponding timestamps in $\rho'$ is in $I$. If she cannot find such a position then \emph{Spoiler}
wins the game. Otherwise, \emph{Spoiler} plays either of the following `parts':
\begin{itemize}
\item \emph{$\LTLdiamond$-part}: The game proceeds to the next round with $(i_{r+1}, j_{r+1}) = (i_r', j_r')$.
\item \emph{$\mathbin{\mathcal{U}}$-part}: If $j_r' = j_r + 1$ the game proceeds to the next round
with $(i_{r+1}, j_{r+1}) = (i_r', j_r')$. If $i_r' = i_r + 1$ but $j_r' \neq j_r + 1$ then \emph{Spoiler} wins the game.
Otherwise \emph{Spoiler} picks another position $j_r''$ in $\rho'$ such that $j_r < j_r'' < j_r'$.
\emph{Duplicator} has to choose a position $i_r''$ in $\rho$ such that $i_r < i_r'' < i_r'$ in response.
If she cannot find such a position then \emph{Spoiler} wins the game;
otherwise the game proceeds to the next round with $(i_{r+1}, j_{r+1}) = (i_r'', j_r'')$.
\end{itemize}
\item \emph{$\mathbin{\mathcal{S}}_{I}$-move}: Defined symmetrically.
\end{itemize}
We say that \emph{Duplicator}
has a \emph{winning strategy} for the $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\rho$ and $\rho'$ that starts from configuration $(i, j)$
if and only if, no matter how \emph{Spoiler} plays, he cannot win the $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\rho$ and $\rho'$ with $(i_0, j_0) = (i, j)$.
If this is not the case then we say that \emph{Spoiler} has a winning strategy.
It is obvious that the moves in \textup{\textmd{\textsf{MTL}}}{} EF games
are closely related to the semantics of modalities in \textup{\textmd{\textsf{MTL}}}{} formulas.
For example, the $\mathbin{\mathcal{U}}_I$-move can be seen as \emph{Spoiler}'s attempt
to verify that a formula of the form $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$
holds at $i_r$ in $\rho$ if and only if it holds at $j_r$ in $\rho'$: the $\LTLdiamond$-part
and the remaining rounds verify that $\varphi_2$ holds at $i_r'$ in $\rho$
iff it holds at $j_r'$ in $\rho'$, whereas the $\mathbin{\mathcal{U}}$-part and the remaining rounds
verify that $\varphi_1$ holds at all $i_r''$, $i_r < i_r'' < i_r'$ in $\rho$
iff it holds at all $j_r''$, $j_r < j_r'' < j_r'$ in $\rho'$.
Formally, the following theorem relates the number of rounds of \textup{\textmd{\textsf{MTL}}}{} EF games to the
\emph{modal depth} (i.e.,~the maximal depth of nesting of modalities) of
\textup{\textmd{\textsf{MTL}}}{} formulas.
\begin{thmC}[\cite{Pandya2011}]\label{thm:mtlef}
For (finite) timed words $\rho, \rho'$ and an \emph{\textup{\textmd{\textsf{MTL}}}{}} formula $\varphi$ of modal depth $\leq m$,
if \emph{Duplicator} has a winning strategy for the $m$-round \emph{\textup{\textmd{\textsf{MTL}}}{}} EF game on
$\rho, \rho'$ with $(i_0, j_0) = (0, 0)$, then
\[
\rho \models \varphi \iff \rho' \models \varphi \,.
\]
\end{thmC}
In other words, $\rho, \rho'$ can be distinguished by an \textup{\textmd{\textsf{MTL}}}{} formula
of modal depth $\leq m$ if and only if \emph{Spoiler} has
a winning strategy for the $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\rho, \rho'$
with $(i_0, j_0) = (0, 0)$.
Note that specialised versions of Theorem~\ref{thm:mtlef} also hold for sublogics of \textup{\textmd{\textsf{MTL}}}{};
for example, the corresponding theorem for \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} is obtained by banning the $\mathbin{\mathcal{S}}_I$-move.
\begin{exa}
Consider the timed words $\rho$ and $\rho'$ illustrated in Figure~\ref{fig:mtlefexample} where
the white, red and blue boxes represent events at which no monadic predicate holds,
$P$-events, and $Q$-events, respectively. The positions are labelled above
the events.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[|-|, loosely dashed] (0pt,40pt) -- (140pt,40pt) node[at start, left] {$\rho$};
\draw[|-|, loosely dashed] (0pt,0pt) -- (140pt,0pt) node[at start, left] {$\rho'$};
\draw[loosely dashed] (0pt,-10pt) -- (0pt,0pt) node[at start,below] {$0$};
\draw[loosely dashed] (0pt, 15pt) -- (0pt,40pt);
\draw[loosely dashed] (140pt,-10pt) -- (140pt,50pt) node[at start,below] {$1$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (1pt, 36pt) rectangle (-1pt, 44pt);
\node[above] at (0pt, 5pt) {{\tiny $0$}};
\node[above] at (0pt, 45pt) {{\tiny $0$}};
\draw[draw=black, fill=red] (19pt, -4pt) rectangle (21pt, 4pt);
\draw[draw=black, fill=red] (19pt, 36pt) rectangle (21pt, 44pt);
\node[above] at (20pt, 5pt) {{\tiny $1$}};
\node[above] at (20pt, 45pt) {{\tiny $1$}};
\draw[draw=black, fill=red] (39pt, -4pt) rectangle (41pt, 4pt);
\draw[draw=black, fill=red] (39pt, 36pt) rectangle (41pt, 44pt);
\node[above] at (40pt, 5pt) {{\tiny $2$}};
\node[above] at (40pt, 45pt) {{\tiny $2$}};
\draw[draw=black, fill=white] (59pt, -4pt) rectangle (61pt, 4pt);
\draw[draw=black, fill=red] (59pt, 36pt) rectangle (61pt, 44pt);
\node[above] at (60pt, 5pt) {{\tiny $3$}};
\node[above] at (60pt, 45pt) {{\tiny $3$}};
\draw[draw=black, fill=red] (79pt, -4pt) rectangle (81pt, 4pt);
\draw[draw=black, fill=red] (79pt, 36pt) rectangle (81pt, 44pt);
\node[above] at (80pt, 5pt) {{\tiny $4$}};
\node[above] at (80pt, 45pt) {{\tiny $4$}};
\draw[draw=black, fill=red] (99pt, -4pt) rectangle (101pt, 4pt);
\draw[draw=black, fill=red] (99pt, 36pt) rectangle (101pt, 44pt);
\node[above] at (100pt, 5pt) {{\tiny $5$}};
\node[above] at (100pt, 45pt) {{\tiny $5$}};
\draw[draw=black, fill=blue] (119pt, -4pt) rectangle (121pt, 4pt);
\draw[draw=black, fill=blue] (119pt, 36pt) rectangle (121pt, 44pt);
\node[above] at (120pt, 5pt) {{\tiny $6$}};
\node[above] at (120pt, 45pt) {{\tiny $6$}};
\end{scope}
\end{tikzpicture}
\caption{$\rho$ and $\rho'$ can be distinguished by $P \mathbin{\mathcal{U}} Q$.}%
\label{fig:mtlefexample}
\end{figure}
In the $1$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\rho$, $\rho'$ with $(i_0, j_0) = (0, 0)$,
a winning strategy for \emph{Spoiler} can be described as follows:
\begin{enumerate}
\item The two events that correspond to $(i_0, j_0) = (0, 0)$ in $\rho$ and $\rho'$ satisfy
the same set of monadic predicates, so \emph{Spoiler} does not win here.
\item \emph{Spoiler} chooses $I = (0, \infty)$ and $i_0' = 6$ in $\rho$.
\item If \emph{Duplicator} chooses $j_0' \neq 6$ in $\rho'$, she will lose at the beginning
of round $1$. So she chooses $j_0' = 6$.
\item \emph{Spoiler} plays the $\mathbin{\mathcal{U}}$-part and chooses $j_0'' = 3$ in $\rho'$.
\item \emph{Duplicator} can only choose $i_0''$ in $\rho$ such that $1 \leq i_0'' \leq 5$.
But she will then lose at the beginning of round $1$.
\end{enumerate}
It follows that there is an \textup{\textmd{\textsf{MTL}}}{} formula of modal depth $1$
that distinguishes $\rho$ and $\rho'$.
One such formula is $P \mathbin{\mathcal{U}} Q$, which can be obtained
from \emph{Spoiler}'s winning strategy above.
\end{exa}
\subsection{A hierarchy of expressiveness}\label{subsec:hierarchy}
We now present a sequence of successively more expressive
extensions of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} over bounded timed words.
The technique we use here is to construct two \emph{families} of
models---parametrised by $m$---such that there is a certain formula
of the more expressive logic telling them apart for all $m$, yet they cannot be distinguished by any
formula of the less expressive logic with modal depth $\leq m$ (i.e.,~\emph{Duplicator} has
a winning strategy in the corresponding $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game).
Along the way we highlight the key features that give rise to the differences in expressiveness.
The necessity of introducing new modalities
is justified by the fact that none of the known extensions leads to expressive completeness.
\paragraph{\emph{Definability of the beginning of time}}
Recall that \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} and \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} have the same expressiveness over $\ropen{0, N}$-signals~\cite{Ouaknine2009}.
This result fails in the pointwise semantics.
\begin{prop}[Corollary of~{\cite[Section 8]{Prabhakar2006}}]
\emph{\textup{\textmd{\textsf{MTL}}}{}} is strictly more expressive than \emph{\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}} over $\ropen{0, N}$-timed words.\footnote{The models constructed in~{\cite[Section 8]{Prabhakar2006}} are bounded timed words.}
\end{prop}
To explain this discrepancy between the two semantics, observe that a distinctive feature
of the continuous interpretation of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} is exploited in~\cite{Ouaknine2009}:
in any $\ropen{0, N}$-signal, the formula $\LTLdiamond_{=(N-1)} \mathbf{true}$ holds in $\ropen{0, 1}$
and nowhere else. One can make use of conjunctions of similar formulas to determine the integer part of
the current instant (where the relevant formula is being evaluated).
Unfortunately, since the duration of a given bounded timed word is not known \emph{a priori},
this trick does not work for \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} in the pointwise semantics.
For example, the formula $\LTLdiamond_{=1} \mathbf{true}$ does not hold at any position in the
$\ropen{0, 2}$-timed word $\rho = (\sigma_0, 0)(\sigma_1, 0.5)$.
However, the same effect can be achieved in \textup{\textmd{\textsf{MTL}}}{} by using past modalities.
Let
\[
\varphi_{i, i+1} = \LTLdiamondminus_{\ropen{i, i+1}} (\neg \LTLdiamondminus \mathbf{true})\label{def:intformulas}
\]
and
$\Phi_{\mathit{int}} = \{\varphi_{i, i+1} \mid i \in \mathbb{N}\}$.
Note that the subformula $\neg \LTLdiamondminus \mathbf{true}$ can only hold at the very first event (with timestamp $0$),
thus $\varphi_{i, i+1}$ holds only at events with timestamps in $\ropen{i, i + 1}$.
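For a concrete illustration, consider the $\ropen{0, 3}$-timed word $(\sigma_0, 0)(\sigma_1, 0.4)(\sigma_2, 1.7)(\sigma_3, 2.2)$: the formula $\varphi_{0, 1}$ holds at the event with timestamp $0.4$, $\varphi_{1, 2}$ holds at the event with timestamp $1.7$, and $\varphi_{2, 3}$ holds at the event with timestamp $2.2$, since in each case the initial event lies the corresponding amount of time in the past.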
Denote by \textup{\textmd{$\textsf{MTL}_\textsf{fut}$[$\Phi_{\textit{int}}$]}}{} the extension of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} obtained by allowing these formulas as atomic formulas.
It turns out that this very restrictive use of past modalities strictly increases the expressiveness of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}
over bounded timed words. Indeed, the main result of this section (Theorem~\ref{thm:boundedexpcomp})
crucially depends on the use of these formulas.
\begin{prop}\label{prop:unit-strict}
\emph{\textup{\textmd{$\textsf{MTL}_\textsf{fut}$[$\Phi_{\textit{int}}$]}}{}} is strictly more expressive than \emph{\textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}} over $\ropen{0, N}$-timed words.
\end{prop}
\begin{proof}
For a given $m \in \mathbb{N}$, we construct the following models:
\[
\begin{array}{rcl}
\mathcal{A}_{m} & = & (\emptyset, 0)(\emptyset, 1-\frac{1.5}{2m+5})(\emptyset, 1-\frac{0.5}{2m+5})\ldots(\emptyset, 1+\frac{m+2.5}{2m+5}) \,, \\
\mathcal{B}_{m} & = & (\emptyset, 0)(\emptyset, 1-\frac{0.5}{2m+5})(\emptyset, 1+\frac{0.5}{2m+5})\ldots(\emptyset, 1+\frac{m+3.5}{2m+5}) \,.
\end{array}
\]
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[|-|, loosely dashed] (0pt,40pt) -- (280pt,40pt) node[at start, left] {$\mathcal{A}_{m}$};
\draw[|-|, loosely dashed] (0pt,0pt) -- (280pt,0pt) node[at start, left] {$\mathcal{B}_{m}$};
\draw[loosely dashed] (0pt,-10pt) -- (0pt,50pt) node[at start,below] {$0$};
\draw[loosely dashed] (140pt,-10pt) -- (140pt,50pt) node[at start,below] {$1$};
\draw[loosely dashed] (210pt,-10pt) -- (210pt,50pt) node[at start,below] {$1.5$};
\draw[loosely dashed] (280pt,-10pt) -- (280pt,50pt) node[at start,below] {$2$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (1pt, 36pt) rectangle (-1pt, 44pt);
\draw[draw=black, fill=white] (126pt, 36pt) rectangle (124pt, 44pt);
\draw[draw=black, fill=white] (136pt, -4pt) rectangle (134pt, 4pt);
\draw[draw=black, fill=white] (136pt, 36pt) rectangle (134pt, 44pt);
\draw[draw=black, fill=white] (146pt, -4pt) rectangle (144pt, 4pt);
\draw[draw=black, fill=white] (146pt, 36pt) rectangle (144pt, 44pt);
\draw[draw=black, fill=white] (156pt, -4pt) rectangle (154pt, 4pt);
\draw[-, very thick, loosely dotted] (165pt,20pt) -- (190pt,20pt);
\draw[draw=black, fill=white] (201pt, 36pt) rectangle (199pt, 44pt);
\draw[draw=black, fill=white] (211pt, -4pt) rectangle (209pt, 4pt);
\draw[draw=black, fill=white] (211pt, 36pt) rectangle (209pt, 44pt);
\draw[draw=black, fill=white] (221pt, -4pt) rectangle (219pt, 4pt);
\end{scope}
\end{tikzpicture}
\caption{Models $\mathcal{A}_{m}$ and $\mathcal{B}_{m}$.}%
\label{fig:unit-expressiveness}
\end{figure}
The models are illustrated in Figure~\ref{fig:unit-expressiveness}, where each white box represents an event (at which no monadic predicate holds).
We play an $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\mathcal{A}_{m}$, $\mathcal{B}_{m}$ and allow only $\mathbin{\mathcal{U}}_{I}$-moves.
After round $0$, either (i) $i_1 = j_1 \geq 1$ (in which case \emph{Duplicator} can, obviously, win the remaining rounds)
or (ii) $(i_1, j_1) = (2, 1)$ (\emph{Spoiler} chooses position $2$ in $\mathcal{A}_{m}$) or $(i_1, j_1) = (3, 2)$ (\emph{Spoiler} chooses position $2$ in $\mathcal{B}_{m}$). In the latter case, it is easy to verify that
in any remaining round $r$, \emph{Duplicator} can make $i_{r+1} = j_{r+1} \geq 1$ or $(i_{r+1}, j_{r+1}) = (i_r + 1, j_r + 1)$.
It follows from the \textup{\textmd{\textsf{MTL}}}{} EF Theorem that no \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formula of modal depth $\leq m$ can distinguish $\mathcal{A}_{m}$
and $\mathcal{B}_{m}$; however, the formula
\[
\LTLdiamond_{(0, 1)} ( \varphi_{0, 1} \wedge \LTLcircle \varphi_{0, 1} ) \,,
\]
which says ``within the next time unit there are two consecutive events with timestamps in $\ropen{0, 1}$'', distinguishes $\mathcal{A}_{m}$ and $\mathcal{B}_{m}$ for any $m \in \mathbb{N}$
(when evaluated at position $0$).
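Indeed, $\mathcal{A}_{m}$ contains two consecutive events with timestamps strictly between $0$ and $1$, which provide a witness for this formula, whereas the only event of $\mathcal{B}_{m}$ with timestamp in $(0, 1)$ is immediately followed by an event with timestamp at least $1$.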
\end{proof}
\paragraph{\emph{Past modalities}}
The conservative extension above uses past modalities in a very restricted way.
This is not sufficient for obtaining the full expressiveness of \textup{\textmd{\textsf{MTL}}}{}:
the following proposition says that non-trivial nesting of future modalities and past modalities
gives more expressiveness.
\begin{prop}\label{prop:past-strict}
\emph{\textup{\textmd{\textsf{MTL}}}{}} is strictly more expressive than \emph{\textup{\textmd{$\textsf{MTL}_\textsf{fut}$[$\Phi_{\textit{int}}$]}}{}} over $\ropen{0, N}$-timed words.
\end{prop}
\begin{proof}
For a given $m \in \mathbb{N}$, we construct
\[
\begin{array}{rcl}
\mathcal{C}_{m} & = & (\emptyset, 0)(\emptyset, \frac{0.5}{2m+3})(\emptyset, \frac{1.5}{2m+3})\ldots(\emptyset, 2 - \frac{0.5}{2m+3}) \,.
\end{array}
\]
$\mathcal{D}_{m}$ is constructed as $\mathcal{C}_{m}$ except that the event at time $\frac{m+1.5}{2m+3} = 0.5$ is missing.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[|-|, loosely dashed] (0pt,40pt) -- (260pt,40pt) node[at start, left] {$\mathcal{C}_{m}$};
\draw[|-|, loosely dashed] (0pt,0pt) -- (260pt,0pt) node[at start, left] {$\mathcal{D}_{m}$};
\draw[loosely dashed] (0pt,-10pt) -- (0pt,50pt) node[at start,below] {$0$};
\draw[loosely dashed] (130pt,-10pt) -- (130pt,50pt) node[at start,below] {$1$};
\draw[loosely dashed] (260pt,-10pt) -- (260pt,50pt) node[at start,below] {$2$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (1pt, 36pt) rectangle (-1pt, 44pt);
\draw[draw=black, fill=white] (6pt, -4pt) rectangle (4pt, 4pt);
\draw[draw=black, fill=white] (6pt, 36pt) rectangle (4pt, 44pt);
\draw[draw=black, fill=white] (16pt, -4pt) rectangle (14pt, 4pt);
\draw[draw=black, fill=white] (16pt, 36pt) rectangle (14pt, 44pt);
\draw[-, very thick, loosely dotted] (26pt,20pt) -- (44pt,20pt);
\draw[draw=black, fill=white] (56pt, -4pt) rectangle (54pt, 4pt);
\draw[draw=black, fill=white] (56pt, 36pt) rectangle (54pt, 44pt);
\draw[draw=black, fill=white] (66pt, 36pt) rectangle (64pt, 44pt);
\draw[draw=black, fill=white] (76pt, -4pt) rectangle (74pt, 4pt);
\draw[draw=black, fill=white] (76pt, 36pt) rectangle (74pt, 44pt);
\draw[-, very thick, loosely dotted] (86pt,20pt) -- (104pt,20pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (114pt, 36pt) rectangle (116pt, 44pt);
\draw[draw=black, fill=white] (124pt, -4pt) rectangle (126pt, 4pt);
\draw[draw=black, fill=white] (124pt, 36pt) rectangle (126pt, 44pt);
\draw[draw=black, fill=white] (136pt, -4pt) rectangle (134pt, 4pt);
\draw[draw=black, fill=white] (136pt, 36pt) rectangle (134pt, 44pt);
\draw[draw=black, fill=white] (146pt, -4pt) rectangle (144pt, 4pt);
\draw[draw=black, fill=white] (146pt, 36pt) rectangle (144pt, 44pt);
\draw[-, very thick, loosely dotted] (156pt,20pt) -- (234pt,20pt);
\draw[draw=black, fill=white] (246pt, -4pt) rectangle (244pt, 4pt);
\draw[draw=black, fill=white] (246pt, 36pt) rectangle (244pt, 44pt);
\draw[draw=black, fill=white] (256pt, -4pt) rectangle (254pt, 4pt);
\draw[draw=black, fill=white] (256pt, 36pt) rectangle (254pt, 44pt);
\end{scope}
\end{tikzpicture}
\caption{Models $\mathcal{C}_{m}$ and $\mathcal{D}_{m}$.}%
\label{fig:past-expressiveness}
\end{figure}
The models are illustrated in Figure~\ref{fig:past-expressiveness}, where each white box represents an event (at which no monadic predicate holds).
We play an $m$-round \textup{\textmd{\textsf{MTL}}}{} EF game on $\mathcal{C}_{m}$ and $\mathcal{D}_{m}$, allowing only $\mathbin{\mathcal{U}}_{I}$-moves.
For simplicity, assume that we can use special monadic predicates to refer to formulas in $\Phi_{\mathit{int}}$.
In each round $r$, \emph{Duplicator} can either make (i) $i_{r+1} = j_{r+1} + 1$ and $i_{r+1} \geq m + 3$ (in which case she can win the remaining rounds)
or (ii) $i_{r+1} = j_{r+1}$ and $i_{r+1}$ is not equal to $2m + 2$, $2m + 3$ or $4m + 5$.
It follows from the \textup{\textmd{\textsf{MTL}}}{} EF Theorem that no \textup{\textmd{$\textsf{MTL}_\textsf{fut}$[$\Phi_{\textit{int}}$]}}{} formula of modal depth $\leq m$ can distinguish $\mathcal{C}_{m}$
and $\mathcal{D}_{m}$; but the formula
\[
\LTLsquare_{(1, 2)} (\LTLdiamondminus_{= 1} \mathbf{true}) \,,
\]
which says ``for each event in $(1, 2)$ from now, there is a corresponding event exactly $1$ time unit earlier'',
distinguishes $\mathcal{C}_{m}$ and $\mathcal{D}_{m}$ for any $m \in \mathbb{N}$ (when evaluated at position $0$).
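Indeed, the events of $\mathcal{C}_{m}$ with positive timestamps form a uniform grid of spacing $\frac{1}{2m+3}$, and $1$ is an integer multiple of this spacing, so every event with timestamp in $(1, 2)$ has a corresponding event exactly $1$ time unit earlier. In $\mathcal{D}_{m}$, the event with timestamp $1.5$ is still present, but its counterpart at $0.5$ has been removed, so the formula fails there.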
\end{proof}
\paragraph{\emph{Counting modalities}}
The modality $C_n(x, X)$ asserts that $X$ holds at at least $n$ points in the open interval $(x, x + 1)$.
The modalities $C_n$ for $n \geq 2$ are called \emph{counting modalities}.
It is well-known that these modalities are not expressible in \textup{\textmd{\textsf{MTL}}}{} over signals~\cite{Hirshfeld2007}.
For this reason, they (and variants thereof) are often used to prove
inexpressiveness results for various metric logics.
For example, the following property
\begin{itemize}
\item $P$ holds at an event at time $y$ in the future
\item $Q$ holds at an event at time $y' > y$
\item $R$ holds at an event at time $y'' > y' > y$
\item Both the $Q$-event and the $R$-event are within $(1, 2)$ from the $P$-event
\end{itemize}
can be expressed as the \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula
\[
\arraycolsep=0.3ex
\begin{array}{rcll}
\vartheta_{\mathit{pqr}}(x) & = & \exists y \, \biggl( x < y \ \wedge & P(y) \wedge \exists y' \, \Bigl(y < y' \wedge d(y, y') > 1 \wedge d(y, y') < 2 \wedge Q(y') \\
& & & {} \wedge \exists y'' \, \bigl(y' < y'' \wedge d(y, y'') > 1 \wedge d(y, y'') < 2 \wedge R(y'') \bigr)\Bigr)\biggr) \,,
\end{array}
\]
yet it has no equivalent in \textup{\textmd{\textsf{MTL}}}{} over timed words~\cite{Pandya2011}.
The difficulty here is that while we can easily write `there is a $Q$-event within $(1, 2)$
from a $P$-event in the future' as $\LTLdiamond(P \wedge \LTLdiamond_{(1, 2)} Q)$, it is not possible
to express `there is an $R$-event after the $Q$-event' and `that $R$-event is within $(1, 2)$ from the $P$-event'
simultaneously in \textup{\textmd{\textsf{MTL}}}{}.
Indeed, it was shown recently that in the continuous semantics, \textup{\textmd{\textsf{MTL}}}{} extended with counting modalities and their past counterparts
(which we denote by \mtlpastcnt{})
is expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}~\cite{Hunter2013}.
In other words, counting modalities are exactly what separates
the expressiveness of \textup{\textmd{\textsf{MTL}}}{} and \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} in the continuous semantics.
In the time-bounded pointwise case, however,
they add no expressiveness to \textup{\textmd{\textsf{MTL}}}{}.
To see this, observe that the following formula is equivalent to $\vartheta_{\mathit{pqr}}(x)$ over $\ropen{0, N}$-timed words (we make use of the formulas in $\Phi_{\mathit{int}}$ defined earlier):
\[
\arraycolsep=0.3ex
\begin{array}{ll}
\LTLdiamond \Biggl(\bigvee_{0 \leq i \leq N - 1} \biggl( P \wedge \varphi_{i, i + 1} \wedge \Bigl(& \underbrace{\LTLdiamond_{> 1} \bigl(Q \wedge \LTLdiamond(R \wedge \varphi_{i + 1, i + 2}) \bigr)}_\text{Case (i)} \\
& {} \vee \underbrace{\LTLdiamond_{< 2} \bigl(R \wedge \varphi_{i + 2, i + 3} \wedge \LTLdiamondminus (Q \wedge \varphi_{i + 2, i + 3}) \bigr)}_\text{Case (ii)} \\
& {} \vee \underbrace{\bigl(\LTLdiamond_{> 1} (Q \wedge \varphi_{i + 1, i + 2}) \wedge \LTLdiamond_{< 2} (R \wedge \varphi_{i + 2, i + 3}) \bigr)}_\text{Case (iii)} \Bigr) \biggr) \Biggr) \,.
\end{array}
\]
The three cases that correspond to the subformulas are illustrated in Figure~\ref{fig:countingex}
where time is measured relative to the very first event (with timestamp $0$).
Note how we use the `integer boundaries' as an alternative distance measure and thus ensure that
both the $Q$-event and the $R$-event are within $(1, 2)$ from the $P$-event.
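For instance, with $i = 0$ and the $P$-event at timestamp $0.3$: a $Q$-event at $1.5$ followed by an $R$-event at $1.8$ falls under Case~(i); a $Q$-event at $2.05$ followed by an $R$-event at $2.2$ falls under Case~(ii); and a $Q$-event at $1.9$ followed by an $R$-event at $2.1$ falls under Case~(iii). In each case both the $Q$-event and the $R$-event lie within $(1, 2)$ of the $P$-event, as required.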
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-20pt,0pt) -- (290pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$i$};
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$i + 1$};
\draw[-, loosely dashed] (240pt,-10pt) -- (240pt,10pt) node[at start, below] {$i + 2$};
\draw[draw=black, fill=red] (32pt, -4pt) rectangle (30pt, 4pt);
\draw[draw=black, fill=blue] (190pt, -4pt) rectangle (188pt, 4pt);
\draw[draw=black, fill=green] (224pt, -4pt) rectangle (222pt, 4pt);
\draw[-, loosely dashed] (-20pt,-60pt) -- (290pt,-60pt);
\draw[-, loosely dashed] (0pt,-70pt) -- (0pt,-50pt) node[at start, below] {$i$};
\draw[-, loosely dashed] (120pt,-70pt) -- (120pt,-50pt) node[at start, below] {$i + 1$};
\draw[-, loosely dashed] (240pt,-70pt) -- (240pt,-50pt) node[at start, below] {$i + 2$};
\draw[draw=black, fill=red] (32pt, -64pt) rectangle (30pt, -56pt);
\draw[draw=black, fill=blue] (246pt, -64pt) rectangle (244pt, -56pt);
\draw[draw=black, fill=green] (256pt, -64pt) rectangle (254pt, -56pt);
\draw[-, loosely dashed] (-20pt,-120pt) -- (290pt,-120pt);
\draw[-, loosely dashed] (0pt,-130pt) -- (0pt,-110pt) node[at start, below] {$i$};
\draw[-, loosely dashed] (120pt,-130pt) -- (120pt,-110pt) node[at start, below] {$i + 1$};
\draw[-, loosely dashed] (240pt,-130pt) -- (240pt,-110pt) node[at start, below] {$i + 2$};
\draw[draw=black, fill=red] (32pt, -124pt) rectangle (30pt, -116pt);
\draw[draw=black, fill=blue] (190pt, -124pt) rectangle (188pt, -116pt);
\draw[draw=black, fill=green] (256pt, -124pt) rectangle (254pt, -116pt);
\draw[-, loosely dashed] (151pt,-140pt) -- (151pt,20pt);
\draw[-, loosely dashed] (271pt,-140pt) -- (271pt,20pt);
\node at (-50pt, 0pt) {\small{Case (i)}};
\node at (-50pt, -60pt) {\small{Case (ii)}};
\node at (-50pt, -120pt) {\small{Case (iii)}};
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,10pt) -- (31pt,10pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (120pt,10pt) -- (151pt,10pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (240pt,10pt) -- (271pt,10pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (0pt,-50pt) -- (31pt,-50pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (120pt,-50pt) -- (151pt,-50pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (240pt,-50pt) -- (271pt,-50pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (0pt,-110pt) -- (31pt,-110pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (120pt,-110pt) -- (151pt,-110pt) node[midway,above] {{$d$}};
\draw[|<->|][dotted] (240pt,-110pt) -- (271pt,-110pt) node[midway,above] {{$d$}};
\end{scope}
\end{tikzpicture}
\caption{Counting modalities are expressible in \textup{\textmd{\textsf{MTL}}}{} over $\ropen{0, N}$-timed words. The red, blue, and green boxes represent $P$-events, $Q$-events, and $R$-events respectively.}%
\label{fig:countingex}
\end{figure}
The same idea can readily be generalised to handle counting modalities and their past counterparts.
We therefore have the following proposition.
\begin{prop}\label{prop:countinguseless}
\emph{\textup{\textmd{\textsf{MTL}}}{}} is expressively complete for \emph{\mtlpastcnt{}} over $\ropen{0, N}$-timed words.
\end{prop}
\paragraph{\emph{Non-local properties (one reference point)}}
Proposition~\ref{prop:countinguseless} shows that a part of the expressiveness hierarchy
of metric logics over ($\mathbb{R}_{\geq 0}$-)timed words collapses in a time-bounded pointwise setting.
Nonetheless, \textup{\textmd{\textsf{MTL}}}{} is still not expressive enough to capture the whole of \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}
in such a setting. Recall that another feature of the continuous interpretation of \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{}
used in the proof in~\cite{Ouaknine2009} is that
$\LTLdiamond_{= k} \varphi$ holds at $t$ \emph{iff} $\varphi$ holds at $t + k$.
Suppose that we want to specify the following property over $\mathbf{P} = \{P, Q\}$
for some positive integer $a$ (let the current instant be $t_1$):
\begin{itemize}
\item There is an event at time $t_2 > t_1 + a$ where $Q$ holds
\item $P$ holds at all events in $(t_1 + a, t_2)$.
\end{itemize}
In the continuous semantics, the property can easily be expressed as
the following \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} formula
\[
\varphi_{\mathit{cont1}} = \LTLdiamond_{=a} \bigl( (P \vee P_{\epsilon}) \mathbin{\mathcal{U}} Q \bigr)
\]
over signals of the form $f^\rho$ (over $\Sigma_{\mathbf{P'}}$ where $\mathbf{P'} = \mathbf{P} \cup \{P_\epsilon\}$); see Figure~\ref{fig:puntilq} for an example where
the formula $\varphi_{\mathit{cont1}}$ holds at $t_1$ in the continuous semantics.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (285pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (220pt,-10pt) -- (220pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (260pt,-10pt) -- (260pt,10pt) node[at start, below] {$t_2$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (231pt, -4pt) rectangle (233pt, 4pt);
\draw[draw=black, fill=red] (240pt, -4pt) rectangle (242pt, 4pt);
\draw[draw=black, fill=red] (254pt, -4pt) rectangle (256pt, 4pt);
\draw[draw=black, fill=blue] (261pt, -4pt) rectangle (259pt, 4pt);
\end{scope}
\end{tikzpicture}
\caption{$\varphi_{\mathit{cont1}}$ holds at $t_1$ in the continuous semantics. The red boxes denote $P$-events and the blue boxes denote $Q$-events.}%
\label{fig:puntilq}
\end{figure}
Essentially, when the current instant is $t_1$, the continuous interpretation of \textup{\textmd{\textsf{MTL}}}{} allows one to speak of
events `from' $t_1 + a$, regardless of whether there is an actual event at $t_1 + a$.
As we will show, it is not possible to do the same with the pointwise interpretation of \textup{\textmd{\textsf{MTL}}}{} when there is no event at $t_1 + a$.
To remedy this issue within the pointwise semantic framework, we introduce a simple
family of modalities $\mathcal{B}^\rightarrow_I$ (`Beginning')
and their past versions $\mathcal{B}^\leftarrow_I$.
They can be used to refer to the \emph{first} (earliest or latest, respectively) event in a given interval.
For example, we define the modality that asserts ``$X$ holds at the first event in $(a, b)$ relative
to the current instant'' as the following \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula:
\[
\arraycolsep=0.3ex
\begin{array}{rcll}
\mathcal{B}^\rightarrow_{(a, b)}(x, X) & = & \exists x' \, \Bigl( & x < x' \wedge d(x, x') > a \wedge d(x, x') < b \wedge X(x') \\
& & & {} \wedge \nexists x'' \, \bigl(x < x'' \wedge x'' < x' \wedge d(x, x'') > a \bigr) \Bigr) \,.
\end{array}
\]
The property above can now be written as $\mathcal{B}^\rightarrow_{(a, \infty)} \bigl( Q \vee (P \mathbin{\mathcal{U}} Q) \bigr)$ in
the pointwise semantics.
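For instance, if the events after the current one lie $0.8$, $1.2$, and $1.5$ time units in the future, then $\mathcal{B}^\rightarrow_{(1, 2)} X$ holds at the current event if and only if $X$ holds at the event $1.2$ time units away (the first event in $(1, 2)$), whereas $\LTLdiamond_{(1, 2)} X$ merely requires $X$ to hold at one of the two events in $(1, 2)$.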
We refer to the extension of \textup{\textmd{\textsf{MTL}}}{} with $\mathcal{B}^\rightarrow_I, \mathcal{B}^\leftarrow_I$
as \textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}.\footnote{Readers may find the modalities $\mathcal{B}^\rightarrow_I$ similar to
the modalities $\triangleright_I$ in \emph{Event-Clock Logic}~\cite{Henzinger1998}.
The difference is that the formula $\mathcal{B}^\rightarrow_I \varphi$ requires
$\varphi$ to hold at the \emph{first} event in $I$, whereas the formula $\triangleright_I \varphi$
requires (i) $\varphi$ to hold at \emph{some} event in $I$ and that (ii)
$\varphi$ does not hold at any other event between the current instant and the time of that event.}
The following proposition states that this extension is indeed non-trivial.
\begin{prop}\label{prop:beginning-strict}
\emph{\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}} is strictly more expressive than \emph{\textup{\textmd{\textsf{MTL}}}{}} over $\ropen{0, N}$-timed words.
\end{prop}
\begin{proof}
The proof we give here is inspired by a proof in~{\cite[Section 5]{Pandya2011}}.
Given $m \in \mathbb{N}$, we describe models $\mathcal{E}_{m}$
and $\mathcal{F}_{m}$ that are indistinguishable by \textup{\textmd{\textsf{MTL}}}{} formulas of modal depth $\leq m$
but can be distinguished by an \textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{} formula.
We first describe $\mathcal{F}_{m}$.
Let $g = \frac{1}{2m + 6}$ and pick a positive $\varepsilon < \frac{g}{\frac{1}{g} - 1}$.
The first event (at time $0$) satisfies $\neg P \wedge \neg Q$.
Then, a sequence of overlapping segments (arranged as described below) starts at time $\frac{0.5}{2m + 5}$; see Figure~\ref{fig:segment} for an illustration of a segment.
Each segment consists of an
event satisfying $P \wedge \neg Q$
and an event satisfying $\neg P \wedge Q$
(we refer to them as $P$-events and $Q$-events, respectively).
If the $P$-event in the $i^{\mathit{th}}$ segment is at time $t$, then its $Q$-event is at time $t+2g+ \frac{1}{2} \varepsilon$.
The $P$-events of neighbouring segments are separated by $g - \frac{g}{\frac{1}{g} - 1}$.
We put a total of $4m + 12$ segments.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-10pt,0pt) -- (200pt,0pt);
\draw[loosely dashed] (0pt,-20pt) -- (0pt,10pt);
\draw[loosely dashed] (90pt,-20pt) -- (90pt,10pt);
\draw[loosely dashed] (180pt,-20pt) -- (180pt,10pt);
\draw[loosely dashed] (190pt,-20pt) -- (190pt,10pt);
\draw[draw=black, fill=red] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=blue] (186pt, -4pt) rectangle (184pt, 4pt);
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-20pt) -- (90pt,-20pt) node[midway,below] {{$g$}};
\draw[|<->|][dotted] (90pt,-20pt) -- (180pt,-20pt) node[midway,below] {$g$};
\draw[|<->|][dotted] (180pt,-20pt) -- (190pt,-20pt) node[midway,below] {$\varepsilon$};
\end{scope}
\end{tikzpicture}
\caption{A single segment in $\mathcal{F}_{m}$. The red box denotes a $P$-event and the blue box denotes a $Q$-event.}%
\label{fig:segment}
\end{figure}
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-35pt,40pt) -- (265pt,40pt);
\draw[-, loosely dashed] (-35pt,0pt) -- (265pt,0pt);
\draw[-, loosely dashed] (-125pt,40pt) -- (-110pt,40pt) node[at start, left] {$\mathcal{E}_{m}$};
\draw[-, loosely dashed] (-125pt,0pt) -- (-110pt,0pt) node[at start, left] {$\mathcal{F}_{m}$};
\draw[-, very thick, loosely dotted] (-105pt,20pt) -- (-70pt,20pt);
\draw[very thick, draw=gray] (115pt, 20pt) ellipse (5.5cm and 4cm);
\draw[loosely dashed] (0pt,-10pt) -- (0pt,50pt) node[at start, below] {$t_{3m+9}$};
\draw[loosely dashed] (130pt,-10pt) -- (130pt,50pt) node[at start, below] {$1.5$};
\draw[loosely dashed] (-60pt,-10pt) -- (-60pt,50pt) node[at start, below] {$0.5$};
\draw[loosely dashed] (-120pt,-10pt) -- (-120pt,50pt) node[at start, below] {$0$};
\draw[loosely dashed] (52pt,-20pt) -- (52pt,50pt);
\draw[draw=black, fill=white] (-121pt, -4pt) rectangle (-119pt, 4pt);
\draw[draw=black, fill=white] (-121pt, 36pt) rectangle (-119pt, 44pt);
\draw[draw=black, fill=red] (-61pt, -4pt) rectangle (-59pt, 4pt);
\draw[draw=black, fill=red] (-61pt, 36pt) rectangle (-59pt, 44pt);
\draw[draw=black, fill=red] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (1pt, 36pt) rectangle (-1pt, 44pt);
\draw[draw=black, fill=red] (53pt, -4pt) rectangle (51pt, 4pt);
\draw[draw=black, fill=red] (53pt, 36pt) rectangle (51pt, 44pt);
\draw[draw=black, fill=red] (105pt, -4pt) rectangle (103pt, 4pt);
\draw[draw=black, fill=red] (105pt, 36pt) rectangle (103pt, 44pt);
\draw[draw=black, fill=red] (157pt, -4pt) rectangle (155pt, 4pt);
\draw[draw=black, fill=red] (157pt, 36pt) rectangle (155pt, 44pt);
\draw[draw=black, fill=red] (209pt, -4pt) rectangle (207pt, 4pt);
\draw[draw=black, fill=red] (209pt, 36pt) rectangle (207pt, 44pt);
\draw[draw=black, fill=red] (261pt, -4pt) rectangle (259pt, 4pt);
\draw[draw=black, fill=red] (261pt, 36pt) rectangle (259pt, 44pt);
\draw[draw=black, fill=blue] (-21pt, -4pt) rectangle (-23pt, 4pt);
\draw[draw=black, fill=blue] (-21pt, 36pt) rectangle (-23pt, 44pt);
\draw[draw=black, fill=blue] (31pt, -4pt) rectangle (29pt, 4pt);
\draw[draw=black, fill=blue] (31pt, 36pt) rectangle (29pt, 44pt);
\draw[draw=black, fill=blue] (83pt, -4pt) rectangle (81pt, 4pt);
\node[above] at (82pt, 5pt) {{\tiny $y'$}};
\draw[draw=black, fill=blue] (83pt, 36pt) rectangle (81pt, 44pt);
\draw[draw=black, fill=blue] (135pt, -4pt) rectangle (133pt, 4pt);
\node[above] at (134pt, 5pt) {{\tiny $y$}};
\draw[draw=black, fill=blue] (127pt, 36pt) rectangle (125pt, 44pt);
\node[above] at (126pt, 45pt) {{\tiny $x$}};
\draw[draw=black, fill=blue] (187pt, -4pt) rectangle (185pt, 4pt);
\draw[draw=black, fill=blue] (187pt, 36pt) rectangle (185pt, 44pt);
\node[above] at (186pt, 45pt) {{\tiny $x'$}};
\draw[draw=black, fill=blue] (239pt, -4pt) rectangle (237pt, 4pt);
\draw[draw=black, fill=blue] (239pt, 36pt) rectangle (237pt, 44pt);
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-30pt) -- (52pt,-30pt) node[midway,below] {{\scriptsize $g - \frac{g}{\frac{1}{g} - 1}$}};
\draw[|<->|][dotted] (0pt,-50pt) -- (65pt,-50pt) node[midway,below] {{\scriptsize $g$}};
\draw[|<->|][dotted] (65pt,-50pt) -- (130pt,-50pt) node[midway,below] {{\scriptsize $g$}};
\end{scope}
\end{tikzpicture}
\caption{A close-up near the ${(3m+9)}^{\mathit{th}}$-segments in $\mathcal{E}_{m}$ and $\mathcal{F}_{m}$.}%
\label{fig:3mp9segments}
\end{figure}
$\mathcal{E}_{m}$ is almost identical to $\mathcal{F}_{m}$ except the ${(3m + 9)}^{\mathit{th}}$ segment.
Let this segment start at $t_{3m + 9}$. In $\mathcal{E}_{m}$, we move the corresponding $Q$-event to
$t_{3m + 9}+2g-\frac{1}{2} \varepsilon$ (see Figure~\ref{fig:3mp9segments}).
Note in particular that there are $P$-events at time $0.5$ in both models (in their ${(m+4)}^{\mathit{th}}$ segment).
The only difference between the two models is this pair of $Q$-events, which we
denote by $x$ and $y$ respectively and write their corresponding timestamps
as $t_x$ and $t_y$ (see Figure~\ref{fig:3mp9segments}).
It is easy to verify that no two events are separated by an integer distance.
We say a configuration $(i, j)$ is \emph{identical} if $i = j$.
For $i \geq 1$, we denote by $\mathit{seg}(i)$ the segment that the $i^{\mathit{th}}$ event belongs to,
and we write $P(i)$ if the $i^{\mathit{th}}$ event is a $P$-event and $Q(i)$ if it is a $Q$-event.
\begin{prop}\label{prop:beginning-strict-induction}
\emph{Duplicator} has a winning strategy for $m$-round \emph{\textup{\textmd{\textsf{MTL}}}{}} EF game
on $\mathcal{E}_{m}$ and $\mathcal{F}_{m}$ with $(i_0, j_0) = (0, 0)$.
In particular, she has a winning strategy such that for each round $0 \leq r \leq m$,
the $i_r^{\mathit{th}}$ event in $\mathcal{E}_{m}$ and the $j_r^{\mathit{th}}$ event in $\mathcal{F}_{m}$
satisfy the same set of monadic predicates and
\begin{itemize}
\item if $i_r \neq j_r$, then
\begin{itemize}
\item $\mathit{seg}(i_r) - \mathit{seg}(j_r) < r$
\item $(m + 1 - r) < \mathit{seg}(i_r), \mathit{seg}(j_r) < (m + 5 + r)$ or $(3m + 8 - r) < \mathit{seg}(i_r), \mathit{seg}(j_r) < (3m + 12 + r)$.
\end{itemize}
\end{itemize}
\end{prop}
\noindent
We prove the proposition by induction on $r$. The idea is to try to make the resulting configurations identical.
When this is not possible \emph{Duplicator} simply imitates what \emph{Spoiler} does.
\begin{itemize}
\item \emph{Base step.} The proposition holds trivially for $(i_0, j_0) = (0, 0)$.
\item \emph{Induction step.} Suppose that the claim holds for $r < m$. We prove it
also holds for $r + 1$.
\begin{itemize}
\item $(i_r, j_r) = (0, 0)$:\\
\emph{Duplicator} can always make $(i_{r+1}, j_{r+1})$ identical.
\item $(i_r, j_r) \neq (0, 0)$ is identical:\\
\emph{Duplicator} tries to make $(i_r', j_r')$ identical.
This may only fail when
\begin{itemize}
\item $P(i_r)$, $P(j_r)$ and $\mathit{seg}(i_r) = \mathit{seg}(j_r) = m + 4$.
\item $Q(i_r)$, $Q(j_r)$ and $\mathit{seg}(i_r) = \mathit{seg}(j_r) = 3m + 9$, i.e.,~$x$ and $y$.
\end{itemize}
In these cases, \emph{Duplicator} chooses another event in a neighbouring segment
that minimises $|\mathit{seg}(i_r') - \mathit{seg}(j_r')|$.
For example, if $(i_r, j_r)$ corresponds to $x$ and $y$ and \emph{Spoiler} chooses
$j_r'$ such that $P(j_r')$ and $\mathit{seg}(j_r') = m + 4$ in a $\mathbin{\mathcal{S}}_{(1, \infty)}$-move,
\emph{Duplicator} chooses $i_r'$ with $\mathit{seg}(i_r') = m + 3$.
If \emph{Spoiler} then plays the $\LTLdiamondminus$-part, the resulting configuration $(i_{r+1}, j_{r+1}) = (i_r', j_r')$
clearly satisfies the claim. If \emph{Spoiler} plays the $\mathbin{\mathcal{S}}$-part instead, \emph{Duplicator} makes $(i_r'', j_r'')$
identical whenever possible. Otherwise she chooses a suitable event that minimises
$|\mathit{seg}(i_r'') - \mathit{seg}(j_r'')|$. For instance, if $Q(i_r'')$ and $\mathit{seg}(i_r'') = m + 1$,
\emph{Duplicator} chooses $j_r''$ such that $Q(j_r'')$ and $\mathit{seg}(j_r'') = m + 2$.
The resulting configuration $(i_{r+1}, j_{r+1}) = (i_r'', j_r'')$ clearly satisfies the claim.
\item $(i_r, j_r)$ is not identical:\\
\emph{Duplicator} tries to make $(i_r', j_r')$ identical. If this is not possible,
then \emph{Duplicator} chooses an event that minimises $|\mathit{seg}(i_r') - \mathit{seg}(j_r')|$.
For example, consider $\mathit{seg}(i_r) = m + 4$, $\mathit{seg}(j_r) = m + 3$ such that $P(i_r)$ and $P(j_r)$,
and \emph{Spoiler} chooses $x$ in an $\mathbin{\mathcal{U}}_{(0, 1)}$-move. In this case, \emph{Duplicator}
cannot choose $y'$, but she may choose the first $Q$-event that happens before $y'$. \emph{Duplicator} responds to
$\mathbin{\mathcal{U}}$-parts and $\mathbin{\mathcal{S}}$-parts in similar ways as before. It is easy to see that the claim holds.
\end{itemize}
\end{itemize}
Proposition~\ref{prop:beginning-strict} now follows from Proposition~\ref{prop:beginning-strict-induction},
the \textup{\textmd{\textsf{MTL}}}{} EF Theorem, and the fact that
$\mathcal{E}_{m} \models \LTLdiamond (P \wedge \mathcal{B}^\rightarrow_{(1, 2)} P)$ but
$\mathcal{F}_{m} \not \models \LTLdiamond (P \wedge \mathcal{B}^\rightarrow_{(1, 2)} P)$.
\end{proof}
\paragraph{\emph{Non-local properties (two reference points)}}
Adding modalities $\mathcal{B}^\rightarrow_I, \mathcal{B}^\leftarrow_I$ to \textup{\textmd{\textsf{MTL}}}{} allows one to specify properties
with respect to a distant time point even when there is no event at that point.
However, the following proposition shows that this is still not enough for expressive completeness.
\begin{prop}\label{prop:foone-strict}
\emph{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} is strictly more expressive than \emph{\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}} over $\ropen{0, N}$-timed words.
\end{prop}
\begin{proof}
This is similar to a proof in~{\cite[Section 7]{Prabhakar2006}}.
Given $m \in \mathbb{N}$, we construct two models as follows.
Let
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\mathcal{G}_{m} & = & (\emptyset, 0)(\emptyset, \frac{0.5}{2m+3})(\emptyset, \frac{1.5}{2m+3})\ldots(\emptyset, 1-\frac{0.5}{2m+3}) \\
& & (\emptyset, 1+\frac{0.5}{2m+2})(\emptyset, 1+\frac{1.5}{2m+2})\ldots\ldots(\emptyset, 2-\frac{0.5}{2m+2}) \,.
\end{array}
\]
$\mathcal{H}_{m}$ is constructed as $\mathcal{G}_{m}$ except that the event at time $\frac{m+1.5}{2m+3} = 0.5$ is missing.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.0]
\begin{scope}[>=latex]
\draw[|-|, loosely dashed] (0pt,40pt) -- (252pt,40pt) node[at start, left] {$\mathcal{G}_{m}$};
\draw[|-|, loosely dashed] (0pt,0pt) -- (252pt,0pt) node[at start, left] {$\mathcal{H}_{m}$};
\draw[loosely dashed] (0pt,-10pt) -- (0pt,50pt) node[at start,below] {$0$};
\draw[loosely dashed] (126pt,-10pt) -- (126pt,50pt) node[at start,below] {$1$};
\draw[loosely dashed] (252pt,-10pt) -- (252pt,50pt) node[at start,below] {$2$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (1pt, 36pt) rectangle (-1pt, 44pt);
\draw[draw=black, fill=white] (10pt, -4pt) rectangle (8pt, 4pt);
\draw[draw=black, fill=white] (10pt, 36pt) rectangle (8pt, 44pt);
\draw[draw=black, fill=white] (28pt, -4pt) rectangle (26pt, 4pt);
\draw[draw=black, fill=white] (28pt, 36pt) rectangle (26pt, 44pt);
\draw[draw=black, fill=white] (46pt, -4pt) rectangle (44pt, 4pt);
\draw[draw=black, fill=white] (46pt, 36pt) rectangle (44pt, 44pt);
\draw[draw=black, fill=white] (64pt, 36pt) rectangle (62pt, 44pt);
\draw[draw=black, fill=white] (82pt, -4pt) rectangle (80pt, 4pt);
\draw[draw=black, fill=white] (82pt, 36pt) rectangle (80pt, 44pt);
\draw[draw=black, fill=white] (100pt, -4pt) rectangle (98pt, 4pt);
\draw[draw=black, fill=white] (100pt, 36pt) rectangle (98pt, 44pt);
\draw[draw=black, fill=white] (118pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (118pt, 36pt) rectangle (116pt, 44pt);
\draw[draw=black, fill=white] (137.5pt, -4pt) rectangle (135.5pt, 4pt);
\draw[draw=black, fill=white] (137.5pt, 36pt) rectangle (135.5pt, 44pt);
\draw[draw=black, fill=white] (158.5pt, -4pt) rectangle (156.5pt, 4pt);
\draw[draw=black, fill=white] (158.5pt, 36pt) rectangle (156.5pt, 44pt);
\draw[draw=black, fill=white] (179.5pt, -4pt) rectangle (177.5pt, 4pt);
\draw[draw=black, fill=white] (179.5pt, 36pt) rectangle (177.5pt, 44pt);
\draw[draw=black, fill=white] (200.5pt, -4pt) rectangle (198.5pt, 4pt);
\draw[draw=black, fill=white] (200.5pt, 36pt) rectangle (198.5pt, 44pt);
\draw[draw=black, fill=white] (221.5pt, -4pt) rectangle (219.5pt, 4pt);
\draw[draw=black, fill=white] (221.5pt, 36pt) rectangle (219.5pt, 44pt);
\draw[draw=black, fill=white] (242.5pt, -4pt) rectangle (240.5pt, 4pt);
\draw[draw=black, fill=white] (242.5pt, 36pt) rectangle (240.5pt, 44pt);
\end{scope}
\end{tikzpicture}
\caption{Models $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$ for $m = 2$.}%
\label{fig:prabhakarmodels}
\end{figure}
Figure~\ref{fig:prabhakarmodels} illustrates the models for the case $m = 2$ where white boxes
represent events at which no monadic predicate holds.
Observe that no two events are separated by an integer distance. We say that a configuration
$(i, j)$ is \emph{synchronised} if $i$ and $j$ correspond to events with the same timestamp.
Here we extend \textup{\textmd{\textsf{MTL}}}{} EF games with the following moves
to obtain \textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{} EF games:
\begin{itemize}
\item \emph{$\mathcal{B}^\rightarrow_{I}$-move}: \emph{Spoiler} chooses one of the two timed words (say $\rho$)
and picks $i_r'$ such that (i) $\tau_{i_r'} - \tau_{i_r} \in I$ in $\rho$
and (ii) there is no position $i' < i_r'$ in $\rho$ such that $\tau_{i'} - \tau_{i_r} \in I$.
\emph{Duplicator} must choose a position $j_r'$ in $\rho'$ such that $j_r'$ is the
first position in $I$ relative to $j_r$ in $\rho'$. If she cannot find such a position
then \emph{Spoiler} wins the game.
\item \emph{$\mathcal{B}^\leftarrow_{I}$-move}: Defined symmetrically.
\end{itemize}
\begin{thm}[\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{} EF Theorem]\label{thm:mtlbef}
For (finite) timed words $\rho$, $\rho'$ and an \emph{\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}} formula $\varphi$ of modal depth $\leq m$,
if \emph{Duplicator} has a winning strategy for the $m$-round \emph{\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}} EF game on
$\rho$, $\rho'$ with $(i_0, j_0) = (0, 0)$, then
\[
\rho \models \varphi \iff \rho' \models \varphi \,.
\]
\end{thm}
\begin{prop}\label{prop:foone-strict-induction}
\emph{Duplicator} has a winning strategy for $m$-round \emph{\textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{}} EF game
on $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$ with $(i_0, j_0) = (0, 0)$.
In particular, she has a winning strategy such that for each round $0 \leq r \leq m$,
the $i_r^{\mathit{th}}$ event in $\mathcal{G}_{m}$ and the $j_r^{\mathit{th}}$ event in $\mathcal{H}_{m}$
satisfy the same set of monadic predicates and
\begin{itemize}
\item if $(i_r, j_r)$ is not synchronised, then
\begin{itemize}
\item $|i_r - j_r| = 1$
\item $(m + 1 - r) < i_r, j_r < (m + 3 + r)$ or $(3m + 4 - r) < i_r, j_r < (3m + 5 + r)$.
\end{itemize}
\end{itemize}
\end{prop}
\noindent
We prove the proposition by induction on $r$. The idea, again, is to try to make the resulting configurations synchronised.
\begin{itemize}
\item \emph{Base step.} The proposition holds trivially for $(i_0, j_0) = (0, 0)$.
\item \emph{Induction step.} Suppose that the claim holds for $r < m$. We prove it
also holds for $r + 1$.
\begin{itemize}
\item $(i_r, j_r) = (0, 0)$:\\
\emph{Duplicator} tries to make $(i_r', j_r')$ synchronised.
If \emph{Spoiler} chooses $i_r' = m + 2$, \emph{Duplicator} chooses either
$j_r' = m + 1$ or $j_r' = m + 2$.
\item $(i_r, j_r) \neq (0, 0)$ is synchronised:\\
\emph{Duplicator} tries to make $(i_r', j_r')$ synchronised.
If this is not possible then \emph{Duplicator} chooses a suitable event that minimises $|i_r' - j_r'|$.
It is easy to see that the resulting configuration $(i_{r+1}, j_{r+1})$ satisfies the claim
regardless of how \emph{Spoiler} plays.
\item $(i_r, j_r)$ is not synchronised:\\
The strategy of \emph{Duplicator} is the same as in the case above.
\end{itemize}
\end{itemize}
Proposition~\ref{prop:foone-strict} now follows from Proposition~\ref{prop:foone-strict-induction},
Theorem~\ref{thm:mtlbef}, and the fact that the \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula
\[
\arraycolsep=0.3ex
\begin{array}{ll}
\exists x' \, \biggl( d(x, x') > 1 \wedge d(x, x') < 2 \wedge \exists x'' \, \Bigl( & x' < x'' \wedge \nexists y' \, (x' < y' \wedge y' < x'') \\
& {} \wedge d(x, x'') > 1 \wedge d(x, x'') < 2 \\
& {} \wedge \nexists y'' \, \bigl( d(x', y'') < 1 \wedge d(x'', y'') > 1 \bigr) \Bigr) \biggr)
\end{array}
\]
distinguishes $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$ for any $m \in \mathbb{N}$ (when evaluated at position $0$).
This formula asserts that there is a pair of neighbouring events within $(1, 2)$ of the current event such that
no event lies between the two points obtained by shifting them exactly one time unit earlier.
\end{proof}
One way to understand why \textup{\textmd{\textsf{MTL[$\mathcal{B}^\leftrightarrows$]}}}{} is still less expressive than \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}
is to consider the arity of modalities.
Let the current instant be $t_1$, and suppose that we want
to specify the following property for some positive integers $a$ and $c$ ($a > c$):\footnote{We remark that a closely related yet different property
is used in~\cite{Lasota2008} to show that one-clock alternating timed automata and
timed automata are expressively incomparable.}
\begin{itemize}
\item There is an event at $t_2 > t_1 + a$ where $Q$ holds
\item $P$ holds at all events in $\bigl(t_1 + c, t_1 + c + (t_2 - t_1 - a)\bigr)$.
\end{itemize}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (285pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,20pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (160pt,-10pt) -- (160pt,20pt);
\draw[-, loosely dashed] (220pt,-10pt) -- (220pt,20pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (260pt,-10pt) -- (260pt,20pt) node[at start, below] {$t_2$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (14pt, -4pt) rectangle (12pt, 4pt);
\draw[draw=black, fill=white] (34pt, -4pt) rectangle (32pt, 4pt);
\draw[draw=black, fill=white] (70pt, -4pt) rectangle (68pt, 4pt);
\draw[draw=black, fill=white] (86pt, -4pt) rectangle (84pt, 4pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=red] (136pt, -4pt) rectangle (138pt, 4pt);
\draw[draw=black, fill=red] (144pt, -4pt) rectangle (146pt, 4pt);
\draw[draw=black, fill=red] (148pt, -4pt) rectangle (150pt, 4pt);
\draw[draw=black, fill=red] (156pt, -4pt) rectangle (158pt, 4pt);
\draw[draw=black, fill=white] (171pt, -4pt) rectangle (173pt, 4pt);
\draw[draw=black, fill=white] (191pt, -4pt) rectangle (189pt, 4pt);
\draw[draw=black, fill=white] (199pt, -4pt) rectangle (201pt, 4pt);
\draw[draw=black, fill=white] (231pt, -4pt) rectangle (233pt, 4pt);
\draw[draw=black, fill=white] (240pt, -4pt) rectangle (242pt, 4pt);
\draw[draw=black, fill=white] (254pt, -4pt) rectangle (256pt, 4pt);
\draw[draw=black, fill=blue] (261pt, -4pt) rectangle (259pt, 4pt);
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (120pt,20pt) -- (160pt,20pt) node[midway,above] {{\scriptsize $t_2 - t_1 - a$}};
\draw[|<->|][dotted] (220pt,20pt) -- (260pt,20pt) node[midway,above] {{\scriptsize $t_2 - t_1 - a$}};
\end{scope}
\end{tikzpicture}
\caption{$\varphi_{\mathit{cont2}}$ holds at $t_1$ in the continuous semantics. The red boxes denote $P$-events and the blue boxes denote $Q$-events.}%
\label{fig:puntilq2}
\end{figure}
See Figure~\ref{fig:puntilq2} for an example. In the continuous semantics, this property can
be expressed as the following simple formula over signals of the form $f^\rho$:
\[
\varphi_{\mathit{cont2}} = \bigl(\LTLdiamond_{=c} (P \vee P_{\epsilon}) \bigr) \mathbin{\mathcal{U}} (\LTLdiamond_{=a} Q) \,.
\]
Observe how this formula talks about events from two time points (instead of just one): $t_1 + c$ and $t_1 + a$.
In the same vein,
the following formula can be used to distinguish $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$
(defined in the proof of Proposition~\ref{prop:foone-strict})
in the continuous semantics:
\[
\varphi_{\mathit{cont3}} = \LTLdiamond_{(1, 2)} \bigl( \neg P_\epsilon \wedge (\LTLdiamondminus_{= 1} P_\epsilon) \mathbin{\mathcal{U}} (\neg P_\epsilon) \bigr) \label{for:cont3} \,.
\]
Indeed, to express such properties in the pointwise semantics, we need \emph{binary} variants of $\mathcal{B}^\rightarrow_I, \mathcal{B}^\leftarrow_I$, which are exactly what we propose
in the next section.
\subsection{New modalities}
We define a family of modalities which can be understood as generalisations
of the usual `Until' and `Since' modalities.
Intuitively, these new modalities closely mimic
the meanings of formulas of the form $(\LTLdiamond_{={k_1}} \varphi_1) \mathbin{\mathcal{U}}_{< k_3} (\LTLdiamond_{={k_2}} \varphi_2)$
or $(\LTLdiamond_{={k_1}} \varphi_1) \mathbin{\mathcal{S}}_{< k_3} (\LTLdiamond_{={k_2}} \varphi_2)$
in the continuous semantics.
\paragraph{\emph{Generalised `Until' and `Since'}}
Let $I \subseteq (0, \infty)$ be an interval
with endpoints in $\mathbb{N} \cup \{\infty\}$
and $c \in \mathbb{N}$, $c \leq \inf(I)$.
The formula $\varphi_1 \mathbin{\mathfrak{U}}_{I}^c \varphi_2$, evaluated at the current instant $t_1$, asserts that
\begin{itemize}
\item There is an event at $t_2$ where $\varphi_2$ holds and $t_2 - t_1 \in I$
\item $\varphi_1$ holds at all events in the open interval $\biggl(t_1 + c, t_1 + c + \Bigl(t_2 - \bigl(t_1 + \inf(I)\bigr)\Bigr)\biggr)$.
\end{itemize}
For example, the formula $P \mathbin{\mathfrak{U}}_{(a, \infty)}^c Q$ (which is `equivalent' to $\varphi_{\mathit{cont2}}$ when the latter is interpreted over signals of the form $f^\rho$) holds at time $t_1$ in Figure~\ref{fig:puntilq2}.
Formally, for $I = (a, b) \subseteq (0, \infty)$, $a \in \mathbb{N}$, $b \in \mathbb{N} \cup \{ \infty \}$
and $c \in \mathbb{N}$ with $c \leq a$,
we define the \emph{generalised `Until'} modality $\mathbin{\mathfrak{U}}_{(a, b)}^c$ by the following \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula:
\[
\arraycolsep=0.3ex
\begin{array}{rclll}
\mathbin{\mathfrak{U}}_{(a, b)}^c(x, X_1, X_2) & = & \exists x' \, \Bigl(& x < x' & {} \wedge d(x, x') > a \wedge d(x, x') < b \wedge X_2(x') \\
& & & {} \wedge \forall x'' \, & \bigl(x < x'' \wedge d(x, x'') > c \wedge x'' < x' \\
& & & & \hspace{1mm} {} \wedge d(x', x'') > (a-c) \implies X_1(x'') \bigr) \Bigr) \,.
\end{array}
\]
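For instance, suppose that the current event is at time $0$ and the subsequent events are at times $0.5$, $2.5$, $3.5$, and $4.7$, with $\varphi_2$ holding at the last of them and $\varphi_1$ holding at the events at $2.5$ and $3.5$ only. Then $\varphi_1 \mathbin{\mathfrak{U}}_{(3, 5)}^2 \varphi_2$ holds at the current event: the event at $4.7$ witnesses $\varphi_2$ within $(3, 5)$, and $\varphi_1$ is required (and holds) at the events in $\bigl(2, 2 + (4.7 - 3)\bigr) = (2, 3.7)$, i.e.,~at $2.5$ and $3.5$. In particular, $\varphi_1$ need not hold at the event at $0.5$, so the ordinary $\varphi_1 \mathbin{\mathcal{U}}_{(3, 5)} \varphi_2$ fails here even though the generalised formula holds.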
Symmetrically, we define the \emph{generalised `Since'} modality $\mathbin{\mathfrak{S}}_{(a, b)}^c$ as
\[
\arraycolsep=0.3ex
\begin{array}{rclll}
\mathbin{\mathfrak{S}}_{(a, b)}^c(x, X_1, X_2) & = & \exists x' \, \Bigl(& x' < x & {} \wedge d(x, x') > a \wedge d(x, x') < b \wedge X_2(x') \\
& & & {} \wedge \forall x'' \, & \bigl(x'' < x \wedge d(x, x'') > c \wedge x' < x'' \\
& & & & \hspace{1mm} {} \wedge d(x', x'') > (a-c) \implies X_1(x'') \bigr) \Bigr) \,.
\end{array}
\]
We also define the modalities for $I \subseteq (0, \infty)$ being a half-open interval or a closed interval
in the expected way and refer to the logic obtained by adding these modalities to \textup{\textmd{\textsf{MTL}}}{}
as \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}. Note that the usual `Until' and `Since' modalities can be written in terms of the generalised
modalities. For instance,
\[
\varphi_1 \mathbin{\mathcal{U}}_{(a, b)} \varphi_2 = \varphi_1 \mathbin{\mathfrak{U}}_{(a, b)}^{a} \varphi_2 \wedge \neg \left(\mathbf{true} \mathbin{\mathfrak{U}}_{\lopen{0, a}}^0 (\neg \varphi_1)\right) \,.
\]
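To see why this identity holds, note that the first conjunct provides a witnessing $\varphi_2$-event at some $t_2$ with $t_2 - t_1 \in (a, b)$ and requires $\varphi_1$ at all events in $(t_1 + a, t_2)$, while the second conjunct forbids any $\neg\varphi_1$-event within $\lopen{0, a}$ of the current instant, i.e.,~requires $\varphi_1$ at all events in $\lopen{t_1, t_1 + a}$; together, these require $\varphi_1$ at all events strictly between $t_1$ and $t_2$, which is precisely the semantics of $\mathbin{\mathcal{U}}_{(a, b)}$.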
\paragraph{\emph{More liberal bounds}}
In defining modalities $\mathbin{\mathfrak{U}}_{(a, b)}^c$ and $\mathbin{\mathfrak{S}}_{(a, b)}^c$
we required that $0 \leq c \leq a$. We now show that more liberal uses of bounds (constraining
intervals and the superscript `$c$') are indeed syntactic sugar, and we therefore allow them in the rest of this section.
For instance, suppose that we want
to assert the following property (which translates to $\bigl(\LTLdiamond_{=10} (\varphi_1 \vee P_{\epsilon}) \bigr) \mathbin{\mathcal{U}}_{<3} (\LTLdiamond_{=2} \varphi_2)$ in the continuous semantics) at $t_1$:
\begin{itemize}
\item There is an event at $t_2$ where $\varphi_2$ holds and $t_2 - t_1 \in (2, 5)$
\item $\varphi_1$ holds at all events in $\bigl(t_1 + 10, t_1 + 10 + (t_2 - t_1 - 2)\bigr)$.
\end{itemize}
This can be expressed in \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} as
\[
\arraycolsep=0.3ex
\begin{array}{lll}
\exists x' \, \Bigl(& x < x' & {} \wedge d(x, x') > 2 \wedge d(x, x') < 5 \wedge X_2(x') \\
& {} \wedge \forall x'' \, & \bigl(x < x'' \wedge d(x, x'') > 10 \wedge d(x', x'') < 8 \implies X_1(x'') \bigr) \Bigr)
\end{array}
\]
where $X_1, X_2$ are to be substituted with $\varphi_1, \varphi_2$.
While we could define a modality
\[
\mathbin{\mathfrak{U}}_{(2, 5)}^{10}(x, X_1, X_2)
\]
by this formula, this is not necessary as the formula is indeed equivalent to
\[
\LTLdiamond_{(2, 5)} \varphi_2 \wedge \neg \Bigl( (\neg \varphi_2) \mathbin{\mathfrak{U}}_{(10, 13)}^2 \bigl( \neg \varphi_1 \wedge \neg (\LTLdiamondminus_{= 8} \varphi_2) \bigr) \Bigr) \,.
\]
In the continuous semantics we can, of course, also refer to points in the past
in such formulas, e.g.,~$(\LTLdiamondminus_{={k_1}} \varphi_1) \mathbin{\mathcal{U}}_{< k_3} (\LTLdiamond_{={k_2}} \varphi_2)$.
We now generalise the idea above to handle these cases.
\begin{prop}
Let the current instant be $t_1$. The property (and its past counterpart):
\begin{itemize}
\item There is an event at $t_2$ where $\varphi_2$ holds and $t_2 - t_1 \in I$
\item $\varphi_1$ holds at all events in $\biggl(t_1 + c, t_1 + c + \Bigl(t_2 - \bigl(t_1 + \inf(I)\bigr)\Bigr)\biggr)$
\end{itemize}
where $I \subseteq (-\infty, \infty)$, $\inf(I) \in \mathbb{Z}$, $\sup(I) \in \mathbb{Z} \cup \{ \infty \}$ and $c \in \mathbb{Z}$
can be expressed with the modalities defined earlier (i.e.,~$\mathbin{\mathfrak{U}}_I^c, \mathbin{\mathfrak{S}}_I^c$ with $I \subseteq (0, \infty)$ and $c \leq \inf(I)$).
\end{prop}
\begin{proof}
Without loss of generality, we shall only focus on expressing the future version of the property for
the case of $I$ being an open interval.
To ease the presentation, we use the following convention in all the illustrations in this proof: the red boxes denote $\varphi_1$-events,
blue boxes denote $\varphi_2$-events, and white boxes denote events where neither $\varphi_1$ nor $\varphi_2$ hold.
We prove the claim in each of the following cases:
\begin{itemize}
\item \emph{$a \geq 0$ and $0 \leq c \leq a$}: This corresponds to the standard
version of $\mathbin{\mathfrak{U}}_I^c$ that we have already defined.
\item \emph{$a \geq 0$ and $c > a$}:
$\varphi_1 \mathbin{\mathfrak{U}}_{(a, b)}^c \varphi_2$ does not hold at $t_1$ if and only if one of the following holds \mbox{at $t_1$}:
\begin{itemize}
\item \emph{There is no $\varphi_2$-event in $(t_1 + a, t_1 + b)$}: This can be enforced by
\[
\neg (\LTLdiamond_{(a, b)} \varphi_2) \,.
\]
\item \emph{$\neg \varphi_1$ holds at an event at $t_3 \in \bigl( t_1 + c, t_1 + c + (b - a) \bigr)$ and there is no
$\varphi_2$-event in $\lopen{t_1 + a, t_1 + a + (t_3 - t_1 - c)}$}: This can be enforced by
\[
(\neg \varphi_2) \mathbin{\mathfrak{U}}_{\bigl(c, c + (b - a)\bigr)}^a \bigl(\neg \varphi_1 \wedge \underbrace{\neg(\LTLdiamondminus_{=(c-a)} \varphi_2)}_{\psi} \bigr) \,.
\]
We need the subformula $\psi$ to ensure that there is no $\varphi_2$-event at $t_1 + a + (t_3-t_1-c)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (285pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (40pt,-10pt) -- (40pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$t_1 + b$};
\draw[-, loosely dashed] (160pt,-10pt) -- (160pt,10pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (90pt,0pt) -- (90pt,20pt);
\draw[-, loosely dashed] (210pt,-10pt) -- (210pt,20pt) node[at start, below] {$t_3$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=white] (10pt, -4pt) rectangle (8pt, 4pt);
\draw[draw=black, fill=white] (30pt, -4pt) rectangle (28pt, 4pt);
\draw[draw=black, fill=white] (50pt, -4pt) rectangle (48pt, 4pt);
\draw[draw=black, fill=white] (80pt, -4pt) rectangle (78pt, 4pt);
\draw[draw=black, fill=white] (91pt, -4pt) rectangle (89pt, 4pt);
\draw[draw=black, fill=blue] (99pt, -4pt) rectangle (101pt, 4pt);
\draw[draw=black, fill=white] (130pt, -4pt) rectangle (128pt, 4pt);
\draw[draw=black, fill=white] (144pt, -4pt) rectangle (146pt, 4pt);
\draw[draw=black, fill=white] (150pt, -4pt) rectangle (148pt, 4pt);
\draw[draw=black, fill=white] (154pt, -4pt) rectangle (156pt, 4pt);
\draw[draw=black, fill=red] (165pt, -4pt) rectangle (163pt, 4pt);
\draw[draw=black, fill=red] (181pt, -4pt) rectangle (179pt, 4pt);
\draw[draw=black, fill=white] (211pt, -4pt) rectangle (209pt, 4pt);
\draw[draw=black, fill=white] (221pt, -4pt) rectangle (223pt, 4pt);
\draw[draw=black, fill=white] (240pt, -4pt) rectangle (242pt, 4pt);
\draw[draw=black, fill=white] (248pt, -4pt) rectangle (250pt, 4pt);
\draw[draw=black, fill=white] (260pt, -4pt) rectangle (258pt, 4pt);
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (40pt,20pt) -- (90pt,20pt) node[midway,above] {{\scriptsize $t_3 - t_1 - c$}};
\draw[|<->|][dotted] (160pt,20pt) -- (210pt,20pt) node[midway,above] {{\scriptsize $t_3 - t_1 - c$}};
\end{scope}
\end{tikzpicture}
\end{figure}
\end{itemize}
The desired formula is the conjunction of the negations of these two formulas.
\item \emph{$a \geq 0$ and $c < 0$}:
Let $t_2$ be the first time instant in $(t_1 + a, t_1 + b)$ where there is a $\varphi_2$-event.
Consider the following subcases:
\begin{itemize}
\item \emph{There is no event in $\bigl(t_1, t_1 + (t_2 - t_1 - a)\bigr)$}:
This can be enforced by
\[
\varphi = \mathbf{false} \, \mathbin{\mathfrak{U}}_{(a, b)}^0 \varphi_2 \,.
\]
Then we can enforce that $\varphi_1$ holds at all events in \circled{\scriptsize 1} in the illustration below
by
\[
\varphi' = (\neg \varphi_2) \mathbin{\mathfrak{U}}_{(a, b)}^{a} \bigl( \varphi_2 \wedge \underbrace{(\varphi_1 \mathbin{\mathfrak{S}}_{(a, b)}^{a + |c|} \mathbf{true})}_{\psi'} \bigr) \,.
\]
Observe that the subformula $\psi'$ must hold at $t_2$ if $\varphi_1$ holds at all events in \circled{\scriptsize 1}.
This is because, by assumption, there must be an event at $t_1$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (325pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (60pt,-40pt) -- (60pt,10pt);
\draw[-, loosely dashed] (0pt,-30pt) -- (0pt,-40pt);
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (180pt,-40pt) -- (180pt,10pt);
\draw[-, loosely dashed] (120pt,-30pt) -- (120pt,-40pt);
\draw[-, loosely dashed] (220pt,-10pt) -- (220pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (280pt,-10pt) -- (280pt,10pt) node[at start, below] {$t_2$};
\draw[-, loosely dashed] (290pt,-30pt) -- (290pt,10pt) node[at start, below] {$t_1 + b$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (14pt, -4pt) rectangle (12pt, 4pt);
\draw[draw=black, fill=red] (20pt, -4pt) rectangle (22pt, 4pt);
\draw[draw=black, fill=red] (36pt, -4pt) rectangle (38pt, 4pt);
\draw[draw=black, fill=white] (70pt, -4pt) rectangle (68pt, 4pt);
\draw[draw=black, fill=white] (86pt, -4pt) rectangle (84pt, 4pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (119pt, -4pt) rectangle (121pt, 4pt);
\draw[draw=black, fill=white] (191pt, -4pt) rectangle (189pt, 4pt);
\draw[draw=black, fill=white] (199pt, -4pt) rectangle (201pt, 4pt);
\draw[draw=black, fill=white] (215pt, -4pt) rectangle (217pt, 4pt);
\draw[draw=black, fill=white] (231pt, -4pt) rectangle (233pt, 4pt);
\draw[draw=black, fill=white] (240pt, -4pt) rectangle (242pt, 4pt);
\draw[draw=black, fill=white] (250pt, -4pt) rectangle (252pt, 4pt);
\draw[draw=black, fill=white] (261pt, -4pt) rectangle (259pt, 4pt);
\draw[draw=black, fill=blue] (281pt, -4pt) rectangle (279pt, 4pt);
\draw[draw=black, fill=white] (304pt, -4pt) rectangle (306pt, 4pt);
\end{scope}
\begin{scope}[>=latex]
\draw[|<->|] (0pt,20pt) -- (60pt,20pt) node[midway] {\circled{\footnotesize 1}};
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-40pt) -- (60pt,-40pt) node[midway,below] {{\scriptsize $t_2 - t_1 - a$}};
\draw[|<->|][dotted] (120pt,-40pt) -- (180pt,-40pt) node[midway,below] {{\scriptsize $t_2 - t_1 - a$}};
\end{scope}
\end{tikzpicture}
\end{figure}
\item \emph{There are events in $\bigl( t_1, t_1 + (t_2 - t_1 - a) \bigr)$}:
In this case, $\varphi'$
can only ensure that $\varphi_1$ holds at all events in \circled{\footnotesize 2} (see the illustration below
where $d_1 + d_2 = t_2 - t_1 - a$).
We can enforce that $\varphi_1$ holds at all events in \circled{\footnotesize 1} by
\[
\varphi'' = \psi'' \mathbin{\mathcal{U}} ( \varphi \wedge \psi'' )
\]
where
\[
\psi'' = \underbrace{(\varphi_1 \mathbin{\mathfrak{S}}_{(0, b-a)}^{|c|} \mathbf{true})}_{\psi'''} \wedge \neg (\LTLdiamondminus_{=|c|} \neg \varphi_1) \,.
\]
It is easy to see that $\varphi$ must hold at the last event in $\bigl( t_1, t_1 + (t_2 - t_1 - a) \bigr)$.
The correctness of our use of the subformula $\psi'''$ here again depends on the fact that there is an event at $t_1$.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (325pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (60pt,-40pt) -- (60pt,10pt);
\draw[-, loosely dashed] (0pt,-30pt) -- (0pt,-40pt);
\draw[-, loosely dashed] (30pt,10pt) -- (30pt,-40pt);
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (180pt,-40pt) -- (180pt,10pt);
\draw[-, loosely dashed] (120pt,-30pt) -- (120pt,-40pt);
\draw[-, loosely dashed] (150pt,10pt) -- (150pt,-40pt);
\draw[-, loosely dashed] (220pt,-10pt) -- (220pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (280pt,-10pt) -- (280pt,10pt) node[at start, below] {$t_2$};
\draw[-, loosely dashed] (290pt,-30pt) -- (290pt,10pt) node[at start, below] {$t_1 + b$};
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (14pt, -4pt) rectangle (12pt, 4pt);
\draw[draw=black, fill=red] (20pt, -4pt) rectangle (22pt, 4pt);
\draw[draw=black, fill=red] (36pt, -4pt) rectangle (38pt, 4pt);
\draw[draw=black, fill=white] (70pt, -4pt) rectangle (68pt, 4pt);
\draw[draw=black, fill=white] (86pt, -4pt) rectangle (84pt, 4pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (119pt, -4pt) rectangle (121pt, 4pt);
\draw[draw=black, fill=white] (128pt, -4pt) rectangle (130pt, 4pt);
\draw[draw=black, fill=white] (136pt, -4pt) rectangle (138pt, 4pt);
\draw[draw=black, fill=white] (149pt, -4pt) rectangle (151pt, 4pt);
\draw[draw=black, fill=white] (191pt, -4pt) rectangle (189pt, 4pt);
\draw[draw=black, fill=white] (199pt, -4pt) rectangle (201pt, 4pt);
\draw[draw=black, fill=white] (215pt, -4pt) rectangle (217pt, 4pt);
\draw[draw=black, fill=white] (231pt, -4pt) rectangle (233pt, 4pt);
\draw[draw=black, fill=white] (240pt, -4pt) rectangle (242pt, 4pt);
\draw[draw=black, fill=white] (250pt, -4pt) rectangle (252pt, 4pt);
\draw[draw=black, fill=white] (261pt, -4pt) rectangle (259pt, 4pt);
\draw[draw=black, fill=blue] (281pt, -4pt) rectangle (279pt, 4pt);
\draw[draw=black, fill=white] (304pt, -4pt) rectangle (306pt, 4pt);
\end{scope}
\begin{scope}[>=latex]
\draw[|<->|] (0pt,20pt) -- (30pt,20pt) node[midway] {\circled{\footnotesize 1}};
\draw[|<->|] (30pt,20pt) -- (60pt,20pt) node[midway] {\circled{\footnotesize 2}};
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-40pt) -- (30pt,-40pt) node[midway,below] {{\scriptsize $d_1$}};
\draw[|<->|][dotted] (30pt,-40pt) -- (60pt,-40pt) node[midway,below] {{\scriptsize $d_2$}};
\draw[|<->|][dotted] (120pt,-40pt) -- (150pt,-40pt) node[midway,below] {{\scriptsize $d_1$}};
\draw[|<->|][dotted] (150pt,-40pt) -- (180pt,-40pt) node[midway,below] {{\scriptsize $d_2$}};
\end{scope}
\end{tikzpicture}
\end{figure}
\end{itemize}
The desired formula is $\varphi' \wedge (\varphi \vee \varphi'')$.
\item \emph{$a < 0$ and $c \geq 0$}:
Without loss of generality we assume $a < b < 0$. Similar to the case \emph{$a \geq 0$ and $c > a$} above, the desired formula is
\[
\LTLdiamondminus_{(|b|, |a|)} \varphi_2 \wedge \neg \Bigl( (\neg \varphi_2) \mathbin{\mathfrak{U}}_{\bigl(c, c + (b-a)\bigr)}^{a} \bigl( \neg \varphi_1 \wedge \neg (\LTLdiamondminus_{= (c-a)} \varphi_2) \bigr) \Bigr) \,.
\]
\item \emph{$a < 0$ and $c \leq a$}:
Without loss of generality we assume $a < b < 0$.
Let $t_2$ be the first time instant in $(t_1 + a, t_1 + b)$ where there is a $\varphi_2$-event.
Similar to the case $a \geq 0$ and $c < 0$ above,
consider the following subcases:
\begin{itemize}
\item \emph{There is no event in $\bigl(t_1, t_1 + (t_2 - t_1 - a)\bigr)$}:
We enforce that $\varphi_1$ holds at all events in \circled{\footnotesize 1}
in the illustration below by
\[
\varphi''' = \mathbf{false} \, \mathbin{\mathfrak{U}}_{(a, b)}^0 \bigl( \varphi_2 \wedge (\varphi_1 \mathbin{\mathfrak{S}}_{(a, b)}^{a + |c|} \mathbf{true}) \bigr)\,.
\]
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (325pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (60pt,-10pt) -- (60pt,10pt);
\draw[-, loosely dashed] (0pt,-30pt) -- (0pt,-40pt);
\draw[-, loosely dashed] (60pt,-30pt) -- (60pt,-40pt);
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (180pt,-10pt) -- (180pt,10pt);
\draw[-, loosely dashed] (210pt,-30pt) -- (210pt,10pt) node[at start, below] {$t_1 + b$};
\draw[-, loosely dashed] (180pt,-10pt) -- (180pt,10pt) node[at start, below] {$t_2$};
\draw[-, loosely dashed] (230pt,-10pt) -- (230pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (230pt,-30pt) -- (230pt,-40pt);
\draw[-, loosely dashed] (290pt,-40pt) -- (290pt,10pt);
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (14pt, -4pt) rectangle (12pt, 4pt);
\draw[draw=black, fill=red] (20pt, -4pt) rectangle (22pt, 4pt);
\draw[draw=black, fill=red] (36pt, -4pt) rectangle (38pt, 4pt);
\draw[draw=black, fill=white] (70pt, -4pt) rectangle (68pt, 4pt);
\draw[draw=black, fill=white] (86pt, -4pt) rectangle (84pt, 4pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (131pt, -4pt) rectangle (133pt, 4pt);
\draw[draw=black, fill=white] (140pt, -4pt) rectangle (142pt, 4pt);
\draw[draw=black, fill=white] (150pt, -4pt) rectangle (152pt, 4pt);
\draw[draw=black, fill=white] (166pt, -4pt) rectangle (168pt, 4pt);
\draw[draw=black, fill=blue] (181pt, -4pt) rectangle (179pt, 4pt);
\draw[draw=black, fill=white] (191pt, -4pt) rectangle (189pt, 4pt);
\draw[draw=black, fill=white] (199pt, -4pt) rectangle (201pt, 4pt);
\draw[draw=black, fill=white] (231pt, -4pt) rectangle (229pt, 4pt);
\draw[draw=black, fill=white] (304pt, -4pt) rectangle (306pt, 4pt);
\end{scope}
\begin{scope}[>=latex]
\draw[|<->|] (0pt,20pt) -- (60pt,20pt) node[midway] {\circled{\footnotesize 1}};
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-40pt) -- (60pt,-40pt) node[midway,below] {{\scriptsize $t_2 - t_1 - a$}};
\draw[|<->|][dotted] (230pt,-40pt) -- (290pt,-40pt) node[midway,below] {{\scriptsize $t_2 - t_1 - a$}};
\end{scope}
\end{tikzpicture}
\end{figure}
\item \emph{There are events in $\bigl( t_1, t_1 + (t_2 - t_1 - a) \bigr)$}:
We enforce that $\varphi_1$ holds at all events in \circled{\footnotesize 1} and \circled{\footnotesize 2}
in the illustration below (in which $d_1 + d_2 = t_2 - t_1 - a$) by
\[
\LTLdiamondminus_{(|b|, |a|)} \varphi_2 \wedge
\bigl( \psi'' \mathbin{\mathcal{U}} ( \varphi''' \wedge \psi'' ) \bigr) \,,
\]
where $\psi''$ is defined in the case $a \geq 0$ and $c < 0$ above.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\begin{scope}[>=latex]
\draw[-, loosely dashed] (-5pt,0pt) -- (325pt,0pt);
\draw[-, loosely dashed] (0pt,-10pt) -- (0pt,10pt) node[at start, below] {$t_1 + c$};
\draw[-, loosely dashed] (60pt,-10pt) -- (60pt,10pt);
\draw[-, loosely dashed] (0pt,-30pt) -- (0pt,-40pt);
\draw[-, loosely dashed] (60pt,-30pt) -- (60pt,-40pt);
\draw[-, loosely dashed] (30pt,10pt) -- (30pt,-40pt);
\draw[-, loosely dashed] (120pt,-10pt) -- (120pt,10pt) node[at start, below] {$t_1 + a$};
\draw[-, loosely dashed] (120pt,-30pt) -- (120pt,-40pt);
\draw[-, loosely dashed] (180pt,-10pt) -- (180pt,10pt);
\draw[-, loosely dashed] (180pt,-30pt) -- (180pt,-40pt);
\draw[-, loosely dashed] (210pt,-30pt) -- (210pt,10pt) node[at start, below] {$t_1 + b$};
\draw[-, loosely dashed] (180pt,-10pt) -- (180pt,10pt) node[at start, below] {$t_2$};
\draw[-, loosely dashed] (230pt,-10pt) -- (230pt,10pt) node[at start, below] {$t_1$};
\draw[-, loosely dashed] (230pt,-30pt) -- (230pt,-40pt);
\draw[-, loosely dashed] (290pt,-40pt) -- (290pt,10pt);
\draw[-, loosely dashed] (260pt,-40pt) -- (260pt,10pt);
\draw[draw=black, fill=white] (1pt, -4pt) rectangle (-1pt, 4pt);
\draw[draw=black, fill=red] (14pt, -4pt) rectangle (12pt, 4pt);
\draw[draw=black, fill=red] (20pt, -4pt) rectangle (22pt, 4pt);
\draw[draw=black, fill=red] (36pt, -4pt) rectangle (38pt, 4pt);
\draw[draw=black, fill=white] (70pt, -4pt) rectangle (68pt, 4pt);
\draw[draw=black, fill=white] (86pt, -4pt) rectangle (84pt, 4pt);
\draw[draw=black, fill=white] (114pt, -4pt) rectangle (116pt, 4pt);
\draw[draw=black, fill=white] (131pt, -4pt) rectangle (133pt, 4pt);
\draw[draw=black, fill=white] (140pt, -4pt) rectangle (142pt, 4pt);
\draw[draw=black, fill=white] (150pt, -4pt) rectangle (152pt, 4pt);
\draw[draw=black, fill=white] (166pt, -4pt) rectangle (168pt, 4pt);
\draw[draw=black, fill=blue] (181pt, -4pt) rectangle (179pt, 4pt);
\draw[draw=black, fill=white] (191pt, -4pt) rectangle (189pt, 4pt);
\draw[draw=black, fill=white] (199pt, -4pt) rectangle (201pt, 4pt);
\draw[draw=black, fill=white] (231pt, -4pt) rectangle (229pt, 4pt);
\draw[draw=black, fill=white] (238pt, -4pt) rectangle (240pt, 4pt);
\draw[draw=black, fill=white] (252pt, -4pt) rectangle (254pt, 4pt);
\draw[draw=black, fill=white] (259pt, -4pt) rectangle (261pt, 4pt);
\draw[draw=black, fill=white] (304pt, -4pt) rectangle (306pt, 4pt);
\end{scope}
\begin{scope}[>=latex]
\draw[|<->|] (0pt,20pt) -- (30pt,20pt) node[midway] {\circled{\footnotesize 1}};
\draw[|<->|] (30pt,20pt) -- (60pt,20pt) node[midway] {\circled{\footnotesize 2}};
\end{scope}
\begin{scope}[>=to]
\draw[|<->|][dotted] (0pt,-40pt) -- (30pt,-40pt) node[midway,below] {{\scriptsize $d_1$}};
\draw[|<->|][dotted] (30pt,-40pt) -- (60pt,-40pt) node[midway,below] {{\scriptsize $d_2$}};
\draw[|<->|][dotted] (230pt,-40pt) -- (260pt,-40pt) node[midway,below] {{\scriptsize $d_1$}};
\draw[|<->|][dotted] (260pt,-40pt) -- (290pt,-40pt) node[midway,below] {{\scriptsize $d_2$}};
\end{scope}
\end{tikzpicture}
\end{figure}
\end{itemize}
The desired formula is the disjunction of these two formulas.
\item \emph{$a < 0$ and $a < c < 0$}:
Without loss of generality we assume $a < b < 0$.
The desired formula is identical to the formula in the case $a < 0$ and $c \geq 0$ above. \qedhere
\end{itemize}
\end{proof}
\noindent
We can now give an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula that distinguishes, in the pointwise semantics,
the models $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$ in the proof of
Proposition~\ref{prop:foone-strict}:
\[
\LTLdiamond_{(1, 2)} \bigl( \mathbf{true} \wedge (\mathbf{false} \mathbin{\mathfrak{U}}_{> 0}^{-1} \mathbf{true}) \bigr) \,.
\]
This formula is `equivalent' to the formula $\varphi_{\mathit{cont3}}$
which distinguishes $\mathcal{G}_{m}$ and $\mathcal{H}_{m}$ in the continuous semantics.
\subsection{The translation}\label{subsec:translation}
We now give a translation from an arbitrary \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula with a single free variable into an equivalent \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula over $\ropen{0, N}$-timed words.
Our proof strategy closely follows~\cite{Ouaknine2009}:
first convert the formula into a non-metric formula, then
translate this formula into \textup{\textmd{\textsf{LTL}}}{}, and finally construct
an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula equivalent to the original formula.
The crux of the translation is a \emph{`stacking' bijection} between $\ropen{0, N}$-timed words over $\Sigma_\mathbf{P}$
and a set of $\ropen{0, 1}$-timed words over an extended alphabet.
Roughly speaking, since the time domain is bounded, we can encode the integer parts of timestamps
with a bounded number of new monadic predicates.
This enables us to work instead with `stacked' $\ropen{0, 1}$-timed words, in which only the ordering of events is relevant.
\paragraph{\emph{Stacking bounded timed words}}
For each monadic predicate $P \in \mathbf{P}$, we introduce
fresh monadic predicates $P_i$, $0 \leq i \leq N - 1$ and let
the set of all these new monadic predicates be $\overline{\mathbf{P}}$.
The intended meaning is that for $x \in \ropen{0, 1}$, $P_i(x)$ holds in a stacked $\ropen{0, 1}$-timed word iff
$P$ holds at time $i + x$ in the corresponding $\ropen{0, N}$-timed word.
We also introduce $\overline{\mathbf{Q}} = \{Q_i \mid 0 \leq i \leq N - 1\}$
such that for $x \in \ropen{0, 1}$, $Q_i(x)$ holds in a stacked $\ropen{0, 1}$-timed word
iff there is an event at time $i + x$ in the corresponding $\ropen{0, N}$-timed word,
regardless of whether any $P \in \mathbf{P}$ holds there.
Let
\[
\vartheta_{\mathit{event}} = \forall x \, \left(\bigvee_{0 \leq i \leq N - 1} Q_i(x)\right) \wedge \forall x \, \left(\bigwedge_{0 \leq i \leq N - 1} \bigl( P_i(x) \implies Q_i(x) \bigr) \right)
\]
and $\vartheta_{\mathit{init}} = \exists x \, \bigl( \nexists x' \, (x' < x) \wedge Q_0(x) \bigr)$.\label{for:wellformed}
There is an obvious `stacking' bijection (indicated by overlining) between $\ropen{0, N}$-timed words over $\Sigma_\mathbf{P}$ and
$\ropen{0, 1}$-timed words over $\Sigma_{\overline{\mathbf{P}} \cup \overline{\mathbf{Q}}}$
satisfying $\vartheta_{\mathit{event}} \wedge \vartheta_{\mathit{init}}$. For a concrete example, the stacked counterpart
of the $\ropen{0, 2}$-timed word
\[
\rho = (\{A\}, 0)(\{A, C\}, 0.3)(\{B\}, 1)(\{B, C\}, 1.5)
\]
with $\mathbf{P} = \{A, B, C\}$ is the $\ropen{0, 1}$-timed word:
\[
\overline{\rho} = (\{Q_0, Q_1, A_0, B_1\}, 0)(\{Q_0, A_0, C_0\}, 0.3)(\{Q_1, B_1, C_1\}, 0.5) \,.
\]
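
As a concrete illustration of this bijection, the following sketch (in Python, with a representation of timed words and helper names of our own choosing; it is not part of the formal development) regroups the events of a bounded timed word by the fractional parts of their timestamps:
\begin{verbatim}
from math import floor

def stack(rho, N):
    # rho: list of (set-of-predicates, timestamp) pairs with timestamps
    # in [0, N).  Events are grouped by the fractional part of their
    # timestamps; a predicate P at time i + x becomes "P_i", and "Q_i"
    # records that some event occurs at time i + x in rho.
    stacked = {}
    for preds, t in rho:
        i, x = int(floor(t)), t - floor(t)
        assert 0 <= i < N
        bucket = stacked.setdefault(x, set())
        bucket.add(f"Q_{i}")
        bucket.update(f"{P}_{i}" for P in preds)
    return sorted(stacked.items())

# The example from the text (N = 2):
rho = [({"A"}, 0.0), ({"A", "C"}, 0.3), ({"B"}, 1.0), ({"B", "C"}, 1.5)]
print(stack(rho, 2))
# [(0.0, {'Q_0', 'Q_1', 'A_0', 'B_1'}), (0.3, {'Q_0', 'A_0', 'C_0'}),
#  (0.5, {'Q_1', 'B_1', 'C_1'})]   (up to the ordering within each set)
\end{verbatim}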
\paragraph{\emph{Stacking \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas}}
Let $\vartheta(x)$ be an \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula with a single free variable $x$ where each
quantifier uses a fresh new variable.
Without loss of generality, we assume that $\vartheta(x)$ contains only existential quantifiers (this can be achieved by syntactic rewriting).
Replace the formula by
\[
\bigl(Q_0(x) \wedge \vartheta[x/x]\bigr) \vee \bigl(Q_1(x) \wedge \vartheta[x+1/x]\bigr) \vee \ldots \vee \bigl(Q_{N-1}(x) \wedge \vartheta[x + (N - 1)/x] \bigr)
\]
where $\vartheta[e/x]$ denotes the formula obtained by substituting all free occurrences of $x$
in $\vartheta$ by (an expression) $e$.
Then, similarly, recursively replace every subformula $\exists x' \, \theta$ by
\[
\exists x' \, \Bigl( \bigl( Q_0(x') \wedge \theta[x'/x'] \bigr) \vee \ldots \vee \bigl( Q_{N-1}(x') \wedge \theta[x' + (N - 1)/x'] \bigr) \Bigr) \,.
\]
Note that we do not actually have the $+k$ functions in our pointwise version of \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{};
they are only used as annotations here and will be removed later,
e.g., $x' + k$ means that $Q_k(x')$ holds. We then carry out the following syntactic substitutions:
\begin{itemize}
\item For each inequality of the form $x_1 + k_1 < x_2 + k_2$, replace it with
\begin{itemize}
\item $x_1 < x_2$ if $k_1 = k_2$
\item $\mathbf{true}$ if $k_1 < k_2$
\item $\mathbf{false}$ if $k_1 > k_2$
\end{itemize}
\item For each distance formula, e.g., $d(x_1 + k_1, x_2 + k_2) < 2$, replace it with
\begin{itemize}
\item $\mathbf{true}$ if $|k_1 - k_2| \leq 1$
\item $x_2 < x_1$ if $k_2 - k_1 = 2$
\item $x_1 < x_2$ if $k_1 - k_2 = 2$
\item $\mathbf{false}$ if $|k_1 - k_2| > 2$
\end{itemize}
\item Replace terms of the form $P(x_1 + k)$ with $P_k(x_1)$.
\end{itemize}
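
These substitutions are purely arithmetic case distinctions on the integer offsets; the sketch below (in Python, with string placeholders of our own choosing, and covering only the $d(\cdot, \cdot) < 2$ constraint used above) spells them out:
\begin{verbatim}
def subst_order(k1, k2):
    # x1 + k1 < x2 + k2, with x1, x2 ranging over [0, 1)
    if k1 == k2:
        return "x1 < x2"
    return "true" if k1 < k2 else "false"

def subst_dist_lt2(k1, k2):
    # d(x1 + k1, x2 + k2) < 2, with x1, x2 ranging over [0, 1)
    if abs(k1 - k2) <= 1:
        return "true"
    if k2 - k1 == 2:
        return "x2 < x1"
    if k1 - k2 == 2:
        return "x1 < x2"
    return "false"        # |k1 - k2| > 2

print(subst_order(1, 3), "/", subst_dist_lt2(0, 2))   # true / x2 < x1
\end{verbatim}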
\noindent
This gives a non-metric first-order formula $\overline{\vartheta}(x)$ over $\overline{\mathbf{P}} \cup \overline{\mathbf{Q}}$.
Denote by $\mathit{frac}(t)$ the fractional part of a non-negative real $t$.
It is not hard to see that for each $\ropen{0, N}$-timed word $\rho = (\sigma, \tau)$ over $\Sigma_{\mathbf{P}}$
and its stacked counterpart $\overline{\rho}$, the following holds:
\begin{itemize}
\item $\rho, t \models \vartheta(x)$ implies $\overline{\rho}, \overline{t} \models \overline{\vartheta}(x)$ where $\overline{t} = \mathit{frac}(t)$
\item $\overline{\rho}, \overline{t} \models \overline{\vartheta}(x)$ implies there
exists $t \in \rho$ with $\mathit{frac}(t) = \overline{t}$ such that $\rho, t \models \vartheta(x)$.
\end{itemize}
Moreover,
if $\rho, t \models \vartheta(x)$, then the integer part of $t$
indicates which disjunct in $\overline{\vartheta}(x)$ is satisfied
when $x$ is substituted with $\overline{t} = \mathit{frac}(t)$, and vice versa.
By Kamp's theorem~\cite{Kamp1968} (applied individually to each $\vartheta[x + i/x]$), $\overline{\vartheta}(x)$ is equivalent to an \textup{\textmd{\textsf{LTL}}}{} formula
$\overline{\varphi}$ of the following form:
\[
(Q_0 \wedge \overline{\varphi}_0) \vee (Q_1 \wedge \overline{\varphi}_1) \vee \ldots \vee (Q_{N-1} \wedge \overline{\varphi}_{N-1}) \,.
\]
\paragraph{\emph{Unstacking}}
We construct inductively an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\psi$
for each subformula $\overline{\psi}$ of $\overline{\varphi}_i$ (for some $i \in \{0, \ldots, N - 1\}$).
Again, we make use of the formulas in $\Phi_{\mathit{int}}$ defined earlier.
\begin{itemize}
\item $\overline{\psi} = P_j$: Let
\[
\psi = (\varphi_{0, 1} \wedge \LTLdiamond_{=j} P) \vee \ldots \vee (\varphi_{j, j+1} \wedge P)
\vee \ldots \vee (\varphi_{N-1, N} \wedge \LTLdiamondminus_{=\left( (N-1) - j \right)} P) \,.
\]
\item $\overline{\psi} = Q_j$: Similarly, let
\[
\psi = (\varphi_{0, 1} \wedge \LTLdiamond_{=j} \mathbf{true}) \vee \ldots \vee (\varphi_{j, j+1} \wedge \mathbf{true})
\vee \ldots \vee (\varphi_{N-1, N} \wedge \LTLdiamondminus_{=\left( (N-1) - j \right)} \mathbf{true}) \,.
\]
\item $\overline{\psi} = \overline{\psi}_1 \mathbin{\mathcal{U}} \overline{\psi}_2$:
Let $\psi^{j, k, l} = \psi_1 \mathbin{\mathfrak{U}}_{(j, j + 1)}^k (\psi_2 \wedge \varphi_{l, l + 1})$.
The desired formula is
\[
\psi = \bigvee_{0 \leq i \leq N - 1} \left(\varphi_{i, i + 1} \wedge
\bigvee_{\substack{-i \leq j \leq (N-1) - i \\ l = i + j}} \left(
\bigwedge_{-i \leq k \leq (N-1) - i} \psi^{j, k, l}
\right)
\right) \,.
\]
\item $\overline{\psi} = \overline{\psi}_1 \mathbin{\mathcal{S}} \overline{\psi}_2$: This is symmetric to the case of $\overline{\psi}_1 \mathbin{\mathcal{U}} \overline{\psi}_2$.
\end{itemize}
The constructions for the other cases are trivial and therefore omitted.
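
To make the shape of these disjunctions concrete, the following sketch (in Python; the strings Diamond[=k], DiamondMinus[=k] and phi_{i,i+1} are our own rendering of $\LTLdiamond_{=k}$, $\LTLdiamondminus_{=k}$ and the formulas in $\Phi_{\mathit{int}}$) prints the formula constructed for $\overline{\psi} = P_j$:
\begin{verbatim}
def unstack_P(j, N, P="P"):
    # The i-th disjunct guards with phi_{i,i+1} and looks exactly |j - i|
    # time units into the future (i < j) or into the past (i > j) for a
    # P-event.
    disjuncts = []
    for i in range(N):
        if i < j:
            body = f"Diamond[={j - i}] {P}"
        elif i == j:
            body = P
        else:
            body = f"DiamondMinus[={i - j}] {P}"
        disjuncts.append(f"(phi_{{{i},{i + 1}}} & {body})")
    return " | ".join(disjuncts)

print(unstack_P(1, 3))
# (phi_{0,1} & Diamond[=1] P) | (phi_{1,2} & P) | (phi_{2,3} & DiamondMinus[=1] P)
\end{verbatim}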
\begin{prop}\label{prop:translation}
Let $\overline{\psi}$ be a subformula of $\overline{\varphi}_i$ for some $i \in \{0, \ldots, N - 1\}$.
There is an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\psi$ such that for any $\ropen{0, N}$-timed word $\rho$,
$t \in \rho$ and $\mathit{frac}(t) = \overline{t} \in \overline{\rho}$, we have
\[
\overline{\rho}, \overline{t} \models \overline{\psi} \iff \rho, t \models \psi \,.
\]
\end{prop}
\begin{proof}
Induction on the structure of $\overline{\psi}$ and $\psi$, where the latter is constructed as described above.
\begin{itemize}
\item $\overline{\psi} = P_j$: Assume $\overline{\rho}, \overline{t} \models \overline{\psi}$.
If $t = j + \overline{t}$, the disjunct $(\varphi_{j, j+1} \wedge P)$ of $\psi$
clearly holds at $t$ in $\rho$. If $t = j' + \overline{t}$ where $j' \neq j$, since there is a $P$-event
at time $j + \overline{t}$ in $\rho$, the $j'$-th disjunct of $\psi$ must hold at $t$ in $\rho$.
The proof for the other direction is similar.
\item $\overline{\psi} = \overline{\psi}_1 \mathbin{\mathcal{U}} \overline{\psi}_2$:
Assume $\overline{\rho}, \overline{t} \models \overline{\psi}$
and let the `witness' (i.e.,~where $\overline{\psi}_2$ holds) be at $\overline{t'}$. By construction and the induction hypothesis, there is an event at $t' = l + \overline{t'}$ in $\rho$ for some $l \in \{0, \ldots, N - 1\}$
such that $\rho, t' \models \psi_2$. Moreover, since we have $\overline{\rho}, \overline{t''} \models \overline{\psi_1}$
for all $\overline{t''}$, $\overline{t} < \overline{t''} < \overline{t'}$, we must have
$\rho, t'' \models \psi_1$
for all $t'' \in \rho$ with
$t'' = k' + \overline{t''}$ for some $\overline{t} < \overline{t''} < \overline{t'}$
and $0 \leq k' \leq N - 1$.
Now let $t = i + \overline{t}$ for some $i \in \{0, \ldots, N - 1\}$ be a timestamp in $\rho$
and let $j = l - i$. It is clear that $\rho, t \models \varphi_{i, i + 1}$
and
\[
\rho, t \models \bigwedge_{\substack{0 \leq k' \leq N-1 \\ k = k' - i}} \psi^{j, k, l} \,,
\]
as required. For the other direction, let $t = i + \overline{t}$ for
some $i \in \{0, \ldots, N - 1\}$ and let
\[
\rho, t \models \bigwedge_{-i \leq k \leq (N-1) - i} \psi^{j, k, l}
\]
for some $j \in \{-i, \ldots, (N - 1) - i\}$ and $l = i + j$. It follows that there is a
(minimal) $\overline{t'} > \overline{t}$ such that $\rho, l + \overline{t'} \models \psi_2$
and $\rho, k' + \overline{t''} \models \psi_1$
for all $t'' \in \rho$ with
$t'' = k' + \overline{t''}$ for some $\overline{t} < \overline{t''} < \overline{t'}$
and $0 \leq k' \leq N - 1$. The claim follows by construction and the induction hypothesis.
\end{itemize}
The other cases are trivial or symmetric.
\end{proof}
Using the construction above, we obtain an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi_i$ for each $\overline{\varphi}_i$.
Substitute them into $\overline{\varphi}$ and replace all remaining $Q_i$ by $\varphi_{i, i + 1}$
to obtain our final formula $\varphi$,
which is equivalent to the original \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula $\vartheta(x)$ over $\ropen{0, N}$-timed words.
\begin{prop}
For any $\ropen{0, N}$-timed word $\rho$ and $t \in \rho$, we have
\[
\rho, t \models \varphi \iff \rho, t \models \vartheta(x) \,.
\]
\end{prop}
We are now ready to state the main result of this section.
\begin{thm}\label{thm:boundedexpcomp}
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} is expressively complete for \emph{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} over $\ropen{0, N}$-timed words.
\end{thm}
\subsection{Time-bounded verification}
We claim that the \emph{time-bounded satisfiability} and \emph{time-bounded model-checking}
problems for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} are $\mathrm{EXPSPACE}$-complete in both the pointwise and continuous semantics.
\begin{thm}
The time-bounded satisfiability problem for \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} (in both the pointwise and continuous semantics)
is $\mathrm{EXPSPACE}$-complete.
\end{thm}
\begin{proof}
First note that for each \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula over timed words one can construct,
in linear time, an `equivalent' \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula over signals of the form $f^\rho$.
Then, in the continuous semantics, one can replace all subformulas of the form $\varphi_1 \mathbin{\mathfrak{U}}_{(a, b)}^c \varphi_2$
in an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula by
\[
(\LTLdiamond_{= c} \varphi_1) \mathbin{\mathcal{U}}_{< b} (\LTLdiamond_{= a} \varphi_2)
\]
(this can incur at most a linear blow-up). The claim therefore follows from~\cite{Ouaknine2009}.
However, for the sake of completeness, we give a direct proof (for the case of pointwise semantics)
along the lines of~\cite{Ouaknine2009} here; see Section~\ref{sec:conclusion} for a discussion of the practical implications.
For each subformula $\psi$ of a given \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi$ and every $i \in \{0, \ldots, N\}$,
we introduce a monadic predicate $F^{\psi}_i$. We then add suitable
subformulas into $\overline{\varphi}$ to ensure that $F^{\psi}_i$ holds
at $\overline{t}$ in $\overline{\rho}$ iff $\psi$ holds at $t = \overline{t} + i$ in $\rho$.
As an example, let $A \mathbin{\mathfrak{U}}_{(2, 3)}^1 B$ be a subformula of $\varphi$.
We require the following formula to hold at every point in time
(assume that $i \leq N - 4$):
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
F_i^{A \mathbin{\mathfrak{U}}_{(2, 3)}^1 B} & \iff & \bigl( (F_{i+1}^Q \implies F_{i+1}^A) \mathbin{\mathcal{U}} F_{i+2}^B\bigr) \\
& & \vee \Bigl( \LTLsquare (F_{i+1}^Q \implies F_{i+1}^A) \wedge \LTLdiamondminus \bigl(F_{i+3}^B \wedge \LTLsquareminus (F_{i+2}^Q \implies F_{i+2}^A)\bigr) \Bigr)\,.
\end{array}
\]
We also add the \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} equivalents of $\vartheta_{\mathit{event}}$ and $\vartheta_{\mathit{init}}$
into $\overline{\varphi}$ as conjuncts.
It is clear that $\overline{\varphi}$ is of size exponential in the size of $\varphi$; since satisfiability of \textup{\textmd{\textsf{LTL}}}{} formulas is decidable in polynomial space, this yields the $\mathrm{EXPSPACE}$ upper bound.
The matching $\mathrm{EXPSPACE}$-hardness follows from the corresponding result for
\textmd{\textsf{Bounded-MTL}} (in the pointwise semantics) in~\cite{Bouyer2007}.
\end{proof}
Since the time-bounded model-checking problem and satisfiability problem
are inter-reducible in both the pointwise and continuous semantics~\cite{Wilke1994, Henzinger1998},
we have the following theorem.
\begin{thm}
The time-bounded model-checking problem for timed automata against \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} (in both the pointwise
and continuous semantics) is $\mathrm{EXPSPACE}$-complete.
\end{thm}
\section{Expressive completeness of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} over unbounded timed words}
Recall that the counting modality $C_2(x, X)$ asserts that $X$ holds at at least two points
in $(x, x+1)$. While the modality is not expressible in \textup{\textmd{\textsf{MTL}}}{}, it is equivalent to
the following \textup{\textmd{\textsf{MTL}}}{} formula with \emph{rational} constants:
\[
\LTLdiamond_{(0, \frac{1}{2})} (X \wedge \LTLdiamond_{(0, \frac{1}{2})} X)
\vee \LTLdiamond_{(\frac{1}{2}, 1)} (X \wedge \LTLdiamondminus_{(0, \frac{1}{2})} X)
\vee ( \LTLdiamond_{(0, \frac{1}{2})} X \wedge \LTLdiamond_{(\frac{1}{2}, 1)} X ) \,.
\]
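
As a sanity check of this equivalence, the following sketch (in Python, using a deliberately simplified reading of our own: a behaviour is a finite set of time points at which $X$ holds, and both properties are evaluated at time $0$) compares the two sides on randomly generated instances:
\begin{verbatim}
import random

def c2(ts, t):
    # X holds at at least two points strictly inside (t, t + 1)
    return sum(1 for u in ts if t < u < t + 1) >= 2

def ev(ts, t, a, b):      # Diamond_(a,b) X at time t
    return any(t + a < u < t + b for u in ts)

def once(ts, t, a, b):    # DiamondMinus_(a,b) X at time t
    return any(t - b < u < t - a for u in ts)

def mtl(ts, t):
    d1 = any(t < u < t + 0.5 and ev(ts, u, 0, 0.5) for u in ts)
    d2 = any(t + 0.5 < u < t + 1 and once(ts, u, 0, 0.5) for u in ts)
    d3 = ev(ts, t, 0, 0.5) and ev(ts, t, 0.5, 1)
    return d1 or d2 or d3

random.seed(0)
for _ in range(10000):
    ts = [random.uniform(0, 2) for _ in range(random.randint(0, 5))]
    assert c2(ts, 0.0) == mtl(ts, 0.0)
\end{verbatim}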
Indeed, \textup{\textmd{\textsf{MTL}}}{} with rational constants
is expressively complete for \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} (the rational version of \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}) over signals~\cite{Hunter2012}.
Unfortunately, even with rational endpoints, \textup{\textmd{\textsf{MTL}}}{} is still less expressive than \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} in the pointwise semantics~\cite{Prabhakar2006}.
We show in this section that expressive completeness of \textup{\textmd{\textsf{MTL}}}{} over (infinite) timed words can be recovered by adding (the rational versions of) the
generalised `Until' ($\mathbin{\mathfrak{U}}_I^c$) and generalised `Since' ($\mathbin{\mathfrak{S}}_I^c$) modalities
introduced in the previous section.
Our presentation in this section essentially follows~\cite{Hunter2012}. We first give a set of rewriting
rules that `extract' unbounded temporal operators from the scopes of bounded temporal operators.
Then we invoke Gabbay's separation theorem~\cite{Gabbay1980} to obtain a syntactic separation result for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}
in the pointwise semantics. Exploiting a normal form for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} in~\cite{Gabbay1980},
we show that any bounded \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} formula can be rewritten into an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula.
Finally, these ideas are combined to obtain the desired result.
\subsection{Syntactic separation of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas}\label{sec:separation}
We present a series of logical equivalence rules that can
be used to rewrite a given \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula into an equivalent
formula in which no unbounded temporal operator occurs within the
scope of a bounded temporal operator. Only the rules for
open intervals are given, as the rules for other types of intervals are
straightforward variants.
\paragraph{\emph{A normal form for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}}}
We say an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula is in \emph{normal form} if it satisfies:
\begin{enumerate}[label=(\roman*).]
\item All occurrences of unbounded temporal operators are of the form $\mathbin{\mathcal{U}}_{(0, \infty)}$, $\mathbin{\mathcal{S}}_{(0, \infty)}$, $\LTLsquare_{(0, \infty)}$, $\LTLsquareminus_{(0, \infty)}$.
\item All other occurrences of temporal operators are of the form
$\mathbin{\mathcal{U}}_I$, $\mathbin{\mathcal{S}}_I$, $\mathbin{\mathfrak{U}}^c_I$, $\mathbin{\mathfrak{S}}^c_I$ with \mbox{bounded $I$}.
\item Negation is only applied to monadic predicates or bounded temporal operators.
\item In any subformula of the form $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$, $\varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$,
$\LTLdiamond_I \varphi_2$, $\LTLdiamondminus_I \varphi_2$, $\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2$, $\varphi_1 \mathbin{\mathfrak{S}}_I^c \varphi_2$ where $I$ is bounded,
$\varphi_1$ is a disjunction of subformulas and $\varphi_2$ is a conjunction of subformulas.
\end{enumerate}
We now describe how to rewrite a given formula into normal form.
To satisfy (i) and (ii), apply the usual rules (e.g.,~$\LTLsquare_I \varphi \iff \neg \LTLdiamond_I \neg \varphi$) and the rules:
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} \varphi_2 & \iff & \varphi_1 \mathbin{\mathcal{U}} \varphi_2 \wedge
\LTLsquare_{\lopen{0, a}} (\varphi_1 \wedge \varphi_1 \mathbin{\mathcal{U}} \varphi_2) \\
\varphi_1 \mathbin{\mathfrak{U}}^c_{(a, \infty)} \varphi_2 & \iff & \varphi_1 \mathbin{\mathfrak{U}}^c_{\lopen{a, 2a}} \varphi_2 \vee \Big( \LTLdiamond^w_{[0, c]} \big(\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} (\varphi_2 \vee \LTLdiamond_{\leq a - c} \varphi_2) \big) \Big) \,.
\end{array}
\]
To satisfy (iii), use the usual rules and the rule:
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\neg (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) & \iff & \LTLsquare \neg \varphi_2 \vee \big( \neg \varphi_2 \mathbin{\mathcal{U}} (\neg \varphi_2 \wedge \neg \varphi_1) \big) \,.
\end{array}
\]
For (iv), use the usual rules of Boolean algebra and the rules below:
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\phi \mathbin{\mathcal{U}}_I (\varphi_1 \vee \varphi_2) & \iff & (\phi \mathbin{\mathcal{U}}_I \varphi_1) \vee (\phi \mathbin{\mathcal{U}}_I \varphi_2) \\
(\varphi_1 \wedge \varphi_2) \mathbin{\mathcal{U}}_I \phi & \iff & (\varphi_1 \mathbin{\mathcal{U}}_I \phi) \wedge (\varphi_2 \mathbin{\mathcal{U}}_I \phi) \\
\phi \mathbin{\mathfrak{U}}_I^c (\varphi_1 \vee \varphi_2) & \iff & (\phi \mathbin{\mathfrak{U}}_I^c \varphi_1) \vee (\phi \mathbin{\mathfrak{U}}_I^c \varphi_2) \\
(\varphi_1 \wedge \varphi_2) \mathbin{\mathfrak{U}}_I^c \phi & \iff & (\varphi_1 \mathbin{\mathfrak{U}}_I^c \phi) \wedge (\varphi_2 \mathbin{\mathfrak{U}}_I^c \phi) \,.
\end{array}
\]
The rules for past temporal operators are symmetric.
We prove one of these rules as the others are simpler.
\begin{prop}\label{prop:removeunboundedguntil}
The following equivalence holds over infinite timed words:
\[
\varphi_1 \mathbin{\mathfrak{U}}^c_{(a, \infty)} \varphi_2 \iff \varphi_1 \mathbin{\mathfrak{U}}^c_{\lopen{a, 2a}} \varphi_2 \vee \Big( \LTLdiamond^w_{[0, c]} \big(\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} (\varphi_2 \vee \LTLdiamond_{\leq a - c} \varphi_2) \big) \Big) \,.
\]
\end{prop}
\begin{proof}
Let the current position be $i$ and the witness be at position $w$. Consider the following cases:
\begin{itemize}
\item $\tau_w \in \lopen{\tau_i + a, \tau_i + 2a}$: $\varphi_1 \mathbin{\mathfrak{U}}^c_{\lopen{a, 2a}} \varphi_2$ clearly holds.
\item $\tau_w \in (\tau_i + 2a, \infty)$: Consider the following subcases:
\begin{itemize}
\item $\varphi_1$ holds at all positions $j < w$ such that $\tau_j > \tau_i + c$:
$\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} \varphi_2$ holds at the maximal position $j'$ such that $\tau_{j'} \in [\tau_i, \tau_i + c]$.
\item Otherwise, i.e.,~$\varphi_1$ holds at all positions $j < w$ such that $\tau_j > \tau_i + c$ and $\tau_w - \tau_j > a - c$, but not at all positions $j < w$ with $\tau_j > \tau_i + c$:
Then there is a position $j'$ with $\tau_{j'} > \tau_i + c$ at which $\varphi_1$ does not hold, and by the subcase assumption $\tau_w - \tau_{j'} \leq a - c$.
Since $\tau_w > \tau_i + 2a$, we have $\tau_{j'} > \tau_i + a + c$.
It follows that $\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} (\LTLdiamond_{\leq a - c} \varphi_2)$ holds at the maximal position in $[\tau_i, \tau_i + c]$.
\end{itemize}
\end{itemize}
The other direction is obvious.
\end{proof}
\paragraph{\emph{Extracting unbounded operators from bounded operators}}
We now provide a set of rewriting rules that extract unbounded temporal operators from the scopes of bounded temporal operators.
In what follows, let $\varphi_\textit{xlb} = \mathbf{false} \mathbin{\mathcal{U}}_{(0, b)} \mathbf{true}$,
$\varphi_\textit{ylb} = \mathbf{false} \mathbin{\mathcal{S}}_{(0, b)} \mathbf{true}$ and
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\varphi_\textit{ugb} & = & \Big( (\varphi_\textit{xlb} \implies \LTLsquare_{(b, 2b)} \varphi_1) \wedge \big( \neg \varphi_\textit{ylb} \implies ( \varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1 ) \big) \Big) \\[0.5em]
& & \mathbin{\mathcal{U}} \Bigg( \big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{(b, 2b)} \varphi_2)\big) \vee \bigg( \neg \varphi_\textit{ylb} \wedge \Big( \varphi_2 \vee \big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{\lopen{0, b}} \varphi_2) \big) \Big) \bigg) \Bigg) \,, \\[1.5em]
\varphi_\textit{ggb} & = & \LTLsquare \Big( (\varphi_\textit{xlb} \implies \LTLsquare_{(b, 2b)} \varphi_1) \wedge \big( \neg \varphi_\textit{ylb} \implies ( \varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1 ) \big) \Big) \,.
\end{array}
\]
The intended meanings of the formulas $\varphi_\textit{ugb}$ and $\varphi_\textit{ggb}$ are similar (yet not identical) to $\varphi_1 \mathbin{\mathfrak{U}}^b_{>b} \varphi_2$ and
$\neg \big(\mathbf{true} \mathbin{\mathfrak{U}}^b_{>b} (\neg \varphi_1) \big)$, respectively.
Indeed, the equivalences in the following proposition
still hold if we replace all occurrences of $\varphi_\textit{ugb}$ and $\varphi_\textit{ggb}$
by these simpler formulas.
We, however, have to use these complicated formulas here as we aim to pull the
unbounded `Until' operator to the outermost level.
The subformulas $\LTLsquare_{(b, 2b)} \varphi_1$ and $\LTLsquare_{\lopen{0, b}} \varphi_1$
assert that $\varphi_1$ holds continuously in short `strips', and
we use the subformulas $\varphi_\textit{xlb}$ and $\varphi_\textit{ylb}$ to ensure that each event
before the point where $\varphi_2$ holds is covered by such a strip.
\begin{prop}\label{prop:extract}
The following equivalences hold over infinite timed words:
\small
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \wedge \chi \big) & \iff & \theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \wedge \chi \big) \vee \Big( \big( \theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquare_{(0, 2b)} \varphi_1 \wedge \chi) \big) \wedge \varphi_\textit{ugb} \Big) \\[0.5em]
\theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquare \varphi \wedge \chi) & \iff & \big( \theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquare_{(0, 2b)} \varphi \wedge \chi) \big) \wedge \varphi_\textit{ggb} \\[0.5em]
\theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{S}} \varphi_2) \wedge \chi \big) & \iff & \theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \wedge \chi \big) \vee \Big( \big( \theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquareminus_{(0, b)} \varphi_1 \wedge \chi) \big) \wedge \varphi_1 \mathbin{\mathcal{S}} \varphi_2 \Big) \\[0.5em]
\theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquareminus \varphi \wedge \chi) & \iff & \big( \theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquareminus_{(0, b)} \varphi \wedge \chi) \big) \wedge \LTLsquareminus \varphi \\[0.5em]
\big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta & \iff & \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta \\
& & {} \vee \bigg( \Big( \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(0, b)} (\LTLsquare_{(0, 2b)} \varphi_1) \Big) \wedge \LTLdiamond_{(a, b)} \theta \wedge \varphi_\textit{ugb} \bigg) \\[0.5em]
\big( (\LTLsquare \varphi) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta & \iff & \chi \mathbin{\mathcal{U}}_{(a, b)} \theta \vee \big( \chi \mathbin{\mathcal{U}}_{(0, b)} (\LTLsquare_{(0, 2b)} \varphi) \wedge \LTLdiamond_{(a, b)} \theta \wedge \varphi_\textit{ggb} \big) \\[0.5em]
\big( (\varphi_1 \mathbin{\mathcal{S}} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta & \iff & \big( (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta \\ & & {} \vee \bigg( \Big( \big(\LTLsquareminus_{(0, b)} \varphi_1 \vee (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta \Big) \wedge \varphi_1 \mathbin{\mathcal{S}} \varphi_2 \bigg) \\[0.5em]
\big( (\LTLsquareminus \varphi) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta & \iff & \chi \mathbin{\mathcal{U}}_{(a, b)} \theta \vee \Big( \big( (\LTLsquareminus_{(0, b)} \varphi \vee \chi) \mathbin{\mathcal{U}}_{(a, b)} \theta \big) \wedge \LTLsquareminus \varphi \Big) \,.
\end{array}
\]
\end{prop}
\begin{proof}
We sketch the proof for the first rule. In what follows, let the current position be $i$.
For the forward direction, let the witness be at position $w$. If $\tau_w < \tau_j + 2b$ for some $j$ such that $\tau_j \in (\tau_i + a, \tau_i + b)$,
the subformula $\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2$ clearly holds at $j$ and we are done.
Otherwise, let $j$ be the maximal position such that $\tau_j \in (\tau_i + a, \tau_i + b)$.
We know that $\LTLsquare_{(0, 2b)} \varphi_1$ must hold at $j$, so
$(\varphi_\textit{xlb} \implies \LTLsquare_{(b, 2b)} \varphi_1)$,
$\varphi_\textit{ylb}$, and hence $\big( \neg \varphi_\textit{ylb} \implies ( \varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1 ) \big)$
must hold at all positions $j'$, $i < j' < j$.
Let $l > j$ be the minimal position such that $\tau_w \in (\tau_l + b, \tau_l + 2b)$.
Consider the following cases:
\begin{itemize}
\item There exists such $l$:
It is clear that $\big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{(b, 2b)} \varphi_2)\big)$ holds at $l$.
Since $\LTLsquare_{(b, 2b)} \varphi_1$ holds at all positions $j''$, $j \leq j'' < l$ by
the minimality of $l$, $(\varphi_\textit{xlb} \implies \LTLsquare_{(b, 2b)} \varphi_1)$ also holds at these positions.
For the other conjunct, note that $\varphi_\textit{ylb}$ holds at $j$ and $\varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1$ holds at all positions $j'''$, $j < j''' < l$.
\item There is no such $l$:
Consider the following cases:
\begin{itemize}
\item $\neg \varphi_\textit{ylb}$ and $\neg \LTLdiamondminus_{= b} \mathbf{true}$ hold at $w$:
By assumption, there is no event in $(\tau_w - 2b, \tau_w)$. The proof is similar to the case where $l$ exists.
\item $\neg \varphi_\textit{ylb}$ and $\LTLdiamondminus_{= b} \mathbf{true}$ hold at $w$: Let $l'$ be
the position such that $\tau_{l'} = \tau_w - b$. By assumption, there is no event in $(\tau_{l'} - b, \tau_{l'})$.
It follows that $\neg \varphi_\textit{ylb}$ and $\big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{\lopen{0, b}} \varphi_2) \big)$
hold at $l'$. The proof is similar.
\item $\varphi_\textit{ylb}$ holds at $w$:
By assumption, there is no event in $(\tau_w - 2b, \tau_w - b)$. It is easy to see that there
is a position such that $\neg \varphi_\textit{ylb} \wedge \big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{\lopen{0, b}} \varphi_2) \big)$
holds. The proof is again similar.
\end{itemize}
\end{itemize}
\noindent
We prove the other direction by contraposition. Consider the interesting case where $\LTLsquare_{(0, 2b)} \varphi_1$ holds at the maximal position $j$
such that $\tau_j \in (\tau_i + a, \tau_i + b)$, yet $\varphi_1 \mathbin{\mathcal{U}} \varphi_2$ does not hold at $j$.
By assumption, there is no $\varphi_2$-event in $(\tau_j, \tau_j + 2b)$.
If $\varphi_2$ never holds in $\ropen{\tau_j + 2b, \infty}$
then we are done. Otherwise, let $l > j$ be the minimal position such that both $\varphi_1$ and $\varphi_2$ do not hold at $l$
(note that $\tau_l \geq \tau_j + 2b$).
It is clear that
\[
\Bigg( \big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{(b, 2b)} \varphi_2)\big) \vee \bigg( \neg \varphi_\textit{ylb} \wedge \Big( \varphi_2 \vee \big(\varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}}_{\lopen{0, b}} \varphi_2) \big) \Big) \bigg) \Bigg)
\]
does not hold at any position $j'$ with $i < j' \leq l$. Consider the following cases:
\begin{itemize}
\item $\neg \varphi_\textit{ylb}$ holds at $l$: $\varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1$
does not hold at $l$, and therefore $\varphi_\textit{ugb}$ fails to hold at $i$.
\item $\varphi_\textit{ylb}$ holds at $l$: Consider the following cases:
\begin{itemize}
\item There is an event in $(\tau_l - 2b, \tau_l - b)$: Let $j''$ be the maximal position of such an event. We have $j'' + 1 < l$, $\tau_{j'' + 1} - \tau_{j''} \geq b$
and $\tau_l - \tau_{j'' + 1} < b$. However, it follows that $\varphi_\textit{ylb}$ does not hold at $j'' + 1$
and $\varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1$ holds at $j'' + 1$, which is a contradiction.
\item There is no event in $(\tau_l - 2b, \tau_l - b)$: Let $j''$ be the minimal position such that
$\tau_{j''} \in \ropen{\tau_l - b, \tau_l}$. It is clear that $\varphi_\textit{ylb}$ does not hold at $j''$ and
$\varphi_1 \wedge \LTLsquare_{\lopen{0, b}} \varphi_1$ must hold at $j''$, which is a contradiction. \qedhere
\end{itemize}
\end{itemize}
\end{proof}
\begin{prop}
The following equivalences hold over infinite timed words:
\small
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\theta \mathbin{\mathfrak{U}}^c_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \wedge \chi \big) & \iff & \theta \mathbin{\mathfrak{U}}^c_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \wedge \chi \big) \vee \Big( \big( \theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquare_{(0, 2b)} \varphi_1 \wedge \chi) \big) \wedge \varphi_\textit{ugb} \Big) \\[0.5em]
\theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquare \varphi \wedge \chi) & \iff & \big( \theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquare_{(0, 2b)} \varphi \wedge \chi) \big) \wedge \varphi_\textit{ggb} \\[0.5em]
\theta \mathbin{\mathfrak{U}}^c_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{S}} \varphi_2) \wedge \chi \big) & \iff & \theta \mathbin{\mathfrak{U}}^c_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \wedge \chi \big) \vee \Big( \big( \theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquareminus_{(0, b)} \varphi_1 \wedge \chi) \big) \wedge \varphi_1 \mathbin{\mathcal{S}} \varphi_2 \Big) \\[0.5em]
\theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquareminus \varphi \wedge \chi) & \iff & \big( \theta \mathbin{\mathfrak{U}}^c_{(a, b)} (\LTLsquareminus_{(0, b)} \varphi \wedge \chi) \big) \wedge \LTLsquareminus \varphi \\[0.5em]
\big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta & \iff & \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta \\
& & {} \vee \bigg( \Big( \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{\big(c, c + (b - a)\big)} (\LTLsquare_{(0, 2b)} \varphi_1) \Big) \wedge \LTLdiamond_{(a, b)} \theta \wedge \varphi_\textit{ugb} \bigg) \\[0.5em]
\big( (\LTLsquare \varphi) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta & \iff & \chi \mathbin{\mathfrak{U}}^c_{(a, b)} \theta
\vee \big( \chi \mathbin{\mathfrak{U}}^c_{\big(c, c + (b - a)\big)} (\LTLsquare_{(0, 2b)} \varphi) \wedge \LTLdiamond_{(a, b)} \theta \wedge \varphi_\textit{ggb} \big) \\[0.5em]
\big( (\varphi_1 \mathbin{\mathcal{S}} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta & \iff & \big( (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta \\
& & {} \vee \bigg( \Big( \big(\LTLsquareminus_{(0, b)} \varphi_1 \vee (\varphi_1 \mathbin{\mathcal{S}}_{(0, b)} \varphi_2) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta \Big) \wedge \varphi_1 \mathbin{\mathcal{S}} \varphi_2 \bigg) \\[0.5em]
\big( (\LTLsquareminus \varphi) \vee \chi \big) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta & \iff & \chi \mathbin{\mathfrak{U}}^c_{(a, b)} \theta \vee \Big( \big( (\LTLsquareminus_{(0, b)} \varphi \vee \chi) \mathbin{\mathfrak{U}}^c_{(a, b)} \theta \big) \wedge \LTLsquareminus \varphi \Big) \,.
\end{array}
\]
\end{prop}
\begin{lem}\label{lem:termination}
For any \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, we can use the rules above to obtain an
equivalent \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\hat{\varphi}$ in which no unbounded temporal
operator appears in the scope of a bounded temporal operator.
In particular, all occurrences of $\mathbin{\mathfrak{U}}_I^c, \mathbin{\mathfrak{S}}_I^c$ have $I$ bounded.
\end{lem}
\begin{proof}
Define the \emph{unbounding depth} $ud(\varphi)$ of an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula
$\varphi$ to be the modal depth of $\varphi$ counting only unbounded
operators. We demonstrate a rewriting process on $\varphi$ which terminates in an
equivalent formula $\hat{\varphi}$ such that any subformula $\hat{\psi}$ of $\hat{\varphi}$
with outermost operator bounded has $ud(\hat{\psi}) = 0$.
Assume that the input formula $\varphi$ is in normal form.
Let $k$ be the largest unbounding depth among all subformulas of $\varphi$
with bounded outermost operators.
We pick all minimal (w.r.t.\ subformula order) such subformulas $\psi$ with $ud(\psi) = k$.
By applying the rules in Section~\ref{sec:separation},
we can rewrite $\psi$ into $\psi'$ where all subformulas of $\psi'$
with bounded outermost operators have unbounding depth strictly less than $k$.
We then substitute these $\psi'$ back into $\varphi$
to obtain $\varphi'$.
We repeat this step until no subformula with a bounded outermost operator has unbounding depth $k$.
The rules that rewrite a formula into normal form are used whenever necessary on
relevant subformulas---this never affects their unbounding depths, and note that we never
introduce $\mathbin{\mathfrak{U}}_I^c$ or $\mathbin{\mathfrak{S}}_I^c$.
It is easy to see that we will eventually obtain such a formula $\varphi^*$.
Now rewrite $\varphi^*$ into normal form
and start over again. This is to be repeated until we reach $\hat{\varphi}$.
\end{proof}
\paragraph{\emph{Completing the separation}}
We now have an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\hat{\varphi}$ in which no unbounded temporal
operator appears in the scope of a bounded temporal operator.
If we regard each bounded subformula as a new monadic predicate, the formula $\hat{\varphi}$ can
be seen as an \textup{\textmd{\textsf{LTL}}}{} formula $\Phi$, on which Gabbay's separation theorem is applicable.
\begin{thmC}[{\cite[Theorem $3$]{Gabbay1980}}]
Every \emph{\textup{\textmd{\textsf{LTL}}}{}} formula is equivalent (over discrete complete models)
to a Boolean combination of
\begin{itemize}
\item atomic formulas
\item formulas of the form $\varphi_1 \mathbin{\mathcal{U}} \varphi_2$ such that $\varphi_1$ and $\varphi_2$ use only $\mathbin{\mathcal{U}}$
\item formulas of the form $\varphi_1 \mathbin{\mathcal{S}} \varphi_2$ such that $\varphi_1$ and $\varphi_2$ use only $\mathbin{\mathcal{S}}$.
\end{itemize}
\end{thmC}
\begin{lem}
Every \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula is equivalent to a Boolean combination of
\begin{itemize}
\item bounded \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas
\item formulas that use arbitrary $\mathbin{\mathcal{U}}_I$ but only bounded $\mathbin{\mathcal{S}}_I$, $\mathbin{\mathfrak{U}}_I^c$, $\mathbin{\mathfrak{S}}_I^c$
\item formulas that use arbitrary $\mathbin{\mathcal{S}}_I$ but only bounded $\mathbin{\mathcal{U}}_I$, $\mathbin{\mathfrak{U}}_I^c$, $\mathbin{\mathfrak{S}}_I^c$.
\end{itemize}
\end{lem}
\noindent
We now prove the main theorem of this subsection: each \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula
is equivalent to a \emph{syntactically separated} \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula.
\begin{thm}\label{thm:separation}
Every \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula can be written as a
Boolean combination of
\begin{itemize}
\item bounded \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas
\item formulas of the form $\mathbf{false} \mathbin{\mathfrak{U}}_{\geq M}^M \varphi$ where $M \in \mathbb{N}$
\item formulas of the form $\mathbf{false} \mathbin{\mathfrak{S}}_{\geq M}^M \varphi$ where $M \in \mathbb{N}$.
\end{itemize}
\end{thm}
\begin{proof}
Suppose that we have an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi$ with no unbounded $\mathbin{\mathcal{S}}$.
If $\varphi$ is bounded then we are done. Otherwise we can apply Lemma~\ref{lem:termination}
(note in particular that it does not introduce new unbounded $\mathbin{\mathcal{U}}$ operators)
and further assume that $\varphi = \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
Then, for any $M \in \mathbb{N}$, we can rewrite $\varphi$ into
\[
\varphi_1 \mathbin{\mathcal{U}}_{< M} \varphi_2 \vee \Bigg( \LTLsquare_{<M} \varphi_1 \wedge \bigg( \mathbf{false} \mathbin{\mathfrak{U}}_{\geq M}^M \Big( \varphi_2 \vee \big( \varphi_1 \wedge (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \big) \Big) \bigg) \Bigg) \,.
\]
It is clear that $\varphi_1$ and $\varphi_2$, and therefore $\varphi_1 \mathbin{\mathcal{U}}_{< M} \varphi_2$ and $\LTLsquare_{<M} \varphi_1$,
have strictly fewer unbounded $\mathbin{\mathcal{U}}$ operators than $\varphi$. By the induction hypothesis, $\varphi$
is equivalent to a syntactically separated \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula. The case of
formulas with no unbounded $\mathbin{\mathcal{U}}$ is symmetric.
\end{proof}
\subsection{Expressing bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas}
In this section, we describe how to express bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas with a
single free variable in \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}. The use of rational constants
is crucial here; by~\cite{Hirshfeld2007}, \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} cannot express all counting modalities (which can be written as bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas) if only integer constants are allowed.
As some of the techniques here are essentially the same as those of Section~\ref{subsec:translation}, we omit certain explanations.
Suppose that we are given such a formula $\vartheta(x)$.
As before, we assume that each quantifier in $\vartheta(x)$ uses a fresh new variable
and $\vartheta(x)$ contains only existential quantifiers.
We say that $\vartheta(x)$ is \emph{$N$-bounded} if each subformula $\exists x' \, \psi$ of $\vartheta(x)$
is of the form
\[
\exists x' \, \big( (x' > x \implies d(x, x') < N) \wedge (x' < x \implies d(x, x') \leq N) \wedge \ldots \big) \,.
\]
Namely, $\vartheta(x)$ only refers to the events in the half-open interval $\ropen{x - N, x + N}$.
Similarly, we say that $\vartheta(x)$ is a \emph{unit formula} if each subformula $\exists x' \, \psi$ of $\vartheta(x)$
is of the form
\[
\exists x' \, \big(x' \geq x \wedge d(x, x') < 1 \wedge \ldots \big) \,.
\]
In this case, $\vartheta(x)$ only refers to the events in $\ropen{x, x+1}$.
\paragraph{\emph{Stacking events around a point}}
Let $\rho$ be an infinite timed word over $\Sigma_\mathbf{P}$,
$\overline{\mathbf{P}} = \{P_i \mid P \in \mathbf{P}, -N \leq i < N\}$ and $\overline{\mathbf{Q}} = \{Q_i \mid -N \leq i < N \}$.
For each $t \in \rho$, we can construct a (finite) $\ropen{0, 1}$-timed word $\overline{\rho_t}$ over $\Sigma_{\overline{\mathbf{P}} \cup \overline{\mathbf{Q}}}$
that satisfies the following:
\begin{itemize}
\item For all $\overline{t} \in \ropen{0, 1}$ and $-N \leq i < N$, $P_i$ holds at $\overline{t} \in \overline{\rho_t}$ iff $P$ holds at $i + \overline{t} \in \rho$.
\item For all $\overline{t} \in \ropen{0, 1}$ and $-N \leq i < N$, $Q_i$ holds at $\overline{t} \in \overline{\rho_t}$ iff $i + \overline{t} \in \rho$.
\end{itemize}
\paragraph{\emph{Stacking $N$-bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas}}
Now let $\vartheta(x)$ be an $N$-bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formula.
Recursively replace every subformula $\exists x' \, \theta$ by
\[
\exists x' \, \Big( \big( Q_{-N}(x') \wedge \theta[x' + (-N)/x'] \big) \vee \ldots \vee \big( Q_{N-1}(x') \wedge \theta[x' + (N - 1)/x'] \big) \Big)
\]
where $\vartheta[e/x]$ denotes the formula obtained by substituting all free occurrences of $x$
in $\vartheta$ by $e$. We then carry out the following syntactic substitutions:
\begin{itemize}
\item For each inequality of the form $x_1 + k_1 < x_2 + k_2$, replace it with
\begin{itemize}
\item $x_1 < x_2$ if $k_1 = k_2$
\item $\mathbf{true}$ if $k_1 < k_2$
\item $\mathbf{false}$ if $k_1 > k_2$
\end{itemize}
\item For each distance formula, e.g., $d(x_1 + k_1, x_2 + k_2) < 2$, replace it with
\begin{itemize}
\item $\mathbf{true}$ if $|k_1 - k_2| \leq 1$
\item $x_2 < x_1$ if $k_2 - k_1 = 2$
\item $x_1 < x_2$ if $k_1 - k_2 = 2$
\item $\mathbf{false}$ if $|k_1 - k_2| > 2$
\end{itemize}
\item Replace terms of the form $P(x_1 + k)$ with $P_k(x_1)$.
\end{itemize}
Finally, recursively replace every subformula $\exists x' \, \theta$ by $\exists x' \, \big( x' \geq x \wedge d(x, x') < 1 \wedge \theta \big)$.
This gives a unit formula $\overline{\vartheta}(x)$ such that for each $t \in \rho$,
\[
\rho, t \models \vartheta(x) \iff \overline{\rho_t}, 0 \models \overline{\vartheta}(x) \,.
\]
\paragraph{\emph{Unstacking}}
For each $\overline{\rho_t}$, we add an event at time $1$ (at which no monadic predicate holds) and call the resulting
$\ropen{0, 1}$-timed word $\overline{\rho_t}'$. It is clear that
\[
\overline{\rho_t}, 0 \models \overline{\vartheta}(x) \iff \overline{\rho_t}', 0, 1 \models \overline{\vartheta}'(x, y)
\]
where $\overline{\vartheta}'(x, y)$ is a non-metric \textmd{\textsf{FO[$<$]}} formula
obtained by replacing all distance formulas of the form $d(x, x') < 1$ with $x' < y$ in $\overline{\vartheta}(x)$.
We now invoke a normal form lemma from~\cite{Gabbay1980} to rewrite $\overline{\vartheta}'(x, y)$ into
a disjunction of \emph{decomposition formulas}.
\begin{lemC}[\cite{Gabbay1980}]
Every \emph{\textmd{\textsf{FO[$<$]}}} formula $\theta(x, y)$ in which all quantifications are of the form
$\exists x' \, (x' \geq x \wedge x' < y \wedge \ldots )$
is equivalent to a disjunction of decomposition formulas, i.e.,~\emph{\textmd{\textsf{FO[$<$]}}} formulas of the form
\[
\arraycolsep=0.3ex
\begin{array}{ll}
x < y & {} \wedge \exists z_0 \ldots \exists z_n \, (x = z_0 < \cdots < z_n = y) \\
& {} \wedge \bigwedge \{ \Phi_i(z_i): 0 \leq i < n \} \\
& {} \wedge \bigwedge \{ \forall u \, \big(z_i < u < z_{i+1} \implies \Psi_i(u)\big): 0 \leq i < n \}
\end{array}
\]
where $\Phi_i$ and $\Psi_i$ are \emph{\textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{}} formulas.\footnote{This version of the lemma
follows from Lemma $4$ and Main Lemma in~\cite{Gabbay1980}.}
\end{lemC}
In fact, when the underlying order is discrete (as is the case here), we can further
postulate that $\Phi_i$ and $\Psi_i$ are simply Boolean combinations of atomic formulas~\cite{Dam1994}.
It follows that $\overline{\vartheta}(x)$ is equivalent to a disjunction of unit formulas $\overline{\delta}(x)$ of the form
\[
\arraycolsep=0.3ex
\begin{array}{ll}
\exists z_0 \ldots \exists z_{n-1} \, (x = z_0 < \cdots < z_{n-1}) & {} \wedge d(x, z_{n - 1}) < 1 \\
& {} \wedge \bigwedge \{ \Phi_i(z_i): 0 \leq i < n \} \\
& {} \wedge \bigwedge \{ \forall u \, \big(z_i < u < z_{i+1} \implies \Psi_i(u)\big): 0 \leq i < n - 1 \} \\
& {} \wedge \forall u \, \big(z_{n-1} < u \wedge d(x, u) < 1 \implies \Psi_{n - 1}(u) \big)
\end{array}
\]
where $\Phi_i$ and $\Psi_i$ are Boolean combinations of atomic formulas.
It remains to show that for each such unit formula $\overline{\delta}(x)$ and each $t \in \rho$, we can construct an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula
$\varphi$ such that
\[
\overline{\rho_t}, 0 \models \overline{\delta}({x}) \iff \rho, t \models \varphi \,.
\]
For later convenience, we prove a stronger claim, i.e.,~we can handle
\textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} formulas of the following form for any rational number $r$, $0 \leq r < 1$:
\[
\arraycolsep=0.3ex
\begin{array}{ll}
\exists z_0 \ldots \exists z_{n-1} \, (x = z_0 < \cdots < z_{n-1}) & {} \wedge d(x, z_1) > r \wedge d(x, z_{n - 1}) < 1 \\
& {} \wedge \bigwedge \{ \Phi_i(z_i): 1 \leq i < n \} \\
& {} \wedge \forall u \, \big(x < u \wedge u < z_1 \wedge d(x, u) > r \implies \Psi_{0}(u) \big) \\
& {} \wedge \bigwedge \{ \forall u \, \big(z_i < u < z_{i+1} \implies \Psi_i(u)\big): 1 \leq i < n - 1 \} \\
& {} \wedge \forall u \, \big(z_{n-1} < u \wedge d(x, u) < 1 \implies \Psi_{n - 1}(u) \big) \,.
\end{array}
\]
The proof is by induction on the number of existential quantifiers in $\overline{\delta}(x)$.
Before we proceed with the proof, we define a function $f$ that maps
a Boolean combination $\Omega$ of atomic formulas over $\overline{\mathbf{P}} \cup \overline{\mathbf{Q}}$
and an integer $i$, $-N \leq i < N$, to an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $f(\Omega, i)$ over $\mathbf{P}$ (a short illustrative sketch follows the definition):
\begin{itemize}
\item $f(P_j, i) = \begin{cases}
\LTLdiamondminus_{= (i - j)} P & \text{if } i > j\\
P & \text{if } i = j \\
\LTLdiamond_{= (j - i)} P & \text{if } i < j \\
\end{cases}$
\item $f(Q_j, i) = \begin{cases}
\LTLdiamondminus_{= (i - j)} \mathbf{true} & \text{if } i > j\\
\mathbf{true} & \text{if } i = j \\
\LTLdiamond_{= (j - i)} \mathbf{true} & \text{if } i < j \\
\end{cases}$
\item $f(\mathbf{true}, i) = \mathbf{true}$
\item $f(\Omega_1 \wedge \Omega_2, i) = f(\Omega_1, i) \wedge f(\Omega_2, i)$
\item $f(\neg \Omega, i) = \neg f(\Omega, i)$.
\end{itemize}
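The function $f$ is a straightforward structural recursion; the following Python sketch spells it out. The tuple encoding of stacked atoms and the constructor names (\texttt{oncedist} for $\LTLdiamondminus_{= k}$ and \texttt{evdist} for $\LTLdiamond_{= k}$) are hypothetical and serve only as an illustration of the clauses above.
\begin{verbatim}
def f(omega, i):
    # Map a Boolean combination of stacked atoms, evaluated at integer
    # offset i, back to an MTL formula over P (encoded as nested tuples).
    kind = omega[0]
    if kind == "P":                     # stacked predicate P_j
        _, P, j = omega
        if i > j:
            return ("oncedist", i - j, P)   # "exactly i-j ago, P held"
        if i == j:
            return P
        return ("evdist", j - i, P)         # "in exactly j-i, P holds"
    if kind == "Q":                     # stacked event marker Q_j
        _, j = omega
        if i > j:
            return ("oncedist", i - j, True)
        if i == j:
            return True
        return ("evdist", j - i, True)
    if kind == "true":
        return True
    if kind == "and":
        return ("and", f(omega[1], i), f(omega[2], i))
    if kind == "not":
        return ("not", f(omega[1], i))
    raise ValueError(omega)
\end{verbatim}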
\noindent
Now first consider the base step. We have
\[
\overline{\delta}(x) = \forall u \, \big(x < u \wedge d(x, u) > r \wedge d(x, u) < 1 \implies \Psi(u) \big)
\]
where $\Psi$ is a Boolean combination of atomic formulas. It is clear that we can take
\[
\varphi = \bigwedge_{0 \leq i < N} \big(\LTLsquare_{(i + r, i + 1)} f(\Psi, i)\big) \wedge \bigwedge_{-N \leq i < 0} \big(\LTLsquareminus_{(|i + 1|, |i + r|)} f(\Psi, i)\big) \,.
\]
For the induction step we need to consider how $z_1$, \ldots, $z_{n-1}$ are scattered in $(r, 1)$.
Let us split $(r, 1)$ into an open interval $(r, r + \frac{1 - r}{2n})$ and $2n - 1$
half-open intervals $\ropen{r + \frac{1 - r}{2n}, r + \frac{2(1 - r)}{2n}}$, $\ropen{r + \frac{2(1-r)}{2n}, r + \frac{3(1-r)}{2n}}$, \ldots, $\ropen{r + \frac{(2n-1)(1-r)}{2n}, 1}$.
Consider the following cases:
\begin{enumerate}[label=(\roman*).]
\item $\{ z_1, \ldots, z_{n - 1} \} \subseteq (r, r + \frac{1 - r}{2n})$ or $\{ z_1, \ldots, z_{n - 1} \} \subseteq \ropen{r + \frac{k(1 - r)}{2n}, r + \frac{(k + 1)(1 - r)}{2n}}$ for some $k$, $1 \leq k < n$.
\item $\{ z_1, \ldots, z_{n - 1} \} \subseteq \ropen{r + \frac{k(1 - r)}{2n}, r + \frac{(k + 1)(1 - r)}{2n}}$ for some $k$, $n \leq k < 2n$.
\item There exists $k$, $1 \leq k < 2n$ and $l$, $1 \leq l < n - 1$ such that $z_l < r + \frac{k(1-r)}{2n} \leq z_{l + 1}$ (i.e.,~$z_1$, \ldots, $z_{n-1}$ are not in a single interval).
\end{enumerate}
We detail the construction of a formula $\psi$ in each case; the desired formula $\varphi$ is the disjunction of these $\psi$.
The correctness proofs are omitted as they are similar to the proof of Proposition~\ref{prop:translation}.
\begin{itemize}
\item Case (i):
Consider the subcase $z_1 > r + \frac{k(1-r)}{2n}$. Let
\[
\overrightarrow{\varphi}^i_{n-1} = \bigwedge_{0 \leq j < N - i} \big( \LTLsquare_{(j, j + \frac{1-r}{2n})} f(\Psi_{n-1}, i + j) \big) \wedge \bigwedge_{-N - i \leq j < 0} \big( \LTLsquareminus_{(|j + \frac{1-r}{2n}|, |j|)} f(\Psi_{n-1}, i + j) \big)
\]
for all $i$, $-N \leq i < N$ and recursively define
\[
\overrightarrow{\varphi}^i_{m} = \bigvee_{-N - i \leq j < N - i} \bigg( \bigwedge_{-N - i \leq h < N - i} \Big( \big( f(\Psi_m, i + h) \big) \mathbin{\mathfrak{U}}_{(j, j + \frac{1-r}{2n})}^h \big( f(\Phi_{m+1}, i + j) \wedge \overrightarrow{\varphi}^{i+j}_{m+1} \big) \Big) \bigg)
\]
for all $i$, $-N \leq i < N$ and $m$, $1 \leq m < n - 1$. Let $\alpha_k$ be the conjunction of
\[
\bigwedge_{0 \leq i < N} \big( \LTLsquare_{\lopen{i + r, i + r + \frac{k(1-r)}{2n}}} f(\Psi_0, i) \big) \wedge \bigwedge_{-N \leq i < 0} \big( \LTLsquareminus_{\ropen{|i + r + \frac{k(1-r)}{2n}|, |i + r|}} f(\Psi_0, i) \big)
\]
and
\[
\bigvee_{-N \leq j < N} \bigg( \bigwedge_{-N \leq h < N} \Big( \big( f(\Psi_0, h) \big) \mathbin{\mathfrak{U}}_{(j + r + \frac{k(1-r)}{2n}, j + r + \frac{(k+1)(1-r)}{2n})}^{h + r + \frac{k(1-r)}{2n}} \big( f(\Phi_{1}, j) \wedge \overrightarrow{\varphi}^{j}_{1} \big) \Big) \bigg)
\]
and
\[
\bigwedge_{0 \leq i < N} \big( \LTLsquare_{\ropen{i + r + \frac{(k + 1)(1-r)}{2n}, i + 1}} f(\Psi_{n-1}, i) \big) \wedge \bigwedge_{-N \leq i < 0} \big( \LTLsquareminus_{\lopen{|i + 1|, |i + r + \frac{(k + 1)(1-r)}{2n}|}} f(\Psi_{n - 1}, i) \big) \,.
\]
Similarly, we construct $\alpha_k'$ to handle the subcase $z_1 = r + \frac{k(1-r)}{2n}$.
The formula $\psi$ is the disjunction of formulas $\{ \alpha_k \mid 0 \leq k < n \}$ and $\{ \alpha_k' \mid 0 < k < n \}$.
\item Case (ii):
Let
\[
\overleftarrow{\varphi}^i_{1} = \bigwedge_{0 < j < N - i} \big( \LTLsquare_{(j - \frac{1-r}{2n}, j)} f(\Psi_{0}, i + j) \big) \wedge \bigwedge_{-N - i \leq j \leq 0} \big( \LTLsquareminus_{(|j|, |j - \frac{1-r}{2n}|)} f(\Psi_{0}, i + j) \big)
\]
for all $i$, $-N \leq i < N$ and recursively define
\[
\overleftarrow{\varphi}^i_{m} = \bigvee_{-N - i \leq j < N - i} \bigg( \bigwedge_{-N - i \leq h < N - i} \Big( \big( f(\Psi_{m-1}, i + h) \big) \mathbin{\mathfrak{S}}_{(j, j + \frac{1-r}{2n})}^h \big( f(\Phi_{m-1}, i + j) \wedge \overleftarrow{\varphi}^{i+j}_{m-1} \big) \Big) \bigg)
\]
for all $i$, $-N \leq i < N$ and $m$, $1 < m \leq n - 1$. Let $\beta_k$ be the conjunction of
\[
\bigwedge_{0 \leq i < N} \big( \LTLsquare_{\ropen{i + r + \frac{(k + 1)(1-r)}{2n}, i + 1}} f(\Psi_{n-1}, i) \big) \wedge \bigwedge_{-N \leq i < 0} \big( \LTLsquareminus_{\lopen{|i + 1|, |i + r + \frac{(k+1)(1-r)}{2n}|}} f(\Psi_{n-1}, i) \big)
\]
and
\[
\bigvee_{-N \leq j < N} \bigg( \bigwedge_{-N \leq h < N} \Big( \big( f(\Psi_{n - 1}, h) \big) \mathbin{\mathfrak{S}}_{\lopen{-(j + r + \frac{(k + 1)(1-r)}{2n}), -(j + r + \frac{k(1-r)}{2n})}}^{-(h + r + \frac{(k + 1)(1-r)}{2n})} \big( f(\Phi_{n - 1}, j) \wedge \overleftarrow{\varphi}^{j}_{n - 1} \big) \Big) \bigg)
\]
and
\[
\bigwedge_{0 \leq i < N} \big( \LTLsquare_{(i + r, i + r + \frac{k(1-r)}{2n})} f(\Psi_{0}, i) \big) \wedge \bigwedge_{-N \leq i < 0} \big( \LTLsquareminus_{(|i + r + \frac{k(1-r)}{2n}|, |i + r|)} f(\Psi_{0}, i) \big) \,.
\]
The formula $\psi$ is the disjunction of $\beta_k$, $n \leq k < 2n$.
\item Case (iii):
Suppose that $z_l < r + \frac{k(1-r)}{2n} \leq z_{l+1}$ for some $k$, $1 \leq k < 2n$
and $l$, $1 \leq l < n - 1$. Consider the following subcases:
\begin{itemize}
\item $r + \frac{k(1-r)}{2n} < z_{l+1}$:
This can be handled by the conjunction of the formulas below:
\begin{itemize}
\item $\{z_1, \ldots, z_l\} \subseteq (r, r + \frac{k(1-r)}{2n})$: We can scale the corresponding \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} formula by
$\frac{1}{r + \frac{k(1-r)}{2n}}$, apply the induction hypothesis (with $r' = \frac{r}{r + \frac{k(1-r)}{2n}}$)
and scale the resulting \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula by $r + \frac{k(1-r)}{2n}$.
\item $\{z_{l+1}, \ldots, z_{n-1}\} \subseteq (r + \frac{k(1-r)}{2n}, 1)$: We can set $r' = r + \frac{k(1-r)}{2n}$ and apply the induction hypothesis.
\end{itemize}
\item $r + \frac{k(1-r)}{2n} = z_{l+1}$: The construction is similar, except that we also use the following formula as a conjunct:
\[
\bigvee_{0 \leq i < N} \big( \LTLdiamond_{= i + r + \frac{k(1-r)}{2n}} f(\Phi_{l + 1}, i) \big) \vee \bigvee_{-N \leq i < 0} \big( \LTLdiamondminus_{= |i + r + \frac{k(1-r)}{2n}|} f(\Phi_{l+1}, i) \big) \,.
\]
\end{itemize}
The formula $\psi$ is the disjunction of these formulas for all $k$, $1 \leq k < 2n$ and $l$, $1 \leq l < n - 1$.
\end{itemize}
Finally, observe that the original claim can be achieved by setting $r = 0$ and using the conjunct $f(\Phi_0, 0)$.
We can now state the following theorem.
\begin{thm}\label{thm:expcompforbounded}
For every $N$-bounded \emph{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} formula $\vartheta(x)$, there exists an equivalent \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$ (with rational constants).
\end{thm}
\subsection{Expressive completeness of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}}\label{sec:unboundedexpcomp}
In this section, we show that any \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} formula with one free variable
can be expressed as an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula. The crucial idea here is that
we can separate formulas `far enough' that all references to a certain
variable become vacuous.
To this end, we define $\mathit{fr}(\varphi)$ and $\mathit{pr}(\varphi)$
(\emph{future-reach} and \emph{past-reach}) for an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi$ as follows:
\begin{itemize}
\item $\mathit{fr}(\mathbf{true}) = \mathit{pr}(\mathbf{true}) = \mathit{fr}(P) = \mathit{pr}(P) = 0$ for all $P \in \mathbf{P}$
\item $\mathit{fr}(\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2) = \sup(I) + \max\big(\mathit{fr}(\varphi_1), \mathit{fr}(\varphi_2)\big)$
\item $\mathit{pr}(\varphi_1 \mathbin{\mathcal{S}}_I \varphi_2) = \sup(I) + \max\big(\mathit{pr}(\varphi_1), \mathit{pr}(\varphi_2)\big)$
\item $\mathit{fr}(\varphi_1 \mathbin{\mathcal{S}}_I \varphi_2) = \max\big(\mathit{fr}(\varphi_1), \mathit{fr}(\varphi_2) - \inf(I)\big)$
\item $\mathit{pr}(\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2) = \max\big(\mathit{pr}(\varphi_1), \mathit{pr}(\varphi_2) - \inf(I)\big)$
\item $\mathit{fr}(\varphi_1 \mathbin{\mathfrak{U}}^c_I \varphi_2) = \max\big( c + |I| + \mathit{fr}(\varphi_1), \sup(I) + \mathit{fr}(\varphi_2) \big)$
\item $\mathit{pr}(\varphi_1 \mathbin{\mathfrak{S}}^c_I \varphi_2) = \max\big( c + |I| + \mathit{pr}(\varphi_1), \sup(I) + \mathit{pr}(\varphi_2) \big)$
\item $\mathit{fr}(\varphi_1 \mathbin{\mathfrak{S}}^c_I \varphi_2) = \max\big( \mathit{fr}(\varphi_1) - c, \mathit{fr}(\varphi_2) - \inf(I) \big)$
\item $\mathit{pr}(\varphi_1 \mathbin{\mathfrak{U}}^c_I \varphi_2) = \max\big( \mathit{pr}(\varphi_1) - c, \mathit{pr}(\varphi_2) - \inf(I) \big)$.
\end{itemize}
The cases for Boolean connectives are defined in the expected way.
Intuitively, these are meant as over-approximations of the lengths of the time horizons needed to
determine the truth value of $\varphi$.
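These reach functions can be computed by a direct structural recursion. The Python sketch below mirrors the clauses above for $\mathit{fr}$ ($\mathit{pr}$ being symmetric); the tuple encoding of formulas is hypothetical, Boolean connectives simply take the maximum over their subformulas, and $|I|$ is taken to be $\sup(I) - \inf(I)$.
\begin{verbatim}
def fr(phi):
    # Future-reach of a formula encoded as (op, (lo, hi), c, p1, p2), where
    # op is "U" or "S", (lo, hi) are inf(I) and sup(I), and c is None for
    # the plain operators and a rational for the superscripted variants.
    if isinstance(phi, str) or phi is True:    # atomic proposition or true
        return 0
    op, (lo, hi), c, p1, p2 = phi
    if op == "U":
        if c is None:                          # p1 U_I p2
            return hi + max(fr(p1), fr(p2))
        return max(c + (hi - lo) + fr(p1),     # p1 U^c_I p2
                   hi + fr(p2))
    if op == "S":
        if c is None:                          # p1 S_I p2
            return max(fr(p1), fr(p2) - lo)
        return max(fr(p1) - c, fr(p2) - lo)    # p1 S^c_I p2
    raise ValueError(op)
\end{verbatim}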
\begin{thm}
For every \emph{\textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{}} formula $\vartheta(x)$, there exists an equivalent \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$ (with rational constants).
\end{thm}
\begin{proof}
The proof is by induction on the quantifier depth of $\vartheta(x)$.
In what follows, let the set of monadic predicates be $\mathbf{P}$.
As before, we assume that each quantifier in $\vartheta(x)$ uses a fresh new variable.
\begin{itemize}
\item \emph{Base step.} $\vartheta(x)$ is a Boolean combination of atomic formulas $P(x)$, $x < x$, $d(x, x) \sim c$, $\mathbf{true}$.
We can replace them by $P$, $\mathbf{false}$, $0 \sim c$ and $\mathbf{true}$ respectively to obtain $\varphi$.
\item \emph{Induction step.} Without loss of generality assume that $\vartheta(x) = \exists y \, \theta(x, y)$.
Our goal here is to remove $x$ from $\theta(x, y)$. For this purpose, we can remove $x < x$ and $d(x, x) \sim c$
as before and use a mapping $\gamma: \mathbf{P} \to \{ \mathbf{true}, \mathbf{false} \}$ to determine
the truth value of $P(x)$ for each $P \in \mathbf{P}$. Thus we can rewrite $\exists y \, \theta(x, y)$ as
\begin{equation}\label{eqn:gamma}
\bigvee_{\gamma} \big( \eta_\gamma(x) \wedge \exists y \, \theta_\gamma(x, y) \big)
\end{equation}
where
\[
\eta_\gamma = \bigwedge_{P \in \mathbf{P}} \big(P(x) \iff \gamma(P)\big)
\]
and $\theta_\gamma(x, y)$ is obtained by replacing $P(x)$ with $\gamma(P)$ for all $P \in \mathbf{P}$ in $\theta(x, y)$.
Observe that in each $\theta_\gamma(x, y)$, $x$ only appears in atomic formulas of the form $x < z$, $z < x$, $d(x, z) \sim c$
and $d(z, x) \sim c$ where $\sim \; \in \{<, >\}$. We now introduce new monadic predicates that correspond to these atomic formulas: $P_{<}$, $P_{>}$, and, for each $d(x, z) \sim c$ or $d(z, x) \sim c$, a predicate $P_{\sim c}$.
Namely, we write $x < z$ as $P_{<}(z)$, $z < x$ as $P_{>}(z)$, and
$d(x, z) \sim c$ or $d(z, x) \sim c$ as $P_{\sim c}(z)$. Applying these substitutions to (\ref{eqn:gamma})
yields
\begin{equation}\label{eqn:nox}
\bigvee_{\gamma} \big( \eta_\gamma(x) \wedge \exists y \, \theta_\gamma'(y) \big)
\end{equation}
where $x$ does not occur in each $\theta_\gamma'(y)$. In particular, (\ref{eqn:gamma}) and (\ref{eqn:nox})
have the same truth value at any given point if $P_{<}$, $P_{>}$ and all $P_{\sim c}$ are interpreted correctly
with respect to that point.
Each $\eta_\gamma(x)$ is clearly equivalent to an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\psi_\gamma$.
By the induction hypothesis, each $\theta_\gamma'(y)$ is also equivalent to an \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi_\gamma$.
It follows that (\ref{eqn:nox}) is equivalent to the following \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula:
\[
\varphi' = \bigvee_{\gamma} \big( \psi_\gamma \wedge (\LTLdiamondminus \varphi_\gamma \vee \varphi_\gamma \vee \LTLdiamond \varphi_\gamma) \big) \,.
\]
By Theorem~\ref{thm:separation} and noting that $M \in \mathbb{N}$
can be chosen arbitrarily, $\varphi'$ is equivalent to a Boolean combination $\varphi''$ of
\begin{itemize}
\item bounded formulas
\item formulas of the form $\mathbf{false} \mathbin{\mathfrak{U}}_{\geq M}^M \psi$ such that $M > c_\maxit + \mathit{pr}(\psi)$
\item formulas of the form $\mathbf{false} \mathbin{\mathfrak{S}}_{\geq M}^M \psi$ such that $M > c_\maxit + \mathit{fr}(\psi)$.
\end{itemize}
Here, for each such formula $\psi$, $c_\maxit$ denotes the largest constant appearing in monadic predicates of the form
$P_{\sim c}$ in $\psi$.
Now suppose that $\varphi''$ is evaluated at $t_1$.
For the formulas of the second type, since all references to $P_{<}$, $P_{>}$ and all $P_{\sim c}$
must happen at time strictly greater than $t_1 + c_\maxit$, we can simply replace them with
$\mathbf{true}$, $\mathbf{false}$ and $c_\maxit + 1 \sim c$ to obtain equivalent \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}
formulas over $\mathbf{P}$. The formulas of the third type
can be dealt with similarly. Finally, for the formulas of the first type, we replace
$P_{<}$, $P_{>}$ and all $P_{\sim c}$ with $x < z$, $z < x$ and $d(x, z) \sim c$.
The resulting formulas are clearly bounded \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{} formulas. We can scale them to bounded \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} formulas,
apply Theorem~\ref{thm:expcompforbounded} and then scale back
to obtain equivalent \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas over $\mathbf{P}$. \qedhere
\end{itemize}
\end{proof}
\noindent
The main result of this chapter now follows from a simple scaling argument.
\begin{thm}\label{thm:unboundedexpcomp}
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} with rational constants is expressively complete for \emph{\textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{}} over infinite timed words.
\end{thm}
\section{Monitoring of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} properties}
While the expressive completeness result in the last section
may be interesting from a theoretical point of view,
it is unclear how it can benefit
practical verification tasks
as the model-checking problem for \textup{\textmd{$\textsf{MTL}_\textsf{fut}$}}{} is
already undecidable~\cite{Ouaknine2006a}.
Nonetheless, we show that those results can be very useful in
\emph{monitoring}, a core element of \emph{runtime verification}.
We first define some basic notions used throughout
this section.
Then we give a bi-linear offline trace-checking algorithm for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{},
which is later modified to work in an online fashion (under a
bounded-variability assumption) and used as the basis of a
monitoring procedure for an `easy-to-monitor' fragment of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}.\footnote{In this section
we assume that all timestamps are rational, can be finitely represented (e.g., with a built-in data type), and additions and subtractions on them can be done in constant time.}
The main advantage of the proposed procedure is that it is \emph{trace-length independent}, i.e.,~the space usage
is independent of the length of the (growing) trace.
Finally, we show that our approach extends to arbitrary \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas
via the syntactic rewriting rules in Section~\ref{sec:separation}.
\subsection{Satisfaction over finite prefixes}
\paragraph{\emph{Truncated semantics}}
As one is naturally concerned with truncated traces in monitoring,
it is useful to define satisfaction relations of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas
over finite timed words. To this end, we adopt a timed version of the
\emph{truncated semantics}~\cite{Eisner2003} which offers
\emph{strong} and \emph{weak} views on satisfaction over finite timed words.
Intuitively, these views indicate whether the satisfaction of
the formula on the whole (infinite) trace is `clearly' confirmed/refuted
by the finite prefix read so far.
In the strong view, one is \emph{pessimistic} on satisfaction---for example,
$\LTLsquare P$ can never be strongly satisfied by any finite timed word, as any such finite timed word
can be extended into an infinite timed word that violates the formula.
Conversely, in the weak view one is \emph{optimistic} on satisfaction---for example, $\LTLdiamond_{< 5} P$ is weakly satisfied by any finite
timed word whose timestamps are all strictly less than $5$, since
one can always extend such into an infinite timed word that satisfies the formula.
We also consider the \emph{neutral} view, which extends the
traditional \textup{\textmd{\textsf{LTL}}}{} semantics over finite words~\cite{Manna1995}
to \textup{\textmd{\textsf{MTL}}}{}. In what follows, we denote the strong, neutral and weak
satisfaction relations by $\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}}$, $\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}}$ and $\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}}$ respectively.
We write $\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi$ ($\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi$, $\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$)
if $\rho, 0 \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi$ ($\rho, 0 \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi$, $\rho, 0 \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$).
\begin{defi}
The satisfaction relation $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi$ for an
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, a finite timed word $\rho =
(\sigma, \tau)$ and a position $i$, $0 \leq i < |\rho|$ is defined
as follows:
\begin{itemize}
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} P$ iff $P \in \sigma_i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \mathbf{true}$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \wedge \varphi_2$ iff $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1$ and $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_2$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \neg \varphi$ iff $\rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ iff there
exists $j$, $i < j < |\rho|$ such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}}
\varphi_2$, $\tau_j - \tau_i \in I$, and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}}
\varphi_1$ for all $k$ with $i < k < j$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ iff there exists $j$, $0 \leq j < i$ such
that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_2$, $\tau_i - \tau_j \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1$ for all $k$ with $j < k < i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathfrak{U}}^c_I \varphi_2$ iff there exists $j$, $i < j < |\rho|$
such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_2$, $\tau_j - \tau_i \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1$ for all $k$, $i < k < j$ such that $\tau_{k} - \tau_i > c$ and
$\tau_j - \tau_{k} > a - c$ where $a = \inf(I)$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathfrak{S}}^c_I \varphi_2$ iff there exists $j$, $0 \leq j < i$
such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_2$, $\tau_i - \tau_j \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1$ for all $k$, $j < k < i$ such that $\tau_i - \tau_{k} > c$
and $\tau_{k} - \tau_{j} > a - c$ where $a = \inf(I)$.
\end{itemize}
\end{defi}
\begin{defi}
The satisfaction relation $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi$ for an
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, a finite timed word $\rho =
(\sigma, \tau)$ and a position $i$, $0 \leq i < |\rho|$ is defined
as follows:
\begin{itemize}
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} P$ iff $P \in \sigma_i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \mathbf{true}$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \wedge \varphi_2$ iff $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1$ and $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_2$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \neg \varphi$ iff $\rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ iff there
exists $j$, $i < j < |\rho|$ such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}}
\varphi_2$, $\tau_j - \tau_i \in I$, and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}}
\varphi_1$ for all $k$ with $i < k < j$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ iff there exists $j$, $0 \leq j < i$ such
that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_2$, $\tau_i - \tau_j \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1$ for all $k$ with $j < k < i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathfrak{U}}^c_I \varphi_2$ iff there exists $j$, $i < j < |\rho|$
such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_2$, $\tau_j - \tau_i \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1$ for all $k$, $i < k < j$ such that $\tau_{k} - \tau_i > c$ and
$\tau_j - \tau_{k} > a - c$ where $a = \inf(I)$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathfrak{S}}^c_I \varphi_2$ iff there exists $j$, $0 \leq j < i$
such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_2$, $\tau_i - \tau_j \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1$ for all $k$, $j < k < i$ such that $\tau_i - \tau_{k} > c$
and $\tau_{k} - \tau_{j} > a - c$ where $a = \inf(I)$.
\end{itemize}
\end{defi}
\begin{defi}
The satisfaction relation $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$ for an
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, a finite timed word $\rho =
(\sigma, \tau)$ and a position $i$, $0 \leq i < |\rho|$ is defined
as follows:
\begin{itemize}
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} P$ iff $P \in \sigma_i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \mathbf{true}$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \wedge \varphi_2$ iff $\rho, i
\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ and $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_2$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg \varphi$ iff $\rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ iff either of
the following holds:
\begin{itemize}
\item there exists $j$, $i < j < |\rho|$ such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_2$, $\tau_j -
\tau_i \in I$, and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$ with $i < k < j$
\item $\tau_{|\rho| - 1} - \tau_i < \sup(I)$ and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$ with $i < k < |\rho|$
\end{itemize}
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ iff there exists
$j$, $0 \leq j < i$ such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_2$, $\tau_i
- \tau_j \in I$, and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$ with $j < k < i$
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathfrak{U}}^c_I \varphi_2$ iff
either of the following holds:
\begin{itemize}
\item there exists $j$, $i < j < |\rho|$ such that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_2,
\tau_j - \tau_i \in I$, and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$, $i < k < j$
such that $\tau_{k} - \tau_i > c$ and $\tau_{j} - \tau_{k} > a - c$ where $a = \inf(I)$
\item $\tau_{|\rho| - 1} - \tau_i < \sup(I)$ and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$, $i < k < |\rho|$
such that $\tau_{k} - \tau_i > c$ and $\tau_{|\rho| - 1} - \tau_{k} \geq a - c$ where $a = \inf(I)$
\end{itemize}
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathfrak{S}}^c_I \varphi_2$ iff there exists $j$, $0 \leq j < i$ such that
$\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_2$, $\tau_i - \tau_j \in I$,
and $\rho, k \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ for all $k$, $j < k < i$
such that $\tau_i - \tau_{k} > c$ and $\tau_{k} - \tau_{j} > (a - c)$ where $a = \inf(I)$.
\end{itemize} \end{defi}
\begin{prop}\label{prop:stontow}
For a finite timed word $\rho$, a position $i$ in $\rho$ and an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula
$\varphi$,
\[
\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi \implies \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi \text{
and } \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi \implies \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}}
\varphi \,.
\]
\end{prop}
\paragraph{\emph{Informative prefixes}}
We say that $\rho$ is \emph{informative} for $\varphi$ if either
of the following holds:
\begin{itemize}
\item $\rho$ strongly satisfies $\varphi$, i.e.,~$\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi$. In this case we say that $\rho$ is an \emph{informative good prefix} for $\varphi$; or
\item $\rho$ fails to weakly satisfy $\varphi$, i.e.,~$\rho \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$. In this case we say that $\rho$ is an \emph{informative bad prefix} for $\varphi$.\footnote{Note that informative
good/bad prefixes are under-approximations of good/bad prefixes; see Section~\ref{sec:conclusion} for a discussion.}
\end{itemize}
The following proposition follows immediately from the definition of
informative prefixes. In words, negating (syntactically) a formula swaps its
set of informative good prefixes and informative bad prefixes.
\begin{prop}\label{prop:negxchggoodbad}
For an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula, a finite timed word $\rho$ is an informative good prefix for $\varphi$
if and only if $\rho$ is an informative bad prefix for $\neg \varphi$.
\end{prop}
\begin{exa}\label{ex:pathologicallysafe} Consider the
following formula over $\{P\}$: \[ \varphi = \LTLdiamond
\LTLsquare (\neg P) \wedge \LTLsquare(P \implies
\LTLdiamond_{< 3} P) \,. \] We say that the finite timed word $\rho =
(\{P\}, 0)(\{P\}, 2)(\emptyset, 5.5)$ is an informative bad
prefix for $\varphi$ as the second conjunct has been `clearly' violated,
i.e.,~there is a $P$-event with no $P$-event in the following three time units
($\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \neg \varphi$, or equivalently $\rho \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$).
On the other hand, while $\rho' = (\{P\}, 0)(\{P\},
2)(\{P\}, 4)$ is indeed a bad prefix for $\varphi$, it is not informative
as both the first and second conjuncts are not yet `clearly' violated
($\rho' \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \neg \varphi$, or equivalently $\rho' \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$).
\end{exa}
\begin{exa}\label{ex:intentionallysafe} Consider the
following formula over $\{P\}$: \[ \varphi' = \LTLsquare (\neg P)
\wedge
\LTLsquare(P \implies \LTLdiamond_{<3} P) \,. \]
This formula is equivalent to the formula $\varphi$ in the
previous example. However, all the bad prefixes $\rho$ for $\varphi'$ are
informative ($\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \neg \varphi$, or equivalently $\rho \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi$).
\end{exa}
\subsection{Offline trace-checking algorithm}
\emph{Trace checking} can be seen as a much more restricted case of
model checking where one is only concerned with a single finite trace.
Formally, the trace-checking problem for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} asks the following:
given a finite trace $\rho$ and an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, is $\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi$? An offline algorithm
for the problem is shown as
Algorithms~\ref{alg:pathcheckinguntil}~and~\ref{alg:pathcheckingguntil}.
For given $\rho$ and $\varphi$, the algorithm maintains a two-dimensional Boolean array \texttt{table} of $|\varphi|$ rows and $|\rho|$ columns.
Each row is used to store the truth values of a subformula at all positions.
The algorithm proceeds by filling up the array \texttt{table} in a bottom-up manner, starting from minimal subformulas.
We only detail the cases for $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ and $\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2$
as the other cases are either symmetric or trivial.
In what follows, we write $x \leq I$ for $x < \sup(I)$ if $I$ is right-open and for $x \leq \sup(I)$ otherwise.
To ease the presentation we omit the array-bounds checks, e.g., the algorithm
should stop when $\mathit{ptr}$ ($\mathit{ptr1}$) reaches $-1$.
\begin{algorithm}[ht]
\caption{$\textsc{FillTable}(\texttt{table}, \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2)$}%
\label{alg:pathcheckinguntil}
\begin{algorithmic}[1]
\State $\mathit{ptr} \gets |\rho| - 1$
\For{$j = |\rho| - 1$ to $0$}
\If{$\mathit{ptr} = j$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][\mathit{ptr}] \gets \bot$
\State $\mathit{ptr} \gets \mathit{ptr} - 1$
\EndIf
\If{$\texttt{table}[\varphi_2][j]$}
\If{$\mathit{ptr} = j - 1$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][\mathit{ptr}] \gets (\tau_j - \tau_{\mathit{ptr}} \in I)$
\State $\mathit{ptr} \gets \mathit{ptr} - 1$
\EndIf
\While{$\texttt{table}[\varphi_1][\mathit{ptr} + 1] \wedge \tau_j - \tau_{\mathit{ptr}} \leq I$}
\If{$\tau_j - \tau_{\mathit{ptr}} \in I$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][\mathit{ptr}] \gets \top$
\Else
\State $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][\mathit{ptr}] \gets \bot$
\EndIf
\State $\mathit{ptr} \gets \mathit{ptr} - 1$
\EndWhile
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{prop}
After executing $\textsc{FillTable}(\emph{\texttt{table}}, \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2)$, we have
\[
\emph{\texttt{table}}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][i] \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2
\]
for all $0 \leq i < |\rho|$ if $\emph{\texttt{table}}[\varphi_1]$ and $\emph{\texttt{table}}[\varphi_2]$
were both correct.
\end{prop}
\begin{proof}
Suppose that $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][i] = \top$.
Since each entry in $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2]$ is filled exactly once,
it must be filled at either line $8$ or line $12$. In the former case it is clear that
$\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$.
In the latter case we must have
$\mathit{ptr} \leq j - 2$. If $\mathit{ptr} = j - 2$ then we are done, so we assume $\mathit{ptr} < j - 2$.
If there is a maximal position $\mathit{ptr}'$, $\mathit{ptr} + 1 < \mathit{ptr}' < j$ such that
$\texttt{table}[\varphi_1][\mathit{ptr}'] = \bot$, we must have $\mathit{ptr} + 1 = \mathit{ptr}'$, which
is a contradiction. We therefore conclude that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$.
For the other direction, assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$ and let $j' > i$ be the witness
position, i.e.,~$\tau_{j'} - \tau_i \in I$, $\texttt{table}[\varphi_2][j'] = \top$ and $\texttt{table}[\varphi_1][j''] = \top$
for all $j''$, $i < j'' < j'$. Now consider $j = j'$. If $\mathit{ptr} \geq i$ then we are done.
So we assume $\mathit{ptr} < i$. If we already have $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][i] = \bot$,
then it must be the case that $\tau_{j'} - \tau_i \notin I$, which is a contradiction.
Therefore we must have $\texttt{table}[\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2][i] = \top$.
\end{proof}
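As a sanity check, the following Python sketch gives a direct (quadratic-time) reference computation of the same row from the rows for $\varphi_1$ and $\varphi_2$; Algorithm~\ref{alg:pathcheckinguntil} computes it in a single backwards pass. The representation of the interval by a membership test \texttt{in\_I} together with an upper bound \texttt{sup\_I} is a hypothetical convenience, not part of the formal development.
\begin{verbatim}
def fill_until_row(tau, row1, row2, in_I, sup_I):
    # tau: nondecreasing timestamps; row1, row2: truth values of phi1, phi2.
    # Returns the row of neutral truth values of phi1 U_I phi2.
    n = len(tau)
    out = [False] * n
    for i in range(n):
        for j in range(i + 1, n):
            if tau[j] - tau[i] > sup_I:
                break            # no later event can be a witness
            if row2[j] and in_I(tau[j] - tau[i]):
                out[i] = True    # witness at j; phi1 held strictly in between
                break
            if not row1[j]:
                break            # phi1 fails at j, so no later witness works
    return out

# Example: P1 U_(0,2) P2 on a three-event trace.
row = fill_until_row([0.0, 0.5, 1.2],
                     [True, True, False],    # row for P1
                     [False, False, True],   # row for P2
                     lambda d: 0 < d < 2, 2)
assert row == [True, True, False]
\end{verbatim}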
\begin{algorithm}[ht]
\caption{$\textsc{FillTable}(\texttt{table}, \varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2)$}%
\label{alg:pathcheckingguntil}
\begin{algorithmic}[1]
\State $\mathit{ptr1}$, $\mathit{ptr2} \gets |\rho| - 1$
\For{$j = |\rho| - 1$ to $0$}
\While{$\tau_j - \tau_{\mathit{ptr2}} \leq \inf(I) - c \vee \texttt{table}[\varphi_1][\mathit{ptr2}]$}
\State $\mathit{ptr2} \gets \mathit{ptr2} - 1$
\EndWhile
\If{$\mathit{ptr1} = j$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2][\mathit{ptr1}] \gets \bot$
\State $\mathit{ptr1} \gets \mathit{ptr1} - 1$
\EndIf
\If{$\texttt{table}[\varphi_2][j]$}
\If{$\mathit{ptr1} = j - 1$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2][\mathit{ptr1}] \gets (\tau_j - \tau_{\mathit{ptr1}} \in I)$
\State $\mathit{ptr1} \gets \mathit{ptr1} - 1$
\EndIf
\While{$\tau_j - \tau_{\mathit{ptr1}} \leq I \wedge \tau_{\mathit{ptr2}} - \tau_{\mathit{ptr1}} \leq c$}
\If{$\tau_j - \tau_{\mathit{ptr1}} \in I$}
\State $\texttt{table}[\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2][\mathit{ptr1}] \gets \top$
\Else
\State $\texttt{table}[\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2][\mathit{ptr1}] \gets \bot$
\EndIf
\State $\mathit{ptr1} \gets \mathit{ptr1} - 1$
\EndWhile
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{prop}
After executing $\textsc{FillTable}(\emph{\texttt{table}}, \varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2)$, we have
\[
\emph{\texttt{table}}[\varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2][i] \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \varphi_1 \mathbin{\mathfrak{U}}_I^c \varphi_2
\]
for all $0 \leq i < |\rho|$ if $\emph{\texttt{table}}[\varphi_1]$ and $\emph{\texttt{table}}[\varphi_2]$
were both correct.
\end{prop}
\begin{proof}
Observe that after line $5$, $\mathit{ptr2}$ is equal to the maximal position
such that $\tau_j - \tau_{\mathit{ptr2}} > \inf(I) - c$ and $\texttt{table}[\varphi_1][\mathit{ptr2}] = \bot$.
The proof is very similar to the case of $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$.
\end{proof}
\subsection{Monitoring procedure}\label{subsec:monitoring}
Conceptually, we can regard a monitor as a \emph{deterministic} automaton
over finite traces.
The monitoring process, then, can be carried out by simply moving a token as directed by
the prefix.
However, it is well-known that in a dense real-time setting, such a monitor
(say, which accepts all the bad prefixes for $\varphi$)
needs an unbounded number of clocks and therefore cannot be realised
in practice~\cite{Alur1992, Maler2005, Reynolds2014}.
For this reason, we shall from now on assume that all input traces have
variability at most $k_{\mathit{var}}$, i.e.,~there are
at most $k_{\mathit{var}}$ events in any (open) unit time interval.
Based on this assumption, we give a monitoring procedure
for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas of the form
\[
\hat{\varphi} = \Phi(\psi_1, \ldots, \psi_m)
\]
where $\psi_1$, $\ldots$, $\psi_m$ are \emph{bounded} \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas and $\Phi$ is an
\textup{\textmd{\textsf{LTL}}}{} formula.
The main idea is similar to the one used in the previous section:
we introduce new propositions $\mathbf{Q} = \{Q_1, \ldots, Q_m\}$
that correspond to $\psi_1$, $\ldots$, $\psi_m$.
In this way, we can monitor $\Phi$ as an \textup{\textmd{\textsf{LTL}}}{} property over $\mathbf{Q}$.\footnote{A similar idea is used in~\cite{Finkbeiner2009} to synthesise smaller monitor circuits for \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} formulas.}
Since these propositions correspond to bounded formulas,
their truth values can be obtained by running the trace-checking algorithm on subtraces:
as the input trace has variability at most $k_{\mathit{var}}$,
we only need to store a `sliding window' of a certain size.
\paragraph{\emph{The untimed \textup{\textmd{\textsf{LTL}}}{} part}}
We recall briefly the standard methodology to construct finite automata that accept
exactly the informative good/bad prefixes for a given \textup{\textmd{$\textsf{LTL}_\textsf{fut}$}}{} formula~\cite{Kupferman2001a}.
Given such a formula $\Psi$, first use a standard construction~\cite{Vardi1996} to
obtain a language-equivalent alternating B\"uchi automaton
$\mathcal{A}_\Psi$. Then redefine its accepting set to be
the empty set and treat it as an automaton over finite words; the
resulting automaton $\mathcal{A}_\Psi^\mathit{true}$ accepts exactly all
informative good prefixes for $\Psi$. In particular, one can determinise
$\mathcal{A}_\Psi^\mathit{true}$ with the usual subset
construction. The same can be done for $\neg \Psi$ to obtain a
deterministic automaton that accepts exactly the informative bad prefixes for $\Psi$.
In our case, we first translate the \textup{\textmd{\textsf{LTL}}}{} formulas $\Phi$ and $\neg
\Phi$ into a pair of \emph{two-way} alternating B\"uchi automata~\cite{Gastin2003}.
With the same modifications, we obtain two automata
that accept informative good prefixes and informative bad prefixes for $\Phi$.
We then apply existing procedures that translate two-way alternating automata over
finite words into deterministic automata (e.g.,~\cite{Birget1993}) and obtain
$\mathcal{D}_{\mathit{good}}$ and $\mathcal{D}_\mathit{bad}$, respectively.
To detect both types of prefixes simultaneously,
we will execute $\mathcal{D}_\mathit{good}$ and $\mathcal{D}_\mathit{bad}$ in parallel.
\begin{prop}
For an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\hat{\varphi}$ of the form described above, the automata $\mathcal{D}_\mathit{good}$
and $\mathcal{D}_\mathit{bad}$ are of size $2^{2^{O(|\Phi|)}}$ where $\Phi$ is
the `backbone' \emph{\textup{\textmd{\textsf{LTL}}}{}} formula.
\end{prop}
\paragraph{\emph{Na\"{\i}ve evaluation of the bounded metric parts}}
In what follows, let $l_{\mathit{fr}}(\psi) = k_{\mathit{var}} \cdot \lceil \mathit{fr}(\psi) \rceil$
and $l_{\mathit{pr}}(\psi) = k_{\mathit{var}} \cdot \lceil \mathit{pr}(\psi) \rceil$
(the functions $\mathit{fr}$ and $\mathit{pr}$ are defined in Section~\ref{sec:unboundedexpcomp}).
Suppose that we want to obtain the truth value of $\psi_i$ at
position $j$ in the input trace $\rho = (\sigma, \tau)$. Since $\psi_i$ is bounded, only the events occurring between $\tau_j -
\mathit{pr}(\psi_i)$ and $\tau_j + \mathit{fr}(\psi_i)$ can affect the truth value of
$\psi_i$ at $j$. This implies that $\rho, j \models \psi_i
\iff \rho', j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \psi_i$ where $\rho'$ is a
prefix of $\rho$ that contains all the events between $\tau_j -
\mathit{pr}(\psi_i)$ and $\tau_j + \mathit{fr}(\psi_i)$ in $\rho$. Since $\rho$ is of bounded
variability $k_{\mathit{var}}$, there can be at most $l_\mathit{pr}(\psi_i) + 1 + l_\mathit{fr}(\psi_i)$
events between $\tau_j - \mathit{pr}(\psi_i)$ and $\tau_j + \mathit{fr}(\psi_i)$. It
follows that we can simply `record' all events in this interval
with a two-dimensional array of $l_\mathit{pr}(\psi_i) + 1 + l_\mathit{fr}(\psi_i)$ columns and
$1 + |\psi_i|$ rows: the first row is used to store the timestamps whereas the other rows are used to store the truth values.
Intuitively, the two-dimensional array acts as a sliding window around position $j$ in $\rho$.
Now consider all the propositions in $\mathbf{Q}$: their truth values
at position $j$ can be evaluated using a two-dimensional array
of $l_\mathit{pr}^\mathbf{Q} + 1 + l_\mathit{fr}^\mathbf{Q}$ columns and $1 + |\psi_1| + \cdots + |\psi_m|$ rows where $l_\mathit{pr}^\mathbf{Q} =
\displaystyle{\max_{1 \leq i \leq m} {l_\mathit{pr}(\psi_i)}}$ and $l_\mathit{fr}^\mathbf{Q} = \displaystyle{\max_{1 \leq i \leq m}
{l_\mathit{fr}(\psi_i)}}$. Each row can be filled in
time $O(l_\mathit{pr}^\mathbf{Q} + 1 + l_\mathit{fr}^\mathbf{Q})$ with the trace-checking algorithm.
Overall, we need a two-dimensional array of size $O(k_{\mathit{var}} \cdot c_{\mathit{sum}} \cdot |\hat{\varphi}|)$ where $c_{\mathit{sum}}$ is the sum of the constants in
$\hat{\varphi}$; for each position $j$, we need time $O(k_{\mathit{var}} \cdot
c_{\mathit{sum}} \cdot |\hat{\varphi}|)$ to obtain the truth values of all
propositions in $\mathbf{Q}$, which are then used as input to $\mathcal{D}_\mathit{good}$ and $\mathcal{D}_\mathit{bad}$.
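A minimal Python sketch of this bookkeeping is given below; the class and its method are hypothetical helpers that only capture the sizing argument, namely that a buffer of $l_\mathit{pr}^\mathbf{Q} + 1 + l_\mathit{fr}^\mathbf{Q}$ columns around the current position suffices and that older columns may be discarded as the window slides.
\begin{verbatim}
import math
from collections import deque

def window_columns(k_var, pr_psi, fr_psi):
    # l_pr(psi) + 1 + l_fr(psi): the number of event columns that can affect
    # the truth value of the bounded subformula psi at a single position.
    return k_var * math.ceil(pr_psi) + 1 + k_var * math.ceil(fr_psi)

class SlidingWindow:
    # Bounded buffer of (timestamp, event) columns around the current
    # position; columns that are too old fall off the left end.
    def __init__(self, columns):
        self.buf = deque(maxlen=columns)
    def push(self, timestamp, event):
        self.buf.append((timestamp, event))

# With k_var = 4, pr(psi) = 3.5 and fr(psi) = 2 the window keeps
# 4 * 4 + 1 + 4 * 2 = 25 columns.
assert window_columns(4, 3.5, 2) == 25
\end{verbatim}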
\paragraph{\emph{Incremental evaluation of the bounded metric parts}}
While the procedure above uses only bounded space, it is clearly inefficient as for each $j$
we have to fill the whole two-dimensional array from scratch.
This is because some of the filled entries (other than those for position $j$)
may depend on the events outside of the sliding window, and thus can be incorrect.
We now describe an optimisation which enables the reuse of
previously filled entries.
We first deal with the simpler case of past subformulas.
Observe that as the trace-checking algorithm is filling a row for
$\varphi_1 \mathbin{\mathcal{S}}_I \varphi_2$ or $\varphi_1 \mathbin{\mathfrak{S}}_I^c \varphi_2$,
the variables $\mathit{ptr}$, $\mathit{ptr1}$ and $\mathit{ptr2}$ all increase \emph{monotonically}.
This implies that for past subformulas, the trace-checking algorithm
can be used in an online manner: simply suspend the algorithm
when we have filled all entries using the truth values of $\varphi_1$ and $\varphi_2$ (at
various positions) that are currently known, and resume the algorithm when the
truth values of $\varphi_1$ and $\varphi_2$ (at some other positions)
that were previously unknown become available.
The case of future subformulas is more involved.
Suppose that we want to evaluate the truth value of a subformula $P_1 \mathbin{\mathcal{U}}_{(a, b)} P_2$ at position $j$
in the input trace $\rho = (\sigma, \tau)$.
It is clear that the value may depend on future events
if $\tau_j + b$ is greater than the timestamp of the last acquired event.
However, observe that even when this is the case, we may still do the evaluation if any of the following holds:
\begin{itemize}
\item $P_1$ fails to hold at some position $j'$ such that
$\tau_{j'}$ is less than or equal to the timestamp of the last acquired event.
In this case, we know that all the truth values of $P_1 \mathbin{\mathcal{U}}_{(a, b)} P_2$ at positions $< j'$
cannot depend on the events at positions $> j'$.
\item $P_2$ holds at some position $j' > j$ and $P_1$ holds at all positions $j''$, $j < j'' < j'$.
In this case, the truth values of $P_1 \mathbin{\mathcal{U}}_{(a, b)} P_2$ at positions $k < j'$
where $\tau_{j'} - \tau_{k} \in (a, b)$ are $\top$ and
do not depend on the events at positions $> j'$.
\end{itemize}
\noindent
We now generalise this observation to the task of
updating the row for $\varphi_1 \mathbin{\mathcal{U}}_{(a, b)} \varphi_2$.
First of all, we maintain indices $j_{\varphi_1}$, $j_{\varphi_2}$, $j_{\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2}$
which point to the first unknown entries in the rows for
$\varphi_1$, $\varphi_2$ and $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$.
Let $t_{\maxit} = \min \{ \tau_{(j_{\varphi_1} - 1)}, \tau_{(j_{\varphi_2} - 1)} \}$ and update
its value when either $j_{\varphi_1}$ or $j_{\varphi_2}$ changes.
Whenever $t_{\maxit}$ is updated to a new value,
we also update the following indices:
\begin{itemize}
\item $j_1$ is the maximal position such that $\tau_{j_1} + b \leq t_\maxit$
\item $j_2$ is the maximal position such that $\tau_{j_2} \leq t_\maxit$ and $\varphi_2$ holds at $j_2$
\item $j_3$ is the maximal position such that $\tau_{j_3} + a < \tau_{j_2}$
\item $j_4$ is the maximal position such that $\tau_{j_4} \leq t_\maxit$ and $\varphi_1$ does not hold at $j_4$.
\end{itemize}
Now, after both the rows for $\varphi_1$ and $\varphi_2$ have been updated, if any of $j_1$, $j_3$, $j_4 - 1$ is greater than or equal to $j_{\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2}$,
we let $j_5 = \max\{j_1, j_3, j_4 - 1\}$ and start Algorithm~\ref{alg:pathcheckinguntil}
from line $3$ with $\mathit{ptr} = j_5$ and $j = j_2$. We run the algorithm
till all the entries at positions $\leq j_5$ in the row for $\varphi_1 \mathbin{\mathcal{U}}_I \varphi_2$
have been filled.
The crucial observation here is that $j_1$, $j_2$, $j_3$, $j_4$ all increase monotonically,
and therefore can be maintained in amortised linear time.
Also, the truth value of any subformula at any position will be filled only once.
The case of $\varphi_1 \mathbin{\mathfrak{U}}_{(a, b)}^c \varphi_2$ is similar (but slightly more involved).
These observations imply that each entry in the two-dimensional array can be filled in
amortised constant time.
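The amortised bound rests only on the fact that each of these indices moves forward through the trace and never backtracks. The generic advancing routine sketched below (Python, with a caller-supplied predicate; the usage comment is hypothetical) is the only operation needed to maintain $j_1$, $j_2$, $j_3$ and $j_4$, and over the whole run each index is incremented at most once per event.
\begin{verbatim}
def advance(j, pred, n):
    # Advance the monotone index j while the next position (< n) satisfies
    # pred; if pred holds on a downward-closed set of positions, j ends at
    # the largest position satisfying pred.  Since j never decreases, the
    # total work over the whole trace is linear in n.
    while j + 1 < n and pred(j + 1):
        j += 1
    return j

# E.g., j1 = advance(j1, lambda p: tau[p] + b <= t_max, len(tau)) keeps j1
# at the maximal position with tau[j1] + b <= t_max.
\end{verbatim}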
Assuming that moving a token on a deterministic finite automaton takes constant time,
we can state the following theorem.
\begin{thm}
For an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\hat{\varphi}$ of the form described earlier and an infinite
trace of variability $k_\mathit{var}$, our monitoring procedure
uses two DFAs of size $2^{2^{O(|\Phi|)}}$, a two-dimensional array of size $O(k_\mathit{var} \cdot c_\mathit{sum} \cdot
|\hat{\varphi}|)$ where $c_\mathit{sum}$ is the sum of the constants in
$\hat{\varphi}$, and amortised time $O(|\hat{\varphi}|)$ per event.
\end{thm}
\paragraph{\emph{Correctness}}
We now show that our procedure is sound and complete for detecting informative prefixes.
\begin{prop}\label{prop:boundedpartinformative} For a bounded
\emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\psi$, a finite trace $\rho = (\sigma, \tau)$ and a
position $0 \leq i < |\rho|$ such that $\tau_i + \mathit{fr}(\psi) \leq
\tau_{|\rho| - 1}$, we have \[ \rho, i
\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \psi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu \phantom{+}}} \psi \iff
\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \psi \,. \] \end{prop}
\begin{prop}\label{prop:lengthen}
For an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, a finite trace $\rho$ and a position $i$ in $\rho$,
if $\rho$ is a prefix of a longer finite trace $\rho'$, then
\[
\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi \implies \rho', i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi
\text{ and } \rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi \implies \rho', i
\centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi \,.
\]
\end{prop}
\begin{thm}[Soundness]\label{thm:soundness}
In our procedure, if we ever reach an accepting state of
$\mathcal{D}_\mathit{good}$ ($\mathcal{D}_\mathit{bad}$) via a finite word $u \in
\Sigma_\mathbf{Q}^*$, then we must eventually read an
informative good (bad) prefix for $\hat{\varphi}$. \end{thm}
\begin{proof} For such $u$ and a corresponding $\rho = (\sigma, \tau)$
such that $\tau_{|u| - 1} + l_\mathit{fr}^\mathbf{Q} \leq \tau_{|\rho| - 1}$, we have
\[ \forall i \in \ropen{0, |u|} \, \big( (u, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \Psi \implies \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \psi)
\wedge
(u, i \centernot
\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \Psi \implies \rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \psi) \big)
\] where $\Psi$ is a subformula of $\Phi$ and $\psi =
\Psi(\psi_1, \ldots, \psi_m)$. This can easily be proved by
structural induction and Proposition~\ref{prop:boundedpartinformative}. If $u$ is accepted by $\mathcal{D}_\mathit{good}$, we
have $u, 0 \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \Phi$ by construction. By the above we have $\rho, 0
\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \Phi(\psi_1, \ldots, \psi_m)$, as desired. The case of
$\mathcal{D}_\mathit{bad}$ is symmetric.
\end{proof}
\begin{thm}[Completeness]\label{thm:completeness}
Whenever we read an informative good (bad) prefix $\rho = (\sigma,
\tau)$ for $\hat{\varphi}$, $\mathcal{D}_\mathit{good}$ ($\mathcal{D}_\mathit{bad}$) must
eventually reach an accepting state. \end{thm}
\begin{proof}
For the finite word $u' \in \Sigma_\mathbf{Q}^*$ with $|u'| = |\rho|$, which is obtained once sufficiently many further events have been read,
\[
\forall i \in \ropen{0, |u'|} \, \big( (\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \psi
\implies u', i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \Psi) \wedge (\rho, i \centernot
\mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \psi \implies u', i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \Psi)
\big) \] where $\Psi$ is a subformula of $\Phi$ and $\psi =
\Psi(\psi_1, \ldots, \psi_m)$. This can be proved by
structural induction and Proposition~\ref{prop:lengthen}. The theorem follows.
\end{proof}
\subsection{Preservation of informative prefixes}
As we have seen earlier in Example~\ref{ex:pathologicallysafe} and~\ref{ex:intentionallysafe},
it is possible for two equivalent \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas to have different sets of informative good/bad prefixes.
In this section, we show that this cannot be the case when the two formulas
are related by one of the rewriting rules in Section~\ref{sec:separation}.
In other words, the rewriting rules in
Section~\ref{sec:separation} preserve the `informativeness' of
formulas.
\begin{lem}\label{lem:informativenesspreservation}
For an \emph{\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}} formula $\varphi$, let $\varphi'$ be the formula obtained from $\varphi$
by applying one of the rules in Section~\ref{sec:separation} to
one of its subformulas. We have \[
\rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi \iff \rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi'
\text{ and } \rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi \iff \rho \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}}
\varphi' \,. \]
\end{lem}
Given the lemma above, we can state the following theorem on any \textup{\textmd{\textsf{MTL}}}{} formula
$\varphi$ and the equivalent formula $\hat{\varphi}$ (of our desired form)
obtained from $\varphi$ by applying the rewriting rules in Section~\ref{sec:separation}.
\begin{thm}\label{thm:coincide}
The set of informative good prefixes of $\varphi$ coincides with the
set of informative good prefixes of $\hat{\varphi}$. The same holds for
informative bad prefixes. \end{thm}
We now have a way to detect the informative good/bad prefixes for an arbitrary
\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula $\varphi$: use the rewriting rules to obtain $\hat{\varphi}$, and apply
the monitoring procedure we described in the last subsection.
The monitor only needs a bounded amount of memory,
even for complicated \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formulas with arbitrary nestings of (bounded and unbounded) past and future operators.
\begin{proof}[Proof of Lemma~\ref{lem:informativenesspreservation}]
Since the satisfaction relations are defined inductively, we can work directly on the relevant subformulas.
We would like to prove that for a finite timed word $\rho$ and a position $i$ in $\rho$,
\[
\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi' \text{ and } \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi'
\]
where $\phi \iff \phi'$ matches one of the rules in Section~\ref{sec:separation}.
For a group of similar rules we only prove a representative one, as the proofs for the others are similar. In what follows, let the LHS be $\phi$ and the RHS be $\phi'$.
\begin{itemize}[itemsep=0.7em,topsep=0.7em]
\item $\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} \varphi_2 \iff \varphi_1 \mathbin{\mathcal{U}} \varphi_2 \wedge \LTLsquare_{\lopen{0, a}} (\varphi_1 \wedge \varphi_1 \mathbin{\mathcal{U}} \varphi_2)$:
\begin{itemize}[itemsep=0.5em,topsep=0.5em]
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi$. By definition we have $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
If there is no event in $\lopen{\tau_i, \tau_i + a}$, since there must be an event in $\lopen{\tau_i + a, \tau_{|\rho| - 1}}$, we are done.
If there are events in $\lopen{\tau_i, \tau_i + a}$, then for all $j$ such that $\tau_j - \tau_i \in \lopen{0, a}$ we have $\rho, j \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg \varphi_1$.
Also for all such $j$ we have $\rho, j \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg (\varphi_1 \mathbin{\mathcal{U}} \varphi_2)$ since it is obvious that $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
For the other direction, if the witness (for $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$) is in $(\tau_i + a, \tau_{|\rho| - 1})$ then we are done. If this is
not the case, since $\rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLdiamond_{\lopen{0, a}} \big( \neg \varphi_1 \vee \neg (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \big)$, we must have $\tau_{|\rho| - 1} \geq a$.
Now for all $j$ such that $\tau_j - \tau_i \in \lopen{0, a}$ we have $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1$ and $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$, which
imply $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi$.
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi$. This holds if there is a witness in $(a, \infty)$ or $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLsquare \varphi_1$.
In both cases we have $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$. If there is no event in $\lopen{\tau_i, \tau_i + a}$ then we are done.
If there is a witness, then for all such $j$ that $\tau_j - \tau_i \in \lopen{0, a}$ we have $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ and $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
If there is no witness then for all such $j$ we again have $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ and $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
For the other direction, if there is no event in $\lopen{\tau_i, \tau_i + a}$ we are done. If there are events in $\lopen{\tau_i, \tau_i + a}$, all $j$
such that $\tau_j - \tau_i \in \lopen{0, a}$ will satisfy $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1$ and $\rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
This clearly gives $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi$.
\end{itemize}
\item $\varphi_1 \mathbin{\mathfrak{U}}^c_{(a, \infty)} \varphi_2 \iff \varphi_1 \mathbin{\mathfrak{U}}^c_{\lopen{a, 2a}} \varphi_2 \vee \Big( \LTLdiamond^w_{[0, c]} \big(\varphi_1 \mathbin{\mathcal{U}}_{(a, \infty)} (\varphi_2 \vee \LTLdiamond_{\leq a - c} \varphi_2) \big) \Big)$:
\noindent
The proof is very similar to the proof of Proposition~\ref{prop:removeunboundedguntil}.
\item $\neg (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \iff \LTLsquare \neg \varphi_2 \vee \big( \neg \varphi_2 \mathbin{\mathcal{U}} (\neg \varphi_2 \wedge \neg \varphi_1) \big)$:
\begin{itemize}[itemsep=0.5em,topsep=0.5em]
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi \iff \rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
This implies that $\varphi_1$ fails to hold before $\varphi_2$ holds, and we have $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \neg \varphi_2 \mathbin{\mathcal{U}} (\neg \varphi_2 \wedge \neg \varphi_1)$.
For the other direction, note that since $\rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \LTLsquare \neg \varphi_2$, the second disjunct must be satisfied,
and it is easy to see that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi$.
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \iff \rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$.
This implies either $\rho, j \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_2 \iff \rho, j \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg \varphi_2$ for all $j > i$ in $\rho$
(this gives $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLsquare \neg \varphi_2$) or $\varphi_1$ fails to hold before $\varphi_2$ holds---$\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg \varphi_2 \mathbin{\mathcal{U}} (\neg \varphi_2 \wedge \neg \varphi_1)$.
For the other direction, if $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLsquare \neg \varphi_2 \iff \rho, i \centernot \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \LTLdiamond \varphi_2$
then $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$ cannot hold. If $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \neg \varphi_2 \mathbin{\mathcal{U}} (\neg \varphi_2 \wedge \neg \varphi_1)$ then
either $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLsquare \neg \varphi_2$ or there is a witness, and it is easy to see that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \varphi_1 \mathbin{\mathcal{U}} \varphi_2$ cannot hold.
\end{itemize}
\item $\theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \wedge \chi \big) \iff \theta \mathbin{\mathcal{U}}_{(a, b)} \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \wedge \chi \big) \\
\vee \Big( \big( \theta \mathbin{\mathcal{U}}_{(a, b)} (\LTLsquare_{(0, 2b)} \varphi_1 \wedge \chi) \big) \wedge \varphi_{ugb} \Big)$:
The proof is very similar to the proof of Proposition~\ref{prop:extract}.
\item $\big( (\varphi_1 \mathbin{\mathcal{U}} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta \iff \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(a, b)} \theta \\
\vee \bigg( \Big( \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(0, b)} (\LTLsquare_{(0, 2b)} \varphi_1) \Big) \wedge \LTLdiamond_{(a, b)} \theta \wedge \varphi_{ugb} \bigg)$
\begin{itemize}[itemsep=0.5em,topsep=0.5em]
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi$. It is obvious that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \LTLdiamond_{(a, b)} \theta$ holds. If the first disjunct of $\phi'$ does not hold,
then $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(0, b)} (\LTLsquare_{(0, 2b)} \varphi_1)$ must hold.
The last conjunct holds by an argument similar to the proof of Proposition~\ref{prop:extract}.
For the other direction, if the first disjunct of $\phi'$ holds then we are done.
If it does not hold, then there must be a witness (at which $\varphi_2$ holds)
in $[\tau_i + 2b, \tau_{|\rho| - 1}]$, and it is easy to see that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu +}} \phi$.
\item $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi \iff \rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi'$:
Assume $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi$. If the first disjunct of $\phi'$ does not hold then there must be events in $[\tau_i + 2b, \tau_{|\rho| - 1}]$.
It follows that $\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \big( (\varphi_1 \mathbin{\mathcal{U}}_{(0, 2b)} \varphi_2) \vee \chi \big) \mathbin{\mathcal{U}}_{(0, b)} (\LTLsquare_{(0, 2b)} \varphi_1)$ and
$\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \LTLdiamond_{(a, b)} \theta$ must hold. The rest is similar to the proof to Proposition~\ref{prop:extract}.
For the other direction, if the first disjunct of $\phi'$ holds then we are done. Otherwise if $\tau_{|\rho| - 1} < b$, it is easy to see that
$\rho, i \mathbin{\models_{\mkern-6mu f}^{\mkern-6mu -}} \phi$. If this is not the case then the proof again follows Proposition~\ref{prop:extract}. \qedhere
\end{itemize}
\end{itemize}
\end{proof}
\section{Conclusion and future work}\label{sec:conclusion}
\paragraph{\emph{Expressive completeness over bounded timed words}}
We showed that \textup{\textmd{\textsf{MTL}}}{} extended with our new modalities
`\emph{generalised Until}' and `\emph{generalised Since}' (\textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}) is expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} over bounded timed words.
Moreover, the time-bounded satisfiability and model-checking problems for \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} remain $\mathrm{EXPSPACE}$-complete, the same as for \textup{\textmd{\textsf{MTL}}}{}.
The situation here is similar to \textup{\textmd{\textsf{LTL}}}{} over general,
possibly non-Dedekind complete, linear orders (e.g., the rationals):
in this case, \textup{\textmd{\textsf{LTL}}}{} can be made expressively complete (for \textmd{\textsf{FO[$<$]}}) by adding the Stavi modalities~\cite{Gabbay1994},
yet the complexity of the satisfiability problem remains $\mathrm{PSPACE}$-complete~\cite{Rabinovich2010}.
Along the way, we also obtained a strict hierarchy of metric temporal logics
based on their expressiveness over bounded timed words.
One drawback of the modalities $\mathbin{\mathfrak{U}}_I^c$ and $\mathbin{\mathfrak{S}}_I^c$ is that they are not very intuitive.
However, as we proved that simpler versions
of these modalities ($\mathcal{B}^\rightarrow_I$ and $\mathcal{B}^\leftarrow_I$) are strictly less expressive,
we believe it is unlikely that any other expressively complete extension of \textup{\textmd{\textsf{MTL}}}{} could be much simpler than ours.
The satisfiability and model-checking procedures for \textup{\textmd{\textsf{MTL}}}{} over time-bounded signals in~\cite{Ouaknine2009}
are based on the satisfiability procedure for \textup{\textmd{\textsf{LTL}}}{} over signals in~\cite{Reynolds2010}.
While the satisfiability problem for \textup{\textmd{\textsf{LTL}}}{} remains $\mathrm{PSPACE}$-complete when interpreted over signals,
very few implementations are currently available~\cite{French2013}.
This is in sharp contrast with the discrete case where a number of mature, industrial-strength tools (e.g., SPIN~\cite{Holzmann1997}) are readily available.
Our results enable the direct application of these tools to
time-bounded verification.
Whether this yields efficiency gains in practice, however, can only be
evaluated empirically, which we leave as future work.
\paragraph{\emph{Expressive completeness over unbounded timed words}}
Building upon a previous work of Hunter, Ouaknine and Worrell~\cite{Hunter2012}, we showed that the \emph{rational} version of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} is expressively complete for \textup{\textmd{\textsf{FO[$<, +\mathbb{Q}$]}}}{}
over infinite timed words. The result answers an implicit open question in a long line of research started in~\cite{Alur1990}
and further developed in~\cite{Bouyer2005, Prabhakar2006, DSouza2006a, Pandya2011}.
It is known that the \emph{integer} version of \textup{\textmd{\textsf{MTL}}}{} extended with counting modalities (and their past counterparts)
is expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} over the reals~\cite{Hunter2013}.\footnote{This result is stronger than~\cite{Hunter2012} as counting modalities (and their past counterparts) can be expressed in \textup{\textmd{\textsf{MTL}}}{} with rational endpoints.}
We conjecture that the analogous result holds in the pointwise semantics, i.e.,~the integer version of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}
becomes expressively complete for \textup{\textmd{\textsf{FO[\texorpdfstring{$<, +1$}{<, +1}]}}}{} when we add counting modalities.
Adapting the proof in~\cite{Hunter2013} to the pointwise case, however, is not a straightforward task.
In particular, the proof relies on the expressive completeness of \textup{\textmd{\textsf{MITL}}}{} with counting modalities
for \textmd{\textsf{Q2MLO}}~\cite{Hirshfeld2004},
a result that itself requires a highly non-trivial proof~\cite{Hirshfeld2006}
and seems to hold only in the continuous semantics.
Besides expressiveness, another major concern in the study of metric logics
is \emph{decidability}.
We intend to investigate whether the expressiveness of
\textup{\textmd{$\textsf{MITL}_\textsf{fut}$}}{} or \textup{\textmd{\textsf{MITL}}}{} can be enhanced with the new modalities
while retaining decidability.
Specifically, we would like to answer the following question:
what is the complexity of the satisfiability problem for the logic obtained by
adding $\mathcal{B}^\rightarrow_I$ (with non-singular $I$) into \textup{\textmd{$\textsf{MITL}_\textsf{fut}$}}{}?
Since $\mathcal{B}^\rightarrow_I$ can be expressed in one-clock alternating timed automata,
it can possibly be handled in the framework of~\cite{Brihaye2014}.
More generally, we may consider \textup{\textmd{\textsf{MITL}}}{} extended with $\mathcal{B}^\rightarrow_I$ and $\mathcal{B}^\leftarrow_I$ (with non-singular $I$);
it is not clear whether allowing these modalities simultaneously
leads to undecidability.
\paragraph{\emph{Monitoring}}
We identified an `easy-to-monitor' fragment of \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}, for which we proposed
an efficient trace-length independent monitoring procedure.
This fragment is much more expressive than the fragments previously considered
in the literature.
Moreover, we showed that informative good/bad prefixes
are preserved by the syntactic rewriting rules in Section~\ref{sec:separation}.
It follows that the informative good/bad prefixes for an arbitrary \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula
can be monitored in a trace-length independent fashion,
thus overcoming a long-standing barrier to the runtime verification
of real-time properties.
For an arbitrary \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{} formula, the syntactic rewriting process could potentially induce a non-elementary blow-up.
In practice, however, the resulting formula is often of
comparable size to the original one, which itself is typically small.
For example, consider the following formula:
\[
\LTLsquare \big( \texttt{ChangeGear} \implies \LTLdiamond_{(0, 30)} (\texttt{InjectFuel} \wedge \LTLdiamondminus \texttt{InjectAir}) \big) \,.
\]
The resulting formula after rewriting is
\[
\arraycolsep=0.3ex
\begin{array}{rcl}
\LTLsquare \big( \texttt{ChangeGear} & \implies & \LTLdiamond_{(0, 30)} (\texttt{InjectFuel} \wedge \LTLdiamondminus_{(0, 30)} \texttt{InjectAir}) \\
& & {} \vee ( \LTLdiamond_{(0, 30)} \texttt{InjectFuel} \wedge \LTLdiamondminus \texttt{InjectAir} ) \big) \,.
\end{array}
\]
In fact, it can be argued that most common real-time specification patterns~\cite{Konrad2005} belong syntactically to our `easy-to-monitor' fragment
and thus need no rewriting.
Another way to alleviate the issue is to allow more liberal syntax
(or more derived operators).
For example, the procedure described in Section~\ref{subsec:monitoring}
can handle subformulas with unbounded past without modification.
To detect informative bad prefixes, our monitoring procedure uses a deterministic finite automaton
doubly-exponential in the size of the input formula.
While such a blow-up is rarely a problem in practice (see~\cite[Section $2.5$]{Bauer2011}),
it would be better if it could be avoided altogether.
In the untimed setting, it is known that if a safety property can be written as an \textup{\textmd{\textsf{LTL}}}{} formula,
then it is equivalent to a formula of the form $\LTLsquare \psi$
where $\psi$ is a past-only \textup{\textmd{\textsf{LTL}}}{} formula~\cite{Lichtenstein1985}.
So, if we restrict our attention to safety properties,
it suffices to consider formulas of this form,
for which there is an efficient monitoring procedure that
uses $O(| \psi |)$ time (per event) and $O(| \psi |)$ space~\cite{Havelund2001}.
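To give a feel for such a procedure, the following Python sketch is our own illustration of the underlying dynamic-programming idea (it is not the procedure of~\cite{Havelund2001} itself, and the AST encoding and names are ours): it monitors $\LTLsquare \psi$ for a past-only $\psi$ by storing, for every subformula, its truth value at the previous event and updating all of them in one bottom-up pass per event.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Node:
    op: str                       # 'true', 'ap', 'not', 'and', 'or', 'yesterday', 'since'
    name: Optional[str] = None    # proposition name when op == 'ap'
    left: Optional['Node'] = None
    right: Optional['Node'] = None

def subformulas(f: Node) -> List[Node]:
    """List of subformulas, children before parents (bottom-up order)."""
    out: List[Node] = []
    def visit(n: Optional[Node]) -> None:
        if n is not None:
            visit(n.left); visit(n.right); out.append(n)
    visit(f)
    return out

class PastLTLMonitor:
    """Monitors 'always psi' for a past-only psi: step() returns False as soon
    as the current prefix violates the property."""
    def __init__(self, psi: Node) -> None:
        self.psi, self.order = psi, subformulas(psi)
        self.prev, self.first = {}, True

    def step(self, event: Set[str]) -> bool:
        cur = {}
        for n in self.order:                     # one O(|psi|) pass per event
            if n.op == 'true':      v = True
            elif n.op == 'ap':      v = n.name in event
            elif n.op == 'not':     v = not cur[id(n.left)]
            elif n.op == 'and':     v = cur[id(n.left)] and cur[id(n.right)]
            elif n.op == 'or':      v = cur[id(n.left)] or cur[id(n.right)]
            elif n.op == 'yesterday':
                v = (not self.first) and self.prev[id(n.left)]
            elif n.op == 'since':   # (l S r) = r or (l and previously (l S r))
                v = cur[id(n.right)] or (cur[id(n.left)]
                                         and not self.first and self.prev[id(n)])
            else:
                raise ValueError(n.op)
            cur[id(n)] = v
        self.prev, self.first = cur, False
        return cur[id(self.psi)]

# Example: always (alarm -> Once(request)), i.e. psi = !alarm or (true S request)
psi = Node('or',
           left=Node('not', left=Node('ap', name='alarm')),
           right=Node('since', left=Node('true'), right=Node('ap', name='request')))
monitor = PastLTLMonitor(psi)
print([monitor.step(ev) for ev in [set(), {'request'}, {'alarm'}]])  # [True, True, True]
\end{verbatim}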
Unfortunately, the question of whether a corresponding result holds for \textup{\textmd{\textsf{MTL}}}{} (or similar metric temporal logics) is still open.
Our procedure detects only \emph{informative} good/bad prefixes, which themselves can
be regarded as easily-checkable certificates for the fulfilment/violation of the property.
While we believe this limitation is in no way severe---in fact, the limitation
is implicit in almost all current approaches to monitoring real-time properties---there are certain
practical scenarios where detecting \emph{all} good/bad prefixes is preferred.
We could have used two deterministic finite automata that detect all good/bad prefixes
of the backbone \textup{\textmd{\textsf{LTL}}}{} formula, but even then they cannot detect all good/bad prefixes
of the whole formula (consider Example~\ref{ex:pathologicallysafe}).
We leave as future work a procedure that detects all good/bad prefixes.
Finally, we remark that the offline trace-checking problem is of independent theoretical interest~\cite{Markey2003}.
It is known that the trace-checking problems for \textup{\textmd{\textsf{LTL}}}{}~\cite{Kuhtz2009} and \textup{\textmd{\textsf{MTL}}}{}~\cite{Bundala2014}
are both in $\mathrm{AC^1[\log \mathrm{DCFL}]}$, yet their precise complexity is still open.
It would be interesting to see whether the construction for \textup{\textmd{\textsf{MTL}}}{} in~\cite{Bundala2014} carries over to \textup{\textmd{\textsf{MTL[\texorpdfstring{$\guntil, \gsince$}{G, U}]}}}{}.
\bibliographystyle{alpha}
\section{Overview}
In this short paper, we present a research program aimed at understanding the bulk-boundary relationship from the bulk non-perturbative quantum gravity perspective.
Our goal is to clarify the role of boundaries and the physics of edge modes in quantum gravity.
In non-perturbative approaches to quantum gravity, it is natural to consider finite, or `quasi-local' boundaries.
On these, we can derive the boundary theory induced by the choice of classical boundary conditions or class of quantum boundary states.
In a holographic approach, these boundary theories define holographic duals, still encoding the same physical content as the bulk theory, providing a non-trivial representation of bulk observables and allowing one to reconstruct the bulk quantum geometry from boundary correlations.
Our objective is to test this non-perturbative holographic approach, which provides a quasi-local version of the AdS/CFT correspondence \cite{Witten:1998qj}.
Three-dimensional gravity is an ideal testbed for this program.
Indeed, it can be formulated as a topological field theory, which allows for an exact non-perturbative quantization \cite{Witten:1988hc}.
In this context, we propose to study the Ponzano--Regge state-sum model \cite{PonzanoRegge1968,Freidel:2004vi,Freidel:2005bb,Barrett:2008wh}.
This is a three-dimensional discrete topological quantum field theory (TQFT), whose relation to the Turaev--Viro \cite{Turaev:1992hq} and Reshetikhin--Turaev topological invariants is explicitly understood \cite{reshetikhin1990ribbon,ReshetikhinTuraev1991,Freidel:2004nb}.
Concretely, the Ponzano--Regge model provides a quantized path integral for 3d Regge calculus, which is a well-defined discretization of general relativity \cite{Regge1961,Regge:2000wu}.
Specifically, it corresponds to a theory of quantum gravity with vanishing cosmological constant in Euclidean signature. Generalizations of the model to non-vanishing cosmological constant exist \cite{Turaev:1992hq,MizoguchiTada1992,TaylorWoodward2005,Freidel:2005bb}, but we postpone their investigation to future work.
From the Ponzano--Regge state-sum, transition amplitudes for physical states of quantum geometry can be explicitly computed.
However, the question of holographic dualities, despite being a very natural one, has not yet been systematically explored in this framework.
Indeed, the Ponzano--Regge model provides a bulk-local description of the 3d quantum geometry, so it is natural to focus on finite bounded regions of space-time, explore the various possible quantum boundary conditions and investigate the resulting quantum boundary theories.
Since the Ponzano--Regge model is intrinsically discrete, we expect the Ponzano--Regge amplitude with boundaries to induce discrete statistical physics models on the 2d boundary. Such models, in their critical regime, typically lead to 2d conformal field theories (CFTs) in the continuum limit. This scenario would lead to a quasi-local version of the AdS${}_{3}$/CFT${}_{2}$ correspondence \cite{brown1986central,Carlip2005b,Heydeman:2016ldy,Sfondrini:2014via,Kraus:2006wn}.
An important question is whether the continuum limit can be reached for finite boundaries at all.
Indeed, quantum gravity naturally sets a fundamental minimal length scale, and for this reason it is likely that the infinite refinement limit needed for reaching the continuum cannot be achieved for finite boundaries.
In turn, this would mean that the dual statistical physics boundary theories can only reach their critical regime for asymptotic boundaries.
In other words, they would flow towards their limiting CFT as the typical length scale of the boundary geometry becomes much larger than the fundamental quantum gravity length scale, to the extent that the latter can be neglected.
For finite boundaries, we would then be left with the non-critical discrete statistical models as holographic duals.
Their correlations should still allow us to probe the bulk 3d quantum geometry and faithfully represent the 3d quantum gravity amplitudes.
At this point a remark is in order. Three-dimensional gravity is topological, i.e. it does not possess any local physical degrees of freedom. For this reason, discrete models such as the Ponzano--Regge topological state-sum can capture all the relevant degrees of freedom of the continuum theory, irrespective of the coarseness of the employed discretization.
However, the introduction of a boundary with metric boundary conditions does reveal the discretization, which explains our remarks in the previous paragraph.\\
In this paper, we present and streamline our recent results derived in detail in \cite{PRholo1,PRholo2}, and push their analysis further. In an effort to clarify the context of that work, we explain how to derive the holographic duals on the boundary of the Ponzano--Regge model (see also \cite{Riello2018}), and discuss the critical problem of identifying a quantum notion of asymptotic infinity. In particular, we discuss how we are able to reproduce results found in perturbative contexts \cite{Barnich:2015mui, BonzomDittrich}, while at the same time extending these results with non-perturbative corrections.
{
Since in the Ponzano--Regge model there are two ways of increasing the size of the boundary and the number of degrees of freedom it carries, two naive notions of large scale limit exist: one consists in taking a large number of Planck-sized building blocks, the other in taking a large value for the spins $j$ at fixed number of building blocks.
In the former approach, a renormalization flow can be defined that describes effective actions taking the effects of a finer and finer lattice into account (see e.g. \cite{Dittrich:2014ala}).
On the top of this refinement limit, a semi-classical limit can also be taken.
The latter approach, instead, turns out to be best understood not in terms of a large-scale limit, but rather in terms of a semiclassical (e.g. 0- or 1-loop) limit on a fixed discretization (e.g. \cite{PonzanoRegge1968,Bianchi:2006uf,Livine:2006ab,Han:2013hna,Barrett:2009mw}).
On the top of this semiclassical limit, a continuum limit can also be taken.
Therefore, what seemed to be two different ways of taking a large-scale limit are in fact two different limits altogether.%
\footnote{
For completeness, let us mention an extra possibility with regard to the continuum limit, which we will not pursue here. It is possible to consider the discretization itself as a quantum degree of freedom to be ``summed over''. This is actually the main insight behind the dynamical triangulation approach to quantum gravity. In the context of spinfoam models, this point of view leads to group field theory and tensor models, for which one can define and study a renormalization flow \cite{Carrozza:2013mna,Rivasseau:2011hm}. It is an open question how these various limits and renormalization flows connect to each other.
}
When both are taken one after the other, the result will not a priori be independent of the order in which the limits are taken.
Nonetheless, we showed in \cite{PRholo1,PRholo2} that some important features agree in the two approaches.
Indeed, both these choices remarkably lead to the same divergence behavior of the associated amplitudes \cite{PRholo1,PRholo2}. Crucially, this is the same behavior that characterizes the perturbative results as well. In the case of the semi-classical calculation, the one-loop result not only reproduces the perturbative ones obtained both in the continuum \cite{Barnich:2015mui} and in the discrete \cite{BonzomDittrich}, but also includes contributions of saddle points associated to non-classical `quantum' backgrounds.
We hope that the present program will also help elucidate renormalization in discrete quantum gravity models \cite{Dittrich:2014ala, Dittrich:2014mxa,Riello:2013bzw,Banburski:2014cwa} and in particular provide first steps to implement holographic renormalization \cite{deBoer:1999tgo,deHaro:2000vlm} in non-perturbative quantum gravity models.
}
\section{Statistical Duals}
The Ponzano--Regge model \cite{PonzanoRegge1968,Freidel:2004vi,Freidel:2005bb,Barrett:2008wh} proposes a topological path integral for discretized 3d geometries.
Initially defined for triangulations, it can readily be extended to arbitrary 3d cellular decomposition. The amplitudes are well-defined through suitable gauge-fixing \cite{Freidel:2004vi,Bonzom:2010zh,Bonzom:2012mb} and define a topological invariant \cite{Barrett:2008wh,Freidel:2004nb}. They have two equivalent definitions, either in terms of products of spin recoupling symbols (such as the $\{6j\}$-symbol representing a quantized tetrahedron) or as a discretized path integral over holonomies encoding the parallel transport along the triangulated manifold. Here, we will not review these definitions of the Ponzano--Regge bulk amplitude, but we will focus on the boundary states and induced boundary theory.
The general relation between boundary conditions, or boundary states, and (dual) boundary theories, is to be looked for in the correspondence between spin-network states and statistical models {\color{Green} \cite{Witten1989,Witten1990, Turaev1992, KirillovReshetikhin1989, Westbury1998, Dittrich:2013jxa, Bonzom:2015ova, Riello2018}}.
Spin-network states are gauge-invariant Wilson graph observables naturally associated to the connection representation of gravity.
Denoting by $\omega$ the relevant connection 1-form and by $g_{l_i} = P\exp\int_{l_i} \omega$ the parallel transport along the $i$-th link $l_i$ of the Wilson graph $\Gamma$, such states have the form
\begin{equation}
\Psi[\omega] = \Psi^\Gamma(\{g_{l_i}\}_i) = \Psi^\Gamma(\{ h_{t(l_i)} g_{l_i} h_{s(l_i)}^{-1}\}_i ),
\end{equation}
where the last equality is a representation of gauge invariance, with $s(l)$ and $t(l)$ denoting the source and target vertices of $l$, respectively.
Using standard loop quantum gravity techniques, when the supporting graph is dual to a surface, spin-network states---or some specific superpositions thereof---can be interpreted as quantum boundary metrics.
Being supported on a graph, these are intrinsically discrete objects.
It is important to notice, however, that the Hilbert space of spin-network states in loop quantum gravity can be understood as a space of {\it continuum} quantum connections,\footnotemark~ and in this sense all spin-network states are continuum states.
Nonetheless, their {\it operational} geometrical interpretation still reposes on discrete structures, and their discreteness can be interpreted as the result of a ``physical'' finite-resolution pre- or post-selecting measurement.
In this case, the graph itself can be interpreted quite literally as a network of physical beacons and space-measuring devices.%
\footnotetext{This is guaranteed by cylindrical consistency of the wave functions together with the existence of an inductive limit on the wave functions: spin networks are actually defined on infinite sequences of refining graphs. The inductive continuum limit can moreover be defined both for the Ashtekar--Lewandowski and $BF$ representations of LQG \cite{Ashtekar:1994mh,Baez:1994hx,Freidel:2011ue,Bahr:2015bra}.}
On top of the graph-induced discreteness, when dealing with Euclidean theories, another type of discreteness comes into play.
This is the Planck-scale discreteness associated to the {\it spectrum} of metric operators.
The mathematical origin of this discreteness lies in the compactness of the parallel transport variables $g_l$ along the edges of the graph, which replaced their infinitesimal counterparts---the connection variable $\omega$---as the fundamental variable.
In three Euclidean dimensions, $g_l\in\mathrm{SU}(2)$ and the specifics of the quantum metric described by a spin-network state are encoded in the choice of ($i$) $\mathrm{SU}(2)$ spins $j_l$ attached to the edges of the spin-network graph, and of ($ii$) gauge-invariant tensors, aka intertwiners $\iota_v$, attached to its vertices.
Spin-network states with fixed spins and intertwiners will be denoted by
\begin{equation}
\Psi^\Gamma_{(j,\iota)}(g_l).
\end{equation}
More specifically, intertwiners are $\mathrm{SU}(2)$-invariant states in the tensor product of several $\mathrm{SU}(2)$ representations, say labeled by $N$ spins $j_{1},..,j_{N}$,
\begin{equation}
\iota_v= \bigotimes_{i=1}^N D^{j_i}(h) \triangleright \iota_v
\,,\quad\forall h\in\mathrm{SU}(2)
\,,
\end{equation}
that is, explicitly showing the sum over magnetic indices,
\begin{equation}
(\iota_v)_{n_1\dots n_N} = \sum_{\{m\}}\Big( \prod_{i=1}^N D^{j_i}(h)_{n_i m_i}\Big) (\iota_v)_{m_1\dots m_N}
\,.
\end{equation}
With this notation, $\Psi^\Gamma_{(j,\iota)}(g_l)$ is defined as the contraction of the vertex intertwiners with the $j_l$-representation Wigner matrices of the parallel transports $g_{l}$, according to the combinatorics imposed by the underlying graph $\Gamma$:
\begin{equation}
\Psi^\Gamma_{(j,\iota)}(g_l)
\,=\,
\mathrm{Tr}_\Gamma \Big(\bigotimes_{v}\iota_{v}\otimes \bigotimes_{l}D^{j_{l}}(g_{l})\Big)
\,,
\end{equation}
(here the trace stands for the sum over the magnetic indices at both ends of each link of the graph $\Gamma$.)
The amplitude $Z_M$ of a quantum gravitational process in a finite region $M$ subjected to the boundary conditions imposed by a given spin-network $\Psi^\Gamma$, is of course a function of the spin-network state itself.
When dealing with the dynamics of flat space, that is of three-dimensional (quantum) gravity with vanishing cosmological constant, this function is extremely simple.
If $M=\mathbb B_3$ is the 3-ball, with its 2-sphere boundary, the amplitude essentially reduces to what is known as a spin-network evaluation, i.e.
\begin{equation}
Z_{\mathbb B_3}(\Psi^\Gamma) = \Psi^\Gamma(g_l = \mathbb{1}).
\end{equation}
Crucially, this evaluation gets twisted in non-trivial ways by the presence of topological features such as (bulk) non-contractible cycles \cite{Freidel:2005bb}.
Spin-network evaluations are complete contractions of intertwiners, associated to the vertices of the spin-network graph.
Such an evaluation can be read as the sum over all magnetic-index configurations one can attach to the edges of the graph, with specific Boltzmann weights given by the entries of the intertwiner tensors themselves. Symbolically,
\begin{equation}
Z_{\mathbb B_3}(\Psi^\Gamma_{(j,\iota)}) = \mathrm{Tr}_\Gamma \Big( \bigotimes_{v\in\Gamma} \iota_v \Big) = \sum_{\{m_l | l\in \Gamma\}} \prod_{v\in\Gamma} (\iota_v)_{m\dots}
\label{eq_sum_m}
\end{equation}
where in the last term the $m$-indices of $\iota_v$ label states in those representations $V_{j_l}$ of $\mathrm{SU}(2)$ which are attached to the links of $\Gamma$ arriving at or leaving from the vertex $v$.
From this perspective, an $\mathrm{SU}(2)$ spin-network evaluation is essentially the computation of the partition function of a statistical model, whose rotational invariance is automatically guaranteed by the $\mathrm{SU}(2)$-invariance of the intertwiners themselves.
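As a schematic illustration of the contraction in Eq. \eqref{eq_sum_m} (a toy of our own: index placements and normalisations are glossed over, and the intertwiners are chosen by hand), one can contract explicit intertwiner tensors numerically. Here two 4-valent vertices, each carrying the $\mathrm{SU}(2)$-invariant tensor $\epsilon\otimes\epsilon$ on four spin-$1/2$ edges, are joined edge by edge and the shared magnetic indices are summed over:
\begin{verbatim}
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # invariant epsilon tensor for j = 1/2
iota = np.einsum('ab,cd->abcd', eps, eps)   # a 4-valent intertwiner, all spins 1/2

# Two vertices joined by four edges: sum over the shared magnetic indices m_1..m_4
Z = np.einsum('abcd,abcd->', iota, iota)
print(Z)    # 4.0 for this toy choice of intertwiners
\end{verbatim}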
In particular, for a homogeneous spin $j=1/2$ on all edges of the boundary spin network, and when the graph is a regular square lattice, which we denote $\Gamma=\square$, the Ponzano--Regge amplitude with boundary corresponds to the partition function of the ``isotropic'' 6-vertex model \cite{PRholo1} (see also \cite{Witten1989,Witten1990} for a broader perspective on the equivalence of 3d quantum gravity with statistical models), as illustrated in Figure \ref{fig:6vertex_model_vertex}:
\begin{equation}
Z(\Psi^\square_{(\f12,\iota)})
= \sum_\text{arrows} a^{\#_I + \#_{II}} b^{\#_{III}+\#_{IV}} c^{\#_{V} + \#_{VI}}
\end{equation}
with
\begin{equation}
\Delta := \frac{a^2 + b^2 - c^2}{2ab} =1.
\end{equation}
\begin{figure}[h!]
\begin{tikzpicture}[scale=1]
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (0,-1) node[left ]{\tiny{2}}-- (0,1)node[right ]{\tiny{1}}; \draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (-1,0)node[above ]{\tiny{3}}--(1,0)node[below ]{\tiny{4}};
\draw (0,-1.5) node[scale=1]{$\omega(\mathrm I) =a$};
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (0,-4) -- (0,-2);
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (-1,-3)-- (1,-3);
\draw (0,-4.5) node[scale=1]{$\omega(\mathrm{II}) =a$};
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (3,-1) -- (3,1);
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (2,0)-- (4,0);
\draw (3,-1.5) node[scale=1]{$\omega(\mathrm{III}) =b$};
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (3,-4) -- (3,-2);
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (2,-3)-- (4,-3);
\draw (3,-4.5) node[scale=1]{$\omega(\mathrm{IV}) =b$};
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (6,-1) -- (6,1);
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (5,0)-- (7,0);
\draw (6,-1.5) node[scale=1]{$\omega(\mathrm{V}) =c$};
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{>}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{<}}},postaction={decorate}] (6,-4) -- (6,-2);
\draw[decoration={markings,mark=at position 0.3 with {\arrow[scale=1.5,>=stealth]{<}}},decoration={markings,mark=at position 0.8 with {\arrow[scale=1.5,>=stealth]{>}}},postaction={decorate}] (5,-3)-- (7,-3);
\draw (6,-4.5) node[scale=1]{$\omega(\mathrm{VI}) =c$};
\end{tikzpicture}
\caption{The 6 possible arrow configurations at a vertex for the 6-vertex model. The statistical weights $\omega$ associated to the vertex configurations are $\omega(I)= \omega(II) = a$, $\omega(III)= \omega(IV) = b$ and $\omega(V)= \omega(VI) = c$. For the correspondence with the spin-network evaluation defining the Ponzano--Regge partition function for a homogeneous spin $j=\f12$ on the boundary lattice, the arrow direction translates into the sign of the magnetic index $m$ living on the edge. Indeed, for a spin $j=\f12$, the magnetic index can only take two values, $m=\pm \f12$.}
\label{fig:6vertex_model_vertex}
\end{figure}
In the case of non-trivial topologies, the above equality still holds locally, but further non-local operator insertions are needed to account for the existence of non-trivial holonomies along the non-contractible cycles. We explore this case in the next section (see also \cite{Riello2018}).
The 6-vertex model is an integrable statistical model, whose transfer matrix coincides with that of the XXX Heisenberg spin-chain with spectral parameter $\lambda = i(a/c - 1/2)$, see e.g. \cite{PinkBook}.
The degrees of freedom of the spin-chain are indeed the same magnetic indices appearing in Eq. \eqref{eq_sum_m}; however, the system is not quite in a pure Gibbs ensemble of the form ${\mathrm{Tr}}(e^{-\beta H_\text{XXX}})$, but rather in some involved generalized ensemble, whose ``Hamiltonian'' is given by a combination of conserved quantities weighted by $\lambda$, see e.g. \cite{Faddeev}.
Notice that the XXX spin chain is isotropic and hence $\mathrm{SO}(3)$ invariant.
See \cite{Riello2018} for more on these correspondences.
For generic spins, a special class of spin-network graphs is constituted by Wilson line weaves.
In a weave, a set of Wilson lines pass above and beneath each other, forming links of knots, without ever intersecting.
This situation corresponds to intertwiners whose recoupling channel is in a state of vanishing spin (notice that these form a basis of intertwiners among four spins $j=1/2$).
Such peculiar states were observed to be related to integrable models of various kinds almost thirty years ago \cite{KirillovReshetikhin1989,Turaev1992, Witten1989,Witten1990}.
Nevertheless, at the time, the perspective was different, and the correspondence was made with knot expectation values in Chern--Simons theory, rather than with boundary spin-network evaluations.%
\footnote{
The degrees of freedom in those statistical models are more naturally expressed in terms of the spins $j$, rather than the magnetic indices $m$. This `spin representation' has also a holographic/gravitational interpretation, as explained in \cite{Riello2018}.
}
The appearance of magnetic indices as degrees of freedom of the boundary theory is an aspect that deserves attention.
Magnetic indices are quantum states in a co-adjoint orbit of $\mathfrak{su}^\ast(2)$.
Therefore, following the LQG quantum geometrical interpretation, they represent the possible (quantum) orientations of a vector of fixed length
in $\mathbb R^3$.
Thus, the present correspondence explicitly fulfils the expectation of much of the contemporary work in the context of gravity and gauge theories in the presence of boundaries, namely that the dual boundary degrees of freedom are constituted by the reference-frame orientations of a ``would-be-gauge'' symmetry
\cite{DonnellyFreidel2016, GomesRiello2017, Geiller2017b,Carlip2005a,Carlip2005b, Balachandran1992,Balachandran1996}.
For increasing values of the spins, the number of allowed magnetic indices grows: the number of Planck-sized cells present in the larger co-adjoint orbits grows, and the latter can be better and better approximated by their classical counterparts.
This explains why, in the large spin regime, semiclassical methods exist to analyze the relevant spin-network evaluations.
These methods are indeed well developed, both in the mathematical and physical literature, and the emergence of geometrical objects from the large spin asymptotics of such evaluations is well studied \cite{PonzanoRegge1968, Roberts1999,Gukov:2003na,TaylorWoodward2005,Barrett:2009gg,Barrett:2009mw,Haggard:2014xoa,Barrett:1998gs}.
So far, we considered only spin-network states at fixed spins.
This, however, need not be a necessary restriction, and considering superpositions of all spins has its own interest.
For example, this fact can be used to peak the state not only on an intrinsic geometry, corresponding to metric boundary conditions, but also on its extrinsic geometry, leading to more general boundary conditions \cite{BahrThiemann2007, Bianchi:2006uf}.
Another interesting example is given by the construction of so-called spin-network generating functions.
These are boundary states which depend analytically on one parameter per edge, in such a way that their power expansion in these parameters gives all possible (fixed-spin) spin-network evaluations.
Pictorially, these parameters can be interpreted as chemical potentials---or, more precisely, fugacities---for the edge spins.
Crucially, generating-function boundary states on specific graph types encode the (bosonized dual of the) Ising model \cite{Bonzom:2015ova, Dittrich:2013jxa}.
\section{Non-trivial Topologies}
As it was mentioned earlier, these correspondences are enriched by the presence of boundary non-contractible cycles, which stay non-contractible in the bulk.
At this point a clarification is necessary.
It is often assumed that in a theory of quantum gravity, one has to sum over {\it all} manifolds compatible with the boundary data, including all compatible bulk topologies.
However natural and appealing, this idea is plagued with difficulties ranging from defining evolution on non-globally-hyperbolic manifolds in a canonical setting, to the very classification of topologies in dimensions 4 and higher in a covariant setting.
For these reasons, in a large part of the quantum gravitational literature (see however \cite{Rivasseau:2011hm,Carrozza:2013mna, MaloneyWitten2007}), the issue of summing over topologies has been put aside, and the focus restricted to the ``integration'' over metrics.%
\footnote{In four dimensions, the problem is even subtler, because of the non-uniqueness and proliferation of differential structures.}
Here, we will adopt this conservative viewpoint, and hence postpone all investigations of topology change, and large diffeomorphisms alike.
This being clarified, we can address the question of how the boundary theory ``knows'' about its bulk-contractible and bulk-non-contractible cycles.
From the bulk perspective, non-contractible cycles correspond to non-trivial monodromies that need to be integrated over.
Using gauge invariance, these monodromies can be made to have support on a single small cylinder $\mathbb D_2\times [0,\epsilon]$, whose boundary counterpart is a ring $\mathbb S^1\times[0,\epsilon]$ winding around the conjugate boundary cycle.
This ring supports a non-local ``topological'' operator acting on the boundary spin-network state.
The topological nature of this operator is a consequence of the flatness of the fundamental connection.
More technically, at the spin-network level, the operator arising from the integration over all possible monodromies---when unconstrained---can be easily recognized to correspond to the insertion along the ring of a Haar intertwiner or, in other words, of a group-averaging operator%
\footnote{A Haar intertwiner is an intertwiner of the form $$(\iota_\text{Haar})_{m'_1\dots m'_L}^{m_1\dots m_L}=\int_{\mathrm{SU}(2)} \d g\, \prod_{l=1}^L D^{j_l}(g)^{m_l}{}_{m'_l},$$ where $D^j(g)$ is a Wigner matrix in the representation of spin $j$; $m,m'\in\{-j,\dots, j\}$ are magnetic indices; $\d g$ is the Haar measure on the group. Finally, in this formula, $l$ labels the edges crossing the above-mentioned ring.\label{fnt_D}}%
---see \cite{PRholo1,PRholo2,Riello2018}.
The modification of the boundary theory via these operator insertions explicitly breaks the symmetry between the various non-contractible cycles of the surface, thus imprinting on the boundary theory some knowledge of the bulk topology.
In practical computations, these topological operators play a crucial role.
We will present a concrete example of this in the next section.
\section{Example: Coherent Torus}
The simplest example that can provide insights on the above program is given by a twisted solid torus spacetime in three-dimensional Euclidean quantum gravity with vanishing cosmological constant. Twisted means that the solid torus $M_3 \cong \mathbb D_2\times\mathbb{S}_1$ is obtained by identifying the bottom and the top of the cylinder $\mathbb D_2(a)\times [0,\beta]$ up to a rotation of $\gamma$ radians. Here, $a$ stands for the radius of the two-disk, and $\beta$ for the Euclidean time extension of the cylinder.
This example has already been studied with other techniques---most notably via the perturbative quantum Einstein--Hilbert theory at 1-loop over a flat background, both in the continuum \cite{Barnich:2015mui} and in the ``discretum'' \cite{BonzomDittrich}, and as a limit of the corresponding AdS$_3$/CFT$_2$ computation \cite{MaloneyWitten2007,GiombiMaloneyYin2008,BarnichOblak2014,Oblak:2015sea}---and can therefore serve as a benchmark for the present methods.
The most prominent feature of all these approaches is a partition function which depends on $\gamma$ in a way that admits a (formal) expansion over boundary momentum eigenmodes $p\in\mathbb N$---i.e. Fourier modes in the bulk-contractible, ``spacelike'', direction---as%
\footnote{The identification of $p$ as spacelike eigenmodes is clearest in some formulations \cite{MaloneyWitten2007,Oblak:2015sea,BonzomDittrich}, but can remain rather obscure in others \cite{GiombiMaloneyYin2008,Barnich:2015mui}. }
\begin{equation}
Z(\beta,\gamma) \sim \mathrm{e}^{-\frac{(2)\pi \beta}{\ell_\text{Pl}}} \prod_{p\geq2}\frac{1}{2 -2 \cos(\gamma p)} ,
\label{eq_Z}
\end{equation}
where $\ell_\text{Pl}=8\pi G_\text{N}\hbar$, and the factor of 2 is in parentheses because its presence depends on the chosen boundary conditions (standard Gibbons--Hawking--York boundary conditions require it; for details, see the first section of \cite{PRholo1}).
Notice the peculiar fact that for $\gamma \in 2\pi \mathbb Q$, there are $p$'s whose contribution explodes.
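To spell this out with a concrete value: at $\gamma = 2\pi/3$ every third mode hits a vanishing denominator,
\[
2 - 2\cos\Big(\frac{2\pi}{3}\, p\Big) = 0 \qquad \text{for } p = 3, 6, 9, \dots,
\]
so the corresponding factors in Eq. \eqref{eq_Z} diverge.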
In AdS space, this issue---as well as the convergence of $Z$ at $p\to\infty$---is cured by the fact that the role of $\gamma$ is played by the torus' modulus, $2\pi\tau = \gamma + i\sqrt{|\Lambda|}\beta$.
The resulting ``thermal AdS'' partition function is then closely related to Dedekind's $\eta(\tau)$ function, a modular form strictly speaking defined on the upper half complex plane, $\Im(\tau)>0$ \cite{MaloneyWitten2007}. In this respect the flat limit is expected to be quite singular, and deserves to be studied independently.
In any case, quite remarkably, even in the flat case multiple different approaches \cite{Barnich:2015mui,BonzomDittrich,Oblak:2015sea} agree with the formal result of Eq. \eqref{eq_Z}, although the mechanisms leading to this formula and the regularizations that make sense of it are technically very different.
In all cases, a crucial fact is that the modes $p=0$ and $p=1$ do not appear in the product as a consequence of diffeomorphism symmetry (see \cite{PRholo1} for a detailed comparison).
Perhaps one of the most interesting derivations of the formula \eqref{eq_Z} is as a character of BMS$_3$ in the ``vacuum'' representation (``massive'' ones have a different prefactor, and a product over $p$ that starts at $p=1$) \cite{Oblak:2015sea}.
This is taken as a hint that a dual boundary theory indeed exists whose symmetry group is (a central extension of) BMS$_3$, in analogy to the Virasoro symmetry of the AdS$_3$/CFT$_2$ case.
This idea is further supported by the fact that the BMS$_3$ characters can be obtained through a zero-cosmological-constant limit of the Virasoro ones.
Here, we present results on the analysis of precisely this situation within the bulk local non-perturbative approach provided by the Ponzano--Regge (PR) model \cite{PRholo1,PRholo2}.
The first question one faces in setting up the computation within the PR model is what boundary state one is going to use.
A survey of the previous methods suggests that the boundary state should encode the geometry of a rectangular cylinder.
Remarkably, this fact is made explicit in the only other quasi-local computation \cite{BonzomDittrich}, while it is implicitly used in the other computations intrinsic to the boundary---since they assume translational symmetry along the two boundary directions. It is, however, completely hidden in the perturbative Einstein--Hilbert approach.
In order to impose the desired boundary conditions, coherent spin-network techniques, developed in the context of loop quantum gravity, were used to design a state corresponding to an $N_t\times N_x$ regular rectangular lattice.
The twisted toroidal topology is implemented by an identification of the lattice appropriately shifted by $N_\gamma$ units in the ``spatial'' direction (see figure \ref{fig:discretization_exemple}), so that
\begin{equation}
\gamma = 2\pi \frac{ N_\gamma}{N_x},
\end{equation}
while the spin-network intertwiner encoding the rectangular plaquette geometry dual to the spin-network vertex $v$ is\footnote{See footnote \ref{fnt_D} for details on the $D$-matrix notation. Notice that the second magnetic index, $m'_l$ in the notation of footnote \ref{fnt_D}, is here fixed to its maximal value $j_l$. This choice ultimately ensures that these states are coherent states.}
\begin{equation}
\iota_v^{m_1\dots m_4} = \int_{\mathrm{SU}(2)}\d G_v \prod_{l=1}^4 D^{j_l}(G_v g_{\xi^v_l})^{m_l}{}_{j_l}.
\label{eq_coh}
\end{equation}
\begin{figure}[h!]
\begin{tikzpicture}[scale=0.55]
\coordinate (OA) at (1.59,0);
\coordinate (A1) at (0,0);
\coordinate (A2) at (1.1,0.77);
\coordinate (A3) at (2.7,0.77);
\coordinate (A4) at (3.3,0);
\coordinate (A5) at (2.1,-0.83);
\coordinate (A6) at (0.5,-0.83);
\coordinate (OB) at (1.59,-1.5);
\coordinate (B1) at (0,-1.5);
\coordinate (B2) at (1.1,-0.73);
\coordinate (B3) at (2.7,-0.73);
\coordinate (B4) at (3.3,-1.5);
\coordinate (B5) at (2.1,-2.35);
\coordinate (B6) at (0.5,-2.35);
\coordinate (OC) at (1.59,-3);
\coordinate (C1) at (0,-3);
\coordinate (C2) at (1.1,-2.23);
\coordinate (C3) at (2.7,-2.23);
\coordinate (C4) at (3.3,-3);
\coordinate (C5) at (2.1,-3.83);
\coordinate (C6) at (0.5,-3.83);
\draw (A1) -- (A2) -- (A3) -- (A4) -- (A5) -- (A6) -- cycle;
\draw (OA) -- (A1) ; \draw (OA) -- (A2); \draw (OA) -- (A3); \draw (OA)--(A4); \draw (OA) --(A5); \draw (OA)-- (A6);
\draw (B1) -- (B2) -- (B3) -- (B4) -- (B5) -- (B6) -- cycle;
\draw [dashed] (OB) -- (B1); \draw [dashed] (OB) -- (B2); \draw [dashed] (OB) -- (B3); \draw [dashed] (OB)--(B4); \draw [dashed] (OB) --(B5); \draw [dashed] (OB)-- (B6);
\draw (C1) -- (C2) -- (C3) -- (C4) -- (C5) -- (C6) -- cycle;
\draw [dashed] (OC) -- (C1); \draw [dashed] (OC) -- (C2); \draw [dashed] (OC) -- (C3); \draw [dashed] (OC)--(C4); \draw [dashed] (OC) --(C5); \draw [dashed] (OC)-- (C6);
\draw (A1)--(B1)--(C1); \draw [dashed] (A2)--(B2)--(C2); \draw [dashed] (A3)--(B3)--(C3); \draw (A4)--(B4)--(C4); \draw (A5)--(B5)--(C5); \draw (A6)--(B6)--(C6);
\draw [dashed] (OA)--(OB)--(OC);
\draw [<->,>=latex] (-0.5,0) -- (-0.5,-3); \draw (-0.5,-1.5) node[scale=0.8,left]{$\beta$};
\draw [<->,>=latex] (0,1.5) -- (1.59,1.5); \draw (0.8,1.5) node[scale=0.8,above]{$a$};
\draw (1.5,-5) node{$(a)$};
\end{tikzpicture}
\hspace*{10mm}
\begin{tikzpicture}[scale=0.4]
\draw (4.5,0) -- (13,0);
\draw (4.5,-1.5) -- (13,-1.5);
\draw (4.5,-3) -- (13,-3);
\draw (5,1) -- (5,-4);
\draw (6.5,1) -- (6.5,-4);
\draw (8,1) -- (8,-4);
\draw (9.5,1) -- (9.5,-4);
\draw (11,1) -- (11,-4);
\draw (12.5,1) -- (12.5,-4);
\draw (5.75,0) node[above]{$T$}; \draw (5,-0.75) node[right]{$L$};
\foreach \i in {5,6.5,8,9.5,11}{
\draw[rounded corners=3 pt,->] (\i,2) --(\i,1.7)-- (\i+1.5,1.6)--(\i+1.5,1.1) ;
}
\foreach \i in {5,6.5,8,9.5,11,12.5}{
\foreach \j in {0,-1.5,-3}{
\draw (\i,\j) node{$\bullet$};
}
}
\draw (5,-4.5) node{$1$}; \draw (8,-4.5) node{...}; \draw (12.5,-4.5) node{$N_x = 6$};
\draw (8.7,-6.3) node{$(b)$};
\end{tikzpicture}
\caption{Discretization of the twisted solid torus for parameters $N_t=3$, $N_x=6$ and a shift $N_\gamma=1$.
$(a)$: On the left hand side, we draw the cylinder with base the 2-disk of radius $a$ and with Euclidean time extension $\beta$. To get the twisted solid torus, we identify the top and the bottom of the cylinder up to the discrete shift $i\rightarrow i+N_{\gamma}$ for all lattice sites $i$, which causes the twist $\gamma = 2\pi \frac{N_\gamma}{N_x}$ in the gluing.
$(b)$: On the right hand side, we draw the boundary lattice associated to the discretization of the solid torus. We attach a spin $T$ (resp. $L$) to each horizontal (resp. vertical) edge on the boundary, and we attach to each boundary vertex $v$ an intertwiner as defined by formula \eqref{eq_coh}.}\label{fig:discretization_exemple}
\end{figure}
Here, $j_l = T\in\tfrac12 \mathbb N$ ($ L\in\tfrac12 \mathbb N$) for horizontal ``$\text{h}$'' (vertical ``$\text{v}$'') edges $l$ dual to ``timelike'' (``spacelike'') sides of the rectangular plaquettes, and the normalized spinors $\xi^v_l\in \mathbb C^2$ are
\begin{equation}
\mat{c}{1\\0}, \;\frac{1}{\sqrt 2}\mat{c}{-1\\1},\; \mat{c}{0\\1}, \text{ or } \frac{1}{\sqrt 2} \mat{c}{1\\1},
\end{equation}
for $l$ ranging from $1$ to $4$ in an anti-clockwise order around the spin-network vertex, starting from a horizontal edge.
Finally,
\begin{equation}
g_\xi = \mat{cc}{\xi^1 & -\bar\xi^2 \\ \xi^2 & \bar \xi^1}\in\mathrm{SU}(2),
\end{equation}
where the bar stands for complex conjugation.
Eq. \eqref{eq_coh} defines a coherent intertwiner, designed to encode the geometry of a flat rectangle.
The four spinors $\xi_\ell$ represent in a precise sense the sides of the rectangular plaquette along the $\hat z,\, -\hat x, \, -\hat z,$ and $\hat x$ directions respectively, while the spins $L,\,T$ correspond to their lengths, and the lift of the group elements $G_v\in\mathrm{SU}(2)$ to $\mathrm{SO}(3)$ corresponds to the orientation of the plaquette's reference frame.%
\footnote{Using the fact that at each vertex $\sum_{l=1}^4 j_l$ is an integer, one can show that $G_v\mapsto (-1) G_v $ is a symmetry of the integrand. Using this fact, one can consistently consider the $G_v$ to be elements of $\mathrm{SU}(2)/\mathbb{Z}_2\cong \mathrm{SO}(3)$, hence truly representing frame orientations. This identification will be implicitly assumed in the following treatment.}
Gauge invariance implies that all orientations are weighed equally.
Being coherent means that, in the large-spin regime $T,L\to\infty$ taken uniformly, the above state optimally minimizes the extrinsic curvature along the rectangle diagonals, compatibly with the Heisenberg uncertainty principle---see e.g. \cite{KapovichMillson1996,barbieri1998quantum}.
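To make the geometric role of the spinors in Eq. \eqref{eq_coh} concrete, a direct computation of their Bloch vectors (a standard check, not specific to this construction) gives
\[
\langle \xi |\, \vec\sigma\, | \xi \rangle \;=\; \hat z,\; -\hat x,\; -\hat z,\; \hat x
\qquad\text{for}\qquad
\xi \;=\; \mat{c}{1\\0},\; \frac{1}{\sqrt 2}\mat{c}{-1\\1},\; \mat{c}{0\\1},\; \frac{1}{\sqrt 2} \mat{c}{1\\1},
\]
respectively, where $\vec\sigma$ denotes the vector of Pauli matrices.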
The PR amplitude for such a boundary state $\Phi_\text{coh}$ can be written, after some manipulation and gauge fixing, as the following purely boundary theory
\begin{equation}
Z_\text{PR}(\Phi_\text{coh})=\int \d \varphi \left[\prod_{v} \int \d G_{v}\right] \sin^2\left(\frac\varphi2\right)\,\mathrm{e}^{\sum_{l} S_l }
\,,
\label{eq_Zcoh}
\end{equation}
where the label ``coh'' stands for ``coherent''. The angle $\varphi\in[0,2\pi[$ corresponds to the conjugacy class of the only non-trivial holonomy wrapping around the non-contractible cycle of the cylinder.
Furthermore, $S_l$ denotes the contribution to the ``action'' of the spin-network edge $l$:
\begin{equation}
S_l \hspace{-2pt}=\hspace{-3pt}
\begin{cases}
2 j_l \ln \langle \xi^{t(l)}_l| G^{-1}_{t(l)}G_{s(l)} |\xi^{s(l)}_l\rangle &\hspace{-7pt} \text{$l$ horiz.}\\
2 j_l \ln \langle \xi^{t(l)}_l| G^{-1}_{t(l)}\mathrm{e}^{-i\frac{\varphi}{2N_t} \sigma_z}G_{s(l)} |\xi^{s(l)}_l\rangle & \hspace{-7pt}\text{$l$ vert.}
\end{cases}
\end{equation}
Here, the edges $l$ are oriented to point either to the right or upwards, and $v=s(l), \,t(l)$ are therefore its source and target vertices, respectively.
Notice that the branch-cut of the logarithm plays no role.
The $G_v$ are $\mathrm{SU}(2)$ elements associated to the vertices $v$ of the boundary coming from \eqref{eq_coh}; in this formulation, they constitute the degrees of freedom of the dual field theory, replacing the magnetic indices of the statistical model formulation (the two formulations are in the end equivalent, see \cite{Riello2018} for a more thorough discussion of this point). Geometrically, the $G_v$ can be understood as providing a ``potential'' for a flat boundary connection and thus describe the embedding of the boundary into flat 3d space.
Remark now that the action $\sum_l S_l$ is complex and such that $\Re\big(\sum_l S_l\big)\leq0$.
One can think of its imaginary part as providing the actual action for the boundary theory, and its real part as providing the quantum measure.\footnote{The action is imaginary although the signature of the gravitational field is Euclidean. This is because the Ponzano--Regge model is a quantization of three dimensional gravity in the form of a topological $\mathrm{SU}(2)$ $BF$-theory. In this formulation, the physical signature of the metric is purely encoded in the gauge group.
}
For notational simplicity, the two contributions will not be explicitly distinguished.
Now, using that $j_l$ is the eigenvalue
\footnote{Other operator orderings can be used; for instance, in standard loop quantum gravity the spectrum is given by the square root of the Casimir, $\ell_{l} = \ell_\text{Pl} \sqrt{j_l(j_l+1)}$.}
in Planck units of the length operator along (the dual of) $l$:
\begin{equation}
\ell_{l} = \ell_\text{Pl}\, j_l,
\end{equation}
one sees that in the semiclassical limit $\hbar \to 0$, at fixed boundary geometry---that is at fixed $\ell_{l}$---the coherent intertwiners get peaked on the classical geometry, while the amplitude $Z_\text{PR}$ can be evaluated at 1-loop via a stationary phase approximation.
The stationary phase equations for the boundary action are $\Re(\sum_l S_l)=\text{max}_{\{G_v\}}\left(\Re(\sum_l S_l)\right)=0$, and $\delta_{G_v}\left(\sum_l S_l\right)=0$ which is satisfied if and only if
\begin{equation}
\begin{cases}
G_{s(l)} \xi_l^{s(l)}= \mathrm{e}^{\frac{i}{2} \psi^\text{h}_l} G_{t(l)} \xi_l^{t(l)} & \text{$l$ horizontal}\\
G_{s(l)} \xi_l^{s(l)}= \mathrm{e}^{\frac{i}{2} \psi^\text{v}_l} \mathrm{e}^{-\frac{i}{2}\frac{\varphi}{N_t} \sigma_z}G_{t(l)} \xi_l^{t(l)} & \text{$l$ vertical}.
\end{cases}
\end{equation}
Using standard spinfoam techniques \cite{Barrett:1998gs,Freidel:2002mj,Livine:2006it,Livine:2007vk,Barrett:2009mw}, these equations can be geometrically interpreted as describing an immersion (i.e. a local embedding) of the tiled toroidal surface in the ambient flat space $\mathbb R^3$ \cite{Dowdall:2009eg,2011arXiv1103.5644C}.
According to this interpretation, $\hat G_v =\mathrm{Ad}_{G_v}$ represents the $\mathrm{SO}(3)$ frame of the rectangular plaquette dual to $v$---which is defined up to some global rotation along the $\hat z$ axis%
\footnote{The restriction to the $\hat z$ axis is a consequence of a gauge fixing. Generally this is the axis picked by the non-trivial bulk holonomy.}%
---while $\psi_l$ represents the dihedral angle (extrinsic curvature) between two neighboring plaquettes connected by $l$ (see figure \ref{fig:psi_as_dihedral_angle}).
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.80,line/.style={<->,shorten >=0.1cm,shorten <=0.1cm}]
\coordinate(a1) at (0,0); \coordinate(b1) at (0.3,-1.9);
\coordinate(a2) at (3,0); \coordinate(b2) at (3.3,-1.9); \coordinate(d1) at (5.7,-0.3);
\coordinate(h1) at (0,2);
\coordinate(h2) at (3,2); \coordinate(d2) at (5.7,1.7);
\draw (a1)--(a2)--(h2)--(h1)--cycle;
\draw (a1)--(b1)--(b2)--(a2);
\draw (a2)--(d1)--(d2)--(h2);
\draw[dotted] (1.5,2.4)--(1.5,1);
\draw[dotted] (-0.4,1)--(1.5,1);
\draw[dotted,blue,thick] (1.5,1)--(3,1); \draw[blue] (3,1) node[scale=0.7,above right]{$l'$}; \draw[dotted, blue, thick] (3,1) -- (4.4,0.85);
\draw[dotted,thick,red] (1.5,1)--(1.5,0); \draw[red] (1.5,0) node[scale=0.7,above right]{$l$}; \draw[dotted,red,thick] (1.5,0)--(1.65,-0.95);
\draw[dotted] (1.65,-0.95)--(1.9,-2.4);
\draw[dotted] (-0.2,-0.95)--(3.6,-0.95);
\draw[dotted] (4.4,0.85)--(5.9,0.7); \draw[dotted] (4.4,2)--(4.4,-0.3);
\draw (1.5,1) node[above, scale=0.8]{$G_{s(l)} = G_{s(l')}$}; \draw (1.5,1) node{$\bullet$};
\draw (1.5,-0.95) node[below right,scale=0.8]{$G_{t(l)}$};\draw (1.65,-0.95) node{$\bullet$};
\draw (4.3,0.8) node[below right,scale=0.8]{$G_{t(l')}$};\draw (4.4,0.85) node{$\bullet$};
\path [red,line,bend left] (1.5,1) edge node[midway,below right]{$\psi_{l}^{v}$} (1.65,-0.95);
\path [blue,line,bend right] (1.5,1) edge node[midway,below left=-0.7mm]{$\psi_{l'}^{h}$} (4.4,0.85);
\end{tikzpicture}
\end{center}
\caption{Three plaquettes (faces) of the boundary discretization dual to 3 neighboring vertices. The dotted lines are the links of the dual lattice. In red, the dihedral angle $\psi_{l}^{v}$ along the vertical link $l$ relates the group elements $G_{s(l)}$ and $G_{t(l)}$ living on the two corresponding plaquettes, which are vertical neighbors. In blue, the dihedral angle $\psi_{l'}^{h}$ along the horizontal link $l'$ relates the group elements $G_{s(l')}$ and $G_{t(l')}$.}
\label{fig:psi_as_dihedral_angle}
\end{figure}
The equation resulting from the stationarity of the bulk holonomy's conjugacy class $\varphi$, $\delta_\varphi\left(\sum_l S_l\right)=0$, i.e.
\begin{equation}
\hat z\cdot \sum_{v} \hat G_v\triangleright \hat x= 0 ,
\end{equation}
breaks the symmetry between the two cycles of the torus (here implicit in the appearance of $\hat x$ rather than $\hat y$ in the equation), and implies that
\begin{equation}
\psi^\text{v}_l = 0 \quad\text{if $l$ vertical}.
\end{equation}
(An alternative solution is $\psi_l=\pi$, always for $l$ vertical; solutions where this happens are called ``folded'' solutions \cite{PRholo2} and will be ignored in this article---however, cf. the comment at the end of this section on Planck-scale values of the extrinsic curvature).
This means that the equation of motion for $\varphi$ dictates along which cycle the torus boundary can be curved extrinsically, and this happens precisely in the spatial direction, as intuitively expected.
Thus, the solutions to the equations of motion are given by rectangular cylinders whose ``spatial'' sections are (not-necessarily convex) $N_x$-sided polygons and whose ends are identified modulo a shift of $N_\gamma$ units.
Boundary conditions further require that along each ``time slice''
\begin{equation}
\sum_{x=1}^{N_x} \psi^\text{h}_{l_x} = 0 \;\text{mod}\;2\pi
\end{equation}
and
\begin{equation}
\varphi = \sum_{x=1}^{N_\gamma} \psi^\text{h}_{l_x} \;\text{mod}\;2\pi
\end{equation}
where the latter equation is understood to hold for {\it any} choice of $N_\gamma$ consecutive horizontal edges $\{l_x\}$.
Fixing $N_x$ and $N_\gamma$ to be coprime,
\begin{equation}
K\equiv\mathrm{GCD}(N_x,N_\gamma)=1,
\end{equation}
the analysis of the stationary phase equations can be pushed further.
In particular, $K=1$ forces the polygonal sections to be regular, since it implies that
\begin{equation}
\psi_l^\text{h} = \frac{2\pi}{N_x}n \;\;\; \text{for all horizontal $l$'s}
\end{equation}
for some $n\in\{0,\pm1,\dots,\pm\tfrac{N_x-1}{2}\}$, and hence
\begin{equation}
\varphi = 2\pi \frac{N_\gamma}{N_x} n = \gamma n.
\end{equation}
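As a quick consistency check (a numerical aside of ours, with arbitrarily chosen coprime lattice parameters), one can verify that the regular solution $\psi^\text{h}_l=\frac{2\pi}{N_x}n$ satisfies both boundary conditions above: the sum over a time slice vanishes modulo $2\pi$, and any $N_\gamma$ consecutive horizontal edges reproduce $\varphi=\gamma n$.
\begin{verbatim}
import numpy as np

N_x, N_gamma, n = 7, 2, 3                      # hypothetical, GCD(N_x, N_gamma) = 1
gamma = 2 * np.pi * N_gamma / N_x
psi_h = np.full(N_x, 2 * np.pi * n / N_x)      # regular solution for horizontal edges

slice_sum = np.sum(psi_h) % (2 * np.pi)        # should be 0 mod 2*pi
phi = np.sum(psi_h[:N_gamma]) % (2 * np.pi)    # should equal gamma * n mod 2*pi
print(np.isclose(slice_sum, 0) or np.isclose(slice_sum, 2 * np.pi))
print(np.isclose(phi, (gamma * n) % (2 * np.pi)))
\end{verbatim}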
For $K\neq 1$, all solutions are part of a continuous family, and the 1-loop determinant (the Hessian determinant of $\sum_l S_l$ at the stationary point) contains null directions. Geometrically, these degeneracies correspond to the fact that one can deform the polygonal section of the cylinder while staying on-shell.
Back to the case $K=1$, the integer $n$ labels the different stationary points.
We interpret it geometrically as a winding number which counts how many times the toroidal surface winds around the cylinder's axis before closing.
The existence of these solutions is due to the fact that the model relies on (compactified) holonomy rather than connection variables, and consequently---loosely speaking---the flatness condition just states that the deficit angle must be an integer multiple of $2\pi$, rather than strictly zero.
The degenerate case $n=0$ is suppressed because of the integration measure, while the case $n=1$ is of course the ``geometrical'' one.
At these stationary points, the spin-network action takes the on-shell value (recall that for vertical links, $\psi^\text{v}_l=0$)
\begin{equation}
\sum_l S_l = i N_t \sum_{x=1}^{N_x} T \psi^\text{h}_{l_x} = 2\pi i n T N_t.
\end{equation}
Formally, this action has the structure of a discretized Gibbons--Hawking--York term on the rectangular cylinder, as desired from what should correspond to the on-shell value of the Einstein--Hilbert action on a flat spacetime and in agreement with Eq. \eqref{eq_Z}. However, once exponentiated, $\mathrm{e}^{\sum_l S_l}$ reduces to a sign. This is a consequence of the discreteness of the spin $T\in\frac12\mathbb N$.
If $K=1$, moreover, the 1-loop determinant is non-degenerate and can be readily analyzed.
Using a Fourier transform adapted to the presence of the twist, one can diagonalize the Hessian, while an exact resummation formula gets rid of the explicit energy dependence.
The ensuing result for the determinant, combined with the on-shell value of the $\varphi$ measure, is
\begin{equation}
(\text{1-loop det}) \times \sin^2\frac\varphi2 = \mathcal A(n)\times \mathcal D(\gamma,n),
\end{equation}
where the $\gamma$ dependence is fully contained in
\begin{equation}
\mathcal D(\gamma, n) = (2-2\cos\gamma n ) \prod_{p=1}^{\frac{N_x-1}{2}}\frac{1}{2-2\cos\gamma p}.
\label{eqnDp}
\end{equation}
When focusing on the geometrical background geometry corresponding to $n=1$, this reproduces precisely the sought result of Eq. \eqref{eq_Z}.
For this to work, it is crucial to notice the fundamental role played by the integration over the non-trivial holonomy wrapping around the non-trivial cycle of the torus, which contributes precisely the factor $2-2\cos \gamma n$ above.
Interestingly, the product over $p$ in the equation above can be explicitly computed once we remember that the angle $\gamma$ comes from the shift $N_{\gamma}$ in the gluing of the lattice:
\begin{equation}
\prod_{p=1}^{\frac{N_x-1}{2}}\frac{1}{2-2\cos\,\frac{2\pi N_{\gamma}p}{N_{x}} }
=
N_{x}
\,
\end{equation}
(this formula holds as long as $N_{\gamma}$ and $N_{x}$ are coprime, i.e. $K=1$). This gives a surprisingly simple result for the amplitude for a given winding number $n$,
\begin{equation}
\mathcal D(\gamma, n) = 2N_{x}\,(1-\cos\gamma n) \,.
\label{eqnD}
\end{equation}
Notice that, despite this simplification, the dependence on the twist $\gamma$ does not completely disappear and we keep a non-trivial result.
A first remark is that it is very interesting that our lattice computation leads to such a straightforward finite truncation of the BMS$_3$ character formula for the partition function \eqref{eq_Z}, with the product over modes $p$ bounded by the lattice size $N_{x}$. Moreover, the simplification of this product is a remarkable coincidence, which likely points towards a powerful underlying symmetry. This point needs to be investigated further.
A second remark is that the equation \eqref{eqnDp} is actually more useful than the simplified expression \eqref{eqnD} in order to understand the physical content of the theory. Indeed, it provides the mode decomposition of the theory, in terms of the Fourier modes $p$. Although the partition function $Z$ might simplify, what matters is that the various Fourier modes have different weights with a specific $\gamma$-dependence, i.e. $(2-2\cos\gamma p)^{-1}$, which is probed e.g. by the correlations of the boundary theory.
In this sense and to this extent, this calculation perfectly agrees with the $\Lambda\to 0$ limit of the AdS case given by eq. \eqref{eq_Z} when focusing on the geometrical background corresponding to the classical solution with winding number $n=1$.
Finally, the amplitude factor ${\mathcal A}(n)$ carries the dependence on the winding number $n$. It is a rather intricate function of the spins $L$ and $T$, the lattice sizes $N_{x}$ and $N_{t}$ and of course of the label $n$. It is nevertheless possible to considerably simplify the formula derived in \cite{PRholo2} (by explicitly performing the product over Fourier modes $p$). The result is
\begin{eqnarray}
{\mathcal A}(n)
&=&
\left[\frac{iL - (L+T)\tan\frac{\psi n}{2} }{i L - (L +4TN_tN_{x}) \tan\frac{\psi n}{2}}\right]^{\f12}
\\
&&\times
\,
\left[
\frac{2(2\pi)^{3}e^{in\psi}}{LT(L+T)}
\right]^{\frac{N_{x}N_{t}}2}
\Big[2T_{N_{x}}(a_{n})-2\Big]^{-\frac{N_{t}}2}
\,,
\nonumber
\end{eqnarray}
where $\psi$ is the dihedral angle unit for the lattice and $a_{n}$ is a simple complex trigonometric function,
\begin{equation}
\psi=\frac{2\pi}{N_{x}}
\,,\quad
a_{n}=\cos n\psi+\frac{iL}{T+L} \sin n\psi
\,.
\end{equation}
Also, the notation $T_{N}(a)$ stands for the Chebyshev polynomial (of the first kind) of order $N$.%
\footnote{
Recall, $T_{N}(a)=\cosh (N\arcosh a)$ and\\
$\Big(2T_{N}(a)-2\Big)^{\f12}
=
\left(a+\sqrt{a^{2}-1}\right)^{\frac N2}
-\left(a+\sqrt{a^{2}-1}\right)^{-\frac N2}
\,.$
}
From these expressions one can first check the reality property
\begin{equation}
\mathcal A(-n) = \overline{\mathcal A}(n)
\,.
\end{equation}
This is consistent with a Hamilton--Jacobi functional, which cannot distinguish a momentum from its opposite, and more specifically with a first-order Einstein--Cartan formulation of General Relativity, of which the Ponzano-Regge model is a quantization.
Moreover, in the large $N_x,\,N_t$ limit (in practice already when they are larger than 5), ${\cal A}(n)$ is overwhelmingly peaked in modulus at the minimal and maximal values of $n$, that is at $n=1$ and $n=\lfloor\frac{N_x-1}{2}\rfloor$, as illustrated by the plots in figure \ref{fig:ALS_plot}.
This behavior is entirely due to the factor with the Chebyshev polynomial, $\big(2T_{N_{x}}(a_{n})-2\big)^{-\frac{N_{t}}2}$.
We can actually plot this at fixed $N_{x},N_{t}$ as a function of the continuous variable
$x=n\psi\in[0,\pi]$,
as in figure \ref{fig:chebplot}. In order to better understand the asymptotic behavior at large $N_{x}$, let us focus on this factor. The function $\big(2T_{N_{x}}(a_{n})-2\big)$ is well-behaved, both in modulus and phase, as one can see in figure \ref{fig:chebmodargplot}. The key point is that the first winding number $n=1$ corresponds to the angle $x=\psi=\frac{2\pi}{N_{x}}$, which goes to $x\rightarrow 0$ as $N_{x}$ grows large, but not fast enough for the asymptotics of $\big(2T_{N_{x}}(a_{1})-2\big)$ to reduce to $\big(2T_{N_{x}}(a(0))-2\big)=2T_{N_{x}}(1)-2=0$. Indeed the two appearances of the lattice size $N_{x}$ conspire to give a non-trivial asymptotics:
\begin{equation}
\big[2T_{N_{x}}(a_{n})-2\big]^{\f12}\sim e^{(1+i)\sqrt{\frac{\pi\lambda N_{x}n}{2}}}
\,,\quad\forall n\ll N_{x}
\,.
\end{equation}
This also gives the behavior for large winding numbers $n\lesssim\lfloor\frac{N_x-1}{2}\rfloor$ since the function $\big(2T_{N_{x}}(a(x))-2\big)$ is (almost) symmetric
\footnote{The symmetry of $\big(2T_{N_{x}}(a(x))-2\big)$ depends on the sign of $(-1)^{N_{x}}$. Under the reflection $x\rightarrow \pi-x$, it changes to its complex conjugate for even $N_{x}$ while it further gets an extra minus sign for odd $N_{x}$.
This means that the modulus $\big|2T_{N_{x}}(a_{n})-2\big|$ is exactly symmetric under $n\leftrightarrow \frac{N_x}{2}-n$ for even $N_{x}$ while it is slightly skewed under the exchange $n\leftrightarrow \frac{N_x-1}{2}-n$ for odd $N_{x}$ due to the $\f12$ shift.
}
under reflections $x\leftrightarrow \pi-x$. This explains the peakedness of the amplitude pre-factor ${\cal A}(n)$ on the two limiting winding numbers $n=1$ and $n=\lfloor\frac{N_x-1}{2}\rfloor$.
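This peakedness can also be checked directly from the explicit formulas above. The following minimal numerical sketch (ours; parameters chosen arbitrarily) evaluates the modulus of the Chebyshev factor $\big(2T_{N_{x}}(a_{n})-2\big)^{-\frac{N_{t}}2}$, computing $T_{N}(a)=\cosh(N\,\mathrm{arcosh}\,a)$ through the complex arccosh, and shows that it is largest at the extreme winding numbers and many orders of magnitude smaller in between.
\begin{verbatim}
import cmath

def cheb_factor(n, N_x, N_t, L, T):
    # modulus of [2 T_{N_x}(a_n) - 2]^(-N_t/2), with
    # a_n = cos(n*psi) + i L/(T+L) sin(n*psi) and psi = 2*pi/N_x
    psi = 2 * cmath.pi / N_x
    a_n = cmath.cos(n * psi) + 1j * L / (T + L) * cmath.sin(n * psi)
    T_Nx = cmath.cosh(N_x * cmath.acosh(a_n))   # Chebyshev polynomial of the first kind
    return abs(2 * T_Nx - 2) ** (-N_t / 2)

N_x, N_t, L, T = 41, 3, 8, 8                    # hypothetical lattice parameters
for n in (1, (N_x - 1) // 4, (N_x - 1) // 2):   # minimal, intermediate, maximal winding
    print(n, cheb_factor(n, N_x, N_t, L, T))
# the intermediate value is smaller by many orders of magnitude
\end{verbatim}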
\begin{figure}[t]
\begin{center}
\includegraphics[height=5cm]{ALS_plot.pdf}
\end{center}
\caption{Plots of $\frac{\log(|\mathcal A(1)|)}{\log(|\mathcal A(n)|)}$ for $n$ running from $1$ to $\frac{N_x-1}{2}$ with the parameters $N_t=20$, $L=8$, $T=8$. The four plots correspond to $N_x=50,100,200,400$ (red, blue, green, orange). The $x$-axis corresponds to $\frac{2n}{N_x-1}$, which runs from 0 to 1.}
\label{fig:ALS_plot}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=4cm]{chebplot-Nx.pdf}
\end{center}
\caption{
Plot of ${\frac{-N_t}2}\log\big|2T_{N_{x}}(a_{n})-2\big|$ in terms of the continuous angle variable $x=n\psi\in\left[\frac{2\pi}{N_x},\pi-\frac{\pi}{N_x}\right]\subset[0,\pi]$ for the odd lattice size $N_{x}=41$ and $N_{t}=1$ and for spins $T=5$ and $L=5,10,20,100$. As $L$ increases and thus the ratio $\frac TL$ goes to 0, the curves get more and more curved and approach the limit function (lowest curve).
}
\label{fig:chebplot}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=25mm]{cheb50LT1mod.pdf}
\hspace*{2mm}
\includegraphics[height=25mm]{cheb50LT1arg.pdf}
\end{center}
\caption{Plots of the modulus (on the left) and argument (on the right) of $\big(2T_{N_{x}}(a_{n})-2\big)$ for $N_{x}=100$ with $n$ running from 1 to 50, for spin parameters $L=T=1$.}
\label{fig:chebmodargplot}
\end{figure}
The intuition behind the peakedness at the minimal and maximal winding numbers, $n=1$ and $n=\lfloor\frac{N_x-1}{2}\rfloor$, is that
these solutions reconstruct locally almost-flat geometries (although one can be visualized as being folded onto itself), and at flat geometries the Hessian degenerates.
This mechanism is analogous to ``Ditt-invariance'' \cite{Dittrich:2012jq,Rovelli:2011fk}.
This peakedness can be used to argue that the slightest (semiclassical) knowledge of the extrinsic curvature, such as the fact that it is not of Planckian magnitude (as it is in the maximal-$n$ case), collapses the result onto the desired classical solution at $n=1$.%
\footnote{The very same physical argument allows one to discard the ``folded'' solutions mentioned above.}
To summarize, the role of the amplitude pre-factor ${\cal A}(n)$ is to select the first winding number $n=1$, which corresponds to the semi-classical embedding of our toric surface in flat ${\mathbb R}^{3}$ space and allows one to reproduce the expected semi-classical partition function \eqref{eq_Z} for 3d quantum gravity as a function of the twist angle $\gamma$.
\section{Outlook\label{sec_outlook}}
In this note, we have proposed a concrete framework to analyze quasi-local holographic dualities in three dimensional quantum gravity.
In such a framework, both the bulk quantum geometry---described by the Ponzano--Regge model---and the boundary theories are readily accessible, and the bulk-boundary correspondence can be readily read from the choice of boundary conditions.
After highlighting the general character of the correspondence and the general questions one hopes to address in its context, we have reviewed the main results of \cite{PRholo1,PRholo2}.
In this outlook, we wish to discuss more specific questions which arise when studying the holographic setup that is here advocated for.
\smallskip
{\bf Boundary phase transitions} The boundary statistical theory might undergo phase transitions for specific choices of the parameters. The question is what this means from the geometrical perspective of the bulk theory. The question is particularly cogent in the case of second order phase transitions. First steps to answer this question have been taken in \cite{Dittrich:2013jxa,Bonzom:2015ova}. There it was shown that---for an infinite superposition of trivalent spin-networks, which turns out to be dual to the Ising model---criticality corresponds to peakedness around geometrical boundary conditions (modulo a global scale).
\smallskip
{\bf Asymptotic infinity} The framework presented here is well-adapted to the study of quasi-local boundaries. Which theory arises when pushing these boundaries to infinity is an independent question. In order to reach infinity, the first guess is that one needs to consider boundaries of infinite circumference. This can a priori be done in two ways: by considering an infinite number of building blocks, possibly Planck-sized (e.g. with $j=1/2$), or by rescaling a given boundary configuration, scaling its spins to infinity. The first procedure resembles a continuum limit of the statistical model, provided one at the same time scales down the lattice separation (and rescales the relevant boundary fields). The continuum limit, however, is usually taken by keeping the total physical size of the system constant.
The second procedure, on the other hand, is related to the usual semi-classical limit of spinfoams, which leads to a classical, albeit discrete, theory of gravity. It is possible that the scaling to asymptotic infinity is a mixture of the two procedures above.
Given the relationship between continuum limits and second order phase transitions,
identifying the correct notion of asymptotic limit might well shed new light on phase transitions in spinfoam models.
\smallskip
{\bf Cosmological constant} Introducing a cosmological constant $\Lambda$ seems a natural goal, especially in relation to the (A)dS/CFT correspondence. Euclidean three-dimensional models with a cosmological constant are known and some of their properties have been widely studied \cite{Turaev:1992hq, TaylorWoodward2005}. They correspond to real ($\Lambda<0$) and root-of-unity ($\Lambda>0$) $q$-deformations of the Ponzano--Regge model used in this note and in \cite{PRholo1,PRholo2}. In particular, the model with $\Lambda>0$ coincides with the original Turaev--Viro topological field theory for $U_q({\mathfrak{su}}(2))$, $q$ root of unity. This model is finite and hence mathematically well defined from the outset, i.e. without the need of gauge fixing (which was implicit in the present treatment, see \cite{PRholo1}). There is therefore little difficulty in generalizing the present treatment to those cases. Difficulties, however, arise when one tries to compare the ensuing geometries to the (A)dS/CFT or even AdS/MERA setups (see also \cite{DittrichDonnellyRiello}).
The main issue is that the discrete quantum
geometry encoded in the Turaev--Viro weights is that of homogeneously curved building blocks.\footnote{Hence zero deficit angles in the corresponding Ponzano--Regge-like model correspond to absence of ``extra'' curvature defects, and thus to a homogeneously curved manifold. See \cite{MizoguchiTada1992,TaylorWoodward2005,Bahr:2009qc,Bonzom:2014bua,Livine:2016vhl} for a treatment in three dimensions. Generalization to four dimensions also exist \cite{Haggard:2015ima,Dittrich2017}.}
While this fact is well adapted to a MERA-like triangulation of the bulk manifold (in the AdS case), it is much less adapted to describe the common conformally-flat boundary at AdS time-like infinity.
Fortunately, this seems to be more a nuisance than a fundamental problem.
Matching the boundary theories obtained in AdS/CFT and in the present setup seems to be a more cogent problem, which might however be inextricable from the previous problem on how to encode asymptotic infinity in the present setup.
\smallskip
{\bf BMS$_\mathbf{3}$} To conclude, let us mention one more issue. Although the calculation of the partition function nicely matches previous calculations and in particular the formal expression of BMS$_3$ characters of Oblak \cite{Oblak:2015sea}, it is still unclear whether and in which sense BMS$_3$ emerges (in the continuum limit) as a symmetry of the boundary theory discussed above. Of course, this is a crucial question to answer in order to claim full understanding of the setup reviewed in the previous section.
\acknowledgements{
\vspace{-.5em}
This work is supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
Plots were performed using Mathematica 10.
}
\bibliographystyle{bibstyle_aldo}
\section{Introduction}
The fifth generation (5G) cellular network is emerging to satisfy the unprecedented growth in data traffic and the number of connected devices. A key performance indicator (KPI) of 5G is the ability to provide a smooth quality of user experience at cell edges, which requires data rates above 1 Gbps. Recently, orthogonal frequency division multiplexing (OFDM) index modulation (IM) \cite{Alhiga09} was proposed as one of the 5G enabling technologies, as it offers advantages such as an increase in per-subcarrier transmit power and a reduction in inter-cell interference (ICI). It was reported in \cite{Hong14} and \cite{Seol09} that the ICI of legacy OFDM networks follows a Gaussian distribution, which causes most of the throughput degradation. However, the ICI of OFDM-IM is not Gaussian distributed. Therefore, OFDM-IM provides room to optimize the data rate at cell edges. The philosophy behind OFDM-IM is that only one of a number of OFDM subcarriers is active when transmitting symbols. In addition to the information carried by the transmitted symbol, the subcarrier index can also be used to convey information. The idea of conveying information via indexes was first proposed in the spatial domain, i.e., spatial modulation (SM) \cite{Mesleh08}. A variant of OFDM-IM was proposed in \cite{Tsonev11}, where bits were divided into blocks of bits before the OFDM-IM modulator. The authors of \cite{Basar13} studied practical implementation issues of OFDM-IM such as the maximum likelihood (ML) detector, the log-likelihood ratio (LLR) detector, and the impact of channel estimation errors. A low complexity ML detector for OFDM with in-phase/quadrature IM was discussed in \cite{Zheng15}, which was implemented with a priori knowledge of the noise variance. In \cite{Xiao14}, additional interleaving was introduced to subcarriers in correlated channels, in order to provide extra diversity gain to OFDM-IM. Later, the combination of OFDM-IM and SM was proposed in \cite{Datta15}, where the symbol domain, subcarrier domain, and spatial domain formed a three dimensional signal space. Then, the generalized OFDM-IM was introduced in \cite{Fan15}, where multiple subcarriers were active. The information conveyed by the index relies on different combinations of active subcarrier indexes. When the transmitted symbols of OFDM-IM are quadrature-amplitude modulation (QAM) symbols, this type of OFDM-IM is named frequency QAM (FQAM) \cite{Hong14}. Studies of OFDM-IM in \cite{Alhiga09},\cite{Hong14}, \cite{Basar13}--\hspace{-0.01cm}\cite{Datta15} focused on bit error rate (BER) and frame error rate (FER) performance.
Achievable rates of OFDM-IM with different settings were reported in \cite{Wen15}--\hspace{-0.001cm}\cite{Ishikawa16}. OFDM-IM with finite constellation input was discussed in \cite{Wen15}, where a closed-form lower bound on the achievable rate was derived. The application of OFDM-IM to underwater acoustic communications as well as its achievable rate with finite constellation input were investigated in \cite{Wen16}. Although the achievable rate of OFDM-IM with Gaussian input was investigated in \cite{Ishikawa16}, no closed-form expression was provided.
Also, the performance of OFDM-IM in multi-cell scenarios has been less studied. System level simulations (SLSs) over a typical hexagonal multi-cell network can provide certain insights into the multi-cell performance of wireless networks with OFDM-IM. However, there are two drawbacks of SLSs. First, SLSs are time consuming. Second, the hexagonal multi-cell network layout rarely holds in realistic situations, where base stations (BSs) are approximately distributed in a uniform manner. Therefore, it is beneficial to have analytic results on the performance of multi-cell scenarios. This can be investigated via a mathematical tool called stochastic geometry \cite{Baccelli_v1}--\hspace{-0.001cm}\cite{Baccelli15}, where BSs are assumed to be distributed following a Poisson Point Process (PPP). In this case, the cumulative distribution function (CDF) of the signal to interference plus noise ratio (SINR) was expressed in closed form in different scenarios, such as ad hoc networks \cite{Baccelli06}, cellular networks \cite{Andrews11}, and cooperative networks \cite{Baccelli15}.
The authors of \cite{Hong14} studied the statistics of the ICI in multi-cell OFDM-IM with QAM inputs. The generalized Gaussian distribution was used to approximate the distribution of noise plus ICI. However, the exact distribution of noise plus ICI of multi-cell OFDM-IM with QAM inputs has been missing in the literature. In this paper, this exact distribution is derived.
The contributions of this paper are listed as below:
\begin{enumerate}
\item The subcarrier index detection error probability of single-cell OFDM-IM with Gaussian input is analyzed. Then, a closed-form expression for the single-cell achievable rate of OFDM-IM with Gaussian input is derived.
\item The CDF of SINR of multi-cell OFDM-IM is derived using stochastic geometry.
\item The distribution of the ICI of multi-cell OFDM-IM with QAM input is derived, showing that it follows a mixture of Gaussians (MoG) distribution. In addition, the parameters of the probability density function (PDF) of the ICI are computed using a simplified expectation maximization (EM) algorithm. Then, an upper bound on the sum rate of multi-cell OFDM-IM with QAM input is studied.
\end{enumerate}
The rest of this paper is structured as follows. Section~\ref{sec_System_Model} gives a general description of the system model. The achievable rate of single-cell OFDM-IM with Gaussian input is investigated in Section~\ref{sec_Single_Link_IM}. Section \ref{sec_Multi_Cell_IM} studies the CDF of the SINR of multi-cell OFDM-IM with stochastic geometry. Also, the distribution of the ICI and its parameters are analyzed. Simulation results and analysis are presented in Section \ref{sec_results_analysis}. Conclusions are drawn in Section~\ref{sec_conclusion_section}.
\section{System Model} \label{sec_System_Model}
Let us consider a downlink multi-cell network using OFDM-IM, where a target user equipment (UE) is located at the origin. BSs are distributed as a homogeneous PPP with density $\lambda$.
The set of all BSs is denoted as $\mathcal{S}$ and the set of BSs interfering the target UE is denoted as $\mathcal{S}'$. Let $\alpha$ be the pathloss coefficient. Assume that the OFDM-IM system has $N_\mathrm{F}$ subcarriers, then $\log_2 N_\mathrm{F}$ bits are conveyed by subcarrier indexes.
$N_\mathrm{B}$ is the number of BSs.
Let $F$ be the subcarrier index, which is a uniformly distributed random variable defined on $\left\lbrace 1, 2, \cdots, {N_\mathrm{F}}\right\rbrace$ and let $\mathcal{H}_\xi=\left\lbrace h_{\xi,1}, h_{\xi,2}, \cdots, h_{{\xi,N_{\mathrm{F}}}} \right\rbrace$ be the set of all channel coefficients from the $\xi$th BS to the target user on these subcarriers. The channel coefficients $h_{\xi,k}$ ($1\leqslant k\leqslant N_\mathrm{F}$) are independently and identically distributed (i.i.d.) zero mean unit variance complex Gaussian random variables. In practice, this i.i.d. channel assumption can be achieved by introducing interleaving between subcarriers as in \cite{Xiao14}. The interleaving can be done via a pseudo random sequence, which is shared by the BS and UE, such that the BS and UE can map or de--map between the original subcarrier indices and the interleaved subcarrier indices. Throughout the paper, we assume that the target UE has perfect knowledge of the channel coefficients from the associated BS but no knowledge from other BSs. Let the $\xi$th BS be the associated BS of the target UE. The distance between the $\xi$th BS and the target UE is $d$ and the distance between the $l$th BS and the target UE is $d_l$ ($\forall l\neq \xi$). Then, the received signal $Y$ of the target UE can be expressed as
\begin{align}
Y=\sqrt{T_\xi}H_\xi X_\xi+N+I,
\label{equ_rx_signal}
\end{align}
where $N$ is a zero mean complex Gaussian noise with variance $\sigma^2_\mathrm{N}$, $H_\xi$ is a uniformly distributed random variable defined on $\mathcal{H}_\xi$, $X_\xi$ is the transmitted symbol from the $\xi$th BS to the target user, $T_\xi$ is the average received power (including transmit power, path loss, and shadow fading) from the $\xi$th BS to the target UE, and $I$ is the interference from other BSs. Thus, the interference $I$ can be written as
\begin{align}
I=\sum\limits_{l\neq \xi}\sqrt{T_l}H_lX_l\zeta_l,
\label{equ_interference}
\end{align}
where $\zeta_l=1$ if the $l$th BS is transmitting on the same subcarrier as the $\xi$th BS and $\zeta_l=0$ if the $l$th BS is not transmitting on the same subcarrier as the $\xi$th BS. This is due to a basic property of OFDM-IM, which activates only one subcarrier in each transmission period.
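To make the system model concrete, the following sketch (ours; a single-snapshot simulation with illustrative parameter values, not the paper's simulation code) draws interfering BSs from a PPP on a disc around the target UE, lets each BS activate one of the $N_\mathrm{F}$ subcarriers uniformly at random, and assembles the received signal of \eqref{equ_rx_signal} with the interference of \eqref{equ_interference}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters
lam, radius = 1e-4, 2000.0                 # BS density [1/m^2], simulation disc radius [m]
N_F, alpha, P_T, sigma2_N = 4, 3.0, 40.0, 7.5e-11
d_serv = 50.0                              # distance to the serving (xi-th) BS [m]
qam = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)   # unit-power 4QAM alphabet

def cn(n=None):
    # i.i.d. CN(0,1) samples
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# interfering BSs as a PPP, each picking one subcarrier uniformly
n_bs = rng.poisson(lam * np.pi * radius**2)
dist = radius * np.sqrt(rng.uniform(size=n_bs))
active = rng.integers(N_F, size=n_bs)
serving_sc = rng.integers(N_F)

# desired signal, noise and interference on the serving subcarrier
Y_sig = np.sqrt(P_T * d_serv**-alpha) * cn() * rng.choice(qam)
noise = np.sqrt(sigma2_N) * cn()
mask = active == serving_sc                # zeta_l = 1 only for co-subcarrier BSs
I = np.sum(np.sqrt(P_T * dist[mask]**-alpha) * cn(mask.sum())
           * rng.choice(qam, size=mask.sum()))
Y = Y_sig + noise + I
print(abs(Y_sig)**2 / (sigma2_N + abs(I)**2))   # one instantaneous SINR sample
\end{verbatim}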
\section{Single Cell OFDM-IM} \label{sec_Single_Link_IM}
Achievable rates of OFDM-IM with QAM input and other finite constellation inputs can be found in \cite{Hong14}, \cite{Wen15}--\hspace{-0.001cm}\cite{Ishikawa16}. However, a closed-form expression for the achievable rate of single-cell OFDM-IM with Gaussian input is missing in the literature. Therefore, to fill this gap, the single-cell achievable rate of OFDM-IM with Gaussian input is analyzed in this section. Since a single cell is considered, the subscript $\xi$ representing the $\xi$th BS is dropped for brevity and the interference term $I$ equals $0$. Information of OFDM-IM is conveyed by the symbol $X$ and the frequency index $F$. The achievable rate $r$ in this paper is defined by the average maximum achievable mutual information between the information source $(X,F)$ and the destination $Y$, which can be characterized as
\begin{align}
r&=\mathrm{E}\left[\max I(X,F;Y) \right]\nonumber\\
&\approx \mathrm{E}\left[\max I(X;Y|F) \right]+\mathrm{E}\left[\max I(F;Y) \right]=r_1+r_2.
\end{align}
Since the channel coefficient $H$ is determined once the subcarrier index $F$ is determined, the achievable rate of the single cell OFDM-IM generated by the symbol in terms of signal to noise ratio (SNR) $\rho$ can be calculated using the achievable rate of the Rayleigh fading channel, i.e.,
\begin{align}
r_1(\rho)&=\mathrm{E}\left[\max I(X;Y|F) \right]=\mathrm{E}\left[\max I(X;Y|H) \right]\nonumber\\
&=\int\limits_0^{\infty}\log_2\left(1+z\rho \right)e^{-z}dz=-\frac{1}{\mathrm{ln}2}\mathrm{Ei}\left(-\frac{1}{\rho} \right)\exp(\frac{1}{\rho}),
\label{equ_r1}
\end{align}
where $\mathrm{Ei}\left(\cdot \right)$ is the exponential integral \cite{TableIntegral} defined by $\mathrm{Ei}\left(-z \right)=-\int^{\infty}_{z}\frac{e^{-t}}{t}dt.$ The SNR $\rho$ can easily be controlled by adjusting the BS transmit power to compensate for path loss and shadow fading in a single cell scenario.
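As a quick numerical cross-check (ours, not from the paper), the closed-form expression in \eqref{equ_r1} can be compared against a direct Monte Carlo average of $\log_2(1+z\rho)$ over an exponentially distributed fading power $z$.
\begin{verbatim}
import numpy as np
from scipy.special import expi

def r1_closed_form(rho):
    # -Ei(-1/rho) * exp(1/rho) / ln 2, cf. the expression above
    return -expi(-1.0 / rho) * np.exp(1.0 / rho) / np.log(2.0)

rho = 10.0                                    # SNR on a linear scale
z = np.random.default_rng(1).exponential(size=1_000_000)
print(r1_closed_form(rho), np.mean(np.log2(1.0 + rho * z)))   # both close to 2.9 bit
\end{verbatim}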
The calculation of $r_2$ is equivalent to determining how much information is retrieved from $Y$ when the information is conveyed on a certain subcarrier $F$. The information retrieving process is not perfect because of the existence of noise. As a result, the subcarrier index may be incorrectly detected. Let $\hat{F}$ be the detected subcarrier index and $P_{F\neq \hat{F}}$ be the subcarrier index detection error probability and denote $P_{F=k \cap \hat{F}=l}$ as the probability that the $k$th subcarrier is used whereas the $l$th subcarrier is detected. With the assumption that the channel coefficients on subcarriers are i.i.d., it can be observed that $P_{F=k \cap \hat{F}=l}=\frac{P_{F\neq \hat{F}}}{N_\mathrm{F}-1}$ ($\forall l\neq k$).
\begin{figure}
\centering\includegraphics[width=4in]{Symmetric_channel.eps}
\centering\caption{Diagram of an $N_\mathrm{F}$-ary symmetric channel.}
\label{fig_Symmetric_channel}
\end{figure}
Hence, the channel between $F$ and $Y$ can be abstracted as an $N_\mathrm{F}$-ary symmetric channel as depicted in Fig. \ref{fig_Symmetric_channel}. The achievable rate $r_2(\rho)$ of an $N_\mathrm{F}$-ary symmetric channel depends on the subcarrier index detection error probability and can be expressed as \cite{Weidmann12}
\begin{align}
r_2(\rho)=\log_2N_{\mathrm{F}}-H_b\left[P_{F\neq \hat{F}}(\rho) \right]-P_{F\neq \hat{F}}(\rho)\log_2(N_{\mathrm{F}}-1),
\label{equ_r2}
\end{align}
where $H_b[\mu]$ is the binary entropy function defined by $H_b[\mu]=-\mu\log_2\mu-(1-\mu)\log_2(1-\mu)$. In order to calculate $r_2(\rho)$, we need to compute $P_{F\neq \hat{F}}(\rho)$ with the following lemma.
\begin{lemma}{The subcarrier index detection error probability can be presented as}
\label{Lemma1}
\begin{align}
P_{F\neq \hat{F}}(\rho)=1-\sqrt{\frac{\pi}{2}}\sum\limits_{k=0}^{N_{\mathrm{F}}-1}C^k_{N_{\mathrm{F}}-1}(-1)^{N_{\mathrm{F}}-k-1}\frac{\left[1-\Phi\left(\sqrt{\frac{N_{\mathrm{F}}-k}{2\rho (N_{\mathrm{F}}-k-1)}} \right) \right]}{\sqrt{\rho(N_{\mathrm{F}}-k-1)(N_{\mathrm{F}}-k)}}\exp\left( \frac{N_{\mathrm{F}}-k}{2\rho (N_{\mathrm{F}}-k-1)}\right),
\label{equ_P}
\end{align}
where $C^k_{N_{\mathrm{F}}-1}$ is the binomial coefficient defined by $C^k_{N_{\mathrm{F}}-1}=\frac{(N_{\mathrm{F}}-1)!}{k!(N_{\mathrm{F}}-k-1)!}$ and $\Phi(z)$ is the error function defined by $\Phi(z)=\frac{2}{\sqrt{\pi}}\int\limits_0^z e^{-t^2}dt
$.
\end{lemma}
\begin{proof}
Conditioning on $X$, errors occur in the detection of the subcarrier index when the received signal power on the intended subcarrier is less than that on any of the remaining subcarriers. Due to the symmetry, it is sufficient to calculate the subcarrier index detection error probability assuming that the first subcarrier is used. Let $Y_k$ ($1\leqslant k \leqslant N_\mathrm{F}$) be the received signal on the $k$th subcarrier. Then, $Y_1=HX+N$ and $Y_2,Y_3,...,Y_{N_\mathrm{F}}$ have the same distribution as $N$. With the condition $X=x$, $|Y_1|^2$ is an exponential random variable with mean $x^2+\sigma_{\mathrm{N}}^2$ and $|Y_{k\neq 1}|^2$ are exponential random variables with mean $\sigma_{\mathrm{N}}^2$. Let $G_{|Y_2|^2}$ be the CDF of $|Y_2|^2$. Then, $P_{F\neq \hat{F}|X=x}(\rho)$ is calculated as
\begin{align}
P_{F\neq \hat{F}|X=x}(\rho)&=
\Pr\left\lbrace |Y_1|^2<\max\left\lbrace |Y_2|^2,\cdots,|Y_{N_{\mathrm{F}}}|^2 \right\rbrace|X=x \right\rbrace\nonumber\\
&=1-\int\limits_0^\infty p_{|Y_1|^2}(z)G_{|Y_2|^2}^{N_{\mathrm{F}}-1}(z)dz\nonumber\\
&=1-\int\limits_0^\infty\frac{\exp\left\lbrace-\frac{z}{x^2+\sigma_{\mathrm{N}}^2}\right\rbrace}{x^2+\sigma^2_{\mathrm{N}}} \sum\limits_{k=0}^{N_{\mathrm{F}}-1}\begin{pmatrix}
N_{\mathrm{F}}-1
\\
k
\end{pmatrix}(-1)^{N_{\mathrm{F}}-k-1} e^{-\frac{1}{\sigma^2_{\mathrm{N}}}(N_{\mathrm{F}}-k-1)z} dz\nonumber\\
&=1-\sum\limits_{k=0}^{N_{\mathrm{F}}-1}\begin{pmatrix}
N_{\mathrm{F}}-1
\\
k
\end{pmatrix}(-1)^{N_{\mathrm{F}}-k-1}\int\limits_0^\infty\frac{\exp\left\lbrace-\frac{z}{x^2+\sigma_{\mathrm{N}}^2}-\frac{z}{\sigma^2_{\mathrm{N}}}(N_{\mathrm{F}}-k-1)\right\rbrace}{x^2+\sigma^2_{\mathrm{N}}} dz\nonumber\\
&=1-\sum\limits_{k=0}^{N_{\mathrm{F}}-1}\begin{pmatrix}
N_{\mathrm{F}}-1
\\
k
\end{pmatrix}(-1)^{N_{\mathrm{F}}-k-1}\frac{1}{\frac{x^2}{\sigma^2_{\mathrm{N}}}(N_{\mathrm{F}}-k-1)+N_{\mathrm{F}}-k}\nonumber\\
&=1-\sum\limits_{k=0}^{N_{\mathrm{F}}-1}C^k_{N_{\mathrm{F}}-1}(-1)^{N_{\mathrm{F}}-k-1}\frac{1}{x^2\rho(N_{\mathrm{F}}-k-1)+N_{\mathrm{F}}-k}.
\end{align}
Since $x$ follows a Gaussian distribution with zero mean and variance one, $x^2$ follows a chi-squared distribution with one degree of freedom, i.e., $p_{X^2}(z)=\frac{1}{\sqrt{2\pi z}}e^{-\frac{z}{2}}.$
Averaging over the distribution of $x$, $P_{F\neq \hat{F}}(\rho)$ can be obtained as
\begin{align}
P_{F\neq \hat{F}}(\rho)&=\int\limits_{-\infty}^\infty P_{F\neq \hat{F}|X=x}(\rho)p_X(x)dx \nonumber\\
&=\int\limits_{0}^\infty P_{F\neq \hat{F}|X=z}(\rho)p_{X^2}(z)dz \nonumber\\
&=1-\sum\limits_{k=0}^{N_{\mathrm{F}}-1}\int\limits_0^{\infty}\frac{C^k_{N_{\mathrm{F}}-1}(-1)^{N_{\mathrm{F}}-k-1}}{z\rho(N_{\mathrm{F}}-k-1)+N_{\mathrm{F}}-k}\frac{1}{\sqrt{2\pi z}}e^{-z/2}dz.
\end{align}
Solving the integral \cite{TableIntegral}, (\ref{equ_P}) can be obtained.
\end{proof}
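The closed form in Lemma \ref{Lemma1} can be sanity-checked by simulating the detection rule directly. The sketch below (ours, with arbitrary parameters) estimates $P_{F\neq\hat F}(\rho)$ by Monte Carlo, transmitting a real Gaussian symbol over a $\mathcal{CN}(0,1)$ channel on one of the $N_\mathrm{F}$ subcarriers and declaring the subcarrier with the largest received power, and then plugs the estimate into the symmetric-channel rate \eqref{equ_r2}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def p_err_mc(rho, N_F, trials=200000):
    # Monte Carlo estimate of the subcarrier index detection error probability
    sigma2 = 1.0 / rho                         # noise variance so that SNR = rho
    x = rng.standard_normal(trials)            # real Gaussian symbol, as in the proof
    h = (rng.standard_normal(trials) + 1j * rng.standard_normal(trials)) / np.sqrt(2)
    Y = np.sqrt(sigma2 / 2) * (rng.standard_normal((trials, N_F))
                               + 1j * rng.standard_normal((trials, N_F)))
    Y[:, 0] += h * x                           # information sent on the first subcarrier
    return np.mean(np.argmax(np.abs(Y)**2, axis=1) != 0)

def r2_rate(p, N_F):
    # rate of the N_F-ary symmetric channel, cf. the expression for r2 above
    if p == 0.0:
        return float(np.log2(N_F))
    Hb = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.log2(N_F) - Hb - p * np.log2(N_F - 1)

p = p_err_mc(rho=10.0, N_F=4)
print(p, r2_rate(p, N_F=4))
\end{verbatim}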
\section{Multi-Cell OFDM-IM} \label{sec_Multi_Cell_IM}
After deriving the single-cell achievable rate of OFDM-IM with Gaussian inputs in Section \ref{sec_Single_Link_IM}, multi-cell OFDM-IM is investigated in this section. In practical systems, finite-alphabet inputs are usually used instead of Gaussian inputs. Moreover, the impact of ICI needs to be studied. Therefore, OFDM-IM with QAM inputs is assumed in this section; results for other types of constellations can be obtained in a similar manner. The CDF of the SINR, the PDF of noise plus ICI, and the multi-cell sum rate will be derived.
\subsection{CDF of SINR}
Since only one subcarrier of the target UE and the associated BS is active, the density of interfering BSs to the target UE is one $N_\mathrm{F}$th of the original BS density. Let $\tilde{\rho}$ denote the SINR and $P_T$ denote the transmit power of each BS.
The derivation of the CDF of SINR is directly generalized from \cite{Baccelli06}. The normalized interference power $\mathcal{I}$ from other BSs transmitting on the same subcarrier to the target user can be computed as
\begin{align}
\mathcal{I}=\sum\limits_{l\in \mathcal{S}'}|h_l|^2|d_l|^{-\alpha}.
\end{align}
The channel $h_\xi$ between the target UE and its associated BS follows a Rayleigh distribution. Hence, $|h_\xi|^2$ is an exponentially distributed random variable. The CDF $G_{\tilde{\rho}}(\tilde{\rho})$ of the SINR can then be computed as \cite{Baccelli06}
\begin{align}
G_\mathrm{\tilde{\rho}}(\tilde{\rho})&=1-\Pr\left\lbrace \frac{P_T|h_\xi|^2d^{-\alpha}}{\sigma^2_N+P_T\mathcal{I}}\geqslant \tilde{\rho} \right\rbrace\nonumber\\
&=1-\exp\left(-P_T^{-1}d^\alpha\tilde{\rho}\sigma^2_N \right)\mathrm{E}\left[\exp\left(-d^\alpha \tilde{\rho} \mathcal{I} \right)\right].
\label{equ_G_rho}
\end{align}
Using the Laplace transform of the exponential function and the probability generating functional \cite{Baccelli06}, the expectation part in (\ref{equ_G_rho}) can be expressed as
\begin{align}
\mathrm{E}\left[\exp\left(-d^\alpha \tilde{\rho} \mathcal{I} \right)\right]&=\mathrm{E}\left[\exp\left(-z \mathcal{I} \right)\right]_{|z=d^\alpha \tilde{\rho}}\nonumber\\
&=\exp\left(-\frac{\lambda}{N_{\mathrm{F}}}\int_{\mathfrak{R}^2}\frac{1}{1+z^{-1}|y|^\alpha}dy \right)_{|z=d^\alpha \tilde{\rho}}\nonumber\\
&=\exp\left(-\frac{\lambda}{N_{\mathrm{F}}}z^{2/\alpha}\frac{2\pi^2}{\alpha\sin(2\pi/\alpha)} \right)_{|z=d^\alpha \tilde{\rho}}.
\end{align}
Hence, the CDF $G_{\tilde{\rho}}(\tilde{\rho})$ of multi-cell OFDM-IM can be expressed as
\begin{align}
G_\mathrm{\tilde{\rho}}(\tilde{\rho})=1-\exp\left(-P_T^{-1}d^\alpha\tilde{\rho}\sigma^2_N \right)\exp\left(-\frac{\lambda}{N_{\mathrm{F}}}d^2\tilde{\rho}^{\frac{2}{\alpha}}\frac{2\pi^2}{\alpha\sin(2\pi/\alpha)} \right).
\label{equ_G_CDF}
\end{align}
To avoid a zero denominator in (\ref{equ_G_CDF}), $\alpha$ should be larger than $2$. Typical values of $\alpha$ are between $2$ and $4$.
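For convenience, the closed-form CDF in \eqref{equ_G_CDF} is straightforward to evaluate numerically; the snippet below (ours, with the parameter values quoted in the figure captions) returns the CDF over a range of SINR thresholds.
\begin{verbatim}
import numpy as np

def sinr_cdf(t, lam, N_F, alpha, P_T, d, sigma2_N):
    # closed-form CDF of the SINR, valid for alpha > 2
    noise_term = np.exp(-d**alpha * t * sigma2_N / P_T)
    interf_term = np.exp(-(lam / N_F) * d**2 * t**(2.0 / alpha)
                         * 2 * np.pi**2 / (alpha * np.sin(2 * np.pi / alpha)))
    return 1.0 - noise_term * interf_term

t_dB = np.arange(-10, 31, 5)
print(sinr_cdf(10**(t_dB / 10), lam=1e-4, N_F=4, alpha=3.0,
               P_T=40.0, d=50.0, sigma2_N=7.5e-11))
\end{verbatim}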
\subsection{PDF of Noise plus Interference}
The exact PDF of noise plus ICI of multi-cell OFDM-IM with QAM inputs has remained unknown in the literature. Generalized Gaussian distributions were used to approximate the PDF in \cite{Hong14} and \cite{Seol09}. In this section, the exact PDF of noise plus ICI will be derived, showing that noise plus ICI is MoG distributed.
Let $Q=2^{N_\mathrm{B}-1}$ and denote $\psi$ as the noise plus ICI, i.e., $\psi= N+\sum\limits_{l\neq \xi}\sqrt{T_l}H_lX_l\zeta_l$.
\begin{theorem} \label{theorem1}
The PDF $p_{\psi}$ of noise plus ICI in multi-cell OFDM-IM with QAM inputs follows a MoG distribution, i.e.,
\begin{align}
p_{\psi}(z)=\sum\limits_{k=1}^{Q}\omega_k\mathcal{CN}(z;0,\sigma_k^2).
\end{align}
\label{the_PDF_NI}
\end{theorem}
\begin{proof}
In this proof, $4$QAM is used for simplicity. The distribution of one ICI term, i.e., $\psi_l= h_lX_l\zeta_l$, is first computed. It can be easily shown that $ h_lX_l \sim \mathcal{CN}(0,1)$. Also, the PDF of $\zeta_l$ for 4QAM can be expressed as $p_{\zeta_l}(\zeta_l)=\frac{1}{N_\mathrm{F}}\delta(\zeta_l-1)+\frac{N_\mathrm{F}-1}{N_\mathrm{F}}\delta(\zeta_l)$
which is directly obtained from the fact that the subcarrier index is chosen uniformly. Next, according to the product distribution,
\begin{align}
p_{ h_lX_l \zeta_l}(z)&=\int\limits_{-\infty}^{\infty}p_{\zeta_l}(\tau)\frac{1}{{\pi}}\exp(-|z/\tau|^2)\frac{1}{|\tau|}d\tau\nonumber\\
&=\frac{1}{N_\mathrm{F}}{\frac{1}{{{\pi}}}}\exp(-|z|^2)+\frac{N_\mathrm{F}-1}{N_\mathrm{F}}\lim\limits_{\epsilon\rightarrow 0}\frac{1}{{\pi}\epsilon}\exp\left(-\frac{|z|^2}{\epsilon} \right)\nonumber\\
&=\frac{1}{N_\mathrm{F}}{\frac{1}{{{\pi}}}}\exp(-|z|^2)+\frac{N_\mathrm{F}-1}{N_\mathrm{F}}\delta(z).
\end{align}
Hence, the PDF of one interference term is a weighted sum of a Gaussian function and a Dirac delta function. Using the properties of convolution, the PDF of the total interference can be expressed as
\begin{align}
p_{ h_1X_1 \zeta_1}(z)&* \cdots *p_{ h_{ \xi-1}X_{\xi-1} \zeta_{\xi-1}}(z)*p_{ h_{\xi+1}X_{\xi+1} \zeta_{\xi+1}}(z)*\cdots *p_{ h_{N_\mathrm{B}}X_{N_\mathrm{B}} \zeta_{N_\mathrm{B}}}(z)\nonumber\\
&=\sum\limits_{k=1}^{Q-1}\tilde{\omega}_k\mathcal{CN}(z;0,\tilde{\sigma}_k^2)+\tilde{\omega}_{Q}\delta(z),
\end{align}
where $*$ denotes the convolution operator and $\tilde{\omega}_k\geqslant 0$ with $\sum\limits_{k}\tilde{\omega}_k=1$. By adding the noise term, the PDF $p_\psi$ of noise plus ICI can be calculated as
\begin{align}
p_\psi(z)&=\mathcal{CN}(z;0,\sigma_\mathrm{N}^2)*\left[\sum\limits_{k=1}^{Q-1}\tilde{\omega}_k\mathcal{CN}(z;0,\tilde{\sigma}_k^2)+\tilde{\omega}_Q\delta(z)\right]\nonumber\\
&=\sum\limits_{k=1}^{Q}{\omega}_k\mathcal{CN}(z;0,\sigma_k^2),
\label{equ_p_psi}
\end{align}
where ${\omega}_k\geqslant 0$ with $\sum\limits_{k}{\omega}_k=1$.
\end{proof}
\begin{lemma}
The PDF $p_{Y}$ of the received signal $Y$ in multi-cell OFDM-IM with QAM inputs follows a MoG distribution, i.e.,
\begin{align}
p_{Y}(y)=\sum\limits_{k=1}^{Q}\omega^{\prime}_k\mathcal{CN}(y;0,\sigma_k^{\prime2}).
\end{align}
\end{lemma}
\begin{proof}
This can be obtained directly from Theorem \ref{theorem1} and (\ref{equ_rx_signal}), because the desired signal also follows a Gaussian distribution.
\end{proof}
Next, the parameters $\mathbf{\Omega}=\left[\omega_1 \quad \omega_2 \cdots \omega_{Q}\right]$ and $\mathbf{v}=\left[\sigma^2_1 \quad \sigma^2_2 \cdots \sigma^2_{Q}\right]$ need to be estimated. The EM algorithm has been widely used to estimate the parameters of MoG distributions. Detailed derivations of the EM algorithm for MoG parameter estimation are beyond the scope of this paper; interested readers can find details in \cite{Bishop_PRML}. In particular, for FQAM, the traditional EM algorithm \cite{Bishop_PRML} can be further simplified because the means of the channel, the transmitted symbol, the ICI, and the noise are all zero. The simplified EM algorithm is presented in Fig. \ref{fig_algorithm}. Assume that samples of the noise plus ICI are measured as $\left\lbrace \tau_a \right\rbrace_{a=1}^{N_s}$. First, $\mathbf{\Omega}$ and $\mathbf{v}$ are randomly chosen as an initialization. Second, the E step and M step are iterated until a certain convergence condition is met. The number of Gaussian functions in (\ref{equ_p_psi}) grows exponentially with the number of BSs, which is impractical due to high complexity. However, only a small number of BSs dominate the total interference. In this case, a small number $Q'$ ($Q'\ll Q$) of Gaussian functions will be sufficient to approximate the distribution of noise plus ICI. The parameters of the PDF $p_{Y}$ of the received signal, i.e., $\mathbf{\Omega}^{\prime}=\left[\omega^{\prime}_1 \quad \omega^{\prime}_2 \cdots \omega^{\prime}_{Q}\right]$ and $\mathbf{v}^{\prime}=\left[\sigma^{\prime2}_1 \quad \sigma^{\prime2}_2 \cdots \sigma^{\prime2}_{Q}\right]$, can also be estimated via the same procedure.
\begin{figure}
\hrulefill
\begin{algorithmic}[1]
\State Initialize $\mathbf{\Omega}$ and $\mathbf{v}$ randomly
\State E step: Compute
\begin{align}
\eta_{ak}=\frac{\omega_k\mathcal{CN}(\tau_a;0,\sigma_k^2)}{\sum\limits_{k=1}^Q\omega_k\mathcal{CN}(\tau_a;0,\sigma_k^2)}
\end{align}
\State M step: Update $\sigma_k^{2,\mathrm{new}}$ and $\omega_k^{\mathrm{new}}$ according to
\begin{align}
\sigma_k^{2,\mathrm{new}}=\frac{1}{\sum\limits_{a=1}^{N_s}\eta_{ak}}\sum\limits_{a=1}^{N_s}\eta_{ak}|\tau_a|^2
\end{align}
\begin{align}
\omega_k^{\mathrm{new}}=\frac{\sum\limits_{a=1}^{N_s}\eta_{ak}}{N_s}.
\end{align}
\State Repeat E step and M step until convergence condition is met.
\end{algorithmic}
\hrulefill
\caption{EM algorithm for parameter estimation of noise plus ICI in OFDM-IM systems.}
\label{fig_algorithm}
\end{figure}
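A compact implementation of the simplified EM iteration of Fig. \ref{fig_algorithm} is sketched below (ours; the two-component test data are purely illustrative). Each component is a zero-mean complex Gaussian with density $\mathcal{CN}(\tau;0,\sigma_k^2)=\frac{1}{\pi\sigma_k^2}\exp(-|\tau|^2/\sigma_k^2)$.
\begin{verbatim}
import numpy as np

def em_zero_mean_mog(tau, Q, iters=200, seed=0):
    # fit a zero-mean complex MoG with Q components to samples tau
    rng = np.random.default_rng(seed)
    w = np.full(Q, 1.0 / Q)                                    # mixture weights
    var = np.mean(np.abs(tau)**2) * rng.uniform(0.1, 2.0, Q)   # random initial variances
    for _ in range(iters):
        # E step: responsibilities eta_{ak}
        dens = np.exp(-np.abs(tau[:, None])**2 / var) / (np.pi * var)
        eta = w * dens
        eta /= eta.sum(axis=1, keepdims=True)
        # M step: update variances and weights
        var = (eta * np.abs(tau[:, None])**2).sum(axis=0) / eta.sum(axis=0)
        w = eta.mean(axis=0)
    return w, var

# illustrative test: 80% of samples with variance 0.5, 20% with variance 5.0
rng = np.random.default_rng(3)
def cn(n, v):
    return np.sqrt(v / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
tau = np.concatenate([cn(8000, 0.5), cn(2000, 5.0)])
print(em_zero_mean_mog(tau, Q=2))   # approx. weights (0.8, 0.2), variances (0.5, 5.0), up to ordering
\end{verbatim}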
\subsection{Multi-Cell Sum Rate}
The multi-cell sum rate $r_3$ of OFDM-IM is defined by the mutual information between the received signal and the information source as
\begin{align}
r_3=\mathrm{E}\left[\max I(X,F;Y)\right]=\mathrm{E}\left[ H(Y)\right]-\mathrm{E}\left[ H(Y|X,F)\right]=H(Y)-H(Y|X,F),
\label{equ_multicell_sumrate}
\end{align}
where $H(\cdot)$ is the entropy function. The maximum operator in (\ref{equ_multicell_sumrate}) is removed because of the QAM input, and the expectation operator is removed because the entropies of $Y$ and $N+I$ are already expectations. Although a closed-form evaluation of (\ref{equ_multicell_sumrate}) is not feasible because of the summation inside the logarithm function, an upper bound on the sum rate can be evaluated.
Since the entropy function $H()$ is concave, using Jensen's inequality, $H(Y|X,F)$ is lower bounded by
\begin{align}
H(Y|X,F)=H\left(\sum\limits_{k=1}^{Q}\omega_k\mathcal{CN}(z;0,\sigma_k^{2}) \right)&\geqslant \sum\limits_{k=1}^{Q}\omega_k H\left(\mathcal{CN}(z;0,\sigma_k^{2}) \right)\nonumber\\
&=\frac{1}{2}+\sum\limits_{k=1}^{Q}\omega_k\log_2 (2\pi e\sigma_k).
\label{equ_HYFX_LB}
\end{align}
On the other hand, $H(Y)$ is upper bounded by \cite{Huber08}
\begin{align}
H(Y)=H\left(\sum\limits_{k=1}^{Q}\omega^{\prime}_k\mathcal{CN}(y;0,\sigma_k^{\prime2}) \right) &\leqslant -\int \sum\limits_{k=1}^{Q}\omega^{\prime}_k\mathcal{CN}(y;0,\sigma_k^{\prime2})\log_2\left(\omega^{\prime}_k\mathcal{CN}(y;0,\sigma_k^{\prime2}) \right)dy\nonumber\\
&=\frac{1}{2}+\sum\limits_{k=1}^{Q}\omega^{\prime}_k\log_2 (2\pi e\sigma_k^{\prime}/\omega^{\prime}_k).
\label{equ_HY_UB}
\end{align}
To sum up, the multi-cell sum rate of OFDM-IM $r_3$ is upper bounded by
\begin{align}
r_3\leqslant \sum\limits_{k^{\prime}=1}^{Q}\omega^{\prime}_{k^{\prime}}\log_2 (2\pi e\sigma_{k^{\prime}}^{\prime}/\omega^{\prime}_{k^{\prime}})-\sum\limits_{k=1}^{Q}\omega_k\log_2 (2\pi e\sigma_k).
\label{equ_r3_bound}
\end{align}
This upper bound provides insight into the sum-rate performance of multi-cell OFDM-IM.
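Once the MoG parameters of the received signal and of the noise plus ICI have been fitted (e.g. with the EM routine sketched above), the bound in \eqref{equ_r3_bound} is a one-line evaluation; the sketch below (ours, with placeholder parameter values) illustrates it.
\begin{verbatim}
import numpy as np

def sum_rate_upper_bound(w_y, sig_y, w_psi, sig_psi):
    # evaluate the right-hand side of the bound above, given the fitted
    # weights and standard deviations of Y and of noise plus ICI
    h_y_ub = np.sum(w_y * np.log2(2 * np.pi * np.e * sig_y / w_y))
    h_psi_lb = np.sum(w_psi * np.log2(2 * np.pi * np.e * sig_psi))
    return h_y_ub - h_psi_lb

# placeholder parameters (not taken from the paper's simulations)
w_psi, sig_psi = np.array([0.7, 0.3]), np.array([0.2, 1.0])
w_y, sig_y = np.array([0.7, 0.3]), np.array([0.8, 1.6])
print(sum_rate_upper_bound(w_y, sig_y, w_psi, sig_psi))
\end{verbatim}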
\section{Results and Analysis} \label{sec_results_analysis}
Subcarrier index detection error probability and achievable rates of single cell OFDM-IM with different number of subcarriers are depicted in Fig. \ref{fig_Sidx_error_prob} and Fig. \ref{fig_Index_Modulation_sumrate}, respectively. In both figures, analytic results match simulated results well. It can be observed in Fig. \ref{fig_Index_Modulation_sumrate} that when the SNR is relatively low, the achievable rate contributed by subcarrier indexes is not significant. However, when SNR increases gradually, the gaps between achievable rates with different numbers of subcarriers first increase and then become stable.
\begin{figure}
\centering\includegraphics[width=5in]{Sidx_error_prob.eps}
\centering\caption{Subcarrier index detection error probability of single cell OFDM-IM.}
\label{fig_Sidx_error_prob}
\end{figure}
\begin{figure}
\centering\includegraphics[width=5in]{Index_Modulation_sumrate.eps}
\centering\caption{Achievable rates of single cell OFDM-IM with different number of subcarriers.}
\label{fig_Index_Modulation_sumrate}
\end{figure}
Fig. \ref{fig_r2_Nfft} shows the single-cell OFDM-IM achievable rate contributed by subcarrier indexes with different numbers of subcarriers and different SNR values. From (\ref{equ_r2}), $r_2$ tends to zero when $N_\mathrm{F}$ tends to infinity. This suggests that the benefit of increasing $N_\mathrm{F}$ is diminishing and there is an optimal value of $N_\mathrm{F}$ such that $r_2$ reaches its peak. A closed-form expression for the optimal value of $N_\mathrm{F}$ is not available; however, it can be found numerically. It is shown in Fig. \ref{fig_r2_Nfft} that the optimal value of $N_\mathrm{F}$ increases with the SNR.
\begin{figure}
\centering\includegraphics[width=5in]{r2_Nfft.eps}
\centering\caption{The achievable rates contributed by different numbers of subcarriers and the optimal $N_\mathrm{F}$.}
\label{fig_r2_Nfft}
\end{figure}
In multi-cell simulations, thermal noise with power density $-173$ dBm/Hz \cite{36814} is assumed in the multi-cell network. Also, the bandwidth of each subcarrier is 15 kHz \cite{36814}. In this case, the noise power per subcarrier can be calculated by $\sigma_\mathrm{N}^2=7.5\times 10^{-11}$ W.
Fig. \ref{fig_IM_CDF} illustrates the CDFs of the SINR for different values of $N_\mathrm{F}$ in the multi-cell OFDM-IM scenario. The SINR is larger when $N_\mathrm{F}$ is larger, because the target UE has a smaller probability of being interfered with, since the ICI is more spread out in the frequency domain. In addition, the simulated results align reasonably well with the analytic results.
\begin{figure}
\centering\includegraphics[width=5in]{FQAM_CDF.eps}
\centering\caption{Comparison of CDFs of SINR with respect to different numbers of subcarriers of multi-cell OFDM-IM ($\lambda=10^{-4}$, $\alpha=3$, $P_T=40$W, $d=50$m).}
\label{fig_IM_CDF}
\end{figure}
The PDF of the real part of noise plus ICI of multi-cell OFDM-IM with 4QAM inputs is illustrated in Fig. \ref{fig_FQAM_Interf_PDF}. The total number of BSs is $19$ and the inter site distance (ISD) is 100m. It can be observed that the MoG distribution derived in this paper fits the simulation results excellently.
The generalized Gaussian model proposed in \cite{Hong14} and \cite{Seol09} and the Gaussian model are also shown in Fig. \ref{fig_FQAM_Interf_PDF}. However, these two models fail to match the realistic noise plus ICI well.
The generalized Gaussian model is able to capture the peak but does not model the spread well. On the other hand, the Gaussian model aligns better with the spread of the PDF than the generalized Gaussian model, but it does not model the peak accurately.
\begin{figure}
\centering\includegraphics[width=5in]{FQAM_Interf_PDF.eps}
\centering\caption{PDF of the real part of noise plus ICI of multi-cell OFDM-IM with 4QAM inputs ($N_\mathrm{B}=19$, $\alpha=3$, $P_T=40$W, $d=50$m, $Q'=4$).}
\label{fig_FQAM_Interf_PDF}
\end{figure}
The upper bound on the sum rate of multi-cell OFDM-IM with 4QAM is depicted in Fig. \ref{fig_Index_Modulation_multicell_sumrate}. Single-cell achievable rates with Gaussian input and 4QAM are included as a reference. It can be observed that the sum rate of multi-cell OFDM-IM is at least approximately 20\% worse than the single-cell results because of ICI. At low SNR, the multi-cell curve appears to outperform the two single-cell results, but this is an artifact of the upper bound being loose.
\begin{figure}
\centering\includegraphics[width=5in]{Index_Modulation_multicell_sumrate.eps}
\centering\caption{Upper bound of sum rates of multi-cell OFDM-IM ($N_\mathrm{B}=19$, $N_\mathrm{F}=4$).}
\label{fig_Index_Modulation_multicell_sumrate}
\end{figure}
\section{Conclusions} \label{sec_conclusion_section}
In this paper, the single-cell achievable rate and the multi-cell statistical properties of the SINR, the ICI, and the sum rate of OFDM-IM have been studied. It has been shown that increasing the number of subcarriers does not improve the single-cell achievable rate in the low SNR regime. The main benefit of more subcarriers appears in multi-cell scenarios, where fewer BSs interfere with the target UE. In addition, the PDF of noise plus ICI has been derived for OFDM-IM with QAM input, showing that noise plus ICI follows a MoG distribution. The parameters of the MoG distribution have been estimated by a simplified EM algorithm. Furthermore, an upper bound on the sum rate of multi-cell OFDM-IM has been derived, which can be used as a performance guideline for OFDM-IM networks. For future work, it will be practical to use the PDF of noise plus ICI to develop multi-cell OFDM-IM signal detection algorithms. Also, the analysis method in this paper can be extended to the $N_{\mathrm{F}}$-ary asymmetric channel to investigate achievable rates of generalized OFDM-IM.
\section{Acknowledgement}
This work has been performed in the framework of the
Horizon 2020 project FANTASTIC-5G (ICT-671660) receiving
funds from the European Union. The authors would like
to acknowledge the contributions of their colleagues in the
project, although the views expressed in this contribution are
those of the authors and do not necessarily represent the
project.
\section{Introduction}
\IEEEPARstart{L}{and} mobile satellite (LMS) communications facilitate a myriad of applications such as Global Navigation Satellite System (GNSS) and commercial broadcasting systems, e.g. DVB-RCS and S-DMB \cite{DEK1}.
Due to the openness of LMS communications, both uplink and downlink channels are easily disrupted by unintended interference from surrounding communications as well as by intentional jammers, as reported in \cite{DEK2}.
Furthermore, the frequency selectivity caused by multi-path scattering at the receiver side makes it difficult to recover the source data in LMS applications operating at high-frequency bands \cite{DEK3}.
Spread spectrum (SS) techniques are data modulation methods that spread the signal over a bandwidth much wider than that actually required.
Systems adopting SS techniques have been effectively utilized for the suppression of interferences, alleviation of multi-path fading effects, and the resilience against jamming signals \cite{DEK4}.
Code division multiple access (CDMA) provides multiple-access capability and helps increase system throughput by applying the SS notion.
CDMA is classified into time-hopping (TH), frequency-hopping (FH), and direct-sequence (DS) SS, among which DS-CDMA has been studied most extensively in the literature and is widely deployed in practice due to its low complexity and implementation cost \cite{BMD-DSCDMA, BSS-ICA-CDMA}.
The performance of DS-CDMA is limited by jamming signals and interferences since they often exceed the anti-jamming capability of SS techniques.
To mitigate the effect of jamming/interference signals, the most common method is to filter the received signal in the space, time, and frequency domains \cite{ST-AES-GNSS,ST-Sen-GNSS,ANF-ISJ-GPS,TF-AES-GNSS,STFT-IET-GPS}.
Space-time adaptive processing can mitigate wideband and narrowband jamming, but it requires additional antennas \cite{ST-AES-GNSS,ST-Sen-GNSS}. Time-frequency filtering can alleviate narrowband and continuous-wave jamming; however, it requires some prior information about the jamming signals \cite{ANF-ISJ-GPS, TF-AES-GNSS, STFT-IET-GPS}.
A main weakness of the aforementioned filtering methods is the severe degradation of anti-jamming performance when the jamming signals arrive from the same direction as the source signals, or when the jamming-to-signal ratio (JSR) is high.
In this context, blind source separation (BSS) using independent component analysis (ICA) was proposed to relax requirements \cite{DEK10}. BSS with ICA separates multiple source signals by analyzing the statistical independences using higher order statistics with the assumption that signals from different sources are statistically independent \cite{DEK12}. BSS-ICA provides a wide applicability, including blind multiuser detection, which is to recover the source bit sequence from a received mixture without any knowledge of the user spreading code \cite{BMD-DSCDMA}, and jamming suppression in CDMA communications \cite{BSS-ICA-CDMA,DEK10,DEK11}.
One limitation of BSS-ICA is that it requires a number of observations equal to or larger than the number of sources to be separated. Additionally, the anti-jamming capability degrades when the jamming signals vary in both the time and frequency domains, and when the source is already corrupted by jamming in the uplink channel.
In this work, we investigate the anti-jamming behavior of CDMA-based LMS communications, where the satellite acts as a simple amplify-and-forward (AnF) relay.
We consider the uplink jamming scenario, which is frequently used in electronic warfare because it is efficient to impair all receivers critically at once \cite{DEK2}.
The uplink jammer is assumed to be a multi-tone (MT) jammer with frequency hopping (FH), which is one of the principal categories of intelligent jamming strategies \cite{MTJ-TWCOM}. We observe that such jamming signals rely on only a small number of jamming frequencies and a limited number of hopping occurrences.
With these observations, the matrix representation of the jamming signal can be modeled as a low-rank matrix having low dimensionality. Low-rank jamming/interference can be found in many emerging applications, including communication and network systems \cite{LRI-TCOM,LRI-TWCOM}. Based on this scenario, descriptions of the signals and the system, including the jamming signals, are detailed in Subsection $\mathrm{II}$. $A$.
We also remark that the number of active users is often much less than the multi-user capacity of systems for many applications, including CDMA \cite{SUA-TCOM,RDMUD-TIT,SSP-5G-IAC}. This low activity thus implies sparse DS-CDMA signals having low-dimensionality property.
The present paper fruitfully exploits these low-dimensionality attributes to recover the source signal from the received signal when an MT-FH jammer interferes in the uplink channel. Our approach is to model the DS-CDMA signal and the jamming signal as matrices and to formulate the recovery problem as a matrix decomposition problem. To decompose the received signal by utilizing this low dimensionality, we propose an anti-jamming DS-CDMA receiver that applies robust principal component analysis (Robust PCA) in addition to an ICA-based receiver. Robust PCA recovers a matrix $\mathbf{L}$ from highly corrupted measurements $\mathbf{Q=L+R}$, where $\mathbf{L}$ and $\mathbf{R}$ are low-rank and sparse matrices, respectively \cite{DEK7}.
In contrast to Gaussian noise in classical PCA, the entries in a sparse matrix $\mathbf{R}$ can have larger magnitudes which are unknown.
Extensive simulations show that Robust PCA outperforms the ICA-only receiver under the assumptions of a low-rank jamming signal and a sparse transmitted DS-CDMA signal, when the number of users is less than the length of the spreading code. Even in the other cases, the proposed receiver guarantees an anti-jamming performance comparable to the ICA-only receiver.
This paper is organized as follows.
Section $\rm{II}$ formulates the system model, uplink scenario, and downlink scenario separately.
Section $\rm{III}$ suggests a recovery problem using matrix decompositions, Robust PCA and ICA, with algorithms to solve the optimization problems.
Section $\rm{IV}$ presents numerical simulation results to justify the anti-jamming ability of the proposed receiver structure.
Finally, Section $\rm{V}$ summarizes the paper.
\section{System Model}
The system model considered in this paper consists of a transmitter, a land-based jammer, a satellite, and a receiver.
We divide the system model into two subsections: uplink scenario and downlink scenario.
In the uplink scenario, the transmitted signal and jamming signal models are provided in matrix forms.
In the downlink scenario, the LMS channel is formulated as a circulant matrix, and the received signal model is given.
In what follows, the system model is explained based on the block diagram of the proposed anti-jamming CDMA structure depicted in Fig. 1.
\begin{figure}[t]
\centering
\includegraphics[width=8.8cm]{01_Block_01.pdf}
\caption{Block diagram of the LMS communication systems with the anti-jamming DS-CDMA receiver}
\end{figure}
\subsection{Uplink Scenario}
On the satellite uplink, for synchronous CDMA transmissions by $K$ multi-users at the base station, the input data ${\mathbf{X}} \in {\mathbb{R}^{K \times N}}$ is given in a matrix form where $K$ users have $N$ bits.
The input data $\mathbf{X}$ can be divided into $N$ column vectors as follows:
\begin{align}
{\bf{X}} = [{{\bf{x}}_1},\, \cdots,\, {{\bf{x}}_n},\, \cdots,\, {{\bf{x}}_N}] \in {\mathbb{R}^{K \times N}},
\end{align}
where ${\mathbf{x}}_{n}=\left[x_{1,n},\, \cdots,\, x_{k,n},\, \cdots,\, x_{K,n} \right] $ is a column vector that is a collection of $n^{th}$ bits of $K$ users, and $x_{k,n}$ is the $n^{th}$ bit of the $k^{th}$ user.
The transmitted DS-CDMA signal $s(t)$ is represented as \cite{BMD-DSCDMA}:
\begin{align}\label{eq:s_t}
s(t)=\sum_{k=1}^{K}{\sum_{n=1}^{N}{\sum_{m=1}^{M}{x_{k,n}c_k(t-nT_b-mT_c
)}}},
\end{align}
where $c_k(\cdot)$ is the $k^{th}$ user spreading code, $T_c$ is chip duration, $T_b=M T_c$ is the bit duration, and $M$ is the length of the spreading code. Sampling by $T_c$, the encoding of DS-CDMA \eqref{eq:s_t} can be formulated into a matrix representation using the $n^{th}$ spreading code matrix ${{\bf{C}}^{(n)}} \in {\mathbb{R}^{M \times K}}$.
The $N$ spreading code matrices are generated at every bit index $n=1,\dots,N$, and each spreading code matrix randomly chooses $K$ column vectors from the Walsh code $\mathbf{W} \in {\mathbb{R}^{M \times M}}$. The Walsh code is adopted over the Gold code and the maximal-length code due to its orthogonality and simplicity.
The transmitted signal matrix ${\bf{S}} \in {\mathbb{R}^{M \times N}}$, which is a collection of samples $s_{m,n}=s(mT_c+nT_b)$, is the output of the spreading block and is formulated as:
\begin{align}
{\bf{S}} = [{{\bf{s}}_1},\, \cdots,\, {{\bf{s}}_n},\, \cdots,\, {{\bf{s}}_N}] \in {\mathbb{R}^{M \times N}},
\end{align}
where the $n^{th}$ column vector of ${\bf{S}}$ is generated by:
\begin{align}
{\bf{s}}_{n} = {{\bf{C}}^{(n)}}{\bf{x}}_{n} \in {\mathbb{R}^{M \times 1}}.
\end{align}
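To make the spreading step concrete, the following minimal Python sketch (with illustrative parameter values, not those of the simulations) builds the per-bit spreading code matrices from a Walsh--Hadamard code and forms $\mathbf{S}$ column by column as above; since the Walsh columns are orthogonal, each column of $\mathbf{W}^{T}\mathbf{S}$ then has at most $K$ non-zero entries, which is the sparsity exploited in Section $\rm{III}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

M, K, N = 64, 4, 10          # spreading length, users, bits (illustrative)
W = hadamard(M)              # Walsh/Hadamard code, M x M, entries +/- 1

X = rng.choice([-1.0, 1.0], size=(K, N))   # BPSK input data, K users x N bits

S = np.zeros((M, N))
C = []                       # per-bit spreading code matrices C^(n)
for n in range(N):
    cols = rng.choice(M, size=K, replace=False)  # pick K distinct columns
    Cn = W[:, cols]                              # C^(n), M x K
    C.append(Cn)
    S[:, n] = Cn @ X[:, n]                       # s_n = C^(n) x_n

# Each column of W.T @ S has at most K non-zero entries (sparsity).
print(np.count_nonzero(W.T @ S, axis=0).max())   # <= K
\end{verbatim}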
The transmitted signal ${\bf{S}}$ is jammed by uplink jamming signals.
Typically, jamming signals are characterized by frequency parameters, such as the jamming bandwidth in partial-band jamming and the jamming frequencies in MT jamming.
In our system model, the MT-FH jamming signal is given as:
\begin{align}
j(t)=\sqrt{\frac{P_J}{Mp}} \sum_{m=1}^{M}{\delta_m(t) \mathrm{exp}[i2\pi f_mt+\phi_m]},
\end{align}
where $P_J$ is the power of the jamming signal, $\delta_m(t)$ is equal to 1 when the $m^{th}$ frequency is jammed at time $t$ (which occurs with probability $p$), $f_m$ is the $m^{th}$ frequency, and $\phi_m$ is the phase of the $m^{th}$ tone jammer. Note that the power of the jamming signal is divided by $Mp$ for normalization.
The Fourier transform of the jamming signal $j(t)$, during $n^{th}$ bit duration $[nT_b,nT_b+(m-1)T_c]$, can be represented in a vector $\mathbf{j}_n^{'}\in \mathbb{R}^{M\times1}$. The $m^{th}$ frequency element of $\mathbf{j}_n^{'}$ is formed as:
\begin{align}
{j}_{m,n}^{'} = \delta_m(n) Z \sqrt{\frac{P_J}{Mp}},
\end{align}
where $\delta_m(n)$ is 1 when the $m^{th}$ frequency is jammed at $n^{th}$ bit duration with a probability $p$, and $Z$ is a random variable that is distributed normally, i.e., $Z\sim \mathcal{N}(0,1)$.
Using the function $\delta_m(n)$, various types of jamming signals, including narrowband, MT, and wideband jamming, can be generated by adjusting non-zero frequency components. The time domain representation of $\mathbf{j}_n^{'}$ is obtained by inverse Fourier transformation, i.e., $\mathbf{j}_n=\mathcal{F}^{-1}\{\mathbf{j}_n^{'}\}$.
The jamming signal ${\bf{J}} \in {\mathbb{R}^{M \times N}}$ for entire bit durations is given as:
\begin{align}
{\bf{J}} = [{{\bf{j}}_1},\, \cdots,\, {{\bf{j}}_n},\, \cdots,\, {{\bf{j}}_N}] \in {\mathbb{R}^{M \times N}}.
\end{align}
In the case of a typical MT jammer without FH, the column vectors of the jamming signal ${\mathbf{J}}$ are the same over all bit durations, i.e., ${\bf{j}}_{1}={\bf{j}}_{2}=\dots={\bf{j}}_{N}$.
In other words, the jammer attacks the same frequency components in all column vectors, that is, ${\bf{j}}_{1}^{'}={\bf{j}}_{n}^{'}\ \forall n=1,\dots,N$, and thus the jamming signal ${\bf{J}}$ is a rank-1 matrix.
In addition, we consider an MT-FH jamming signal in which the jammer hops the jamming frequency components several times. Consequently, the jammer also changes the jamming vectors ${\bf{j}}_{n}$ according to their frequency vectors ${\bf{j}}_{n}^{'}$.
If the number of hops increases in the MT-FH jamming signal, the rank of the jamming signal also increases.
For instance, if the jammer hops four times, it generates four jamming frequency vectors randomly and transmits the inverse Fourier transform of each jamming frequency vector until the next hop. The jamming signal ${\bf{J}}$ then has four distinct parts and is a rank-4 matrix. Let rank-$r$ denote the rank of the jamming signal, where $r$ represents the number of hopping events.
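To make the jamming model concrete, the following illustrative Python sketch constructs a rank-$r$ MT-FH jamming matrix with uniform hop boundaries; the parameter values are arbitrary, and the normalization follows the expression for $j^{'}_{m,n}$ above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def mtfh_jamming(M, N, r, p, P_J):
    """Rank-r MT-FH jamming matrix J (M x N): r hop events, each holding
    one random frequency pattern over a block of about N/r bit durations."""
    J = np.zeros((M, N), dtype=complex)
    edges = np.linspace(0, N, r + 1, dtype=int)   # hop boundaries
    for h in range(r):
        jammed = rng.random(M) < p                # jammed frequency bins
        amp = rng.standard_normal(M) * np.sqrt(P_J / (M * p))
        j_freq = jammed * amp                     # frequency-domain j'_n
        j_time = np.fft.ifft(j_freq)              # time-domain column j_n
        J[:, edges[h]:edges[h + 1]] = j_time[:, None]
    return J

J = mtfh_jamming(M=64, N=100, r=4, p=0.1, P_J=10.0)
print(np.linalg.matrix_rank(J))                   # about 4 for 4 hop events
\end{verbatim}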
Signal-to-jamming ratio (SJR) is defined as follows:
\begin{align}
\textrm{SJR\ \ [dB]} = 20 \log \frac{{\lVert\mathbf{S}\lVert}_F}{{\lVert\mathbf{J}\lVert}_F}\ \ \textrm{[dB]},
\end{align}
where ${\lVert\mathbf{S}\lVert}_F=\sqrt{\sum_{m=1}^{M}{\sum_{n=1}^{N}{\lvert s_{m,n}\lvert}^2}}$ is the Frobenius norm of the matrix $\mathbf{S}$, which represents the signal energy.
The received signal at the satellite, which is jammed by the jamming signal ${\bf{J}} \in {\mathbb{R}^{M \times N}}$, can be expressed as ${\bf{H}}_{up} * ( {{\bf{S}}} + {{\bf{J}}} ) + {\bf{V}}_{1} \in {\mathbb{R}^{M \times N}}$, where ${\bf{H}}_{up}$ denotes the uplink channel, assuming that there always exists a strong line-of-sight (LOS) path thanks to directional antennas pointing to the satellite \cite{DEK2}.
${\bf{V}}_1$ is simple additive white Gaussian noise (AWGN). We consider the satellite as a simple AnF relay that amplifies signals by an amplifying gain $G_{AnF}$ and transmits the outcome to the receiver.
\begin{figure}[t]
\centering
\includegraphics[width = 8.8cm]{02_Block_02.pdf}
\caption{Details of the anti-jamming DS-CDMA receiver blocks from Fig. 1.}
\end{figure}
\subsection{Downlink Scenario}
We assume that the corresponding downlink receiver must be designed with consideration of the LMS characteristics due to the mobility of receivers.
The conventional LMS literature states that such a satellite downlink is represented as a frequency-selective channel consisting of a LOS path and 2 to 4 clustered diffuse paths with high path-loss \cite{DEK3}.
When we express the frequency-selective channel in a discrete form, the channel impulse responses are divided into three components: a direct path, near echoes and far echoes.
We mathematically model the downlink frequency-selective channel using a circulant matrix \cite{DEK13} as given below:
\begin{align}
{\bf{H}}_{down} = {\rm{CM}}[{{h_0}},\, {{h_1}},\, \cdots,\, {{h_l}},\, \cdots,\, {{h_{L - 1}}}
] \in {\mathbb{R}^{M \times M}}.
\end{align}
Here ${\bf{h}} = [{h_0},\, \cdots,\, {h_l},\, \cdots,\, {h_{L - 1}}]$ is the equivalent discrete-time channel impulse response, ${\rm{CM}}[\cdot]$ denotes the circulant matrix generated from the vector ${\bf{h}}$ (zero-padded to length $M$), and $L$ denotes the number of paths of the downlink channel.
Each discrete channel impulse response ${h_l}$ is a complex Gaussian random variable representing the fading channel environment $(l=0,...,L-1)$.
Finally, the received DS-CDMA signal is modeled as:
\begin{align}\label{r_t}
\medmath{{\bf{Y}} = {\bf{H}}_{down} * \{G_{AnF} * {\bf{H}}_{up} * ({\bf{S}} + {\bf{J}})\} + {\bf{V}} \in {\mathbb{R}^{M \times N}},}
\end{align}
where ${\bf{V}}$ denotes the sum of both uplink and downlink AWGN channels whose elements are i.i.d.
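A minimal Python sketch of the circulant downlink channel matrix and the received signal of Eq. (\ref{r_t}), assuming the ideal uplink compensation $G_{AnF}\mathbf{H}_{up}=\mathbf{I}_M$ adopted in Section $\rm{III}$, is given below; the tap statistics and noise level are simple placeholders rather than the ITU parameter sets used in the simulations.
\begin{verbatim}
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)

def downlink_channel(M, L):
    """Circulant M x M downlink channel built from an L-tap complex
    Gaussian impulse response, zero-padded to length M."""
    h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
    return circulant(np.concatenate([h, np.zeros(M - L)]))

M, N, L = 64, 100, 3
H_down = downlink_channel(M, L)

# Received signal with ideal uplink compensation (G_AnF * H_up = I):
S_plus_J = rng.standard_normal((M, N))   # placeholder for S + J
V = 0.1 * rng.standard_normal((M, N))    # combined AWGN
Y = H_down @ S_plus_J + V
\end{verbatim}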
Signal-to-noise ratio (SNR) is defined as a ratio between the average powers of the signal and the AWGN noise as follows:
\begin{align}
\textrm{SNR\ \ [dB]} = 20 \log \frac{{\lVert\mathbf{S}\lVert}_F}{{\lVert\mathbf{V}\lVert}_F}\ \ \textrm{[dB]}.
\end{align}
Fig. 2 details the proposed recovery block, which comprises four specific blocks: Channel Estimation, Robust PCA, Despreading, and ICA.
This paper deals with practical and diverse downlink channel models representing urban with/without a LOS path, and rural with/without a LOS path environments, which are specified in \cite{DEK3}.
We assume that the receiver has a perfect knowledge of the spreading code ${\bf{W}} \in {\mathbb{R}^{M \times M}}$. In addition, perfect channel estimation of the downlink channel matrix $\widehat{\mathbf{H}}_{down}={\bf{H}}_{down} \in {\mathbb{R}^{M \times M}}$ is also considered.
\section{Recovery Problem with Matrix Decomposition}
In this section, we describe a recovery problem of the received signal of \eqref{r_t}, and formulate the recovery problem as a matrix decomposition problem. Our approach then is to decompose the received signal $\mathbf{Y}$ into the transmitted signal $\mathbf{S}$ and the jamming signal $\mathbf{J}$ by utilizing their inherent low-dimensionality features.
\subsection{Low-Dimensionality Properties \& Channel Estimation}
We demonstrate the low-dimensionality properties of the transmitted DS-CDMA signal matrix and the uplink jamming signal matrix, which can often be represented as a sparse matrix and a low-rank matrix, respectively, in many emerging applications \cite{SUA-TCOM, SSP-5G-IAC, RDMUD-TIT, LRI-TCOM, LRI-TWCOM}.
First, the transmitted DS-CDMA signal $\mathbf{S}\in\mathbb{R}^{M\times N}$ has low dimensionality, since the number of active users in CDMA systems is often much lower than the spreading gain ($K\ll M$) \cite{SUA-TCOM, RDMUD-TIT}. This low activity is observed in a wide range of applications. In typical tactical communications, the number of active users is usually very small because the spreading gain of military systems focuses mainly on the anti-jamming capability. With emerging 5G and IoT technologies, numerous devices are inactive most of the time and only occasionally communicate for minor updates \cite{SSP-5G-IAC}.
If the receiver has a priori information of the spreading codes, as the transmitter does, each column of the matrix ${{\bf{W}}^{T}\bf{S}} \in {\mathbb{R}^{M \times N}}$ has only $K$ non-zero components, owing to the orthogonality of the independently chosen Walsh code column vectors.
Therefore, the low dimensionality of the DS-CDMA signal matrix is captured by the sparse matrix ${{\bf{W}}^{T}\bf{S}} \in {\mathbb{R}^{M \times N}}$.
Second, we also remark that low-rank jamming signals are present and studied in \cite{LRI-TCOM, LRI-TWCOM} and references therein. With this observation, the MT-FH uplink jamming signal matrix ${\bf{J}} \in {\mathbb{R}^{M \times N}}$ can be assumed to have the low-dimensionality.
As mentioned in Subsection $\rm{II}$. $A$, the MT-FH jamming signal matrix ${\bf{J}} \in {\mathbb{R}^{M \times N}}$ is modeled as a low-rank matrix.
The term ``low-rank matrix'' refers to a matrix whose rank is small compared to the largest possible rank.
Moreover, since $\mathrm{rank}(\mathbf{AB}) \le \min \{ \mathrm{rank}(\mathbf{A}),\mathrm{rank}(\mathbf{B})\}$, the matrix ${{\bf{W}}^{T}\bf{J}} \in {\mathbb{R}^{M \times N}}$ is also a low-rank matrix.
Based on the above descriptions, our objective is to propose an anti-jamming DS-CDMA recovery structure, depicted in Fig. 2, that exploits the low dimensionality of the transmitted signal and the jamming signal.
We assume that the AnF gain of the satellite compensates the uplink channel, i.e., $G_{AnF}*\mathbf{H}_{up}=\mathbf{I}_M\in \mathbb{R}^{M \times M}$, where $\mathbf{I}_M$ is an identity matrix with size $M$. This assumption is due to the strong LOS path in the uplink channel.
With the assumption of the perfect estimation $\mathbf{H}_{down}$, Robust PCA decomposes ${{\bf{W}}^{T}\bf{S}}$ and ${{\bf{W}}^{T}\bf{J}}$ from ${{\bf{W}}^{T}\bf{D}}$, where ${\bf{D}}={\bf{H}}_{down}^{-1} {\bf{Y}}$.
The input and output signals of the Robust PCA block are ${\bf{D}} \in {\mathbb{R}^{M \times N}}$ and ${\widehat{\bf{S}}} \in {\mathbb{R}^{M \times N}}$, respectively.
The Despreading block then despreads ${\widehat{\bf{S}}} \in {\mathbb{R}^{M \times N}}$ with the known spreading code matrices for all bits ${{\bf{C}}^{(n)}} \in {\mathbb{R}^{M \times K}} \forall n=1,\dots,N$.
Finally, ICA reconstructs the original signal ${\widehat{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ from ${\widetilde{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ by using independence inherently contained in the received signal.
In Subsections $\rm{III}$. $B$ and $\rm{III}$. $C$, we delineate the functionality of the recovery block with respect to matrix decomposition.
To implement Robust PCA and ICA in our anti-jamming DS-CDMA receiver, we modify the iALM and Fast ICA algorithms for the system model of this paper.
\begin{algorithm}[t]
\DontPrintSemicolon
\caption{iALM for Robust PCA problem}\label{alg:iALM}
\KwData{${{\bf{W}}^{{T}}}{\bf{D}} \in {\mathbb{R}^{M \times N}},{\rm{ }}\lambda = 1/\sqrt M $}
\KwResult{~~$\mathbf{W}^T\widehat{\bf{J}} \leftarrow \mathbf{L}_k, \mathbf{W}^T\widehat{\mathbf{S}} \leftarrow \mathbf{R}_{k}$}
\BlankLine
${\bf{\Lambda }} _{0} \leftarrow {{\bf{W}}^{{T}}}{\bf{D}}/ \max \left( \lVert{{\bf{W}}^{{T}}}{\bf{D}}\rVert _2 ,\lambda^{-1}\lVert{{\bf{W}}^{{T}}}{\bf{D}}\rVert_\infty\right).$\;
${\bf{R}}_{0} \leftarrow 0$, ${\mu_0} \leftarrow {1.25}/{{\lVert {\bf{W}}^{{T}}}{\bf{D}\rVert}_2}$, $k \leftarrow 0$.\;
\While{not converged}{
\tcp*[l]{Solve $\mathbf{L}_{k+1}=\arg{\min\limits_{\mathbf{L}}{L(\mathbf{L}, \mathbf{R}_{k}, \Lambda_{k}, \mu_{k})}}$}
$[{\bf{U,P,V}}] \leftarrow {\mathrm{svd}}({{\bf{W}}^{{T}}}{\bf{D}} - {\bf{R}}_{k} + {\mu} _{k}^{ -1}{\bf{\Lambda }} _{k})$.\;
${\bf{L}}_{k+1} \leftarrow {\bf{U}}\cdot{\mathrm{Th}}[{\bf{P}}:{{\mu} _{k}^{ - 1}}]\cdot{{\bf{V}}^{{T}}}$.\;
\tcp*[l]{Solve $\mathbf{R}_{k+1}=\arg{\min\limits_{\mathbf{R}}{L(\mathbf{L}_{k+1}, \mathbf{R}, \Lambda_{k}, \mu_{k})}}$}
${\bf{R}}_{k+1} \leftarrow {\mathrm{Th}}[{{\bf{W}}^{{T}}}{\bf{D}} - {\bf{L}}_{k+1} + {\mu} _{k}^{ - 1}{\bf{\Lambda }} _{k}:\lambda{{\mu} _{k}^{ - 1}}] $.\;
${\bf{\Lambda }} _{k+1} \leftarrow {\bf{\Lambda }} _{k} + {\mu _k}({{\bf{W}}^{{T}}}{\bf{D}} - {\bf{L}}_{k+1} - {\bf{R}}_{k+1})$.\;
Update $\mu_{k}$ to $\mu_{k+1}$. \;
$k \leftarrow k + 1$.\;
}
\end{algorithm}
\subsection{The iALM Algorithm for Robust PCA}
We now consider a matrix decomposition problem to recover the sparse DS-CDMA signal ${{\bf{W}}^{T}\bf{S}}$ and the low-rank jamming signal ${{\bf{W}}^{T}\bf{J}}$ by solving the following convex optimization problem:
\begin{equation}\label{eq:RPCA}
\begin{aligned}
&\mathop {\min }\limits_{{{\bf{W}}^{T}}{\bf{J}},\,{{\bf{W}}^{T}}\bf{S}}~~{\left\| {{{\bf{W}}^{{T}}}{\bf{J}}} \right\|_*} + \lambda {\left\| {{{\bf{W}}^{{T}}}{\bf{S}}} \right\|_1},\\
&{\rm{subject~to}}~~{{\bf{W}}^{{T}}}{\bf{D}} = {{\bf{W}}^{{T}}}{\bf{J}} + {{\bf{W}}^{{T}}}{\bf{S}},
\end{aligned}
\end{equation}
where $\lambda$ is a weighting parameter, ${\left\| \mathbf{A} \right\|_1}:=\sum_{m,n}\lvert a_{m,n}\rvert$ denotes the $\ell_1$-norm (i.e., the sum of absolute values of all entries of the matrix $\mathbf{A}$), and ${\left\| \mathbf{A} \right\|_*}:=\sum_i \sigma_i(\mathbf{A})$ denotes the nuclear norm of the matrix $\mathbf{A}$ (i.e., the sum of singular values of $\mathbf{A}$). The optimization problem \eqref{eq:RPCA}, which minimizes a weighted combination of the nuclear norm and the $\ell_1$-norm, is referred to as Robust PCA \cite{DEK7}. Robust PCA can recover the sparse component of the signal matrix even when the matrix is grossly corrupted by a low-rank matrix. The weighting parameter $\lambda$ controls the balance of regularization between the sparsity and the low-rank constraints.
Based on prior knowledge of the solution, a suitable choice of $\lambda$ often improves performance. For example, if we know that $\mathbf{W}^T\mathbf{S}$ is very sparse, it is possible to recover matrices $\mathbf{W}^T\mathbf{J}$ of larger rank by increasing $\lambda$. However, $\lambda={1}/{\sqrt{M}}$ is recommended for the existence and uniqueness of the solution in practical problems \cite{DEK7}. We also choose $\lambda=1/{\sqrt{M}}$ in this paper.
In this paper, we have chosen to solve the Robust PCA problem \eqref{eq:RPCA} using an augmented Lagrangian multiplier (ALM) algorithm introduced in \cite{DEK8}.
ALM has been proven to converge to the exact optimal solution within a relatively small number of iterations \cite{DEK9}.
In practical applications, it works stably across a wide range of problem settings with no parameter tuning \cite{DEK7}.
The ALM method operates on the augmented Lagrangian function of the Robust PCA optimization \eqref{eq:RPCA}
\begin{equation}
\begin{aligned}
L({{\bf{W}}^{{T}}}{\bf{J}},{{\bf{W}}^{{T}}}{\bf{S}},\mathbf{\Lambda} ,\mu )
& \buildrel\textstyle.\over= {\left\| {{{\bf{W}}^{{T}}}{\bf{J}}} \right\|_*} + \lambda {\left\| {{{\bf{W}}^{{T}}}{\bf{S}}} \right\|_1}\\
&+ \medmath{\left\langle {\mathbf{\Lambda} ,{{\bf{W}}^{{T}}}{\bf{D}} - {{\bf{W}}^{{T}}}({\bf{S}} + {\bf{J)}}} \right\rangle} \\
\label{eq:ALMF}&+ \medmath{\frac{\mu }{2}\left\| {{{\bf{W}}^{{T}}}{\bf{D}} - {{\bf{W}}^{{T}}}({\bf{S}} + {\bf{J)}}} \right\|_F^2,}
\end{aligned}
\end{equation}
where $\left\langle {A,B} \right\rangle = {\rm{tr}}({A^T}B)$ and $\mu$ is a positive scalar with a Lagrange multiplier matrix $\mathbf{\Lambda} $. A generic ALM algorithm is to solve \eqref{eq:RPCA} by repeatedly solving
\begin{align}\label{eq:eALM}
\medmath{(\mathbf{W}^T\mathbf{J}_k, \mathbf{W}^T\mathbf{S}_k)=\mathop{\arg \min}\limits_{\mathbf{W}^T\mathbf{J}, \mathbf{W}^T\mathbf{S}}L(\mathbf{W}^T\mathbf{J}, \mathbf{W}^T\mathbf{S}, \mathbf{\Lambda}_k, \mu_{k}),}
\end{align}
and then update the Lagrangian multiplier matrix by
\begin{align}\label{eq:LMup}
\mathbf{\Lambda}_{k+1}=\mathbf{\Lambda}_k+\mu_{k}(\mathbf{W}^T\mathbf{D}-\mathbf{W}^T(\mathbf{S+J})).
\end{align}
For the low-rank and sparse decomposition problem, the solution of a complex optimization of \eqref{eq:eALM} can be obtained by solving very simple calculations sequentially as follows:
\begin{equation}\label{eq:Ssub}
\begin{aligned}
\mathbf{W}^{T}\mathbf{S}_{k+1} &=\mathop{\arg \min}\limits_{\mathbf{W}^{T}\mathbf{S}} L(\mathbf{W}^{T}\mathbf{J}_{k}, \mathbf{W}^{T}\mathbf{S}, \mathbf{\Lambda}_{k}, \mu_{k})\\
&=\medmath{\mathrm{Th} \left[\mathbf{W}^{T}\mathbf{D}-\mathbf{W}^{T}\mathbf{J}+\mu_{k}^{-1}\mathbf{\Lambda}_{k}:\lambda\mu_{k}^{-1} \right]} ,
\end{aligned}
\end{equation}
\begin{equation}\label{eq:Jsub}
\begin{aligned}
\mathbf{W}^{T}\mathbf{J}_{k+1} &=\mathop{\arg \min}\limits_{\mathbf{W}^{T}\mathbf{J}} L(\mathbf{W}^T\mathbf{J}, \mathbf{W}^{T}\mathbf{S}_{k}, \mathbf{\Lambda}_{k}, \mu_{k})\\
&= \mathbf{U} \cdot \mathrm{Th}\left[\mathbf{P}:\mu_{k}^{-1}\right] \cdot \mathbf{V}^{T},
\end{aligned}
\end{equation}
where $\mathrm{Th}\left[a:\mu \right]=\mathrm{sgn}(a)\max(\lvert a\rvert-\mu,0)$ is the shrinkage operator, extended to matrices by applying it element-wise, and $\mathbf{UP}\mathbf{V}^{T}=\left[\mathbf{W}^{T}\mathbf{D}-\mathbf{W}^{T}\mathbf{S}_{k}-\mu_{k}^{-1}\mathbf{\Lambda}_{k}\right]$ is any singular value decomposition. In \eqref{eq:Jsub}, the rank of $\mathbf{W}^{T}\mathbf{J}_{k+1}$ is minimized by thresholding the corresponding singular values. Additionally, in \eqref{eq:Ssub}, reliable sparse components remain after thresholding the element values.
Algorithm \ref{alg:iALM} describes procedures to solve Robust PCA with proper initialization. Algorithm \ref{alg:iALM} is referred to as inexact ALM (iALM) since it inexactly solves \eqref{eq:eALM} by updating \eqref{eq:Ssub} and \eqref{eq:Jsub} iteratively. Finally, the sparse DS-CDMA signal ${{\bf{W}}^{T}\bf{S}}$ and the low-rank jamming signal ${{\bf{W}}^{T}\bf{J}}$ are decomposed by applying iALM. The initialization of $\mathbf{\Lambda}_0$ in the algorithm is selected to make the objective function \eqref{eq:ALMF} reasonably large. The most important implementation detail of the algorithm is the choice of $\{\mu_{k}\}$. The choice is directly related to the convergence of the algorithm. It is known that Algorithm \ref{alg:iALM} converges to the optimal solution of Robust PCA if $\{\mu_{k}\}$ is nondecreasing and $\sum_{k=1}^{+\infty}\mu_{k}^{-1}=+\infty$ \cite{DEK9}. We have chosen $\mu_{0}=1.25/{{\lVert {\bf{W}}^{{T}}}{\bf{D}\rVert}_2}$ and $\mu_{k+1}=\min(1.5\mu_{k}, 10^7 \mu_{0} ),$ where ${{\lVert \mathbf{A}\rVert}_2}=\max_{i}{\sigma_{i}(\mathbf{A})}$ is a 2-norm of a matrix $\mathbf{A}$ i.e., the largest singular value of the matrix $\mathbf{A}$.
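For reference, a compact Python sketch of the iALM iteration of Algorithm \ref{alg:iALM} is given below; it follows the initialization and update rules described above, but it is a generic Robust PCA routine rather than the exact simulation code, and any scaling between $\mathbf{W}^{T}\widehat{\mathbf{S}}$ and $\widehat{\mathbf{S}}$ for an unnormalized Walsh code is left to the caller.
\begin{verbatim}
import numpy as np

def shrink(A, tau):
    """Element-wise soft-thresholding operator Th[A : tau]."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def ialm_rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM for Robust PCA: decompose D = L + R with L low rank
    (jamming part) and R sparse (despread DS-CDMA part)."""
    M, N = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(M)
    norm2 = np.linalg.norm(D, 2)
    Lam = D / max(norm2, np.abs(D).max() / lam)   # multiplier init
    mu, mu_max = 1.25 / norm2, 1e7 * 1.25 / norm2
    L = np.zeros_like(D)
    R = np.zeros_like(D)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, s, Vt = np.linalg.svd(D - R + Lam / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # sparse update: element-wise shrinkage
        R = shrink(D - L + Lam / mu, lam / mu)
        resid = D - L - R
        Lam = Lam + mu * resid
        mu = min(1.5 * mu, mu_max)
        if np.linalg.norm(resid, 'fro') < tol * np.linalg.norm(D, 'fro'):
            break
    return L, R

# Typical use: L_hat, R_hat = ialm_rpca(W.T @ D)  (W: Walsh code, D: input)
\end{verbatim}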
After recovering the transmitted signal $\widehat{\mathbf{S}}=\mathbf{W}\mathbf{R}_{k}$ from the received signal using the Robust PCA approach, the MT-FH uplink jamming signal can be removed effectively.
Then, ${\widetilde{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ is obtained by despreading $\widehat{\mathbf{S}}$ with the spreading code matrix $\mathbf{C}^{(n)}$ in Fig. 2.
\begin{algorithm}[t]
\DontPrintSemicolon
\KwData{${\bf{\widetilde X}}: = ({{{\widetilde x}_{i,j}}}) \in {\mathbb{R}^{K \times N}}$\;
~~~~~~~~(where $i = 1, \ldots ,K$, and $j = 1, \ldots ,N$)}
\KwResult{${\widehat{\bf{X}}} \leftarrow {{\mathbf{\omega}}^{{T}}}{\widetilde{\bf{X}}}$}
${{{\widetilde x}_{i,j}}}\leftarrow{{{\widetilde x}_{i,j}}}-\frac{1}{N}\sum\limits_{j = 1}^N {{{\widetilde x}_{i,j}}}$ \tcp*[r]{Centering the data}
$[{\bf{Q}},{\bf{\Gamma}}] \leftarrow {\rm{eig}}({\mathop{\rm cov}} ({\bf{\widetilde X}}))$\;
${\bf{\widetilde X}} \leftarrow {\bf{Q}}{{\bf{\Gamma}}^{ - 1/2}}{{\bf{Q}}^T}{\bf{\widetilde X}} $ \tcp*[r]{Whitening the data}
To find initial (random) weight vector ${{\bf{\omega}}_{0}}$; $k = 0$\;
\While{not converged}{
${{\bf{\omega}}_{k}} \leftarrow E\{ {\widetilde{\bf{X}}}g{({{\bf{\omega}}_{k}^{{T}}}{\widetilde{\bf{X}}})^{{T}}}\} - E\{ g'({{\bf{\omega}}_{k}^{{T}}}{\widetilde{\bf{X}}})\} {\bf{\omega}}_{k}$\;
\tcp*[h]{where $E\{ \cdot \}$ means averaging over\\ all column vectors of matrix ${\bf{\widetilde X}})$}
${\bf{\omega}}_{k+1} \leftarrow {{\bf{\omega}}}_{k}/\left\| {{{\bf{\omega}}_{k}}} \right\|$\;
$k \leftarrow k + 1$\;
}
\caption{Fast ICA for ICA problem}\label{alg:fICA}
\end{algorithm}
\subsection{Fast ICA Algorithm for ICA}
The next step of our AJ receiver structure is the ICA block which reconstructs the final estimate of the input data ${\widehat{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ from a mixed observation ${\widetilde{\bf{X}}} \in {\mathbb{R}^{K \times N}}$.
BSS using ICA can not only detect multi-user signals, but also suppress multi-user interference, inter-symbol interference, and intentional jamming signals in CDMA systems \cite{BSS-ICA-CDMA,DEK10}.
The authors of \cite{DEK10} evaluated the anti-jamming performance of such a receiver and showed numerically an SJR gain of 5dB at a bit-error-ratio (BER) of $10^{-3}$ under the AWGN channel when the signal-to-noise ratio (SNR) is fixed at 20dB.
In our scenario, ICA reconstructs the original signal ${\widehat{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ from ${\widetilde{\bf{X}}} \in {\mathbb{R}^{K \times N}}$, which is also shown in Fig. 2.
To extract independent components from the mixture matrix, we adapt the Fast ICA algorithm \cite{DEK12} which is based on a fixed-point iteration.
For computational simplicity and fast convergence, many studies (also in \cite{DEK10,BMD-DSCDMA}) consider Fast ICA, which is the most popular ICA algorithm thus far.
The Fast ICA algorithm used to restructure ${\widehat{\bf{X}}} \in {\mathbb{R}^{K \times N}}$ is described in Algorithm \ref{alg:fICA}, where $ g(a) = \tanh (a)$ and $g'(a) = 1 - \tanh^{2} (a)$.
The notation ${\mathop{\rm cov}} ({\bf{A}})$ symbolizes the covariance matrix of ${\bf{A}}$. A procedure $[{\bf{Q}},{\bf{\Gamma}}]={\rm{eig}}({\bf{A}})$ performs eigendecomposition of a matrix $\mathbf{A}=\mathbf{Q\Gamma}\mathbf{Q}^{-1}$, where $\mathbf{Q}$ is the square matrix whose columns vectors are eigenvectors of $\mathbf{A}$, and $\mathbf{\Gamma}$ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues.
Fast ICA effectively separates the input data ${\widehat{\bf{X}}}$ by finding an inverse transformation ${{\bf{\omega}}^{T}}{\widetilde{\bf{X}}}$ that maximizes the statistical independence.
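A one-unit Python sketch of the Fast ICA iteration of Algorithm \ref{alg:fICA} is shown below; it extracts a single weight vector $\mathbf{\omega}$, and recovering all $K$ sources would additionally require a deflation (orthogonalization) step, which is omitted here.
\begin{verbatim}
import numpy as np

def fastica_one(Xt, max_iter=200, tol=1e-8, seed=0):
    """One-unit Fast ICA on a K x N mixture Xt (rows are mixed signals).
    Returns the weight vector w and the whitened data."""
    rng = np.random.default_rng(seed)
    K, N = Xt.shape
    X = Xt - Xt.mean(axis=1, keepdims=True)          # centering
    vals, Q = np.linalg.eigh(np.cov(X))              # whitening
    X = Q @ np.diag(vals ** -0.5) @ Q.T @ X
    w = rng.standard_normal(K)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ X                                   # projections (length N)
        g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - gp.mean() * w # fixed-point update
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < tol  # up to a sign flip
        w = w_new
        if converged:
            break
    return w, X
\end{verbatim}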
In the next section, we perform extensive simulations to verify the anti-jamming ability of the proposed receiver.
\section{Simulation Results and Discussions}
The anti-jamming DS-CDMA receivers using matrix decomposition methods such as Robust PCA and ICA are assessed through simulations for the following two receiver types:
\begin{itemize}
\item Receiver-Type1 : The conventional anti-jamming DS-CDMA receiver using ICA without Robust PCA,
\item Receiver-Type2 : The proposed anti-jamming DS-CDMA receiver using both, ICA and Robust PCA approaches.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{03_BER_vs_SJR_Urban_LOS.pdf}
\caption{BER versus SJR with SNR fixed to 5dB and 10dB under urban environments (LOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{04_BER_vs_SJR_Urban_nLOS.pdf}
\caption{BER versus SJR with SNR fixed to 5dB and 10dB under urban environments (nLOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{05_BER_vs_SJR_Rural_LOS.pdf}
\caption{BER versus SJR with SNR fixed to 5dB and 10dB under rural environments (LOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{06_BER_vs_SJR_Rural_nLOS.pdf}
\caption{BER versus SJR with SNR fixed to 5dB and 10dB under rural environments (nLOS).}
\end{figure}
The DS-CDMA transmitted signals are generated with the following system parameters: $K=30$ users, $N=1000$ bits, and a Walsh spreading code of length $M=1024$. The system transmits $M$ chips within each bit duration, bearing the information of $K$ users.
We consider various types of the downlink channels such as urban environments with a LOS path and without a LOS path (nLOS), and rural environments with LOS/nLOS. It is known that the downlink channel in LMS communications is a frequency selective channel due to its multi-path propagation consisting of a direct path, near echoes, and far echoes. Parameter sets including a number of taps, delays, and channel gains are set to the measurement data of the LMS International Telecommunication Union (ITU) model \cite{DEK3}.
In the MT-FH uplink jamming scenario, the probability $p$ that the $m^{th}$ frequency is jammed at the $n^{th}$ bit duration is set to be 0.1 i.e., $p=0.1$. Simulations also consider a range of rank-$r$ MT-FH jamming signals such as 1, 100, 200, 500, and at most 1000. The rank of the MT-FH jamming represents the number of hopping events. The case of $r=1$ is a typical MT jamming without hopping, and $r=1000$ is an MT-FH jamming with hopping every bit duration.
We run 1000 Monte Carlo simulations to observe a reliable BER level of $10^{-5}$ with $\textrm{SJR} =\left[-30,0\right] \textrm{dB}$ and $\textrm{SNR}=5 \textrm{ and } 10\, \textrm{dB}$, as used in \cite{BSS-ICA-CDMA, DEK10}.
It is worth noting that broad-band noise jamming can be more effective than MT-FH jamming when the SJR is very low. However, in this paper we focus on MT-FH jamming in order to discuss the effects of the jamming rank $r$ on the performance of the proposed receiver.
Figs. 3, 4, 5, and 6 show the BER performance of the Receiver-Type1 and the Receiver-Type2 versus the SJR for various ranks of the MT-FH jamming signal under four different channel scenarios.
The simulation results in Figs. 3 and 4 consider urban environments with 5-path frequency-selective downlink channels, while Figs. 5 and 6 present the BER performance in rural environments with 3-path channels.
Furthermore, Figs. 3 and 5 consider the presence of a LOS path, whereas Figs. 4 and 6 do not.
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{07_BER_vs_r_SJR_Urban_LOS.pdf}
\caption{BER versus $r$ (rank of jamming signal) change with SNR fixed to 10dB under urban environments (LOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{08_BER_vs_r_SJR_Urban_nLOS.pdf}
\caption{BER versus $r$ (rank of jamming signal) change with SNR fixed to 10dB under urban environments (nLOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{09_BER_vs_r_SJR_Rural_LOS.pdf}
\caption{BER versus $r$ (rank of jamming signal) change with SNR fixed to 10dB under rural environments (LOS).}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 8.6cm]{10_BER_vs_r_SJR_Rural_nLOS.pdf}
\caption{BER versus of $r$ (rank of jamming signal) change with SNR fixed to 10dB under rural environments (nLOS).}
\end{figure}
Fig. 3 presents the anti-jamming performance of the aforementioned two receivers versus SJR under the urban environments including a LOS path with $\textrm{SNR}=5 \textrm{ and } 10\, \textrm{dB}$ on the left and right figures, respectively.
Each subfigure considers the MT-FH uplink jammer with different jamming ranks of 1, 100, and 1000.
The blue curves are for the Receiver-Type2, and the red curves are for the Receiver-Type1.
The results show that the Receiver-Type2 outperforms the Receiver-Type1 in most cases and that the BER performance of the Receiver-Type2 improves as the rank decreases.
In particular, the Receiver-Type2 completely decomposes the transmitted DS-CDMA signal matrix and the MT-FH jamming signal matrix with rank $r=1$ when the SJR exceeds -20dB at a fixed SNR of 10dB.
This implies that the typical MT jamming without FH can be easily separated by the Robust PCA even with very high SJR value.
It is also noteworthy that typical uplink jammers in GPSs are commonly simple single tone pulse generators with a high power amplifier, which can be effectively mitigated by using the proposed receiver.
In the case where the MT-FH jamming signal hops every bit duration, so that the rank increases up to $r=1000$, the Receiver-Type2 at SNR=10dB still guarantees an anti-jamming performance comparable to its counterpart.
In the other case of SNR=5dB, although the Receiver-Type1 outperforms the Receiver-Type2 for $r=1000$, the Receiver-Type2 performs better for MT-FH jamming signals with low hopping rates.
The simulation results also show that the BERs of the Receiver-Type1 are almost identical and independent of the rank of the jamming signal for both SNR=5dB and 10dB.
This implies that ICA does not utilize low-dimensionality to decompose the signals.
In Fig. 4, we simulate the BER of the two receivers under conditions similar to those of Fig. 3, except that the urban environment with an nLOS path is considered.
From Fig. 4, we observe that the BER performance of the Receiver-Type2 improves when the jammer decreases its hopping rate (the rank $r$).
However, it should be noticed that MT-FH jammers, which require a high-rank $r$, are not common due to their high complexity and hardware costs in practical satellite communication systems.
The figure also shows that the BERs of the Receiver-Type1/2 are saturated to $1.5\cdot10^{-3}$ and $5\cdot10^{-6}$ as SJR increases when SNR=5dB and 10dB, respectively.
This result is explained by the effects of the LMS channel under the urban environment with nLOS that implies a highly fading channel.
Similar to Fig. 3, the worst BER performance of the Receiver-Type2 is observed when $r=1000$.
Fig. 5 and 6 plot the BERs of the Receiver-Type1/2 under the rural environment with LOS/nLOS.
The BERs of the Receiver-Type1 under the rural environments are almost equal to the BERs under the urban environments. One difference is that the BERs under the rural environment with nLOS are not saturated within the simulated SJR region.
For the case of $r=1$ (no hopping) of the Receiver-Type2, the BERs approach roughly $10^{-5}$ at SJR=-20dB and SNR=10dB while the BER of the Receiver-Type2 under the urban environment with LOS is $10^{-3}$.
One reason why the BERs of the Receiver-Type1/2 do not saturate and the Receiver-Type2 gives better anti-jamming performance is that the rural LMS channels are characterized by fewer paths and longer-delay channel impulse responses than the urban environments.
\begin{figure*}[t!]
\centering
\subfloat[Runtimes and BER of the Receiver-Type1/2 versus the number of users $K$, with $M=128, N=100$]{
\label{sfig:RunBERvsK}
\centering
\includegraphics[width = 5.6cm]{11_a_Runtime_BER_vs_K.pdf}
}
\hfill
\subfloat[Runtimes and BER of the Receiver-Type1/2 versus the spreading code length $M$ with $K=3, N=100$]{
\label{sfig:RunBERvsM}
\centering
\includegraphics[width = 5.6cm]{11_b_Runtime_BER_vs_M.pdf}
}
\hfill
\subfloat[Runtimes and BER of the Receiver-Type1/2 versus the number of bits $N$ with $K=6, M=128$]{
\label{sfig:RunBERvsN}
\centering
\includegraphics[width = 5.6cm]{11_c_Runtime_BER_vs_N.pdf}
}
\caption{Runtimes and BER of the Receiver-Type1/2 versus the number of users $K$, the spreading code length $M$, and the number of bits $N$. Simulation considers the rural environment with nLOS path, SNR=5dB, SJR=-10dB, and the jamming rank is $N/10$. }
\label{fig:RunBERvsKMN}
\end{figure*}
In Fig. 7, 8, 9, and 10, we compare the anti-jamming performances of the Receiver-Type1/2 versus the rank of the jamming signal with $\textrm{SJR} = -25, -20, -15,\ \textrm{and} -10 \textrm{dB}$ under the MT-FH uplink jamming for four different environments.
Fig. 7 and 8 plot the BERs versus the rank-$r$ under urban environments with LOS/nLOS and Fig. 9 and 10 are under rural environments with LOS/nLOS.
Red dotted curves correspond to the BERs of the Receiver-Type1 for different SJRs, and blue curves are for the BERs of the Receiver-Type2.
The x-axis represents the rank of the jamming signal, $r \in \{1, 100, 200, 500, 1000\}$, where the minimum $r$ corresponds to no hopping and the maximum $r$ corresponds to hopping every bit duration.
Across Figs. 7, 8, 9, and 10, as the rank of the jamming decreases, the anti-jamming capability of the Receiver-Type2 improves. On the other hand, the Receiver-Type1 does not improve its BER performance even as the rank of the jamming decreases.
In the low-rank range of $r<200$, which represents MT-FH jammers with less hopping, the Receiver-Type2 significantly outperforms the Receiver-Type1 for a wide range of SJRs. Moreover, for high-rank jamming, the Receiver-Type2 performs equally well or only slightly worse than the Receiver-Type1, depending on the SJR.
The BER differences between the Receiver-Type1/2 for high-rank jamming decrease as the SJR decreases. In addition, the range of rank values over which the Receiver-Type2 performs better than the Receiver-Type1 becomes wider as the SJR decreases, i.e., as the jamming power increases.
It is observed that the range of rank values over which the Receiver-Type2 outperforms the Receiver-Type1 becomes smaller as the LMS downlink channel becomes more severe.
Simulation results conclusively remark that the proposed Receiver-Type2 is more effective than the conventional Receiver-Type1 for low-rank ($r<200$), high power jammers, and less-severe multi-path environments. In addition, even for high-rank and more-severe multi-path channels, the Receiver-Type2 is competitive to the Receiver-Type1.
The CPU runtimes of the MATLAB implementations and BER performances of the Receiver-Type1 and the Receiver-Type2 with respect to various DS-CDMA system parameters are summarized in Fig. \ref{fig:RunBERvsKMN}.
The subfigures for the number of users $K$, the spreading code length $M$, and the number of bits $N$ are presented in Fig. \ref{sfig:RunBERvsK}, Fig. \ref{sfig:RunBERvsM}, and Fig. \ref{sfig:RunBERvsN}, respectively.
The rural environment with nLOS path is assumed, and SNR and the jamming rank are set to 5dB and $N/10$.
In addition, BERs are measured at SJR of -10dB.
Generally, the results show that the computational time of the Receiver-Type2, combining Robust PCA and ICA, is comparable to that of the Receiver-Type1 using ICA only.
Fig. \ref{sfig:RunBERvsK} shows that the computational time of the Receiver-Type1 increases linearly as the number of users $K$ increases, while the gap between the runtimes of the Receiver-Type2 and the Receiver-Type1 shrinks.
It is also seen that increasing $K$ degrades the BER performance of both the Receiver-Type1 and the Receiver-Type2.
Fig. \ref{sfig:RunBERvsM} shows that the spreading code length $M$ only linearly increases the CPU runtime of the Receiver-Type2, while the BER performance of the Receiver-Type2 improves exponentially.
Moreover, in Fig. \ref{sfig:RunBERvsN}, we observe that the runtimes of both the Receiver-Type1 and the Receiver-Type2 increase as the number of bits $N$ increases.
It is also noted that the additional runtime incurred by adding Robust PCA to the Receiver-Type1 is smaller than the runtime of the Receiver-Type1 itself when the number of bits $N$ is less than 400.
\section{Conclusion}
In this paper, we considered the anti-jamming problem of DS-CDMA receivers against the presence of uplink jammers under LMS communication systems.
We developed an anti-jamming DS-CDMA receiver that decomposes the received signal into the transmitted signal and the unintended uplink jamming signal by exploiting the fact that both typically have low dimensionality.
Utilizing their low-dimensionality attributes, we suggested the integration of Robust PCA and ICA approaches, which are implemented by iALM and Fast ICA algorithms.
Anti-jamming performances of Receiver-Type1 (the conventional receiver using only ICA without Robust PCA) and Receiver-Type2 (the proposed receiver using both Robust PCA and ICA) were assessed in the scenarios that consider the MT-FH uplink jammer and practical downlink channels including urban and rural environments.
Simulation results show that Robust PCA in Receiver-Type2 achieves significant performance improvement as compared with the Receiver-Type1 for a wide range of the rank of the MT-FH jamming signal. This implies that Robust PCA separates various jamming signals more effectively than ICA only. The performance improvement increases as the rank decreases.
For ranks lower than 200 that represent MT-FH jamming signals with less hopping, Receiver-Type2 outperforms Receiver-Type1. Even for large ranks that signify frequent hopping jamming, Receiver-Type2 shows a comparable performance to its counterpart.
In conclusion, our proposed receiver has potential applications in DS-CDMA based LMS systems under various uplink jammers.
\section*{Acknowledgment}
The authors gratefully acknowledge the support from Electronic Warfare Research Center (EWRC) at Gwangju Institute of Science and Technology (GIST), originally funded by Defense Acquisition Program Administration (DAPA) and Agency for Defense Development (ADD).
\section{Introduction}
It is widely accepted that Alfv\'en waves \citep{Alfve42} play an important role in the heating \citep{Alfve47,Oster61,Matth99} and acceleration \citep{Belch71b,Jacqu77,Heine80} of the solar wind.
Indeed, Alfv\'en waves are observed in the solar atmosphere \citep{DePon07a,Tomcz07,McInt11,Sriva17} and solar wind \citep{Colem68,Belch71a}.
Nonthermal line width \citep{Baner09,Hahn013} and Faraday-rotation fluctuations \citep{Hollw82c,Hollw10} also indicate the existence of Alfv\'en waves in the corona.
Meanwhile, the dissipation process of Alfv\'en waves in the corona and solar wind is still under discussion.
Since the amount and location of Alfv\'en-wave dissipation vary with respect to the mechanism and strongly affect the coronal temperature and wind velocity \refp{Hanst12},
clarifying the elemental processes is important not only for plasma physics but also for space weather.
There are several processes of Alfv\'en-wave dissipation.
If there are counter-propagating Alfv\'en waves, Alfv\'en-wave turbulence \refp{Irosh64,Kraic65,Dobro80,Goldr95} evolves.
In the corona and solar wind, because of the inhomogeneity,
Alfv\'en waves partially reflect \refp{Ferra58,Heine80,An00090,Velli93,Cranm05} and Alfv\'en-wave turbulence is sustained \refp{Dmitr03, Ought06}.
This reflection-driven Alfv\'en-wave turbulence is frequently studied \refp{Matth99,Dmitr02,Verdi07,Perez13,Balle16},
and some models explain the heating and acceleration of the solar wind self-consistently \citep{Cranm07,Verdi10}.
Alfv\'en-wave turbulence is also important for the energy cascade and the formation of the power spectrum \refp{Verdi12a,Balle17}.
When the Alfv\'en velocity is inhomogeneous perpendicular to the magnetic field lines, phase mixing begins \refp{Heyva83,DeGro02,Goose12}.
The density variation across the magnetic field lines is observed in the corona \refp{Tian011,Raymo14}, and this indicates the possibility of phase mixing.
Several studies show the role of phase mixing and related phenomena in the solar atmosphere \refp{Antol15,Kanek15}.
Recently, it was numerically shown that phase mixing can generate turbulent structure \refp{Magya17}.
Since the amplitude of an Alfv\'en wave is not small and the plasma beta is low \refp{Gary001,Iwai014,Bourd17},
the (extended) corona and solar wind are preferable locations for the development of parametric decay instability (PDI).
PDI is a type of instability of an Alfv\'en wave \refp{Galee63,Sagde69,Golds78,Derby78} and was recently observed in laboratory plasma \refp{Dorfm16} and in the solar wind \refp{Bowen18}.
As a result of PDI, a large-amplitude longitudinal wave is generated \refp{Hoshi89,DelZa01}, and the plasma is heated up by the resultant shock wave.
\reft{Suzuk05,Suzuk06a} demonstrated that, without Alfv\'en-wave turbulence, the coronal heating and solar-wind acceleration are explained self-consistently by PDI.
These studies were extended to two dimensions (2D) by \reft{Matsu12,Matsu14}.
In addition, the cross-helicity evolution in the fast solar wind \refp{Bavas82,Bavas00} might be due to PDI \citep{Malar96,Malar00,Shoda16}.
\reft{Chand18} also argued that the $1/f$ spectrum observed in the fast solar wind \refp{Bruno13} possibly results from PDI.
We note that Alfv\'en-wave turbulence and PDI are not independent of each other,
because PDI generates large-amplitude backscattered Alfv\'en waves \refp{Sagde69,Golds78} and enhances the heating by Alfv\'en-wave turbulence.
\citet{Shoda18a} showed that,
due to PDI, the turbulence heating rate per unit mass increases ($\sim 10^{11} {\rm \ erg \ g^{-1} \ s^{-1}}$)
compared with the reduced-MHD (without-PDI) value ($\sim 10^{10} {\rm \ erg \ g^{-1} \ s^{-1}}$) \citep{Perez13,Balle16}.
Amongst the aforementioned dissipation processes, we focus on PDI in this study.
The PDI of monochromatic Alfv\'en waves in a time-independent, uniform background with MHD approximation is well studied.
In the limit of $\beta \ll 1$ and $\eta = \delta B / B_0 \ll 1$ where $B_0$ and $\delta B$ denote the mean and fluctuating magnetic field, respectively, the growth rate is given as \citep{Galee63,Sagde69}
\begin{align}
\gamma / \omega_0 = \frac{1}{2} \eta \beta^{-1/4}, \label{eq:gamma_sagdeev}
\end{align}
where $\omega_0$ is the angular frequency of the parent wave.
Here we define $\beta$ as
\begin{align}
\beta = c_s^2/v_A^2,
\end{align}
where $c_s$ and $v_A$ denote the sound and Alfv\'en speed, respectively.
The general dispersion relation that considers full four-wave interaction \refp{Lashm76} is given by \reft{Golds78} and \reft{Derby78} as
\begin{align}
\left( \omega-k \right) \left( \omega^2 - \beta k^2 \right) \left[ \left(\omega+k \right)^2 -4 \right] \nonumber \\
= \eta^2 k^2 \left( \omega^3 + k \omega^2 - 3\omega + k \right),
\label{eq:gamma_gd78}
\end{align}
where $\omega$ and $k$ are normalized by the parent-wave frequency $\omega_0$ and wavenumber $k_0$.
In this study, we call Eq. (\ref{eq:gamma_gd78}) the Goldstein--Derby dispersion relation.
By solving Eq. (\ref{eq:gamma_gd78}), \reft{Golds78} confirmed that
the classical understanding that the parent wave decays into a forward acoustic wave and a backward Alfv\'en wave is correct in the low-beta regime.
In the high-beta plasma, however, the behavior of the instability changes \refp{Jayan93}.
The linear stage of this {\it ideal} (monochromatic, time-independent, and uniform) case is well understood.
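As an illustration of the linear theory, the short Python sketch below numerically locates the most unstable root of Eq. (\ref{eq:gamma_gd78}) by scanning the normalized daughter wavenumber $k$; the scan range and resolution are arbitrary choices, and in the low-beta, small-amplitude regime the result should lie close to the estimate of Eq. (\ref{eq:gamma_sagdeev}).
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def pdi_growth_rate(beta, eta, k_grid=np.linspace(0.01, 3.0, 600)):
    """Maximum Im(omega) of the Goldstein--Derby relation,
    maximized over the normalized daughter wavenumber k."""
    gamma = 0.0
    for k in k_grid:
        # LHS: (w - k)(w^2 - beta k^2)[(w + k)^2 - 4], low-order coeffs first
        lhs = P.polymul([-k, 1.0], [-beta * k**2, 0.0, 1.0])
        lhs = P.polymul(lhs, [k**2 - 4.0, 2.0 * k, 1.0])
        # RHS: eta^2 k^2 (w^3 + k w^2 - 3 w + k)
        rhs = eta**2 * k**2 * np.array([k, -3.0, k, 1.0])
        roots = P.polyroots(P.polysub(lhs, rhs))
        gamma = max(gamma, roots.imag.max())
    return gamma

beta, eta = 0.01, 0.1
print(pdi_growth_rate(beta, eta))     # numerical maximum growth rate
print(0.5 * eta * beta**-0.25)        # low-beta estimate eta beta^(-1/4)/2
\end{verbatim}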
The nonlinear stage of PDI is also frequently studied using numerical simulation.
\reft{Hoshi89} studied the linear-to-nonlinear evolution of PDI.
This study was extended to multi-dimensional simulations in both low- and high-beta cases \refp{Ghosh94a,Ghosh94b}.
\reft{DelZa01} investigated the evolution of PDI with different plasma parameters, different dimensions and different boundary conditions to show the robustness of PDI.
Recently, the three-dimensional (3D) hybrid simulation of PDI-driven turbulence has been studied \refp{Fu00017}.
There are several studies on the linear growth rate of PDI under {\it non-ideal} situations.
Two-fluid and kinetic simulations have also been performed \refp{Teras86,Nariy08}.
The PDI of non-monochromatic Alfv\'en waves tends to have a smaller growth rate \refp{Cohen74b,Umeki92,Malar96,Malar00}.
If the background is turbulent, the growth rate is quenched compared with the {\it ideal} value \refp{Shi0017}.
The solar wind acceleration and expansion also work to reduce the growth rate \refp{Tener13,DelZa15}.
Recently, the effect of temperature anisotropy on PDI has also been studied \refp{Tener17b}.
Specifically in the solar wind close to the Sun, wind acceleration and expansion play an important role.
Such effects are frequently studied using a local co-moving box in the so-called accelerating expanding box (AEB) model \refp{Velli92,Grapp93,Grapp96,Tener17a}.
One problem with the AEB model is that the dynamics and energetics are not self-consistent;
initially, we have to assume the background quantities such as flow speed or Alfv\'en speed and ignore the feedback of wave heating and acceleration on them.
Our motivation is to test the idea obtained from the AEB model using a non-local simulation box that extends from the corona to the distant heliosphere.
This paper is organized as follows.
In Section \ref{sec:method}, we describe the basic equations, numerical scheme, and boundary conditions used in this study.
Section \ref{sec:mono} and Section \ref{sec:broad} describe the results with monochromatic wave injection and broadband wave injection, respectively.
We summarize this paper in Section \ref{sec:summary}.
\section{Numerical method } \label{sec:method}
\subsection{Basic equations and setting}
We used the same equations as those in \reft{Shoda18a} and
considered a one-dimensional system whose coordinate $r$ is curved along the background magnetic field line.
The basic equations used were
\begin{align}
&\frac{\partial}{\partial t} \left( \rho r^2 f \right) + \frac{\partial }{\partial r} \left( \rho v_r r^2 f \right) =0, \label{eq:mass} \\
&\frac{\partial}{\partial t} \left( \rho v_r r^2 f \right) + \frac{\partial }{\partial r} \left[ \left( \rho {v_r}^2 + p + \frac{{\vec{B}_{\perp}}^2}{8\pi} \right) r^2 f \right] \nonumber \\
&= \left( p + \frac{\rho {\vec{v}_{\perp}}^2}{2} \right) \frac{d}{dr} \left( r^2 f \right) - \rho g r^2 f, \label{eq:eomradial} \\
&\frac{\partial}{\partial t} \left( \rho \vec{v}_{\perp} r^3 f^{3/2} \right) + \frac{\partial }{\partial r} \left[ \left( \rho v_r \vec{v}_{\perp} - \frac{B_r \vec{B}_{\perp}}{4 \pi} \right) r^3 f^{3/2} \right] \nonumber \\
&= -\hat{\vec{\eta}}_1 \cdot \rho \vec{v}_{\perp} r^3 f^{3/2} - \hat{\vec{\eta}}_2 \cdot \sqrt{\frac{\rho}{4 \pi}} \vec{B}_{\perp} r^3 f^{3/2}, \label{eq:vt} \\
&\frac{\partial}{\partial t} \left( \vec{B}_{\perp} r \sqrt{f} \right) + \frac{\partial }{\partial r} \left[ \left( \vec{B}_{\perp} v_r - B_r \vec{v}_{\perp} \right) r \sqrt{f} \right] \nonumber \\
&= -\hat{\vec{\eta}}_1 \cdot \vec{B}_{\perp} r \sqrt{f} - \hat{\vec{\eta}}_2 \cdot \sqrt{4 \pi \rho} \vec{v}_{\perp} r \sqrt{f}, \label{eq:bt} \\
&\frac{d}{dr} \left(B_r r^2 f \right) = 0, \label{eq:divb} \\
&\frac{\partial}{\partial t} \left[ \left( e + \frac{1}{2} \rho \vec{v}^2 + \frac{\vec{B}^2}{8 \pi} \right) r^2 f \right] \nonumber \\
&+ \frac{\partial }{\partial r} \left[\left( e + p + \frac{1}{2} \rho \vec{v}^2 + \frac{{\vec{B}_{\perp}}^2}{4 \pi} \right ) v_r r^2 f - B_r \frac{\vec{B}_{\perp} \cdot \vec{v}_{\perp}}{4 \pi} r^2 f \right] \nonumber \\
&= r^2 f \left( - \rho g v_r + Q_{\rm cond} \right), \label{eq:energy} \\
& e = \frac{p}{\Gamma-1}, \ \ \ \ p = \frac{\rho k_B T}{\mu}. \label{eq:eos} \
\end{align}
See Appendix in \reft{Shoda18b} for the derivation.
We denoted the perpendicular components of $\vec{X}$ as $\vec{X}_\perp = X_x \vec{e}_x + X_y \vec{e}_y$,
and we assumed that the plasma is composed of only hydrogen and is fully ionized in the entire simulation region.
Therefore, the mean molecular mass $\mu$ satisfies $\mu = 0.5 m_p$, where $m_p$ is the proton mass.
$\Gamma$ is the adiabatic index (ratio of specific heats): $\Gamma = 5/3$.
$f$ is the expansion factor of the flux tube \refp{Levin77,Wang090,Arge000}.
In this study, following \reft{Kopp076} and \reft{Verdi10}, we assumed
\begin{align}
f(r) = \frac{f_{\rm exp} \exp \left[ \left( r - r_f \right) / \sigma \right] + f_1 }{\exp \left[ \left( r - r_f \right) / \sigma \right] + 1},
\end{align}
where $f_1 = 1- \left( f_{\rm exp} - 1 \right) \exp \left[ \left( R_\odot - r_f \right) / \sigma \right]$, $f_{\rm exp}=10$, $r_f = 1.3 R_\odot$ and $\sigma = 0.5 R_\odot$, so that $f \simeq 1$ near the coronal base and $f \to f_{\rm exp}$ far from the Sun.
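For concreteness, the following minimal Python sketch (our own function and variable names, with $r$, $r_f$, and $\sigma$ measured in units of $R_\odot$) evaluates this expansion factor:
\begin{verbatim}
import numpy as np

def expansion_factor(r, f_exp=10.0, r_f=1.3, sigma=0.5):
    # Super-radial expansion factor f(r); r, r_f, sigma in solar radii.
    f1 = 1.0 - (f_exp - 1.0) * np.exp((1.0 - r_f) / sigma)
    e = np.exp((r - r_f) / sigma)
    return (f_exp * e + f1) / (e + 1.0)

# f is close to 1 at the coronal base and approaches f_exp = 10 far away
print(expansion_factor(1.014), expansion_factor(215.0))
\end{verbatim}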
$\hat{\vec{\eta}}_1$ and $\hat{\vec{\eta}}_2$ are coefficient tensors that represent phenomenological turbulent decay.
\begin{align}
&\hat{\vec{\eta}}_1=
\frac{c_d}{4 \lambda} \left(
\begin{array}{cc}
|z^+_x| + |z^-_x| & 0 \\
0 & |z^+_y| + |z^-_y| \\
\end{array}
\right), \label{eq:eta1} \\
&\hat{\vec{\eta}}_2=
\frac{c_d}{4 \lambda} \left(
\begin{array}{cc}
|z^+_x| - |z^-_x| & 0 \\
0 & |z^+_y| - |z^-_y| \\
\end{array}
\right), \label{eq:eta2}
\end{align}
where $z^{\pm}_{x,y}$ are Els\"asser variables \refp{Elsas50}:
\begin{align}
z^{\pm}_{x,y} = v_{x,y} \mp B_{x,y} / \sqrt{4 \pi \rho}.
\end{align}
\reft{Shoda18a} showed that these terms are a natural extension of a widely used phenomenological model of Alfv\'en-wave turbulence \refp{Hossa95,Dmitr02,Verdi07,Chand09a}.
$c_d = 0.1$ was chosen in this study \refp{Balle17}.
$\lambda$ is the perpendicular correlation length of turbulence.
We assumed that the correlation length is proportional to the flux-tube radius:
\begin{align}
\lambda \propto B_r^{-1/2}.
\label{eq:lambda}
\end{align}
Using the phenomenological turbulence term together with Eq. (\ref{eq:lambda}),
both local \refp{Cranm07,Verdi10,Lione14,Shoda18a} and global \refp{Holst14} simulations succeeded in modeling the corona and solar wind.
The correlation length at the coronal base ($\lambda_0$) is
\begin{align}
\lambda_0 = 1{\rm \ Mm}.
\end{align}
This is based on the assumption that Alfv\'en waves are generated inside the magnetic patches on the photosphere and propagate upward along the flux tube \refp{Balle11}.
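As an illustration, these closure terms could be evaluated as in the following minimal sketch (our own naming; the diagonal tensors are represented as two-component arrays, and the correlation length is normalized to $\lambda_0 = 1$ Mm at the coronal base where $B_r = 10$ G):
\begin{verbatim}
import numpy as np

def turbulence_coefficients(z_plus, z_minus, lam, c_d=0.1):
    # Diagonal entries of the damping tensors eta_1 and eta_2,
    # one entry per transverse component (x, y).
    # z_plus, z_minus : arrays [z_x, z_y] of Elsasser variables [cm/s]
    # lam             : perpendicular correlation length [cm]
    eta1 = c_d / (4.0 * lam) * (np.abs(z_plus) + np.abs(z_minus))
    eta2 = c_d / (4.0 * lam) * (np.abs(z_plus) - np.abs(z_minus))
    return eta1, eta2

def correlation_length(B_r, B_r0=10.0, lam0=1.0e8):
    # lambda ~ B_r^(-1/2), equal to lam0 = 1 Mm at the coronal base.
    return lam0 * np.sqrt(B_r0 / B_r)
\end{verbatim}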
$Q_{\rm cond}$ is the heating by thermal conduction given as
\begin{align}
Q_{\rm cond} = - \nabla \cdot \vec{q}_{\rm cond} = - \frac{1}{r^2 f} \frac{\partial}{\partial r} \left( r^2 f q_{\rm cond} \right),
\end{align}
where $\vec{q}_{\rm cond}$ is the conductive flux and $q_{\rm cond}$ represents its radial component.
The conductive flux is a combination of Spitzer-H\"arm flux \refp{Spitz53} and free-streaming flux \refp{Hollw74,Hollw76} given as
\begin{align}
q_{\rm cond} = \xi q_{\rm SH} + (1-\xi) q_{\rm FS}, \ \ \ \ \xi = \min \left(1, \frac{\rho}{\rho_{\rm SW}} \right)
\end{align}
where $\rho_{\rm SW} = 10^{-21} {\rm g \ cm^{-3}}$ and
\begin{align}
&q_{\rm SH} = - \kappa_0 T^{5/2} \frac{\partial}{\partial r} T, \\
&q_{\rm FS} = \frac{3}{4} \alpha p v_r.
\end{align}
In CGS-Gaussian units, $\kappa_0 \approx 10^{-6}$.
We fixed $\alpha=2$ in this study.
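A minimal sketch of this conduction bridge (our own naming; all quantities in CGS-Gaussian units, and the $\min$ form above keeps the collisional flux in the dense corona and the free-streaming flux in the rarefied wind):
\begin{verbatim}
KAPPA0 = 1.0e-6     # Spitzer-Harm conductivity [CGS-Gaussian]
RHO_SW = 1.0e-21    # transition density [g cm^-3]

def conductive_flux(rho, T, dT_dr, p, v_r, alpha=2.0):
    # Radial conductive flux bridging the collisional (Spitzer-Harm)
    # and collisionless (free-streaming) regimes.
    q_sh = -KAPPA0 * T**2.5 * dT_dr
    q_fs = 0.75 * alpha * p * v_r
    xi = min(1.0, rho / RHO_SW)   # xi -> 1 in the dense corona
    return xi * q_sh + (1.0 - xi) * q_fs
\end{verbatim}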
Radiative cooling is ignored because of its small contribution to the coronal energy budget \refp{Matsu14,Balle16}.
The coronal base is cooled down by keeping the bottom temperature fixed.
\subsection{Numerical scheme and boundary conditions}
We solved the basic equations (\ref{eq:mass})-(\ref{eq:eos}) from the coronal base ($r=1.014R_\odot$) to 1 au ($r= 215 R_\odot$).
The computational domain was resolved with 50\,000 uniform grid points.
The Harten-Lax-van Leer-discontinuities (HLLD) approximate Riemann solver \refp{Miyos05} with 2nd-order monotone upstream-centered scheme for conservation laws (MUSCL) reconstruction \refp{Leer079} was used to calculate the numerical flux,
while the 3rd-order strong stability preserving (SSP) Runge-Kutta method \refp{Shu0088} was used for time integration.
The free boundary condition was imposed on the boundary at 1 au.
We confirmed that the boundary condition at 1 au does not affect the calculation because the supersonic and super-Alfv\'enic solar wind is formed in a quasi-steady state.
This is why we did not need to apply the transmitting boundary condition \refp{Thomp87,DelZa01,Suzuk06a}.
As for the lower boundary, the conditions were as follows.
Here we denoted the lower-boundary values with subscript $0$.
The mass density $\rho$, temperature $T$, and radial magnetic field strength $B_{r}$ were fixed to
\begin{align}
\rho_0 = 8.5 \times 10^{-16} {\rm \ g \ cm^{-3}}, \ \ T_0 = 4 \times 10^5 {\rm \ K}, \ \ B_{r,0} = 10 {\rm \ G}.
\end{align}
The quantity $B_{r,0}/f_{\rm exp}$ controls the solar-wind velocity \refp{Suzuk04,Suzuk06b,Fujik15,Revil17}.
According to \reft{Fujik15}, our setting ($B_{r,0}/f_{\rm exp} = 1 {\rm \ G}$) approximately corresponds to an asymptotic solar-wind velocity of $650 {\rm \ km \ s^{-1}}$.
We applied the free boundary conditions for the radial velocity and inward Els\"asser variable:
\begin{align}
\left. \frac{\partial}{\partial r} v_r \right|_0 = 0, \ \ \ \ \left. \frac{\partial}{\partial r} \vec{z}^{-} \right|_0 = 0,
\end{align}
where $\vec{z}^{-} = \vec{v}_\perp + \vec{B}_\perp / \sqrt{4 \pi \rho}$.
As for the upward Els\"asser variable, $\vec{z}^{+} = \vec{v}_\perp - \vec{B}_\perp / \sqrt{4 \pi \rho}$,
we applied monochromatic (Section \ref{sec:mono}) or broadband (Section \ref{sec:broad}) wave injections.
In both cases, the root-mean-square value of the transverse velocity $v_{\rm rms,0}$ was fixed to $v_{\rm rms,0} = 32 {\rm \ km \ s^{-1}}$.
In terms of the upward Els\"asser variable, the root-mean-square value was $z^{+}_{\rm rms,0} = 2 v_{\rm rms,0}$,
because $\left| z^{+}_{x,y} \right| \gg \left| z^{-}_{x,y} \right|$ at the coronal base and
\begin{align}
z^{+}_{\rm rms,0} &= \sqrt{ {z^{+}_x}^2 + {z^{+}_y}^2 } \nonumber \\
&\simeq \sqrt{ \left( z^{+}_x + z^{-}_x \right)^2 + \left( z^{+}_y + z^{-}_y \right)^2 } \nonumber \\
&= \sqrt{ 4 \left( v_x^2 + v_y^2 \right) } = 2 v_{\rm rms,0}.
\end{align}
Note that the injected energy flux $F_0$ was kept constant:
\begin{align}
F_0 = \frac{1}{4} \rho_0 {z^{+}_{0}}^2 v_{A,0} = 8.4 \times 10^5 {\rm \ erg \ cm^{-2} \ s^{-1}}.
\end{align}
This was larger than the energy injection required to sustain the solar wind in the open-field regions \refp{Withb77}.
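For reference, the quoted flux follows directly from the boundary values above; a quick numerical check (our own variable names):
\begin{verbatim}
import numpy as np

rho0    = 8.5e-16          # g cm^-3
B_r0    = 10.0             # G
v_rms0  = 32.0e5           # cm s^-1
z_plus0 = 2.0 * v_rms0     # cm s^-1

v_A0 = B_r0 / np.sqrt(4.0 * np.pi * rho0)   # ~ 9.7e7 cm/s
F0 = 0.25 * rho0 * z_plus0**2 * v_A0
print(F0)                                   # ~ 8.4e5 erg cm^-2 s^-1
\end{verbatim}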
\section{Monochromatic-wave injection } \label{sec:mono}
We first applied the monochromatic wave injections with different frequencies to discuss the basic properties.
The boundary condition of the upward Els\"asser variable was
\begin{align}
&z^{+}_{x,0} = 2 v_{\rm rms,0} \sin \left( 2 \pi f_0 t \right), \\
&z^{+}_{y,0} = 2 v_{\rm rms,0} \cos \left( 2 \pi f_0 t \right),
\end{align}
where $f_0$ is the injection frequency.
\subsection{Quasi-steady state } \label{sec:qs_mono}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{rs_compare}
\end{center}
\vspace{-1em}
\caption{
Snapshots of the quasi-steady states with different wave-injection frequencies.
Blue, green and red lines indicate $f_0 = 10^{-2.5} {\rm \ Hz}$, $10^{-3.5} {\rm \ Hz}$, $10^{-4.5} {\rm \ Hz}$, respectively.
Panels correspond to a: mass density, b: temperature, c: radial velocity, d: Els\"asser variables.
In Panel d, transparent and dashed lines indicate $z^{+}$ and $z^{-}$, respectively.
}
\label{fig:compare_rs_mono}
\end{figure}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=180mm]{fdepend_qs}
\end{center}
\vspace{-1em}
\caption{
Parameters of the corona and solar wind as functions of wave-driving frequency.
Panels indicate the a: solar wind speed at 1 au, b: maximum coronal temperature, c: mass-loss rate, and d: sonic point location.
Each parameter is measured in the time-averaged quasi-steady states.
}
\label{fig:fdepend_qs}
\end{figure*}
Figure \ref{fig:compare_rs_mono} shows the snapshots of the quasi-steady states of different injection frequencies:
$f_0 = 10^{-2.5} {\rm \ Hz \ (blue)}$, $10^{-3.5} {\rm \ Hz \ (green)}$, $10^{-4.5} {\rm \ Hz \ (red)}$.
Panels indicate from top to bottom the mass density $\rho$, temperature $T$, radial velocity $v_r$, and Els\"asser variables $z^{\pm} = \sqrt{{z_x^{\pm}}^2 + {z_y^{\pm}}^2}$ (transparent: $z^{+}$, dashed: $z^{-}$).
Although the same amount of energy flux ($F_0 = 8.4 \times 10^5 {\rm \ erg \ cm^{-2} \ s^{-1}}$) was injected in each case, the corresponding quasi-steady states showed different properties.
Firstly, a significant density fluctuation is observed when $f_0 = 10^{-2.5} {\rm \ Hz}$.
Because such a large density fluctuation is attributed to PDI, this indicates that PDI develops only when $f_0 > 10^{-3.5} {\rm \ Hz}$.
Els\"asser variables also show evidence of PDI when $f_0 = 10^{-2.5} {\rm \ Hz}$.
The ratio $z^{+}/z^{-}$ falls below unity in parts of the region $r/R_{\odot}>50$ when $f_0 = 10^{-2.5} {\rm \ Hz}$, while $z^{+}/z^{-} \gg 1$ when $f_0 \lesssim 10^{-3.5} {\rm \ Hz}$.
A natural interpretation of low $z^{+}/z^{-}$ is that, as a result of PDI, a large amount of reflected Alfv\'en waves is generated \refp{Sagde69,Golds78,Suzuk05} and is advected to 1 au.
The coronal temperature is the lowest in the medium-frequency case ($f_0=10^{-3.5} {\rm \ Hz}$).
When $f_0$ is high, because PDI occurs in the sub-Alfv\'enic corona, the coronal plasma is heated up by the shock and turbulence driven by PDI \refp{Shoda18a}.
Meanwhile, when $f_0$ is low, Alfv\'en waves are reflected efficiently \refp{An00090,Velli93,Cranm05} and the turbulent heating in the corona increases \refp{Matth99,Dmitr02,Ought06}.
This is why the medium-frequency case, in which PDI does not occur and reflection is weak, shows the lowest temperature of the corona.
As a result of the lower-temperature corona, the mass density of the wind is smaller and the wind is faster \refp{Hanst12}.
In Figure \ref{fig:fdepend_qs}, we show the dependence of solar wind parameters on $f_0$.
From left to right, we show the solar-wind velocity $v_r$ at $r=1{\rm \ au}$, maximum temperature $T_{\rm max}$,
mass-loss rate $\dot{M} = \rho v_r 4 \pi r^2$, and the sonic point $r_\ast$ where the sound speed $c_s$ is equal to the wind speed $v_r$.
Here, we assumed $c_s=\sqrt{p/\rho}$ because the plasma is almost isothermal due to the strong thermal conduction near the sonic point.
Every variable is averaged in time over $10^5 {\rm \ s}$.
Figure \ref{fig:fdepend_qs} shows that the solar wind properties depend non-monotonically on $f_0$;
slow, high-temperature, and high-density winds are driven in the cases with high and low $f_0$;
in contrast, fast, low-temperature, and low-density winds stream out in the cases with intermediate $f_0$.
As explained before, this bimodal behavior can be understood by the different characters of the reflection and dissipation of low- and high-frequency Alfv\'{e}n waves;
low-frequency waves dissipate by reflection-driven turbulence and high-frequency waves by PDI.
In addition, Fig. \ref{fig:fdepend_qs} indicates that the corona and solar wind have a frequency-filtering mechanism;
waves with a medium frequency $f_0 \sim 10^{-3.5} {\rm \ Hz}$ are the least dissipative and therefore propagate through most easily.
This might be responsible for the dominance of the hour-scale Alfv\'en waves observed in the solar wind \refp{Belch71a}.
Some features found in Fig. \ref{fig:fdepend_qs} are consistent with previous research.
In the high-frequency range, the solar wind velocity (Fig. \ref{fig:fdepend_qs}a) decreases as $f_0$ increases, and this result is consistent with \reft{Ofman98},
who showed the inverse correlation between the injection frequency and the resultant wind speed when $0.35 {\rm \ mHz} \lesssim f_0 \lesssim 3 {\rm \ mHz}$.
The critical point $r_*$ (Fig. \ref{fig:fdepend_qs}d) has a negative correlation with the temperature.
This is because the critical point is closer to the Sun when the sound speed is larger \refp{Parke58}.
\subsection{Decay law of density fluctuation in the accelerating and expanding solar wind } \label{sec:decaylaw}
Following \reft{Tener13}, we derived the linear decay law for slow magnetoacoustic waves in the accelerating and expanding solar wind.
We began with the conservation of mass: Eq. (\ref{eq:mass}).
Assuming that the density and radial velocity have mean $\rho_0$, $v_{r,0}$ and small fluctuation $\delta \rho$, $\delta v_r$ parts, we could express the linearized equation for $\delta \rho$ as
\begin{align}
\frac{\partial}{\partial t} \left( \delta \rho S \right) + \frac{\partial}{ \partial r} \left( \rho_0 \delta v_r S \right) + \frac{\partial}{ \partial r} \left( \delta \rho v_{r,0} S \right) = 0.
\label{eq:dro_linear}
\end{align}
where $S=r^2f$ represents the cross section of the flux tube.
We could safely assume that the compressible fluctuations come from upward slow mode because PDI generates the slow-mode wave propagating in the same direction as the parent Alfv\'en wave.
Therefore, $\delta \rho$ and $\delta v_r$ satisfy a characteristic relation of
\begin{align}
\delta \rho / \rho_0 = \delta v_r / c_s.
\label{eq:characteristic_slow}
\end{align}
This relation holds when the slow mode has an acoustic nature.
When $\beta \ll 1$, magnetic and acoustic perturbations decouple from each other.
In addition, the gravity effect is negligible when the wavelength is much smaller than the scale height of stratification.
Therefore, Eq. (\ref{eq:characteristic_slow}) is a good approximation because $\beta$ is small and the scale height is large in and above the corona.
Combining Eq. (\ref{eq:dro_linear}) and Eq. (\ref{eq:characteristic_slow}), we had
\begin{align}
\frac{\partial}{\partial t} \left( \delta \rho S \right) + \frac{\partial}{ \partial r} \left[ \delta \rho \left( v_{r,0} + c_s \right) S \right] = 0.
\end{align}
$\delta \rho$ was assumed to have the following form:
\begin{align}
\delta \rho \propto \exp \left[ i \left( kr - \omega t \right) \right].
\label{eq:dro_formulation}
\end{align}
Substituting Eq. (\ref{eq:dro_formulation}) into the combined equation above, we have
\begin{align}
- i \omega + i k \left( c_s + v_{r,0} \right) + \left( v_{r,0} + c_s \right) \frac{\partial}{\partial r} \ln S + \frac{\partial}{\partial r} \left( v_{r,0} + c_s \right) = 0.
\end{align}
If the background has little variation and the third term on the left-hand side is negligible, the usual dispersion relation of the acoustic wave ($\omega / k = c_s+v_{r,0}$) is obtained.
If not, we have
\begin{align}
\omega = \left( c_s+v_{r,0} \right) k - i \gamma_{\rm acc} - i \gamma_{\rm exp},
\end{align}
where
\begin{align}
\gamma_{\rm acc} = \frac{\partial}{\partial r} \left( v_{r,0} + c_s \right),
\end{align}
and
\begin{align}
\gamma_{\rm exp} = \left( v_{r,0} + c_s \right) \frac{\partial}{\partial r} \ln S,
\end{align}
are the damping rates by the acceleration and expansion of the solar wind, respectively.
In the linear regime, the density fluctuations have a decay rate of $\gamma_{\rm acc} + \gamma_{\rm exp}$.
Since density fluctuation should increase as a result of PDI, acceleration and expansion work to suppress the instability \refp{Tener13,DelZa15}.
The effective growth rate $\gamma_{\rm eff}$ of PDI is given as
\begin{align}
\gamma_{\rm eff} = \gamma_{\rm GD} - \gamma_{\rm acc} - \gamma_{\rm exp},
\label{eq:effective}
\end{align}
where $\gamma_{\rm GD}$ is a growth rate given by the Goldstein--Derby dispersion relation: Eq. (\ref{eq:gamma_gd78}).
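On a discretized background profile these rates are straightforward to evaluate; a minimal sketch using simple finite differences (our own naming) is:
\begin{verbatim}
import numpy as np

def damping_rates(r, v_r0, c_s, S):
    # Damping rates of slow-mode density fluctuations due to wind
    # acceleration and flux-tube expansion.
    # r, v_r0, c_s : 1D arrays of radius, mean wind speed, sound speed
    # S            : cross section r^2 f of the flux tube
    u = v_r0 + c_s
    gamma_acc = np.gradient(u, r)
    gamma_exp = u * np.gradient(np.log(S), r)
    return gamma_acc, gamma_exp

def effective_growth_rate(gamma_gd, gamma_acc, gamma_exp):
    # gamma_eff = gamma_GD - gamma_acc - gamma_exp
    return gamma_gd - gamma_acc - gamma_exp
\end{verbatim}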
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{gamma_compare}
\end{center}
\vspace{-1em}
\caption{
a: Decay rates of density fluctuation due to wind acceleration $\gamma_{\rm acc}$ (solid line) and due to wind expansion $\gamma_{\rm exp}$ (dashed line).
b: Growth rate of PDI given by the Goldstein--Derby dispersion relation $\gamma_{\rm GD}$ (dashed line) and the effective growth rate $\gamma_{\rm eff}$ (solid line).
Blue, green, orange, and red lines indicate $f_0 = 10^{-2.5} {\rm \ Hz}$, $10^{-3.0} {\rm \ Hz}$, $10^{-3.5} {\rm \ Hz}$, and $10^{-4.0} {\rm \ Hz}$, respectively.
We note that $\gamma_{\rm eff}$ of $f_0=10^{-4.0}{\rm \ Hz}$ (red solid) is negative in the entire region.
}
\label{fig:compare_gamma}
\end{figure}
\subsection{Doppler effect and effective growth rate}
To discuss the possibility of the onset of PDI for each $f_0$, we calculated $\gamma_{\rm eff}$ using Eq. (\ref{eq:effective}).
The normalized growth rate $\gamma_{\rm GD} / \omega$ was calculated from Eq. (\ref{eq:gamma_gd78}).
We should note that $\omega \ne 2 \pi f_0$ because of the Doppler effect by the acceleration of the solar wind.
$\omega$ should be the intrinsic frequency, that is, the wave frequency observed in a co-moving frame of the solar wind.
Because the wave frequency observed from a fixed coordinate is constant in a quasi-steady state, the wave number $k(r)$ satisfies
\begin{align}
k(r) = \frac{2 \pi f_0}{v_A(r) + v_r (r)}.
\end{align}
When we observed this wave in a co-moving frame, the wave number was invariant and the phase speed decreased to $v_A(r)$, and therefore,
\begin{align}
\omega = v_A(r) k(r) = 2 \pi f_0 \frac{v_A(r)}{v_A(r) + v_r (r)}.
\end{align}
This means that the intrinsic frequency decreases in an accelerating flow.
In the accelerating expanding box model, this effect is referred to as the box-stretching effect \refp{Tener17a}.
A similar argument appears in deriving the wave-action conservation \refp{Dewar70,Heine80}.
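For a given background profile, the intrinsic frequency entering the growth-rate evaluation can thus be computed directly; a one-line sketch (our own naming):
\begin{verbatim}
def intrinsic_frequency(f0, v_A, v_r):
    # Wave frequency measured in the frame co-moving with the wind,
    # for a constant injection frequency f0 at the coronal base.
    return f0 * v_A / (v_A + v_r)
\end{verbatim}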
We should note that the wind-acceleration effect appears in two different ways.
As discussed in Section \ref{sec:decaylaw}, the wind acceleration works to reduce the density fluctuation.
In addition, as discussed here, wind acceleration also causes the Doppler effect.
In Figure \ref{fig:compare_gamma}, we show $\gamma_{\rm eff}$, $\gamma_{\rm GD}$, $\gamma_{\rm acc}$, and $\gamma_{\rm exp}$ as functions of height
to see the effects of wind acceleration and expansion on the growth rate of PDI.
$\gamma_{\rm acc}$ (solid lines) and $\gamma_{\rm exp}$ (dashed lines) are shown in Figure \ref{fig:compare_gamma}a,
while $\gamma_{\rm eff}$ and $\gamma_{\rm GD}$ are shown in Figure \ref{fig:compare_gamma}b.
The colors represent the injection frequency as follows:
$f_0=10^{-2.5} {\rm \ Hz}$ (blue), $f_0=10^{-3.0} {\rm \ Hz}$ (green), $f_0=10^{-3.5} {\rm \ Hz}$ (orange), and $f_0=10^{-4.0} {\rm \ Hz}$ (red).
Figure \ref{fig:compare_gamma}a shows that the expansion ($\gamma_{\rm exp}$) dominates the acceleration ($\gamma_{\rm acc}$) in the damping of the PDI.
As a result, $\gamma_{\rm GD}$ is reduced to $\gamma_{\rm eff}$.
The reduction factors, $\gamma_{\rm acc}/\gamma_{\rm GD}$ and $\gamma_{\rm exp}/\gamma_{\rm GD}$, are larger for smaller injection frequencies, $f_0$,
because $\gamma_{\rm GD}$ is proportional to $f_0$.
Specifically, when $f_0=10^{-4.0} {\rm \ Hz}$, $\gamma_{\rm GD}$ is smaller than the total damping rate, $\gamma_{\rm acc} + \gamma_{\rm exp}$, and the effective growth rate is negative.
The location of the local maximum of $\gamma_{\rm GD}$ is determined by the balance between the plasma beta and the wave amplitude (see Eq. (\ref{eq:gamma_sagdeev})); the plasma beta is low and the wave amplitude is small in the lower corona, and vice versa in the distant solar wind.
\subsection{Onset and suppression of PDI}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{fdepend_drochgam}
\end{center}
\vspace{-1em}
\caption{
Solar wind parameters versus wave-injection frequency $f_0$.
Each panel indicates
a: maximum fractional density fluctuation $n_{\rm max}$,
b: normalized cross-helicity $\sigma_c$ at $1 {\rm \ au}$,
c: maximum effective growth rate $\gamma_{\rm eff}$ (blue: positive, red: negative).
}
\label{fig:fdepend_drochgam}
\end{figure}
To discuss the threshold of the onset of PDI, we calculated the maximum fractional density fluctuation $n_{\rm max}$ and the normalized cross-helicity (Alfv\'enicity) $\sigma_c$ at $1 {\rm \ au}$.
Here we defined $n_{\rm max}$ and $\sigma_c$ as
\begin{align}
n_{\rm max} &= \max \left( \frac{1}{\rho_{\rm ave}} \sqrt{\average{\left( \rho - \rho_{\rm ave} \right)^2}} \right), \ \ \ \ \rho_{\rm ave} = \average{\rho}, \\
\sigma_c &= \frac{\average{z^{+}}^2 - \average{z^{-}}^2}{\average{z^{+}}^2 + \average{z^{-}}^2},
\end{align}
where $\average{X}$ denotes the time-averaged value of $X$ and $\max \left( X \right)$ denotes the maximum value of $X$ in space.
We note that the sign of ${\sigma_c}$ is opposite to the sign of $H_c = \vec{v} \cdot \vec{B}$.
These values can be useful indicators of PDI because PDI generates large-amplitude density fluctuations, which increase $n_{\rm max}$, and back-scattered Alfv\'en waves, which reduce $\sigma_c$.
According to \reft{Cranm12a}, without PDI, $\delta \rho_{\rm rms} / \rho_0 \lesssim 0.1$, and thus $\delta \rho_{\rm rms} / \rho_0 > 0.1$ indicates the PDI.
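A minimal sketch of how these two indicators could be computed from the simulation time series (the array layout and names are our assumption; $z^{\pm}$ here denote the magnitudes of the Els\"asser variables):
\begin{verbatim}
import numpy as np

def pdi_indicators(rho, z_plus, z_minus):
    # rho, z_plus, z_minus : time series of shape (n_time, n_radius)
    rho_ave = rho.mean(axis=0)
    n_r = np.sqrt(((rho - rho_ave) ** 2).mean(axis=0)) / rho_ave
    n_max = n_r.max()                      # maximum over radius
    zp, zm = z_plus.mean(axis=0), z_minus.mean(axis=0)
    sigma_c = (zp**2 - zm**2) / (zp**2 + zm**2)
    return n_max, sigma_c
\end{verbatim}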
In Figure \ref{fig:fdepend_drochgam}, we show a: $n_{\rm max}$, b: $\sigma_c$ at $1 {\rm \ au}$, and c: the maximum effective growth rate $\gamma_{\rm eff, max}$ (blue: positive, red: negative) as functions of $f_0$.
When $f_0 \lesssim 10^{-3.5} {\rm \ Hz}$, both $n_{\rm max}$ and $\sigma_c$ show monotonic trends with $f_0$:
$n_{\rm max}$ decreases and $\sigma_c$ increases as $f_0$ increases.
This is explained as follows.
As $f_0$ becomes smaller, Alfv\'en waves are reflected more efficiently \refp{Ferra58,An00090,Velli93,Cranm05,Hollw07}.
If Alfv\'en waves are reflected in the solar wind beyond the Alfv\'en point, reflected Alfv\'en waves are advected towards $1 {\rm \ au}$ and contribute to reducing $\sigma_c$.
Note that the inward waves vanish near the Alfv\'en point \refp{Verdi09,Tener17a}.
When the amount of reflected Alfv\'en waves increases, the interaction between outward and inward waves is activated.
This wave-wave collision excites not only turbulence \refp{Irosh64,Kraic65,Dobro80,Goldr95},
but also the slow-mode generation \refp{Wentz74,Uchid74} by the modulation of magnetic field pressure \refp{Hollw71,Kudoh99,Cranm15}.
Magnetic field modulation also leads to direct steepening to fast shock \refp{Cohen74a,Kenne90,Suzuk04}.
Owing to these processes, larger density fluctuations are likely to be generated in the presence of larger-amplitude reflected Alfv\'en waves.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{scaleratio}
\end{center}
\vspace{-1em}
\caption{
Scale ratio $H_\beta / l$
where $l$ and $H_{\beta}$ denote the typical propagation length scale during the PDI growth and the scale length of plasma beta, respectively.
Blue, green, and red lines indicate $f_0 = 10^{-2.5} {\rm \ Hz}$, $10^{-3.0} {\rm \ Hz}$, and $10^{-3.5} {\rm \ Hz}$, respectively.
}
\label{fig:scaleratio}
\end{figure}
The monotonic trend in $10^{-5.5} {\rm \ Hz} \lesssim f_0 \lesssim 10^{-3.5} {\rm \ Hz}$ breaks down near $f_0 \approx 10^{-3} {\rm \ Hz}$.
When $f_0$ gets larger than $10^{-3} {\rm \ Hz}$, $n_{\rm max}$ becomes larger than $0.1$ and $\sigma_c$ becomes smaller than $0.5$.
Considering the fact that PDI generates large amounts of density fluctuation and reflected Alfv\'en waves,
Figure \ref{fig:fdepend_drochgam} indicates that the frequency threshold of the onset of PDI is $10^{-3.5} {\rm \ Hz} < f_0 < 10^{-3} {\rm \ Hz}$.
This means that,
even though $\gamma_{\rm eff,max}$ is positive when $f_0 = 10^{-3.5} {\rm \ Hz}$,
PDI cannot develop with this injection frequency.
\reft{Tener13} argued that PDI is suppressed not only by the acceleration and expansion of the solar wind but also by the inhomogeneity of the solar wind,
because the resonance condition changes as the plasma parameters, such as the plasma beta, Alfv\'en speed, and wave amplitude, vary.
In Figure \ref{fig:scaleratio}, we show the ratio between the propagation length during growth time $l$ and the scale length of the plasma beta ($H_\beta$):
\begin{align}
l = \frac{ v_A + v_r}{\gamma_{\rm eff}}, \ \ \ \ H_\beta = \left| \frac{\beta}{ \partial \beta / \partial r} \right|.
\end{align}
The ratio, $H_{\beta}/l$, can be used as a measure of how the inhomogeneity of the background field affects the onset of PDI;
if $H_{\beta}/l$ is small ($\lesssim 1$), the background inhomogeneity could suppress the PDI.
Figure \ref{fig:scaleratio} shows $H_{\beta} / l$ versus height.
This indicates that, when $f_0 = 10^{-3.5} {\rm \ Hz}$, PDI cannot evolve because the scale ratio $H_{\beta} / l$ is at most around unity and the inhomogeneity affects the growth of PDI.
Another possible reason that PDI is not observed when $f_0 = 10^{-3.5} {\rm \ Hz}$ is that the typical growth time is too long for the instability to develop before reaching $1 {\rm \ au}$.
$\gamma_{\rm eff}$ averaged over the entire simulation box is approximately $\overline{\gamma}_{\rm eff} = 5 \times 10^{-5} {\rm \ Hz}$, corresponding to the timescale of $\overline{\tau} = 2 \times 10 ^{4} {\rm \ s}$.
Therefore, because it takes a few $\overline{\tau}$ to reach the saturation phase of PDI,
the evolution timescale ($\sim 10^{5} {\rm \ s}$) is comparable to the propagation timescale up to $1 {\rm \ au}$ ($\sim 2 \times 10^{5} {\rm \ s}$).
This indicates that the distance to $1 {\rm \ au}$ might be too short for the PDI of $f_0 = 10^{-3.5} {\rm \ Hz}$ waves to develop.
\section{Broadband-wave injection} \label{sec:broad}
Next, we applied the broadband-wave injection to discuss the consistency with observation.
The boundary condition is given as
\begin{align}
z^{+}_{x,y} = \int^{f_{\rm max}}_{f_{\rm min}} P(f) \sin \left( 2 \pi f t + \phi_{x,y}(f) \right) df,
\end{align}
where $P(f)$ is determined so that the root-mean-square value is $2 v_{\rm rms,0}$ and the power shows a $1/f$ spectrum in $f_{\rm min} \le f \le f_{\rm max}$ \refp{Bruno13}.
$\phi_{x,y}(f)$ are random functions of $f$.
We fixed $f_{\rm max} = 10^{-2} {\rm \ Hz}$, corresponding to $100 {\rm \ s}$ in terms of period.
Note that some observations show an even higher frequency component \refp{He00009,Okamo11,Shoda18b}.
$f_{\rm min}$ is the free parameter.
In this study, we calculated three cases, $f_{\rm min} = 10^{-3} {\rm \ Hz}$, $10^{-4} {\rm \ Hz}$, and $10^{-5} {\rm \ Hz}$,
which correspond to the largest timescales of
the photospheric transverse motion \refp{Matsu10b}, the coronal transverse motion \refp{Morto15}, and the solar-wind fluctuations \refp{Tu00095}, respectively.
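One simple way to synthesize such a broadband boundary signal is to superpose discrete modes with random phases; the following sketch (our own discretization and naming, with equally spaced frequency bins so that a mode amplitude $\propto f^{-1/2}$ yields a $1/f$ power spectrum) illustrates the idea:
\begin{verbatim}
import numpy as np

def broadband_z_plus(t, v_rms0=32.0e5, f_min=1.0e-4, f_max=1.0e-2,
                     n_modes=500, seed=0):
    # Upward Elsasser boundary signal with a 1/f power spectrum.
    # t : array of times [s]; returns (z_x, z_y) in cm/s, normalized
    # so that sqrt(<z_x^2 + z_y^2>) = 2 * v_rms0.
    rng = np.random.default_rng(seed)
    f = np.linspace(f_min, f_max, n_modes)   # equally spaced bins
    amp = f**-0.5                            # mode power ~ 1/f
    ph_x = rng.uniform(0.0, 2.0*np.pi, n_modes)
    ph_y = rng.uniform(0.0, 2.0*np.pi, n_modes)
    arg = 2.0*np.pi*np.outer(f, t)
    z_x = amp @ np.sin(arg + ph_x[:, None])
    z_y = amp @ np.sin(arg + ph_y[:, None])
    norm = 2.0*v_rms0 / np.sqrt(np.mean(z_x**2 + z_y**2))
    return norm*z_x, norm*z_y
\end{verbatim}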
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=170mm]{rs_compare_broad}
\end{center}
\vspace{-1em}
\caption{
Quasi-steady states with different $f_{\rm min}$ values.
Blue, green and red lines indicate $f_{\rm min} = 10^{-3} {\rm \ Hz}$, $10^{-4} {\rm \ Hz}$, $10^{-5} {\rm \ Hz}$, respectively.
a-c: mass density. Shown by circles and stars are observations by \reft{Wilhe98} and by \reft{Lamy097}, respectively.
d-f: temperature. Shown by circles and stars are observations by \reft{Landi08} and compilation of observed data by \reft{Cranm04,Cranm09b}, respectively.
g-i: radial velocity. Shown by stars are ion outflow speed by \reft{Zangr02}, while the crosses represent the IPS observations \refp{Kojim04}.
j-l: transverse velocity (Alfv\'en-wave amplitude).
}
\label{fig:compare_rs_broad}
\end{figure*}
\subsection{Quasi-steady state}
As in Section \ref{sec:mono}, we begin by discussing the quasi-steady states.
Figure \ref{fig:compare_rs_broad} shows the same variables as those shown in Figure \ref{fig:compare_rs_mono},
except that the transverse velocity $v_\perp$ is shown (Panels j-l) instead of the Els\"asser variables.
Colors indicate $f_{\rm min} = 10^{-3} {\rm \ Hz}$ (blue), $10^{-4} {\rm \ Hz}$ (green), and $10^{-5} {\rm \ Hz}$ (red).
Because the main motivation of broadband-wave injection is to compare to observations, we also show several observational values.
In Panels a-c, we show the density observations by \reft{Wilhe98} (circles) and \reft{Lamy097} (stars).
In converting the observed electron density $n_e$ to mass density $\rho$, we simply assumed $\rho = m_p n_e$.
In Panels d-f, circles and stars correspond to the results from \reft{Landi08} and \reft{Cranm04,Cranm09b}, respectively.
In Panels g-i, the observed ion-outflow velocity is plotted by stars \refp{Zangr02}, while the results of IPS observations are indicated by crosses \refp{Kojim04}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{freq_dro}
\end{center}
\vspace{-1em}
\caption{
Fractional density fluctuation $n_f$ versus heliocentric distance.
Blue, green and red lines indicate $f_{\rm min} = 10^{-3} {\rm \ Hz}$, $10^{-4} {\rm \ Hz}$, $10^{-5} {\rm \ Hz}$, respectively.
Shown by gray rectangles and circles are the observational values (see text).
}
\label{fig:freq_dro}
\end{figure}
\subsection{Density fluctuation}
The density fluctuation in the solar wind has been observed by radio-wave observations.
As explained in the Introduction, density fluctuation possibly plays a role in reflecting Alfv\'en waves in the corona and solar wind \refp{Balle16}.
When we applied the broadband-wave injection, it was difficult to isolate the amplitude of the density fluctuation attributable solely to PDI,
because the density fluctuates not only through PDI but also through the time variation of the injected energy flux.
Since the density fluctuation that comes directly from the injection has a typical timescale of $1/f_{\rm min}$,
in this study we defined the density fluctuation as the high-frequency components above a threshold $f_{\rm th}$ specified below.
Given that we have density $\rho(r,t)$, the fractional fluctuation $n_f (r)$ is given as
\begin{align}
n_f (r) = \frac{1}{\rho_{\rm ave}} \sqrt{ \frac{1}{2 \pi \tau_0} \int_{\left| \omega \right| > 2 \pi f_{\rm th}} \left| \tilde{\rho} (r,\omega) \right|^2 d\omega},
\end{align}
where
\begin{align}
\tilde{\rho} (r,\omega) = \int^{\tau_0}_0 dt \rho (r, t) e^{ i \omega t}.
\end{align}
Note that Parseval's identity holds as follows:
\begin{align}
\int dt \left| \rho(r,t) \right|^2 = \frac{1}{2 \pi} \int d \omega \left| \tilde{\rho} \left(r, \omega \right) \right|^2.
\end{align}
$f_{\rm th}$ is the frequency threshold and $\tau_0$ is the length of the time window used for the Fourier transform.
Here, we set $f_{\rm th} = 10^{-3} {\rm \ Hz}$.
Although this value was a rather arbitrary choice,
we confirmed that the radial trend of $n_f$ does not depend on $f_{\rm th}$.
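A discrete, FFT-based analogue of this definition for a uniformly sampled time series might look as follows (a sketch with our own naming; the exact windowing used in the analysis may differ):
\begin{verbatim}
import numpy as np

def fractional_density_fluctuation(rho_t, dt, f_th=1.0e-3):
    # High-frequency fractional density fluctuation n_f at one radius.
    # rho_t : uniformly sampled density time series [g cm^-3]
    # dt    : sampling interval [s]
    rho_ave = rho_t.mean()
    rho_hat = np.fft.rfft(rho_t - rho_ave)
    freq = np.fft.rfftfreq(rho_t.size, d=dt)
    # by Parseval, this sum is the variance carried by f > f_th
    var_hi = 2.0 * np.sum(np.abs(rho_hat[freq > f_th])**2) / rho_t.size**2
    return np.sqrt(var_hi) / rho_ave
\end{verbatim}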
Figure \ref{fig:freq_dro} shows the radial profiles of $n_f (r)$.
Rectangles are observational values taken from \reft{Cranm12a}.
The rectangle near $r = 5 R_\odot$ indicates the radio sounding data \refp{Coles89,Spang02,Harmo05} while the rectangles in $r \gtrsim 100 R_\odot$ indicate the in-situ data by \reft{Marsc90}.
The circles indicate the observation by \reft{Miyam14}.
Our three cases nicely explain the overall radial profile of the observed density fluctuation, which peaks at $r\sim 5-10 R_{\odot}$ \refp{Miyam14}.
The peak of $n_f$ in our calculation is created by the high-frequency ($f>10^{-3.5} {\rm \ Hz}$) Alfv\'{e}n waves that are subject to PDI (Fig. \ref{fig:compare_gamma}b).
The effective growth rate $\gamma_{\rm eff}$ peaks at $r \sim 2-10 R_\odot$ when the parent-wave frequency is $10^{-3} - 10^{-2} {\rm \ Hz}$.
Therefore, the large density fluctuations are excited as an outcome of the PDI in these locations.
To summarize, the observed density fluctuation is explained by the evolution of the PDI of high-frequency ($10^{-2} - 10^{-3} {\rm \ Hz}$) Alfv\'en waves.
\subsection{Cross-helicity in the solar wind}
Because the radial evolution of Els\"asser variables $z^{\pm}$ in the heliosphere has been observed \refp{Bavas82,Bavas00}, we can test our theoretical model by comparison with these observations.
In Figure \ref{fig:els_compare}, we show the radial profile of time-averaged Els\"asser variables ($z^{+}$: solid line, $z^{-}$: dashed line) with different $f_{\rm min}$ values
($f_{\rm min}=10^{-3} {\rm \ Hz}$: blue, $10^{-4} {\rm \ Hz}$: green, $10^{-5} {\rm \ Hz}$: red).
Also shown by gray transparent lines are the observational trends by \reft{Bavas00}.
While $z^{+}$ is consistent with observation when $f_{\rm min}=10^{-4}, \ 10^{-5} {\rm \ Hz}$, we have a much smaller $z^{+}$ compared with observation when $f_{\rm min}=10^{-3} {\rm \ Hz}$.
Because PDI evolves when $f_0 \gtrsim 10^{-3} {\rm \ Hz}$, this discrepancy indicates that excessive energy transfer from $z^{+}$ to $z^{-}$ occurs via PDI.
When $f_{\rm min}$ becomes smaller, the intensity of high-frequency waves that are subject to PDI is reduced because the total wave power is fixed.
This is why the signature of PDI is weak for smaller $f_{\rm min}$.
Our result indicates that the cross-helicity evolution in the solar wind is dominated by linear reflection \citep{Zhou090,Velli91,Verdi07}:
because the simulated $z^{-}$ approaches the observational value as $f_{\rm min}$ decreases and PDI is suppressed,
the cross-helicity evolution is governed by the linear reflection of the low-frequency ($f_0 \sim 10^{-5} {\rm \ Hz}$) components.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=70mm]{els_compare}
\end{center}
\vspace{-1em}
\caption{
Radial profiles of time-averaged Els\"asser variables.
Solid and dashed lines indicate $z^{+}$ (anti-sunward) and $z^{-}$ (sunward) components.
Blue, green and red lines indicate $f_{\rm min} = 10^{-3} {\rm \ Hz}$, $10^{-4} {\rm \ Hz}$, $10^{-5} {\rm \ Hz}$, respectively.
Also shown by gray lines are observational trends by \reft{Bavas00}.
}
\label{fig:els_compare}
\end{figure}
\section{Summary \& Discussion} \label{sec:summary}
In this study, using numerical simulations, we investigated the threshold of the onset of PDI by changing the Alfv\'en-wave injection.
As discovered by \reft{Tener13} and \reft{DelZa15}, wind acceleration and expansion work to reduce the growth rate of PDI.
We have solved the wave propagation self-consistently from the coronal base to $1 {\rm \ au}$ and compared the results with those of accelerating-expanding-box simulations.
Firstly, we investigated the fundamental processes of PDI by applying monochromatic-wave injection with frequency $f_0$.
Our results show that PDI can develop when $f_0 \gtrsim 10^{-3} {\rm \ Hz}$, while we observe no signature of PDI when $f_0 \lesssim 10^{-3.5} {\rm \ Hz}$.
Owing to the wind acceleration and expansion, the growth rate of PDI becomes negative when $f_0 \lesssim 10^{-4.0} {\rm \ Hz}$.
When $f_0 \lesssim 10^{-3.5} {\rm \ Hz}$, even though the growth rate of PDI is positive, PDI cannot develop.
The suppression by solar wind inhomogeneity or the long timescale of growth might be the reason for this.
The frequency-filtering mechanism can operate in the corona and solar wind due to the bimodal behavior of wave dissipation with respect to frequency.
The low-frequency ($f_0\lesssim 10^{-4} {\rm \ Hz}$) waves undergo linear reflection and generate Alfv\'{e}nic turbulence from the interaction with counter-propagating waves.
The high-frequency ($f_0\gtrsim 10^{-3} {\rm \ Hz}$) waves dissipate through the PDI. As a result of the efficient heating, dense, hot and relatively slow winds are driven in the cases with $f_0\lesssim 10^{-4} {\rm \ Hz}$ or $f_0\gtrsim 10^{-3} {\rm \ Hz}$.
In contrast, the intermediate-frequency ($f_0\approx 10^{-3.5} {\rm \ Hz}$) waves are not severely subject to these damping mechanisms.
As a result, fast and less dense wind emanates from the relatively cool corona in this case.
This indicates that the corona and solar wind act as a frequency filter for Alfv\'en waves; as a result, medium-frequency waves are the most likely to permeate.
This is a possible reason for the hour-scale waves observed in the solar wind \citep{Belch71a}.
Secondly, we applied broadband-wave injection to compare the numerical results with observation.
The observed radial trend of the density fluctuation can be well explained by the evolution of the high-frequency ($f_0\gtrsim 10^{-3} {\rm \ Hz}$) Alfv\'en waves.
In contrast, the observed trend of the cross-helicity can be explained by the linear reflection of the low-frequency ($f_0 < 10^{-4} {\rm \ Hz}$) Alfv\'en waves.
These results show that Alfv\'en waves over a wide range of frequencies play an essential role in the global solar wind.
There are several limitations in our model.
The most severe limitation is the treatment of turbulence.
We have applied a simple one-point-closure model of Alfv\'en wave turbulence (Eq. (\ref{eq:eta1}) and (\ref{eq:eta2})) with the correlation length that increases with an expanding flux tube (Eq. (\ref{eq:lambda})).
However, \reft{Cranm12a} showed that Eq. (\ref{eq:lambda}) possibly underestimates the correlation length.
To overcome this, one would need to solve the transport equation for $\lambda$ \refp{Breec08,Usman11}.
In addition, the correlation length should be different between $z^{+}$ and $z^{-}$ \refp{Zank017,Shiot17}.
More sophisticated treatment of the turbulence, including the shell model \refp{Buchl07,Verdi12a}, remains as future work.
Another limitation is one-dimensional modeling.
While the Alfv\'en-wave turbulence is taken into account phenomenologically, our 1D modeling completely ignores the effect of phase mixing \refp{Heyva83,Kanek15,Antol15,Okamo15}.
In addition, \reft{DelZa01} showed that the onset (and possibly growth) of parametric decay instability is slower in 3D than in 1D.
Our quantitative discussion might be slightly modified by the 1D assumption.
Also, we cannot take into account the wave refraction in the lower region \refp{Rosen02,Bogda03}.
In the future, we need to conduct 3D MHD simulations for the reasons above.
In this study, we have focused on the frequency dependence.
The growth rate of parametric decay instability also depends on the plasma beta and the wave amplitude \refp{Golds78,Derby78};
\reft{Suzuk06a} investigated the dependence on the injected wave amplitude.
One might expect that PDI is suppressed for smaller wave injection, according to Eq. (\ref{eq:gamma_sagdeev}).
However, the response of the solar wind changes the situation.
A case with smaller injection gives a lower coronal temperature because of the suppressed heating.
As a result, the plasma beta in the corona is lower, and larger density fluctuations are excited by a more active PDI, as shown in Figure 9 of \reft{Suzuk06a}.
Similarly, it is expected that the density variation is large when the magnetic field is stronger and the plasma beta is lower.
There are also ambiguities in the thermal flux in the free-streaming regime.
We chose $\alpha=2$ in evaluating the magnitude of free-streaming thermal flux.
Although $\alpha=4$ has been sometimes used \refp{Leer082,Withb88,Landi03,Cranm07,Balle16},
$\alpha=4$ might overestimate the actual flux because thermal conduction can be suppressed by the local instability and turbulence \refp{Gary099,Rober17,Komar17,Tong018}.
Indeed \reft{Cranm09a} showed that $\alpha=1.05$ yields good agreement with observation, and several recent studies used $\alpha=1.05$ \refp{Usman11,Holst14}.
The precise value of $\alpha$ should depend on the solar wind condition.
Since the change in $\alpha$ does not strongly affect the physical quantities of the solar wind \refp{Cranm07}, we expect that our findings are independent of $\alpha$.
M.S. is supported by the Leading Graduate Course for Frontiers of Mathematical Sciences and Physics (FMSP) and Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Fellows.
T.Y. is supported by JSPS KAKENHI Grant Number 15H03640.
T.K.S. is supported in part by Grants-in-Aid for Scientific Research from the MEXT of Japan, 17H01105.
Numerical calculations were in part carried out on the PC cluster at the Center for Computational Astrophysics, National Astronomical Observatory of Japan.
\section{Introduction}
Projection-based model order reduction (MOR) methods, including {the} reduced basis (RB) {method} or proper orthogonal decomposition (POD), are popular approaches for {approximating} large-scale parameter-dependent equations (see the recent surveys and monographs \cite{benner2015survey,quarteroni2015reduced,HesthavenRozzaStamm2015,morbook2017}). {They can be considered in the contexts of optimization, uncertainty quantification, inverse problems, real-time simulations, etc.} An essential feature of MOR methods is offline/online splitting of the computations.
The construction of the reduced order (or surrogate) model, which is usually the most computationally demanding task, is performed during the offline stage. This stage consists of (i) the generation of a reduced approximation space with a greedy algorithm for RB method or a principal component analysis of a set of samples of the solution for POD and (ii) the efficient representation of the reduced system of equations, usually obtained {through} (Petrov-)Galerkin projection, and of all the quantities needed for evaluating output quantities of interest and error estimators. {In the online stage, the reduced order model is evaluated for each value of the parameter and provides prediction of the output quantity of interest with a small computational cost, which is independent of the dimension of the initial system of equations.}
In this paper, we address the reduction of computational costs for both offline and online stages of projection-based model order reduction methods by adapting random sketching methods \cite{achlioptas2003database,sarlos2006improved} to the context of RB and POD. These methods were proven capable of significant complexity reduction for basic problems in numerical linear algebra such as computing {products} or factorizations of matrices~\cite{halko2011finding,woodruff2014sketching}. {We show how a reduced order model can be approximated from a small set, called a sketch, of efficiently computable random projections of the reduced basis vectors and the vectors involved in the affine expansion{\footnote{{A parameter-dependent quantity $\mathbf{v}(\mu)$ with values in vector space $V$ over a field $\mathbb{K}$ is said to admit an affine representation (or be parameter-separable) if $\mathbf{v}(\mu) = \sum^d_{i=1} \mathbf{v}_i \lambda_i(\mu)$ with $\lambda_i(\mu) \in \mathbb{K}$ and $\mathbf{v}_i \in V$. Note that for $V$ of finite dimension, $\mathbf{v}(\mu)$ always admits an affine representation with a finite number of terms.}}}
of the residual, which is assumed to contain a small number of terms.} Standard algebraic operations are performed on the sketch, which avoids heavy operations on large-scale matrices and vectors. {Sufficient conditions on the dimension of the sketch for quasi-optimality of approximation of the reduced order model can be obtained by exploiting the fact that the residuals associated with reduced approximation spaces are contained in low-dimensional spaces.}
Clearly, the randomization inevitably implies a probability of failure. This probability, however, is a user-specified parameter that can be chosen extremely small without affecting {considerably} the computational costs.
{Even though this paper is concerned only with linear equations, similar considerations should also apply to a wide range of nonlinear problems.}
{Note that deterministic techniques have also been proposed for adapting POD methods to modern (e.g., multi-core or limited-memory) computational architectures~\cite{oxberry2017limited,himpe2016hierarchical,braconnier2011towards}.
{Compared to the aforementioned deterministic approaches, our randomized version of POD (see~Section\nobreakspace \ref {sk_pod}) has the advantage of not requiring the computation of the full reduced basis vectors, but only of their small random sketches.} In fact, maintaining and operating with large vectors can be completely avoided. This remarkable feature makes our algorithms particularly well suited for distributed computing and streaming context{s}.}
{Randomized linear algebra has} been employed for reducing the computational cost of MOR in~\cite{hochman2014reduced,alla2016randomized}, where the authors considered random sketching only as a tool for efficient evaluation of low-rank approximations of large matrices (using randomized versions of SVDs). They, however, did not adapt the MOR methodology itself and therefore did not fully exploit randomization techniques. In~\cite{buhr2017randomized} a probabilistic range finder based on random sketching has been used for combining {the} RB method with domain decomposition. Random sketching was also used for building parameter-dependent preconditioners for projection-based MOR in~\cite{zahm2016interpolation}.
The rest of the paper is organized as follows. Section\nobreakspace \ref {Contributions} presents the main contributions and discusses the benefits of the proposed methodology. In Section\nobreakspace \ref {MOR} we introduce the problem of interest and present the ingredients of standard projection-based model order reduction methods. In Section\nobreakspace \ref {RS}, we extend the classical sketching technique in Euclidean spaces to a more general framework. In Section\nobreakspace \ref {l2embeddingsMOR}, we introduce the concept of a \emph{sketch of a model} and
{propose} new and efficient randomized versions of Galerkin projection, residual based error estimation, and primal-dual correction. In Section\nobreakspace \ref {efficient_RB}, we present and discuss {the} randomized greedy algorithm and POD for the efficient generation of reduced approximation spaces. In~Section\nobreakspace \ref {Numerical}, the methodology is validated on two benchmarks. Finally, in~Section\nobreakspace \ref {Conclusions}, we provide conclusions and perspectives.
{Proofs of propositions and theorems are provided in the Appendix.}
\subsection{Main contributions} \label{Contributions}
Our methodology can be used for the efficient construction of a reduced order model. In classical projection-based methods, the cost of evaluating samples (or snapshots) of the solution for a training set of {parameter} values can be much smaller than the cost of other computations. This is the case when the samples are computed {using a sophisticated method for solving linear systems of equations requiring log-linear complexity,} or beyond the main routine, e.g., using highly optimised commercial {solvers} or a server with a limited budget, and possibly obtained using multiple workstations.
This is also the case when, due to memory constraints, the computational time of algorithms for constructing the reduced order model is greatly affected by the number of passes taken over the data. In all these cases the cost of the offline stage is dominated by the post-processing of the samples rather than by their computation. We here assume that the cost of solving high-dimensional systems is irreducible and focus on the reduction of other computational costs. The metric for efficiency depends on the computational environment and how data is presented to us. Our algorithms can be beneficial in basically all computational environments. \\
\subsubsection*{Complexity reduction}
Consider a parameter-dependent linear system of equations $\mathbf{A}(\mu) \mathbf{u}(\mu) = \mathbf{b}(\mu)$ of dimension $n$ and assume that the parameter-dependent matrix $\mathbf{A}(\mu)$ and vector $\mathbf{b}(\mu)$ are parameter-separable with $m_A$ and $m_b$ terms, respectively (see Section\nobreakspace \ref {MOR} for more details). Let $r \ll n$ be the dimension of the reduced approximation space. Given a basis of this space, the classical construction of a reduced order model requires the evaluation of inner products between high-dimensional vectors. More precisely, it consists in multiplying each of the $r m_A+ m_b$ vectors in the affine expansion of the residual by $r$ vectors for constructing the reduced systems and by $r m_A + m_b$ other vectors for estimating the error. These two operations result in $\mathcal{O}(n r^2 m_A + n r m_b)$ and $\mathcal{O}(n r^2 m^2_A + n m^2_b)$ flops respectively. It can be argued that the aforementioned complexities can dominate the {total complexity} of the offline stage {(see Section\nobreakspace \ref {sketch}).}
With the methodology presented in this work the complexities can be reduced to $\mathcal{O}(n r m_A \log{k} + n m_b \log{k})$, where $r \leq k \ll n$.
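To make the classical costs above concrete, a schematic dense-algebra sketch of the offline projection step is given below (function names are ours; in practice the affine terms $\mathbf{A}_i$ are sparse and these matrix products dominate the offline cost):
\begin{verbatim}
import numpy as np

def classical_offline_projection(A_terms, b_terms, U_r):
    # A_terms : list of m_A matrices of shape (n, n), affine terms of A(mu)
    # b_terms : list of m_b vectors of shape (n,),    affine terms of b(mu)
    # U_r     : (n, r) reduced basis
    # Cost: O(n r^2 m_A + n r m_b) once the products A_i @ U_r are formed.
    A_r_terms = [U_r.conj().T @ (A_i @ U_r) for A_i in A_terms]
    b_r_terms = [U_r.conj().T @ b_i for b_i in b_terms]
    return A_r_terms, b_r_terms
\end{verbatim}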
Let $m$ be the number of samples in the training set. The {computation} of the POD basis using a direct eigenvalue solver requires multiplication of two $n \times m$ matrices, i.e., $\mathcal{O}(n m \min(n, m))$ flops, while using a Krylov solver it requires multiplications of an $n \times m$ matrix by $\mathcal{O}(r)$ adaptively chosen vectors, i.e., $\mathcal{O}(n m r)$ flops. In the prior work \cite{alla2016randomized} on randomized algorithms for MOR, the authors proposed to use a randomized version of SVD introduced in~\cite{halko2011finding} for the computation of the POD basis. More precisely, the SVD can be performed by applying Algorithms 4.5 and 5.1 in~\cite{halko2011finding} with complexities $\mathcal{O}(n m \log{k} + n k^2)$ and $\mathcal{O}(n m k)$, respectively. However, the authors in \cite{alla2016randomized} did not take any further advantage of random sketching methods, besides the SVD, and did not provide any theoretical analysis. In addition, they considered the Euclidean norm for the basis construction, which can be far from optimal. Here we reformulate the classical POD and obtain an algebraic form {(see Proposition\nobreakspace \ref {thm:approx_pod})} well suited for the application of efficient low-rank approximation algorithms, e.g., randomized or {incremental} SVDs~\cite{baker2012low}. We consider a general inner product associated with a self-adjoint positive definite matrix. More importantly, we provide a new version of POD (see Section\nobreakspace \ref {sk_pod}) which does not require evaluation of high-dimensional basis vectors. In this way, the complexity of POD can be reduced to only $\mathcal{O}(n m \log{k})$.\\
\subsubsection*{Restricted memory and streaming environments}
Consider an environment where the memory consumption is the primary constraint. {The} classical offline stage involves evaluations of inner products of high-dimensional vectors.
These operations require many passes over large data sets, e.g., a set of samples of the solution or the reduced basis, and can result in a computational burden. We show how to build the reduced order model with only one pass over the data. In extreme cases our algorithms may be employed in a streaming environment, where samples of the solution are provided as data-streams and storage of only a few large vectors is allowed. Moreover, with our methodology one can build a reduced order model without storing any high-dimensional vector.\\
\subsubsection*{Distributed computing}
The computations involved in our version of POD can be efficiently distributed among multiple workstations. Each sample of the solution can be evaluated and processed on a different machine with absolutely no communication. Thereafter, small sketches of the samples can be sent to the master workstation for building the reduced order model. The total amount of communication required by our algorithm is proportional to $k$ (the dimension of the sketch) and is independent of the dimension of the initial full order model.\\
\subsubsection*{Parallel computing}
Recently, parallelization was {considered} as a workaround to address large-scale computations~\cite{knezevic2011high}. The authors did not propose a new methodology but rather exploited the key opportunities for parallelization in a standard approach. We, on the other hand, propose a new methodology which can be better suited for parallelization than the classical one. The computations involved in our algorithms mainly consist in evaluating random matrix-vector products and solving high-dimensional systems of equations. The former operation is embarrassingly parallel (with a good choice of random matrices), while the latter one can be efficiently parallelized with state-of-the-art algorithms. \\
\subsubsection*{Online-efficient and robust error estimation}
In addition, we provide a new way for estimating the error associated with a solution of the reduced order model, the error being defined as some norm of the residual. It does not require any assumption on the way to obtain the approximate solution and can be employed separately from the rest of the methodology. For example, it could be used for the efficient estimation of the error associated with a classical Galerkin projection. Our approach yields cost reduction for the offline stage but it is also online-efficient. Given the solution of the reduced order model, it requires only $\mathcal{O}(r m_A + m_b)$ flops for estimating the residual-based error while a classical procedure takes $\mathcal{O}(r^2 m^2_A +m^2_b)$ flops. Moreover, {compared to} the classical approach, our method is less sensitive to round-off errors.
\section{Projection-based model order reduction methods} \label{MOR}
In this section, we introduce the problem of interest and present the basic ingredients of classical MOR algorithms {in a form well suited for random sketching methods}. We consider a discrete setting, e.g, a problem arising after discretization of a parameter-dependent PDE {or integral equation}. We use notations that are standard in the context of variational methods for PDEs. However, for models simply described by algebraic equations, the notions of solution spaces, dual spaces, etc., can be disregarded.
Let $U := \mathbb{K}^{n}$ (with $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$) denote the solution space equipped with inner product $\langle \cdot , \cdot \rangle_U := \langle \mathbf{R}_U \cdot, \cdot \rangle$, where $\langle \cdot, \cdot \rangle$ is the canonical $\ell_2$-inner product on $\mathbb{K}^{n}$ and $\mathbf{R}_U \in \mathbb{K}^{n \times n}$ is some self-adjoint (symmetric if $\mathbb{K} = \mathbb{R}$ and Hermitian if $\mathbb{K} = \mathbb{C}$) positive definite matrix. The dual space of $U$ is identified with $U':= \mathbb{K}^{n}$, which is endowed with inner product $\langle \cdot , \cdot \rangle_{U'}:= \langle \cdot, \mathbf{R}_U^{-1}\cdot \rangle $. For a matrix $\mathbf{M} \in \mathbb{K}^{n\times n}$ we denote by $\mathbf{M}^\mathrm{H}$ its adjoint (transpose if $\mathbb{K} = \mathbb{R}$ and Hermitian transpose if $\mathbb{K} = \mathbb{C}$).
\begin{remark}
{The} matrix $\mathbf{R}_U$ is seen as a map from $U $ to $U'$. In the framework of numerical methods for PDEs, the entries of $\mathbf{R}_U$ can be obtained by evaluating inner products of corresponding basis functions. For example, if the PDE is defined on a space equipped with $H^1$ inner product, then $\mathbf{R}_U$ is equal to the stiffness (discrete Laplacian) matrix. For algebraic parameter-dependent equations, $\mathbf{R}_U$ can be taken as identity.
\end{remark}
Let $\mu$ denote parameters taking values in a set $\mathcal{P}$ {(which is typically a subset of $\mathbb{K}^p$, but could also be a subset of function spaces, etc.)}. Let parameter-dependent linear forms $\mathbf{b}(\mu) \in U'$ and $\mathbf{l}(\mu) \in U'$ represent the right-hand side and the extractor of a quantity of interest, respectively, and let $\mathbf{A} (\mu): U \to U'$ represent the parameter-dependent operator. The problem of interest can be formulated as follows: for each given $\mu \in \mathcal{P}$ find the quantity of interest $s(\mu):= \langle \mathbf{l}(\mu), \mathbf{u}(\mu) \rangle$, where $\mathbf{u}(\mu) \in U$ is such that
\begin{equation} \label{eq:initialproblem}
\mathbf{A}(\mu)\mathbf{u}(\mu)= \mathbf{b}(\mu).
\end{equation}
Further, we suppose that the solution manifold {$\{ \mathbf{u}(\mu) : \mu \in \mathcal{P} \}$} can be well approximated by some low dimensional subspace of $U$. Let $U_r \subseteq U$ be such a subspace and $\mathbf{U}_r \in \mathbb{K}^{n \times r}$ be a matrix whose column vectors form a basis for $U_r$. The question of finding a good $U_r$ is addressed in Sections\nobreakspace \ref {Greedy} and\nobreakspace \ref {POD}. In projection-based MOR methods, $\mathbf{u}(\mu)$ is approximated by a projection $\mathbf{u}_r(\mu) \in U_r$.
\subsection{Galerkin projection}
Usually, a Galerkin projection $\mathbf{u}_r(\mu)$ is obtained by imposing the following orthogonality condition to the residual~\cite{quarteroni2015reduced}:
\begin{equation} \label{eq:galproj}
\langle \mathbf{r}(\mathbf{u}_r(\mu);\mu),\mathbf{w} \rangle=0, ~\forall \mathbf{w} \in U_r,
\end{equation}
where $\mathbf{r}(\mathbf{x};\mu):= \mathbf{b}(\mu)- \mathbf{A}(\mu)\mathbf{x}, ~\mathbf{x} \in U$.
This condition can be expressed in a different form that will be particularly handy in further sections. For this we define the following semi-norm over $U'$:
\begin{equation} \label{eq:Urseminorm}
\| \mathbf{y} \|_{U_r'} := \underset{ \mathbf{w} \in U_r \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{y} , \mathbf{w}\rangle|}{\| \mathbf{w} \|_U}, ~\mathbf{y} \in U'.
\end{equation}
Note that replacing $U_r$ by $U$ in definition~(\ref {eq:Urseminorm}) yields a norm consistent with the one induced by $\langle \cdot, \cdot \rangle_{U'}$. The relation~(\ref {eq:galproj}) can now be rewritten as
\begin{equation} \label{eq:galproj2}
\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{U_r'}= 0.
\end{equation}
Let us define the following parameter-dependent constants {characterizing quasi-optimality of Galerkin projection:}
\begin{subequations}
\begin{align}
&\alpha_r(\mu):= \underset{ \mathbf{x} \in U_r \backslash \{ \mathbf{0} \}} \min \frac{\| \mathbf{A}(\mu) \mathbf{x} \|_{U_r'}}{\| \mathbf{x} \|_U}, \label{eq:alphar} \\
&\beta_r(\mu):= \underset{ \mathbf{x} \in \left (\mathrm{span} \{ \mathbf{u}(\mu) \}+ U_r \right ) \backslash \{ \mathbf{0} \}} \max \frac{ \| \mathbf{A}(\mu) \mathbf{x} \|_{U_r'}}{\| \mathbf{x} \|_U}.\label{eq:betar}
\end{align}
\end{subequations}
Note that $\alpha_r(\mu)$ and $\beta_r(\mu)$ can be bounded by the coercivity constant $\theta(\mu)$ and the continuity constant (the maximal singular value) $\beta(\mu)$ of $\mathbf{A}(\mu)$, respectively, defined by
\begin{subequations} \label{eq:thetabeta}
\begin{align}
\theta(\mu) &:= \underset{ \mathbf{x} \in U \backslash \{ \mathbf{0} \}} \min \frac{\langle \mathbf{A}(\mu) \mathbf{x}, \mathbf{x} \rangle}{\| \mathbf{x} \|^2_U} \leq \alpha_r(\mu), \\
\beta(\mu) &:= \underset{ \mathbf{x} \in U \backslash \{ \mathbf{0} \}} \max \frac{\| \mathbf{A}(\mu) \mathbf{x} \|_{U'}}{\| \mathbf{x} \|_U} \geq \beta_r(\mu). \label{eq:beta}
\end{align}
\end{subequations}
For some problems it is possible to provide lower and upper bounds for $\theta(\mu)$ and $\beta(\mu)$~\cite{haasdonk2017reduced}.
If $\alpha_r(\mu)$ is positive, then the reduced problem~(\ref {eq:galproj}) is well-posed. For given $V \subseteq U$, let $\mathbf{P}_{V}:U \rightarrow V$ denote the orthogonal projection on $V$ with respect to $\| \cdot \|_{U}$, i.e.,
\begin{equation}
\forall \mathbf{x} \in U,~\mathbf{P}_{V} \mathbf{x} = \arg\min_{\mathbf{w} \in V} \| \mathbf{x}- \mathbf{w} \|_{U}.
\end{equation}
We now provide {a quasi-optimality characterization} for the projection $\mathbf{u}_r(\mu)$.
\begin{proposition} [modified Cea's lemma] \label{thm:cea}
If $\alpha_r(\mu)>0$, then the solution $\mathbf{u}_r(\mu)$ of~(\ref {eq:galproj}) is such that
\begin{equation} \label{eq:quasi-opt}
\| \mathbf{u}(\mu)- \mathbf{u}_r(\mu) \|_{U} \leq (1+ \frac{\beta_r(\mu)}{\alpha_r(\mu)}) \| \mathbf{u}(\mu)- \mathbf{P}_{U_r} \mathbf{u}(\mu) \|_{U}.
\end{equation}
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
{Note that~Proposition\nobreakspace \ref{thm:cea} is a slightly modified version of the classical Cea's lemma with the continuity constant $\beta(\mu)$ replaced by $\beta_r(\mu)$.}
The coordinates of $\mathbf{u}_r(\mu)$ in the basis $\mathbf{U}_r$, i.e., $\mathbf{a}_r(\mu) \in \mathbb{K}^{r}$ such that $\mathbf{u}_r(\mu) = \mathbf{U}_r \mathbf{a}_r(\mu)$, can be found by solving the following system of equations
\begin{equation} \label{eq:reduced_system}
\mathbf{A}_r(\mu) \mathbf{a}_r(\mu) = \mathbf{b}_r(\mu),
\end{equation}
where $\mathbf{A}_r(\mu)= \mathbf{U}_r^{\mathrm{H}}\mathbf{A}(\mu)\mathbf{U}_r \in \mathbb{K}^{r\times r}$ and $\mathbf{b}_r(\mu)= \mathbf{U}_r^{\mathrm{H}}\mathbf{b}(\mu) \in \mathbb{K}^r$. The numerical stability of~(\ref {eq:reduced_system}) is usually obtained by orthogonalization of $\mathbf{U}_r$.
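For illustration, the following minimal NumPy sketch assembles and solves the reduced system~(\ref{eq:reduced_system}) for a single parameter value; the variable names are illustrative and the operator and right-hand side are assumed to be available as dense arrays.
\begin{verbatim}
import numpy as np

def galerkin_solution(A_mu, b_mu, U_r):
    """Assemble and solve A_r a_r = b_r for one parameter value.

    A_mu : (n, n) array representing A(mu)
    b_mu : (n,)   array representing b(mu)
    U_r  : (n, r) basis matrix of the reduced subspace U_r
    """
    A_r = U_r.conj().T @ (A_mu @ U_r)   # reduced operator U_r^H A(mu) U_r
    b_r = U_r.conj().T @ b_mu           # reduced right-hand side U_r^H b(mu)
    a_r = np.linalg.solve(A_r, b_r)     # coordinates of u_r(mu) in the basis U_r
    return a_r, U_r @ a_r               # (a_r, u_r(mu))
\end{verbatim}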
\begin{proposition} \label{thm:clsstability}
If $\mathbf{U}_r$ is orthogonal with respect to $\langle \cdot, \cdot \rangle_U$, then the condition number of $\mathbf{A}_r(\mu)$ is bounded by $\frac{\beta_r(\mu)}{\alpha_r(\mu)}$.
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
\subsection{Error estimation}
When an approximation $\mathbf{u}_r(\mu)\in U_r$ of the exact solution $\mathbf{u}(\mu)$ has been evaluated, it is important to be able to certify how close they are.
The error $\| \mathbf{u}(\mu)- \mathbf{u}_r(\mu)\|_{U}$ can be bounded by the following error indicator
\begin{equation} \label{eq:errorind}
\Delta(\mathbf{u}_r(\mu); \mu):= \frac{\| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|_{U'}}{\eta(\mu)},
\end{equation}
where $\eta(\mu)$ is such that
\begin{equation} \label{eq:eta}
\eta(\mu)\leq \underset{ \mathbf{x} \in U \backslash \{ \mathbf{0} \}} \min \frac{\| \mathbf{A}(\mu) \mathbf{x} \|_{U'}}{\| \mathbf{x} \|_U}.
\end{equation}
In turn, the certification of the output quantity of interest $s_r(\mu):= \langle \mathbf{l}(\mu), \mathbf{u}_r(\mu) \rangle$ is provided by
\begin{equation} \label{eq:scert}
|s(\mu)- s_r(\mu)| \leq \| \mathbf{l}(\mu) \|_{U'} \| \mathbf{u}(\mu)- \mathbf{u}_r(\mu) \|_{U} \leq \| \mathbf{l}(\mu) \|_{U'} \Delta(\mathbf{u}_r(\mu); \mu).
\end{equation}
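As an illustration, a simple (non-optimized) sketch of the error indicator~(\ref{eq:errorind}) and of the output certification~(\ref{eq:scert}) is given below; it assumes that the inverse of $\mathbf{R}_U$ is available explicitly, whereas in practice a precomputed factorization of $\mathbf{R}_U$ would be reused.
\begin{verbatim}
import numpy as np

def dual_norm(y, R_U_inv):
    """||y||_{U'} = sqrt(<y, R_U^{-1} y>)."""
    return np.sqrt(np.real(np.vdot(y, R_U_inv @ y)))

def error_indicator(A_mu, b_mu, u_r, R_U_inv, eta_mu):
    """Delta(u_r; mu) = ||r(u_r; mu)||_{U'} / eta(mu)."""
    r = b_mu - A_mu @ u_r
    return dual_norm(r, R_U_inv) / eta_mu

def certify_output(l_mu, A_mu, b_mu, u_r, R_U_inv, eta_mu):
    """Return s_r(mu) and the bound ||l(mu)||_{U'} Delta(u_r; mu) on |s - s_r|."""
    s_r = np.vdot(l_mu, u_r)
    bound = dual_norm(l_mu, R_U_inv) * error_indicator(A_mu, b_mu, u_r,
                                                       R_U_inv, eta_mu)
    return s_r, bound
\end{verbatim}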
\subsection{Primal-dual correction}
The accuracy of the output quantity obtained by the aforementioned methodology can be improved by goal-oriented correction~\cite{rozza2008reduced} explained below. A dual problem can be formulated as follows: for each $\mu \in \mathcal{P}$, find $\mathbf{u}^\mathrm{du}(\mu) \in U$ such that
\begin{equation} \label{eq:dualproblem}
\mathbf{A}(\mu)^{\mathrm{H}} \mathbf{u}^\mathrm{du}(\mu) = -\mathbf{l}(\mu).
\end{equation}
The dual problem can be tackled in the same manner as the primal problem. For this we can use a Galerkin projection onto a certain $r^{\mathrm{du}}$-dimensional subspace $U^{\mathrm{du}}_r \subseteq U$.
Now suppose that besides approximation $\mathbf{u}_r(\mu)$ of $\mathbf{u}(\mu)$, we also have obtained an approximation of $\mathbf{u}^\mathrm{du}(\mu)$ denoted by $\mathbf{u}_r^\mathrm{du}(\mu) \in U^{\mathrm{du}}_r$. The quantity of interest can be estimated by
\begin{equation} \label{eq:correction}
{s_r^{\mathrm{pd}}(\mu)}:= s_r(\mu)- \langle \mathbf{u}_r^\mathrm{du}(\mu), \mathbf{r}(\mathbf{u}_r(\mu); \mu) \rangle.
\end{equation}
\begin{proposition}\label{thm:error_correction}
The estimation $s_r^{\mathrm{pd}}(\mu)$ of $s(\mu)$ is such that
\begin{equation} \label{eq:error_correction}
|s(\mu)- {s_r^{\mathrm{pd}}(\mu)}| \leq \| \mathbf{r}^{\mathrm{du}}(\mathbf{u}_r^\mathrm{du}(\mu); \mu) \|_{U'} \Delta(\mathbf{u}_r(\mu); \mu),
\end{equation}
where $\mathbf{r}^{\mathrm{du}}(\mathbf{u}_r^\mathrm{du}(\mu); \mu):= -\mathbf{l}(\mu) -\mathbf{A}(\mu)^{\mathrm{H}}\mathbf{u}_r^\mathrm{du}(\mu)$.
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
We observe that the error bound~(\ref {eq:error_correction}) of the quantity of interest is now quadratic in the residual norm in contrast to~(\ref {eq:scert}).
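A minimal sketch of the primal-dual corrected output~(\ref{eq:correction}), under the same illustrative conventions as above, reads as follows.
\begin{verbatim}
import numpy as np

def primal_dual_output(l_mu, A_mu, b_mu, u_r, u_r_du):
    """s_r^pd(mu) = s_r(mu) - <u_r^du(mu), r(u_r(mu); mu)>.

    u_r    : primal reduced approximation (vector in U_r)
    u_r_du : dual reduced approximation (vector in U_r^du)
    """
    s_r = np.vdot(l_mu, u_r)            # primal output quantity
    r = b_mu - A_mu @ u_r               # primal residual
    return s_r - np.vdot(u_r_du, r)     # goal-oriented correction
\end{verbatim}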
\subsection{Reduced basis generation}
Until now we have assumed that the reduced subspaces $U_r$ and $U^{\mathrm{du}}_r$ were given. Let us briefly outline the standard procedure for the reduced basis generation with {the} greedy algorithm and POD. {The POD is here presented in a general algebraic form, which allows a non-intrusive use of any low-rank approximation algorithm.} Below we consider only the primal problem noting that similar algorithms can be used for the dual one. We also assume that a training set $\mathcal{P}_{\mathrm{train}} \subseteq \mathcal{P}$ with finite cardinality $m$ is provided.
\subsubsection{Greedy algorithm} \label{Greedy}
The approximation subspace $U_{r}$ can be constructed recursively with a (weak) greedy algorithm. At iteration $i$, the basis of $U_i$ is enriched by snapshot $\mathbf{u}(\mu^{i+1})$, i.e., $$U_{i+1}:= U_{i}+\mathrm{span}(\mathbf{u}(\mu^{i+1})),$$ evaluated at a parameter value $\mu^{i+1}$ that maximizes a certain error indicator $\widetilde{\Delta}(U_{i}; \mu)$ over the training set. Note that for efficient evaluation of $\arg \max_{\mu \in \mathcal{P}_{\mathrm{train}}}{\widetilde{\Delta}(U_i; \mu)}$ a provisional online solver associated with $U_{i}$ has to be provided.
The error indicator $\widetilde{\Delta}(U_{i}; \mu)$ for the greedy selection is typically chosen as an upper bound or estimator of $\| \mathbf{u}(\mu)- \mathbf{P}_{U_i}\mathbf{u}(\mu) \|_{U}$. One can readily take $\widetilde{\Delta}(U_{i}; \mu) := {\Delta}(\mathbf{u}_i(\mu); \mu) $, where $\mathbf{u}_i(\mu)$ is the Galerkin projection defined by~(\ref {eq:galproj2}). The quasi-optimality of such $\widetilde{\Delta}(U_{i}, \mu)$ can then be characterized by using~Proposition\nobreakspace \ref{thm:cea} and definitions~(\ref{eq:beta}) and (\ref{eq:errorind}).
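The structure of the (weak) greedy algorithm can be summarized by the following schematic Python loop; the full-order solver and the error indicator are passed as user-supplied functions and all names are purely illustrative.
\begin{verbatim}
import numpy as np

def weak_greedy(P_train, solve_full, error_indicator, r_max):
    """Greedy construction of the reduced basis U_r.

    solve_full(mu)           : returns the snapshot u(mu)
    error_indicator(U_r, mu) : estimates the error associated with U_r at mu
    """
    mu_star = P_train[0]                          # arbitrary initial parameter
    basis = []
    for _ in range(r_max):
        basis.append(solve_full(mu_star))         # enrich with a new snapshot
        U_r = np.column_stack(basis)
        errors = [error_indicator(U_r, mu) for mu in P_train]
        mu_star = P_train[int(np.argmax(errors))] # next greedy parameter
    return U_r
\end{verbatim}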
\subsubsection{Proper Orthogonal Decomposition} \label{POD}
In the context of POD we assume that the samples (snapshots) of $\mathbf{u}(\mu)$, associated with the training set, are available. Let them be denoted as $\{ \mathbf{u} (\mu^i) \}_{i=1}^{m}$, where $\mu^i \in \mathcal{P}_{\mathrm{train}}$, $1\leq i \leq m$. Further, let us define $\mathbf{U}_m:= \left [ \mathbf{u} (\mu^1), \mathbf{u} (\mu^2), ..., \mathbf{u} (\mu^m) \right ] \in \mathbb{K}^{n\times m}$ and $U_m:= \mathrm{range}(\mathbf{U}_m)$. POD aims at finding a low dimensional subspace $U_r \subseteq U_m$ for the approximation of the set of vectors $\{ \mathbf{u} (\mu^i) \}^{m}_{i=1}$.
For each $r\leq \mathrm{dim}(U_m)$ we define
\begin{equation} \label{eq:poddef}
POD_r(\mathbf{U}_m, \| \cdot \|_U ):=\arg\min_{\substack{W_r \subseteq U_m \\ \mathrm{dim}(W_r) =r}}
\sum^{m}_{i=1} \| \mathbf{u} (\mu^i) - \mathbf{P}_{W_r} \mathbf{u} (\mu^i) \|^2_{U}.
\end{equation}
The standard POD consists {in choosing $U_r$ as $POD_r(\mathbf{U}_m, \| \cdot \|_U)$ and using the method of snapshots~\cite{sirovich1987turbulence}, or SVD of matrix $\mathbf{R}^{1/2}_U \mathbf{U}_m$, for computing the basis vectors. For large-scale problems, however, performing the method of snapshots or the SVD can become a computational burden. In such a case the standard eigenvalue decomposition and SVD have to be replaced by other low-rank approximations, e.g., incremental SVD, randomized SVD, hierarchical SVD, etc. For each of them it can be important to characterize quasi-optimality of the approximate POD basis.
Below we provide a generalized algebraic version of POD well suited for a combination with low-rank approximation algorithms as well as state-of-the-art SVD. Note that obtaining (e.g., using a spectral decomposition) and operating with $\mathbf{R}^{1/2}_U$ can be expensive and should be avoided for large-scale problems. The usage of this matrix for constructing the POD basis can be easily circumvented (see~Remark \nobreakdash \ref{rmk:Q_eval} ). }
\begin{proposition} \label{thm:approx_pod}
Let $\mathbf{Q} \in \mathbb{K}^{s\times n}$ be such that $\mathbf{Q}^{\mathrm{H}}\mathbf{Q}= \mathbf{R}_U$. Let $\mathbf{B}^*_r \in \mathbb{K}^{s \times m}$ be a best rank-$r$ approximation of $\mathbf{Q} \mathbf{U}_m$ with respect to the Frobenius norm $\|\cdot \|_{F}$. Then for any rank-$r$ matrix $\mathbf{B}_r \in \mathbb{K}^{s\times m}$, it holds
\begin{equation}
\frac{1}{m} \| \mathbf{Q} \mathbf{U}_m - {\mathbf{B}^*_r}\|^2_{F} \leq \frac{1}{m} \sum^{m}_{i=1} \|\mathbf{u} (\mu^i) - \mathbf{P}_{{U_r}} \mathbf{u} (\mu^i) \|^2_{U} \leq \frac{1}{m} \| \mathbf{Q} \mathbf{U}_m - {\mathbf{B}_r}\|^2_{F},
\end{equation}
where ${U_r}:= \{ \mathbf{R}_{U}^{-1}\mathbf{Q}^{\mathrm{H}}\mathbf{b} : \mathbf{b} \in \mathrm{span}({\mathbf{B}_r}) \}$.
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{corollary}\label{thm:exact_pod}
Let $\mathbf{Q} \in \mathbb{K}^{s\times n}$ be such that $\mathbf{Q}^{\mathrm{H}}\mathbf{Q}= \mathbf{R}_U$. Let $\mathbf{B}^*_r \in \mathbb{K}^{s\times m}$ be a best rank-$r$ approximation of $\mathbf{Q}\mathbf{U}_m$ with respect to the Frobenius norm $\|\cdot \|_{F}$. Then
\begin{equation}
POD_r(\mathbf{U}_m, \| \cdot \|_U )= \{ \mathbf{R}_{U}^{-1}\mathbf{Q}^{\mathrm{H}}\mathbf{b} : \mathbf{b} \in \mathrm{range}(\mathbf{B}^*_r) \}.
\end{equation}
\end{corollary}
It follows that the approximation subspace $U_r$ for $\{ \mathbf{u}(\mu^i) \}^{m}_{i=1}$ can be obtained by computing a low-rank approximation of $\mathbf{Q} \mathbf{U}_m$. According to Proposition\nobreakspace \ref{thm:approx_pod}, for given $r$, quasi-optimality of {$U_r$} can be guaranteed by quasi-optimality of {$\mathbf{B}_r$}.
\begin{remark} \label{rmk:Q_eval}
{The} matrix $\mathbf{Q}$ in Proposition\nobreakspace \ref {thm:approx_pod} and Corollary\nobreakspace \ref {thm:exact_pod} can be seen as a map from $U$ to $ \mathbb{K}^{s}$. Clearly, it can be computed with a Cholesky (or spectral) decomposition of $\mathbf{R}_U$. For large-scale problems, however, it might be a burden to obtain, store or operate with such a matrix. We would like to underline that $\mathbf{Q}$ does not have to be a square matrix. It can be easily obtained in the framework of numerical methods for PDEs (e.g., finite elements, finite volumes, etc.). Suppose that $\mathbf{R}_U$ can be expressed as an assembly of smaller self-adjoint positive semi-definite matrices $\mathbf{R}_U^{(i)}$ each corresponding to the contribution, for example, of a finite element or subdomain. In other words,
\begin{equation*}
\mathbf{R}_U = \sum_{i=1}^l\mathbf{E}^{(i)} \mathbf{R}_U^{(i)} [\mathbf{E}^{(i)}]^{\mathrm{T}},
\end{equation*}
where $\mathbf{E}^{(i)}$ is an extension operator mapping a local vector to the global one (usually a boolean matrix). Since $\mathbf{R}_U^{(i)}$ are small matrices, their Cholesky (or spectral) decompositions are easy to compute. Let $\mathbf{Q}^{(i)}$ denote the adjoint of the Cholesky factor of $\mathbf{R}_U^{(i)}$. It can be easily verified that
\begin{equation*}
\mathbf{Q} :=\left[ \begin{array}{c}
\mathbf{Q}^{(1)} [\mathbf{E}^{(1)}]^{\mathrm{T}} \\
\mathbf{Q}^{(2)} [\mathbf{E}^{(2)}]^{\mathrm{T}} \\
\vdots \\
\mathbf{Q}^{(l)} [\mathbf{E}^{(l)}]^{\mathrm{T}}
\end{array}\right]
\end{equation*}
satisfies $ \mathbf{Q}^{\mathrm{H}}\mathbf{Q}= \mathbf{R}_U$.
\end{remark}
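A possible (purely illustrative) implementation of this block-wise assembly is sketched below, assuming each local matrix is positive definite so that a standard Cholesky factorization applies.
\begin{verbatim}
import numpy as np

def assemble_Q(local_matrices, local_dofs, n):
    """Assemble Q with Q^H Q = R_U from local contributions R_U^(i).

    local_matrices : list of small positive definite matrices R_U^(i)
    local_dofs     : list of index arrays; local_dofs[i] contains the global
                     indices targeted by the extension operator E^(i)
    n              : global dimension
    """
    blocks = []
    for R_i, dofs in zip(local_matrices, local_dofs):
        L_i = np.linalg.cholesky(R_i)             # R_U^(i) = L_i L_i^H
        Q_i = np.zeros((R_i.shape[0], n), dtype=R_i.dtype)
        Q_i[:, dofs] = L_i.conj().T               # Q^(i) [E^(i)]^T
        blocks.append(Q_i)
    return np.vstack(blocks)                      # satisfies Q^H Q = R_U
\end{verbatim}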
The POD procedure using low-rank approximations is depicted in~Algorithm\nobreakspace \ref {alg:approx_pod}.
\begin{algorithm} \caption{Approximate Proper Orthogonal Decomposition} \label{alg:approx_pod}
\begin{algorithmic}
\STATE{\textbf{Given:} $\mathcal{P}_{\mathrm{train}}$, $\mathbf{A}(\mu)$, $\mathbf{b}(\mu)$, $\mathbf{R}_U$}
\STATE{\textbf{Output}: $\mathbf{U}_r$ and $\Delta^{\mathrm{POD}}$}
\STATE{1. Compute the snapshot matrix $\mathbf{U}_m$.}
\STATE{2. Determine $\mathbf{Q}$ such that $\mathbf{Q}^{\mathrm{H}}\mathbf{Q}= \mathbf{R}_U$.}
\STATE{3. Compute {a} rank-$r$ approximation, $\mathbf{B}_r $, of $\mathbf{Q}\mathbf{U}_m$.}
\STATE{4. Compute an upper bound, $\Delta^{\mathrm{POD}}$, of $\frac{1}{m} \| \mathbf{Q}\mathbf{U}_m- \mathbf{B}_r \|^2_{F}$. }
\STATE{5. Find a matrix $\mathbf{C}_r$ whose column space is $\mathrm{span}(\mathbf{B}_r)$.}
\STATE{6. Evaluate $\mathbf{U}_r:=\mathbf{R}_{U}^{-1}\mathbf{Q}^{\mathrm{H}}\mathbf{C}_r$.}
\end{algorithmic}
\end{algorithm}
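For concreteness, the following NumPy sketch instantiates Algorithm~\ref{alg:approx_pod} with a truncated SVD as the low-rank approximation step (in which case the rank-$r$ approximation and the bound $\Delta^{\mathrm{POD}}$ are exact); the explicit inverse of $\mathbf{R}_U$ is used only for readability.
\begin{verbatim}
import numpy as np

def approximate_pod(U_m, Q, R_U_inv, r):
    """Algebraic POD (Algorithm 1) with a truncated SVD as the low-rank step.

    U_m : (n, m) snapshot matrix,  Q : (s, n) factor with Q^H Q = R_U
    """
    QU = Q @ U_m
    W, sigma, _ = np.linalg.svd(QU, full_matrices=False)
    C_r = W[:, :r]                                    # basis of span(B_r)
    Delta_POD = np.sum(sigma[r:] ** 2) / U_m.shape[1] # (1/m)||Q U_m - B_r||_F^2
    U_r = R_U_inv @ (Q.conj().T @ C_r)                # U_r = R_U^{-1} Q^H C_r
    return U_r, Delta_POD
\end{verbatim}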
\section{Random sketching} \label{RS}
In this section, we adapt the classical sketching theory in Euclidean spaces~\cite{woodruff2014sketching} to a slightly more general framework. The sketching technique is seen as a modification of the inner product for a given subspace. The modified inner product is approximately equal to the original one but is much easier to operate with. Thanks to this interpretation of the methodology, the integration of the sketching technique into the context of projection-based MOR becomes straightforward.
\subsection{$\ell_2$-embeddings} \label{l2embeddings}
Let $X:= \mathbb{K}^{n}$ be endowed with inner product $\langle \cdot, \cdot \rangle_{X}:= \langle \mathbf{R}_X \cdot, \cdot \rangle $ for some self-adjoint positive definite matrix $\mathbf{R}_X \in \mathbb{K}^{n\times n}$, and let $Y$ be a subspace of $X$ of moderate dimension. The dual of $X$ is identified with $X':=\mathbb{K}^{n}$ and the dual of $Y$ is identified with $Y':=\{ \mathbf{R}_{X} \mathbf{y} : \mathbf{y} \in Y \}$. $X'$ and $Y'$ are both equipped with inner product $\langle \cdot, \cdot \rangle_{X'}:=\langle \cdot, \mathbf{R}_X^{-1} \cdot \rangle$. The inner products $\langle \cdot , \cdot \rangle_{X}$ and $\langle \cdot, \cdot \rangle_{X'}$ can be very expensive to evaluate. The computational cost can be reduced drastically if we are interested {solely} in operating with vectors lying in subspaces $Y$ or $Y'$. For this we introduce the concept of $X \to \ell_2$ subspace embeddings.
Let $\mathbf{\Theta} \in \mathbb{K}^{k\times n}$ with $k\leq n$. Further, $\mathbf{\Theta}$ is seen as an embedding for subspaces of $X$. It maps vectors from the subspaces of $X$ to vectors from $\mathbb{K}^{k}$ equipped with {the canonical} {$\ell_2$}-inner product $\langle \cdot, \cdot \rangle$, so $\mathbf{\Theta}$ is referred to as an $X \to \ell_2$ subspace embedding.
Let us now introduce the following semi-inner products on $X$:
\begin{equation} \label{eq:thetainnerdef}
\langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_{X}:= \langle \mathbf{\Theta} \cdot, \mathbf{\Theta} \cdot \rangle, \textup{ and }
\langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_{X'} := \langle \mathbf{\Theta} \mathbf{R}_X^{-1} \cdot, \mathbf{\Theta} \mathbf{R}_X^{-1} \cdot \rangle.
\end{equation}
Let $\| \cdot \|^{\mathbf{\Theta}}_{X}$ and $\| \cdot \|^{\mathbf{\Theta}}_{X'}$ denote the associated semi-norms. In general, $\mathbf{\Theta}$ is chosen so that $ \langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_{X}$ approximates well $\langle \cdot, \cdot \rangle_{X}$ for all vectors in $Y$ or, in other words, $\mathbf{\Theta}$ is an $X \to \ell_2$ $\varepsilon$-subspace embedding for $Y$, as defined below.
\begin{definition} \label{def:epsilon_embedding}
If $\mathbf{\Theta}$ satisfies
\begin{equation} \label{eq:epsilon_embedding}
\forall \mathbf{x}, \mathbf{y} \in Y, \ \left | \langle \mathbf{x}, \mathbf{y} \rangle_X - \langle \mathbf{x}, \mathbf{y} \rangle^{\mathbf{\Theta}}_{X} \right |\leq \varepsilon \| \mathbf{x} \|_X \| \mathbf{y} \|_X,
\end{equation}
for some $\varepsilon \in [0,1)$, then it is called an $X \to \ell_2$ $\varepsilon$-subspace embedding {(or simply, an $\varepsilon$-embedding)} for $Y$.
\end{definition}
\begin{corollary} \label{thm:dual_embedding}
If $\mathbf{\Theta}$ is a $X \to \ell_2$ $\varepsilon$-subspace embedding for $Y$, then
\begin{equation*}
\forall \mathbf{x}', \mathbf{y}' \in Y', \ \left | \langle \mathbf{x}', \mathbf{y}' \rangle_{X'}- \langle \mathbf{x}', \mathbf{y}' \rangle^{\mathbf{\Theta}}_{X'} \right | \leq \varepsilon \| \mathbf{x}' \|_{X'} \| \mathbf{y}' \|_{X'}.
\end{equation*}
\end{corollary}
\begin{proposition} \label{thm:innerproduct}
If $\mathbf{\Theta}$ is a $X \to \ell_2$ $\varepsilon$-subspace embedding for $Y$, then $\langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_{X}$ and $\langle \cdot , \cdot \rangle^{\mathbf{\Theta}}_{X'}$ are inner products on $Y$ and $Y'$, respectively.
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
Let $Z \subseteq Y$ be a subspace of $Y$. A semi-norm $ \| \cdot \|_{Z'}$ over $Y'$ can be defined by
\begin{equation} \label{eq:seminorm}
\| \mathbf{y}' \|_{Z'}:= \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{y}', \mathbf{x} \rangle|}{\| \mathbf{x} \|_X}=\underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_X^{-1} \mathbf{y}', \mathbf{x} \rangle_{X}|}{\| \mathbf{x} \|_{X}},~ \mathbf{y}' \in Y'.
\end{equation}
We propose to approximate $\| \cdot \|_{Z'}$ by the semi-norm $\| \cdot \|^{\mathbf{\Theta}}_{Z'}$ given by
\begin{equation} \label{eq:skseminorm}
\| \mathbf{y}' \|^{\mathbf{\Theta}}_{Z'}:= \underset{\mathbf{x} \in Z \backslash \{ \mathbf{0} \}} \max \frac{|\langle \mathbf{R}_X^{-1}\mathbf{y}', \mathbf{x} \rangle^{\mathbf{\Theta}}_{X}|}{\| \mathbf{x} \|^{\mathbf{\Theta}}_{X}},~ \mathbf{y}' \in Y'.
\end{equation}
Observe that letting $Z= Y$ in~Equations\nobreakspace \textup {(\ref {eq:seminorm})} and\nobreakspace \textup {(\ref {eq:skseminorm})} leads to norms on $Y'$ which are induced by $\langle \cdot, \cdot \rangle_{X'}$ and $\langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_{X'}$.
\begin{proposition} \label{thm:skseminorm_ineq}
If $\mathbf{\Theta}$ is a $X \to \ell_2$ $\varepsilon$-subspace embedding for $Y$, then for all $\mathbf{y}' \in Y'$,
\begin{equation} \label{eq:skseminorm_ineq}
\frac{1}{\sqrt{1+\varepsilon}} (\| \mathbf{y}' \|_{Z'}- \varepsilon\| \mathbf{y}' \|_{X'})\leq \| \mathbf{y}' \|^{\mathbf{\Theta}}_{Z'} \leq \frac{1}{\sqrt{1-\varepsilon}}(\| \mathbf{y}' \|_{Z'}+ \varepsilon\| \mathbf{y}' \|_{X'}).
\end{equation}
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
\subsection{Data-oblivious embeddings} \label{obleddings}
Here we show how to build an $X \to \ell_2$ $\varepsilon$-subspace embedding $\mathbf{\Theta}$ as a realization of a carefully chosen probability distribution over matrices. A reduction of the complexity of an algorithm can be obtained when $\mathbf{\Theta}$ is a structured matrix (e.g., sparse or hierarchical)~\cite{woodruff2014sketching} so that it can be efficiently multiplied by a vector. In such a case $\mathbf{\Theta}$ has to be operated with {as a function outputting products with vectors}. For environments where the memory consumption or the cost of communication between cores is the primary constraint, an unstructured $\mathbf{\Theta}$ can still provide drastic reductions and be more expedient~\cite{halko2011finding}.
\begin{definition} \label{def:oblepsilon_embedding}
$\mathbf{\Theta}$ is called a $( \varepsilon, \delta, d)$ oblivious $X \to \ell_2$ subspace embedding if for any $d$-dimensional subspace {$V$} of $X$ it holds
\begin{equation} \label{eq:oblepsilon_embedding}
\mathbb{P} \left (\mathbf{\Theta} \text{ is an } X \to \ell_2 \ \varepsilon\text{-subspace embedding for } {V} \right) \geq 1-\delta.
\end{equation}
\end{definition}
\begin{corollary} \label{thm:oblepsilon_prime}
If $\mathbf{\Theta}$ is a $( \varepsilon, \delta, d)$ oblivious $X \to \ell_2$ subspace embedding, then $\mathbf{\Theta} \mathbf{R}_X^{-1}$ is a $( \varepsilon, \delta, d)$ oblivious $X' \to \ell_2$ subspace embedding.
\end{corollary}
The advantage of oblivious embeddings is that they do not require any a priori knowledge of the embedded subspace. In this work we shall consider three well-known oblivious $\ell_2 \to \ell_2$ subspace embeddings: the rescaled Gaussian distribution, the rescaled Rademacher distribution, and the partial Subsampled Randomized Hadamard Transform (P-SRHT). The rescaled Gaussian distribution is such that the entries of $\mathbf{\Theta}$ are independent normal random variables with mean $0$ and variance $k^{-1}$. For the rescaled Rademacher distribution, the entries of $\mathbf{\Theta}$ are independent random variables satisfying $\mathbb{P} \left ( [\mathbf{\Theta}]_{i,j}= \pm k^{-1/2} \right )=1/2$. Next we recall a standard result that states that the rescaled Gaussian and Rademacher distributions with sufficiently large $k$ are $(\varepsilon, \delta, d)$ oblivious $\ell_2 \to \ell_2$ subspace embeddings. This can be found in~\cite{sarlos2006improved,woodruff2014sketching}. The authors, however, provided the bounds for $k$ in $\mathcal{O}$ (asymptotic) notation with no concern about the constants. These bounds can be impractical for certification (both a priori and a posteriori) of the solution. Below we provide explicit bounds for $k$.
\begin{proposition} \label{thm:Rademacher}
Let $\varepsilon$ and $\delta$ be such that $0<\varepsilon<0.572$ and $0 <\delta <1$. The rescaled Gaussian and the rescaled Rademacher distributions over $\mathbb{R}^{k \times n}$ with $k\geq 7.87 \varepsilon^{-2}({6.9} d + {\log ({1}/\delta)})$ for $\mathbb{K} = \mathbb{R}$ and $k\geq 7.87 \varepsilon^{-2}({13.8} d + {\log ({1}/\delta)})$ for $\mathbb{K} = \mathbb{C}$ are $( \varepsilon, \delta, d)$ oblivious $\ell_2 \to \ell_2$ subspace embeddings.
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{remark} \label{rmk:complexGauss}
For $\mathbb{K} = \mathbb{C}$, an embedding with a better theoretical bound for $k$ than the one in~Proposition\nobreakspace \ref {thm:Rademacher} can be obtained by taking $\mathbf{\Theta}:= \frac{1}{\sqrt{2}}(\mathbf{\Theta}_{\mathrm{Re}} + j \mathbf{\Theta}_{\mathrm{Im}})$, where $j = \sqrt{-1}$ and $\mathbf{\Theta}_{\mathrm{Re}}, \mathbf{\Theta}_{\mathrm{Im}} \in \mathbb{R}^{k \times n}$ are rescaled Gaussian matrices. It can be shown that such $\mathbf{\Theta}$ is an $( \varepsilon, \delta, d)$ oblivious $\ell_2 \to \ell_2$ subspace embedding for $k\geq 3.94 \varepsilon^{-2}(13.8 d + \log (1/\delta))$. A detailed proof of this fact is provided in the supplementary material. In this work, however, we shall consider only real-valued embeddings.
\end{remark}
For {the} P-SRHT distribution, $\mathbf{\Theta}$ is taken to be the first $n$ columns of the matrix $k^{-1/2} (\mathbf{R} \mathbf{H}_s \mathbf{D}) \in \mathbb{R}^{k\times s}$, where $s$ is the power of 2 such that $n\leq s <2n$, $\mathbf{R} \in \mathbb{R}^{k\times s}$ are the first $k$ rows of a random permutation of {rows} of the identity matrix, $\mathbf{H}_s \in \mathbb{R}^{s\times s}$ is a Walsh-Hadamard matrix\footnote{{The Walsh-Hadamard matrix $\mathbf{H}_s$ of dimension $s$, with $s$ being a power of $2$, is a structured matrix defined recursively by $\mathbf{H}_s = \mathbf{H}_{s/2} \otimes \mathbf{H}_2$, with $\mathbf{H}_2 := \begin{bmatrix} 1& 1\\ 1& -1 \end{bmatrix}$. A product of $\mathbf{H}_s$ with a vector can be computed with $s \log_2{(s)}$ flops by using the fast Walsh-Hadamard transform.}}, and $\mathbf{D} \in \mathbb{R}^{s\times s}$ is a random diagonal matrix with random entries such that $\mathbb{P} \left ( [\mathbf{D}]_{i,i} =\pm 1 \right )=1/2$.
\begin{proposition} \label{thm:P-SRHT}
Let $\varepsilon$ and $\delta$ be such that $0<\varepsilon<{1}$ and $0<\delta <1$. The P-SRHT distribution over $\mathbb{R}^{k\times n}$ with $k\geq {2( \varepsilon^{2} - \varepsilon^3/3)^{-1}} \left [\sqrt{d}+ \sqrt{8 \log(6 n/\delta)} \right ]^2 \log (3 d/\delta)$ is a $( \varepsilon, \delta, d)$ oblivious $\ell_2 \to \ell_2$ subspace embedding.
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{remark} \label{rmk:P-SRHT-Gaussian}
A product of P-SRHT and Gaussian (or Rademacher) matrices can lead to oblivious $\ell_2 \to \ell_2$ subspace embeddings that have better theoretical bounds for $k$ than P-SRHT but still {have low complexity of multiplication by a vector.}
\end{remark}
We observe that the lower bounds in Propositions\nobreakspace \ref {thm:Rademacher} and\nobreakspace \ref {thm:P-SRHT} are independent of, or only weakly (logarithmically) dependent on, the dimension $n$ and the probability of failure $\delta$. In other words, $\mathbf{\Theta}$ with a moderate $k$ can be guaranteed to satisfy~(\ref {eq:oblepsilon_embedding}) even for extremely large $n$ and small $\delta$. Note that the theoretical bounds for $k$ shall be useful only for problems with rather high initial dimension, say with $n/r>10^4$. Furthermore, our experiments revealed that the presented theoretical bounds are pessimistic. Another way of selecting the size of the random sketching matrix $\mathbf{\Theta}$ such that it is an $\varepsilon$-embedding for a given subspace $V$ is the adaptive procedure proposed in~\cite{balabanov2018}.
The rescaled Rademacher distribution and P-SRHT provide database-friendly matrices, which are easy to operate with. The rescaled Rademacher distribution is attractive from the data structure point of view and it can be efficiently implemented using standard SQL primitives~\cite{achlioptas2003database}. {The P-SRHT has a hierarchical structure allowing multiplications by vectors with only $s \log_2{(s)}$ flops, where $s$ is a power of $2$ and $n\leq s < 2n$, using the fast Walsh-Hadamard transform or even $2s \log_2(k + 1)$ flops using a more sophisticated procedure proposed in~\cite{ailon2009fast}}. In the algorithms, the P-SRHT distribution shall be preferred. However, for multi-core computing, where the hierarchical structure of P-SRHT cannot be fully exploited, Gaussian or Rademacher matrices can be more expedient. Finally, we would like to point out that the random sequence needed for constructing a realization of the Gaussian, Rademacher or P-SRHT distribution can be generated using a seeded random number generator. In this way, an embedding can be efficiently maintained with negligible communication (for parallel and distributed computing) and storage costs.
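The three distributions can be realized with a few lines of Python; the construction below uses a seeded random number generator, and the P-SRHT matrix is formed densely for clarity (a practical implementation would instead apply the fast Walsh-Hadamard transform).
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def gaussian_embedding(k, n, seed=0):
    """Rescaled Gaussian oblivious l2 -> l2 subspace embedding."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((k, n)) / np.sqrt(k)

def rademacher_embedding(k, n, seed=0):
    """Rescaled Rademacher embedding: entries +/- k^{-1/2} with prob. 1/2."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)

def psrht_embedding(k, n, seed=0):
    """Dense (non-optimized) P-SRHT: first n columns of k^{-1/2} R H_s D."""
    rng = np.random.default_rng(seed)
    s = 1 << int(np.ceil(np.log2(n)))        # power of 2 with n <= s < 2n
    rows = rng.permutation(s)[:k]            # R: k rows of a permuted identity
    signs = rng.choice([-1.0, 1.0], size=s)  # D: random +/-1 diagonal
    H = hadamard(s).astype(float)            # Walsh-Hadamard matrix H_s
    return (H[rows, :] * signs)[:, :n] / np.sqrt(k)
\end{verbatim}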
The following proposition can be used for constructing oblivious $X \to \ell_2$ subspace embeddings for general inner product $\langle \mathbf{R}_X \cdot, \cdot \rangle$ from classical $\ell_2 \to \ell_2$ subspace embeddings.
\begin{proposition} \label{thm:buildepsilon_embedding}
Let $\mathbf{Q} \in \mathbb{K}^{s\times n}$ be any matrix such that $\mathbf{Q}^{\mathrm{H}}\mathbf{Q}= \mathbf{R}_X$. If $\mathbf{\Omega} \in \mathbb{K}^{k\times s}$ is a $(\varepsilon, \delta, d)$ oblivious $\ell_2 \to \ell_2$ subspace embedding, then $\mathbf{\Theta}=\mathbf{\Omega}\mathbf{Q}$ is a $(\varepsilon, \delta, d)$ oblivious $X \to \ell_2$ subspace embedding.
\begin{proof}
See appendix.
\end{proof} \end{proposition}
Note that {the} matrix $\mathbf{Q}$ in Proposition\nobreakspace \ref {thm:buildepsilon_embedding} can be efficiently obtained block-wise (see Remark\nobreakspace \ref {rmk:Q_eval}). In addition, there is no need to evaluate $\mathbf{\Theta}=\mathbf{\Omega}\mathbf{Q}$ explicitly.
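In practice this simply amounts to applying $\mathbf{Q}$ and $\mathbf{\Omega}$ successively, e.g., as in the following one-line sketch.
\begin{verbatim}
def make_theta_action(Omega, Q):
    """Return a function applying Theta = Omega Q without forming the product."""
    return lambda x: Omega @ (Q @ x)
\end{verbatim}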
\section{$\ell_2$-embeddings for projection-based MOR} \label{l2embeddingsMOR}
In this section we integrate the sketching technique in the context of {model order reduction methods from Section\nobreakspace \ref {MOR}. Let us define the following subspace of $U$:
\begin{equation} \label{eq:Y_r}
Y_r(\mu) := U_r+ \mathrm{span} \{ \mathbf{R}_U^{-1} \mathbf{r}(\mathbf{x}; \mu) : \mathbf{x} \in U_r \},
\end{equation}
where $\mathbf{r}(\mathbf{x}; \mu) = \mathbf{b}(\mu)- \mathbf{A}(\mu)\mathbf{x}$, and identify its dual space with $Y_r(\mu)' :=\mathrm{span} \{ \mathbf{R}_U \mathbf{x} : \mathbf{x} \in Y_r(\mu) \}$.}
{Furthermore, let $\mathbf{\Theta} \in \mathbb{K}^{k\times n}$ be a certain sketching matrix seen as an $U \to \ell_2$ subspace embedding.}
\subsection {Galerkin projection} \label{SKgalproj}
{We propose to use random sketching for estimating the Galerkin projection. For any $\mathbf{x} \in U_r$ the residual $ \mathbf{r}(\mathbf{x}; \mu)$ belongs to $Y_r(\mu)'$.} Consequently, taking into account Proposition\nobreakspace \ref {thm:skseminorm_ineq}, if $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $Y_r(\mu)$, then for all $\mathbf{x} \in U_r$ the semi-norm $\| \mathbf{r}(\mathbf{x}; \mu) \|_{U_r'}$ in~(\ref {eq:galproj2}) can be well approximated by $\| \mathbf{r}(\mathbf{x}; \mu) \|^{\mathbf{\Theta}}_{U_r'}$. This leads to the sketched version of the Galerkin orthogonality condition:
\begin{equation} \label{eq:SKgalproj}
\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{U_r'}=0.
\end{equation}
The quality of projection $\mathbf{u}_r(\mu)$ satisfying~(\ref {eq:SKgalproj}) can be characterized by the following coefficients:
\begin{subequations} \label{eq:skalpharbetar}
\begin{align}
&\alpha^{\mathbf{\Theta}}_r(\mu):= \underset{ \mathbf{x} \in U_r \backslash \{ \mathbf{0} \}} \min \frac{\| \mathbf{A}(\mu)\mathbf{x} \|^{\mathbf{\Theta}}_{U_r'}}{\| \mathbf{x} \|_U}, \label{eq:skalphar} \\
&\beta^{\mathbf{\Theta}}_r(\mu):= \underset{ \mathbf{x} \in \left ( \mathrm{span} \{ \mathbf{u}(\mu) \}+ U_r \right ) \backslash \{ \mathbf{0} \}} \max \frac{ \| \mathbf{A}(\mu) \mathbf{x} \|^{\mathbf{\Theta}}_{U_r'}}{\| \mathbf{x} \|_U}.
\end{align}
\end{subequations}
\begin{proposition}[Cea's lemma for sketched Galerkin projection] \label{thm:SKquasi-opt}
Let $\mathbf{u}_r(\mu)$ satisfy~(\ref {eq:SKgalproj}). If $\alpha^{\mathbf{\Theta}}_r(\mu)>0$, then the following relation holds
\begin{equation} \label{eq:SKquasi-opt}
\| \mathbf{u}(\mu)- \mathbf{u}_r(\mu) \|_{U} \leq (1+\frac{\beta^{\mathbf{\Theta}}_r(\mu)}{\alpha^{\mathbf{\Theta}}_r(\mu)}) \| \mathbf{u}(\mu)- \mathbf{P}_{U_r}\mathbf{u}(\mu) \|_{U}.
\end{equation}
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{proposition} \label{thm:skcea}
Let
\begin{equation*}
a_r(\mu):= \underset{ \mathbf{w} \in U_r \backslash \{ \mathbf{0} \}} \max \frac{ \| \mathbf{A}(\mu)\mathbf{w} \|_{U'} }{\| \mathbf{A}(\mu)\mathbf{w} \|_{U_r'}}.
\end{equation*}
If $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-embedding for $Y_r(\mu)$, then
\begin{subequations}
\begin{align} \label{eq:skalphabetabounds}
&\alpha^{\mathbf{\Theta}}_r(\mu)\geq \frac{1}{\sqrt{1+\varepsilon}}(1-\varepsilon a_r(\mu)) \alpha_r(\mu), \\
&\beta^{\mathbf{\Theta}}_r(\mu)\leq \frac{1}{\sqrt{1-\varepsilon}}(\beta_r(\mu)+ \varepsilon \beta(\mu)).
\end{align}
\end{subequations}
\begin{proof}
See appendix.
\end{proof} \end{proposition}
There are two ways to select a random distribution for $\mathbf{\Theta}$ such that it is guaranteed to be a $U \to \ell_2$ $\varepsilon$-embedding for $Y_r(\mu)$ for all $\mu \in \mathcal{P}$, simultaneously, with probability at least $1-\delta$.
A first way applies when $\mathcal{P}$ is of finite cardinality. We can choose $\mathbf{\Theta}$ such that it is a $( \varepsilon, \delta {\# \mathcal{P}}^{-1}, d)$ oblivious $U \to \ell_2$ subspace embedding, where $d: = \max_{\mu \in \mathcal{P} } {\textup{ dim} (Y_r(\mu))}$ and apply a union bound for the probability of success. Since $d \leq 2r +1$, $\mathbf{\Theta}$ can be selected of moderate size.
When $\mathcal{P}$ is infinite, we make a standard assumption that $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ admit affine representations.
It then follows directly from the definition of $Y_r(\mu)$ that $ \bigcup_{\mu \in \mathcal{P}} Y_r(\mu)$ is contained in a low-dimensional space $Y^*_r$. Let $d^*$ be the dimension of this space. By definition, if $\mathbf{\Theta}$ is a $( \varepsilon, \delta, d^*)$ oblivious $U \to \ell_2$ subspace embedding, then it is a $U \to \ell_2$ $\varepsilon$-embedding for $Y^*_r$, and hence for every $Y_r(\mu)$, simultaneously, with probability at least $1-\delta$.
The lower bound for $\alpha^{\mathbf{\Theta}}_r(\mu)$ in~Proposition\nobreakspace \ref {thm:skcea} depends on the product $\varepsilon a_r(\mu)$. {In particular, to guarantee positivity of $\alpha^{\mathbf{\Theta}}_r(\mu)$ and ensure well-posedness of~(\ref{eq:SKgalproj}), condition $\varepsilon a_r(\mu) <1$ has to be satisfied.} The coefficient $a_r(\mu)$ is {bounded from above} by $\frac{\beta(\mu)}{\alpha_r(\mu)}$. Consequently, $a_r(\mu)$ for coercive well-conditioned operators is expected to be lower than for non-coercive ill-conditioned $\mathbf{A}(\mu)$. The condition number and coercivity of $\mathbf{A}(\mu)$, however, do not fully characterize $a_r(\mu)$. This coefficient rather reflects how well $U_r$ corresponds to its image $\{ \mathbf{A}(\mu) \mathbf{x}: \mathbf{x} \in U_r \}$ through the map $\mathbf{A}(\mu)$.
For example, if the basis for $U_r$ is formed from eigenvectors of $\mathbf{A}(\mu)$ then $a_r(\mu)=1$. We would also like to note that the performance of the random sketching technique depends on the operator only when it is employed for estimating the Galerkin projection. The accuracy of estimation of the residual error and the goal-oriented correction depends on the quality of the sketching matrix $\mathbf{\Theta}$ but not on $\mathbf{A}(\mu)$. In addition, to make the performance of random sketching completely insensitive to the operator's properties, one can consider another type of projection (randomized minimal residual projection) for $\mathbf{u}_r(\mu)$, as is discussed in~\cite{balabanov2018}.
The {coordinates} of the solution $\mathbf{u}_r(\mu)$ of~(\ref {eq:SKgalproj}) can be found by solving
\begin{equation} \label{eq:skreduced_system}
\mathbf{A}_r(\mu)\mathbf{a}_r(\mu) = \mathbf{b}_r(\mu),
\end{equation}
where $\mathbf{A}_r(\mu):= \mathbf{U}_r^{\mathrm{H}}\mathbf{\Theta}^{\mathrm{H}}\mathbf{\Theta}\mathbf{R}_U^{-1}\mathbf{A}(\mu)\mathbf{U}_r \in \mathbb{K}^{r\times r}$ and $\mathbf{b}_r(\mu):= \mathbf{U}_r^{\mathrm{H}}\mathbf{\Theta}^{\mathrm{H}}\mathbf{\Theta}\mathbf{R}_U^{-1}\mathbf{b}(\mu) \in \mathbb{K}^{r}$.
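A minimal sketch of the assembly and solution of the sketched system~(\ref{eq:skreduced_system}) for a single parameter value is given below; the routine applying $\mathbf{R}_U^{-1}$ to a block of vectors is assumed to be provided, and the names are illustrative.
\begin{verbatim}
import numpy as np

def sketched_galerkin_solution(Theta, A_mu, b_mu, U_r, solve_R_U):
    """Assemble and solve the sketched Galerkin system for one parameter value.

    Theta     : (k, n) U -> l2 embedding (dense array here for simplicity)
    solve_R_U : applies R_U^{-1} to a vector or to a block of column vectors
    """
    SU = Theta @ U_r                      # Theta U_r
    SA = Theta @ solve_R_U(A_mu @ U_r)    # Theta R_U^{-1} A(mu) U_r
    Sb = Theta @ solve_R_U(b_mu)          # Theta R_U^{-1} b(mu)
    A_r = SU.conj().T @ SA                # U_r^H Theta^H Theta R_U^{-1} A(mu) U_r
    b_r = SU.conj().T @ Sb                # U_r^H Theta^H Theta R_U^{-1} b(mu)
    a_r = np.linalg.solve(A_r, b_r)
    return a_r, U_r @ a_r
\end{verbatim}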
\begin{proposition} \label{thm:skalgstability}
Let $\mathbf{\Theta}$ be a $U \to \ell_2$ $\varepsilon$-embedding for $U_r$, and let $\mathbf{U}_r$ be orthogonal with respect to $\langle \cdot, \cdot \rangle^{\mathbf{\Theta}}_U$. Then the condition number of $\mathbf{A}_r(\mu)$ in~(\ref {eq:skreduced_system}) is bounded by $\sqrt{\frac{1+\varepsilon}{1-\varepsilon}} \frac{\beta^\mathbf{\Theta}_r(\mu)}{\alpha^\mathbf{\Theta}_r(\mu)}$.
\begin{proof}
See appendix.
\end{proof}
\end{proposition}
\subsection{Error estimation}
Let $\mathbf{u}_r(\mu) \in U_r$ be an approximation of $\mathbf{u}(\mu)$. Consider the following error estimator:
\begin{equation} \label{eq:skerrorind}
\Delta^{\mathbf{\Theta}}(\mathbf{u}_r(\mu);\mu):= \frac{\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{U'}}{\eta(\mu)},
\end{equation}
where $\eta(\mu)$ is defined by~(\ref {eq:eta}). Below we show that under certain conditions, $\Delta^{\mathbf{\Theta}}(\mathbf{u}_r(\mu);\mu)$ is guaranteed to be close to the classical error indicator $\Delta(\mathbf{u}_r(\mu);\mu)$.
\begin{proposition} \label{thm:skerrorind}
If $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-embedding for $\mathrm{span} \{ \mathbf{R}^{-1}_U\mathbf{r}(\mathbf{u}_r(\mu);\mu) \}$, then
\begin{equation} \label{eq:skerroropt}
\sqrt{1-\varepsilon}\Delta(\mathbf{u}_r(\mu);\mu)\leq \Delta^{\mathbf{\Theta}}(\mathbf{u}_r(\mu);\mu)\leq \sqrt{1+\varepsilon}\Delta(\mathbf{u}_r(\mu);\mu).
\end{equation}
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{corollary} \label{thm:skerrorind1}
If $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-embedding for $Y_r(\mu)$, then relation~(\ref {eq:skerroropt}) holds.
\end{corollary}
\subsection{Primal-dual correction} \label{sk_pd_correction}
The sketching technique can be applied to the dual problem in exactly the same manner as to the primal problem.
Let $\mathbf{u}_r(\mu) \in U_r$ and $\mathbf{u}_r^\mathrm{du}(\mu) \in U^{\mathrm{du}}_r$ be approximations of $\mathbf{u}(\mu)$ and $\mathbf{u}^\mathrm{du}(\mu)$, respectively.
The sketched version of the primal-dual correction~(\ref {eq:correction}) can be expressed as follows
\begin{equation} \label{eq:skcorrection}
s_r^{\mathrm{spd}}(\mu):= s_r(\mu)- \langle \mathbf{u}_r^\mathrm{du}(\mu), \mathbf{R}_U^{-1} \mathbf{r}(\mathbf{u}_r(\mu);\mu) \rangle^{\mathbf{\Theta}}_{U}.
\end{equation}
\begin{proposition} \label{thm:skerror_correction}
If $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-embedding for $\mathrm{span}\{ \mathbf{u}_r^\mathrm{du}(\mu), \mathbf{R}^{-1}_U\mathbf{r}(\mathbf{u}_r(\mu);\mu) \}$, then
\begin{equation} \label{eq:skerror_correction}
|s(\mu)- s_r^{\mathrm{spd}}(\mu)|\leq \frac{\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{U'}}{\eta(\mu)} ((1+\varepsilon)\| \mathbf{r}^{\mathrm{du}}(\mathbf{u}_r^\mathrm{du}(\mu);\mu) \|_{U'}+ \varepsilon\| \mathbf{l}(\mu) \|_{U'}).
\end{equation}
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{remark}
We observe that the new version of primal-dual correction~(\ref {eq:skcorrection}) and its error bound~(\ref {eq:skerror_correction}) are no longer symmetric {in terms} of the primal and dual solutions. When the residual error of $\mathbf{u}_r^\mathrm{du}(\mu)$ is smaller than the residual error of $\mathbf{u}_r(\mu)$, it can be more beneficial to consider the dual problem as the primal one and vice versa.
\end{remark}
\begin{remark} \label{rmk:compliant}
Consider the so-called ``compliant case'', i.e., $\mathbf{A}(\mu)$ is self-adjoint, and $\mathbf{b}(\mu)$ is equal to $\mathbf{l}(\mu)$ up to a scaling factor. In such a case the same solution (up to a scaling factor) should be used for both the primal and the dual problems. If the approximation $\mathbf{u}_r(\mu)$ of $\mathbf{u}(\mu)$ is obtained with the classical Galerkin projection, then the primal-dual correction is automatically included in the primal output quantity, i.e., $s_r(\mu) = s_r^{\mathrm{pd}}(\mu)$. {A} similar scenario can be observed for the sketched Galerkin projection. If $\mathbf{u}_r(\mu)$ satisfies~(\ref {eq:SKgalproj}) and the same $\mathbf{\Theta}$ is considered for both the projection and the inner product in~(\ref {eq:skcorrection}), then $s_r(\mu)= s^{\mathrm{spd}}_r(\mu)$.
\end{remark}
It follows that if $\varepsilon$ is of the order of $\| \mathbf{r}^{\mathrm{du}}(\mathbf{u}_r^\mathrm{du}(\mu);\mu) \|_{U'}/ \| \mathbf{l}(\mu) \|_{U'}$, then the quadratic dependence in residual norm of the error bound is preserved. For relatively large $\varepsilon$, however, the error is expected to be proportional to $\varepsilon \| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|_{U'}$. Note that $\varepsilon$ can decrease slowly with $k$ ({typically} $\varepsilon = \mathcal{O}(k^{-1/2})$, see~Propositions\nobreakspace \ref {thm:Rademacher} and\nobreakspace \ref {thm:P-SRHT}). Consequently, preserving high precision of the primal-dual correction can require large sketching matrices.
More accurate but yet efficient estimation of $s^{\mathrm{pd}}(\mu)$ can be obtained by introducing an approximation $\mathbf{w}^\mathrm{du}_r(\mu)$ of $\mathbf{u}_r^\mathrm{du}(\mu)$ such that the inner products with $\mathbf{w}^\mathrm{du}_r(\mu)$ are efficiently computable. Such approximation does not have to be very precise. As it will become clear later, it is sufficient to have $\mathbf{w}^\mathrm{du}_r(\mu)$ such that $\|\mathbf{u}_r^\mathrm{du}(\mu) - \mathbf{w}^\mathrm{du}_r(\mu) \|_U$ is of the order of $\varepsilon^{-1} \|\mathbf{u}_r^\mathrm{du}(\mu) - \mathbf{u}^\mathrm{du}(\mu)\|_U$. A possible choice is to let $\mathbf{w}^\mathrm{du}_r(\mu)$ be the orthogonal projection of $\mathbf{u}_r^\mathrm{du}(\mu)$ on a certain subspace $W^\mathrm{du}_r \subset U$, where $W^\mathrm{du}_r$ is such that it approximates well $\{ \mathbf{u}_r^\mathrm{du} (\mu):~\mu \in \mathcal{P} \}$ but is much cheaper to operate with than $U^\mathrm{du}_r$, e.g., if it has a smaller dimension. One can simply take $W^\mathrm{du}_r = U^\mathrm{du}_i$ (the subspace spanned
by the first $i^\mathrm{du}$ basis vectors obtained during the generation of
$U^\mathrm{du}_r$), for some small $i^\mathrm{du} < r^\mathrm{du}$. A better approach consists in using a greedy algorithm or the POD method with a training set $\{ \mathbf{u}_r^\mathrm{du}(\mu) : \mu \in \mathcal{P}_{\mathrm{train}} \}$. {We could also choose $W^\mathrm{du}_r$ as the subspace associated with a coarse-grid interpolation of the solution.} In this case, even if $W^\mathrm{du}_r$ has a high dimension, it can be operated with efficiently because its basis vectors are sparse. Strategies for the efficient construction of approximation spaces for $\mathbf{u}_r^\mathrm{du}(\mu)$ (or $\mathbf{u}_r(\mu)$) are provided in~\cite{balabanov2018}. Now, let us assume that $\mathbf{w}^\mathrm{du}_r(\mu)$ is given and consider the following estimation of $s_r^\mathrm{pd}(\mu)$:
\begin{equation} \label{eq:skcorrection2}
s^{\mathrm{spd+}}_r(\mu):= s_r(\mu) - \langle \mathbf{w}^\mathrm{du}_r(\mu), \mathbf{r}(\mathbf{u}_r(\mu);\mu) \rangle - \langle \mathbf{u}_r^\mathrm{du}(\mu) - \mathbf{w}^\mathrm{du}_r(\mu), \mathbf{R}_U^{-1}\mathbf{r}(\mathbf{u}_r(\mu);\mu) \rangle^{\mathbf{\Theta}}_{U}.
\end{equation}
We notice that $s^{\mathrm{spd+}}_r(\mu)$ can be evaluated efficiently but, at the same time, it has better accuracy than the estimation $s_r^{\mathrm{spd}}(\mu)$ characterized by~(\ref {eq:skerror_correction}). By similar considerations as in~Proposition\nobreakspace \ref {thm:skerror_correction} it can be shown that, for preserving the quadratic dependence of the error of $s^{\mathrm{spd+}}_r(\mu)$ on the residual norm, it is sufficient to have $\varepsilon$ of the order of $\| \mathbf{u}_r^\mathrm{du}(\mu) - \mathbf{u}^\mathrm{du}(\mu) \|_{U}/\| \mathbf{u}_r^\mathrm{du}(\mu) - \mathbf{w}^\mathrm{du}_r(\mu) \|_{U}$.
Further, we assume that the accuracy of $s_r^{\mathrm{spd}}(\mu)$ is sufficiently good so that there is no need to consider a corrected estimation $s^{\mathrm{spd+}}_r(\mu)$. For other cases the methodology can be applied similarly.
\subsection{Computing the sketch} \label{sketch}
In this section we introduce the concept of a sketch of the reduced order model. A sketch contains all the information needed for estimating the output quantity and certifying this estimation. It can be efficiently computed in basically any computational environment.
We restrict ourselves to solving the primal problem. Similar considerations also apply for the dual problem and primal-dual correction. The $\mathbf{\Theta}$-sketch of a reduced model associated with a subspace $U_r$ is defined as
\begin{equation} \label{eq:sketchUr}
\left \{ \left \{ \mathbf{\Theta} \mathbf{x}, \mathbf{\Theta} \mathbf{R}_U^{-1} \mathbf{r}(\mathbf{x}; \mu), \langle \mathbf{l}(\mu), \mathbf{x} \rangle \right \}: ~~ \mathbf{x} \in U_r \right \}
\end{equation}
In practice, each element of~(\ref {eq:sketchUr}) can be represented by the coordinates of $\mathbf{x}$ associated with $\mathbf{U}_r$, i.e., a vector $\mathbf{a}_r \in \mathbb{K}^{r}$ such that $\mathbf{x}=\mathbf{U}_r \mathbf{a}_r$, the sketched reduced basis matrix $\mathbf{U}^{\mathbf{\Theta}}_r:=\mathbf{\Theta} \mathbf{U}_r$ and the following small parameter-dependent matrices and vectors:
\begin{equation} \label{eq:sketch}
\mathbf{V}^{\mathbf{\Theta}}_r(\mu):=\mathbf{\Theta} \mathbf{R}_U^{-1}\mathbf{A}(\mu) \mathbf{U}_r,~~ {\mathbf{b}^{\mathbf{\Theta}}}(\mu):=\mathbf{\Theta} \mathbf{R}_U^{-1} \mathbf{b}(\mu), ~~\mathbf{l}_r(\mu)^{\mathrm{H}}:= \mathbf{l}(\mu)^{\mathrm{H}} \mathbf{U}_r.
\end{equation}
Throughout the paper, matrix $\mathbf{U}^{\mathbf{\Theta}}_r$ and the affine expansions of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$, $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ and $\mathbf{l}_r(\mu)$ shall be referred to as the $\mathbf{\Theta}$-sketch of $\mathbf{U}_r$. This object should not be confused with the $\mathbf{\Theta}$-sketch associated with a subspace $U_r$ defined by~(\ref{eq:sketchUr}). The $\mathbf{\Theta}$-sketch of $\mathbf{U}_r$ shall be used for characterizing the elements of the $\mathbf{\Theta}$-sketch associated with $U_r$ similarly as $\mathbf{U}_r$ is used for characterizing the vectors in $U_r$.
The affine expansions of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$, ${\mathbf{b}^{\mathbf{\Theta}}}(\mu)$ and $\mathbf{l}_r(\mu)$ can be obtained either by considering the affine expansions of $\mathbf{A}(\mu)$, $\mathbf{b}(\mu)$, and $\mathbf{l}(\mu)$\footnote{{For instance, if $\mathbf{A}(\mu)=\sum^{m_A}_{i=1} \phi_i(\mu) \mathbf{A}_i$, then $ \mathbf{V}_r^{\mathbf{\Theta}}(\mu) = \sum^{m_A}_{i=1} \phi_i(\mu) \left (\mathbf{\Theta} \mathbf{R}_U^{-1}\mathbf{A}_i \mathbf{U}_r \right )$. Similar relations can also be derived for $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ and $\mathbf{l}_r(\mu)$.}} or with {the} empirical interpolation method (EIM)~\cite{maday2007general}. Given the sketch, the affine expansions of the quantities (e.g., $\mathbf{A}_r(\mu)$ in~(\ref {eq:skreduced_system})) needed for efficient evaluation of the output can be computed with negligible cost. Computation of the $\mathbf{\Theta}$-sketch determines the cost of the offline stage and it has to be performed depending on the computational environment. We assume that the affine factors of $\mathbf{l}_r(\mu)$ are cheap to evaluate. Then the remaining computational cost is mainly associated with the following three operations: computing the samples (snapshots) of the solution (i.e., solving the full order problem for several $\mu \in \mathcal{P}$), performing matrix-vector products with $\mathbf{R}_U^{-1}$ and the affine factors of $\mathbf{A}(\mu)$ (or $\mathbf{A}(\mu)$ evaluated at the interpolation points for EIM), and evaluating matrix-vector products with $\mathbf{\Theta}$.
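Assuming that the affine factors of $\mathbf{A}(\mu)$, $\mathbf{b}(\mu)$ and $\mathbf{l}(\mu)$ are available as lists of arrays, the offline computation of the $\mathbf{\Theta}$-sketch of $\mathbf{U}_r$ can be summarized by the following illustrative sketch.
\begin{verbatim}
def compute_theta_sketch(Theta, U_r, A_terms, b_terms, l_terms, solve_R_U):
    """Offline computation of the Theta-sketch of U_r from affine factors.

    A_terms, b_terms, l_terms : parameter-independent affine factors of
                                A(mu), b(mu) and l(mu)
    solve_R_U                 : applies R_U^{-1} to a vector or block of vectors
    """
    U_sk = Theta @ U_r                                        # Theta U_r
    V_sk = [Theta @ solve_R_U(A_i @ U_r) for A_i in A_terms]  # factors of V_r^Theta
    b_sk = [Theta @ solve_R_U(b_i) for b_i in b_terms]        # factors of b^Theta
    l_r  = [l_i.conj().T @ U_r for l_i in l_terms]            # factors of l_r(mu)^H
    return U_sk, V_sk, b_sk, l_r
\end{verbatim}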
{The cost of obtaining the snapshots is assumed to be low compared to the cost of other offline computations such as evaluations of high dimensional inner and matrix-vector products.}
This is the case when the snapshots are computed outside the main routine, using a highly optimized linear solver or a powerful server with a limited budget. This is also the case when the snapshots are obtained on distributed machines with expensive communication costs.
{Solutions of linear systems of equations should have only a minor impact on the overall cost of an algorithm even when the basic metrics of efficiency, such as the complexity (number of floating point operations) and memory consumption, are considered.
For large-scale problems solved in sequential or limited memory environments the computation of each snapshot should have log-linear (i.e., $\mathcal{O}(n (\log{n})^d)$, for some small $d$) complexity and memory requirements. Higher complexity or memory requirements are usually not acceptable {with} standard architectures.
In fact, in recent years there was an extensive development of methods for solving large-scale linear systems of equations~\cite{hackbusch2015book,bebendorf2008book,Elman2014} allowing computation of the snapshots with log-linear number of flops and bytes of memory (see for instance \cite{Xia2009,bjorn2011,Martinsson2009,Lee2013,Boman2008}). On the other hand, for classical model reduction, the evaluation of multiple inner products for the affine terms of reduced systems~(\ref {eq:reduced_system}) and the quantities for error estimation (see Section\nobreakspace\ref{efficient_res_norm}) require $\mathcal{O}(n r^2 m^2_A + n m^2_b)$ flops, with $m_A$ and $m_b$ being the numbers of terms in affine expansions of $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$, respectively, and $\mathcal{O}(n r)$ bytes of memory. We see that indeed the complexity and memory consumption of the offline stage can be highly dominated by the postprocessing of the snapshots but not their computation.}
{{The} matrices $\mathbf{R}_U$ and $\mathbf{A}(\mu)$ should be sparse or maintained in a hierarchical format~\cite{hackbusch2015book}, so that they can be multiplied by a vector using (log-)linear complexity and {storage consumption}.} {Multiplication of $\mathbf{R}_U^{-1}$ by a vector should also be an inexpensive operation with a cost comparable to the cost of computing matrix-vector products with $\mathbf{R}_U$.} For many problems it can be beneficial to precompute a factorization of $\mathbf{R}_U$ and to use it for efficient multiplication of $\mathbf{R}^{-1}_U$ by multiple vectors. {Note that for the typical $\mathbf{R}_U$ (such as stiffness and mass matrices) originating from standard discretizations of partial differential equations in two spatial dimensions, a sparse Cholesky decomposition can be precomputed using $\mathcal{O}(n^{3/2})$ flops and then used for multiplying $\mathbf{R}_U^{-1}$ by vectors with $\mathcal{O}(n \log n)$ flops.} For discretized PDEs in higher spatial dimensions, or problems where $\mathbf{R}_U$ is dense, the classical Cholesky decomposition can be more burdensome to obtain and use. {For better efficiency, the matrix $\mathbf{R}_U$ can be approximated by $\tilde{\mathbf{Q}}^\mathrm{H} \tilde{\mathbf{Q}}$ (with log-linear number of flops) using incomplete or hierarchical \cite{Bebendorf2007} Cholesky factorizations. Iterative Krylov methods with good preconditioning {are} an alternative way for computing products of $\mathbf{R}_U^{-1}$ with vectors with log-linear complexity~\cite{Boman2008}. Note that although multiplication of $\mathbf{R}_U^{-1}$ by a vector and computation of a snapshot both require solving high-dimensional systems of equations, the cost of the former operation should be considerably less than the cost of the latter one due to good properties of $\mathbf{R}_U$ (such as positive-definiteness, symmetry, and parameter-independence, which provide the ability to precompute a decomposition).}
In a streaming environment, where the snapshots are provided as data-streams, special care has to be paid to the memory constraints. It can be important to maintain $\mathbf{R}_U$ and the affine factors (or evaluations at EIM interpolation points) of $\mathbf{A}(\mu)$ with a reduced storage consumption. For discretized PDEs, for example, the entries of these matrices {(if they are sparse)} can be generated subdomain-by-subdomain on the fly. In such a case the conjugate gradient method can be a good choice for evaluating products of $\mathbf{R}^{-1}_U$ with vectors. In very extreme cases, e.g., where storage of even a single large vector is forbidden, $\mathbf{R}_U$ can be approximated by a block matrix and inverted block-by-block on the fly.
Next we discuss an efficient implementation of $\mathbf{\Theta}$. We assume that $$\mathbf{\Theta}=\mathbf{\Omega} \mathbf{Q},$$ where $\mathbf{\Omega} \in \mathbb{K}^{k \times s}$ is a classical oblivious $\ell_2 \to \ell_2$ subspace embedding and $\mathbf{Q} \in \mathbb{K}^{s\times n}$ is such that $\mathbf{Q}^\mathrm{H}\mathbf{Q}= \mathbf{R}_U$ (see~Propositions\nobreakspace \ref {thm:Rademacher}, \ref {thm:P-SRHT} and\nobreakspace \ref {thm:buildepsilon_embedding}).
{The} matrix $\mathbf{Q}$ can be expected to have a cost of multiplication by a vector comparable to $\mathbf{R}_U$. If needed, this matrix can be generated block-wise (see Remark\nobreakspace \ref {rmk:Q_eval}) on the fly similarly to $\mathbf{R}_U$.
{For environments where the measure of efficiency is the number of flops, a sketching matrix $\mathbf{\Omega}$ with fast matrix-vector multiplications such as P-SRHT {is preferable}. The complexity of {a} matrix-vector product for P-SRHT is only $2s \log_2(k+1)$, with $s$ being the power of $2$ such that $n \leq s < 2n$~\cite{ailon2009fast,boutsidis2013improved}\footnote{{The straightforward implementation of P-SRHT using the fast Walsh-Hadamard transform results in $s \log_2{(s)}$ complexity of multiplication by a vector, which yields similar computational costs as the procedure from~\cite{ailon2009fast}.}}. Consequently, assuming that $\mathbf{A}(\mu)$ is sparse, that multiplications of $\mathbf{Q}$ and $\mathbf{R}^{-1}_U$ by a vector take $\mathcal{O}(n(\log{n})^d)$ flops, and that $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ admit affine expansions with $m_A$ and $m_b$ terms respectively, the overall complexity of computation of a $\mathbf{\Theta}$-sketch of $\mathbf{U}_r$, using {a} P-SRHT matrix as $\mathbf{\Omega}$, from the snapshots is only $$\mathcal{O}(n[ r m_A \log{k} + m_b \log{k} + r m_A (\log{n})^d]).$$
This complexity can be much less than the complexity of construction of the classical reduced model from $\mathbf{U}_r$, which is $\mathcal{O}(n [r^2 m^2_A + m^2_b + r m_A (\log{n})^d])$ (including the precomputation of quantities needed for online evaluation of the residual error).}
The efficiency of an algorithm can also be measured in terms of the number of passes taken over the data. Such a situation may arise when there is a restriction on the accessible amount of fast memory. In this scenario, both structured and unstructured matrices may provide drastic reductions of the computational cost. Due to their robustness and simplicity of implementation, we suggest using Gaussian or Rademacher matrices over the others. For these matrices a seeded random number generator has to be utilized. It allows accessing the entries of $\mathbf{\Omega}$ on the fly with negligible storage costs~\cite{halko2011finding}. In a streaming environment, multiplication of Gaussian or Rademacher matrices by a vector can be performed block-wise.
Note that all aforementioned operations are well suited for parallelization. Regarding distributed computing, a sketch of each snapshot can be obtained on a separate machine with absolutely no communication. The cost of transferring the sketches to the master machine will depend on the number of rows of $\mathbf{\Theta}$ but not the size of the full order problem.
Finally, let us comment on orthogonalization of $\mathbf{U}_r$ with respect to {$\langle \cdot, \cdot \rangle_U^{\mathbf{\Theta}}$}. This procedure is particularly important for numerical stability of the reduced system of equations (see Proposition\nobreakspace \ref {thm:skalgstability}). In our applications we are interested in obtaining a sketch of the orthogonal matrix but not the matrix itself. In such a case, operating with large-scale matrices and vectors is not necessary. Let us assume that a sketch of $\mathbf{U}_r$ associated with $\mathbf{\Theta}$ is given. Let $\mathbf{T}_r \in \mathbb{K}^{r \times r}$ be such that $ \mathbf{U}^{\mathbf{\Theta}}_r \mathbf{T}_r$ is orthogonal with respect to {$\langle \cdot, \cdot \rangle$}. Such a matrix can be obtained with a standard algorithm, e.g., QR factorization. It can be easily verified that $\mathbf{U}^*_r := \mathbf{U}_r \mathbf{T}_r$ is orthogonal with respect to {$\langle \cdot, \cdot \rangle_U^{\mathbf{\Theta}}$}. We have
\begin{equation*}
\mathbf{\Theta}\mathbf{U}^*_r= \mathbf{U}^{\mathbf{\Theta}}_r\mathbf{T}_r,~~ \mathbf{\Theta}\mathbf{R}_U^{-1}\mathbf{A}(\mu)\mathbf{U}^*_r= \mathbf{V}^{\mathbf{\Theta}}_r(\mu)\mathbf{T}_r, \textup{ and } \mathbf{l}(\mu)^{\mathrm{H}}\mathbf{U}^*_r= \mathbf{l}_r(\mu)^{\mathrm{H}}\mathbf{T}_r.
\end{equation*}
Therefore, the sketch of $\mathbf{U}^*_r$ can be computed, simply, by multiplying $\mathbf{U}^{\mathbf{\Theta}}_r$ and the affine factors of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$, and $\mathbf{l}_r(\mu)^{\mathrm{H}}$, by $\mathbf{T}_r$.
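This orthogonalization can be carried out on the sketch alone, e.g., via a QR factorization of $\mathbf{U}^{\mathbf{\Theta}}_r$, as in the following illustrative sketch (assuming $\mathbf{U}^{\mathbf{\Theta}}_r$ has full column rank).
\begin{verbatim}
import numpy as np

def orthogonalize_sketch(U_sk, V_sk, l_r):
    """Orthogonalize U_r with respect to <.,.>_U^Theta by updating its sketch.

    U_sk : Theta U_r;  V_sk, l_r : affine factors of V_r^Theta(mu) and l_r(mu)^H
    """
    _, R = np.linalg.qr(U_sk)          # U_sk = Q R with Q orthonormal
    T_r = np.linalg.inv(R)             # Theta U_r T_r is l2-orthonormal
    U_sk_new = U_sk @ T_r
    V_sk_new = [V_i @ T_r for V_i in V_sk]
    l_r_new  = [l_i @ T_r for l_i in l_r]
    return U_sk_new, V_sk_new, l_r_new, T_r
\end{verbatim}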
\subsection{Efficient evaluation of the residual norm} \label{efficient_res_norm}
Until now we have discussed how random sketching can be used for reducing the offline cost of precomputing factors of affine decompositions of the reduced operator and the reduced right-hand side. Let us now focus on the cost of the online stage. Often, the most expensive part of the online stage is the evaluation of the quantities needed for computing the residual norms for a posteriori error estimation, due to the many summands in their affine expansions. In addition, as was indicated in~\cite{casenave2014,buhr2014numerically}, the classical procedure for the evaluation of the residual norms can be sensitive to round-off errors. Here we provide a {less expensive} way of computing the residual norms, which simultaneously offers better numerical stability.
Let $\mathbf{u}_r(\mu) \in U_r$ be an approximation of $\mathbf{u}(\mu)$, and $\mathbf{a}_r(\mu) \in \mathbb{K}^{r}$ be the coordinates of $\mathbf{u}_r(\mu)$ associated with $\mathbf{U}_r$, i.e., $\mathbf{u}_r(\mu) = \mathbf{U}_r \mathbf{a}_r(\mu)$. The classical algorithm for evaluating the residual norm $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{U'}$ for a large finite set of parameters $\mathcal{P}_{\mathrm{test}} \subseteq \mathcal{P}$ proceeds with expressing $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^2_{U'}$ in the following form~\cite{haasdonk2017reduced}
\begin{equation} \label{eq:compute_res}
\| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|^2_{U'}= \langle \mathbf{a}_r(\mu), \mathbf{M}(\mu)\mathbf{a}_r(\mu) \rangle+ 2\mathrm{Re}(\langle \mathbf{a}_r(\mu), \mathbf{m}(\mu) \rangle) +m(\mu),
\end{equation}
where affine expansions of $\mathbf{M}(\mu):= \mathbf{U}_r^{\mathrm{H}}\mathbf{A}(\mu)^{\mathrm{H}}\mathbf{R}_U^{-1}\mathbf{A}(\mu)\mathbf{U}_r$, $\mathbf{m}(\mu):= \mathbf{U}_r^{\mathrm{H}}\mathbf{A}(\mu)^{\mathrm{H}}\mathbf{R}_U^{-1} \mathbf{b}(\mu)$ and $m(\mu):=\mathbf{b}(\mu)^{\mathrm{H}}\mathbf{R}_U^{-1}\mathbf{b}(\mu)$ can be precomputed during the offline stage and used for efficient online evaluation of these quantities for each $\mu \in \mathcal{P}_{\mathrm{test}}$. {If $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ admit affine representations with $m_A$ and $m_b$ terms, respectively, then the associated affine expansions of $\mathbf{M}(\mu)$, $\mathbf{m}(\mu)$ and $m(\mu)$ contain $\mathcal{O}(m^2_A), \mathcal{O}(m_A m_b), \mathcal{O}(m^2_b)$ terms respectively, therefore requiring $\mathcal{O}(r^2 m^2_A+m^2_b)$ flops for their online evaluations.}
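For concreteness, the online evaluation of~(\ref{eq:compute_res}) from already assembled (parameter-evaluated) quantities can be written as follows; the routine below is only an illustrative sketch and assumes that $\mathbf{M}(\mu)$, $\mathbf{m}(\mu)$ and $m(\mu)$ have been evaluated from their precomputed affine factors.
\begin{verbatim}
import numpy as np

def classical_residual_norm_sq(a_r, M_mu, m_mu, m_scalar):
    # || r(u_r(mu); mu) ||_{U'}^2 = <a_r, M(mu) a_r> + 2 Re <a_r, m(mu)> + m(mu)
    return (np.vdot(a_r, M_mu @ a_r).real
            + 2.0 * np.vdot(a_r, m_mu).real
            + m_scalar)
\end{verbatim}
Taking the square root of this quantity is the step that can suffer from round-off when the residual is small, as discussed below.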
An approximation of the residual norm can be obtained in a more efficient and numerically stable way with {the} random sketching technique. Let us assume that $\mathbf{\Theta} \in \mathbb{K}^{k\times n} $ is a $U \to \ell_2$ embedding such that $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{U'}$ approximates well $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{U'}$ (see Proposition\nobreakspace \ref {thm:skerrorind}). Let us also assume that the factors of affine decompositions of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$ and $ \mathbf{b}^{\mathbf{\Theta}}(\mu)$ have been precomputed and are available.
For each $\mu \in \mathcal{P}_{\mathrm{test}}$ an estimation of the residual norm can be provided by
\begin{equation} \label{eq:skcompres}
\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{{U}'} \approx \| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{{U}'}= \|\mathbf{V}^{\mathbf{\Theta}}_r(\mu)\mathbf{a}_r(\mu)- \mathbf{b}^{\mathbf{\Theta}}(\mu) \|.
\end{equation}
We notice that ${\mathbf{V}^{\mathbf{\Theta}}_r}(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ have fewer terms in their affine expansions than the quantities in~(\ref {eq:compute_res}). The sizes of ${\mathbf{V}^{\mathbf{\Theta}}_r}(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$, however, can be too large to provide any online cost reduction.
In order to improve the efficiency, we introduce an additional $(\varepsilon, \delta, 1)$ oblivious $\ell_2 \to \ell_2$ subspace embedding $\mathbf{\Gamma} \in \mathbb{K}^{k'\times k}$. The theoretical bounds for the number of rows of Gaussian, Rademacher and P-SRHT matrices sufficient to satisfy the $(\varepsilon, \delta, 1)$ oblivious $\ell_2 \to \ell_2$ subspace embedding property can be obtained from~\cite[Lemmas~4.1 and 5.1]{achlioptas2003database}~and~Proposition\nobreakspace \ref {thm:P-SRHT}. They are presented in Table\nobreakspace \ref {tab:numrows}. Values are shown for $\varepsilon= 0.5$ and varying probabilities of failure $\delta$. We note that in order to account for the case $\mathbb{K} = \mathbb{C}$ we have to employ \cite[Lemmas~4.1 and 5.1]{achlioptas2003database} for the real part and the imaginary part of a vector, separately, with a union bound for the probability of success.
\begin{table}[tbhp]
\caption{The number of rows of Gaussian (or Rademacher) and P-SRHT matrices sufficient to satisfy the $(1/2, \delta, 1)$ oblivious $\ell_2 \to \ell_2$ subspace embedding property.}
\label{tab:numrows}
\centering
\scalebox{0.9}{
\begin{tabular}{|l|l|l|l|l|} \hline
& $\delta = 10^{-3}$ & $\delta = 10^{-6}$ & $\delta = 10^{-12}$ & $\delta = 10^{-18}$ \\ \hline
{Gaussian} & $200$ & $365$ & $697$ & $1029$ \\ [2pt] \hline
{P-SRHT} & ${{96.4}(8\log{k}+69.6)}$ & ${{170}(8\log{k}+125)}$ & ${{313}(8\log{k}+236)}$ & ${{454}(8\log{k}+346)}$ \\ \hline
\end{tabular}
}
\end{table}
\begin{remark}
In practice the bounds provided in Table\nobreakspace \ref {tab:numrows} are pessimistic (especially for P-SRHT), and a much smaller $k'$ (say, $k' =100$) may provide the desired accuracy. In addition, our experiments did not reveal any significant difference in performance between Gaussian matrices, Rademacher matrices and P-SRHT.
\end{remark}
We observe that the number of rows of $\mathbf{\Gamma}$ can be chosen independent of (or weakly dependent on) the number of rows of $\mathbf{\Theta}$. Let $\mathbf{\Phi}:= \mathbf{\Gamma}\mathbf{\Theta}$. By definition, for each $\mu \in \mathcal{P}_{\mathrm{test}}$
\begin{equation} \label{eq:res_concentration}
\mathbb{P} \left ( \left | (\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{V'})^2- (\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'})^2 \right | \leq \varepsilon (\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{V'})^2\right ) \geq 1-\delta;
\end{equation}
which means that $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ is an $\mathcal{O}(\varepsilon)$-accurate approximation of $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{V'}$ with high probability. The probability of success for all $\mu \in \mathcal{P}_{\mathrm{test}}$ simultaneously can be guaranteed with a union bound. In turn, $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ can be computed from
\begin{equation} \label{eq:eff_comp_res}
\| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|^{\mathbf{\Phi}}_{V'}= \|\mathbf{V}^{\mathbf{\Phi}}_r(\mu)\mathbf{a}_r(\mu)- \mathbf{b}^{\mathbf{\Phi}}(\mu) \|,
\end{equation}
where $\mathbf{V}^{\mathbf{\Phi}}_r(\mu):= \mathbf{\Gamma} {\mathbf{V}^{\mathbf{\Theta}}_r}(\mu)$ and $\mathbf{b}^{\mathbf{\Phi}}(\mu):= \mathbf{\Gamma}\mathbf{b}^{\mathbf{\Theta}}(\mu)$. The efficient computation of $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ for every $\mu \in \mathcal{P}_{\mathrm{test}}$ proceeds in two stages. First, we generate $\mathbf{\Gamma}$ and precompute the affine expansions of $\mathbf{V}^{\mathbf{\Phi}}_r(\mu)$ and $\mathbf{b}^{\mathbf{\Phi}}(\mu)$ by multiplying each affine factor of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ by $\mathbf{\Gamma}$. The cost of this stage is independent of $ \# \mathcal{P}_\mathrm{test}$ (and of $n$, of course) and becomes negligible for $\mathcal{P}_\mathrm{test}$ of large size. In the second stage, for each parameter $\mu \in \mathcal{P}_{\mathrm{test}}$, $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ is evaluated from~(\ref {eq:eff_comp_res}) using the precomputed affine expansions.
{The quantities $\mathbf{V}^{\mathbf{\Phi}}_r(\mu)$ and $\mathbf{b}^{\mathbf{\Phi}}(\mu)$ contain at most the same number of terms as $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ in their affine expansion. Consequently, if $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ are parameter-separable with $m_A$ and $m_b$ terms, respectively, then each evaluation of $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ from $\mathbf{a}_r(\mu)$ requires only $\mathcal{O}(k' r m_A+k' m_b)$ flops, which can be much less than the $\mathcal{O}(r^2 m^2_A+m^2_b)$ flops required for evaluating~(\ref{eq:compute_res}).
Note that the classical computation of the residual norm by taking the square root of $\| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|^2_{U'}$ evaluated using~(\ref{eq:compute_res}) can suffer from round-off errors. On the other hand, the evaluation of $\| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|^{\mathbf{\Phi}}_{V'}$ using~(\ref{eq:eff_comp_res}) is less sensitive to round-off errors, since it directly evaluates the (sketched) residual norm rather than its square.}
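The two-stage procedure described above can be summarized by the following NumPy sketch; the callables theta_A and theta_b, which return the affine coefficients of $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$, as well as all variable names, are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def compress_factors(Gamma, V_theta_factors, b_theta_factors):
    # Stage 1 (done once): multiply each affine factor of V^Theta_r(mu)
    # and b^Theta(mu) by the small k'-by-k embedding Gamma.
    return ([Gamma @ V for V in V_theta_factors],
            [Gamma @ b for b in b_theta_factors])

def sketched_residual_norm(mu, a_r, V_phi_factors, b_phi_factors,
                           theta_A, theta_b):
    # Stage 2 (for each mu in the test set): assemble the k'-dimensional
    # sketched residual from the affine expansions and take its norm.
    V = sum(c * Vi for c, Vi in zip(theta_A(mu), V_phi_factors))  # k' x r
    b = sum(c * bi for c, bi in zip(theta_b(mu), b_phi_factors))  # k'
    return np.linalg.norm(V @ a_r - b)
\end{verbatim}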
\begin{remark}
{If $\mathcal{P}_{\mathrm{test}}$ is provided a priori, then the random matrix $\mathbf{\Gamma}$ can be generated and multiplied by the affine factors of $\mathbf{V}^{\mathbf{\Theta}}_r(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ during the offline stage.}
\end{remark}
\begin{remark}
For algorithms where $\mathcal{P}_{\mathrm{test}}$ or $U_r$ are selected adaptively based on a criterion depending on the residual norm (e.g., the classical greedy algorithm outlined in Section\nobreakspace \ref {Greedy}), a new realization of $\mathbf{\Gamma}$ has to be generated at each iteration. If the same realization of $\mathbf{\Gamma}$ is used for several iterations of the adaptive algorithm, care must be taken when characterizing the probability of success. This probability can decrease exponentially with the number of iterations, which requires using a considerably larger $\mathbf{\Gamma}$. Such an option is justified only when the cost of multiplying the affine factors by $\mathbf{\Gamma}$ greatly dominates the cost of the second stage, i.e., evaluating $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Phi}}_{V'}$ for all $\mu \in \mathcal{P}_{\mathrm{test}}$.
\end{remark}
\section{Efficient reduced basis generation} \label{efficient_RB}
In this section we show how the sketching technique can be used to improve the generation of reduced approximation spaces with a greedy algorithm for RB, or with POD. Let $\mathbf{\Theta} \in \mathbb{K}^{k\times n}$ be a $U \to \ell_2$ subspace embedding.
\subsection{Greedy algorithm}
Recall that at each iteration of the greedy algorithm (see Section\nobreakspace\ref{Greedy}) the basis is enriched with a new sample (snapshot) $\mathbf{u}(\mu^{i+1})$, selected based on the error indicator $\widetilde{\Delta}(U_i; \mu)$. The standard choice is $\widetilde{\Delta}(U_i; \mu):= \Delta (\mathbf{u}_i(\mu);\mu)$, where $\mathbf{u}_i(\mu) \in U_i$ satisfies~(\ref {eq:galproj}). Such an error indicator, however, can lead to very expensive computations. The error indicator can be modified to $\widetilde{\Delta}(U_i; \mu):= \Delta^{\mathbf{\Theta}}(\mathbf{u}_i(\mu);\mu)$, where $\mathbf{u}_i(\mu) \in U_i$ is an approximation of $\mathbf{u}(\mu)$ which does not necessarily satisfy~(\ref {eq:galproj}). In the following, we restrict ourselves to the case where $\mathbf{u}_i(\mu)$ is the sketched Galerkin projection~(\ref {eq:SKgalproj}). If the aim is only to reduce the cost of evaluating residual norms, and not the cost of evaluating inner products, it can be more relevant to consider the classical Galerkin projection~(\ref {eq:galproj}) instead of~(\ref {eq:SKgalproj}).
A quasi-optimality guarantee for the greedy selection with $\widetilde{\Delta}(U_i; \mu):= \Delta^{\mathbf{\Theta}}(\mathbf{u}_i(\mu);\mu)$ can be derived from Propositions\nobreakspace \ref {thm:SKquasi-opt} and\nobreakspace \ref {thm:skcea} and\nobreakspace Corollary\nobreakspace \ref {thm:skerrorind1}. At iteration $i$ of the greedy algorithm, we need $\mathbf{\Theta}$ to be a $U \to \ell_2$ $\varepsilon$-subspace embedding for $Y_i(\mu)$ defined in~(\ref {eq:Y_r}) for all $\mu \in \mathcal{P}_{\mathrm{train}}$. One way to achieve this is to generate a new realization of an oblivious $U \to \ell_2$ subspace embedding $\mathbf{\Theta}$ at each iteration of the greedy algorithm. Such an approach, however, leads to extra computational and storage costs compared to the case where the same realization is employed for the entire procedure. In this work, we consider algorithms where $\mathbf{\Theta}$ is generated only once. When it is known that the set $\bigcup_{\mu \in \mathcal{P}_{\mathrm{train}}} Y_r(\mu)$ belongs to a subspace $Y^*_m$ of moderate dimension (e.g., when we operate on a small training set), then $\mathbf{\Theta}$ can be chosen such that it is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $Y^*_m$ with high probability. Otherwise, care must be taken when characterizing the probability of success because of the adaptive nature of the greedy algorithm. In such cases, all possible outcomes for $U_r$ should be considered by using a union bound for the probability of success.
\begin{proposition}\label{thm:sk_greedy_finite}
Let $U_r \subseteq U $ be a subspace obtained with $r$ iterations of the greedy algorithm with error indicator depending on $\mathbf{\Theta}$. If $\mathbf{\Theta}$ is a $(\varepsilon, \allowbreak m^{-1}\binom{m}{r}^{-1}\delta, \allowbreak 2 r+ 1)$ oblivious $U \to \ell_2$ subspace embedding, then it is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $Y_r(\mu)$ defined in~(\ref {eq:Y_r}), for all $\mu \in \mathcal{P}_{\mathrm{train}}$, with probability at least $1- \delta$.
\begin{proof}
See appendix.
\end{proof} \end{proposition}
\begin{remark}
Theoretical bounds for the number of rows needed to construct $( \varepsilon, \allowbreak m^{-1} \binom{m}{r}^{-1} \delta, \allowbreak 2 r+ 1)$ oblivious $U \to \ell_2$ subspace embeddings using Gaussian, Rademacher or P-SRHT distributions can be obtained from Propositions\nobreakspace \ref {thm:Rademacher}, \ref {thm:P-SRHT} and\nobreakspace \ref {thm:buildepsilon_embedding}. For Gaussian or Rademacher matrices they are proportional to $r$, while for P-SRHT they are proportional to $r^2$. In practice, however, embeddings built with P-SRHT, Gaussian or Rademacher distributions perform equally well.
\end{remark}
For very large training sets, evaluating $\| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|^{\mathbf{\Theta}}_{V'}$ can be much more expensive than the other steps. The complexity of this step can be reduced using the procedure explained in~Section\nobreakspace \ref {efficient_res_norm}. The efficient sketched greedy algorithm is summarized in Algorithm\nobreakspace \ref {alg:sk_greedy_online}.
\begin{algorithm} \caption{efficient sketched greedy algorithm} \label{alg:sk_greedy_online}
\begin{algorithmic}
\STATE{\textbf{Given:} $\mathcal{P}_{\mathrm{train}}$, $\mathbf{A}(\mu)$, $\mathbf{b}(\mu)$, $\mathbf{l}(\mu)$, $\mathbf{\Theta}$, $\tau$.}
\STATE{\textbf{Output}: $U_{r}$}
\STATE{1. Set $i:=0$, $U_{0}= \{ \mathbf{0} \}$, and pick $\mu^{1} \in \mathcal{P}_{\mathrm{train}}$.}
\WHILE{$\underset{\mu \in \mathcal{P}_{\mathrm{train}}}{\max} {\widetilde{\Delta}(U_i;\mu)}\geq \tau$}
\STATE{2. Set $i:=i+1$.}
\STATE{3. Evaluate $\mathbf{u}(\mu^{i})$ and set $U_{i}:= U_{i-1}+\mathrm{span}(\mathbf{u}(\mu^{i}))$.}
\STATE{4. Update affine factors of $\mathbf{A}_i(\mu)$, $\mathbf{b}_i(\mu)$, {$\mathbf{V}^{\mathbf{\Theta}}_i(\mu)$} and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$.}
\STATE{5. Generate $\mathbf{\Gamma}$ and evaluate affine factors of {$\mathbf{V}^{\mathbf{\Phi}}_i(\mu)$} and $\mathbf{b}^{\mathbf{\Phi}}(\mu)$.}
\STATE{6. Set $\widetilde{\Delta}(U_i;\mu):= \Delta^{\mathbf{\Phi}}(\mathbf{u}_i(\mu);\mu)$.}
\STATE{7. Use~(\ref {eq:eff_comp_res}) to find $\mu^{i+1}:=\underset{\mu \in \mathcal{P}_{\mathrm{train}}}{\mathrm{argmax}~}{\widetilde{\Delta}(U_i;\mu)}$.}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
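For illustration, a schematic Python transcription of Algorithm\nobreakspace \ref {alg:sk_greedy_online} is given below. All callables are placeholders for problem-specific routines and are not part of the method's specification.
\begin{verbatim}
import numpy as np

def sketched_greedy(P_train, solve_full, update_sketch, reduced_solve,
                    sketched_error, tol, max_iter=100):
    # solve_full(mu)        -> full snapshot u(mu) (one full solve per iteration)
    # update_sketch(u)      -> update sketched affine factors and regenerate Gamma
    # reduced_solve(mu)     -> reduced coordinates a_i(mu) of the sketched projection
    # sketched_error(mu, a) -> Phi-sketched error indicator from the compressed factors
    selected = [P_train[0]]
    for _ in range(max_iter):
        update_sketch(solve_full(selected[-1]))            # steps 3-5
        errors = [sketched_error(mu, reduced_solve(mu))    # steps 6-7: cheap sweep
                  for mu in P_train]
        if max(errors) < tol:
            break
        selected.append(P_train[int(np.argmax(errors))])
    return selected
\end{verbatim}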
From Propositions\nobreakspace \ref {thm:SKquasi-opt} and\nobreakspace \ref {thm:skcea}, Corollary\nobreakspace \ref {thm:skerrorind1} and (\ref {eq:res_concentration}), we can prove the quasi-optimality of the greedy selection in Algorithm\nobreakspace \ref {alg:sk_greedy_online} with high probability.
\subsection{Proper Orthogonal Decomposition} \label{sk_pod}
Now we introduce the sketched version of POD. We first note that random sketching is a popular technique for obtaining low-rank approximations of large matrices~\cite{woodruff2014sketching}. It can be easily combined with Proposition\nobreakspace \ref {thm:approx_pod} and\nobreakspace Algorithm\nobreakspace \ref {alg:approx_pod} for finding POD vectors. For large-scale problems, however, evaluating and storing the POD vectors can be too expensive or even infeasible, e.g., in a streaming or a distributed environment. Here we propose a POD procedure in which evaluation of the full vectors is not necessary. We give special attention to distributed computing. The computations involved in our version of POD can be distributed among separate machines with a communication cost independent of the dimension of the full order problem.
We observe that a complete reduced order model can be constructed from a sketch (see Section\nobreakspace \ref {l2embeddingsMOR}). Assume that we are given the sketch of a matrix $\mathbf{U}_m$ containing $m$ solution samples, associated with $\mathbf{\Theta}$, i.e.,
\begin{equation*}
\mathbf{U}^{\mathbf{\Theta}}_m:= \mathbf{\Theta}\mathbf{U}_m,~~ \mathbf{V}^{\mathbf{\Theta}}_m(\mu):= \mathbf{\Theta}\mathbf{R}_U^{-1}\mathbf{A}(\mu)\mathbf{U}_m,~~ \mathbf{l}_m(\mu)^\mathrm{H}:= \mathbf{l}(\mu)^{\mathrm{H}}\mathbf{U}_m,~~ \mathbf{b}^{\mathbf{\Theta}}(\mu):= \mathbf{\Theta}\mathbf{R}_U^{-1}\mathbf{b}(\mu).
\end{equation*}
Recall that sketching a set of vectors can be performed efficiently in essentially any modern computational environment, e.g., a distributed environment with expensive communication cost (see~Section\nobreakspace \ref {sketch}). Instead of computing a full matrix of reduced basis vectors, $\mathbf{U}_r \in \mathbb{K}^{n \times r}$, as in classical methods, we look for a small matrix $\mathbf{T}_r \in \mathbb{K}^{m\times r}$ such that $\mathbf{U}_r= \mathbf{U}_m\mathbf{T}_r$. Given $\mathbf{T}_r$, the sketch of $\mathbf{U}_r$ can be computed without operating with the whole $\mathbf{U}_m$, but only with its sketch:
\begin{equation*}
\mathbf{\Theta} \mathbf{U}_r= \mathbf{U}^{\mathbf{\Theta}}_m\mathbf{T}_r,~~ \mathbf{\Theta} \mathbf{R}_U^{-1}\mathbf{A}(\mu)\mathbf{U}_r= \mathbf{V}^{\mathbf{\Theta}}_m(\mu)\mathbf{T}_r, \textup{ and } \mathbf{l}(\mu)^{\mathrm{H}}\mathbf{U}_r= \mathbf{l}_m(\mu)^{\mathrm{H}}\mathbf{T}_r.
\end{equation*}
Below we propose an efficient way of obtaining $\mathbf{T}_r$ such that the quality of $U_r:= \mathrm{span}(\mathbf{U}_r)$ is close to optimal.
For each $r\leq \mathrm{rank}(\mathbf{U}^\mathbf{\Theta}_m)$, let $U_r$ be an $r$-dimensional subspace obtained with the method of snapshots associated with norm $\| \cdot \|^{\mathbf{\Theta}}_U $, presented below.
\begin{definition}[Sketched method of snapshots]\label{thm:sk_pod}
Consider the following eigenvalue problem
\begin{equation} \label{eq:sk_eig_gram}
\mathbf{G} \mathbf{t} = \lambda \mathbf{t}
\end{equation}
where $\mathbf{G}:= (\mathbf{U}^\mathbf{\Theta}_m)^\mathrm{H}\mathbf{U}^\mathbf{\Theta}_m$. Let $l= \mathrm{rank}(\mathbf{U}^\mathbf{\Theta}_m)\geq r$ and let $\{ (\lambda_i, \mathbf{t}_i) \}_{i=1}^{l}$ be the solutions to~(\ref {eq:sk_eig_gram}) ordered such that $ \lambda_1 \ge \hdots \ge \lambda_l$. Define
\begin{equation} \label{eq:sk_podbasis}
U_r:=\mathrm{range} (\mathbf{U}_m\mathbf{T}_r),
\end{equation}
where $\mathbf{T}_r:= [\mathbf{t}_1, ..., \mathbf{t}_r]$.
\end{definition}
For given $V \subseteq U_m$, let $\mathbf{P}^\mathbf{\Theta}_{V}:U_m \rightarrow V$ denote an orthogonal projection on $V$ with respect to $\| \cdot \|^\mathbf{\Theta}_{U}$, i.e.,
\begin{equation} \label{eq:P*}
\forall \mathbf{x} \in U_m,~ \mathbf{P}^\mathbf{\Theta}_{V}\mathbf{x}= \arg\min_{\mathbf{w} \in V} \| \mathbf{x}-\mathbf{w} \|^\mathbf{\Theta}_{U},
\end{equation}
and define the following error indicator:
\begin{equation}\label{eq:sk_poderror}
\Delta^\mathrm{POD}(V) := \frac{1}{m} \sum^{m}_{i=1} \left (\| \mathbf{u} (\mu^i)- \mathbf{P}^\mathbf{\Theta}_{V}\mathbf{u}(\mu^i)\|^{\mathbf{\Theta}}_{U} \right )^2.
\end{equation}
\begin{proposition} \label{thm:sk_poderror}
Let $\{ \lambda_i \}_{i=1}^{l}$ be the set of eigenvalues from Definition\nobreakspace\ref{thm:sk_pod}. Then
\begin{equation} \label{eq:lambda}
\Delta^\mathrm{POD}(U_r)= \frac{1}{m} \sum^{l}_{i=r+1} \lambda_i.
\end{equation}
Moreover, for all $V_r \subseteq U_m$ with $\mathrm{dim}(V_r)\leq r$,
\begin{equation}
\Delta^\mathrm{POD}(U_r)\leq \Delta^\mathrm{POD}(V_r).
\end{equation}
\begin{proof}
See appendix.
\end{proof} \end{proposition}
Observe that the matrix $\mathbf{T}_r$ (characterizing $U_r$) can be much cheaper to obtain than the basis vectors of $U^*_r = POD_r(\mathbf{U}_m, \| \cdot \|_U )$: we only need to operate with the sketched matrix $\mathbf{U}^\mathbf{\Theta}_m$, not with the full snapshot matrix $\mathbf{U}_m$. Nevertheless, the quality of $U_r$ can be guaranteed to be close to the quality of $U^*_r$.
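A minimal NumPy realization of the sketched method of snapshots (Definition\nobreakspace\ref{thm:sk_pod}) is given below; variable names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def sketched_pod(U_theta_m, r):
    # Sketched method of snapshots: obtain T_r from the small Gram matrix
    # of the sketch U_theta_m = Theta @ U_m; the reduced basis
    # U_r = U_m @ T_r never has to be formed explicitly.
    m = U_theta_m.shape[1]
    G = U_theta_m.conj().T @ U_theta_m           # m x m sketched Gram matrix
    lam, T = np.linalg.eigh(G)                   # eigenpairs, ascending order
    lam = np.maximum(lam, 0.0)                   # guard against round-off
    idx = np.argsort(lam)[::-1]                  # reorder: lambda_1 >= lambda_2 >= ...
    lam, T = lam[idx], T[:, idx]
    T_r = T[:, :r]
    delta_pod = lam[r:].sum() / m                # error indicator Delta^POD(U_r)
    return T_r, delta_pod
\end{verbatim}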
\begin{theorem} \label{thm:sk_podopt}
Let $Y \subseteq U_m$ be a subspace of $U_m$ with $\mathrm{dim}(Y)\geq r$, and let
\begin{equation*}
\Delta_Y= \frac{1}{m} \sum^{m}_{i=1} \| \mathbf{u}(\mu^i)- \mathbf{P}_{Y}\mathbf{u}(\mu^i) \|^2_{U}.
\end{equation*}
If $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $Y$ and every subspace in $\left \{ \mathrm{span} (\mathbf{u}(\mu^i)- \mathbf{P}_{Y}\mathbf{u}(\mu^i)) \right \}_{i=1}^{m}$ and $\left \{ \mathrm{span}(\mathbf{u}(\mu^i)- {\mathbf{P}_{U^*_r}}\mathbf{u}(\mu^i)) \right \}_{i=1}^{m}$, then
\begin{equation} \label{eq:sk_podoptY}
\begin{split}
\frac{1}{m} \sum^{m}_{i=1} \| \mathbf{u}(\mu^i)- \mathbf{P}_{U_r}\mathbf{u}(\mu^i) \|&_{U}^2 \leq \frac{2}{1-\varepsilon}\Delta^\mathrm{POD}(U_r)+ (\frac{2(1+\varepsilon)}{1-\varepsilon}+1)\Delta_Y \\
&\leq \frac{2(1+\varepsilon)}{1-\varepsilon} \frac{1}{m} \sum^{m}_{i=1} \| \mathbf{u}(\mu^i)- \mathbf{P}_{U^*_r}\mathbf{u}(\mu^i) \|^2_{U}+ (\frac{2(1+\varepsilon)}{1-\varepsilon}+1)\Delta_Y.
\end{split}
\end{equation}
Moreover, if $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $U_{m}$, then
\begin{equation} \label{eq:sk_podoptUm}
\begin{split}
\frac{1}{m}\sum^{m}_{i=1} \| \mathbf{u}(\mu^i)- \mathbf{P}_{U_r}\mathbf{u}(\mu^i) \|_{U}^2 &\leq \frac{1}{1-\varepsilon}\Delta^\mathrm{POD}(U_r)\leq \frac{1+\varepsilon}{1-\varepsilon} \frac{1}{m}\sum^{m}_{i=1} \| \mathbf{u}(\mu^i) - \mathbf{P}_{U^*_r} \mathbf{u}(\mu^i) \|_{U}^2.
\end{split}
\end{equation}
\begin{proof}
See appendix.
\end{proof}
\end{theorem}
By a union bound argument and the definition of an oblivious embedding, the hypothesis in the first part of Theorem\nobreakspace \ref {thm:sk_podopt} can be satisfied with probability at least $1-3\delta$ if $\mathbf{\Theta}$ is a $( \varepsilon, \delta, \mathrm{dim}(Y))$ and $( \varepsilon, \delta/m, 1)$ oblivious $U \to \ell_2$ embedding. A subspace $Y$ can be taken as $U^*_{r}$, or as a larger subspace making $\Delta_Y$ as small as possible. It is important to note that even if $U_r$ is quasi-optimal, there is no guarantee that $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-subspace embedding for $U_r$ unless it is a $U \to \ell_2$ $\varepsilon$-subspace embedding for the whole $U_m$. Such a guarantee can be infeasible to achieve for large training sets. One possible solution is to maintain two sketches of $\mathbf{U}_m$: one for the method of snapshots, and one for Galerkin projections and residual norms. Another way (following considerations similar to~\cite{halko2011finding}) is to replace $\mathbf{U}_m$ by its low-rank approximation $\widetilde{\mathbf{U}}_m = \mathbf{P}^\mathbf{\Theta}_{W}\mathbf{U}_m$, with $W = \mathrm{span}(\mathbf{U}_m \mathbf{\Omega}^*)$, where $\mathbf{\Omega}^*$ is a small random matrix (e.g., a Gaussian matrix). The latter procedure can also be used for improving the efficiency of the algorithm when $m$ is large. Finally, if $\mathbf{\Theta}$ is a $U \to \ell_2$ $\varepsilon$-subspace embedding for every subspace in $\{ \mathrm{span} (\mathbf{u}(\mu^i)- \mathbf{P}^\mathbf{\Theta}_{U_r}\mathbf{u}(\mu^i)) \}_{i=1}^{m}$, then the error indicator $\Delta^\mathrm{POD}(U_r)$ is quasi-optimal. However, if only the first hypothesis of Theorem\nobreakspace \ref{thm:sk_podopt} is satisfied, then the quality of $\Delta^\mathrm{POD}(U_r)$ will depend on $\Delta_Y$. In such a case the error can be certified using $\Delta^{\mathrm{POD}}(\cdot)$ defined with a new realization of $\mathbf{\Theta}$.
\section{Numerical examples} \label{Numerical}
In this section the approach is validated numerically
and compared against classical methods. For simplicity, in all our experiments we chose the coefficient $\eta(\mu)=1$ in~Equations\nobreakspace \textup {(\ref {eq:errorind})} and\nobreakspace \textup {(\ref {eq:skerrorind})} for the error estimation. The experiments revealed that the theoretical bounds for $k$ in~Propositions\nobreakspace \ref {thm:Rademacher} and\nobreakspace \ref {thm:P-SRHT} and\nobreakspace Table\nobreakspace \ref {tab:numrows} are pessimistic.
In practice, much smaller random matrices still provide good estimation of the output. In addition, we did not detect any significant difference in performance between Rademacher matrices, Gaussian matrices and P-SRHT, even though the theoretical bounds for P-SRHT are worse.
Finally, the results obtained with Rademacher matrices are not presented, since they are similar to those for Gaussian matrices and P-SRHT.
\subsection{3D thermal block}
We use a 3D version of the thermal block benchmark from~\cite{haasdonk2017reduced}. This problem describes a heat transfer phenomenon through a domain $\Omega:= [0,1]^3$ made of an assembly of blocks, each composed of a different material. The boundary value problem for modeling the thermal block is as follows
\begin{equation} \label{eq:BVP1}
\left \{
\begin{array}{rll}
-\boldsymbol{\nabla} \cdot (\kappa \boldsymbol{\nabla} T)&= 0,~~ & \textup{in } \Omega \\
T &=0,~~ & \textup{on } \Gamma_{D} \\
\boldsymbol{n} \cdot (\kappa \boldsymbol{\nabla} T)&=0,~~ & \textup{on } \Gamma_{N,1}\\
\boldsymbol{n} \cdot (\kappa \boldsymbol{\nabla} T)&=1,~~ & \textup{on } \Gamma_{N,2},
\end{array}
\right.
\end{equation}
where $T$ is the temperature field, $\boldsymbol{n}$ is the outward normal vector to the boundary, $\kappa$ is the thermal conductivity, and $\Gamma_{D}$, $\Gamma_{N,1}$, $\Gamma_{N,2}$ are parts of the boundary defined by $\Gamma_{D} := \{(x,y,z) \in \partial\Omega: y=1\}$, $\Gamma_{N,2} := \{(x,y,z) \in \partial\Omega: y=0\}$ and $\Gamma_{N,1} :=\partial\Omega \backslash (\Gamma_D \cup \Gamma_{N,2})$. $\Omega$ is partitioned into $2\times 2\times 2$ subblocks $\Omega_i$ of equal size. A different thermal conductivity $\kappa_i$ is assigned to each $\Omega_i$, i.e.,
$ \kappa(x)= \kappa_i$, $x \in \Omega_i.$
We are interested in estimating the mean temperature in $\Omega_1 := [0, \frac{1}{2}]^3$ for each $\mu:= (\kappa_1,..., \kappa_{8}) \in \mathcal{P} := [\frac{1}{10}, 10]^{8}$. The $\kappa_i$ are independent random variables with log-uniform distribution over $ [\frac{1}{10}, 10]$.
Problem~(\ref {eq:BVP1}) was discretized using the classical finite element method with approximately $n=120000$ degrees of freedom. A function $w$ in the finite element approximation space is identified with a vector $\mathbf{w} \in U$. The space $U$ is equipped with an inner product compatible with the
$H^1_0$ inner product, i.e., $\| \mathbf{w} \|_{U}:= \| \boldsymbol{\nabla}w \|_{L_2}$. The training set $\mathcal{P}_{\mathrm{train}}$ and the test set $\mathcal{P}_{\mathrm{test}}$ were taken as $10000$ and $1000$ independent samples, respectively. The factorization of $\mathbf{R}_U$ was precomputed only once and used for efficient multiplication of $\mathbf{R}^{-1}_U$ by multiple vectors. The sketching matrix $\mathbf{\Theta}$ was constructed according to Proposition\nobreakspace \ref {thm:buildepsilon_embedding}, i.e., $\mathbf{\Theta}:= \mathbf{\Omega}\mathbf{Q}$, where $\mathbf{\Omega} \in \mathbb{R}^{k\times s}$ is a classical oblivious $\ell_2 \to \ell_2$ subspace embedding and $\mathbf{Q} \in \mathbb{R}^{s\times n}$ is such that $\mathbf{Q}^\mathrm{T}\mathbf{Q}=\mathbf{R}_U$. Furthermore, $\mathbf{Q}$ was taken as the transposed Cholesky factor of $\mathbf{R}_U$. Different distributions and sizes of the matrix $\mathbf{\Omega}$ were considered. The same realizations of $\mathbf{\Omega}$ were used for all parameters and greedy iterations within each experiment. A seeded random number generator was used for memory-efficient operations on random matrices. For P-SRHT, a fast Walsh-Hadamard transform was employed for multiplying the Walsh-Hadamard matrix by a vector in $s \log_2{(s)}$ time. In Algorithm\nobreakspace \ref {alg:sk_greedy_online}, we used $\mathbf{\Phi}:= \mathbf{\Gamma}\mathbf{\Theta}$, where $\mathbf{\Gamma} \in \mathbb{R}^{k'\times k}$ is a Gaussian matrix and $k'=100$. The same realizations of $\mathbf{\Gamma}$ were used for all the parameters, but $\mathbf{\Gamma}$ was regenerated at each greedy iteration.
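The construction $\mathbf{\Theta} = \mathbf{\Omega}\mathbf{Q}$ can be sketched in NumPy as follows; a dense Cholesky factorization and an explicitly stored Gaussian $\mathbf{\Omega}$ are used purely for illustration (in practice a sparse factorization of $\mathbf{R}_U$ and a seeded generator would be preferred).
\begin{verbatim}
import numpy as np

def build_theta_apply(R_U, k, seed=0):
    # Return a function applying Theta = Omega @ Q, where Q = L^T with
    # L the Cholesky factor of R_U (so Q^T Q = R_U) and Omega is a
    # rescaled k-by-n Gaussian matrix. Theta is then a U -> l2 embedding
    # whenever Omega is an l2 -> l2 embedding.
    n = R_U.shape[0]
    Q = np.linalg.cholesky(R_U).T
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((k, n)) / np.sqrt(k)
    return lambda X: Omega @ (Q @ X)   # apply Theta without forming it explicitly
\end{verbatim}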
\emph{Galerkin projection and primal-dual correction.}
Let us investigate how the quality of the solution depends on the distribution and size of $\mathbf{\Omega}$. We first generated sufficiently accurate reduced subspaces $U_r$ and $U^{\mathrm{du}}_r$ for the primal and the dual problems. The subspaces were spanned by snapshots evaluated at some points in $\mathcal{P}_{\mathrm{train}}$. The interpolation points were obtained by $r=100$ iterations of the {efficient sketched greedy algorithm} (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) with P-SRHT and $k =1000$ rows. Thereafter, $\mathbf{u}(\mu)$ was approximated by a projection $\mathbf{u}_r(\mu) \in U_r$. The classical Galerkin projection~(\ref {eq:galproj}) and its sketched version~(\ref {eq:SKgalproj}) with different distributions and sizes of $\mathbf{\Omega}$ were considered. The quality of a parameter-dependent projection is measured by $e_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{u}(\mu) - \mathbf{u}_r(\mu)\|_{U} / \max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{u}(\mu)\|_{U}$ and $\Delta_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|_{U'}/\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{b}(\mu)\|_{U'}$. For each random projection 20 samples of $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ were evaluated. Figure\nobreakspace \ref {fig:Ex1_1} describes how $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ depend on the number of rows $k$\footnote{{The $p$-quantile of a random variable $X$ is defined as $\inf \{ t : \mathbb{P}(X \leq t) \geq p \}$ and can be estimated by replacing the cumulative distribution function $\mathbb{P}(X \leq t)$ by its empirical estimation. Here we use 20 samples for this estimation.}}. We observe that the error associated with the sketched Galerkin projection is large when $k$ is close to $r$, but as $k$ increases, it asymptotically approaches the error of the classical Galerkin projection. The residual errors of the classical and the sketched projections become almost identical already for $k=500$ while the exact errors become close for $k=1000$. We also observe that for the aforementioned $k$ there is practically no deviation of $\Delta_\mathcal{P}$ and only a little deviation of $e_\mathcal{P}$.
{
Note that the theoretical bounds for $k$ to preserve the quasi-optimality constants of the classical Galerkin projection can be derived using Propositions\nobreakspace \ref{thm:Rademacher} and \ref{thm:P-SRHT} combined with Proposition\nobreakspace \ref{thm:SKquasi-opt} and a union bound for the probability of success. As was noted in Section\nobreakspace\ref{obleddings}, however, the theoretical bounds for $k$ in Propositions\nobreakspace \ref{thm:Rademacher} and \ref{thm:P-SRHT} are useful only for large problems with, say, $n/r>10^4$, and hence are not applicable here. Indeed, to ensure that
$$\mathbb{P}(\forall \mu \in \mathcal{P}_{\mathrm{test}}: \varepsilon a_r(\mu) <1) >1-10^{-6}, $$
using the theoretical bounds, we need impractical values $k \geq {39280}$ for Gaussian matrices and $k =n \approx 100000$ for P-SRHT.
In practice, the value for $k$ can be determined using the adaptive procedure proposed in~\cite{balabanov2018}.}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Gaussexact.pdf}
\caption{}
\label{fig:Ex1_1a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PSRHTexact.pdf}
\caption{}
\label{fig:Ex1_1b}
\end{subfigure}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Gaussres.pdf}
\caption{}
\label{fig:Ex1_1c}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PSRHTres.pdf}
\caption{}
\label{fig:Ex1_1d}
\end{subfigure}
\caption{Errors $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ of the classical Galerkin projection and quantiles of probabilities $p=1, 0.9, 0.5$ and $0.1$ over 20 samples of $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ of the randomized Galerkin projection versus the number of rows of $\mathbf{\Omega}$. (a) The exact error $e_\mathcal{P}$ with rescaled Gaussian distribution as $\mathbf{\Omega}$. (b) The exact error $e_\mathcal{P}$ with P-SRHT matrix as $\mathbf{\Omega}$. (c) The residual error $\Delta_\mathcal{P}$ with rescaled Gaussian distribution as $\mathbf{\Omega}$. (d) The residual error $\Delta_\mathcal{P}$ with P-SRHT matrix as $\mathbf{\Omega}$.}
\label{fig:Ex1_1}
\end{figure}
Thereafter, we let $\mathbf{u}_r(\mu) \in U_r$ and $\mathbf{u}_r^\mathrm{du}(\mu) \in U^{\mathrm{du}}_r$ be the sketched Galerkin projections, where $\mathbf{\Omega}$ was taken as P-SRHT with $k =500$ rows. For the fixed $\mathbf{u}_r(\mu)$ and $\mathbf{u}_r^\mathrm{du}(\mu)$, the classical primal-dual correction $s^{\mathrm{pd}}_r(\mu)$~(\ref {eq:correction}) and the sketched primal-dual correction $s^{\mathrm{spd}}_r(\mu)$~(\ref {eq:skcorrection}) were evaluated using different sizes and distributions of $\mathbf{\Omega}$. In addition, the approach introduced in~Section\nobreakspace \ref {sk_pd_correction} for improving the accuracy of the sketched correction was employed. For $\mathbf{w}^\mathrm{du}_r(\mu)$ we chose the orthogonal projection of $\mathbf{u}_r^\mathrm{du}(\mu)$ on $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$ with $i^\mathrm{du}=30$ (the subspace spanned by the first $i^\mathrm{du}=30$ basis vectors obtained during the generation of $U^{\mathrm{du}}_r$). With such $\mathbf{w}_r^\mathrm{du}(\mu)$ the improved correction $s^{\mathrm{spd+}}_r(\mu)$ defined by~(\ref {eq:skcorrection2}) was computed. It has to be mentioned that $s^{\mathrm{spd+}}_r(\mu)$ required additional computations; they are, however, about $10$ times cheaper in terms of complexity and $6.67$ times cheaper in terms of memory than the computations required for constructing the classical reduced systems and evaluating the classical output quantities. We define the error by $d_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} |s(\mu) - \widetilde{s}_r(\mu) | / \max_{\mu \in \mathcal{P}_{\mathrm{test}}} |s(\mu)|$, where $\widetilde{s}_r(\mu)=s^{\mathrm{pd}}_r(\mu), s^{\mathrm{spd}}_r(\mu)$ or $s^{\mathrm{spd+}}_r(\mu)$. For each random correction we computed $20$ samples of $d_\mathcal{P}$. The errors on the output quantities versus the number of rows of $\mathbf{\Theta}$ are presented in~Figure\nobreakspace \ref {fig:Ex1_2}. We see that the error of $s^{\mathrm{spd}}_r(\mu)$ is proportional to $k^{-1/2}$. This can be explained by the fact that for the considered sizes of random matrices, $\varepsilon$ is large compared to the residual error of the dual solution. As was noted in~Section\nobreakspace \ref {sk_pd_correction}, in such a case the error bound for $s^{\mathrm{spd}}_r(\mu)$ is equal to $\mathcal{O}(\varepsilon \| \mathbf{r}(\mathbf{u}_r(\mu);\mu) \|_{U'})$. By~Propositions\nobreakspace \ref {thm:Rademacher} and\nobreakspace \ref {thm:P-SRHT} it follows that $\varepsilon = \mathcal{O}(k^{-1/2})$, which explains the behavior of the error in~Figure\nobreakspace \ref {fig:Ex1_2}. Note that the accuracy of the classical correction is not expected to be attained by $s^{\mathrm{spd}}_r(\mu)$ even for $k$ close to the dimension of the discrete problem. For large enough problems, however, the quality of the classical output will always be attained with $k \ll n$. In general, the error of the sketched primal-dual correction does not depend (or depends weakly for P-SRHT) on the dimension of the full order problem, but only on the accuracies of the approximate solutions $\mathbf{u}_r(\mu)$ and $\mathbf{u}_r^\mathrm{du}(\mu)$.
On the other hand, we see that $s^{\mathrm{spd+}}_r(\mu)$ reaches the accuracy of the classical primal-dual correction for moderate $k$.
\begin{figure}[h!]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Gaussquantity.pdf}
\caption{}
\label{fig:Ex1_2a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PSRHTquantity.pdf}
\caption{}
\label{fig:Ex1_2b}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Gaussquantityprime.pdf}
\caption{}
\label{fig:Ex1_2c}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PSRHTquantityprime.pdf}
\caption{}
\label{fig:Ex1_2d}
\end{subfigure}
\caption{The error $d_\mathcal{P}$ of the classical primal-dual correction and quantiles of probabilities $p=1, 0.9, 0.5$ and $0.1$ over 20 samples of $d_\mathcal{P}$ of the randomized primal-dual corrections with fixed $\mathbf{u}_r(\mu)$ and $\mathbf{u}_r^\mathrm{du}(\mu)$ versus the number of rows of $\mathbf{\Omega}$. (a) The errors of $s^{\mathrm{pd}}_r(\mu)$ and $s^{\mathrm{spd}}_r(\mu)$ with Gaussian matrix as $\mathbf{\Omega}$. (b) The errors of $s^{\mathrm{pd}}_r(\mu)$ and $s^{\mathrm{spd}}_r(\mu)$ with P-SRHT distribution as $\mathbf{\Omega}$. (c) The errors of $s^{\mathrm{pd}}_r(\mu)$ and $s^{\mathrm{spd+}}_r(\mu)$ with Gaussian matrix as $\mathbf{\Omega}$ and $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$, $i^\mathrm{du}=30$. (d) The errors of $s^{\mathrm{pd}}_r(\mu)$ and $s^{\mathrm{spd+}}_r(\mu)$ with P-SRHT distribution as $\mathbf{\Omega}$ and $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$, $i^\mathrm{du}=30$.}
\label{fig:Ex1_2}
\end{figure}
In what follows we focus only on the primal problem, noting that similar results were also observed for the dual one.
\emph{Error estimation.}
We let $U_r$ and $\mathbf{u}_r(\mu)$ be the subspace and the approximate solution from the previous experiment. The classical error indicator $\Delta(\mathbf{u}_r(\mu); \mu)$ and the sketched error indicator $\Delta^\mathbf{\Theta} (\mathbf{u}_r(\mu); \mu)$ were evaluated for every $\mu \in \mathcal{P}_{\mathrm{test}}$. For $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ different distributions and sizes of $\mathbf{\Omega}$ were considered. The quality of $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ as an estimator for $\Delta(\mathbf{u}_r(\mu);\mu)$ can be characterized by $e^{\mathrm{ind}}_{\mathrm{\mathcal{P}}}:= \max_{\mu \in \mathcal{P}_{\mathrm{test}}} |\Delta(\mathbf{u}_r(\mu);\mu)-\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)| / \max_{\mu \in \mathcal{P}_{\mathrm{test}}} \Delta(\mathbf{u}_r(\mu);\mu)$. For each $\mathbf{\Omega}$, $20$ samples of $e^{\mathrm{ind}}_{\mathrm{\mathcal{P}}}$ were evaluated. Figure\nobreakspace \ref {fig:Ex1_3b} shows how $e^{\mathrm{ind}}_{\mathrm{\mathcal{P}}}$ depends on $k$. The convergence of the error is proportional to $k^{-1/2}$, as for the primal-dual correction. In practice, however, $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ does not have to be as accurate as the approximation of the quantity of interest. For many problems, estimating $\Delta(\mathbf{u}_r(\mu);\mu)$ with a relative error less than 1/2 is already good enough. Consequently, $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ employing $\mathbf{\Omega}$ with $k=100$ or even $k=10$ rows can be readily used as a reliable error estimator. Note that $\mathcal{P}_{\mathrm{test}}$ and $U_r$ were formed independently of $\mathbf{\Omega}$. Otherwise, a larger $\mathbf{\Omega}$ should be considered with an additional embedding $\mathbf{\Gamma}$, as explained in Section\nobreakspace \ref {efficient_res_norm}.
\begin{figure}[h!]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Gausserrorind.pdf}
\caption{}
\label{fig:Ex1_3a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PSRHTerrorind.pdf}
\caption{}
\label{fig:Ex1_3b}
\end{subfigure}
\caption{Quantiles of probabilities $p=1, 0.9, 0.5$ and $0.1$ over 20 samples of the error $e^{\mathrm{ind}}_\mathcal{P}$ of $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ as estimator of $\Delta(\mathbf{u}_r(\mu);\mu)$. (a) The error of $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ with Gaussian distribution. (b) The error of $\Delta^\mathbf{\Theta}(\mathbf{u}_r(\mu);\mu)$ with P-SRHT distribution.}
\label{fig:Ex1_3}
\end{figure}
To validate the claim that our approach (see~Section\nobreakspace \ref {efficient_res_norm}) for error estimation provides more numerical stability than the classical one, we performed the following experiment. For a fixed $\mu \in \mathcal{P}$ such that $\mathbf{u}(\mu) \in U_r$, we picked several vectors $\mathbf{u}^*_i \in U_r$ at different distances from $\mathbf{u}(\mu)$. For each such $\mathbf{u}^*_i$ we evaluated $\Delta(\mathbf{u}_i^*;\mu)$ and $\Delta^\mathbf{\Theta}(\mathbf{u}_i^*;\mu)$. The classical error indicator $\Delta(\mathbf{u}_i^*;\mu)$ was evaluated using the traditional procedure, i.e., expressing $\| \mathbf{r}(\mathbf{u}_i^*;\mu) \|^2_{U'}$ in the form~(\ref {eq:compute_res}), while $\Delta^\mathbf{\Theta}(\mathbf{u}_i^*;\mu)$ was evaluated with relation~(\ref {eq:skcompres}). The sketching matrix $\mathbf{\Omega}$ was generated from the P-SRHT or the rescaled Gaussian distribution with $k=100$ rows. Note that $\mu$ and $\mathbf{u}_i^*$ were chosen independently of $\mathbf{\Omega}$, so there is no need to use a larger $\mathbf{\Omega}$ with an additional embedding $\mathbf{\Gamma}$ (see~Section\nobreakspace \ref {efficient_res_norm}). Figure\nobreakspace \ref {fig:Ex1_4} clearly reveals the failure of the classical error indicator at $\Delta(\mathbf{u}_i^*;\mu)/\| \mathbf{b}(\mu) \|_{U'} \approx 10^{-7}$. On the contrary, the indicators computed with the random sketching technique remain reliable even for $\Delta(\mathbf{u}_i^*;\mu)/\| \mathbf{b}(\mu) \|_{U'}$ close to the machine precision.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{Ex1_stability.pdf}
\caption{Error indicator $\Delta(\mathbf{u}_i^*;\mu)$ (rescaled by $\| \mathbf{b}(\mu) \|_{U'}$) computed with the classical procedure and its estimator $\Delta^\mathbf{\Theta}(\mathbf{u}_i^*;\mu )$ computed with relation~(\ref {eq:skcompres}) employing P-SRHT or Gaussian distribution with $k=100$ rows versus the exact value of $\Delta(\mathbf{u}_i^*;\mu)$ (rescaled by $\| \mathbf{b}(\mu) \|_{U'}$).}
\label{fig:Ex1_4}
\end{figure}
\emph{{Efficient sketched greedy algorithm}.} Further, we validate the performance of the efficient sketched greedy algorithm (Algorithm\nobreakspace \ref {alg:sk_greedy_online}). For this we generated a subspace $U_r$ of dimension $r=100$ using the classical greedy algorithm (depicted in Section\nobreakspace \ref {Greedy}) and its randomized version (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) employing $\mathbf{\Omega}$ of different types and sizes. In~Algorithm\nobreakspace \ref {alg:sk_greedy_online}, $\mathbf{\Gamma}$ was generated from a Gaussian distribution with $k'=100$ rows. The error at the $i$-th iteration is identified with $\Delta_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{train}}} \| \mathbf{r}(\mathbf{u}_i(\mu); \mu) \|_{U'}/\max_{\mu \in \mathcal{P}_{\mathrm{train}}} \|\mathbf{b}(\mu)\|_{U'}$. The convergence is depicted in Figure\nobreakspace \ref {fig:Ex1_6}. For the efficient sketched greedy algorithm with $k=250$ and $k=500$, a slight difference in performance compared to the classical algorithm is detected. The difference is more evident for $k=250$ at higher iterations. The behaviors of~the classical algorithm and~Algorithm\nobreakspace \ref {alg:sk_greedy_online} with $k=1000$ are almost identical.
\begin{figure}[h!]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Greedy_Gauss.pdf}
\caption{}
\label{fig:Ex1_6a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_Greedy_PSRHT.pdf}
\caption{}
\label{fig:Ex1_6b}
\end{subfigure}
\caption{{Convergence} of the classical greedy algorithm (depicted in Section\nobreakspace \ref {Greedy}) and its efficient randomized version (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) using $\mathbf{\Omega}$ drawn from (a) Gaussian distribution or (b) P-SRHT distribution.}
\label{fig:Ex1_6}
\end{figure}
\emph{Efficient Proper Orthogonal Decomposition.}
We finish with the validation of the efficient randomized version of POD. For this experiment only $m=1000$ points from $\mathcal{P}_{\mathrm{train}}$ were considered as the training set. The POD bases were obtained with the classical method of snapshots, i.e.,~Algorithm\nobreakspace \ref {alg:approx_pod} where $\mathbf{B}_r$ was computed from an SVD of $\mathbf{Q} \mathbf{U}_m$, or with the randomized version of POD introduced in~Section\nobreakspace \ref {sk_pod}. The same $\mathbf{\Omega}$ was used for both the basis generation and the error estimation with $\Delta^{\mathrm{POD}}(U_r)$, defined in~(\ref {eq:sk_poderror}). From~Figure\nobreakspace \ref {fig:Ex1_5a} we observe that for large enough $k$ the quality of the POD basis formed with the new efficient algorithm is close to the quality of the basis obtained with the classical method. Construction of $r=100$ basis vectors using $\mathbf{\Omega}$ with only $k=500$ rows provides an almost optimal error. As expected, the error indicator $\Delta^{\mathrm{POD}}(U_r)$ is close to the exact error for large enough $k$, but it represents the error poorly for small $k$. Furthermore, $\Delta^{\mathrm{POD}}(U_r)$ is always smaller than the true error and increases monotonically with $k$. Figure\nobreakspace \ref {fig:Ex1_5b} depicts how the errors of the classical and randomized (with $k=500$) POD bases depend on the dimension of $U_r$. We see that the qualities of the basis and the error indicator obtained with the new version of POD remain close to the optimal ones up to dimension $r=150$. However, as $r$ becomes larger the quasi-optimality of the randomized POD degrades, so that for $r \geq 150$ the sketching size $k=500$ becomes insufficient.
\begin{figure}[h!]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PODk.pdf}
\caption{}
\label{fig:Ex1_5a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex1_PODr.pdf}
\caption{}
\label{fig:Ex1_5b}
\end{subfigure}
\caption{Error $e=\frac{1}{m} \sum^{m}_{i=1} \| \mathbf{u}(\mu^i)- \mathbf{P}_{U_r}\mathbf{u}(\mu^i) \|_{U}^2 / ( \frac{1}{m} \sum^{m}_{i=1} \| \mathbf{u}(\mu^i) \|_{U}^2 )$ and error indicator $e= \Delta^{\mathrm{POD}}(U_r)/ (\frac{1}{m}\sum^{m}_{i=1} \| \mathbf{u}(\mu^i) \|_{U}^2 )$ associated with $U_r$ computed with traditional POD and its efficient randomized version introduced in~Section\nobreakspace \ref {sk_pod}. (a) Errors and indicators versus the number of rows of $\mathbf{\Omega}$ for $r=100$. (b) Errors and indicators versus the dimension of $U_r$ for $k=500$.}
\label{fig:Ex1_5}
\end{figure}
\subsection{Multi-layered acoustic cloak}
In the previous numerical example we considered a problem with a strongly coercive, well-conditioned operator. As was discussed in~Section\nobreakspace \ref {SKgalproj}, however, random sketching with a fixed number of rows is expected to perform worse for approximating the Galerkin projection with a non-coercive, ill-conditioned $\mathbf{A}(\mu)$. Here we validate the methodology on such a problem. The benchmark consists of the scattering of a 2D wave by a perfect scatterer covered in a multi-layered cloak. For this experiment we solve the following Helmholtz equation with first-order absorbing boundary conditions
\begin{equation} \label{eq:BVP2}
\left \{
\begin{array}{rll}
\Delta u + \kappa^2 u &= 0,~~ & \textup{in } \Omega \\
i \kappa u + \frac{\partial u}{\partial \boldsymbol{n}} &=0,~~ & \textup{on } \Gamma_{out} \\
i \kappa u + \frac{\partial u}{\partial \boldsymbol{n}} &=2i \kappa,~~ & \textup{on } \Gamma_{in}\\
\frac{\partial u}{\partial \boldsymbol{n}} &= 0,~~ & \textup{on } \Gamma_{s},
\end{array}
\right.
\end{equation}
where $u$ is the solution field (primal unknown), $\kappa$ is the wave number and the geometry of the problem is defined in~Figure\nobreakspace \ref {fig:Ex2_intial_problem_a}. The background has a fixed wave number $\kappa=\kappa_0:=50$. The cloak consists of 10 layers of equal thickness, enumerated in order of increasing distance from the scatterer. The $i$-th layer is composed of a material with wave number $\kappa=\kappa_i$. The quantity of interest is the average of the solution field on $\Gamma_{in}$. The aim is to estimate the quantity of interest for each parameter $\mu:=(\kappa_1, ..., \kappa_{10}) \in \mathcal{P} := [\kappa_0, \sqrt{2} \kappa_0]^{10}$. The $\kappa_i$ are considered as independent random variables with log-uniform distribution over $[\kappa_0, \sqrt{2} \kappa_0]$.
The solution for a randomly chosen $\mu \in \mathcal{P}$ is illustrated in~Figure\nobreakspace \ref {fig:Ex2_intial_problem_b}.
\begin{figure}[h!]
\centering
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=.96\textwidth]{Ex2_problem_setup.pdf}
\end{minipage}%
\begin{minipage}[c]{.4\textwidth}
\centering
\includegraphics[width=.84\textwidth]{Ex2_snapshot.pdf}
\end{minipage} \\
\begin{minipage}[b]{.4\textwidth}
\centering
\subcaption{Geometry}
\label{fig:Ex2_intial_problem_a}
\end{minipage}%
\begin{minipage}[b]{.4\textwidth}
\subcaption{Solution at random $\mu$}
\label{fig:Ex2_intial_problem_b}
\end{minipage}
\caption{(a) Geometry of acoustic cloak benchmark. (b) The real component of $u$ for randomly picked parameter $\mu=\allowbreak (66.86,\allowbreak 54.21,\allowbreak 61.56,\allowbreak 64.45,\allowbreak 66.15,\allowbreak 58.42,\allowbreak 54.90,\allowbreak 63.79,\allowbreak 58.44,\allowbreak 63.09)$.}
\label{fig:Ex2_intial_problem}
\end{figure}
The problem has a symmetry with respect to the vertical axis $x = 0.5$. Consequently, only half of the domain has to be considered for discretization. The discretization was performed {using quadratic triangular finite elements with} approximately {17} complex degrees of freedom per wavelength, i.e., around $200000$ complex degrees of freedom in total. A function $w$ in the approximation space is identified with a vector $\mathbf{w} \in U$. The solution space $U$ is equipped with an inner product compatible with the $H^1$ inner product, i.e.,
\begin{equation*}
\|\mathbf{w} \|_{U}^2:= \| \boldsymbol{\nabla}w \|^2_{L_2}+ \kappa_0^2 \| w \|^2_{L_2} .
\end{equation*}
Further, $20000$ and $1000$ independent samples were considered as the training set $\mathcal{P}_{\mathrm{train}}$ and the test set $\mathcal{P}_{\mathrm{test}}$, respectively. {The} sketching matrix $\mathbf{\Theta}$ was constructed as in the thermal block benchmark, i.e., $\mathbf{\Theta}:= \mathbf{\Omega}\mathbf{Q}$, where $\mathbf{\Omega} \in \mathbb{R}^{k\times s}$ is either a Gaussian matrix or P-SRHT and $\mathbf{Q} \in \mathbb{R}^{s\times n}$ is the transposed Cholesky factor of $\mathbf{R}_U$. In addition, we used $\mathbf{\Phi}:= \mathbf{\Gamma}\mathbf{\Theta}$, where $\mathbf{\Gamma} \in \mathbb{R}^{k'\times k}$ is a Gaussian matrix and $k'=200$.
Below we present validation of the Galerkin projection and the greedy algorithm only. The performance of our methodology for error estimation and POD does not depend on the operator and is similar to the performance observed in the previous numerical example.
\emph{Galerkin projection.}
A subspace $U_r$ was generated with $r=150$ iterations of the randomized greedy algorithm (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) with $\mathbf{\Omega}$ drawn from the P-SRHT distribution with $k =20000$ rows. This $U_r$ was then used for the validation of the Galerkin projection. We evaluated multiple approximations of $\mathbf{u}(\mu)$ using either the classical projection (\ref {eq:galproj}) or its randomized version (\ref {eq:SKgalproj}). Different $\mathbf{\Omega}$ were considered for (\ref {eq:SKgalproj}). As before, the approximation and residual errors are respectively defined by
$e_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{u}(\mu) - \mathbf{u}_r(\mu)\|_{U} / \max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{u}(\mu)\|_{U}$ and $\Delta_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \| \mathbf{r}(\mathbf{u}_r(\mu); \mu) \|_{U'} /\max_{\mu \in \mathcal{P}_{\mathrm{test}}} \|\mathbf{b}(\mu)\|_{U'}$. For each type and size of $\mathbf{\Omega}$, 20 samples of $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ were evaluated. The errors are presented in Figure\nobreakspace \ref {fig:Ex2_1}. This experiment reveals that the performance of random sketching is indeed worse than in the thermal block benchmark (see Figure\nobreakspace \ref {fig:Ex1_1}). For $k=1000$ the error of the randomized version of the Galerkin projection is much larger than the error of the classical projection, whereas for the same value of $k$ in the thermal block benchmark practically no difference between the qualities of the classical projection and its sketched version was observed. This can be explained by the fact that the quality of the randomized Galerkin projection depends on the coefficient $a_r(\mu)$ defined in Proposition\nobreakspace \ref {thm:skcea}, which in turn depends on the operator. In both numerical examples the coefficient $a_r(\mu)$ was measured over $\mathcal{P}_{\mathrm{test}}$. We observed that here $\max_{\mu \in \mathcal{P}_{\mathrm{test}}} a_r(\mu) = 28.3$, while in the thermal block benchmark $\max_{\mu \in \mathcal{P}_{\mathrm{test}}} a_r(\mu) = 2.65$. In addition, here we work over the complex field instead of the real field and consider slightly larger reduced subspaces, which could also have an impact on the accuracy of random sketching. The degradation in performance, however, is not severe, and already for $k=15000$ the sketched version of the Galerkin projection has an error close to that of the classical one. Such a size of~$\mathbf{\Omega}$ is still very small compared to the dimension of the discrete problem and provides a drastic reduction of the computational cost. Let us also note that one could obtain a good approximation of $\mathbf{u}(\mu)$ from the sketch with $k \ll 15000$ by considering another type of projection (a randomized minimal residual projection) proposed in~\cite{balabanov2018}.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Gaussexact.pdf}
\caption{}
\label{fig:Ex2_1a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_PSRHTexact.pdf}
\caption{}
\label{fig:Ex2_1b}
\end{subfigure}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Gaussres.pdf}
\caption{}
\label{fig:Ex2_1c}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}[b]{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_PSRHTres.pdf}
\caption{}
\label{fig:Ex2_1d}
\end{subfigure}
\caption{Error $e_\mathcal{P}$ and residual error $\Delta_\mathcal{P}$ of the classical Galerkin projection and quantiles of probabilities $p=1, 0.9, 0.5$ and $0.1$ over 20 samples of $e_\mathcal{P}$ and $\Delta_\mathcal{P}$ of the randomized Galerkin projection versus the number of rows of $\mathbf{\Omega}$. (a) Exact error $e_\mathcal{P}$, with rescaled Gaussian distribution as $\mathbf{\Omega}$. (b) Exact error $e_\mathcal{P}$, with P-SRHT matrix as $\mathbf{\Omega}$. (c) Residual error $\Delta_\mathcal{P}$, with rescaled Gaussian distribution as $\mathbf{\Omega}$. (d) Residual error $\Delta_\mathcal{P}$, with P-SRHT matrix as $\mathbf{\Omega}$.}
\label{fig:Ex2_1}
\end{figure}
Let us further note that we are in the so-called ``compliant case'' (see Remark\nobreakspace \ref {rmk:compliant}). Thus, for the classical Galerkin projection we have $s_r(\mu) =s^{\mathrm{pd}}_r(\mu)$ and for the sketched Galerkin projection, $s_r(\mu)=s^{\mathrm{spd}}_r(\mu)$. The output quantity $s_r(\mu)$ was computed with the classical Galerkin projection and with the randomized Galerkin projection employing different $\mathbf{\Omega}$. For each $\mathbf{\Omega}$ we also computed the improved sketched correction $s^{\mathrm{spd+}}_r(\mu)$ (see~Section\nobreakspace \ref {sk_pd_correction}) using $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$ with $i^\mathrm{du}=30$. It required inexpensive additional computations, which are about $5$ times cheaper (in terms of both complexity and memory) than the computations involved in the classical method. The error on the output quantity is measured by $d_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{test}}} |s(\mu) - \widetilde{s}_r(\mu) | / \max_{\mu \in \mathcal{P}_{\mathrm{test}}} |s(\mu)|$, where $\widetilde{s}_r(\mu)=s_r(\mu)$ or $s^{\mathrm{spd+}}_r(\mu)$.
For each random distribution type, $20$ samples of $d_\mathcal{P}$ were evaluated. Figure\nobreakspace \ref {fig:Ex2_2} shows how the error of the output quantity depends on $k$. For small $k$ the error is large because of the poor quality of the projection and the lack of precision when approximating the inner product for $s^{\mathrm{pd}}_r(\mu)$ in~(\ref {eq:correction}) by the one in~(\ref {eq:skcorrection}). Starting from $k=15000$, however, the quality of $s_r(\mu)$ obtained with the random sketching technique becomes close to the quality of the output computed with the classical Galerkin projection. For $k \geq 15000$ the randomized Galerkin projection has practically the same accuracy as the classical one, so for such values of $k$ the error depends mainly on the precision of the approximate inner product for $s^{\mathrm{pd}}_r(\mu)$. Unlike in the thermal block problem (see Figure\nobreakspace \ref {fig:Ex1_2}), in this experiment the quality of the classical method is attained by $s_r(\mu)=s^{\mathrm{spd}}_r(\mu)$ with $k \ll n$. Consequently, the benefit of employing the improved correction $s^{\mathrm{spd+}}_r(\mu)$ is not as evident here as in the previous numerical example. This experiment only confirms that the error associated with the approximation of the inner product for $s^{\mathrm{pd}}_r(\mu)$ does not depend on the condition number or the dimension of the operator.
\begin{figure}[h!]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Gaussquantity.pdf}
\caption{}
\label{fig:Ex2_2a}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_PSRHTquantity.pdf}
\caption{}
\label{fig:Ex2_2b}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Gaussquantityprime.pdf}
\caption{}
\label{fig:Ex2_2c}
\end{subfigure} \hspace{.01\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_PSRHTquantityprime.pdf}
\caption{}
\label{fig:Ex2_2d}
\end{subfigure}
\caption{The error $d_\mathcal{P}$ of the classical output quantity and quantiles of probabilities $p=1, 0.9, 0.5$ and $0.1$ over 20 samples of $d_\mathcal{P}$ of the output quantities computed with random sketching versus the number of rows of $\mathbf{\Omega}$. (a) The errors of the classical $s_r(\mu)$ and the randomized $s_r(\mu)$ with Gaussian matrix as $\mathbf{\Omega}$. (b) The errors of the classical $s_r(\mu)$ and the randomized $s_r(\mu)$ with P-SRHT distribution as $\mathbf{\Omega}$. (c) The errors of the classical $s_r(\mu)$ and $s^{\mathrm{spd+}}_r(\mu)$ with Gaussian matrix as $\mathbf{\Omega}$ and $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$, $i^\mathrm{du}=30$. (d) The errors of the classical $s_r(\mu)$ and $s^{\mathrm{spd+}}_r(\mu)$ with P-SRHT distribution as $\mathbf{\Omega}$ and $W^\mathrm{du}_r:=U^{\mathrm{du}}_i$, $i^\mathrm{du}=30$.}
\label{fig:Ex2_2}
\end{figure}
\emph{Randomized greedy algorithm.} Finally, we performed $r=150$ iterations of the classical greedy algorithm (see Section\nobreakspace \ref {Greedy}) and of its randomized version (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) using different distributions and sizes for $\mathbf{\Omega}$, and a Gaussian random matrix with $k'=200$ rows for $\mathbf{\Gamma}$. As in the thermal block benchmark, the error at the $i$-th iteration is measured by $\Delta_\mathcal{P}:=\max_{\mu \in \mathcal{P}_{\mathrm{train}}} \| \mathbf{r}(\mathbf{u}_i(\mu); \mu) \|_{U'}/\max_{\mu \in \mathcal{P}_{\mathrm{train}}} \|\mathbf{b}(\mu)\|_{U'}$. For $k=1000$ we observe poor performance of~Algorithm\nobreakspace \ref {alg:sk_greedy_online} (see~Figure\nobreakspace \ref {fig:Ex2_3}), which can be explained by the fact that for such a size of $\mathbf{\Omega}$ the randomized Galerkin projection has low accuracy. For $k=20000$, however, the convergence of the classical greedy algorithm is fully preserved.
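The overall structure of such a greedy loop can be summarized by the schematic Python sketch below. It is a simplified illustration only: \verb!solve_full!, \verb!sketch_snapshot!, \verb!sketched_reduced_solve! and \verb!sketched_residual_norm! are assumed placeholders for the high-dimensional solver, the computation of a $\mathbf{\Theta}$-sketch of a snapshot, the sketched Galerkin projection and the sketched residual-based error indicator, and the implementation details of Algorithm\nobreakspace \ref {alg:sk_greedy_online} (affine expansions, the matrix $\mathbf{\Gamma}$, orthogonalization of the basis) are omitted.
\begin{verbatim}
# Schematic sketch of a greedy loop with a sketched error indicator
# (simplified illustration; not the actual algorithm of the paper).
def sketched_greedy(train_set, r, solve_full, sketch_snapshot,
                    sketched_reduced_solve, sketched_residual_norm):
    sketches = []                      # Theta-sketches of the basis vectors
    mu = train_set[0]                  # arbitrary starting parameter
    for _ in range(r):
        u = solve_full(mu)             # the only high-dimensional operation
        sketches.append(sketch_snapshot(u))
        # provisional online solver: pick the worst approximated parameter
        def indicator(nu):
            a_nu = sketched_reduced_solve(sketches, nu)
            return sketched_residual_norm(a_nu, nu)
        mu = max(train_set, key=indicator)
    return sketches
\end{verbatim}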
\begin{figure}[h]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Greedy_Gauss.pdf}
\caption{}
\label{fig:Ex2_3a}
\end{subfigure} \hspace{.03\textwidth}
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics[width=\textwidth]{Ex2_Greedy_PSRHT.pdf}
\caption{}
\label{fig:Ex2_3b}
\end{subfigure}
\caption{{Convergence} of the classical greedy algorithm (see Section\nobreakspace \ref {Greedy}) and its efficient randomized version (Algorithm\nobreakspace \ref {alg:sk_greedy_online}) using $\mathbf{\Omega}$ drawn from (a) Gaussian distribution or (b) P-SRHT distribution.}
\label{fig:Ex2_3}
\end{figure}
{\emph{Comparison of computational costs.} }
Even though the size of $\mathbf{\Omega}$ has to be taken larger than for the thermal block problem, our methodology still yields a considerable reduction of the computational cost compared to the classical approach. The implementation was carried out in Matlab\textsuperscript{\textregistered} {R2015a} with an external $\verb!C++!$ function for the fast Walsh-Hadamard transform (see e.g. \url{https://github.com/sheljohn/WalshHadamard}). Our codes were not designed for a specific problem but rather for generic multi-query MOR. The algorithms were executed on an Intel\textsuperscript{\textregistered} Core{\texttrademark} i7-7700HQ 2.8GHz CPU, with 16.0GB RAM.
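To fix ideas, a (non-partial) SRHT sketching matrix for a dimension $n$ that is a power of two can be assembled explicitly as in the small Python sketch below. This is our own toy illustration: it is only practical for small $n$, since in the actual computations the product of $\mathbf{\Omega}$ with a vector is applied through the fast Walsh-Hadamard transform in $\mathcal{O}(n \log n)$ operations without ever forming $\mathbf{\Omega}$, and the P-SRHT variant for general $n$ is not reproduced here.
\begin{verbatim}
# Toy construction of a k x n SRHT matrix (power-of-two n only).
import numpy as np
from scipy.linalg import hadamard

def srht_matrix(k, n, seed=0):
    assert n & (n - 1) == 0, "this toy version needs n to be a power of two"
    rng = np.random.default_rng(seed)
    D = np.diag(rng.choice([-1.0, 1.0], size=n))  # random sign flips
    H = hadamard(n) / np.sqrt(n)                  # orthonormal Walsh-Hadamard
    rows = rng.choice(n, size=k, replace=False)   # uniform row subsampling
    return np.sqrt(n / k) * H[rows] @ D           # rescaled k x n embedding
\end{verbatim}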
Let us start with validation of the computational cost reduction of the greedy algorithm. In~Table\nobreakspace \ref {tab:runtimes} we provide the runtimes of the classical greedy algorithm and of~Algorithm\nobreakspace \ref {alg:sk_greedy_online} employing $\mathbf{\Omega}$ drawn from the P-SRHT distribution with $k=20000$ rows. In~Table\nobreakspace \ref {tab:runtimes} the computations are divided into three basic categories: computing the snapshots (samples of the solution), precomputing the affine expansions for the online solver, and finding $\mu^{i+1} \in \mathcal{P}_\mathrm{train}$ which maximizes the error indicator with a provisional online solver. The first category includes evaluating $\mathbf{A}(\mu)$ and $\mathbf{b}(\mu)$ from their affine expansions and solving the systems with a built-in~Matlab\textsuperscript{\textregistered} linear solver. The second category consists of evaluating the random sketch in~Algorithm\nobreakspace \ref {alg:sk_greedy_online}; evaluating high-dimensional matrix-vector products and inner products for the Galerkin projection; evaluating high-dimensional matrix-vector products and inner products for the error estimation; and the remaining computations, such as precomputing a decomposition of $\mathbf{R}_U$, memory allocations, orthogonalization of the basis, etc. In turn, the third category of computations includes generating $\mathbf{\Gamma}$ and evaluating the affine factors of $\mathbf{V}^{\mathbf{\Phi}}_i(\mu)$ and $\mathbf{b}^{\mathbf{\Phi}}(\mu)$ from the affine factors of $\mathbf{V}^{\mathbf{\Theta}}_i(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$ at each iteration of~Algorithm\nobreakspace \ref {alg:sk_greedy_online}; evaluating the reduced systems from the precomputed affine expansions and solving them with a built-in~Matlab\textsuperscript{\textregistered} linear solver, for all $\mu \in \mathcal{P}_\mathrm{train}$, at each iteration; and evaluating the residual terms from the affine expansions and using them to evaluate the residual errors of the Galerkin projections, for all $\mu \in \mathcal{P}_\mathrm{train}$, at each iteration.
We observe that evaluating the snapshots occupied only $6\%$ of the overall runtime of the classical greedy algorithm. The other $94 \%$ could be subject to reduction with the random sketching technique. Because a large training set is used, the cost of solving the reduced order models on $\mathcal{P}_{\mathrm{train}}$ (including error estimation) has a considerable impact on the runtimes of both the classical and the randomized algorithms. This cost, however, is independent of the dimension of the full system of equations and will become negligible for larger problems. Nevertheless, for $r=150$ the randomized procedure for error estimation (see~Section\nobreakspace \ref {efficient_res_norm}) reduced the aforementioned cost by a factor of about $2$. As expected, in the classical method the most expensive computations are the numerous evaluations of high-dimensional matrix-vector and inner products. For large problems these computations can become the bottleneck of an algorithm. Their cost reduction by random sketching is drastic: for the classical algorithm the corresponding runtime grows quadratically with $r$, while for the randomized algorithm it grows only linearly, and the cost of this step for $r=150$ iterations of the greedy algorithm was divided by $15$. In addition, random sketching helped to reduce the memory consumption. The memory required by $r=150$ iterations of the greedy algorithm was reduced from $6.1$GB (including storage of the affine factors of $\mathbf{A}(\mu)\mathbf{U}_i$) to only $1$GB, of which $0.4$GB is needed for the initialization, i.e., defining the discrete problem, precomputing the decomposition of $\mathbf{R}_U$, etc.
\begin{table}[tbhp]
\caption{The CPU times in seconds taken by each type of computations in the classical greedy algorithm (see Section\nobreakspace \ref {Greedy})~and the randomized greedy algorithm (Algorithm\nobreakspace \ref {alg:sk_greedy_online}). }
\label{tab:runtimes}
\centering
\scalebox{0.88}{
\begin{tabular}{|c|l|r|r|r|r|r|r|r|} \hline
\multirow{2}{*}{Category} & \multirow{2}{*}{Computations} & \multicolumn{3}{c|}{Classical} & \multicolumn{3}{c|}{Randomized} \\ \cline{3-8}
& & $r=50$ & $r=100$ & $r=150$ & $r=50$ & $r=100$ & $r=150$ \\ [2pt] \hline
snapshots & & $143$ & $286$ & $430$ & $143$ & $287$ & $430$ \\ [2pt] \hline
\multirow{5}{*}{
\begin{tabular}{c c c} high-dimensional \\ matrix-vector \& \\ inner products \end{tabular}} & sketch & $-$ & $-$ & $-$ & $54$ & $113$ & $177$ \\ \cline{2-8}
& Galerkin & $59$ & $234$ & $525$ & $3$ & $14$ & $31$ \\ \cline{2-8}
& error & $405$ & $1560$ & $3444$ & $-$ & $-$ & $-$ \\ \cline{2-8}
& remaining & $27$ & $196$ & $236$ & $7$ & $28$ & $67$ \\ \cline{2-8}
& total & $491$ & $1899$ & $4205$ & $64$ & $154$ & $275$ \\ \hline
\multirow{4}{*}{\begin{tabular}{c c} provisional \\ online solver \end{tabular}} & sketch & $-$ & $-$ & $-$ & $56$ & $127$ & $216$ \\ \cline{2-8}
& Galerkin& $46$ & $268$ & $779$ & $50$ & $272$ & $783$ \\ \cline{2-8}
& error & $45$ & $522$ & $2022$ & $43$ & $146$ & $407$ \\ \cline{2-8}
& total & $91$ & $790$ & $2801$ & $149$ & $545$ & $1406$ \\ \hline
\end{tabular}
}
\end{table}
The improvement of the efficiency of the online stage can be validated by comparing the CPU times of the provisional online solver in the greedy algorithms. Table\nobreakspace \ref {tab:onlineruntimes} presents the CPU times taken by the provisional online solver at the $i$-th iteration of the classical and the sketched greedy algorithms, where the solver is used for efficient computation of the reduced models associated with an $i$-dimensional approximation space $U_i$ for all parameter values from the training set. These computations consist of evaluating the reduced systems from the affine expansions and solving them with the Matlab\textsuperscript{\textregistered} linear solver, and computing residual-based error estimates using~(\ref{eq:compute_res}) for the classical method or~(\ref{eq:eff_comp_res}) for the estimation with random sketching. Moreover, the sketched online stage also involves generating $\mathbf{\Gamma}$ and computing $\mathbf{V}^{\mathbf{\Phi}}_i(\mu)$ and $\mathbf{b}^{\mathbf{\Phi}}(\mu)$ from the affine factors of $\mathbf{V}^{\mathbf{\Theta}}_i(\mu)$ and $\mathbf{b}^{\mathbf{\Theta}}(\mu)$. Note that random sketching can reduce the online complexity (and improve the stability) associated with residual-based error estimation; the online cost of computing a solution, however, remains the same for both the classical and the sketched methods. Table\nobreakspace \ref {tab:onlineruntimes} reveals that for this benchmark the speedups in the online stage are achieved for $i \geq 50$. The computational cost of the error estimation using the classical approach grows quadratically with $i$, while using the randomized procedure it grows only linearly. For $i=150$ we report a reduction of the runtime for error estimation by a factor of $5$ and a reduction of the total runtime by a factor of $2.6$.
\begin{table}[tbhp]
\caption{The CPU times in seconds taken by each type of computations of the classical and the efficient sketched provisional online solvers during the $i$th iteration of the greedy algorithms. }
\label{tab:onlineruntimes}
\centering
\begin{tabular}{|c|r|r|r|r|r|r|} \hline
\multirow{2}{*}{Computations} & \multicolumn{3}{c|}{Classical} & \multicolumn{3}{c|}{Randomized} \\ \cline{2-7}
&$i=50$ & $i=100$ & $i=150$ & $i=50$ & $i=100$ & $i=150$ \\ [2pt] \hline
sketch & $-$ & $-$ & $-$ & $1.3$ & $1.5$ & $2$ \\ \cline{1-7}
Galerkin& $2$ & $7$ & $13.5$ & $2.3$ & $7$ & $14$ \\ \cline{1-7}
error & $2.8$ & $18$ & $45.2$ & $1.3$ & $3.1$ & $7$ \\ \cline{1-7}
total & $4.8$ & $24.9$ & $58.7$ & $4.8$ & $11.6$ & $22.8$ \\ \hline
\end{tabular}
\end{table}
The benefit of using random sketching methods for POD is validated in the context of distributed or limited-memory environments, where the snapshots are computed on distributed workstations or where the storage of the snapshots requires too much RAM. For these scenarios the efficiency is characterized by the amount of communication or storage needed for constructing a reduced model. Let us recall that the classical POD requires maintaining and operating with the full basis matrix $\mathbf{U}_m$, while the sketched POD requires the precomputation of a $\mathbf{\Theta}$-sketch of $\mathbf{U}_m$ and then constructs a reduced model from the sketch. In particular, for distributed computing a random sketch of each snapshot should be computed on a separate machine and then efficiently transferred to the master workstation for post-processing. For this experiment, Gaussian matrices of different sizes were tested for $\mathbf{\Omega}$. A seeded random number generator was used for maintaining $\mathbf{\Omega}$ with negligible computational cost. In~Table\nobreakspace \ref{tab:storage} we provide the amount of storage needed to maintain the sketch of a single snapshot, which also reflects the communication required for its transfer to the master workstation in the distributed computational environment. We observe that random sketching methods yielded cost reductions for $k \leq 17000$. It follows that for $k=10000$ a $\mathbf{\Theta}$-sketch of a snapshot consumes $1.7$ times less memory than the full snapshot. At the same time, for $m=\# \mathcal{P}_\mathrm{train} \leq 10000$ and $r \leq 150$, the sketched method of snapshots (see Definition~\ref{thm:sk_pod}) using $\mathbf{\Omega}$ of size $k=10000$ provides an almost optimal approximation of the training set of snapshots, with an error at most $1.1$ times higher than the error associated with the classical POD approximation. A Gaussian matrix of size $k=10000$, for $r \leq 150$, also yields with high probability a very accurate estimation (up to a factor of $1.1$) of the residual error and a sufficiently accurate estimation of the Galerkin projection (increasing the residual error by at most a factor of $1.66$). For coercive and well-conditioned problems such as the thermal block benchmark, it can be sufficient to use much smaller sketching matrices than in the present benchmark, say with $k=2500$ rows. Moreover, this value of $k$ should also be pertinent for ill-conditioned problems, including the considered acoustic cloak benchmark, when minimal residual methods are used instead of the Galerkin methods~\cite{balabanov2018}. From~Table\nobreakspace \ref{tab:storage} it follows that a random sketch of dimension $k=2500$ is $6.8$ times cheaper to maintain than a full snapshot vector. It has to be mentioned that when the sketch is computed from the affine expansion of $\mathbf{A}(\mu)$ with $m_A$ terms (here $m_A=11$), its maintenance/transfer cost is proportional to $k m_A$ and is independent of the dimension of the initial system of equations. Consequently, a better cost reduction is expected for problems with larger $n/m_A$.
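The seeded-generator trick mentioned above can be illustrated by the following Python sketch (an assumption-laden illustration, not the actual implementation): every worker regenerates the same rescaled Gaussian $\mathbf{\Omega}$ block by block from a shared seed, so that only the $k$-dimensional sketch of each snapshot, and never $\mathbf{\Omega}$ itself or the $n$-dimensional snapshot, has to be stored or communicated.
\begin{verbatim}
# Sketch a snapshot with a seeded, streamed rescaled Gaussian matrix
# (illustrative only; the seed must be shared by all workers).
import numpy as np

def gaussian_sketch(snapshot, k, seed=12345, block=1024):
    n = snapshot.shape[0]
    rng = np.random.default_rng(seed)      # identical stream on every worker
    out = np.zeros(k, dtype=snapshot.dtype)
    for start in range(0, n, block):       # regenerate Omega block by block
        stop = min(start + block, n)
        omega = rng.standard_normal((k, stop - start)) / np.sqrt(k)
        out += omega @ snapshot[start:stop]
    return out                             # k numbers instead of n
\end{verbatim}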
\begin{table}[tbhp]
\caption{The amount of data in megabytes required to maintain/transfer a single snapshot or its $\mathbf{\Theta}$-sketch for post-processing.}
\label{tab:storage}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
full snapshot& $k=2500$ & $k=5000$ & $k=10000$ & $k=15000$ & $k=17000$ & $k=20000$ \\ [2pt] \hline
$1.64$ & $0.24$ & $0.48$ & $0.96$ & $1.44$ & $1.63$& $1.92$ \\ [2pt] \hline
\end{tabular}
\end{table}
\section{Conclusions and future work} \label{Conclusions}
In this paper we proposed a methodology for reducing the cost of classical projection-based MOR methods such as the RB method and POD. The computational cost of constructing a reduced order model is essentially reduced to evaluating the samples (snapshots) of the solution on the training set, which in turn can be performed efficiently with state-of-the-art routines on a powerful server or on distributed machines. Our approach can be beneficial in any computational environment. It improves the efficiency of classical MOR methods in terms of complexity (number of flops), memory consumption, scalability, communication cost between distributed machines, etc. Unlike classical methods, our method does not require maintaining and operating with high-dimensional vectors. Instead, the reduced order model is constructed from a random sketch (a set of random projections) with a negligible computational cost. A new framework was introduced in order to adapt the random sketching technique to the context of MOR. We interpret random sketching as a random estimation of inner products between high-dimensional vectors. The projections are obtained with random matrices (called oblivious subspace embeddings), which are efficient to store and to multiply by. We introduced oblivious subspace embeddings for a general inner product defined by a self-adjoint positive definite matrix. Thereafter, we introduced randomized versions of the Galerkin projection, the residual-based error estimation, and the primal-dual correction, and provided conditions for preserving the quality of the output of the classical method. In addition, we discussed computational aspects of an efficient evaluation of a random sketch in different computational environments, and introduced a new procedure for estimating the residual norm. This procedure is not only efficient but also less sensitive to round-off errors than the classical approach. Finally, we proposed randomized versions of POD and of the greedy algorithm for RB. Again, in both algorithms, standard operations are performed only on the sketch and not on high-dimensional vectors.
The methodology has been validated in a series of numerical experiments. We observed that random sketching can indeed provide a drastic reduction of the computational cost. The experiments revealed that the theoretical bounds for the sizes of the random matrices are pessimistic; in practice, it can be pertinent to use much smaller matrices. In such a case it is important to provide a posteriori certification of the solution. In addition, it can be helpful to have an indicator of the accuracy of random sketching, which can be used for an adaptive selection of the sizes of the random matrices. These issues are addressed in~\cite{balabanov2018}. It was also observed that the performance of random sketching for estimating the Galerkin projection depends on the operator's properties (more precisely, on the constant $a_r(\mu)$ defined in Proposition\nobreakspace \ref {thm:skcea}). Consequently, the accuracy of the output can degrade considerably for problems with ill-conditioned operators. A remedy is to replace the Galerkin projection by another type of projection for the approximation of $\mathbf{u}(\mu)$ (and $\mathbf{u}^\mathrm{du}(\mu)$): the randomized minimal residual projection proposed in~\cite{balabanov2018} preserves the quality of the classical minimal residual projection regardless of the operator's properties. Another remedy would be to improve the condition number of $\mathbf{A}(\mu)$ with an affine parameter-dependent preconditioner. We have also seen that preserving a high precision for the sketched primal-dual correction~(\ref {eq:skcorrection}) can require large sketching matrices. A way to overcome this issue was proposed: it consists in obtaining an efficient approximation $\mathbf{w}^\mathrm{du}_r(\mu)$ of the solution $\mathbf{u}_r^\mathrm{du}(\mu)$ (or $\mathbf{u}_r(\mu)$). Such a $\mathbf{w}^\mathrm{du}_r(\mu)$ can also be used for reducing the cost of extracting the quantity of interest from $\mathbf{u}_r(\mu)$, i.e., computing $\mathbf{l}_r(\mu)$, which in general can be expensive (but was assumed to have a negligible cost). In addition, this approach can be used for problems with nonlinear quantities of interest. An approximation $\mathbf{w}^\mathrm{du}_r(\mu)$ can be taken as a projection of $\mathbf{u}_r^\mathrm{du}(\mu)$ (or $\mathbf{u}_r(\mu)$) onto a subspace $W^\mathrm{du}_r$. In the experiments $W^\mathrm{du}_r$ was constructed from the first several basis vectors of the approximation space $U^\mathrm{du}_r$. A better subspace can be obtained by approximating the manifold $\{{\mathbf{u}_r^\mathrm{du}(\mu)} : \mu \in \mathcal{P}_\mathrm{train} \}$ with a greedy algorithm or POD; here, random sketching can again be employed to improve the efficiency. Strategies for obtaining both accurate and efficient $W^\mathrm{du}_r$ with random sketching are discussed in detail in~\cite{balabanov2018}.
|
{
"timestamp": "2019-11-01T01:14:48",
"yymm": "1803",
"arxiv_id": "1803.02602",
"language": "en",
"url": "https://arxiv.org/abs/1803.02602"
}
|
\section{Introduction} \label{sec:Introduction}
Topology optimization is a powerful tool for optimal design in multidisciplinary fields such as
optics, electronics, structural mechanics, bio-mechanics and nano-mechanics.
Mathematically speaking, this tool is based on the finite element method, by which the coupled variational problems
in computational mechanics can be formulated as certain mixed integer nonlinear programming (MINLP) problems \cite{gao-to17}.
Due to the integer constraint, traditional theory and methods in continuous optimization cannot be applied directly to topology optimization problems.
Therefore, most MINLP problems are considered to be NP-hard (non-deterministic polynomial-time hard) in global optimization and computer science
\cite{g-l-r-17}.
During the past forty years, many approximate methods have been developed for solving topology optimization problems; these include the
homogenization method \cite{Bendsoe1, Bendsoe2},
the density-based method \cite{Bendsoe0},
Solid Isotropic Material with Penalization (SIMP) \cite{ Rozvany, Rozvany-Zhou,Zhou-Rozvany},
level set approximation \cite{Osher-Sethian, Sethian},
Evolutionary Structural Optimization (ESO) \cite{Xie-Steven1, Xie-Steven2} and
bi-directional evolutionary structural optimization (BESO) \cite{Huang-Xie, Querin-Steven, Querin-Y-S-X}.
Currently, the popular commercial software products used in topology optimization are based on the SIMP and ESO/BESO methods \cite{ Huang, Liu-Tovar,Sigmund-2001,Zuo-Xie-2015}.
However, these approximate methods cannot mathematically guarantee global convergence.
They also usually suffer from intrinsic disadvantages, such as slow convergence,
gray-scale elements and checkerboard patterns \cite{Diaz1995,Sigmund1998,Sigmund2013}.
Canonical duality theory (CDT) is a methodological theory developed from Gao and Strang's original work in 1989 on finite deformation mechanics \cite{gao-strang89}. The key feature of this theory is that, by using a certain canonical strain measure, general nonconvex/nonsmooth potential variational problems can be equivalently reformulated as a pure (stress-based only) complementary energy variational principle \cite{gao-mrc99}.
The associated triality theory provides extremality criteria for both global and
local optimal solutions, which can be used to develop powerful algorithms for solving general nonconvex variational
problems \cite{gao-book00}. This pure complementary energy variational principle solved a well-known open problem in nonlinear elasticity and
is known in the literature as the Gao principle \cite{li-gupta}.
Based on this principle, a canonical dual finite element method was proposed
in 1996 for large deformation nonconvex/nonsmooth mechanics \cite{gao-jem96}.
Applications have been given to post-buckling problems of large deformed beams \cite{Ali-gao}, nonconvex variational problems \cite{gao-ogden-qjmam},
and phase transitions in solids \cite{gao-yu}.
It was discovered by Gao in 2007 that by simply using the canonical measure $\epsilon (x) = x(x-1)=0$,
the 0-1 integer constraint $x \in \{ 0, 1\}$ in general nonconvex minimization problems can be equivalently converted to a unified
concave maximization problem in continuous space, which can be solved deterministically to obtain a global optimal solution in polynomial time \cite{gao-jimo07}.
Therefore, this pure complementary energy principle plays a fundamental role not only in computational nonlinear mechanics, but also in discrete optimization
\cite{gao-ruan-jogo10}.
Most recently, Gao proved that topology optimization should be formulated as a bi-level mixed integer nonlinear programming problem (BL-MINLP) \cite{gao-to17,gao-to18}.
The upper-level optimization of this BL-MINLP is actually equivalent to the well-known Knapsack problem,
which can be solved analytically by the CDT \cite{gao-to18}.
The review articles \cite{gao-cace,gao-sherali09} and the newly published book \cite{g-l-r-17} provide comprehensive reviews and applications of the canonical duality theory in multidisciplinary fields of mathematical modeling, engineering mechanics, nonconvex analysis, global optimization, and computational science.
The main goal of this paper is to apply the canonical duality theory to solving
3-dimensional benchmark problems in topology optimization.
In the next section, we first review Gao's recent work on why topology optimization should be formulated as a bi-level mixed integer nonlinear programming problem.
A basic mathematical mistake in topology optimization modeling is explicitly addressed.
A canonical penalty-duality method for solving this Knapsack problem is presented in Section \ref{Knapsack},
which is actually the so-called $\beta$-perturbation method first proposed in global optimization \cite{gao-ruan-jogo10} and recently in topology optimization \cite{gao-to17}.
Section \ref{Pure} reveals for the first time the unified relation between this canonical penalty-duality method in integer programming and
Gao's pure complementary energy principle in nonlinear elasticity.
Section \ref{CDT-Algorithm} provides 3-D finite element analysis and the associated canonical penalty-duality (CPD) algorithm.
The volume evolutionary method and computational complexity of this CPD algorithm are discussed.
Applications to 3-D benchmark problems are provided in Section \ref {Numerical}.
The paper ends with concluding remarks and open problems. Mathematical mistakes in the popular methods are explicitly addressed. Also, general modeling and conceptual mistakes in engineering optimization are discussed based on reviewers' comments.
\section{Mathematical Problems for 3-D Topology Optimization}
\label{Mathematical-Problem}
The minimum total potential energy principle provides a theoretical foundation for all mathematical problems in computational solid mechanics.
For general 3-D nonlinear elasticity, the total potential energy has the following standard form:
\begin{equation}
\Pi({\bf u}, \rho)=\int_{\Omega}\bigg( W(\nabla {\bf u}) \rho - {\bf u} \cdot {{\bf b}} \rho \bigg) d\Omega - \int_{\Gamma_t} {\bf u} \cdot {\bf t} d\Gamma ,
\end{equation}
where ${\bf u}: \Omega \rightarrow {\bf R}^3$ is a displacement vector field, ${{\bf b}}$ is a given body force vector, ${\bf t}$ is a given surface traction on
the boundary $\Gamma_t \subset \partial \Omega$, and the dot product is ${\bf u} \cdot {\bf t} = {\bf u}^T {\bf t}$.
In this paper, the stored energy density $W({\bf F})$
is an {\em objective function} (see Remark \ref{remark4}) of the deformation gradient ${\bf F} = \nabla {\bf u}$.
In topology optimization, the mass density $\rho: \Omega \rightarrow \{ 0, 1\}$ is the design variable,
which takes $\rho({\bf x})=1$ at a solid material point ${\bf x} \in \Omega$,
while $\rho({\bf x})=0$ at a void point ${\bf x} \in \Omega$. Additionally, it must satisfy the so-called knapsack condition:
\begin{equation}
\int_{\Omega} \rho({\bf x}) d\Omega \leq V_c ,
\end{equation}
where $V_c >0$ is a desired volume bound.
By using the finite element method, the whole design domain $\Omega$ is meshed into
$n$ disjoint finite elements $\{ \Omega_e\}$.
In each element, the unknown variables can be numerically written as ${\bf u}({\bf x}) = {\bf N} ({\bf x}) {\bf u}_e,\;\;
\rho({\bf x}) = \rho_e \in \{0, 1 \} \;\; \forall {\bf x} \in \Omega_e$, where ${\bf N}({\bf x})$ is a given interpolation matrix
and ${\bf u}_e$ is a nodal displacement vector.
Let ${\cal U}_a \subset {\bf R}^m$ be a kinetically admissible space, in which certain deformation conditions are given,
$v_e $ represents the volume of the $e$-th element $\Omega_e$, and ${\bf v} = \{ v_e\} \in {\bf R}^n$.
Then the admissible design space can be discretized as a discrete set
\begin{equation}
{\cal Z}_a = \bigg\{\boldsymbol{\rho}=\{\rho_e\} \in {\bf R}^n \big|\;\; \rho_e \in \{0,1\} \;\forall e=1, \dots, n, \;\; \boldsymbol{\rho}^T {\bf v} =\sum_{e=1}^{n} \rho_e v_e \leq V_c \bigg\}
\end{equation}
and on ${\cal U}_a \times {\cal Z}_a, $
the total potential energy functional can
be numerically reformulated as a real-valued function
\begin{equation}
\Pi_h ({\bf u}, \boldsymbol{\rho})= C(\boldsymbol{\rho},{\bf u}) -{\bf u}^T {\bf f} ,
\end{equation}
where $$ C(\boldsymbol{\rho},{\bf u})= \boldsymbol{\rho}^T {\bf c}({\bf u}),
$$
in which
\begin{equation}
{\bf c}({\bf u})=\bigg\{\int_{\Omega_e} [ W(\nabla {\bf N}({\bf x}) {\bf u}_e) - {{\bf b}}^T {\bf N} ({\bf x}) {\bf u}_e ] d\Omega \bigg\} \; \in {\bf R}^{n},
\end{equation}
and
$$
{\bf f} = \bigg\{ \int_{\Gamma_t^e} {\bf N}({\bf x})^T {\bf t}({\bf x}) d\Gamma \bigg\} \in {\bf R}^{m}.
$$
By the fact that topology optimization is a combination of variational analysis on a continuous space ${\cal U}_a$
and optimal design on a discrete space ${\cal Z}_a$,
it cannot simply be formulated in a traditional variational form.
Instead, a general problem of
topology optimization
should be proposed as a bi-level programming problem \cite{gao-to18}:
\begin{eqnarray}
({\cal{P}}_{bl}):\;\; & \;\;\;\;\;\; & \min
\{ \Phi (\brho, {\bf u}) | \;\; {\brho\in {\cal Z}_a}, \;\; {\bf u}\in {\cal U}_a \} \label{eq-ulo}, \\
& & \mbox{s.t.} \;\; {\bf u} \in \arg \min_{{\bf v} \in {\cal U}_a } \Pi_h({\bf v}, \brho), \label{eq-llo}
\end{eqnarray}
where $\Phi (\brho, {\bf u})$ represents the upper-level cost function and
$\brho \in {\cal Z}_a$ is the upper-level variable. Similarly,
$\Pi_h({\bf u}, \brho)$ represents the lower-level cost function and
${\bf u} \in {\cal U}_a$ is the lower-level variable.
The cost function $\Phi(\brho,{\bf u})$ depends on both the particular problem and the numerical method.
It can be $\Phi(\brho^p,{\bf u}) = {\bf f}^T {\bf u} -{\bf c}({\bf u})^T \brho^p $
for any given parameter $p \ge 1$, or simply $\Phi(\brho,{\bf u}) = - \brho^T {\bf c}({\bf u})$.
Since topology optimization is a design-analysis process, it is reasonable to use the alternative iteration method \cite{gao-to18} for
solving the challenging topology optimization problem $({\cal{P}}_{bl})$, i.e.:
(i) for a given design variable $\boldsymbol{\rho}_{k-1} \in {\cal Z}_a$, solve the lower-level optimization (\ref{eq-llo}) for
\begin{equation}
{\bf u}_k = \arg \min \{\Pi_h({\bf u}, \boldsymbol{\rho}_{k-1}) | \;\; {\bf u} \in {\cal U}_a \}
\end{equation}
(ii) for the given ${\bf c}_u = {\bf c}({\bf u}_k)$, solve the upper-level optimization problem (\ref{eq-ulo}) for
\begin{equation}
\boldsymbol{\rho}_k = \arg \min \left\{ \Phi(\boldsymbol{\rho}, {\bf u}_k) \; | \;\; \boldsymbol{\rho} \in {\cal Z}_a \right \}. \label{eq-u}
\end{equation}
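As a plain illustration of this design-analysis loop, the iteration can be sketched in Python as follows, with \verb!assemble_K!, \verb!solve! and \verb!element_energies! standing for assumed user-supplied finite element routines. It is an illustration only: here step (ii) is approximated by the simple greedy rule of keeping the elements that store the most strain energy per unit volume, whereas the canonical duality treatment of the exact upper-level problem is discussed in Sections \ref{Knapsack} and \ref{Pure}.
\begin{verbatim}
# Schematic alternating design-analysis iteration for the linear-elastic
# case (illustration only; not the CPD algorithm of this paper).
import numpy as np

def alternating_iteration(rho0, volumes, V_c, assemble_K, solve,
                          element_energies, max_iter=50):
    rho, u = rho0.copy(), None
    for _ in range(max_iter):
        u = solve(assemble_K(rho))          # step (i): solve K(rho) u = f
        c = element_energies(u)             # c_e(u), one value per element
        # step (ii), here approximated by a greedy knapsack rule:
        # fill the volume budget with the most valuable elements first
        new_rho, used = np.zeros_like(rho), 0.0
        for e in np.argsort(-c / volumes):
            if used + volumes[e] <= V_c:
                new_rho[e] = 1.0
                used += volumes[e]
        if np.array_equal(new_rho, rho):    # fixed point reached
            break
        rho = new_rho
    return rho, u
\end{verbatim}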
The upper-level problem \eqref{eq-u} is actually equivalent to the well-known Knapsack problem in its simplest (linear) form:
\begin{equation} \label{eq-knap}
({\cal{P}}_{u}): \;\; \min
\{ P_u(\boldsymbol{\rho})= - {\bf c}_u^T \boldsymbol{\rho} \;\; | \;\; \boldsymbol{\rho}^T {\bf v} \le V_{c} , \;\; \boldsymbol{\rho} \in \{0,1\}^n \} ,
\end{equation}
which makes perfect sense in topology optimization: among all elements $\{ \Omega_e\}$, one should keep those that store more strain energy.
Knapsack problems appear extensively in multidisciplinary fields of operations research, decision science, and engineering design.
Due to the integer constraint, even this simplest linear knapsack problem is listed as one of Karp's 21 NP-complete problems~\cite{karp}.
However, by using the canonical duality theory, this challenging problem can be solved easily to obtain a global optimal solution.
For linear elastic structures without the body force, the stored energy $C$ is a quadratic function of ${\bf u}$:
\begin{equation} \label{pi}
C(\boldsymbol{\rho}, {\bf u})= \frac{1}{2} {\bf u}^T {\bf K}(\boldsymbol{\rho}){\bf u},
\end{equation}
where $ {\bf K}(\boldsymbol{\rho}) = \left\{ \rho_e {\bf K}_e \right\} \in {\bf R}^{m\times m} $ is the overall stiffness matrix, obtained by assembling the sub-matrices $\rho_e {\bf K}_e$ of the elements $\Omega_e$.
For any given $\boldsymbol{\rho} \in {\cal Z}_a$, the displacement variable can be obtained analytically by solving the linear equilibrium equation
${\bf K}(\boldsymbol{\rho}) {\bf u} = {\bf f} $.
Thus, the topology optimization for linear elastic structures can be simply formulated as
\begin{equation} \label{uku}
({\cal{P}}_{le}): \;\;\;\; \min \bigg\{ {\bf f}^T{\bf u} - \frac{1}{2} {\bf u}^T {\bf K}( \boldsymbol{\rho} ) {\bf u} \; | \;\; {\bf K}(\boldsymbol{\rho}) {\bf u} = {\bf f} , \;\;{\bf u} \in {\cal U}_a, \;\; \boldsymbol{\rho} \in {\cal Z}_a \; \bigg\}.
\end{equation}
\begin{remark}[On Compliance Minimization Problem] \label{remark1}
In the literature, topology optimization for linear elastic structures is usually formulated as a compliance minimization problem (see \cite{Liu-Tovar} and
the problem $(P)$ in \cite{sto-ben}\footnote{The linear inequality constraint ${\bf A} \boldsymbol{\rho} \le {{\bf b}}$ in \cite{Liu-Tovar} is ignored in this paper.}):
\begin{equation}
(P): \;\;\;
\min_{\boldsymbol{\rho} \in {\bf R}^n, {\bf u} \in {\cal U}_a} \;\; \frac{1}{2} {\bf f}^T {\bf u}
\;\; s.t. \;\; {\bf K}(\boldsymbol{\rho}) {\bf u} = {\bf f}, \;\; \boldsymbol{\rho} \in \{0,1\}^n, \; \boldsymbol{\rho}^T {\bf v} \le V_{c}.
\end{equation}
Clearly, if the displacement is replaced by ${\bf u} = [{\bf K}(\boldsymbol{\rho}) ]^{-1} {\bf f}$,
this problem can be written as
\begin{equation}
(P_c): \;\;\;\; \min \bigg\{ P_c(\boldsymbol{\rho}) = \frac{1}{2} {\bf f}^T[{\bf K}( \boldsymbol{\rho} )]^{-1} {\bf f} \; | \;\; {\bf K}(\boldsymbol{\rho}) \mbox{ {\rm is invertible for all }} \boldsymbol{\rho} \in {\cal Z}_a \; \bigg\},
\end{equation}
which is equivalent to $({\cal{P}}_{le})$ under the regularity condition, i.e. $[{\bf K}(\boldsymbol{\rho})]^{-1}$ exists for all $\boldsymbol{\rho} \in {\cal Z}_a$.
However, if the given external force ${\bf f}$ in the cost function of $(P)$ is replaced by
${\bf f} = {\bf K} {\bf u} $, then
$(P)$ is commonly written as the so-called minimization of the strain energy (see \cite{Sigmund2013}):
\begin{equation}
(P_s): \;\; \min \left\{ \frac{1}{2} {\bf u}^T {\bf K}(\boldsymbol{\rho}) {\bf u} \; | \;\; {\bf K}(\boldsymbol{\rho}) {\bf u} = {\bf f}, \;\; \boldsymbol{\rho} \in {\cal Z}_a, \;\; {\bf u} \in {\cal U}_a \;\; \right\} . \label{eq-pc}
\end{equation}
One can see immediately that $(P_s)$ contradicts $({\cal{P}}_{le})$ in the sense that
the alternative iteration for solving $(P_c)$ leads to an anti-Knapsack problem:
\begin{equation}
\min {\bf c}_u^T \boldsymbol{\rho}, \;\; s.t. \;\; \boldsymbol{\rho} \in \{0,1\}^n, \; \boldsymbol{\rho}^T {\bf v} \le V_{c}. \label{eq-anti}
\end{equation}
By the fact that ${\bf c}_u = {\bf c}({\bf u}_k) \in {\bf R}^n_+ : = \{ {\bf c} \in {\bf R}^n | \;\; {\bf c} \ge {\bf 0} \} $ is a non-negative vector for any given ${\bf u}_k$,
this problem has only the trivial solution $\boldsymbol{\rho} = {\bf 0}$.
Therefore, the alternative iteration is not allowed for solving $(P_s)$.
In continuum physics, the linear scalar-valued function ${\bf u}^T {\bf f} \in {\bf R} $ is called
the external (or input) energy, which is not an objective function (see Remark \ref{remark4}).
Since ${\bf f}$ is a given force, it cannot be replaced by ${\bf K}(\brho) {\bf u}$.
Although
the cost function $P_c(\brho) $ can be called the mean compliance, it is not an objective function either. Thus, the problem $(P_c)$ works only for problems in which ${\bf u}(\brho)$ can be uniquely determined.
Its complementary form
\begin{equation}
(P^c ): \;\;\; \max
\left\{ \frac{1}{2} {\bf u}^T {\bf K}(\brho) {\bf u} \;\; | \;\; {\bf K}(\brho) {\bf u} = {\bf f} , \;\; \brho \in {\cal Z}_a \right\} \label{eq-msp}
\end{equation}
can be called a maximum stiffness problem, which is equivalent to $({\cal{P}}_{le})$ in the sense that both problems produce the same results by the alternative iteration method.
Therefore, it is a conceptual mistake to call the strain energy $\frac{1}{2} {\bf u}^T {\bf K}(\brho) {\bf u} $ the mean compliance and
$(P_s)$ the compliance minimization.\footnote{Due to this conceptual mistake, the general problem for topology optimization was originally formulated as a double-min optimization $({\cal{P}}_{bl})$
in \cite{g-to}. Although this model is equivalent to a knapsack problem for linear elastic structures under the condition ${\bf f} = {\bf K}(\brho) {\bf u}$, it contradicts the popular theory in topology optimization.}
The problem $(P_s)$ has been used as a mathematical model for many approximation methods, including the SIMP and BESO.
Additionally, some conceptual mistakes in the compliance minimization and mathematical modeling are also addressed in Remark \ref{remark4}.
\end{remark}
\section{Canonical Dual Solution to Knapsack Problem} \label{Knapsack}
The canonical duality theory for solving general integer programming problems was first proposed by Gao in 2007 \cite{gao-jimo07}. Applications to topology optimization have been given recently in \cite{gao-to17,gao-to18}.
In this paper, we present this theory in a different way, i.e. instead of the canonical measure in ${\bf R}^{n+1}$,
we introduce a canonical measure in ${\bf R}^n$:
\begin{equation}
\pmb{\varepsilon}= \Lam(\boldsymbol{\rho})= \boldsymbol{\rho} \circ \boldsymbol{\rho}-\boldsymbol{\rho} \in {\bf R}^n
\end{equation}
and the associated super-potential
\begin{equation}\label{eq-ind}
\Psi(\pmb{\varepsilon}) = \left\{ \begin{array}{ll}
0 & \mbox{ if } \pmb{\varepsilon} \in {\bf R}^n_- := \{ \pmb{\varepsilon} \in {\bf R}^n| \;\; \pmb{\varepsilon} \le {\bf 0} \}\\
+\infty & \mbox{ otherwise},
\end{array}
\right.
\end{equation}
such that the integer constraint in the Knapsack problem $({\cal{P}}_u)$ can be relaxed by the following canonical form
\begin{equation}\label{eq-cpp}
\min \left\{ \Pi_u(\boldsymbol{\rho})= \Psi(\Lam(\boldsymbol{\rho}) ) - {\bf c}_u^T \boldsymbol{\rho} \; \big| \; \; \boldsymbol{\rho}^T {\bf v} \le V_c , \;\; \boldsymbol{\rho} \in {\bf R}^n \right\}.
\end{equation}
This is a nonsmooth minimization problem in ${\bf R}^n$ with only one linear inequality constraint.
The classical Lagrangian for this inequality constrained problem is
\begin{equation}
L(\boldsymbol{\rho}, \tau) = \Psi(\Lam(\boldsymbol{\rho}) ) - {\bf c}_u^T \boldsymbol{\rho} + \tau (\boldsymbol{\rho}^T {\bf v} - V_c),
\end{equation}
and the canonical minimization problem (\ref{eq-cpp}) is equivalent to the following min-max problem:
\begin{equation}
\min_{\boldsymbol{\rho} \in {\bf R}^n} \max_{\tau \in {\bf R} } L(\boldsymbol{\rho}, \tau) \;\; s.t. \;\; \tau \ge 0 .
\end{equation}
According to the Karush-Kuhn-Tucker theory in inequality constrained optimization, the Lagrange multiplier $ \tau $ should satisfy the following KKT conditions:
\begin{equation}
\tau(\boldsymbol{\rho}^T {\bf v} - V_c)=0 , \; \; \tau \ge 0 , \; \; \boldsymbol{\rho}^T {\bf v} - V_c \le 0.
\end{equation}
The first equality $ \tau(\boldsymbol{\rho}^T {\bf v} - V_c)=0 $ is the so-called {\em complementarity condition}.
It is well known that solving complementarity problems is not an easy task, even for linear complementarity problems \cite{isac}.
Also, the Lagrange multiplier has to satisfy the constraint qualification $\tau \ge 0$.
Therefore, the classical Lagrange multiplier theory can essentially be used only for linear equality constrained optimization problems \cite{l-g-opl}.
This is one of the main reasons why the canonical duality theory was developed.
By the fact that the super-potential $\Psi(\pmb{\varepsilon}) $ is a convex, lower semi-continuous (l.s.c.) function,
its sub-differential is a positive cone ${\bf R}^n_+ $ \cite{gao-book00}:
\begin{equation}
\partial \Psi(\pmb{\varepsilon}) = \left\{ \begin{array}{ll}
\{ \mbox{\boldmath$\sigma$} \} \in {\bf R}^n_+ \;\; & \; \mbox{ if } \pmb{\varepsilon} \le { \bf 0} \in {\bf R}^n_- \\
\;\; \emptyset & \mbox{ otherwise}.
\end{array} \right.
\end{equation}
Using Fenchel transformation, the conjugate function of $\Psi(\pmb{\varepsilon})$ can be uniquely defined by (see \cite{gao-book00})
\begin{equation}\label{eq-psis}
\Psi^\sharp(\mbox{\boldmath$\sigma$}) = \sup_{\pmb{\varepsilon} \in {\bf R}^{n }} \{ \pmb{\varepsilon}^T \mbox{\boldmath$\sigma$} - \Psi(\pmb{\varepsilon}) \}
=\left\{ \begin{array}{ll}
0 & \mbox{ if } \mbox{\boldmath$\sigma$}\in {\bf R}^n_+ , \\
+\infty & \mbox{ otherwise},
\end{array} \right.
\end{equation}
which can be viewed as a {\em super complementary energy} \cite{gao-cs88}.
By the theory of convex analysis, we have the following {\em canonical duality relations} \cite{gao-jimo07}:
\begin{equation}\label{eq-cdr}
\Psi(\pmb{\varepsilon}) + \Psi^\sharp(\mbox{\boldmath$\sigma$}) = \pmb{\varepsilon}^T \mbox{\boldmath$\sigma$} \;\; \Leftrightarrow \;\; \mbox{\boldmath$\sigma$} \in \partial \Psi(\pmb{\varepsilon}) \;\; \Leftrightarrow \;\; \pmb{\varepsilon} \in \partial \Psi^\sharp(\mbox{\boldmath$\sigma$}) .
\end{equation}
By the Fenchel-Young equality $\Psi(\pmb{\varepsilon})= \pmb{\varepsilon}^T \mbox{\boldmath$\sigma$} - \Psi^\sharp(\mbox{\boldmath$\sigma$})$,
the Lagrangian $L(\boldsymbol{\rho},\tau)$ can be written in the following form
\begin{equation}
\Xi(\boldsymbol{\rho},\mbox{\boldmath$\sigma$}, \boldsymbol{\varsigma})= G_{ap}(\boldsymbol{\rho},\mbox{\boldmath$\sigma$}) - \boldsymbol{\rho}^T \mbox{\boldmath$\sigma$} - \Psi^\sharp(\mbox{\boldmath$\sigma$}) - \boldsymbol{\rho}^T {\bf c}_u + \boldsymbol{\varsigma}(\boldsymbol{\rho}^T {\bf v} - V_c).
\end{equation}
This is the Gao-Strang total complementary function for the Knapsack problem, in which, $G_{ap}(\boldsymbol{\rho},\mbox{\boldmath$\sigma$}) = \mbox{\boldmath$\sigma$}^T(\boldsymbol{\rho}\circ\boldsymbol{\rho})$ is the so-called {\em complementary gap function}. Clearly, if $\mbox{\boldmath$\sigma$} \in {\bf R}^n_+$, this gap function is convex and
$G_{ap}(\boldsymbol{\rho}, \mbox{\boldmath$\sigma$}) \ge 0 \;\; \forall \boldsymbol{\rho}\in {\bf R}^n$.
Let
\begin{equation}
{\cal S}_a^+ = \{ \bzeta = \{ \mbox{\boldmath$\sigma$}, \boldsymbol{\varsigma}\} \in {\bf R}^{n+1} | \;\; \mbox{\boldmath$\sigma$} > {\bf 0} \in {\bf R}^n, \;\; \boldsymbol{\varsigma} \ge 0 \}.
\end{equation}
Then on ${\cal S}_a^+$, we have
\begin{equation}
\Xi(\boldsymbol{\rho},\bzeta)= \mbox{\boldmath$\sigma$}^T(\boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho}) - \boldsymbol{\rho}^T{\bf c}_u + \boldsymbol{\varsigma}(\boldsymbol{\rho}^T {\bf v} - V_c)
\end{equation}
and for any given $\bzeta \in {\cal S}_a^+$, the canonical dual function can be obtained by
\begin{equation}
P^d_u (\bzeta) = \min_{\boldsymbol{\rho} \in {\bf R}^n} \Xi(\boldsymbol{\rho}, \bzeta) = -\frac{1}{4} {\mbox{\boldmath$\tau$}}^T_u(\bzeta) {\bf G}(\mbox{\boldmath$\sigma$})^{-1}{\mbox{\boldmath$\tau$}}_u(\bzeta)-\boldsymbol{\varsigma} V_c,
\end{equation}
where
\[
{\bf G}(\mbox{\boldmath$\sigma$}) = \mathop{\rm Diag}(\mbox{\boldmath$\sigma$}), \;\;\;\; {\mbox{\boldmath$\tau$}}_u = \mbox{\boldmath$\sigma$} + {\bf c}_u - \boldsymbol{\varsigma} {\bf v}.
\]
This canonical dual function is the so-called {\em pure complementary energy} in nonlinear elasticity, first proposed by Gao in 1999 \cite{gao-mrc99}, where ${\mbox{\boldmath$\tau$}}_u$ and $\mbox{\boldmath$\sigma$}$ correspond to the first and second Piola-Kirchhoff stresses, respectively.
Thus, the canonical dual problem of the Knapsack problem can be proposed as follows:
\begin{equation}
({\cal{P}}^d_u): \;\;\;\; \max \left\{ P^d_u(\bzeta) | \;\;\bzeta \in {\cal S}^+_a \right\}.
\end{equation}
\begin{theorem}[Canonical Dual Solution for Knapsack Problem \cite{gao-to17}]\label{thm1}
For any given ${\bf u}_k \in {\cal U}_a$ and $V_c > 0$, if $ \barbzeta = (\barbsig, \bar{\tau}) \in {\cal S}^+_a$ is a solution to $({\cal{P}}^d_u)$, then
\begin{equation} \label{eq-solu}
\barbrho = \frac{1}{2} {\bf G}(\barbsig)^{-1} {\mbox{\boldmath$\tau$}}_u(\barbzeta)
\end{equation}
is a global minimum solution to the Knapsack problem $({\cal{P}}_u)$ and
\begin{equation}\label{eq-cdp}
P_u(\barbrho) = \min_{\boldsymbol{\rho} \in {\bf R}^n} P_u(\boldsymbol{\rho}) = \Xi(\barbrho, \barbzeta )=
\max_{\bzeta \in {\cal S}^+_a} P^d_u(\bzeta) = P_u^d(\barbzeta).
\end{equation}
\end{theorem}
{\bf Proof}. By the convexity of the super-potential $\Psi(\pmb{\varepsilon})$, we have $\Psi^{**} (\pmb{\varepsilon}) = \Psi(\pmb{\varepsilon})$. Thus,
\begin{equation}
L(\boldsymbol{\rho}, \tau) = \sup_{\mbox{\boldmath$\sigma$} \in {\bf R}^n} \Xi(\boldsymbol{\rho}, \mbox{\boldmath$\sigma$}, \tau) \;\; \forall \boldsymbol{\rho} \in {\bf R}^n, \;\; \tau \in {\bf R}.
\end{equation}
It is easy to show that for any given $\boldsymbol{\rho} \in {\bf R}^n, \;\tau \in {\bf R}$, the supremum condition is governed by
$\Lam(\boldsymbol{\rho}) \in \partial \Psi^*(\mbox{\boldmath$\sigma$})$. By the canonical duality relations given in (\ref{eq-cdr}),
we have the equivalent relations:
\begin{equation}\label{eq-kkts}
\Lam(\boldsymbol{\rho})^T \mbox{\boldmath$\sigma$} = \mbox{\boldmath$\sigma$}^T (\boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho}) = 0 \;\; \Leftrightarrow \;\; \mbox{\boldmath$\sigma$} \in {\bf R}^n_+ \;\; \Leftrightarrow \;\; \Lam(\boldsymbol{\rho}) =
(\boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho}) \in {\bf R}^n_-.
\end{equation}
This is exactly equivalent to the KKT conditions of the canonical problem for the inequality condition $\Lam(\boldsymbol{\rho} ) \in {\bf R}^n_-$.
Thus, if $\barbzeta \in {\cal S}^+_a$ is a KKT solution to $({\cal{P}}^d_u)$, then $\barbsig > {\bf 0}$ and the complementarity condition in (\ref{eq-kkts} ) leads to
$\barbrho \circ \barbrho - \barbrho = 0$, i.e. $\barbrho \in \{0,1\}^n$.
It is easy to prove that for a given $\barbzeta $, the equality (\ref{eq-solu}) is exactly the criticality condition $\nabla_{\boldsymbol{\rho}} \Xi(\barbrho, \barbzeta ) = 0$.
Therefore, the vector $\barbrho \in \{0,1\}^n$ defined by (\ref{eq-solu}) is a solution to the Knapsack problem $({\cal{P}}_u)$.
Since, according to Gao and Strang \cite{gao-strang89}, the total complementary function $\Xi(\boldsymbol{\rho}, \bzeta)$ is a saddle function on ${\bf R}^n\times {\cal S}^+_a$,
we have
\begin{equation}
\min_{\boldsymbol{\rho} \in {\bf R}^n} P_u(\boldsymbol{\rho}) =\min_{\boldsymbol{\rho} \in {\bf R}^n} \max_{\bzeta \in {\cal S}^+_a} \Xi(\boldsymbol{\rho}, \bzeta )=
\max_{\bzeta \in {\cal S}^+_a} \min_{\boldsymbol{\rho} \in {\bf R}^n} \Xi(\boldsymbol{\rho}, \bzeta )= \max_{\bzeta \in {\cal S}^+_a} P^d_u(\bzeta) .
\end{equation}
The complementary-dual equality (\ref{eq-cdp}) can be proved by the canonical duality relations. \hfill $\Box$ \\
This theorem shows that the so-called NP-hard Knapsack problem is canonically dual to a concave maximization problem $({\cal{P}}^d_u)$
in continuous space, which is much easier than the 0-1 programming problem $({\cal{P}}_u)$ in discrete space.
Once the canonical dual solution $\barbzeta$ is obtained, the solution to the Knapsack problem can be given analytically by
(\ref{eq-solu}).
\section{Pure Complementary Energy Principle and Perturbed Solution}
\label{Pure}
Based on Theorem \ref{thm1}, a perturbed solution for the Knapsack problem has been proposed recently in \cite{gao-to17,gao-to18}.
This section demonstrates the relation of this solution with the pure complementary energy principle in nonlinear elasticity discovered by Gao in 1997-1999 \cite{gao-amr97,gao-mrc99}.
In terms of the deformation ${\mbox{\boldmath$\chi$}} = {\bf u} + {\bf x}$, the total potential energy variational principle
for general large deformation problems can also be written in the following form
\begin{equation}
({\cal{P}}_\chi): \;\;\; \inf_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \Pi({\mbox{\boldmath$\chi$}}) = \int_\Omega [W(\nabla {\mbox{\boldmath$\chi$}}) - {\mbox{\boldmath$\chi$}} \cdot {{\bf b}} ] \rho {\rm d}\Oo - \int_{\Gamma_t} {\mbox{\boldmath$\chi$}} \cdot {\bf t} {\rm d} \Gamma,
\end{equation}
where ${\cal X}_a$ is a kinetically admissible deformation space, in which the boundary condition ${\mbox{\boldmath$\chi$}} ({\bf x}) = 0 $ is given on $ \Gamma_\chi$.
It is well-known that the stored energy $W({\bf F})$ is usually a nonconvex function of the deformation gradient ${\bf F} = \nabla {\mbox{\boldmath$\chi$}} = \nabla {\bf u} + {\bf I}$ in order to model complicated phenomena, such as
phase transitions and post-buckling.
By the fact that $W({\bf F})$ must be an objective function \cite{marsd-hugh}, there exists a real-valued function $\Psi({\bf C}) $ such that $W({\bf F}) = \Psi({\bf F}^T {\bf F})$
(see \cite{ciarlet}).
For most reasonable materials (say the St. Venant-Kirchhoff material \cite{gao-haj}),
the function $\Psi({\bf C})$ is usually a convex function of the Cauchy strain measure ${\bf C} = {\bf F}^T {\bf F}$ such that its complementary energy density can be uniquely defined by the Legendre transformation
\begin{equation}
\Psi^*({\bf S}) = \{ \; {\mbox{tr}} ({\bf C} \cdot {\bf S}) - \Psi({\bf C}) | \;\; {\bf S} = \nabla \Psi({\bf C}) \}.
\end{equation}
Therefore,
a pure complementary energy variational principle
was obtained by Gao in 1999 \cite{gao-mrc99,gao-book00}:
\begin{theorem}[Pure Complementary Energy Principle for Nonlinear Elasticity \cite{gao-mrc99}] \hfill
For any given external force field ${{\bf b}}({\bf x})$ in $\Omega$ and ${\bf t}({\bf x}) $ on $\Gamma_t$, if ${\mbox{\boldmath$\tau$}}({\bf x})$ is a statically admissible stress field, i.e.
\begin{equation}
{\mbox{\boldmath$\tau$}} \in {\cal T}_a := \left\{ {\mbox{\boldmath$\tau$}}({\bf x}): \Omega \rightarrow {\bf R}^{3\times 3} | \;\; - \nabla \cdot {\mbox{\boldmath$\tau$}} = {{\bf b}} \;\; \forall {\bf x} \in \Omega , \;\; {\bf n} \cdot {\mbox{\boldmath$\tau$}} = {\bf t} \;\; \forall {\bf x} \in\Gamma_t \right \} ,
\end{equation}
and $\bar{\bf S}$ is a critical point of the pure complementary energy
\begin{equation}
\Pi^d({\bf S}) =- \int_\Omega \left[ \frac{1}{4} {\mbox{tr}}({\mbox{\boldmath$\tau$}} \cdot {\bf S}^{-1} \cdot {\mbox{\boldmath$\tau$}}) + \Psi^*({\bf S}) \right] \rho\; {\rm d}\Oo ,
\end{equation}
then the deformation field ${\bar{\bchi}} ({\bf x}) $ defined by
\begin{equation}
{\bar{\bchi}} ({\bf x}) = \frac{1}{2} \int_{{\bf x}_0}^{{\bf x}} {\mbox{\boldmath$\tau$}} \cdot \bar{\bf S}^{-1} d {\bf x}
\end{equation}
along any path from ${\bf x}_0 \in \Gamma_\chi$ to ${\bf x} \in \Omega$ is a critical point of the total potential energy $\Pi({\mbox{\boldmath$\chi$}})$
and $\Pi({\bar{\bchi}}) = \Pi^d(\bar{\bf S})$. Moreover, if $\bar{\bf S}({\bf x}) \succ 0 \;\;\forall {\bf x} \in \Omega$, then ${\bar{\bchi}}$ is a global minimizer of $\Pi({\mbox{\boldmath$\chi$}})$.
\end{theorem}
It is easy to prove that the criticality condition $\delta \Pi^d_\chi({\bf S}) = 0$ is governed by the so-called canonical dual algebraic equation \cite{gao-book00}:
\begin{equation}
4 {\bf S} \cdot [\nabla \Psi^*({\bf S}) ] \cdot {\bf S} = {\mbox{\boldmath$\tau$}}^T \cdot {\mbox{\boldmath$\tau$}}. \label{eq-cda}
\end{equation}
For certain materials, this algebraic equation can be solved analytically to obtain all possible solutions \cite{gao-ogden-qjmam}.
Particularly, for the St Venant-Kirchhoff material,
this tensor equation could have at most 27 solutions at each material point ${\bf x}$, but only one positive-definite ${\bf S}({\bf x}) \succ 0 \;\;\forall {\bf x} \in \Omega$, which leads to the global minimum solution ${\bar{\bchi}}({\bf x})$ \cite{gao-haj}.
The pure complementary energy principle solved a well-known open problem in large deformation mechanics and is known in the literature as the Gao principle
(see \cite{li-gupta}).
This principle plays an important role not only in large deformation theory and nonconvex variational analysis, but also in global optimization and computational science.
Indeed, Theorem \ref{thm1} is simply an application of this principle: if we regard the quadratic operator $\pmb{\varepsilon}(\boldsymbol{\rho})$ as the Cauchy strain measure ${\bf C}({\mbox{\boldmath$\chi$}})$,
then the canonical dual variable $\mbox{\boldmath$\sigma$} \in \partial \Psi(\pmb{\varepsilon})$ corresponds to the second Piola-Kirchhoff stress ${\bf S} = \nabla \Psi({\bf C})$,
while ${\mbox{\boldmath$\tau$}}_u$ corresponds to the first Piola-Kirchhoff stress ${\mbox{\boldmath$\tau$}}$.
By the fact that $\Psi^\sharp(\mbox{\boldmath$\sigma$})$ is nonsmooth, the associated canonical dual algebraic equation (\ref{eq-cda}) should be governed by the KKT conditions (\ref{eq-kkts}). In order to solve this problem, a $\beta$-perturbation method was proposed in 2010 for solving general integer programming problems \cite{gao-ruan-jogo10}
and recently for topology optimization problems \cite{gao-to17}.
According to the canonical duality theory for mathematical modeling \cite{gao-to18}, the integer constraint $\boldsymbol{\rho} \in \{ 0,1\}^n$ in the Knapsack problem $({\cal{P}}_u)$
is a constitutive condition, while $\boldsymbol{\rho} \cdot {\bf v} \le V_c$ is a geometrical constraint. Thus, by using the so-called pan-penalty functions
\begin{equation}
W(\boldsymbol{\rho}) = \left\{ \begin{array}{ll}
0 \;\; & \mbox{ if } \boldsymbol{\rho} \in \{ 0, 1\}^n\\
+\infty & \mbox{ otherwise},
\end{array} \right.
\;\;\; F(\boldsymbol{\rho}) = \left\{ \begin{array}{ll}
{\bf c}_u \cdot \boldsymbol{\rho} \;\; & \mbox{ if } \boldsymbol{\rho} \cdot {\bf v} \le V_c \\
- \infty & \mbox{ otherwise},
\end{array} \right.
\end{equation}
the Knapsack problem $({\cal{P}}_u)$ can be equivalently written in Gao-Strang's unconstrained form \cite{gao-strang89}:
\begin{equation}
\min \left\{ W(\boldsymbol{\rho}) - F(\boldsymbol{\rho}) | \;\; \boldsymbol{\rho} \in {\bf R}^n \right\}.
\end{equation}
By introducing a penalty parameter $\beta > 0$ and a Lagrange multiplier $\tau \ge 0$, these two pan-penalty functions can have the following
relaxations:
\begin{equation}
W_\beta(\boldsymbol{\rho}) = \beta \| \boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho} \|^2, \;\; F_\tau(\boldsymbol{\rho}) = {\bf c}_u \cdot \boldsymbol{\rho} - \tau (\boldsymbol{\rho} \cdot {\bf v} - V_c) .
\end{equation}
It is easy to prove that
\begin{equation}
W(\boldsymbol{\rho}) = \lim_{\beta \rightarrow \infty} W_\beta(\boldsymbol{\rho}), \;\; F(\boldsymbol{\rho}) = \min_{\tau \ge 0 } F_\tau(\boldsymbol{\rho}) \;\; \forall \boldsymbol{\rho} \in {\bf R}^n.
\end{equation}
Thus, the Knapsack problem can be relaxed by the so-called penalty-duality approach:
\begin{equation}
\min_{\boldsymbol{\rho} \in {\bf R}^n } \max_{\tau \ge 0 } \left\{ L_\beta(\boldsymbol{\rho}, \tau) = W_\beta(\boldsymbol{\rho}) - {\bf c}_u \cdot \boldsymbol{\rho} + \tau (\boldsymbol{\rho} \cdot {\bf v} - V_c)
\right\} \label{eq-pda} .
\end{equation}
Since the penalty function $W_\beta(\boldsymbol{\rho})$ is nonconvex, by using the canonical
transformation $W_\beta(\boldsymbol{\rho}) = \Psi_\beta(\Lam(\boldsymbol{\rho}))$, we have
$\Psi_\beta(\pmb{\varepsilon}) = \beta \| \pmb{\varepsilon}\|^2$, which is a convex quadratic function. Its Legendre conjugate is simply
$\Psi^*_\beta(\mbox{\boldmath$\sigma$}) = \frac{1}{4} \beta^{-1} \| \mbox{\boldmath$\sigma$}\|^2$.
Thus, the Gao and Strang total complementary optimization problem for the penalty-duality approach (\ref{eq-pda}) can be given by \cite{gao-to17}:
\begin{equation}
\min_{\boldsymbol{\rho} \in {\bf R}^n } \max_{\bzeta \in {\cal S}^+_a} \left\{ \Xi_\beta(\boldsymbol{\rho}, \bzeta) = (\boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho}) \cdot \mbox{\boldmath$\sigma$} - \frac{1}{4} \beta^{-1} \| \mbox{\boldmath$\sigma$}\|^2
- {\bf c}_u \cdot \boldsymbol{\rho} + \tau (\boldsymbol{\rho} \cdot {\bf v} - V_c)
\right\} \label{eq-pdb} .
\end{equation}
For any given $\beta > 0$ and $\bzeta = \{ \mbox{\boldmath$\sigma$}, \boldsymbol{\varsigma}\} \in {\cal S}^+_a $, a canonical penalty-duality (CPD) function can be obtained as
\begin{equation}
P^d_\beta (\bzeta) = \min_{\boldsymbol{\rho} \in {\bf R}^n} \Xi_\beta(\boldsymbol{\rho},\bzeta) = P^d_u (\mbox{\boldmath$\sigma$}, \boldsymbol{\varsigma}) -
\frac{1}{4} \beta^{-1} \| \mbox{\boldmath$\sigma$} \|^2 ,
\end{equation}
which is exactly the so-called $\beta$-perturbed canonical dual function presented in \cite{gao-to17, gao-to18}.
It was proved by Theorem 7 in \cite{gao-ruan-jogo10} that there exists a $\beta_c>0$ such that for any given $\beta \ge \beta_c$,
both the CPD problem
\begin{equation}
({\cal{P}}^d_\beta): \;\;\; \max \{ P^d_\beta (\bzeta) | \;\; \bzeta \in {\cal S}^+_a \}
\end{equation}
and the problem $({\cal{P}}^d_u)$ have the same solution set.
Since $\Psi_\beta^* (\mbox{\boldmath$\sigma$}) $ is a quadratic function, the corresponding canonical dual algebraic equation (\ref{eq-cda}) is a coupled cubic algebraic system
\begin{equation}
2 \beta^{-1} \sig_e^3 + \sig_e^2 = (\tau v_e - c_e)^2, \;\; e = 1, \dots, n, \label{eq-cdas}
\end{equation}
\begin{equation}
\sum_{e=1}^n \frac{1}{2} \frac{v_e}{\sig_e} ( \sig_e - v_e \tau + c_e) - V_c = 0 .\label{eq-cdv}
\end{equation}
It was proved in \cite{gao-book00,gao-jimo07} that for any given $\beta > 0$, $\tau \ge 0$ and ${\bf c}_u =\{ c_e({\bf u}_e) \}$ such that
$\theta_e = \tau v_e - c_e ({\bf u}_e) \neq 0, \ e = 1, \dots, n$, the canonical dual algebraic equation (\ref{eq-cdas}) has a unique
positive real solution
\begin{equation}
\sigma_e = \frac{1}{12} \beta [- 1 + \phi_e(\tau ) + \phi_e^c(\tau )] > 0 , \;\; e = 1, \dots, n
\label{eq-solus}
\end{equation}
where
\[
\phi_e(\varsigma ) = \eta^{-1/3} \left [2 \theta_e^2 - \eta + 2 i \sqrt{ \theta_e^2( \eta -\theta_e^2 )} \right]^{1/3} ,
\;\; \eta = \frac{\beta^2}{27},
\]
and $\phi_e^c $ is the complex conjugate of $\phi_e $, i.e. $\phi_e \phi_e^c = 1$.
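The closed-form expression (\ref{eq-solus}) can be cross-checked numerically: for each element with $\theta_e = \tau v_e - c_e \neq 0$, the cubic (\ref{eq-cdas}) has exactly one positive real root, which can be isolated directly. The following Python sketch is our own verification aid, not the authors' code.
\begin{verbatim}
import numpy as np

def dual_sigma(beta, tau, v, c):
    # unique positive real root of  2/beta*s^3 + s^2 = (tau*v_e - c_e)^2  per element
    v, c = np.asarray(v, dtype=float), np.asarray(c, dtype=float)
    sigma = np.empty(len(v))
    for e, theta in enumerate(tau * v - c):
        roots = np.roots([2.0 / beta, 1.0, 0.0, -theta ** 2])
        real_roots = roots[np.abs(roots.imag) < 1e-10].real
        sigma[e] = real_roots[real_roots > 0][0]
    return sigma
\end{verbatim}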
Thus, a canonical penalty-duality algorithm has been proposed recently for solving general topology optimization problems \cite{gao-to17,gao-to18}.
\section{CPD Algorithm for 3-D Topology Optimization}
\label{CDT-Algorithm}
For three-dimensional linear elastic structures, we simply use
cubic 8-node hexahedral elements $\{ \Omega_e\}$; each element has 24 degrees of freedom, corresponding to the displacements in the x-, y- and z-directions (three degrees of freedom per node), as shown in Fig. \ref{hexahedron-element}.
\begin{figure}[h!]
\begin{center}
\scalebox{0.15}{\includegraphics{e7.jpg}}
\caption{\em The eight-node hexahedron element }
\label{hexahedron-element}
\end{center}
\end{figure}
Thus, the displacement interpolation matrix is
${\bf N}= [\mathrm{N}_1\;\; \mathrm{N}_2\;\;...\;\; \mathrm{N}_8]$ and
\begin{equation}
\mathrm{N}_i= \left[ \begin{array}{ccc}
N_i&0 & 0\\
0 &N_i&0\\
0 & 0 &N_i\\
\end{array} \right].
\end{equation}
The shape functions $N_i=N_i(\xi_1, \xi_2, \xi_3)$, $i=1,\ldots,8$, are given by
$$N_1=\frac{1}{8} (1-\xi_1) (1-\xi_2) (1-\xi_3),\;\;\;\;\;\;\;\;
N_2=\frac{1}{8} (1+\xi_1) (1-\xi_2) (1-\xi_3),$$
$$N_3=\frac{1}{8} (1+\xi_1) (1+\xi_2) (1-\xi_3),\;\;\;\;\;\;\;\;
N_4=\frac{1}{8} (1-\xi_1) (1+\xi_2) (1-\xi_3),$$
$$N_5=\frac{1}{8} (1-\xi_1) (1-\xi_2) (1+\xi_3),\;\;\;\;\;\;\;\;
N_6=\frac{1}{8} (1+\xi_1) (1-\xi_2) (1+\xi_3),$$
$$N_7=\frac{1}{8} (1+\xi_1) (1+\xi_2) (1+\xi_3),\;\;\;\;\;\;\;\;
N_8=\frac{1}{8} (1-\xi_1) (1+\xi_2) (1+\xi_3),$$
in which $\xi_1, \xi_2$ and $\xi_3 \in [-1,1]$ are the natural coordinates of the element, and the signs correspond to the natural coordinates of the $i$-th node.
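For implementation purposes, these trilinear shape functions admit a compact vectorized form. The sketch below is our own illustration; it assumes that the node numbering of Fig.~\ref{hexahedron-element} matches the order of $N_1,\dots,N_8$ above.
\begin{verbatim}
import numpy as np

# natural coordinates of the eight nodes (corners of [-1,1]^3), ordered as N_1,...,N_8
NODES = np.array([[-1,-1,-1],[ 1,-1,-1],[ 1, 1,-1],[-1, 1,-1],
                  [-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], dtype=float)

def shape_functions(xi1, xi2, xi3):
    # N_i = (1/8)(1 + xi1_i*xi1)(1 + xi2_i*xi2)(1 + xi3_i*xi3), returned as a length-8 array
    xi = np.array([xi1, xi2, xi3])
    return 0.125 * np.prod(1.0 + NODES * xi, axis=1)
\end{verbatim}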
The nodal displacement vector ${\bf u}_e$ is given by
$$ {\bf u}_e^T= \left[ u^e_{1}\;\; u^e_{2} \;\; ... \;\; u^e_{8}\right],$$
where $u^e_{i}=(x^e_{i}, y^e_{i}, z^e_{i}) \in {\bf R}^3, \; i= 1,..., 8$ are the displacement components at node $i$.
The components $\mathrm{B}_i$ of strain-displacement matrix ${\bf B} =[ \mathrm{B}_1 \; \mathrm{B}_2 \; ... \; \mathrm{B}_8 ]$, which
relates the strain $\varepsilon$ and the nodal displacement ${\bf u}_e$ ($\varepsilon={\bf B} {\bf u}_e$),
are defined as
\begin{equation}
\mathrm{B}_i= \left[ \begin{array}{ccc}
\frac{\partial N_i}{\partial x}&0 & 0\\
0 &\frac{\partial N_i}{\partial y}&0\\
0 & 0 &\frac{\partial N_i}{\partial z}\\
\frac{\partial N_i}{\partial y}&\frac{\partial N_i}{\partial x} & 0\\
\frac{\partial N_i}{\partial z} &0&\frac{\partial N_i}{\partial x}\\
0 & \frac{\partial N_i}{\partial z} &\frac{\partial N_i}{\partial y}\\
\end{array} \right].
\end{equation}
Hooke's law for isotropic materials
in constitutive matrix form
is given by
\begin{equation}
{\bf H} = \frac{E}{(1+ \nu )(1-2\nu)} \left[ \begin{array}{cccccc}
1-\nu & \nu & \nu & 0& 0&0 \\
\nu &1-\nu & \nu &0 &0 & 0 \\
\nu & \nu & 1-\nu &0 &0 & 0 \\
0&0& 0 &\frac{1-2 \nu}{2} &0 & 0 \\
0&0& 0 &0&\frac{1-2 \nu }{2} & 0 \\
0&0& 0 &0&0 & \frac{1-2 \nu }{2}
\end{array} \right],
\end{equation}
where $E$ is Young's modulus and $\nu$ is Poisson's ratio of the isotropic material.
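The constitutive matrix ${\bf H}$ can be assembled directly from $E$ and $\nu$; a minimal sketch (our own illustration, matching the matrix displayed above) is:
\begin{verbatim}
import numpy as np

def hooke_matrix(E, nu):
    # isotropic constitutive matrix H in Voigt notation
    A = np.full((3, 3), nu)
    np.fill_diagonal(A, 1.0 - nu)
    H = np.block([[A, np.zeros((3, 3))],
                  [np.zeros((3, 3)), np.eye(3) * (1.0 - 2.0 * nu) / 2.0]])
    return E / ((1.0 + nu) * (1.0 - 2.0 * nu)) * H
\end{verbatim}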
The stiffness matrix of the
structure in CPD algorithm is given by
\begin{equation} \label{k-rho}
{\bf K} (\boldsymbol{\rho} )=\sum_{e=1}^n ( E_{min} +(E-E_{min})\rho_e ) \mathrm{K}_e,
\end{equation}
where $E_{min}$ must be small enough (usually $E_{min} = 10^{-9}E$) to avoid singularity in computation
and $\mathrm{K}_e$ is defined as
\begin{equation}
\mathrm{K}_e= \int^{1}_{-1} \int^{1}_{-1} \int^{1}_{-1} {\bf B}^T {\bf H} {\bf B}\; d\xi_1 d\xi_2 d\xi_3.
\end{equation}
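In practice, the triple integral for $\mathrm{K}_e$ is evaluated with a $2\times2\times2$ Gauss rule (unit weights). The sketch below is our own illustration and assumes a user-supplied function \texttt{B(xi1, xi2, xi3)} returning the $6\times 24$ strain-displacement matrix defined above.
\begin{verbatim}
import numpy as np
from itertools import product

GAUSS = (-1.0 / np.sqrt(3.0), 1.0 / np.sqrt(3.0))   # 2-point Gauss abscissae

def element_stiffness(B, H):
    # K_e = integral over [-1,1]^3 of B^T H B, approximated by 2x2x2 Gauss quadrature
    Ke = np.zeros((24, 24))
    for x1, x2, x3 in product(GAUSS, GAUSS, GAUSS):
        Bq = B(x1, x2, x3)
        Ke += Bq.T @ H @ Bq
    return Ke
\end{verbatim}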
Based on the canonical duality theory,
an evolutionary canonical penalty-duality (CPD) algorithm\footnote{This algorithm was called the CDT algorithm in \cite{gao-to17}.
Since a new CDT algorithm without $\beta$-perturbation has been developed, the present algorithm, which is based on the canonical penalty-duality method, is referred to as the CPD algorithm.}
for solving the topology optimization problem \cite{gao-to17} can be presented as follows.
\vspace{.8cm} \\
{\bf Canonical Penalty-Duality Algorithm for Topology Optimization (CPD)}:
\begin{enumerate}
\item Initialization: \\
Choose a suitable initial volume reduction rate $\mu<1$. \\
Let $\boldsymbol{\rho}^{0}=\{1\} \in {\cal R}^n$.\\
Given an initial value $\boldsymbol{\varsigma}^{ 0}>0$, an initial volume $ V_\gamma = \mu V_0$.\\
Given a perturbation parameter $\beta >10$ and error tolerances $\omega_1$ and $\omega_2$, where $\omega_1$ serves as the termination criterion.\\
Let $\gamma=0$ and compute
\[
{\bf u}^{0} = {\bf K}^{-1}(\boldsymbol{\rho}^0){\bf f}(\boldsymbol{\rho}^0) , \;\;
{\bf c}^{0}={\bf c}({\bf u}^{0}) = {{\bf u}^0}^T {\bf K}(\boldsymbol{\rho}^0) {\bf u}^0.
\]
\item\label{step2} Let $k=1$ .
\item \label{step3} Compute $\bzeta_k= \{\mbox{\boldmath$\sigma$}^k, \tau^k\} $ by
\[
\sigma_e^{k } = \frac{1}{6} \beta [- 1 + \phi_e(\boldsymbol{\varsigma}^{k-1 }) + \phi_e^c(\boldsymbol{\varsigma}^{k-1 })] , \;\; e = 1, \dots, n.
\]
\[
\boldsymbol{\varsigma}^{k } = \frac{ \sum_{e=1}^n v_e (1 + c^\gamma_e /\sigma_e^{k} ) - 2 V_{\gamma} }{\sum_{e=1}^n v_e^2/\sigma_e^{k}} .
\]
\item If
\begin{equation} \label{change}
\Delta=|P^d_u(\mbox{\boldmath$\sigma$}^k, \tau^k) - P^d_u( \mbox{\boldmath$\sigma$}^{k-1},\tau^{k-1}) | > \omega_1 ,
\end{equation}
then let $k = k + 1$, go to Step \ref{step3};
Otherwise, continue.
\item Compute $\boldsymbol{\rho}^{\gamma+1} = \{ \rho^{\gamma+1}_e \}$ and ${\bf u}^{\gamma+1} $ by
\[
\rho^{\gamma+1}_e = \frac{1}{2} [ 1 - ( \boldsymbol{\varsigma}^{k } v_e - c^\gamma_e)/\sigma_e^k],
\;\; e= 1, \dots, n.
\]
\[
{\bf u}^{\gamma+1} = {\bf K}(\boldsymbol{\rho}^{\gamma+1})^{-1} {\bf f}(\boldsymbol{\rho}^{\gamma+1}).
\]
\item If $|\boldsymbol{\rho}^{\gamma+1} - \boldsymbol{\rho}^\gamma | \le \omega_2 $ and $ V_\gamma \le V_c$ ,
then stop; Otherwise, continue.
\item Let $ V_{\gamma +1} =\mu V_{\gamma} $, $\tau^0 = \tau^k$, and
$\gamma=\gamma+1$, go to step \ref{step2}.
\end{enumerate}
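To make the structure of the method explicit, the following Python skeleton (our own schematic transcription, not the authors' code) shows the two-loop organization of the CPD algorithm. The callables \texttt{assemble\_K}, \texttt{elem\_compliance}, \texttt{dual\_step} and \texttt{dual\_value} are hypothetical placeholders for the expressions given in the steps above.
\begin{verbatim}
import numpy as np

def cpd(assemble_K, elem_compliance, dual_step, dual_value,
        v, V_c, mu, omega1, omega2, max_inner=10000):
    # assemble_K(rho)             -> (K(rho), f(rho))
    # elem_compliance(rho, u)     -> vector c = {c_e(u_e)}
    # dual_step(sigma, vs, c, V)  -> updated (sigma^k, varsigma^k)   (Step 3)
    # dual_value(sigma, vs, c, V) -> P_u^d(sigma^k, varsigma^k)      (Step 4)
    n = len(v)
    rho = np.ones(n)
    K, f = assemble_K(rho)
    u = np.linalg.solve(K, f)
    c = elem_compliance(rho, u)
    V, vs, sigma = mu * v.sum(), 1.0, np.ones(n)        # V_1 = mu * V_0
    while True:
        P_old = np.inf
        for _ in range(max_inner):                       # inner k-loop (analytical updates)
            sigma, vs = dual_step(sigma, vs, c, V)
            P_new = dual_value(sigma, vs, c, V)
            if abs(P_new - P_old) <= omega1:
                break
            P_old = P_new
        rho_new = 0.5 * (1.0 - (vs * v - c) / sigma)     # Step 5: analytical density update
        K, f = assemble_K(rho_new)
        u = np.linalg.solve(K, f)
        c = elem_compliance(rho_new, u)
        if np.max(np.abs(rho_new - rho)) <= omega2 and V <= V_c:
            return rho_new, u                            # Step 6: stop
        rho, V = rho_new, mu * V                         # Step 7: volume evolution
\end{verbatim}
Only the outer $\gamma$-loop requires a linear solve with the stiffness matrix; the inner $k$-loop uses the closed-form dual updates, which is the source of the polynomial-time complexity discussed in the remark below.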
\begin{remark} [Volume Evolutionary Method and Computational Complexity]
By Theorem 1 we know that for any given desired volume $V_c> 0$, the optimal solution $\barbrho $ can be analytically obtained by (\ref{eq-solu})
in terms of its canonical dual solution in continuous space. By the fact that the topology optimization problem $({\cal{P}}_{bl})$ is a
coupled nonconvex minimization, numerical optimization depends sensitively on
the initial volume $V_0$. If $ \mu_c = V_c/V_0 \ll 1, $ any given iteration method could lead to unreasonable numerical solutions. In order to resolve this problem,
a volume decreasing control parameter
$\mu \in (\mu_c,1)$ was introduced in \cite{gao-to17} to produce a volume sequence $V_{\gamma } = \mu V_{\gamma-1}$ ($\gamma = 1, \dots, \gamma_c$)
such that $V_{\gamma_c} = V_c$ and for any given $V_\gamma \in [V_c, V_0]$, the problem $({\cal{P}}_{bl})$ is replaced by
\begin{eqnarray}
({\cal{P}}_{bl})^\gamma: \;\;& \min & \bigg\{ {\bf f}^T{\bf u} - C_p(\boldsymbol{\rho},{\bf u}) \; | \;\; \;\; \boldsymbol{\rho} \in \{ 0, 1\}^n, \;\; {\bf v}^T \boldsymbol{\rho} \le V_\gamma \bigg \} , \\
& \mbox{s.t. } & {\bf u}(\boldsymbol{\rho}) = \arg \min \{ \Pi_h ({\bf v}, \boldsymbol{\rho}) | \;\; {\bf v} \in {\cal U}_a \} .
\end{eqnarray}
The initial values for solving this $\gamma$-th problem are $V_{\gamma-1}, {\bf u}_{\gamma-1}, $ $\boldsymbol{\rho}_{\gamma-1}$.
Theoretically speaking, for any given sequence $\{V_\gamma \}$ we should have
\begin{equation}
({\cal{P}}_{bl}) = \lim_{\gamma\rightarrow \gamma_c} ({\cal{P}}_{bl})^\gamma.
\end{equation}
Numerically, different volume sequences $\{V_\gamma\}$ may produce totally different structural topologies as long as the alternative iteration is used.
This is an intrinsic difficulty for all coupled bi-level optimal design problems.
The original idea of this sequential volume-decreasing technique comes from an evolutionary method
for solving optimal shape design problems (see Chapter 7 of \cite{gao-book00}). It was realized recently that the same idea is used in the ESO and BESO methods, but these two methods are not polynomial-time algorithms.
Since the CPD algorithm has only two loops, i.e. the $\gamma$-loop and the $k$-loop, and the canonical dual solution is given analytically in the $k$-loop,
the main computational cost is the $m\times m$ matrix inversion in the $\gamma$-loop. The complexity of Gauss-Jordan elimination is $O(m^3)$. Therefore, the CPD is a polynomial-time algorithm.
\end{remark}
\section{Applications to 3-D Benchmark Problems}
\label{Numerical}
In order to demonstrate the performance of the CPD algorithm for solving 3-D topology optimization problems, our numerical results are compared
with those of two popular methods: BESO and SIMP.
The algorithm for the soft-kill BESO is from \cite{Huang}\footnote{According to Professor Y.M. Xie at RMIT, this BESO code was poorly implemented and has never been used for any of their further research simply because it was extremely slow compared to their
other BESO codes. Therefore, the computing-time comparison between CPD and BESO provided in this section
may not be representative of other, commercial BESO codes. }.
A modified SIMP algorithm without filter is used according to \cite{Liu-Tovar}.
The parameters used in BESO and SIMP are:
the minimum radius $r_{\min}= 1.5$, the evolutionary rate $er=0.05$, and the penalization power $p= 3$.
Young's modulus and Poisson's ratio of the material are taken as $E=1$ and $\nu=0.3$, respectively.
The initial value for $\boldsymbol{\varsigma}$ used in CPD is $\boldsymbol{\varsigma}^0=1$.
We take the design domain $V_0 = 1$, the initial design variable $\boldsymbol{\rho}^0=\{1\}$ for both CPD and BESO algorithms.
All computations are performed by a computer with Processor Intel Core I7-4790, CPU 3.60GHz and memory 16.0 GB.
\subsection{Cantilever Beam Problems} \label{CBP}
For this benchmark problem, we present results based on three types of mesh resolutions with two types of loading conditions.
\subsubsection{Uniformly distributed load with $60 \times 20 \times 4$ meshes.} \label{exa1}
First, let us consider the cantilever beam with uniformly distributed load at the right end
as illustrated in Fig. \ref{design1}.
\begin{figure}[h!]
\begin{center}
\scalebox{0.12}{\includegraphics{design1.JPG}}
\caption{\em Cantilever beam with uniformly distributed load at the right end}
\label{design1}
\end{center}
\end{figure}
The target volume and termination criterion for CPD, BESO and SIMP
are selected as $V_c= 0.3$ and $\omega_1=10^{-6}$, respectively.
For both CPD and BESO methods, we take the volume evolution rate $ \mu=0.89$,
the perturbation parameter for CPD is $\beta=4000$.
The results are reported in Table \ref{cantilever000}\footnote{The so-called compliance in this section is actually a doubled strain energy, i.e. $c=2 C(\boldsymbol{\rho},{\bf u})$ as used in \cite{Liu-Tovar}}.
\begin{table}[h!]
\centering
\begin{tabular}{|p{1cm}|p{2.4cm}|p{7.5cm}|}
\hline
\centerline{\small{Method}}
&
\centerline{Details}
&
\centerline{Structure}
\\
\hline
\vspace{2.0cm}
\centerline{CPD}
&
\vspace{1.5cm}
\centerline{\small{$C= 1973.028$}}
\centerline{\small{It. = 23}}
\centerline{\small{Time= 27.1204}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{CDT1.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.0cm}
\centerline{BESO}
&
\vspace{1.5cm}
\centerline{\small{$C= 1771.3694$}}
\centerline{\small{It. = 154}}
\centerline{\small{Time= 2392.9594}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{Beso1.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.0cm}
\centerline{SIMP}
&
\vspace{1.5cm}
\centerline{\small{$C= 2416.6333$}}
\centerline{\small{It. = 200}}
\centerline{\small{Time= 98.7545}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{SIMP1.jpg}
\end{center}
\end{minipage}}
\\
\hline
\end{tabular}
\caption{{\em Structures produced by CPD, BESO and SIMP for cantilever beam} ($60 \times 20 \times 4$)}
\label{cantilever000}
\end{table}
Fig. \ref{Compliance} shows the convergence of compliances produced by all the three methods.
As we can see, the SIMP provides an upper bound approach, since this method is based on the minimization of the compliance, i.e. the problem $(P)$.
By Remark 1 we know that this problem violates the minimum total potential energy principle; accordingly, the SIMP converges in a strange way, i.e. the structures produced by the SIMP are broken during the early iterations, until $It. = 15$ (see Fig. \ref{Compliance}), which is physically unreasonable.
Dually, both the CPD and BESO provide lower bound approaches. It is reasonable to believe that the main idea of the BESO is
similar to the Knapsack problem, i.e. at each volume iteration it eliminates the elements that store the least strain energy by a simple comparison.
Since the same volume evolutionary rate $\mu$ is adopted, the results obtained by
the CPD and BESO are very close to each other (see also Fig. \ref{volume}).
However, the CPD is almost 100 times faster than the BESO method since the BESO is not a polynomial-time algorithm.
\begin{figure}[h!]
\begin{center}
\scalebox{0.225}{\includegraphics{Compliance.JPG}}
\caption{\em Convergence test for CPD, BESO and SIMP}
\label{Compliance}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\scalebox{0.232}{\includegraphics{volume.jpg}}
\caption{\em Comparison of volume variations for CPD, BESO and SIMP}
\label{volume}
\end{center}
\end{figure}
The optimal structures produced by the CPD with $\omega_1= 10^{-16}$ and with different values of $\mu$ and $\beta$
are summarized in Table \ref{cantilever}.
Also, the target compliances during the iterations for all CPD examples with different values of $\mu$ and $\beta$ are reported in Fig. \ref{Compliance5}.
The results show that the CPD algorithm depends sensitively on the volume evolution parameter $\mu$, but not on the penalty parameter $\beta$.
The comparison of volume evolutions by CPD and BESO is given in Fig. \ref{compliance-CDT-BESO},
which shows, as expected, that the BESO method also depends sensitively on the volume evolutionary rate $\mu$.
For a fixed $\beta=4000$, the convergence of the CPD is more stable and faster than
that of the BESO. The $C$-iteration curve for BESO jumps for every given $\mu$,
which could be an instance of the so-called ``chaotic convergence curves'' addressed by G. I. N. Rozvany in \cite{Rozvany}.
\begin{figure}[h!]
\begin{center}
\scalebox{0.22}{\includegraphics{Compliance2.JPG}}
\caption{\em Convergence tests for CPD method at different values of $\mu$ and $\beta$}
\label{Compliance5}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\scalebox{0.21}{\includegraphics{compliance-CDT-BESO.jpg}}
\caption{\em Convergence test for CPD and BESO with different $\mu$.}
\label{compliance-CDT-BESO}
\end{center}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|p{1.95cm}|p{5.25cm}|p{1.95cm}|p{5.25cm}|}
\hline
\centerline{Details}
&
\centerline{Structure}
&
\centerline{Details}
&
\centerline{Structure}
\\
\hline
\vspace{0.8cm}
{$\mu=0.88$}
{$\beta=4000$}
{\small{$C=2182.78$}}
{\small{It. =22}}
{\small{Time=29.44}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.32\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=37mm]{cant-1.jpg}
\end{center}
\end{minipage}}
&
\vspace{0.8cm}
{$\mu=0.89$}
{$\beta=90000$}
{\small{$C=1973.02$}}
{\small{It. =23}}
{\small{Time=30.69}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.32\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=37mm]{CDT1_beta90000.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{0.8cm}
{$\mu=0.9$}
{$\beta=4000$}
{\small{$C=1920.68$}}
{\small{It. =23}}
{\small{Time=30.87}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.32\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=37mm]{CDT3.jpg}
\end{center}
\end{minipage}}
&
\vspace{0.8cm}
{$\mu=0.92$}
{$\beta=90000$}
{\small{$C=1832.59$}}
{\small{It. =23}}
{\small{Time=33.73}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.32\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=37mm]{CDT2_beta90000.jpg}
\end{center}
\end{minipage}} \\
\hline
\end{tabular}
\caption{{\em Optimal structures produced by CPD with different values of $\mu$ and $\beta$ }
\label{cantilever}}
\end{table}
\subsubsection{Uniformly distributed load with $120 \times 50 \times 8$ mesh resolution} \label{exa2}
Now let us consider the same loaded beam as shown in Fig. \ref{design1}, but with a finer mesh resolution of
$120 \times 50 \times 8$. In this example
the target volume fraction and termination criterion for all procedures are assumed to be $V_c= 0.3$ and $\omega_1=10^{-6}$, respectively. The initial volume reduction rate for both CPD and BESO is $\mu=0.935$.
The perturbation parameter for CPD is $\beta=7000$.
The optimal topologies produced by CPD, BESO and SIMP methods are reported in Table \ref{cantilevera}.
As we can see, the CPD is about five times faster than the SIMP and almost 100 times faster than the BESO method.
If we choose $\omega_1=0.001$, the computing times (iterations) for CPD, BESO and SIMP are
0.97 (24), 24.67 (44) and 4.3 (1000) hours, respectively.
Actually, the SIMP failed to reach the given precision.
If we increase $\omega_1$ to $0.01$, the SIMP takes 3.14 hours with 742 iterations to satisfy the given precision.
Our numerical results show that the CPD method can produce very good results with much less computing time.
For a given very small $\omega_1=10^{-16}$, Table \ref{CDT-different values} shows the effects of the parameters $\mu$, $\beta$ and $V_c$ on the computing time of the CPD
method.
\begin{table}[h!]
\centering
\begin{tabular}{|p{1.1cm} |p{2.4cm}|p{7.7cm}|}
\hline
\centerline{\small{Method}}
&
\centerline{\small{Details}}
&
\centerline{Structure}
\\
\hline
\vspace{2.0cm}
\centerline{CPD}
&
\vspace{1.5cm}
{\small{$C= 1644.0886$}}
{\small{It. =24}} {\small{Time=3611.23}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.45\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{CDTa.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.0cm}
\centerline{BESO}
&
\vspace{1.5cm}
{\small{$C= 1605.1102$}}
{\small{It. =200}}
{\small{Time=342751.96}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.45\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{Besoa.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.0cm}
\centerline{SIMP}
&
\vspace{1.5cm}
{\small{$C=1835.4106$}}
{\small{It. =1000}}
{\small{Time=15041.06}}
&
\vspace{0.0000001cm}
\centerline{
\begin{minipage}{0.45\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{SIMPa.jpg}
\end{center}
\end{minipage}} \\
\hline
\end{tabular}
\caption{{\em Topology optimization for cantilever beam} ($120 \times 50 \times 8$)}
\label{cantilevera}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{ |p{7.2cm}|p{7.2cm}|}
\hline
\centerline{$\;\mu=0.935, \;\; \beta=3000, \;V_c=0.3$} \centerline{ $C=1632.959$, It. =25, Time=3022.029}
& \centerline{$\;\mu=0.935, \;\; \beta=7000,\;V_c=0.18$} \centerline{ $C= 2669.980$, It. =34, Time=5040.6647}
\\
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{CDT11.jpg}
\end{center}
\end{minipage}}
&
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{Cant-11.jpg}
\end{center}
\end{minipage} }
\\ \hline \hline
\centerline{$\;\mu=0.98, \;\; \beta=7000, \;V_c=0.3$} \centerline{$C=1635.922$, It. =25, Time=3531.3235}
& \centerline{$\;\mu=0.98, \;\; \beta=7000,\;V_c=0.18$} \centerline{$C=2892.914$, It. =35, Time=4853.3776}
\\
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{Cant-6.jpg}
\end{center}
\end{minipage}}
&
\centerline{
\begin{minipage}{0.40\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{Cant-10.jpg}
\end{center}
\end{minipage} }
\\ \hline
\end{tabular}
\caption{{\em Effects of $\mu$, $\beta$ and $V_c$ on the final results of the CPD method ($\omega_1=10^{-16}$)}
\label{CDT-different values}}
\end{table}
\subsubsection{Beam with a central load and $40 \times 20 \times 20$ meshes} \label{exa3}
In this example, the beam is subjected to a central load at its right end (see Fig. \ref{design2}).
We let $V_c= 0.095$, $\omega_1=0.001$,
$\beta=7000$ and $\mu=0.888$.
\begin{figure}[h!]
\begin{center}
\scalebox{0.12}{\includegraphics{design2.JPG}}
\caption{\em Design domain for cantilever beam with a central load at the right end}
\label{design2}
\end{center}
\end{figure}
The topology optimized structures produced by CPD, SIMP and BESO methods are summarized in Table \ref{Can with central load}.
Compared with the SIMP method, we can see that, by using only about $20\%$ of its computing time, the CPD produces a global optimal solution; this solution is also better than that produced by
the BESO, while requiring only about $8\%$ of the BESO computing time.
We should point out that for the given $\omega_1 = 0.001$, the SIMP method failed to converge in 1000 iterations
(the so-called ``change'' being $\Delta = 0.0061>\omega_1$).
\begin{table}[h!]
\centering
\begin{tabular}{ |p{12.5cm}|}
\hline
\centerline{CPD: \;\;$C= 20.564,$ \;$\;$ It. =45, $\;$Time=959.7215}
\centerline{\begin{minipage}{0.68\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=49mm]{CDTO3D1.jpg}
\end{center}
\end{minipage}}
\\ \hline
\centerline{ BESO: \;\;$C= 20.1533,$ \;$\;$It. =53, $\;$Time=11461.128}
\centerline{ \begin{minipage}{0.68\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=45mm]{BESOc.jpg}
\end{center}
\end{minipage} }
\\ \hline
\centerline{ SIMP: \;\;$C= 25.7285,$ \;$\;$It. =1000, $\;$Time=4788.4762}
\centerline{\begin{minipage}{0.68\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=45mm]{SIMPex3.jpg}
\end{center}
\end{minipage}}
\\ \hline
\end{tabular}
\caption{{\em Topologies of the cantilever beam with a central load at the right end}
\label{Can with central load}}
\end{table}
\subsection{MBB Beam} \label{MBB Beam Problem}
The second benchmark problem is the 3-D Messerschmitt-B\"{o}lkow-Blohm (MBB) beam.
Two examples with different loading and boundary conditions are illustrated.
\subsubsection{Example 1} \label{exb1}
The MBB beam design for this example is illustrated in Fig. \ref{MBB-design2}. In this example, we use $40 \times 20 \times 20$ mesh resolution, $V_c= 0.1$ and $\omega_1=0.001$.
The initial volume reduction rate and perturbation parameter are $\mu=0.89$ and $\beta=5000$, respectively.
\begin{figure}[h!]
\begin{center}
\scalebox{0.12}{\includegraphics{design4.JPG}}
\caption{\em MBB beam with uniformly distributed central load}
\label{MBB-design2}
\end{center}
\end{figure}
Table \ref{MBB-ex1} summarizes the optimal topologies by using CPD, BESO and SIMP methods.
Compared with the BESO method, we see again that the CPD produces a mechanically sound structure and takes only
$12.6\%$ of the computing time.
Also, the SIMP method failed to converge for this example; the result presented in Table \ref{MBB-ex1} is only the output of the
1000th iteration, at which $\Delta = 0.039 > \omega_1$.
\begin{table}[h!]
\centering
\begin{tabular}{ |p{12.3cm}|}
\hline
\centerline{CPD: \;\;$C= 7662.5989,$ \;$\;$ It. =46, $\;$Time=1249.1267 }
\centerline{\begin{minipage}{0.72\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{MBB-ex2.jpg}
\end{center}
\end{minipage}}
\\ \hline
\centerline{ BESO: \;\;$C= 7745.955,$ \;$\;$It. =55, $\;$Time=9899.0921}
\centerline{ \begin{minipage}{0.72\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{BESO-MBB.jpg}
\end{center}
\end{minipage} }
\\ \hline
\centerline{ SIMP: \;\;$C=12434.8629,$ \;$\;$It. =1000, $\;$Time=5801.0065}
\centerline{\begin{minipage}{0.72\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=43mm]{SIMPmbb2.jpg}
\end{center}
\end{minipage}}
\\ \hline
\end{tabular}
\caption{{\em Results for 3-D MBB beam with uniformly distributed load}
\label{MBB-ex1}}
\end{table}
\subsubsection{Example 2} \label{exb2}
In this example, the MBB beam is supported horizontally at its four bottom corners under a central load, as shown in Fig. \ref{MBB-design1}.
\begin{figure}[h!]
\begin{center}
\scalebox{0.140}{\includegraphics{MBB-design1.JPG}}
\caption{\em 3-D MBB beam with a central load}
\label{MBB-design1}
\end{center}
\end{figure}
The mesh resolution is $60 \times 10 \times 10$, the target volume is $V_c= 0.155$.
The initial volume reduction rate and perturbation parameter are defined as $\mu=0.943$ and $\beta=7250$, respectively.
The topology optimized structures produced by CPD, BESO and SIMP with $\omega_1=10^{-5}$ are reported in Table \ref{MBB-2}.
Once again we can see that, without using any artificial techniques, the CPD produces a mechanically sound integer density distribution, while its computing time
is only 3.3\% of that used by the BESO.
\begin{table}[h!]
\centering
\begin{tabular}{|p{1.0cm} |p{2.3cm} |p{9.0cm}|}
\hline
\centerline{\small{Method}}
&
\centerline{\small{Details}}
&
\centerline{Structure}
\\
\hline
\vspace{0.25cm}
\centerline{CPD}
&
{\small{$C=19.5313\;\;\;$}}
{\small{It. = 37}}
{\small{Time=48.2646}}
&
\centerline{
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=58mm]{MBB-ex1.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.5cm}
\centerline{BESO}
&
\vspace{2.0cm}
{\small{$C=20.1132\;\;\;$}}
{\small{It. =57}}
{\small{Time=1458.488}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=58mm]{Beso3.jpg}
\end{center}
\end{minipage}} \\
\hline
\vspace{2.6cm}
\centerline{SIMP}
&
\vspace{2.0cm}
{\small{$C=41.4099 \;\;\;$}}
{\small{It. =95}}
{\small{Time=366.4988}}
&
\vspace{0.000001cm}
\centerline{
\begin{minipage}{0.48\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=58mm]{SIMPMBBex2.jpg}
\end{center}
\end{minipage}} \\
\hline
\end{tabular}
\caption{{\em Structures for 3-D MBB beam with a central load }
\label{MBB-2}}
\end{table}
\subsection{Cantilever beam with a given hole}
\label{circular_void}
In real-world applications, the desired structures are usually subjected to certain design constraints such that some elements are required to be
either solid or void. Now let us consider the cantilever beam with a given hole as illustrated in Fig. \ref{design3}.
We use mesh resolution $70 \times 30 \times 6$ and parameters $V_c= 0.5$, $\beta=7000$, $\mu=0.94$ and $\omega_1=0.001$.
\begin{figure}[h!]
\begin{center}
\scalebox{0.12}{\includegraphics{design3.JPG}}
\caption{\em Design domain for cantilever beam with a given hole}
\label{design3}
\end{center}
\end{figure}
The optimal topologies produced by CPD, BESO, and SIMP are summarized in Table \ref{circular-void}.
The results show clearly that the CPD method is significantly faster than both BESO and SIMP.
Again, the SIMP failed to converge in 1000 iterations
and the ``Change'' $\Delta = 0.011 > \omega_1$ at the last iteration.
\begin{table}[h!]
\centering
\begin{tabular}{ |p{14.4cm}|}
\hline
\centerline{CPD: \;\;$C= 910.0918,$ \;$\;$ It. =14, $\;$Time=74.61}
\vspace{-0.12cm}
\centerline{
\begin{minipage}{0.80\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=42mm]{CDT-void.jpg}
\end{center}
\end{minipage}}
\\
\hline
\centerline{ BESO: \;\;$C= 916.3248,$ \;$\;$It. =21, $\;$Time=1669.5059}
\vspace{-0.012cm}
\centerline{
\begin{minipage}{0.80\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=42mm]{BESO-circal.jpg}
\end{center}
\end{minipage}}
\\ \hline
\centerline{ SIMP: \;\;$C=997.1556,$ \;$\;$It. =1000, $\;$Time=1932.7697}
\vspace{-0.012cm}
\centerline{
\begin{minipage}{0.80\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=42mm]{SIMP_circal.jpg}
\end{center}
\end{minipage}}
\\ \hline
\end{tabular}
\caption{{\em Topology optimized structures for cantilever beam with a given hole }
\label{circular-void}}
\end{table}
\subsection{3D wheel problem}
\label{wheel_3d}
The 3D wheel design problem is constrained by planar
joints at the corners, with a downward point load at the center
of the bottom face, as shown in Fig. \ref{3D_wheel}.
The mesh resolution for this problem is $40 \times20 \times 40$.
The target volume is $V_c= 0.2$ and the parameters used are $\beta=150$, $\mu=0.94$ and
$\omega_1=10^{-5}$.
The optimal topologies produced by CPD, BESO and SIMP are reported in Table \ref{3D-wheel1}.
We can see that the CPD takes only about 18\% and 32\% of the computing times of BESO and SIMP, respectively.
Once again, the SIMP failed to converge in 1000 iterations
and the ``Change'' $\Delta = 0.0006 > \omega_1$ at the last iteration.
\begin{figure}[h!]
\begin{center}
\scalebox{0.28}{\includegraphics{3D_wheel.JPG}}
\caption{\em 3D wheel problem }
\label{3D_wheel}
\end{center}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{ |p{4.8cm}|p{4.8cm}|p{4.8cm}|}
\hline
\centerline{$C=3.6164$, It. =32} \centerline{Time=6716.1433}
& \centerline{ $C=3.6136$, It. =52} \centerline{Time=37417.5089}
& \centerline{ $C=3.7943$, It. =1000} \centerline{Time=20574.8348} \\
\hline \vspace{0.000001cm}
\begin{minipage}{0.29\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{wheel3D-CDT.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.29\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41.0mm]{wheel3D-BESO.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.29\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=41mm]{wheel3D-SIMP.jpg}
\end{center}
\end{minipage}
\\ \hline
\end{tabular}
\caption{{\em Topology optimized results for 3D-wheel problem} ($40\times 20 \times 40$) {\em by CPD (left), BESO (middle), and SIMP (right)}}
\label{3D-wheel1}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{ |p{4.9cm}|p{4.9cm}|p{4.9cm}|}
\hline
\centerline{$\mu=0.88, \;\; V_c=0.06$} \centerline{$C=5.7296$, It. =55} \centerline{Time=2324.0445}
& \centerline{$\mu=0.88, \;\; V_c=0.1$} \centerline{$C=4.2936$, It. =44} \centerline{Time=1888.6451}
& \centerline{$\mu=0.92, \;\; V_c=0.1$} \centerline{$C=4.3048$, It. =45} \centerline{Time=1823.7826} \\
\hline \vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=32mm]{CDT-Wheel1.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=32mm]{CDT-Wheel2.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=32mm]{CDT-Wheel3.jpg}
\end{center}
\end{minipage}
\\ \hline
\vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{cc3.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{cc2.jpg}
\end{center}
\end{minipage}
& \vspace{0.000001cm}
\begin{minipage}{0.31\textwidth}
\begin{center}
\includegraphics[width=\linewidth, height=40mm]{cc1.jpg}
\end{center}
\end{minipage}
\\ \hline
\end{tabular}
\caption{{\em Topology optimized results by CPD for 3D-wheel problem} ($30\times 20 \times 30$) {\em with two different views}}
\label{wheel}
\end{table}
For a given very small termination criterion $\omega_1=10^{-16}$ and for
mesh resolution $30 \times20 \times 30$,
Table \ref{wheel} shows effects of the parameters $\mu$ and $V_c$ on the topology optimized results by CPD.
\section{Concluding Remarks and Open Problems}
\label{Conclusions}
We have presented a novel canonical penalty-duality method for solving challenging topology optimization problems.
The relation between the CPD method for solving 0-1 integer programming problems
and the pure complementary energy principle in nonlinear elasticity is revealed for the first time.
Applications are demonstrated by 3-D linear elastic structural topology optimization problems.
Since the integer density distribution is obtained analytically, it should be considered as the global optimal solution at each volume iteration.
Generally speaking, the so-called compliance produced by the CPD is higher than that produced by BESO for most of the tested problems, except for the MBB beam and the cantilever beam with a given hole.
A possible reason is that certain artificial techniques, such as the so-called soft-kill, filtering and sensitivity analysis, are used by the BESO method. The following remarks are important for understanding these popular methods and some conceptual mistakes in topology optimization.
\begin{remark}[On Penalty-Duality, SIMP, and BESO Methods]
It is well-known that
the Lagrange multiplier method can be used essentially for solving convex problems with equality constraints. The Lagrange multiplier must be a solution to the Lagrangian dual problem
(see the Lagrange Multiplier's Law in \cite{gao-book00}, page 36). For inequality constraints, the Lagrange multiplier must satisfy the KKT conditions.
The penalty method can be used for solving problems with both equality and inequality constraints, but an iterative method must be used. Since the penalty parameter is hard to control during the iterations and, in principle, needs to be large enough for the penalty function to be truly effective, which in turn may cause numerical instabilities, the penalty method fell out of favor after the {\em augmented Lagrange multiplier method}
was proposed in the 1970s and 1980s.
The augmented Lagrange multiplier method is simply the combination of the Lagrange multiplier method and the penalty method, and it has been actively studied for more than 40 years. But this method can be used mainly for solving linearly constrained problems, since even a simple nonlinear constraint could lead to a nonconvex minimization problem \cite{l-g-opl}.
For example, let us consider the knapsack problem $({\cal{P}}_u)$. As we know, by using the canonical measure $\Lam(\boldsymbol{\rho}) = \boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho}$, the 0-1 integer constraint $\boldsymbol{\rho} \in \{ 0, 1\}^n$ can be equivalently written as the equality $\boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho} = {\bf 0}$. Even for this most simple quadratic nonlinear equality constraint, its penalty function
$W_\beta = \beta \| \boldsymbol{\rho} \circ \boldsymbol{\rho} - \boldsymbol{\rho} \|^2 $ is a nonconvex function! In order to solve this nonconvex optimization problem, the canonical duality theory has to be used, as discussed in Section 4. The idea for this penalty-duality method originally comes from Gao's PhD thesis \cite{gao-thesis}. By Theorem 1, the canonical dual variable $\mbox{\boldmath$\sigma$}$ is exactly the Lagrange multiplier to the canonical equality constraint $ \pmb{\varepsilon}= \Lam(\boldsymbol{\rho}) = \boldsymbol{\rho}\circ\boldsymbol{\rho} - \boldsymbol{\rho} = {\bf 0}$; the penalty parameter $\beta$ is therefore theoretically not necessary for the canonical duality approach. But, by this parameter, the canonical dual solution can be obtained analytically and uniquely. By Theorem 7 in \cite{gao-ruan-jogo10},
there exists a $ \beta_c > 0$ such that for any given $\beta \ge {\beta_c}$, this analytical solution solves the canonical dual problem $({\cal{P}}^d_u)$; therefore, the parameter $\beta$ is not arbitrary and
no iteration is needed for solving the $\beta$-perturbed canonical dual problem $({\cal{P}}^d_\beta)$.
The mathematical model for the SIMP is formulated as a box constrained minimization problem:
\begin{equation}
(P_{sp}): \;\;
\min \left \{ \frac{1}{2} {\bf u}^T {\bf K}(\boldsymbol{\rho}^p) {\bf u} \; | \;\; {\bf K}(\boldsymbol{\rho}^p) {\bf u} = {\bf f}, \;\; {\bf u} \in {\cal U}_a , \;\; \boldsymbol{\rho} \in {\cal Z}_b \right\} , \\
\end{equation}
where $p > 0$ is a given parameter,
and
\[
{\cal Z}_b = \{ \boldsymbol{\rho} \in {\bf R}^n | \;\; \boldsymbol{\rho}^T {\bf v} \le V_c, \;\;\boldsymbol{\rho} \in (0, 1]^n\}.
\]
By the fact that $\boldsymbol{\rho}^p = \boldsymbol{\rho} \;\; \forall p \in {\bf R}, \;\; \forall \boldsymbol{\rho} \in \{ 0, 1\}^n$, the problem $(P_{sp})$ is obtained from $(P_s)$ by artificially
replacing the integer constraint $\boldsymbol{\rho} \in \{0, 1\}^n$ in ${\cal Z}_a$ with the box constraint $\boldsymbol{\rho} \in (0,1]^n$.
Therefore, the SIMP is not a mathematically correct penalty method for solving the integer constrained problem $(P_s)$ and $p$ is not a correct penalty parameter.
By Remark \ref{remark1} we know that the alternative iteration can't be used for solving $(P_{sp})$ and the target function must be written in terms of $\boldsymbol{\rho}$ only,
i.e. $P_c(\boldsymbol{\rho}^p) = \frac{1}{2} {\bf f}^T [{\bf K}(\boldsymbol{\rho}^p) ]^{-1} {\bf f} $, which is not a coercive function and, for any given $p> 1$, its extrema are usually located on the boundary of ${\cal Z}_b$ (see \cite{gao-to18}).
Therefore, unless some artificial techniques are adopted, any mathematically correct approximation to $(P_{sp})$ can't produce reasonable solutions to either $(P_c)$ or $(P_s)$.
Indeed, in all examples presented above, the SIMP produces only gray-scale topologies, and from Fig. \ref{Compliance} we can see clearly that during the first 15 iterations the structures produced by SIMP are broken, which is both mathematically and physically unacceptable.
Also, the so-called magic number $p=3$ works only for certain homogeneous materials/structures. For general composite structures, the global minimizer of $P_c(\boldsymbol{\rho}^3) $ can't be integer-valued \cite{gao-to18}.
The optimization problem of BESO as formulated in
\cite{Huang-Xie} is posed in the form of minimization of mean
compliance, i.e. the problem $(P)$.
Since the alternative iteration is adopted by BESO, and by Remark \ref{remark1} this alternative iteration leads to an anti-Knapsack problem,
the BESO should theoretically produce only a trivial solution at each volume evolution.
However, instead of solving the anti-Knapsack problem (\ref{eq-anti}),
a comparison method is used to determine whether an element needs to be added to or removed from the structure, which is actually a direct method for solving
the knapsack problem $({\cal{P}}_u)$. This is the reason why the numerical results obtained by BESO are similar to those obtained by CPD.
But the direct method is not a polynomial-time algorithm. Due to the combinatorial complexity, this popular method is computationally expensive and can be used only for small-sized problems.
This is the very reason why the knapsack problem was considered NP-complete for all existing direct approaches.
\end{remark}
\begin{remark}[On Compliance, Objectivity, and Modeling in Engineering Optimization]\label{remark4}
By Wikipedia (see \url{https://en.wikipedia.org/wiki/Stiffness}), the concept of ``compliance" in mechanical science is defined as the inverse of stiffness, i.e. if the stiffness of an elastic bar
is k, then the compliance should be c = 1/k, which is also called the flexibility. In 3-D linear elasticity, the stiffness is the Hooke tensor ${\bf K}$, which is associated with the strain energy $W(\pmb{\varepsilon}) = \frac{1}{2} \pmb{\varepsilon} : {\bf K} : \pmb{\varepsilon}$; while the compliance is $ {\bf C} = {\bf K}^{-1}$, which is associated with the complementary energy $W^*(\mbox{\boldmath$\sigma$}) = \frac{1}{2} \mbox{\boldmath$\sigma$} : {\bf K}^{-1} : \mbox{\boldmath$\sigma$}$. All these are well-written in textbooks. However, in topology optimization literature, the linear function
$ F({\bf u}) = {\bf u}^T {\bf f}$ is called the compliance. Mathematically speaking, the inner product ${\bf u}^T {\bf f}$ is a scalar, while the compliance ${\bf C}$ is a matrix; physically, the scalar-valued function $F({\bf u})$ represents the external (or input) energy, while the compliance matrix ${\bf C}$
depends on the material of the structure, which is related to the internal energy $W^*(\mbox{\boldmath$\sigma$})$.
Therefore, they are two totally different concepts, and mixing up these terminologies could lead to serious
confusions in multidisciplinary research\footnote{Indeed, since the first author was told that the strain energy is also called the compliance in topology optimization and $(P_c)$ is a correct model for topology optimization, the general problem $({\cal{P}}_{bl})$ was originally formulated as a minimum total potential energy so that using ${\bf f} = {\bf K}(\boldsymbol{\rho}) \bar{\bf u} $, $\min \{ \Pi_h(\bar{\bf u}, \boldsymbol{\rho})|\;\boldsymbol{\rho} \in {\cal Z}_a\} =
\min\{ - \frac{1}{2} {\bf c}({\bf u}) \boldsymbol{\rho}^T | \;\;\boldsymbol{\rho} \in {\cal Z}_a \} $ is a knapsack problem \cite{gao-to17}.}
Also, the well-defined stiffness and compliance are mainly for linear elasticity. For nonlinear elasticity or plasticity,
the strain energy is nonlinear and the complementary energy can't be explicitly defined.
For nonconvex $W(\pmb{\varepsilon})$, the complementary energy is not unique.
In these cases, even if the stiffness can be defined by the Hessian matrix ${\bf K}(\pmb{\varepsilon}) = \nabla^2 W(\pmb{\varepsilon})$, the compliance ${\bf C}$ can't be well-defined since ${\bf K}(\pmb{\varepsilon}) $ could be singular even for the so-called G-quasiconvex materials \cite{gao-neohook}.
Objectivity is a central concept in our daily life, related to reality and truth. According to Wikipedia,
the objectivity in philosophy means the state or quality of being true even outside a subject's individual biases, interpretations, feelings, and imaginings\footnote{\url{https://en.wikipedia.org/wiki/Objectivity_(philosophy)}}.
In science, the objectivity is often attributed to the property of scientific measurement, as the accuracy of a measurement can be tested independent from the individual scientist who first reports it\footnote{ \url{https://en.wikipedia.org/wiki/Objectivity_(science)}}.
In continuum mechanics, it is well-known that a real-valued function $W(\pmb{\varepsilon})$ is said to be objective if and only if
$W(\pmb{\varepsilon}) = W({\bf R} \pmb{\varepsilon})$ for any given rotation tensor ${\bf R} \in$ {\em SO(3)}, i.e. $W(\pmb{\varepsilon})$ must be invariant under rigid rotations
(see \cite{ciarlet} and Chapter 6 of \cite{gao-book00}). The duality relation $\pmb{\varepsilon}^* = \nabla W(\pmb{\varepsilon})$ is called the constitutive law, which is independent of any particularly given problem.
Clearly, any linear function is not objective.
The objectivity lays a foundation for mathematical modeling.
In order to emphasize its importance, the objectivity is also called the principle of frame-indifference in continuum physics \cite{truesd}.
Unfortunately, this fundamentally important concept has been mistakenly used in the optimization literature for other functions,
such as the target, cost, energy, and utility functions, etc.\footnote{ \url{http://en.wikipedia.org/wiki/Mathematical_optimization}}
As a result, the general optimization problem
has been proposed as
\begin{equation}
\min f(x) , \;\; s.t. \; g(x) \le 0,
\end{equation}
and the arbitrarily given $f(x)$ is called the objective function\footnote{This terminology is used mainly in the English literature. The function $f(x)$ is correctly called
the target function in the Chinese and Japanese literature.}, which is even allowed to be a linear function.
Clearly, this general problem is artificial. Without detailed information on the functions $f(x)$ and $g(x)$,
it is impossible to have a powerful theory and method for solving this artificially given problem.
It turns out that many nonconvex/nonsmooth optimization problems are considered to be NP-hard.
In linguistics, a grammatically correct sentence should be composed of at least three components: subject, object, and a predicate.
Based on this rule and the canonical duality principle \cite{gao-book00},
a unified mathematical problem for multi-scale complex systems was proposed by Gao in \cite{gao-aip16}:
\begin{equation}
({\cal{P}}_g): \;\; \min \{ \Pi({\bf u}) = W({\bf D} {\bf u}) - F({\bf u}) | \;\; {\bf u} \in {\cal U}_c \},
\end{equation}
where $W(\pmb{\varepsilon}): {\cal E}_a \rightarrow {\bf R}$ is an objective function such that the internal duality relation $\pmb{\varepsilon}^* = \nabla W(\pmb{\varepsilon})$ is governed by the
constitutive law, its domain ${\cal E}_a$ contains only physical constraints (such as the incompressibility and plastic yield conditions \cite{gao-cs88}), which depends on mathematical modeling;
$F({\bf u}): {\cal U}_a \rightarrow {\bf R}$ is a subjective function such that the external duality relation
${\bf u}^* = \nabla F({\bf u}) = {\bf f} $ is a given input (or source),
its domain ${\cal U}_a$ contains only geometrical constraints (such as boundary and initial conditions), which depends on each given problem;
${\bf D} :{\cal U}_a \rightarrow {\cal E}_a$ is a linear operator which links the two spaces ${\cal U}_a$ and ${\cal E}_a$ with different physical scales;
the feasible space is defined by ${\cal U}_c = \{ {\bf u} \in {\cal U}_a | \;\; {\bf D} {\bf u} \in {\cal E}_a \}$.
The predicate in $({\cal{P}}_g)$ is the operator ``$-$'' and the difference $\Pi({\bf u})$ is called the target function in general problems.
The object and subject are in balance only at the optimal states.
The unified form $({\cal{P}}_g)$ covers general constrained nonconvex/nonsmooth/discrete variational and optimization problems in multi-scale complex systems \cite{g-l-r-17, gao-yu}.
Since the input ${\bf f}$ does not depend on the output ${\bf u}$, the subjective function $F({\bf u})$ must be linear.
Dually, the objective function $W(\pmb{\varepsilon})$ must be nonlinear such that there exists an objective measure ${\mbox{\boldmath$\xi$}} = \Lam({\bf u})$ and a convex function $\Psi({\mbox{\boldmath$\xi$}})$, the
canonical transformation $W({\bf D} {\bf u}) = \Psi(\Lam({\bf u}))$ holds for most real-world systems.
This is the reason why the canonical duality theory was naturally developed and can be used to solve general challenging problems in multidisciplinary fields.
However, since the objectivity has been misused in the optimization community, this theory was mistakenly challenged
by M.D. Voisei and C. Z\u{a}linescu
(cf. \cite{g-l-r-17}). By oppositely choosing linear functions for $W(\pmb{\varepsilon})$ and nonlinear functions for $F({\bf u})$,
they produced a list of ``counter-examples'' and concluded: ``a correction of this theory is impossible without falling into trivial''.
The conceptual mistakes in their challenges
revealed at least two important truths: 1) there exists a huge gap between optimization and mechanics;
2) incorrectly using the well-defined concepts can lead to ridiculous arguments.
Interested readers are
recommended to read the recent papers \cite{gao-ol16}
for further discussion.
For continuous systems, the necessary optimality condition for the general problem $({\cal{P}}_g)$ leads to an abstract equilibrium equation
\begin{equation}
{\bf D}^* \partial_{\pmb{\varepsilon}} W({\bf D} {\bf u}) = {\bf f}.
\end{equation}
It is linear if the objective function $W(\pmb{\varepsilon})$ is quadratic. This abstract equation includes almost all well-known equilibrium problems in textbooks
from partial differential equations in mathematical physics to
algebraic systems in numerical analysis and optimization \cite{strang}\footnote{The celebrated textbook {\em Introduction to Applied Mathematics} by Gil Strang is a required course for all engineering graduate students
at MIT. Also, the well-known MIT online teaching program was started from this course.}
In mathematical economics, if the output ${\bf u} \in {\cal U}_a \subset {\bf R}^n$ represents the products of a manufacturing company and the input
${\bf f}$ can be considered as the market price of ${\bf u}$, then the subjective function
$F({\bf u}) ={\bf u}^T {\bf f} $ in this example is the total income of the company.
The products are produced by workers $\pmb{\varepsilon} = {\bf D} {\bf u}$ and ${\bf D} \in {\bf R}^{m\times n} $ is a cooperation matrix.
The workers are paid by salary $\pmb{\varepsilon}^* = \nabla W(\pmb{\varepsilon})$ and the objective function
$W(\pmb{\varepsilon})$ is the total cost. Thus, the optimization problem $({\cal{P}}_g)$ is to minimize the total loss $\Pi({\bf u})$ under certain given constraints in ${\cal U}_c$.
A comprehensive review on modeling, problems and NP-hardness in multi-scale optimization is given in \cite{gao-to181}.
\end{remark}
In summary, the theoretical results presented in this paper show that the canonical duality theory is indeed an important methodological theory
not only for solving the most challenging topology optimization problems,
but also for correctly understanding and modeling multi-scale problems in complex systems.
The numerical results verified that
the CPD method can produce mechanically sound optimal topologies and that it is much more powerful than the popular SIMP and BESO methods.
Specific conclusions are given below.
\begin{enumerate}
\item The mathematical model for general topology optimization should be formulated as a bi-level
mixed integer nonlinear programming problem $({\cal{P}}_{bl})$. This model works for both linearly and nonlinearly deformed elasto-plastic structures.
\item The alternative iteration is allowed for solving $({\cal{P}}_{bl})$, which leads to a knapsack problem for linear elastic structures. The CPD is a polynomial-time algorithm, which can solve $({\cal{P}}_{bl})$ to obtain global optimal solution at each volume iteration.
\item The pure complementary energy principle is a special application of the canonical duality theory in nonlinear elasticity. This principle plays an important role not only in nonconvex analysis and computational mechanics, but also in topology optimization, especially for large deformed structures.
\item Unless a magic method is proposed, the volume evolution is necessary for solving $({\cal{P}}_{bl})$ if $ \mu_c = V_c/V_0 \ll 1$. But the global optimal solution depends sensitively on the evolutionary rate $\mu \in [\mu_c, 1)$.
\item The compliance minimization problem $(P)$ should be written in the form of $(P_c)$ instead of the minimum strain energy form $(P_s)$.
The problem $(P_c)$ is actually a single-level reduction of $({\cal{P}}_{bl})$ for linear elasticity.
Alternative iteration for solving $(P_s)$ leads to an anti-knapsack problem.
\item The SIMP is not a mathematically correct penalty method for solving either $(P)$ or
$(P_c)$. Even if the magic number $p=3$ works for certain material/structures, this method can't produce correct integer solutions.
\item Although the BESO is posed in the form of minimization of mean
compliance, it is actually a direct method for solving a knapsack problem at each volume reduction.
For small-scale problems, BESO can produce reasonable results much better than by SIMP. But it is time consuming for
large-scale topology optimization problems since the direct method is not a polynomial-time algorithm.
\end{enumerate}
Since the canonical duality is a basic principle in mathematics and the natural sciences, the canonical duality theory plays a versatile role
in multidisciplinary research. As indicated in the monograph \cite{gao-book00} (page 399), applications of this methodological theory
fall into three aspects:
\begin{verse}
(1) to check the validity and completeness of the existence theorems;\\
(2) to develop new (dual) theories and methods based upon the known
ones;\\
(3) to predict the new systems and possible theories by the triality
principles and its sequential extensions.
\end{verse}
This paper is just a simple application of the canonical duality theory for linear elastic topology optimization.
The canonical penalty-duality method for solving general nonlinear constrained problems and a 66-line Matlab code for topology optimization are given in the forthcoming paper \cite{gao-ruan-66}.
The canonical duality theory is particularly useful for studying nonconvex, nonsmooth, nonconservative large deformed dynamical systems \cite{gao-royal}. Therefore, the future works include
the CPD method for solving general topology optimization problems of large deformed elasto-plastic structures subjected to dynamical loads.
The main open problems include the choice of the optimal parameter
$\mu$ that ensures a fast convergence rate together with optimal results, and the existence and uniqueness of the global optimal solution for a given target volume $V_c$.
\section*{Acknowledgement}
This research is supported by the US Air Force Office of Scientific Research (AFOSR) under the grants FA2386-16-1-4082 and FA9550-17-1-0151.
The authors would like to express their sincere gratitude to Professor Y.M. Xie at RMIT for providing
his BESO3D code in Python and for his important comments and suggestions.
|
{
"timestamp": "2018-06-22T02:07:33",
"yymm": "1803",
"arxiv_id": "1803.02615",
"language": "en",
"url": "https://arxiv.org/abs/1803.02615"
}
|
\section{Introduction}
\label{sec:intro}
In this paper we consider the solution of linear systems
\begin{equation}
Ax=b \label{eq:OCS}
\end{equation}
with row-action methods, i.e., methods that use only single rows of the system in each step.
This is beneficial, for example, in situations where the full system is too large to store or keep in memory.
Probably the first method of this type is the Kaczmarz method, where each step consists of a projection onto a hyperplane given by the solution space of a single row.
If $a^{T}$ is a row vector of the system and the corresponding entry on the right hand side is (with slight abuse of notation) $b$, then the orthogonal projection of a given vector $x$ onto the solution space of $\scp{a}{x} = b$ is
\[
x - \frac{\scp{a}{x} - b}{\norm{a}^{2}}\cdot a.
\]
Thus, one updates the current vector $x$ in the direction of $a$ which is the corresponding column of $A^{T}$.
A question that has been motivated by the use of the Kaczmarz method in tomographic reconstruction (where it is known under the name \emph{algebraic reconstruction technique} (ART),~\cite{gordon1970algebraic}, see also~\cite{kak2002principles}) is:
\emph{Will the method still converge, if we do not use $A^{T}$ as the adjoint but a different matrix $V^{T}$?}
In tomographic reconstruction, the linear operator $A$ models the ``forward projection'' operation, which maps an object's density to a set of measured line integrals.
The adjoint map $A^{T}$, however, also has a physical interpretation: This map is called ``backprojection'' and, roughly speaking, ``distributes the values along lines through the measurement volume''.
Since both $A$ and $A^{T}$ have their own physical significance, their corresponding maps are often implemented by different means.
For example,~\cite{de2004distance} proposes and discusses several methods for the implementation of the backprojection and shows that special methods compare favorably with respect to reconstruction quality.
In~\cite{zeng2000unmatched}, the authors discuss the use of mismatched projection pairs, for the purposes of improved computational efficiency when using the Landweber algorithm for reconstruction.
Hence, one does not always use the actual adjoint, but a different map and we refer to this situation as using a ``mismatched adjoint''.
The goal of this paper is to analyze the convergence behavior of the randomized Kaczmarz method with mismatched adjoint.
\section{The overdetermined consistent case}
The Kaczmarz method is known to converge for any consistent linear system, but the speed of convergence is hard to quantify since it depends on the ordering of the rows.
This is notably different for the \emph{randomized Kaczmarz method} as shown in~\cite{strohmer2009randomized}: If the rows are chosen independently at random the method converges linearly.
To fix notation, let $A=(a_i^T)_{i=1,\ldots,m} \in \RR^{m \times n}$ with $m \ge n$ and row vectors
$a_i \in \RR^n$ and $V=(v_i^T)_{i=1,\ldots,m}$, with row vectors
$v_i \in \RR^n$. Moreover let $p_i>0$,
$i\in \{1,\ldots,m\}$ denote a probability distribution on the set of indices of the rows, i.e., $p_{i}$ is the probability to choose the $i$-th row for the next step.
\begin{algorithm}[h]
\caption{Randomized Kaczmarz with Mismatched Adjoint}
\label{alg:RKMA}
\begin{algorithmic}[1]
\REQUIRE{starting point $x_0\in\RR^n$ and probabilities $p_i>0$, $i \in \{1,\ldots,m\}$}
\ENSURE{solution of~\eqref{eq:OCS}}
\STATE initialize $k = 0$
\REPEAT
\STATE choose an index $i_k=i \in \{1,\ldots,m\}$ at random with probability $p_i$
\STATE update $x_{k+1}=x_k-\tfrac{\scp{a_{i_k}}{x_k}-b_{i_k}}{\scp{a_{i_k}}{v_{i_k}}} \cdot v_{i_k}$
\STATE increment $k = k+1$
\UNTIL{a stopping criterion is satisfied}
\end{algorithmic}
\end{algorithm}
The algorithm we consider in this work is the randomized Kaczmarz method with mismatched adjoint, abbreviated RKMA, and is given in Algorithm~\ref{alg:RKMA}.
The difference to the standard randomized Kaczmarz method is that the usual projection step
$x_{k+1} =
x_{k}-\tfrac{\scp{a_{i_k}}{x_k}-b_{i_k}}{\norm{a_{i_{k}}}^{2}} \cdot
a_{i_k}$
is replaced by
$x_{k+1} =
x_{k}-\tfrac{\scp{a_{i_k}}{x_k}-b_{i_k}}{\scp{a_{i_k}}{v_{i_k}}} \cdot
v_{i_k}$.
This results in $\scp{x_{k+1}}{a_{i_{k}}} = b_{i_{k}}$, i.e., the next iterate
$x_{k+1}$ is on the hyperplane defined by the $i_{k}$-th equation of
the system, but since $v_{i_{k}}$ is not orthogonal to this
hyperplane, this is an \emph{oblique} projection, instead of an
orthogonal projection as it would be in the original Kaczmarz method (see Figure~\ref{fig:oblique-projection}).
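The following Python/NumPy sketch implements Algorithm~\ref{alg:RKMA} under the stated assumption $\scp{a_{i}}{v_{i}}\neq 0$; the fixed iteration count and all names are illustrative choices and not part of the algorithm itself.
\begin{verbatim}
import numpy as np

def rkma(A, V, b, p, x0, num_iters=10000, rng=None):
    """Randomized Kaczmarz with mismatched adjoint (oblique projections).

    A, V : (m, n) arrays with <a_i, v_i> != 0 for all rows i
    b    : (m,) right-hand side,  p : (m,) sampling probabilities
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    for _ in range(num_iters):
        i = rng.choice(len(b), p=p)
        a_i, v_i = A[i], V[i]
        # oblique projection onto {z : <a_i, z> = b_i} along direction v_i
        x -= (a_i @ x - b[i]) / (a_i @ v_i) * v_i
    return x
\end{verbatim}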
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\fill (0,0) circle (2pt) node[right]{$x_{k}$};
\draw[thick] (-1,2)++(-2,-1) -- (-1,2)--++(2,1);
\fill (-1,2) circle (2pt) node[right=6pt]{$\tilde x_{k+1}$};
\draw[dashed] (0,0) -- (-1,2);
\draw[->] (-1,2)--+(-0.5,1)node[right]{$a_{i}$};
\fill (-2,1.5) circle(2pt) node[right=4pt]{$x_{k+1}$};
\draw[dashed] (0,0) -- (-2,1.5);
\draw[->] (-2,1.5) --++ (-1,0.75) node[left]{$v_{i}$};
\end{tikzpicture}
\caption{Oblique projection $x_{k+1}$ of $x_{k}$ onto the hyperplane $\{x\mid \scp{a_{i}}{x}=b_{i}\}$. The orthogonal projection is $\tilde x_{k+1}$.}
\label{fig:oblique-projection}
\end{figure}
To formulate the convergence theorem for RKMA we denote by $\lambda_{\min}(M)$ the smallest
eigenvalue of a symmetric real matrix $M$.
For general (non-symmetric) real square matrices $M$ we denote by $\rho(M)$ its \emph{spectral radius}, i.e. the largest absolute value of its eigenvalues.
First we state a result on the expected outcome of one step of RKMA.
A similar result has been observed earlier in the case with
no mismatch, see e.g.~\cite{strohmer2009randomized}, \cite[Lemma 2.2]{needell2010noisy}) or~\cite[Lemma 3.6]{zouzias2013randomized}).
In the following, we generally assume that the rows of $A$ and $V$ fulfill $\scp{a_{i}}{v_{i}}\neq 0$ and, without loss of generality, that $\scp{a_{i}}{v_{i}}>0$.
\begin{lem}
\label{lem:est-one-step}
Let $\hat x$ fulfill $A\hat x=b$, $x$ be arbitrary and
$x^{+} = x - \tfrac{\scp{a_{i}}{x}-b_{i}}{\scp{a_{i}}{v_{i}}} \cdot v_{i}$
be the oblique projection onto the hyperplane
$\{x\mid \scp{a_{i}}{x}=b_{i}\}$. Further we let $p_{i}>0$, $i=1,\dots,m$, be probabilities and denote
$D:=\diag\big(\tfrac{p_i}{\scp{a_i}{v_i}}\big)$ and
$S:=\diag\big(\tfrac{\norm{v_i}^2}{\scp{a_i}{v_i}}\big)$.
If $i$ is randomly chosen with
probability $p_{i}$ (i.e. $x^{+}$ is a random variable) then it
holds that
\begin{equation}\label{eq:estimated-step}
\EE(x^{+}-\hat x) = (I - V^{T}DA)(x-\hat x)
\end{equation}
and if
\begin{equation}
\lambda := \lambda_{\min}\big(V^{T}DA + A^{T}DV - A^{T}SDA\big)>0 \label{eq:CC}
\end{equation}
is fulfilled, it holds that
\[
\EE(\norm{x^{+}-\hat x}^{2})\leq (1-\lambda)\cdot \norm{x-\hat x}^{2}
\]
(where both expectations are with respect to the probabilities $p_{i}$).
\end{lem}
\begin{proof}
Since $b_{i} = \scp{a_{i}}{\hat x}$, the expectation $\EE(x^{+}-\hat x)$ is
\begin{align*}
\EE(x^{+}-\hat x) & = \sum_{i=1}^{m}p_{i}\cdot (x - \frac{\scp{a_{i}}{x}-b_{i}}{\scp{a_{i}}{v_{i}}}\cdot v_{i}) - \hat x\\
& = x - \sum_{i=1}^{m}p_{i}\cdot \frac{\scp{a_{i}}{x-\hat x}}{\scp{a_{i}}{v_{i}}}\cdot v_{i} - \hat x\\
& = x - \hat x - \sum_{i=1}^{m}\tfrac{p_{i}}{\scp{a_{i}}{v_{i}}}\cdot v_{i}a_{i}^{T}(x-\hat x),
\end{align*}
from which~\eqref{eq:estimated-step} follows.
To calculate the expectation of the squared norm we calculate
\begin{align}
\norm{x^{+}-\hat{x}}^2 = &\norm{x-\hat{x}}^2 - 2 \cdot \frac{\scp{a_{i}}{x-\hat{x}} \cdot \scp{v_{i}}{x-\hat{x}}}{\scp{a_{i}}{v_{i}}} \nonumber \\
& + \frac{\big(\scp{a_{i}}{x-\hat x}\big)^2}{\big(\scp{a_{i}}{v_{i}}\big)^2} \cdot \norm{v_{i}}^2 \,. \label{eq:error_exact}
\end{align}
Taking the expectation gives
\begin{align*}
\EE(\norm{x^{+}-\hat{x}}^2) = & \norm{x-\hat{x}}^2 \\
& - \sum_{i=1}^{m} p_i \cdot 2 \cdot \frac{\scp{a_i}{x-\hat{x}} \cdot \scp{v_i}{x-\hat{x}}}{\scp{a_i}{v_i}} \\
& + \sum_{i=1}^{m} p_i \cdot\frac{\big(\scp{a_i}{x-\hat x}\big)^2}{\big(\scp{a_i}{v_i}\big)^2} \cdot \norm{v_i}^2 \,.
\end{align*}
By the definition of $D$ and $S$ the right hand side can be written as
\begin{align}
\norm{x-\hat x}^{2} - \scp{x-\hat x}{(2V^{T}DA -
A^{T}SDA)(x-\hat{x})}\nonumber\\
= \norm{x-\hat x}^{2} - \scp{x-\hat x}{(2V^{T} -
A^{T}S)DA(x-\hat{x})}\label{eq:est_rkma-aux}
\end{align}
and hence, we aim to bound $\scp{x-\hat x}{(2V^{T} -
A^{T}S)DA(x-\hat{x})}$ from below. More precisely, we want
\[
\scp{x-\hat x}{(2V^{T} - A^{T}S)DA(x-\hat{x})}\geq \lambda\cdot \norm{x-\hat x}^{2}
\]
and this is the case if and only if
\[
\scp{x-\hat x}{((2V^{T} - A^{T}S)DA - \lambda I)(x-\hat{x})}\geq 0.
\]
Since we have $2\scp{z}{V^TDA z} = \scp{z}{(V^TDA+A^TDV) z}$ for all $z$, this is equivalent to
\[
\scp{x-\hat x}{(V^{T}DA + A^{T}DV - A^{T}SDA - \lambda I)(x-\hat{x})}\geq 0
\]
and this is ensured if
\[
\lambda_{\min}(V^{T}DA + A^{T}DV -A^{T}SDA)\geq \lambda.
\]
Hence, if~\eqref{eq:CC} is fulfilled, we obtain the estimate
\[
\EE(\norm{x_{k+1}-\hat{x}}^2) \le (1- \lambda) \cdot \norm{x-\hat{x}}^2 \,.\qedhere
\]
\end{proof}
Equation~\eqref{eq:estimated-step} shows that $\norm{\EE(x^{+}-\hat x)}^{2}\leq \norm{I-V^{T}DA}^{2}\norm{x-\hat x}^{2}$. Recall that $\rho(M)\leq \norm{M}$ holds for every square matrix $M$ (with equality for symmetric $M$), and note that the above inequality need not hold if the norm is replaced by the spectral radius.
Due to Jensen's inequality we generally have $\norm{\EE(x^{+}-\hat x)}^{2}\leq \EE(\norm{x^{+}-\hat x}^{2})$ and Lemma~\ref{lem:est-one-step} provides different estimates for both quantities.
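Both quantities appearing in these estimates can be computed numerically for given $A$, $V$ and $p$; the following NumPy sketch (with illustrative names) evaluates $\lambda$ from~\eqref{eq:CC} and the spectral radius $\rho(I-V^{T}DA)$.
\begin{verbatim}
import numpy as np

def convergence_quantities(A, V, p):
    """Return lambda from the convergence condition and rho(I - V^T D A)."""
    s = np.einsum('ij,ij->i', A, V)            # <a_i, v_i>
    D = np.diag(p / s)
    S = np.diag(np.sum(V**2, axis=1) / s)      # ||v_i||^2 / <a_i, v_i>
    M = V.T @ D @ A
    lam = np.linalg.eigvalsh(M + M.T - A.T @ S @ D @ A).min()
    rho = np.abs(np.linalg.eigvals(np.eye(A.shape[1]) - M)).max()
    return lam, rho
\end{verbatim}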
Iterating the previous lemma, we obtain the convergence result:
\begin{thm}
\label{thm:RKMA}
Assume that the assumptions of Lemma~\ref{lem:est-one-step} are
fulfilled and denote by $x_k$ the iterates of
Algorithm~\ref{alg:RKMA}.
If $\rho(I-V^{T}DA)<1$ then $x_k$ converges in expectation to $\hat x$,
\[
\EE(x_{k}-\hat x) \to 0 \quad \mbox{for} \quad k \to \infty \,,
\]
moreover, it holds that
\[
\norm{\EE(x_{k}-\hat x)}\leq \norm{I-V^{T}DA}^{k}\norm{x_{0}-\hat x}.
\]
If condition \eqref{eq:CC}
is fulfilled then it holds that
\begin{equation*}
\EE \left[\norm{x_{k+1}-\hat{x}}^2\right] \le (1- \lambda) \cdot \EE
\left[\norm{x_k-\hat{x}}^2 \right] \,.
\end{equation*}
\end{thm}
\begin{proof}
The first claim follows from Lemma~\ref{lem:est-one-step} and the well known fact that $(I-V^{T}DA)^{k}\to 0$ if the spectral radius of $I-V^{T}DA$ is smaller than one (see, e.g.,~\cite[Theorem 11.2.1]{golub2013matrix}). The second claim is also immediate from the previous lemma.
Finally, we get for expectation with respect to $i_{k}$ (conditional on $i_{0},\dots, i_{k-1}$)
\[
\EE \left[\norm{x_{k+1}-\hat{x}}^2\,\middle|\, i_0,\ldots,i_{k-1} \right] \le (1- \lambda) \norm{x_k-\hat{x}}^2
\]
Now we consider all indices $i_0,\ldots,i_k$ as random variables
with values in $\{1,\ldots,m\}$, and take the full expectation on
both sides to get the assertion.
\end{proof}
Here are some remarks on the result:
\begin{rem}
Since eigenvalues depend continuously on perturbations, both
condition~\eqref{eq:CC} and $\rho(I-V^{T}DA)<1$ are fulfilled for
$V \approx A$. Note that $\norm{I-V^{T}DA} = \rho(I-V^{T}DA)$ does
hold for $V = A$ and is generally not true otherwise. It may even be the case
that $\norm{I-V^{T}DA}>1$ while $\rho(I-V^{T}DA)<1$.
\end{rem}
\begin{rem}[Relation to the result of Strohmer and Vershynin]\label{rem:strohmer-vershynin}
Note that Theorem~\ref{thm:RKMA} contains the result of Strohmer and
Vershynin~\cite{strohmer2009randomized} as a special case: Take
$V=A$ and the probabilities $p_{i}$ proportional to the squared
row-norms, i.e. $p_{i} =
\frac{\norm{a_{i}}^{2}}{\norm[F]{A}^{2}}$. Then we have
\[
D=\diag\big(\tfrac{p_i}{\scp{a_i}{v_i}}\big) =
\tfrac{1}{\norm[F]{A}^{2}}\cdot I\quad \text{and}\quad
S=\diag\big(\tfrac{\norm{v_i}^2}{\scp{a_i}{v_i}}\big) = I
\]
and hence we get
\[
\lambda = \frac{\lambda_{\min}(A^{T}A)}{\norm[F]{A}^{2}} =
\frac{\sigma_{\min}(A)^{2}}{\norm[F]{A}^{2}}
\]
(where $\sigma_{\min}(A)$ denotes the smallest singular value of $A$) as in~\cite{strohmer2009randomized}.
To get a similarly simple expression for the convergence of the method with mismatch we set
\[
p_{i} =
\frac{\scp{a_{i}}{v_{i}}}{\norm[V]{A}^{2}},\quad\text{with}\quad\norm[V]{A}^{2}
= \sum_{i}\scp{a_{i}}{v_{i}}.
\]
This leads to
\[
D = \tfrac{1}{\norm[V]{A}^{2}}\cdot I
\]
and thus, from~\eqref{eq:estimated-step},
\[
\norm{\EE(x_{k+1}-\hat x)}\leq \norm{I -
\tfrac{V^{T}A}{\norm[V]{A}^{2}}} \norm{x_{k}-\hat x} =
\sigma_{\max}(I -
\tfrac{V^{T}A}{\norm[V]{A}^{2}})\cdot\norm{x_{k}-\hat x}.
\]
However, in general the contraction factor does not simplify to
$1 - \tfrac{\sigma_{\min}(V^{T}A)}{\norm[V]{A}^{2}}$ as it would in
the case with no mismatch.
We also get
\[
\EE(\norm{x_{k+1}-\hat x}) \leq \left(1 - \tfrac{\lambda_{\min}(V^{T}A + A^{T}V - A^{T}SA)}{\norm[V]{A}^{2}}\right)^{1/2}\norm{x_{k}-\hat x}
\]
for the expectation of the error.
\end{rem}
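As a quick sanity check of this special case, the following NumPy snippet verifies numerically that $V=A$ together with $p_{i}=\norm{a_{i}}^{2}/\norm[F]{A}^{2}$ gives $\lambda=\sigma_{\min}(A)^{2}/\norm[F]{A}^{2}$; the random test matrix is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
normF2 = np.linalg.norm(A, 'fro')**2
# V = A and p_i = ||a_i||^2 / ||A||_F^2 give D = I / ||A||_F^2 and S = I,
# so the matrix in the convergence condition collapses to A^T A / ||A||_F^2
lam = np.linalg.eigvalsh(A.T @ A / normF2).min()
sigma_min = np.linalg.svd(A, compute_uv=False).min()
print(np.isclose(lam, sigma_min**2 / normF2))   # expected output: True
\end{verbatim}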
\begin{rem}[Asymptotic convergence rate and expected improvement in norm]
The above theorem states that the RKMA method has the asymptotic
convergence rate of
\begin{equation}\label{eq:asymp-rate-RKMA}
\rho(I-V^{T}DA)
\end{equation}
(in expectation), however, the
expected improvement of the squared error, i.e. $\EE(\norm{x_{k}-\hat x}^{2})$ in every iteration is
\begin{equation}\label{eq:expected-improvement-RKMA}
\begin{split}
(1-\lambda_{\min}(V^{T}DA+A^{T}DV - A^{T}SDA)) & = \rho(I -
V^{T}DA - A^{T}DV + A^{T}SDA)\\
& = \norm{I -
V^{T}DA - A^{T}DV + A^{T}SDA}.
\end{split}
\end{equation}
Using the spectral norm we can also estimate
\begin{equation*}
\norm{\EE(x_{k+1}-\hat x)} = \norm{(I-V^{T}DA)(x_{k}-\hat x)} \leq \norm{I - V^{T}DA}\cdot \norm{x_{k}-\hat x}.
\end{equation*}
We can express this norm by the spectral radius as
\begin{equation}\label{eq:est-norm-of-expectation}
\norm{I-V^{T}DA} = \rho\big(I - V^{T}DA - A^{T}DV + A^{T}DVV^{T}DA\big)^{1/2}.
\end{equation}
Note that all three expressions in~\eqref{eq:asymp-rate-RKMA},~\eqref{eq:expected-improvement-RKMA} and~\eqref{eq:est-norm-of-expectation} are equal in the case of $V=A$, but in the mismatched case they are in general different.
Numerically it seems that $\eqref{eq:asymp-rate-RKMA}\leq\eqref{eq:est-norm-of-expectation}\leq\eqref{eq:expected-improvement-RKMA}$, but we do not have a proof for this.
\end{rem}
\begin{rem}[Different possibilities for stepsizes]
We could consider the slightly more general iteration
\[
x_{k+1} = x_{k} -
\omega_{i_{k}}\cdot (\scp{a_{i_{k}}}{x_{k}}-b_{i_{k}})\cdot v_{i_{k}}
\]
with a steplength $\omega_{i_{k}}$. The iteration in
Algorithm~\ref{alg:RKMA} uses $\omega_{i} = \scp{a_{i}}{v_{i}}^{-1}$, but
there are other meaningful choices:
\begin{itemize}
\item As for the case with no mismatch, one could take
$\omega_{i_{k}} = \norm{a_{i_{k}}}^{-2}$, but this would not imply
$\scp{x_{k+1}}{a_{i_{k}}} = b_{i_{k}}$. Similarly,
$\omega_{i} = \norm{v_{i_{k}}}^{-2}$ does not imply $\scp{x_{k+1}}{v_{i_{k}}} = b_{i_{k}}$.
\item The choice $\omega_{i_{k}} = \frac{\scp{x_{k}}{v_{i_{k}}}-b_{i_{k}}}{(\scp{x_{k}}{a_{i_{k}}}-b_{i_{k}})\norm{v_{i}}^{2}}$ implies that $\scp{x_{k+1}}{v_{i_{k}}} = b_{i_{k}}$.
\end{itemize}
Although none of these cases guarantees that the iterates solve one
of the equations of the linear system $Ax=b$, one can still deduce
that iterates converge to the solution of this system of equalities.
The result of Theorem~\ref{thm:RKMA} can also be derived for this
slightly more general iteration and the respective condition for
linear convergence with contraction factor $(1-\lambda)$ is that
\[
\lambda := \lambda_{\min}\big(V^{T}DA + A^{T}DV - A^{T}SDA\big) > 0
\]
with
\[
D = \diag(p_{i}\omega_{i}),\qquad S = \diag(\omega_{i}\norm{v_{i}}^{2}).
\]
\end{rem}
Experiments show that probabilities other than
$p_{i} = \norm{a_{i}}^{2}/\norm[F]{A}^{2}$ in the case $V=A$ or $p_{i} = \scp{a_{i}}{v_{i}}/\norm[V]{A}^{2}$ in the mismatched case frequently lead to faster
convergence. This should not be surprising as one could scale the rows
of system $Ax=b$ arbitrarily by multiplying with a diagonal matrix
which leaves the solution unchanged, but leads to arbitrary row-norms
of the scaled system. In this sense, the row-norms do not reflect the geometry of the arrangements of hyperplanes. We will come back to the problem of selecting probabilities
in Section~\ref{sec:optim-probs}.
\section{Inconsistent overdetermined systems}
\label{sec:inconsistent}
Now we consider the inconsistent case, i.e. we do not assume that the
overdetermined system has a solution. This case has been treated
in~\cite{needell2010noisy} for the case $V=A$. We model an additive
error and assume that the right hand side is $b+r$ with $b\in\rg A$.
\begin{thm}
\label{thm:inconsitent}
Denote by $\hat x$ the unique solution of $Ax=b$ and let $x_{k}$
denote the iterates of Algorithm~\ref{alg:RKMA} where the right hand
side is $b+r$.
With $M = (I-V^{T}DA)$ it holds that
\[
\EE(x_{k}-\hat x) = M^{k}(x_{0}-\hat x) + \sum_{l=0}^{k-1}M^{l}V^{T}Dr.
\]
Moreover, with $\lambda$ defined in~\eqref{eq:CC}, we have
\[
\EE(\norm{x_{k}-\hat x}^{2})\leq (1-\tfrac{\lambda}{2})^{k}\cdot \norm{x_{0}-\hat x}^{2} + \tfrac{2(2-\lambda)}{\lambda^{2}} \cdot \gamma^{2}
\]
with $\gamma := \max_{i}\tfrac{|r_{i}|\cdot \norm{v_{i}}}{|\scp{a_{i}}{v_{i}}|}$.
\end{thm}
\begin{proof}
For the iterate $x_{k}$ we denote by $\tilde x_{k+1}$ the oblique
projection onto the ``true hyperplane''
$H = \{x\mid \scp{a_{i}}{x}=b_{i}\}$, i.e.
$\tilde x_{k+1} = x_{k} - \frac{\scp{a_{i}}{x_{k}} -
b_{i}}{\scp{v_{i}}{a_{i}}}\cdot v_{i}$. Then it holds that
\[
x_{k+1}-\hat x = \tilde x_{k+1}-\hat x + \tfrac{r_{i}}{\scp{a_{i}}{v_{i}}}\cdot v_{i}.
\]
For one step of the method we get (taking the expectation with respect to the random variable $i_{k+1}$)
\[
\EE(x_{k+1}-\hat x) = \EE(\tilde x_{k+1}-\hat x) + \EE(\tfrac{r_{i_{k}}}{\scp{a_{i_{k}}}{v_{i_{k}}}}v_{i_{k}}) = (I-V^{T}DA)(x_{k}-\hat x) + V^{T}Dr.
\]
The formula for $\EE(x_{k}-\hat x)$ (with the expectation with
respect to all indices $i_{0},\dots,i_{k}$) follows by induction.
Moreover, we get
\begin{align*}
\norm{x_{k+1}-\hat x}^{2} & = \norm{\tilde x_{k+1}-\hat x}^{2} + 2\tfrac{r_{i}}{\scp{a_{i}}{v_{i}}}\cdot \scp{\tilde x_{k+1}-\hat x}{v_{i}} + \tfrac{r_{i}^{2}}{\scp{a_{i}}{v_{i}}^{2}}\cdot \norm{v_{i}}^{2}\\
& \leq \norm{\tilde x_{k+1}-\hat x}^{2} + 2\tfrac{r_{i}}{\scp{a_{i}}{v_{i}}}\cdot \scp{\tilde x_{k+1}-\hat x}{v_{i}} + \gamma^{2}.
\end{align*}
Now we use Cauchy-Schwarz and Young with $\epsilon>0$ (i.e. $2ab\leq \epsilon a^{2} + b^{2}/\epsilon$) to get
\begin{align*}
\norm{x_{k+1}-\hat x}^{2} &\leq \norm{\tilde x_{k+1}-\hat x}^{2} + 2\norm{\tilde x_{k+1}-\hat x}\cdot \tfrac{r_{i}}{\scp{a_{i}}{v_{i}}}\cdot \norm{v_{i}} + \gamma^{2}\\
& \leq (1+\epsilon)\cdot \norm{\tilde x_{k+1}-\hat x}^{2} + (1+\tfrac1\epsilon)\cdot \gamma^{2}.
\end{align*}
Applying Lemma~\ref{lem:est-one-step} we get
\[
\EE(\norm{x_{k+1}-\hat x}^{2}) \leq (1+\epsilon)\cdot (1-\lambda)\cdot \norm{x_{k}-\hat x}^{2} + (1+\tfrac1\epsilon)\cdot \gamma^{2}.
\]
Recursively we obtain
\[
\EE(\norm{x_{k}-\hat x}^{2}) \leq
\Big((1+\epsilon)\cdot (1-\lambda)\Big)^{k}\cdot \norm{x_{0}-\hat x}^{2} +
\sum_{j=0}^{k-1}\Big((1+\epsilon)\cdot (1-\lambda)\Big)^{j}\cdot (1+\tfrac1\epsilon)\cdot \gamma^{2}.
\]
Now we choose $\epsilon = \tfrac{\lambda}{2(1-\lambda)}$, observe that
\[
(1-\lambda)\cdot (1+\epsilon) = 1-\tfrac{\lambda}2\quad\text{and}\quad 1 + \tfrac1\epsilon = \tfrac{2-\lambda}{\lambda}
\]
and get
\[
\begin{split}
\EE(\norm{x_{k}-\hat x}^{2}) & \leq
(1-\tfrac\lambda2)^{k}\cdot \norm{x_{0}-\hat x}^{2} +
\tfrac{2-\lambda}{\lambda}\cdot\sum_{j=0}^{k-1}(1-\tfrac\lambda2)^{j}\cdot \gamma^{2}\\
& \leq
(1-\tfrac\lambda2)^{k}\cdot \norm{x_{0}-\hat x}^{2} +
\tfrac{2(2-\lambda)}{\lambda^{2}}\cdot \gamma^{2}
\end{split}
\]
which proves the claim.
\end{proof}
The first equation in Theorem~\ref{thm:inconsitent} shows that the
iteration of RKMA will reach a final error of the order of
$\norm{\sum_{l=0}^{\infty}M^{l}V^{T}Dr} = \norm{(I-M)^{-1}V^{T}Dr} =
\norm{(V^{T}DA)^{-1}V^{T}Dr}$ if $\rho(M)<1$.
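Assuming that $\rho(M)<1$ (so that $V^{T}DA$ is invertible), this limiting error can be estimated numerically, for example with the following NumPy sketch (names are ours).
\begin{verbatim}
import numpy as np

def rkma_limit_error(A, V, p, r):
    """Estimate the asymptotic error norm ||(V^T D A)^{-1} V^T D r||."""
    s = np.einsum('ij,ij->i', A, V)
    D = np.diag(p / s)
    return np.linalg.norm(np.linalg.solve(V.T @ D @ A, V.T @ D @ r))
\end{verbatim}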
\section{Underdetermined systems}
\label{sec:underdetemined}
Now we consider the underdetermined case, i.e. the case where $m<n$,
but we will still assume full row rank of $A$ and $V$.
In the case of no mismatch, linear convergence has been proven for the probabilities $p_{i} = \norm{a_{i}}^{2}/\norm[F]{A}^{2}$ in~\cite{liu2014asynchronous}.
In this case,
Theorem~\ref{thm:RKMA} never ensures convergence: On the one hand, the quadratic form of the matrix
$V^{T}DA + A^{T}DV - A^{T}SDA$ vanishes on the non-trivial kernel of $A$, so $\lambda$ from \eqref{eq:CC} is never positive. On the other hand, $V^{T}DA$ always has a non-trivial kernel, and thus, $I - V^{T}DA$ always has a spectral radius of at least one. However, the iteration
often converges in practice and this is due to the following simple
observation: All the iterates $x_{k}$ of Algorithm~\ref{alg:RKMA} are
in $\rg V^{T}$ if the starting point $x_{0}$ is there. So, if the
equation $Ax = b$ has a solution $\hat x$ in $\rg V^{T}$, then all
vectors $x_{k}-\hat x$ are also in the range.
Inspecting the proof of
Lemma~\ref{lem:est-one-step} we note that the constant $\lambda$
that needs to be positive to guarantee improvement in each step is in fact not the smallest eigenvalue of
$V^{T}DA + A^{T}DV - A^{T}SDA$ but the smallest eigenvalue of this
matrix when restricted to the range of $V^{T}$. More explicitly, let
$Z\in\RR^{n\times m}$ be a matrix whose columns form an orthonormal basis
of $\rg V^{T}$. So, the term in~\eqref{eq:est_rkma-aux} can also be written as
\begin{align*}
\norm{x_{k}-\hat x}^{2} - \scp{ZZ^{T}(x_{k}-\hat x)}{(2V^{T}DA - A^{T}DSA)ZZ^{T}(x_{k}-\hat x)}\\
= \norm{x_{k}-\hat x}^{2} - \scp{Z^{T}(x_{k}-\hat x)}{Z^{T}(2V^{T}DA - A^{T}DSA)ZZ^{T}(x_{k}-\hat x)}.
\end{align*}
Consequently, we need an estimate of the form
\[
\scp{Z^{T}(x_{k}-\hat x)}{Z^{T}(2V^{T}DA - A^{T}DSA)ZZ^{T}(x_{k}-\hat x)}\geq \lambda\cdot \norm{x_{k}-\hat x}^{2}
\]
and, since $\norm{Z^{T}(x_{k}-\hat x)}^{2} = \norm{x_{k}-\hat x}^{2}$, this is fulfilled for
\[
\lambda = \lambda_{\min}(Z^{T}(V^{T}DA +A^{T}DV- A^{T}DSA)Z).
\]
Similarly, the convergence of $\EE(x_{k}-\hat x)$ is equivalent to the convergence of $\EE(Z^{T}(x_{k}-\hat x))$, and it holds that
\begin{align*}
\EE(Z^{T}(x_{k+1}-\hat x)) & = Z^{T}(I-V^{T}DA)(x_{k}-\hat x)\\
& = Z^{T}(I-V^{T}DA)ZZ^{T}(x_{k}-\hat x)\\
& = (I - Z^{T}V^{T}DAZ)Z^{T}(x_{k}-\hat x).
\end{align*}
Finally, note that the system $Ax=b$ has only one solution that lies in $\rg V^{T}$ if $AV^{T}$ is non-singular.
Thus, we have proved the following theorem:
\begin{thm}
\label{thm:RKMA-underdetermined}
Consider the consistent system~\eqref{eq:OCS} with
$A,V\in\RR^{m\times n}$ for $m\leq n$ both with full row rank such that $AV^{T}$ is non-singular.
Furthermore let the columns of $Z$ be an orthonormal basis for $\rg V^{T}$ and let
$p\in\RR^{m}$ with $p_{i}\geq 0$ and $\sum_{i}p_{i}=1$ and set
$D:=\diag\big(\tfrac{p_i}{\scp{a_i}{v_i}}\big)$ and
$S:=\diag\big(\tfrac{\norm{v_i}^2}{\scp{a_i}{v_i}}\big)$. Then it holds:
\begin{enumerate}
\item The system $Ax=b$ has exactly one solution $\hat x$ that lies in
$\rg V^{T}$.
\item If $x_{0}\in\rg V^{T}$ and $\rho(I-Z^{T}V^{T}DAZ)<1$,
then the iterates of Algorithm~\ref{alg:RKMA} fulfill
\[
\EE(x_{k}-\hat x)\to 0\quad \mbox{for} \quad k \to \infty .
\]
\item
If $x_{0}\in\rg V^{T}$ and
\begin{equation}
\lambda := \lambda_{\min}\big(Z^{T}(V^{T}DA + A^{T}DV - A^{T}SDA)Z\big) > 0 \label{eq:CC-underdetermined}
\end{equation}
is fulfilled, then it holds that
\begin{equation*}
\EE \left[\norm{x_{k+1}-\hat{x}}^2\right] \le (1- \lambda) \cdot \EE
\left[\norm{x_k-\hat{x}}^2 \right] \,.
\end{equation*}
\end{enumerate}
\end{thm}
This result has the following practical implication:
If one can measure the quantity $x$ by linear measurements, encoded by the vectors $a_{i}$, but has fewer measurements available than degrees of freedom in $x$, it is beneficial to use a mismatched adjoint $V$ with rows $v_{i}^{T}$ such that the $v_{i}$ are close to the vectors $a_{i}$ (so that the convergence condition is fulfilled), but which also ensure that $x$ is in the range of the vectors $v_{i}$.
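The condition~\eqref{eq:CC-underdetermined} can be checked numerically by computing an orthonormal basis of $\rg V^{T}$, e.g. via a reduced QR factorization, as in the following NumPy sketch (illustrative names).
\begin{verbatim}
import numpy as np

def lambda_underdetermined(A, V, p):
    """lambda from the underdetermined convergence condition."""
    s = np.einsum('ij,ij->i', A, V)
    D = np.diag(p / s)
    S = np.diag(np.sum(V**2, axis=1) / s)
    Z, _ = np.linalg.qr(V.T)    # columns form an orthonormal basis of rg V^T
    M = V.T @ D @ A
    return np.linalg.eigvalsh(Z.T @ (M + M.T - A.T @ S @ D @ A) @ Z).min()
\end{verbatim}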
Mismatched forward/back projection models in CT provide a useful example to illustrate this result.
Forward-projection in CT is often implemented using a ray-tracing algorithm known as Siddon's method~\cite{siddon1985fast}.
This algorithm models line integration and has the benefit of being computationally efficient and amenable to parallelization;
however, it does not model the finite width of the detector bin.
This can lead to Moir\'e pattern artifacts when using a matched forward/back-projection pair if the image pixel size is smaller than the detector bin size~\cite{de2004distance}.
Mismatched projector pairs --- in which the backprojection operator models the finite detector bin width --- are often used to avoid these artifacts. We illustrate how RKMA can be used in this manner in Section~\ref{sec:numerics}.
\section{Optimizing the probabilities}
\label{sec:optim-probs}
In the case of exact adjoint, a common choice for the probabilities $p_{i}$ is to use $p_{i} = \norm{a_{i}}^{2}/\norm[F]{A}^2$ which leads to the simple expression $\lambda = \lambda_{\min}(A^{T}A)/\norm[F]{A}^{2}$.
However, numerical experiments show that this vector $p$ of probabilities does not lead to the best performance in practice.
This is of no surprise: For any diagonal matrix $W = \diag(w_{i})$ one can consider the problem $WA x = Wb$ which has different row norms,
while each Kaczmarz iteration stays exactly the same.
This shows that the choice of probabilities by norms of the rows is in some sense arbitrary.
In~\cite{agaskar2014randomized} the authors proposed a method to find the smallest contraction factor of the method by minimizing the largest eigenvalue of an auxiliary matrix in $\RR^{n^{2}\times n^{2}}$. Here we present a different method that also works for the case of a mismatched adjoint.
Theorem~\ref{thm:RKMA} states that the asymptotic convergence rate is given by $\rho(I - V^{T}DA)$, while the expected improvement in each step is either expressed by
$1-\lambda_{\min}(V^{T}DA + A^{T}DV - A^{T}SDA)$ or $\norm{I-V^{T}DA}$
(recall that $D = \diag(p_{i}/\scp{a_{i}}{v_{i}})$ and $S = \diag(s_{i})$ with $s_{i} = \norm{v_{i}}^{2}/\scp{a_{i}}{v_{i}}$).
One would like to choose $p$ (i.e. $D$) in such a way that these quantities are as small as possible.
Numerically, we observe that the asymptotic rate is indeed quite tight, while the expected improvement is only a loose estimate in the case of mismatched adjoint.
However, the spectral radius of a non-symmetric matrix is not easily characterized and is neither a convex nor a concave function of the entries of the matrix.
The minimal eigenvalue of a symmetric matrix, on the other hand, is characterized by a minimization problem and it will turn out that $\lambda_{\min}$ is indeed a concave function in $p$.
Also, the spectral norm is convex and thus, the function $\norm{I-V^{T}DA}$ is also convex in $p$.
We therefore aim to choose $p$ such that $\lambda_{\min}$ is maximized or $\norm{I-V^{T}DA}$ is minimized, i.e. we aim to solve
\begin{equation}
\label{eq:best-prob-vector}
\max_{p}\ \lambda_{\min}(V^{T}DA + A^{T}DV - A^{T}SDA),\quad \text{s.t.}\quad \sum_{i=1}^{m}p_{i} = 1,\quad p\geq 0.
\end{equation}
or
\begin{equation}
\label{eq:best-prob-vector_norm}
\min_{p}\ \norm{I-V^{T}DA},\quad \text{s.t.}\quad \sum_{i=1}^{m}p_{i} = 1,\quad p\geq 0.
\end{equation}
\subsection{Maximizing $\lambda_{\min}$}
\label{sec:max_lambda_min}
The super-gradient of the objective functional in~\eqref{eq:best-prob-vector} is given by the next lemma:
\begin{lem}
\label{lem:supergrad-smallest-ev}
The function $f(p) = \lambda_{\min}(V^{T}DA + A^{T}DV - A^{T}SDA)$ is concave.
A super-gradient at $p$ is given by
\[
\frac{\partial\lambda_{\min}}{\partial p} = \left(\frac{\scp{2v_{i}-s_{i}a_{i}}{x}\scp{a_{i}}{x}}{\scp{a_{i}}{v_{i}}}\right)_{i=1,\dots,m}
\]
where $x$ is an eigenvector of $V^{T}DA + A^{T}DV - A^{T}SDA$ corresponding to the smallest eigenvalue.
\end{lem}
\begin{proof}
By the min-max principle for eigenvalues of symmetric matrices, we have
\[
\begin{split}
\lambda_{\min}(V^{T}DA + A^{T}DV - A^{T}SDA) & = \min_{\norm{x}=1}\scp{(V^{T}DA + A^{T}DV - A^{T}SDA)x}{x}\\
& = \min_{\norm{x}=1}\scp{DAx}{(2V - SA)x}\\
& = \min_{\norm{x}=1}\sum_{i=1}^{m}p_{i}\frac{\scp{2v_{i}-s_{i}a_{i}}{x}\scp{a_{i}}{x}}{\scp{a_{i}}{v_{i}}}.
\end{split}
\]
This shows that $f$ is a minimum over linear functions in $p$, and hence, concave.
To compute a super-gradient, let $x$ be a minimizer, i.e. an
eigenvector of $V^{T}DA + A^{T}DV - A^{T}SDA$ corresponding to the
smallest eigenvalue. Since this is a point where the minimum is
assumed, a super-gradient is given by
\[
\frac{\partial\lambda_{\min}}{\partial p} = \left(\frac{\scp{2v_{i}-s_{i}a_{i}}{x}\scp{a_{i}}{x}}{\scp{a_{i}}{v_{i}}}\right)_{i=1,\dots,m}.
\]
\end{proof}
The previous lemma allows one to solve~\eqref{eq:best-prob-vector} by projected super-gradient ascent as follows: Choose a stepsize sequence $t_{k}$ and iterate:
\begin{enumerate}
\item Initialize with $p^{0}_{i}= 1/m$, $k=0$
\item Form $V^{T}DA + A^{T}DV - A^{T}SDA$ and compute an eigenvector $x$ corresponding to the minimal eigenvalue.
\item Compute the super-gradient $g^{k}_{i} = \frac{\partial\lambda_{\min}}{\partial p}(p^{k})$ according to Lemma~\ref{lem:supergrad-smallest-ev}.
\item Update $p^{k+1} = \proj_{\Delta_{m}}(p^{k} + t_{k}g^{k})$ where $\proj_{\Delta_{m}}$ is the projection onto the $m$-dimensional simplex.
\end{enumerate}
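A possible NumPy realization of this projected super-gradient ascent, including a standard Euclidean projection onto the simplex, is sketched below; the step-size rule $t_{k}=t_{0}/\sqrt{k+1}$ and all names are illustrative choices.
\begin{verbatim}
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the probability simplex."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    ks = np.arange(1, len(y) + 1)
    rho = np.nonzero(u - css / ks > 0)[0][-1]
    return np.maximum(y - css[rho] / (rho + 1.0), 0.0)

def maximize_lambda_min(A, V, num_iters=200, t0=1.0):
    """Projected super-gradient ascent on the smallest eigenvalue."""
    m = A.shape[0]
    s = np.einsum('ij,ij->i', A, V)            # <a_i, v_i>
    si = np.sum(V**2, axis=1) / s              # diagonal entries of S
    p = np.full(m, 1.0 / m)
    for k in range(num_iters):
        D, S = np.diag(p / s), np.diag(si)
        M = V.T @ D @ A
        w, U = np.linalg.eigh(M + M.T - A.T @ S @ D @ A)
        x = U[:, 0]                            # eigenvector of smallest eigenvalue
        g = (A @ x) * ((2 * V - si[:, None] * A) @ x) / s   # super-gradient
        p = project_simplex(p + t0 / np.sqrt(k + 1) * g)
    return p
\end{verbatim}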
It is worth noting how this algorithm looks in the special case of $V=A$.
There we only want to maximize $\lambda_{\min}(A^{T}DA)$ and the super-gradient of this at some $p^{k}$ is just $g^{k}=\left(\tfrac{\scp{a_{i}}{x}^{2}}{\norm{a_{i}}^{2}}\right)_{i=1,\dots,m}$.
As this is always positive, we can project onto the simplex by a simple rescaling as
\[
p^{k+1} = \frac{p^{k}+ t_{k}g^{k}}{\norm[1]{p^{k}+t_{k}g^{k}}}.
\]
\subsection{Minimizing $\norm{I-V^TDA}$}
\label{sec:min_norm}
The subgradient of the objective functional in~\eqref{eq:best-prob-vector_norm} is given by the next lemma:
\begin{lem}\label{lem:subgrad-spectral-norm}
Let $s_{i} = \scp{a_{i}}{v_{i}}$.
The function $f(p) = \norm{I-V^{T}DA}$ with $D = \diag(p/s)$ is convex. A subgradient is given by
\[
-\frac{(Vq_{1})\odot(Ar_{1})}{s}\in \partial f(p)
\]
where $q_{1}$ and $r_{1}$ are left and right singular vectors of $I-V^{T}DA$ corresponding to the largest singular value, $\odot$ denotes the componentwise product and the division is also to be understood componentwise.
\end{lem}
\begin{proof}
The convexity of $f$ follows from the convexity of the norm and the
fact that the map $M: \RR^{m}\to\RR^{n\times n}$, $p\mapsto -V^{T}DA$ is linear in $p$.
Example 1 in \cite{watson1992characterization} shows that the
subgradient of the spectral norm is given as follows: If $B$ has a singular value decomposition $B = Q\Sigma R^{T}$ and the maximal singular value has multiplicity $j$, then
\[
\partial_{B}\norm{B} = \conv\{q_{i}r_{i}^{T}\mid i=1,\dots,j\}
\]
where $q_{i}$ and $r_{i}$ are the $i$-th columns of $Q$ and $R$, respectively.
By the chain rule for subgradients, we get that
\[
\partial_{p}f(p) = M^{T}\partial\norm{I+Mp}.
\]
The transpose of $M$ is calculated by
\begin{align*}
\scp{p}{M^{T}B} & = \scp{Mp}{B}\\
& = \trace((Mp)^{T}B)\\
& = -\trace(A^{T}\diag(p/s)VB)\\
& = -\trace(\diag(p/s)VBA^{T})\\
& = \scp{p}{-\diag(\diag(1/s)VBA^{T})},
\end{align*}
i.e.
\[
M^{T}B = -\diag\Big(\diag(1/s)VBA^{T}\Big).
\]
Plugging in the subgradient $q_{1}r_{1}^{T}$ of the spectral norm we obtain that
\[
-\diag(\diag(1/s)Vq_{1}r_{1}^{T}A^{T}) = -\frac{(Vq_{1})\odot(Ar_{1})}{s}
\]
is a subgradient of $f$,
which shows the assertion.
\end{proof}
Similar to the previous subsection we can solve~\eqref{eq:best-prob-vector_norm} by projected subgradient descent as follows: Choose a stepsize sequence $t_{k}$ and iterate:
\begin{enumerate}
\item Initialize with $p^{0}_{i}= 1/m$, $k=0$
\item Compute a pair $q_{1},r_{1}$ of left and right singular vectors of $I-V^{T}DA$ corresponding to the largest singular value.
\item Compute a subgradient $g^{k}$ according to Lemma~\ref{lem:subgrad-spectral-norm}.
\item Update $p^{k+1} = \proj_{\Delta_{m}}(p^{k} - t_{k}g^{k})$ where $\proj_{\Delta_{m}}$ is the projection onto the $m$-dimensional simplex.
\end{enumerate}
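Analogously, the following sketch implements the projected subgradient descent; the simplex projection is repeated from the previous sketch to keep the snippet self-contained, and step sizes and names are again illustrative.
\begin{verbatim}
import numpy as np

def project_simplex(y):
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - 1.0
    ks = np.arange(1, len(y) + 1)
    rho = np.nonzero(u - css / ks > 0)[0][-1]
    return np.maximum(y - css[rho] / (rho + 1.0), 0.0)

def minimize_spectral_norm(A, V, num_iters=200, t0=1.0):
    """Projected subgradient descent on p -> ||I - V^T D A||."""
    m, n = A.shape
    s = np.einsum('ij,ij->i', A, V)            # <a_i, v_i>
    p = np.full(m, 1.0 / m)
    for k in range(num_iters):
        B = np.eye(n) - V.T @ np.diag(p / s) @ A
        U, sig, Wt = np.linalg.svd(B)
        q1, r1 = U[:, 0], Wt[0]                # singular vectors for sigma_max
        g = -(V @ q1) * (A @ r1) / s           # subgradient from the lemma
        p = project_simplex(p - t0 / np.sqrt(k + 1) * g)
    return p
\end{verbatim}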
\section{Numerical experiments}
\label{sec:numerics}
In this section we report a few numerical experiments that illustrate the results.\footnote{The code to produce the figures in this article is available at \url{https://github.com/dirloren/rkma}.}
We start with an illustration of Theorem~\ref{thm:RKMA}, i.e. the consistent and overdetermined case.
We used a Gaussian matrix $A\in\RR^{500\times 200}$ (i.e. the entries are independently and normally distributed) and defined the mismatched adjoint $V$ by setting all entries of $A$ with magnitude smaller than $0.5$ to zero.
The unique solution $\hat x$ was also generated as a Gaussian vector and as probabilities we used $p_{i} = \norm{a_{i}}^{2}/\norm[F]{A}^{2}$. The convergence condition~\eqref{eq:CC} is fulfilled with $1-\lambda\approx 1-5.5\cdot 10^{-4}$ and we also have $\rho(I-V^{T}DA) \approx 1 - 7.5\cdot 10^{-4}$.
Figure~\ref{fig:convergence_perturbation} shows the error and the residuals for the randomized Kaczmarz method with and without mismatched adjoint.
\begin{figure}[htb]
\centering
\setlength\figurewidth{0.35\textwidth}
\input{figures/example_convergence_perturbation_error}
\input{figures/example_convergence_perturbation_residual}
\caption{Comparison of the randomized Kaczmarz method with and without mismatched adjoint in the overdetermined and consistent case. Left: Decay of the error and also the upper bound from Theorem~\ref{thm:RKMA}. Right: Decay of the residual.}
\label{fig:convergence_perturbation}
\end{figure}
As expected, both methods converge, but in this example the method with mismatch converges slightly faster.
We note that this is not universal: other random instances constructed in the same way show different behaviour, although both methods are always quite close to each other.
Another observation is that the upper bound derived from the convergence factor $1-\lambda$ is quite far from the actual behavior.
Our second numerical example treats the inconsistent and overdetermined case.
The matrix $A$ and solution $\hat x$ and the probabilities are similar to the previous example, but now the right hand side is $b = A\hat x + r$ (with Gaussian $r$).
Figure~\ref{fig:inconsistent} shows the result of the RKMA method on this example and also the error bound from Theorem~\ref{thm:inconsitent}.
As predicted, the error does not go to zero, but levels out at a non-zero level (the same is true for the residual).
As in the previous example one sees that the upper bound from Theorem~\ref{thm:inconsitent} is quite loose.
\begin{figure}[htb]
\centering
\setlength\figurewidth{0.7\textwidth}
\input{figures/example_inconsistent_error}
\caption{The RKMA method in the inconsistent case. The plot shows the decay of the error, the theoretical upper bound and the expected final error from Theorem~\ref{thm:inconsitent}.}
\label{fig:inconsistent}
\end{figure}
Now we illustrate the behavior of RKMA in the underdetermined case.
We used $A,V\in\RR^{100\times 500}$, again $A$ with Gaussian entries and we obtained $V$ from $A$ by setting the entries of $A$ to zero that have magnitude smaller than $0.3$.
The solution $\hat x$ was constructed as $\hat x = V^{T}c$ for some random vector $c$ and the right hand side was obtained through $b = A\hat x$.
Hence, generically $\hat x$ is not in the range of $A^{T}$ and the standard randomized Kaczmarz method cannot converge to $\hat x$.
Figure~\ref{fig:underdetermined} shows that the error decays quickly to zero for RKMA but not for the standard randomized Kaczmarz method. The residuals, however, behave roughly similarly for both methods.
\begin{figure}[htb]
\centering
\setlength\figurewidth{0.7\textwidth}
\input{figures/example_underdetermined_error}
\caption{Comparison of the randomized Kaczmarz method with and without mismatched adjoint in the underdetermined and consistent case but with true solution $\hat x\in\rg V^{T}$ and $\hat x\notin \rg A^{T}$. Plot shows the decay of the error.}
\label{fig:underdetermined}
\end{figure}
For another illustration of the underdetermined case, we generated a toy problem from computerized tomography with the AIRtools package \cite{hansen2012air} as follows:
For a $50\times 50$ pixel image we generated a CT projection matrix for a parallel beam geometry with 36 equi-spaced angles and 150 rays per angle, leading to a projection matrix of size $5,400\times 2,500$ (the MATLAB command is \texttt{Afull = paralleltomo(50,0:5:180,150,70)}).
For the matrix $A$ for the forward projection we used every third row of the matrix while for $V$ (the backprojection) we used the average of three consecutive rows, thereby employing a simple model for the detector bin width in the backprojection operation.
Then we eliminated the rows of $A$ and $V$ which correspond to zero rows in $A$, which leaves us with two matrices of size $1,636\times 2,500$.
Then we generated a smooth image by
\begin{quote}
\texttt{im = phantomgallery('ppower',N,0.3,1.3,155432);\\
im = imfilter(im,fspecial('gaussian',16,4));\\ im =
im/max(max(im));\\ x = im(:);}
\end{quote}
and generated the data by \texttt{b = A*x}.
We reconstructed $x$ by RKMA and RK (with the probabilities from Remark~\ref{rem:strohmer-vershynin}).
Figure~\ref{fig:ct_rek} shows the reconstructions after a quite small number of sweeps (one sweep corresponds to $m$ steps of the method, where $m$ is the number of rows).
One sees that using a mismatched adjoint is beneficial in this setting: First, the iteration converges to a limit which is closer to the original image (which is due to the fact that the original image is closer to the range of $V^{T}$ than to that of $A^{T}$). Moreover, the initial iterates are better.
As expected, the reconstruction with $A^{T}$ and $A$ suffers from Moir\'e patterns.
Using $V^{T}$ as adjoint avoids these artifacts as the range of $V^{T}$ contains smoother functions, in some sense.
Finally, we note that applying RK using $V$ for both the forward and back-projection also converges, but leads to an even worse reconstruction than using RK with $A$.
\begin{figure}[htb]
\newcommand{0.24\textwidth}{0.24\textwidth}
\setlength\figurewidth{0.2\textwidth}
\subfigure[Original]{\includegraphics[width=0.24\textwidth]{images/ct_xhat}}
\subfigure[Decay of error]{\input{figures/example_ct_error}}\\
\subfigure[RK, 1 sweep]{\includegraphics[width=0.24\textwidth]{images/ct_rk_1_sweep}}
\subfigure[RK, 3 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rk_3_sweeps}}
\subfigure[RK, 10 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rk_10_sweeps}}
\subfigure[RK, 20 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rk_20_sweeps}}\\
\subfigure[RKMA, 1 sweep]{\includegraphics[width=0.24\textwidth]{images/ct_rkma_1_sweep}}
\subfigure[RKMA, 3 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rkma_3_sweeps}}
\subfigure[RKMA, 10 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rkma_10_sweeps}}
\subfigure[RKMA, 20 sweeps]{\includegraphics[width=0.24\textwidth]{images/ct_rkma_20_sweeps}}\\
\caption{Reconstruction for a toy CT example.}
\label{fig:ct_rek}
\end{figure}
Finally, we illustrate that the optimization of the probabilities according to Section~\ref{sec:optim-probs} does indeed improve the practical performance.
We used $A,V \in\RR^{300\times 100}$ where $A$ is a random matrix with Gaussian entries whose $i$th row has been scaled with the factor $2/(\sqrt{i}+2)$, and $V$ has been obtained from $A$ by setting 5\% randomly chosen entries of $A$ to zero.
We calculated optimized probabilities by the methods from Sections~\ref{sec:max_lambda_min} and~\ref{sec:min_norm}, respectively (initialized with uniform probabilities).
We applied RKMA with these optimized probabilities, uniform probabilities, and $p_{i}\propto \scp{a_{i}}{v_{i}}$. Figure~\ref{fig:optimize_lambda} shows that the optimized probabilities indeed outperform the uniform choice and the choice proportional to $\scp{a_{i}}{v_{i}}$.
Table~\ref{tab:optimized-probabilities} shows the respective quantities for the different probabilities.
Although the two approaches optimize different quantities and neither optimizes the asymptotic convergence rate directly, they perform comparably in practice and, as shown in Figure~\ref{fig:optimize_lambda} on the right, the probability vectors obtained from the two optimization problems are quite similar.
\begin{table}[htb]
\centering
\begin{tabular}{lcccc}\toprule
& unif & row & $\max\lambda$ & $\min\norm{I-V^{T}DA}$\\\midrule
$1-\lambda$ &0.998588 & 0.999079 & 0.997820 & 0.998311 \\
$\rho(I-V^{T}DA)$ &0.997908 & 0.998352 & 0.997540 & 0.997327 \\
$\norm{I-V^{T}DA}$ &0.998029 & 0.998485 & 0.997752 & 0.997439 \\\bottomrule
\end{tabular}
\caption{Quantities describing the convergence of RKMA for different probabilities.}
\label{tab:optimized-probabilities}
\end{table}
\begin{figure}[htb]
\centering
\setlength\figurewidth{0.35\textwidth}
\input{figures/example_optimize_prob_error}
\input{figures/example_optimize_probs}
\caption{A sample run of RKMA on a consistent system with different
probabilities: ``unif'' refers to the uniform probabilities
$p_{i}= 1/m$, ``$\propto\scp{a_{i}}{v_{i}}$'' uses
$p_{i} = \scp{a_{i}}{v_{i}}/\norm[V]{A}^{2}$
(cf. Remark~\ref{rem:strohmer-vershynin}) and ``opt $\norm{I-V^{T}DA}$'' and ``opt $\lambda$'' refer to
probabilities obtained by the methods from
Section~\ref{sec:optim-probs}.}
\label{fig:optimize_lambda}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We derived several results on the convergence of the randomized Kaczmarz method with mismatched adjoint and could show that the method converges linearly when the mismatch is not too large.
The results are a little bit more complicated compared to the case of no mismatch due to the asymmetry of the matrix $I-V^{T}DA$. In particular, estimates for the norm of the expected error and the expectation of the norm of the error are different in this case.
We were also able to characterize the asymptotic convergence rate of RKMA and numerical experiments indicate that this estimate of the rate is indeed quite sharp.
In the underdetermined case one may even take advantage of the use of a mismatched adjoint to drive the randomized Kaczmarz method to a solution in the subspace $\rg V^{T}$.
This last point may be important for algebraic reconstruction techniques in computerized tomography where mismatched projector pairs are often employed.
Using the conditions derived here, a thorough study of commonly used mismatched projector pairs could be performed to determine what pairs have guaranteed asymptotic convergence properties.
\section*{Acknowledgements}
The authors thank Emil Sidky for valuable discussions. This material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the author(s) and do not necessarily reflect the views of the National Science
Foundation.
\bibliographystyle{plain}
\section{Introduction}
The task of person Re-Identification (Re-ID) is to judge whether two person images indicate the same target or not and has widespread applications in video surveillance for public security. From the perspective of human perception, two persons can be distinguished according to the color or texture features of the persons' attributes (e.g., clothes, hair) and the latent semantic parts (e.g., head, front and back upper body, belongings). Consequently, the person Re-ID task can be addressed by matching two aspects: the color-texture distributions and the latent semantic-components.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.43\textwidth]{example-eps-converted-to.pdf}
\caption{Example pairs of images from the CUHK03 dataset. Given the probe image
of a person in view A marked by a blue window, the task is to find the same person in the gallery set of view B. The groundtruth images are marked by the green bounding boxes. The first row and the second row are re-identification results by semantic components (SC)-based and color-texture maps (CTM)-based strategies, respectively. Failures exist in both cases. The third row are the results by the combination of the two strategies which obtain success on both examples. }
\label{fig:realexamples}
\end{figure}
In previous efforts, two common strategies have been employed for the person Re-ID task. One strategy focuses on learning the correspondence among color-texture distributions from different person images, but ignores the correspondence among the semantic-components~\cite{Alpher11}~\cite{Alpher12}~\cite{Alpher13}~\cite{Alpher14}. The other relies on learning the correspondence among semantic-components, while ignoring the color-texture correspondence~\cite{Alpher16}~\cite{Alpher27}~\cite{Alpher19}~\cite{Alpher06}~\cite{Alpher20}~\cite{Alpher21}~\cite{Alpher22}. Figure \ref{fig:realexamples} gives two examples to show the respective advantages of the two strategies, where images in the first row are re-identification results by the semantic correspondence and images in the second row are re-identification results by the color-texture correspondence.
In this work, we assume that the semantic-components and color-texture distributions are complementary to each other and present a novel multi-channel deep convolutional pyramid person matching network (MC-PPMN) based on the combination of the semantic-components and the color-texture distributions. In particular, we learn separate deep representations for semantic-components and color-texture distributions from two person images and then employ the matching network to obtain the correspondence representations. These correspondence representations are fused to address the Re-ID task.
On one hand, to learn the correspondence among semantic-components from two persons, we first fine-tune the model weights of the ImageNet-pretrained GoogLeNet~\cite{Alpher09} to learn the deep representation of each person's semantic-components. By visualizing some layers of this network, we observe that the discriminative regions in feature maps correspond to different components (bag, head, body, etc.) of a person. For matching these learned feature regions from two person images, convolution operation is exploited to fuse these feature regions from different inputs in the same sub-windows. However, the feature regions of the same components from two views for one person seldom have the consistent spatial scale and location due to viewpoint changes. To overcome the variation of spatial scale and location, we employ atrous convolution ~\cite{Alpher23} with multi-scale views to construct a module called pyramid matching module, which provides a desirable view of perception without increasing parameters and computation by introducing zeroes between the consecutive filter values. With this module, we obtain the correspondence representation between the semantic-components from different inputs.
On the other hand, to build the correspondence representation between color-texture distributions, we propose to introduce the deep color-texture distribution representation learning based on convolutional neural network. Different from the conventional hand-crafted features (e.g., LOMO)~\cite{Alpher32}, we first extract RGB, HSV and SILTP histograms ~\cite{Alpher37} with the sliding windows and then project the histogram bins into specific feature maps, which encode the spatial distribution for the particular color-texture range. With these Color-Texture feature Maps (CTM), we employ a 3-layers convNet to learn the deep color-texture representation for each person image. Thus, the pyramid matching module is exploited to learn the correspondence representation between color-texture distributions from different person images.
Having learned the correspondence representations for the semantic-components and color-texture distributions, MC-PPMN fuses them with two fully connected layers to decide whether the two input images represent the same person or not. The proposed framework is evaluated on several real-world datasets. Extensive experiments on these benchmark datasets demonstrate the effectiveness of our approach against the state-of-the-art, especially on the rank-1 recognition rate.
The main contributions of this paper are as follows:
(1) We propose a deep convolutional network named MC-PPMN which learns the correspondence representations from both the semantic-components and color-texture distributions. Deep structures for encoding both semantic space and color-texture distributions, and cross-person correspondence are jointly optimized to improve the generalization performance of the person re-identification task.
(2) The proposed framework employs the pyramid matching strategy based on the atrous convolution to learn the correspondence representation for two person images, which provides a desirable view of perception without increasing parameters and computation by introducing zeroes between the consecutive filter values.
\begin{figure*}[htb!]
\includegraphics*[width=\textwidth]{Visio-finalNet-eps-converted-to.pdf}
\caption{The proposed architecture of deep convolutional person matching network. }
\label{fig:architecture}
\end{figure*}
\section{Related Work}
In the past five years, many efforts have been proposed for the task of person Re-ID, which greatly advance this field. Discriminative feature representation learning and effective matching strategy learning are the main topics for person Re-ID. For feature representation, many approaches design robust descriptors against misalignments and variations based on color and texture, which are two of the most useful characteristics in image representation. Hand-crafted features including HSV color histograms~\cite{Alpher01}, SIFT histograms~\cite{Alpher02}, LBP histograms~\cite{Alpher03} and their combinations are widely used for image representation. Many efforts also consider the properties of person images, such as the symmetry structure of segments~\cite{Alpher01} and the horizontal occurrence of local features~\cite{Alpher32}, to design features, which significantly boosts the matching rate.
For the matching strategy, metric learning is the basic idea: to find a mapping function from the feature space to the distance space so as to minimize the intra-personal variance while maximizing the inter-personal margin. Many approaches have been proposed based on this idea, including pair-wise constrained component analysis (PCCA)~\cite{Alpher11}, local Fisher discriminant analysis (LFDA)~\cite{Alpher12}, Large Margin Nearest-Neighbour (LMNN)~\cite{Alpher13}, and KISS metric learning (KISSME)~\cite{Alpher14}. However, these matching strategies often pay much attention to distance learning on abstract features without taking spatial-structural and semantic correspondence learning into consideration.
Recently, the efforts which employ deep convolutional architectures to deal with the task of person Re-ID have shown a remarkable improvement over the approaches based on hand-crafted features. For example, the patch-based methods~\cite{Alpher16}~\cite{Alpher27} perform patch-wise distance measurement to obtain the spatial relationship. Part-based methods~\cite{Alpher19} divide one person into several equal parts and jointly perform body-wise and part-wise correspondence learning based on the assumption that the pedestrian generally keeps upright. Some efforts~\cite{Alpher06} try to capture the semantic and structural correlation using deep convolution networks, which achieve promising results on the challenging datasets. To improve the performance of feature extraction, triplet learning frameworks~\cite{Alpher20}~\cite{Alpher21}~\cite{Alpher22}, which employ triplet training examples and the triplet loss function to learn fine-grained image representations, are also proposed.
\section{Our Architecture}
Figure \ref{fig:architecture} illustrates our network's architecture. The proposed architecture extracts the color-texture and mid-level semantic-components representations for a pair of input person images. With the features mentioned above, two pyramid matching modules are employed to learn the correspondence for the color-texture distributions and semantic-components, respectively, and to output the correspondence representations. Finally, we fuse the correspondence representations utilizing two fully connected layers and employ softmax activations to compute the final decision which indicates the probability that the image pair depicts the same person. The details of the architecture are explained in the following subsections.
\subsection{Semantic-Components (SC) Images Representation}
As discussed previously, there exists a set of intrinsic latent semantic components (e.g., head, front and back upper body, belongings) in a person image, which are robust to the variations of views and background change. With these semantic representations for the images, we are able to learn the correspondence between the image pair. The well-known ImageNet-pretrained deep convolutional frameworks (like AlexNet, GoogLeNet, ResNet, etc.)~\cite{Alpher09}~\cite{Alpher10}~\cite{Alpher08} have been widely used to project the RGB space to the semantic-aware space. Previous efforts~\cite{Alpher42} have also verified that the mid-level feature maps of these frameworks represent the semantic-components of one object. In our architecture, we extract these semantic-components with two parameter-shared GoogLeNets for a pair of person images. Figure \ref{fig:visualization}a shows the visualization of every block's responses in GoogLeNet finetuned on the Re-ID dataset CUHK03. We observe that the original person images are decomposed into many semantic-components (bag, head, etc.). The responses of low layers like Conv1 depict the particular components clearly, while the responses of high layers like Conv5 look abstract but still keep the shape and spatial location. For notational simplicity, we refer to the convNet as a function $f_{CNN}( \boldsymbol X; \boldsymbol \theta)$ that takes $ \boldsymbol X$ as input and $ \boldsymbol \theta$ as parameters. For an input pair of images from two cameras, A and B, resized to $160\times80$, the GoogLeNets each output 1024 feature maps of size $10 \times 5$ as the image representations. We denote this process as follows:
\begin{equation}
\label{equ:imgPre}
\{\boldsymbol R^A_{sc}, \boldsymbol R^B_{sc}\}=\{f_{CNN}(\boldsymbol I^A; \boldsymbol \theta^1_{sc}), f_{CNN}(\boldsymbol I^B; \boldsymbol \theta^1_{sc})\}
\end{equation}
where $\boldsymbol R^A_{sc}$ and $\boldsymbol R^B_{sc}$ denote the SC representations of images $\boldsymbol I^A$ and $\boldsymbol I^B$, respectively. $\boldsymbol \theta^1_{sc}$ are the shared parameters.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{visuaization-eps-converted-to.pdf}
\caption{a: Visualization of features for the ImageNet-pretrained GoogLeNet network, which is finetuned on the CUHK03 dataset. b: Illustration of the proposed Color-Texture feature Maps (CTM) extraction. }
\label{fig:visualization}
\end{figure}
\subsection{Color-Texture Maps (CTM) Images Representation}
The existing methods often extract the color-texture features for images by computing the histograms of color channels within a partitioned horizontal stripe, which works under the assumption of slight vertical misalignment, and only consider pose variations in the horizontal dimension. These methods also ignore the spatial structure information. To address these problems and represent the color spatial distributions, we propose to use sliding windows to describe local color details for a person image and construct spatial feature maps instead of feature vectors. RGB and HSV channels are the basic color characteristics for images. The Scale Invariant Local Ternary Pattern (SILTP)~\cite{Alpher37} descriptor is an improved operator over the well-known Local Binary Pattern (LBP)~\cite{Alpher03} and an illumination-invariant texture descriptor. Specifically, we use a subwindow size of $8\times8$, with an overlapping step of $4$ pixels to locate local patches in the input $160\times80$ images. Within each subwindow, we extract a 24-bin RGB histogram, a 24-bin HSV histogram and a 16-bin SILTP histogram ($SILTP^{0.3}_{4,3}$). These resulting histogram bins computed from all subwindows are then projected to the feature maps with size $40\times 20$. Figure \ref{fig:visualization}b shows the procedure of the proposed CTM extraction.
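To make the construction concrete, the following simplified Python sketch projects sliding-window color histograms onto spatial feature maps; it only covers the RGB channels (HSV and SILTP would be handled analogously) and uses illustrative window and bin parameters rather than the exact configuration described above.
\begin{verbatim}
import numpy as np

def color_feature_maps(img, win=8, step=4, bins_per_channel=8):
    """Sliding-window color histograms projected onto spatial feature maps.

    img: (H, W, 3) uint8 image.  Returns an array of shape
    (3 * bins_per_channel, H // step, W // step).  Simplified sketch:
    only RGB histograms; HSV and SILTP channels would be added analogously.
    """
    H, W, _ = img.shape
    maps = np.zeros((3 * bins_per_channel, H // step, W // step))
    for yi, y in enumerate(range(0, H - win + 1, step)):
        for xi, x in enumerate(range(0, W - win + 1, step)):
            patch = img[y:y + win, x:x + win]
            for c in range(3):
                hist, _ = np.histogram(patch[..., c], bins=bins_per_channel,
                                       range=(0, 256))
                maps[c * bins_per_channel:(c + 1) * bins_per_channel, yi, xi] = hist
    return maps
\end{verbatim}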
With the extracted CTM, we employ parameter-shared convolution networks constructed with three convolution layers and two max-pooling layers to generate the color-texture representation with spatial size $10\times5$, consistent with the SC representation. We denote the representation above as $\boldsymbol R^A_{ctm}$ and $\boldsymbol R^B_{ctm}$ for images $\boldsymbol I^A$ and $\boldsymbol I^B$ respectively with the shared parameters $\boldsymbol \theta^1_{ctm}$:
\begin{equation}
\label{equ:CTMpre}
\{\boldsymbol R^A_{ctm}, \boldsymbol R^B_{ctm}\}=\{f_{CNN}(\boldsymbol I^A; \boldsymbol \theta^1_{ctm}), f_{CNN}(\boldsymbol I^B; \boldsymbol \theta^1_{ctm})\}
\end{equation}
\subsection{Pyramid Matching Module with Atrous Convolution}
\begin{figure}
\includegraphics[width=0.45\textwidth]{fusion_layer-eps-converted-to.pdf}
\caption{
Illustration of correspondence learning with pyramid matching module. Left: the component ``head'' has the similiar spatial location. Right: the component ``bag'' has the completely different shape and location. We match the components above by computing their responses in one window and the convolutions with multi-scale field-of-view are robust to the misalignment and variation of scale caused by viewpoint changes.
}
\label{fig:fusion_layer}
\end{figure}
In this work, we represent the semantic components of person images with the mid-level feature maps of GoogLeNet, which preserve the original shape and relative spatial location. Therefore, the variations of spatial scale and the location misalignment caused by viewpoint changes remain significant in the image representation. As shown in Figure \ref{fig:fusion_layer}, the same bag belonging to the same person is located on the right side of one image but on the left side of the other. The previous efforts~\cite{Alpher06} address this problem by decreasing the distance between the same semantic components from two images with max-pooling layers. This strategy is effective but loses spatial information.
We employ atrous convolution to address this issue. By introducing zeros between consecutive filter values, atrous convolution computes the correspondences of the same semantic components without decreasing their resolution. Another challenge is that different semantic components exhibit different scales of variation and misalignment. To achieve scale invariance, we employ multi-rate atrous convolutions to construct a pyramid matching module, based on a pyramid matching strategy, that adaptively learns the correspondence of semantic components with multi-scale misalignments. Considering the size of the feature maps, the pyramid matching module includes three branches of $3\times3$ atrous convolutions with rates 1, 2 and 3, which provide fields-of-view of size $3 \times 3$, $5 \times 5$ and $7 \times 7$, respectively. Figure~\ref{fig:ppm_structure} shows the structure of this module, and Figure~\ref{fig:fusion_layer} gives two examples whose correspondences are learned with the rate-1 and rate-2 atrous convolutions, respectively, to illustrate how this module works. With the images' concatenated SC representation $\{\boldsymbol R_{sc}^A,\boldsymbol R_{sc}^B\}$, the proposed module computes the correspondence distribution denoted as $\boldsymbol S_{sc}^{p} = \{ \boldsymbol S_{sc}^{r=1}$, $\boldsymbol S_{sc}^{r=2}$, $\boldsymbol S_{sc}^{r=3} \}$, in which the value at each location $(i, j)$ indicates the correspondence probability at that location, and $r$ is the rate of the atrous convolution. We formulate this matching strategy as follows:
\begin{align}
\label{equ:ppm}
\boldsymbol S_{sc}^{p} ={} & \{\boldsymbol S_{sc}^{r=1}, \boldsymbol S_{sc}^{r=2}, \boldsymbol S_{sc}^{r=3}\} \notag \\
={} & f_{CNN}(\{\boldsymbol R_{sc}^A,\boldsymbol R_{sc}^B\}; \{\boldsymbol \theta^2_1, \boldsymbol \theta^2_2, \boldsymbol \theta^2_3\}_{sc}) \notag \\
={} & f_{CNN}(\{\boldsymbol R_{sc}^A,\boldsymbol R_{sc}^B\}; \boldsymbol \theta_{sc}^2)
\end{align}
where $\boldsymbol \theta_{sc}^2 = \{ \boldsymbol \theta^2_1, \boldsymbol \theta^2_2, \boldsymbol \theta^2_3 \}_{sc}$ denotes the parameters of our module for SC representation. $\boldsymbol \theta^2_r(r=1,2,3)$ are the parameters of the matching branch with rate $r$.
We fuse the concatenated correspondence maps $\bm S_{sc}^{p}$ with learned parameters $\boldsymbol \theta_{sc}^3$, which indicate the weights of different matching branches, and output the fused correspondence representation. Inspired by \cite{Alpher06}, we further downsample the representation by max-pooling so as to preserve the most discriminative correspondence information and align it in a larger region. Finally, we obtain the correspondence representation $\boldsymbol S_{sc}^{f}$:
\begin{align}
\label{equ:weights}
\boldsymbol S_{sc}^{f} = {}& f_{CNN}(\{\boldsymbol S_{sc}^{r=1}, \boldsymbol S_{sc}^{r=2}, \boldsymbol S_{sc}^{r=3}\}; \boldsymbol \theta_{sc}^3) \notag \\
= {}& f_{CNN}(\{\boldsymbol R_{sc}^A,\boldsymbol R_{sc}^B\}; \boldsymbol \theta_{sc}^2, \boldsymbol \theta_{sc}^3)
\end{align}
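For clarity, the pyramid matching module and the fusion step can be summarized by the following PyTorch-style sketch (the implementation in this paper uses Caffe); the branch width, the ReLU activations and the $1\times1$ fusion convolution are assumptions made for illustration only.
\begin{verbatim}
import torch
import torch.nn as nn

class PyramidMatchingModule(nn.Module):
    """Three parallel 3x3 atrous convolutions (rates 1, 2, 3 -> fields of
    view 3x3, 5x5, 7x7), a learned fusion and max-pooling (sketch)."""
    def __init__(self, in_ch=2048, out_ch=256):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, dilation=r, padding=r)
            for r in (1, 2, 3)])                       # theta^2_r
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)   # theta^3
        self.pool = nn.MaxPool2d(2, 2, ceil_mode=True)

    def forward(self, r_a, r_b):
        x = torch.cat([r_a, r_b], dim=1)               # concatenated maps
        s = [torch.relu(b(x)) for b in self.branches]  # S^{r=1,2,3}
        return self.pool(torch.relu(self.fuse(torch.cat(s, dim=1))))  # S^f

# usage with two 1024-channel 10x5 SC representations
ppm = PyramidMatchingModule()
s_f = ppm(torch.randn(1, 1024, 10, 5), torch.randn(1, 1024, 10, 5))
\end{verbatim}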
Based on the same motivation and principle, we learn the correspondence of the color-texture distributions of the person's attributes (e.g., clothes, hair) with another standalone pyramid matching module. With the images' concatenated CTM representation $\{\boldsymbol R_{ctm}^A,\boldsymbol R_{ctm}^B\}$, we obtain the correspondence representation as follows:
\begin{align}
\vspace{-1ex}
\label{equ:descriptors}
\boldsymbol S_{ctm}^{f} = f_{CNN}(\{\boldsymbol R^A_{ctm},\boldsymbol R^B_{ctm}\}; \boldsymbol \theta^2_{ctm}, \boldsymbol \theta^3_{ctm})
\end{align}
where $\boldsymbol \theta^2_{ctm}$ and $\boldsymbol \theta^3_{ctm}$ denote the parameters of pyramid matching module for CTM representation.
\begin{figure}
\includegraphics[width=0.47\textwidth]{ppm_structure.pdf}
\caption{
Illustration of the pyramid matching module.
}
\label{fig:ppm_structure}
\end{figure}
\subsection{The Unified Framework and Learning}
The correspondence representations $\bm S_{sc}^{f}$ and $\bm S_{ctm}^{f}$ are fused into a correspondence descriptor of size 1024 by two fully connected layers. We pass the correspondence descriptor to another fully connected layer containing two softmax units. The probability that the two images in the pair, $\bm I^A$ and $\bm I^B$, depict the same person is computed from the softmax activations of these units:
\begin{equation}
\label{equ:softmax}
p = \frac {exp(\bm S_1(\bm S_{sc}^{f}, \bm S_{ctm}^{f}; \bm \theta^4))}{exp(\bm S_0(\bm S_{sc}^{f}, \bm S_{ctm}^{f}; \bm \theta^4))+exp(\bm S_1(\bm S_{sc}^{f}, \bm S_{ctm}^{f}; \bm \theta^4))}
\end{equation}
where $\bm S_0(\bm S_{sc}^{f}, \bm S_{ctm}^{f};\bm \theta^4)$ and $\bm S_1(\bm S_{sc}^{f}, \bm S_{ctm}^{f};\bm \theta^4)$ are the softmax units for $\bm S(\bm S_{sc}^{f}, \bm S_{ctm}^{f}; \bm \theta^4)$.
We reformulate the proposed framework as a unified deep convolutional framework based on Eqs.~\ref{equ:imgPre}--\ref{equ:descriptors}:
\begin{align}
\label{equ:fuse}
& S(\bm S_{sc}^{f}, \bm S_{ctm}^{f}; \bm \theta^4) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \{\{\bm \theta^3, \{\bm \theta^2_r\},\bm \theta^1 \}_{sc}; \notag\\
{} & \{\bm \theta^3, \{\bm \theta^2_r\},\bm \theta^1 \}_{ctm};\bm \theta^4 \}) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \bm \theta)
\end{align}
where $\bm \theta = \{ \{\bm \theta^1, \{\bm \theta^2_r\}, \bm \theta^3\}_{sc}; \{\bm \theta^1, \{\bm \theta^2_r\}, \bm \theta^3\}_{ctm};\bm \theta^4 \}$, and $r=1,2,3$.
We minimize the widely used cross-entropy loss in Eq.~\ref{equ:loss} to optimize the network of Eq.~\ref{equ:fuse} over a training set of $N$ pairs using stochastic gradient descent, where $l_n$ is the binary (1/0) label indicating whether the $n$th input pair depicts the same person or not. With this unified network, the processes of discriminative image representation learning and cross-person correspondence learning are optimized jointly, making the image representation optimal for this task.
\begin{align}
\label{equ:loss}
\bm L = - \frac{1}{N} \sum^N_{n=1} [ l_n \log p_n + (1-l_n) \log (1-p_n) ]
\end{align}
By setting $\{\bm \theta^1, \{\bm \theta^2_r\}, \bm \theta^3\}_{sc} = \bm 0$ or $\{\bm \theta^1, \{\bm \theta^2_r\}, \bm \theta^3\}_{ctm} = \bm 0$, we construct two independent convNets, named SC-PPMN and CTM-PPMN, which focus on semantic-component correspondence learning and color-texture distribution correspondence learning, respectively. These two convNets are given by Eq.~\ref{equ:unified_sc} and Eq.~\ref{equ:unified_ctm} and are optimized with the losses $\bm L_{sc}$ and $\bm L_{ctm}$ of the form given in Eq.~\ref{equ:loss}, respectively.
\begin{align}
\label{equ:unified_sc}
& S_{sc}(\bm S_{sc}^{f}, \bm \theta_{sc}^4) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \{\{\bm \theta^3, \{\bm \theta^2_r\},\bm \theta^1\}_{sc};\bm \theta_{sc}^4\}) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \bm \theta_{sc})
\end{align}
\begin{align}
\label{equ:unified_ctm}
& S_{ctm}(\bm S_{ctm}^{f}, \bm \theta_{ctm}^4) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \{\{\bm \theta^3, \{\bm \theta^2_r\},\bm \theta^1 \}_{ctm};\bm \theta_{ctm}^4\}) \notag \\
={} & f_{CNN}(\{\bm I^A,\bm I^B\}; \bm \theta_{ctm})
\end{align}
\section{Experiments}
\subsection{Datasets and Protocol}
We evaluate the proposed architecture and compare our results with those of the state-of-the-art approaches on six person Re-ID datasets, namely CUHK03~\cite{Alpher27}, CUHK01~\cite{Alpher28}, VIPeR~\cite{Alpher29}, PRID450s~\cite{Alpher38}, i-LIDS~\cite{Alpher39} and PRID2011~\cite{Alpher40}. All the approaches are evaluated with Cumulative Matching Characteristics (CMC) curves obtained from single-shot results, which characterize a ranking over all gallery images given a probe image. Our experiments are conducted with 10 random training/testing splits of each dataset, and the averaged results are presented. We conduct experiments on SC-PPMN, CTM-PPMN and MC-PPMN, which learn the correspondence between two person images with the SC features, the CTM features and the fused features, respectively. We report the experimental results and analyze the performances of the CTM and SC features.
\begin{table}[!htbp]
\centering
\caption{Datasets and settings in our experiments.}
\label{table:dataset}
\vspace{1ex}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dataset & CUHK03 & CUHK01 & VIPeR & PRID450s & i-LIDS & PRID2011\\
\hline
identities & 1360 & 971 & 632 & 450 & 119 & 385/749\\
images & 13164 & 3884 & 1264 & 900 & 479 & 1134\\
views & 2 & 2 & 2 & 2 & 2 & 2\\
train IDs & 1160 & 871;485 & 316 & 225 & 59 & 100\\
test IDs & 100 & 100;486 & 316 & 225 & 59 & 100\\
\hline
\end{tabular}}
\end{table}
Table \ref{table:dataset} lists the description of each dataset and our experimental settings with the training and testing splits. The CUHK03 dataset provides two settings: a labelled setting with manually annotated pedestrian bounding boxes, and a detected setting with automatically generated bounding boxes, in which possible misalignments and missing body parts are introduced for a more realistic evaluation. In this paper, the evaluation results on both the labelled and detected settings are reported. For the CUHK01 dataset, we report results on two different settings: 100 test IDs and 486 test IDs. The VIPeR and PRID450s datasets are relatively small and contain only one image per person in each view. The i-LIDS dataset is constructed from video frames captured in a busy airport arrival hall and contains 479 images of 119 persons, with four images per person on average. The PRID2011 dataset consists of images captured by two static surveillance cameras, in which views A and B contain 385 and 749 persons, respectively, with 200 persons appearing in both views. Following the procedure described in~\cite{Alpher33} for evaluation on the test set, view A is used for the probe set (100 person IDs) and view B is used for the gallery set, which contains all images of view B (649 person IDs) except the 100 training samples.
\subsection{Training the Network}
The proposed architecture is implemented on the widely used deep learning framework Caffe~\cite{Alpher30} with an NVIDIA TITAN X GPU. We use stochastic gradient descent (SGD) to update the weights of the network. The parameters for training SC-PPMN, CTM-PPMN and MC-PPMN are listed in Table \ref{table:para}. We start with a base learning rate and gradually decrease it as training progresses using a polynomial decay policy, $ \eta^{i} = \eta^{0}(1-\frac{i}{max\_iter})^p$ with $p = 0.5$, where $i$ is the current mini-batch iteration and $max\_iter$ is the maximum number of iterations. We train the MC-PPMN model by fixing the parameters of the pre-trained SC-PPMN and CTM-PPMN models.
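The learning-rate schedule above corresponds to Caffe's ``poly'' policy; a minimal Python equivalent (for illustration) is:
\begin{verbatim}
def poly_lr(base_lr, i, max_iter, p=0.5):
    """Polynomial decay: eta_i = eta_0 * (1 - i / max_iter) ** p."""
    return base_lr * (1.0 - float(i) / max_iter) ** p

# e.g. SC-PPMN (base learning rate 0.01, 160K iterations, see Table 2)
lr_mid = poly_lr(0.01, 80000, 160000)   # ~= 0.00707 at the halfway point
\end{verbatim}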
\textbf{Data Augmentation}. To make the model robust to the image translation variation and to enlarge the data set, we sample 5 images around the image center, with translation drawn
from a uniform distribution in the range $[-8,8]\times[-4,4]$ for an original image of size $160\times80$.
\textbf{Hard Negative Mining (hnm)}. In practice, negative pairs far outnumber positive pairs, which can lead to data imbalance. Moreover, some of these negative pairs remain hard to distinguish. To address these difficulties, we sample hard negative pairs for retraining our network following~\cite{Alpher16}.
\begin{table}[!htbp]
\centering
\caption{The parameters for training.}
\label{table:para}
\vspace{1ex}
\scalebox{0.8}{
\begin{tabular}{|c|c|c|c|}
\hline
Parameters & SC-PPMN & CTM-PPMN & MC-PPMN \\
\hline
Training Time (hours) & 40-48 & 16 & 10 \\
Maximum Iteration & 160K & 30K & 10K \\
Batch Size & 100 & 800 & 150 \\
Momentum & 0.9 & 0.9 & 0.9 \\
Weight Decay & 0.0002 & 0.0002 & 0.0002 \\
Base Learning Rate & 0.01 & 0.1 & 0.0001 \\
\hline
\end{tabular}}
\end{table}
\begin{table}
\centering
\caption{Comparison of state-of-the-art results on labelled and detected CUHK03 dataset with 100 test IDs. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:CUHK03}
\vspace{1ex}
\scalebox{0.75}{
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{labelled CUHK03} &
\multicolumn{3}{c}{detected CUHK03} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\ \hline
KISSME & 14.17 & 37.46 & 52.20 & 11.70 & 33.45 & 45.69 \\
LMNN & 7.29 & 19.64 & 30.74 & 6.25 & 17.87 & 26.60 \\
LSSCDL & 57.00 & - & - & 51.20 & - & - \\
LOMO+LSTM & - & - & - & 57.30 & 80.10 & 88.30 \\
LOMO+XQDA & 52.20 & 82.23 & 92.14 & 46.25 & 78.90 & 88.55 \\ \hline
CTM-PPMN (no hnm) & 73.52 & 95.12 & 98.56 & 68.44 & 91.50 & 96.98 \\
CTM-PPMN (hnm) & 76.58 & 95.64 & 98.24 & 70.68 & 92.58 & 97.18 \\ \hline
\hline
FPNN & 20.65 & 50.94 & 67.01 & 19.89 & 49.41 & - \\
ImprovedDL & 54.74 & 86.50 & 93.88 & 44.96 & 76.01 & 81.85 \\
PIE(R)+Kissme & - & - & - & 67.10 & 92.20 & 96.60 \\
SICIR & - & - & - & 52.17 & - & - \\
DCSL (no hnm) & 78.60 & 97.76 & 99.30 & - & - & - \\
DCSL (hnm) & 80.20 & 97.73 & 99.17 & - & - & - \\
MTDnet & 74.68 & 95.99 & 97.47 & - & - & - \\
JLML & 83.20 & 98.00 & 99.40 & 80.60 & \textbf{96.90} & \textbf{98.70} \\ \hline
SC-PPMN (no hnm) & 83.20 & 97.50 & 99.25 & 77.60 & 96.10 & 98.60 \\
SC-PPMN (hnm) & 85.50 & 98.20 & 99.50 & 80.63 & 95.62 & 98.07 \\ \hline
\hline
MC-PPMN (no hnm) & 84.36 & \textbf{98.56} & \textbf{99.80} & 81.28 & 96.14 & 98.54 \\
MC-PPMN (hnm) & \textbf{86.36} & 98.54 & 99.66 & \textbf{81.88} & 96.56 & 98.58\\ \hline
\end{tabular}}
\end{table}
\subsection{Experiments Results}
We compare our proposed MC-PPMN with several recent methods, including hand-crafted feature based methods: ITML~\cite{Alpher42}, LMNN~\cite{Alpher13}, KISSME~\cite{Alpher14}, LOMO+XQDA~\cite{Alpher32}, LSSCDL~\cite{Alpher31}, LOMO+LSTM~\cite{Alpher19}; and DCNN feature based methods: FPNN~\cite{Alpher27}, ImprovedDL~\cite{Alpher16}, Single-Image and Cross-Images Representation learning (SICIR)~\cite{Alpher22}, TCP~\cite{Alpher33}, DCSL~\cite{Alpher06}, Pose Invariant Embedding (PIE(R)+Kissme)~\cite{Alpher34}, MTDnet (including MTDnet-cross)~\cite{Alpher43}, and JLML~\cite{Alpher41}. We report the evaluation results in Tables~\ref{table:CUHK03}--\ref{table:i-LIDS}.
\textbf{Comparisons on CUHK03 dataset}.
We conduct experiments on both the labelled and detected CUHK03 datasets. From Table \ref{table:CUHK03}, we see that our proposed approach achieves better results than the state-of-the-art methods. On the labelled dataset, our method outperforms the next best method by 3.16\% (86.36\% vs. 83.20\%). On the detected dataset, the performance is reduced by the misalignment and incompleteness caused by the detector. However, the proposed method still achieves an improvement of 1.28\% over the next best method (81.88\% vs. 80.60\%).
\begin{table}[!htbp]
\centering
\caption{Comparison of state-of-the-art results on CUHK01 dataset with 100 test IDs and 486 test IDs. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:CUHK01}
\vspace{1ex}
\scalebox{0.75}{
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{CUHK01(100 test IDs)} &
\multicolumn{3}{c}{CUHK01(486 test IDs)} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\ \hline
KISSME & 29.40 & 60.18 & 74.44 & - & - & - \\
LMNN & 21.17 & 48.51 & 62.98 & 13.45 & 31.33 & 42.25\\
LSSCDL & 65.97 & 48.51 & 62.98 & - & - & - \\ \hline
CTM-PPMN (no hnm) & 71.18 & 91.94 & 96.54 & 48.01 & 75.91 & 84.34 \\
CTM-PPMN (hnm) & 73.74 & 92.32 & 98.18 & 53.57 & 79.32 & 87.13 \\ \hline
\hline
FPNN & 27.87 & 59.64 & 73.53 & - & - & - \\
ImprovedDL & 65.00 & 89.00 & 94.00 & 47.53 & 71.60 & 80.25 \\
SICIR & 71.80 & - & - & - & - & - \\
TCP & - & - & - & 53.70 & 84.30 & 91.00 \\
MTDnet-cross & 78.50 & 96.50 & 97.50 & - & - & - \\
DCSL (no hnm) & 88.00 & 96.90 & 98.10 & - & - & - \\
DCSL (hnm) & 89.60 & 97.80 & 98.90 & 76.54 & 94.24 & 97.49 \\ \hline
SC-PPMN (no hnm) & 92.10 & 99.50 & 99.95 & - & - & - \\
SC-PPMN (hnm) & 93.10 & 98.80 & 99.80 & 77.16 & 92.80 & 97.53 \\ \hline
\hline
MC-PPMN (no hnm) & 92.32 & 98.68 & 99.60 & - & - & - \\
MC-PPMN (hnm) & \textbf{93.45} & \textbf{99.62} & \textbf{99.98} & \textbf{78.95} & \textbf{94.67} & \textbf{97.64} \\ \hline
\end{tabular}}
\end{table}
\textbf{Comparisons on CUHK01 dataset}.
Table \ref{table:CUHK01} lists the top recognition rates on the CUHK01 dataset with 100 test IDs and 486 test IDs. With 100 test IDs, our proposed method achieves the best recognition rates of 93.45\% (rank-1), 99.62\% (rank-5) and 99.98\% (rank-10), compared with 89.60\%, 97.80\% and 98.90\% by DCSL (hnm), the next best method. For the setting with 486 test IDs, only 485 identities and half of the positive samples are left for training, which makes it challenging for our deep architecture to converge. Following \cite{Alpher06}, we fine-tune the network for CUHK01 with the model pre-trained on CUHK03 and achieve an improvement of 2.41\% (78.95\% vs. 76.54\%) in the rank-1 recognition rate.
\begin{table}
\centering
\caption{Comparison of state-of-the-art results on the VIPeR and PRID450s datasets. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:VIPeR}
\vspace{1ex}
\scalebox{0.70}{
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{VIPeR} &
\multicolumn{3}{c}{PRID450s} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\ \hline
KISSME & 19.60 & 48.00 & 62.20 & 15.0 & - & 39.0 \\
LSSCDL & 42.66 & - & 84.27 & 60.49 & - & 88.58 \\
LOMO+LSTM & 42.40 & 68.70 & 79.40 & - & - & -\\
LOMO+XQDA & 40.00 & 68.13 & 80.51 & 61.42 & - & 90.84 \\ \hline
CTM-PPMN & 32.12 & 64.24 & 80.38 & 28.98 & 59.47 & 73.60 \\ \hline
\hline
ImprovedDL & 34.81 & 63.61 & 75.63 & 34.81 & 63.72 & 76.24 \\
PIE(R) & 27.44 & 43.01 & 50.82 & - & - & -\\
SICIR & 35.76 & - & - & - & - & -\\
TCP & 47.80 & 74.70 & 84.80 & - & - & -\\
DCSL & 44.62 & 73.42 & 82.59 & - & - & - \\
JLML & \textbf{50.20} & 74.20 & 84.30 & - & - & -\\ \hline
SC-PPMN & 45.82 & 74.68 & 86.08 & 52.08 & 82.58 & 88.36 \\ \hline
\hline
MC-PPMN & 50.13 & \textbf{81.17} & \textbf{91.46} & \textbf{62.22} & \textbf{84.00} & \textbf{93.56}\\ \hline
\end{tabular}}
\end{table}
\textbf{Comparisons on VIPeR and PRID450s dataset}.
Following \cite{Alpher16}, we pre-train the network using the CUHK03 and CUHK01 datasets, and fine-tune it on the training sets of VIPeR and PRID450s. As shown in Table \ref{table:VIPeR}, the proposed MC-PPMN outperforms the state-of-the-art methods in all cases except the rank-1 recognition rate on the VIPeR dataset, where it is comparable with the best competing method, JLML.
\begin{table}
\centering
\caption{Comparison of state-of-the-art results on i-LIDS and PRID2011 datasets. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:i-LIDS}
\vspace{1ex}
\scalebox{0.70}{
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{i-LIDS} &
\multicolumn{3}{c}{PRID2011} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10\\ \hline
ITML & 29.00 & 54.00 & 70.50 & 12.00 & - & 36.00 \\
KISSME & - & - & - & 15.00 & - & 39.00 \\
LMNN & 28.00 & 53.80 & 66.10 & 10.00 & - & 30.00 \\ \hline
CTM-PPMN & 44.17 & 73.31 & 85.02 & 12.00 & 32.00 & 42.00 \\ \hline
\hline
TCP & 60.40 & 82.70 & 90.70 & 22.00 & 47.00 & 57.00 \\
MTDnet & 57.8 & 78.61 & 87.28 & 32.00 & 51.00 & 62.00 \\
\hline
SC-PPMN & 54.80 & 81.92 & 92.32 & 32.00 & 53.00 & 63.00 \\ \hline
\hline
MC-PPMN & \textbf{62.69} & \textbf{84.80} & \textbf{93.31} & \textbf{34.00} & \textbf{60.00} & \textbf{69.00}\\ \hline
\end{tabular}}
\end{table}
\textbf{Comparisons on i-LIDS and PRID2011 datasets}.
We also conduct experiments on the i-LIDS and PRID2011 datasets. Table \ref{table:i-LIDS} shows our results. For both datasets, MC-PPMN achieves the best rank-1, rank-5 and rank-10 recognition rates, which demonstrates the effectiveness of the proposed method for small training sets.
\begin{table}
\centering
\caption{The improvement of the fused correspondence representations for rank-1 recognition rates on the experimental datasets.}
\label{table:enhancement}
\vspace{1ex}
\scalebox{0.68}{
\begin{tabular}{c|ccc|c}
\hline
Dataset & CTM-PPMN & SC-PPMN & MC-PPMN & Improvement \\ \hline
CUHK03(labelled) & 76.58 & 85.50 & 86.36 & 0.86 \\
CUHK03(detected) & 70.68 & 80.63 & 81.88 & 1.25 \\
CUHK01(100 test IDs) & 73.74 & 93.10 & 93.45 & 0.35 \\
CUHK01(486 test IDs) & 53.57 & 77.16 & 78.95 & 1.79 \\
VIPeR & 32.12 & 45.82 & 50.13 & 4.31 \\
PRID450s & 28.98 & 52.08 & 62.22 & 10.14\\
i-LIDS & 44.17 & 54.80 & 62.69 & 7.89\\
PRID2011 & 12.00 & 32.00 & 34.00 & 2.00 \\ \hline
\end{tabular}}
\end{table}
\textbf{The effect of fusion of the correspondence representations}. Compared with the results obtained by learning the correspondence between two person images with the CTM features and the SC features separately, Table \ref{table:enhancement} shows the improvement in the rank-1 recognition rates achieved by fusing the correspondence representations. For the CUHK03 and CUHK01 datasets, we achieve an absolute gain of about 1.00\%, and for the other datasets the absolute gain is at least 2.00\%. In particular, the proposed method achieves a 10.14\% improvement in the rank-1 recognition rate on PRID450s. These results demonstrate the effectiveness of fusing the correspondence representations, which is especially pronounced on the small datasets.
\textbf{The effect of hard negative mining}. We also report the results of our models both with and without hnm, as shown in Tables \ref{table:CUHK03} and \ref{table:CUHK01}. We observe an absolute gain of about 1.00\% compared with the same model without hnm.
\section{Conclusion}
In this paper, we have developed a novel multi-channel deep convolutional architecture for person re-identification. We employ deep convNets to map a person's semantic components and color-texture distributions into the required feature spaces. Based on the learned deep features and a pyramid matching strategy, we learn their correspondence representations and fuse them to perform the re-identification task. The effectiveness and promise of our method are demonstrated by extensive evaluations on various datasets. The results show that our method yields a remarkable improvement over the competing models.
\section{Acknowledgments}
\noindent This work was supported by National Key R\&D Program of China (No. 2017YFB1002400), National Natural Science Foundation of China (No. 61702448, 61672456), the Key R\&D Program of Zhejiang Province (No. 2018C03042), the Fundamental Research Funds for the Central Universities (No. 2017QNA5008, 2017FZA5007). X. Li was also supported in part by the National Natural Science Foundation of China under Grant U1509206 and Grant 61472353, and the Alibaba-Zhejiang University Joint Institute of Frontier Technologies.
\section{Introduction}
The question of how dissipation influences topological quantum systems is heavily addressed in current research \cite{Esaki2011ZakPhaseNonHermitian,Bardyn2013a,Diehl2011,Budich2015DissipativePreparation,Weimann2017EdgeStatesPhotonicCrystal}.
In the literature, various concepts have been used and proposed to generalize the well-established tools for characterizing topological phases in closed quantum systems to open systems.
In this field, two different approaches to describing dissipation are frequently used.
One is based on Lindblad operators, which describe the interactions of a system with a reservoir, and allow for the calculation of the time evolution of the system's density matrix via \emph{Lindblad master equations} (LME) \cite{Lindblad1976}.
This framework has been applied to prepare quantum systems in topologically nontrivial states by engineering the dissipative dynamics \cite{Budich2015DissipativePreparation,Diehl2011}.
Also generalizations of topological invariants have been formulated in the course of this framework and the effects of dissipation on the topological properties of open quantum systems have been investigated \cite{Bardyn2013a,Rivas2013DensityMatrixChernInsulators,Budich2015DensityMatrices,Linzner2016ThoulessPumping,Grust2017TopologicalOrder}.
Another approach to describing dissipation uses complex potentials and effective non-Hermitian Hamiltonians $H_\text{eff}$ to model the gain and loss of particles.
In this context reservoirs which are invariant under a spatial inversion $\mathcal{P}$ and a simultaneous time reflection $\mathcal{T}$ (interchange of particle sinks and sources) are of special interest. Such systems can be effectively described by $\mathcal{PT}$-symmetric Hamiltonians, which then fulfill $\left[ H_\text{eff}, \mathcal{PT} \right] = 0$ and, in spite of their non-Hermiticity, still can possess entirely real eigenvalue spectra \cite{BenderBoettcher1998RealSpectra}.
Within this description the effects of dissipation on topological systems have been investigated and it has been debated, whether or not topologically nontrivial states may exist in combination with $\mathcal{PT}$-symmetric gain-loss patterns \cite{Hu2011AbsenceToplogicalPhases,Esaki2011ZakPhaseNonHermitian,Ghosh2012TopologicalPhaseNonHermitian,Zhu2014PTNonHermitianSSH,Yuce2015PTFloquetTopological,Klett2017PTSymmetry,Menke2017TopologicalQuantumWires}. Special interest is triggered in optics \cite{Schomerus2013PhotonicLattices}, where experimental realizations have successfully been developed \cite{Zeuner2015ObservationBulk,Weimann2017EdgeStatesPhotonicCrystal}. While most of the works address the appearance of edge states, topological invariants have also been formulated for the non-Hermitian case \cite{Esaki2011ZakPhaseNonHermitian,Menke2017TopologicalQuantumWires,we2017,Lieu2018}.
The generic example of a one-dimensional topological insulator is the \emph{Su-Schrieffer-Heeger} (SSH) model \cite{SSH1979} which was initially proposed to study solitons in polyacetylene.
In the present paper we compare both approaches by way of the example of the SSH model subject to gain and loss. A first comparison was done with regard to the appearance of edge states \cite{Klett2018MasterPT}. Here we want to go one step further and study whether the characterization of topological phases in both approaches leads to an agreement. To do so, we introduce two different $\mathcal{PT}$-symmetric dissipative patterns and investigate a topological invariant, viz.\ the real part of the complex Zak phase.
We find that the topological invariants of both approaches coincide in the parameter regime where the effective theory possesses an unambiguous interpretation.
Further we find a remarkable accordance in the long-term dynamics following from both approaches, where we justify a construction scheme of a fixed-point-like many-body state in the effective $\mathcal{PT}$-symmetric theory.
Since we combine disparate descriptions and methods, the first part of the paper (Secs.\ \ref{sec:ssh} - \ref{sec:ComplexBerry}) is dedicated to giving an overview of the methods and concepts used in the analysis.
We shortly summarize the most important aspects of the SSH model in Sec.\ \ref{sec:ssh} before we introduce the dissipative frameworks in Sec.\ \ref{sec:DissipativeFrameworks}.
For spatially periodic systems (periodic boundary conditions) we introduce a momentum basis, which allows for the generalization of the Zak phase to dissipative systems described by an LME.
The method of expressing a Liouvillean in a momentum basis was previously used in \cite{Rivas2013DensityMatrixChernInsulators} to generalize the Chern number to dissipative systems.
In Sec.\ \ref{sec:ComplexBerry} we argue that in case of the effective description of dissipation the real part of the complex Zak phase is quantized and can be used as a topological invariant, the corresponding topological phases of which are protected by $\mathcal{PT}$ symmetry.
The results of the comparison of the descriptions are presented in Sec.\ \ref{sec:Comparison}, where we find clear similarities in both approaches for modeling dissipation. \section{\label{sec:ssh}SSH model}
The paradigmatic one-dimensional SSH model \cite{SSH1979} describes spin-polarized non-interacting fermions on a lattice with $n$ sites arranged in double-well unit cells in such a way that nearest-neighbor tunneling alternates between $t_1$ and $t_2$ (see Fig.\ \ref{fig:model}). For comparison with the dissipative extensions considered in this work its properties are briefly outlined. The Hamiltonian $H$ describing the SSH model is given by
\begin{align}\label{eq:HermSSH}
\begin{split}
H & =-\sum_{j=1}^{n/2} \left( t_1 c_{2j-1}^{\dagger} c_{2j}^{\phantom{\dagger}} + t_2 c_{2j}^{\dagger} c_{2j+1}^{\phantom{\dagger}} + \t{h.c.}\right)
\\
& = -\sum_{j=1}^{n/2} \Big( t_1 \ket{j,A} \! \bra{j,B}
+ t_2 \ket{j,B} \! \bra{j+1,A} + \t{h.c.}\Big),
\end{split}\hspace*{-0.35cm}
\end{align}
where $c_i^{\phantom{\dagger}}$ $(c_{i}^{\dagger})$ denotes the annihilation (creation) operator of a spinless fermion at the site $i$.
For staggered hopping amplitudes $t_1 \neq t_2$, the Hamiltonian in Eq.\ \eqref{eq:HermSSH} describes a two-band topological insulator \cite{HassanKane2010TopologicalInsulators} that undergoes a topological phase transition at $t_1 = t_2$ where the band gap closes.
In fact, the periodic translation invariant system can be solved analytically by Fourier transformation $\ket{k,X} = 1/\sqrt{n/2} \sum_j \ee^{\ii j k} \ket{j,X}$ of the unit cell index $j$, where $X \in \{A, B\}$ and $k = 0, 2\gpi/(n/2), 4\gpi/(n/2), \dots, 2\gpi (n/2-1)/(n/2)$.
Fac\-torizing $\ket{k,X} = \ket{k} \otimes \ket{X}$, the Hamiltonian is fully described by the $2\times 2$ \emph{Bloch Hamiltonian} $H_\text{B}$ (the matrix occurring on the right side),
\begin{equation}
\label{equ:sshBlochHamiltonian}
H = \sum_k \ket{k} \! \bra{k} \otimes
\begin{pmatrix}
0 & -t_1 -t_2\ee^{\ii k}
\\
-t_1 - t_2 \ee^{-\ii k} & 0
\end{pmatrix} ,
\end{equation}
where the sum runs over the discrete steps of $k$ mentioned above.
The Bloch Hamiltonian yields the energies $E_m(k) = \pm \sqrt{t_1^2 + t_2^2 + 2 t_1 t_2 \cos(k)}$.
The topological invariant of the Bloch band corresponding to $\ket{u_m(k)}$ (an eigenvector of the Bloch Hamiltonian) is given by the \emph{Berry phase} $\nu_m$ \cite{Berry1984BerryPhase} in momentum space, also known as \emph{Zak phase} \cite{Zak1989ZakPhase}
\begin{equation}
\nu_m = \ii \oint_{0}^{2\pi} \!\!\!\!\!\! \braket{u_m(k) | \partial_k | u_m(k)} \dd k
\end{equation}
and takes a value of $\nu_m = \gpi$ if $t_1 < t_2$ and $\nu_m = 0$ if $t_1 > t_2$ \cite{Asboth2016TopologicalInsulators}. In the topologically \emph{nontrivial} phase ($\nu_m \neq 0$) the SSH model possesses zero-energy edge modes at open boundaries, which are protected by the topological properties of the system.
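The value of $\nu_m$ can be made explicit by writing the Bloch Hamiltonian in terms of Pauli matrices,
\begin{equation}
H_\text{B}(k) = d_x(k)\,\sigma_x + d_y(k)\,\sigma_y,
\qquad
d_x(k) = -t_1 - t_2\cos(k), \quad d_y(k) = t_2\sin(k),
\end{equation}
so that $\nu_m$ equals $\gpi$ times the winding number of $\bm{d}(k)$ around the origin. As $k$ traverses the Brillouin zone, $\bm{d}(k)$ describes a circle of radius $t_2$ centered at $(-t_1, 0)$, which encloses the origin if and only if $t_1 < t_2$, reproducing the values quoted above.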
\begin{figure}
\centering
\includegraphics[page = 1, width = \linewidth]{Fig1.pdf}
\vspace{3ex}
\includegraphics[page = 3, width = \linewidth]{Fig1.pdf}
\vspace{3ex}
\includegraphics[page = 2, width = \linewidth]{Fig1.pdf}
\vspace{-3ex}
\caption{Sketch of the SSH model and its extensions for $n=8$ sites. Top row: The Hermitian model consists of double-well unit cells (sites $A, B$) with intra-tunneling $t_1$ joined by the inter-cell hopping $t_2$. Lower side: In this work we study dissipative extensions of the SSH model where sites marked with a plus (minus) sign indicate single-particle gain (loss). The two patterns with dissipation only among the boundary sites ($U_1$, middle row) and alternating gain and loss ($U_2$, bottom row) are motivated by previous works \cite{Zhu2014PTNonHermitianSSH, Wang2015SpontaneousPTBreaking, Menke2017TopologicalQuantumWires, Weimann2017EdgeStatesPhotonicCrystal, Klett2017PTSymmetry}.}
\label{fig:model}
\end{figure} \section{\label{sec:DissipativeFrameworks}Dissipative frameworks}
In the further course of this paper we allow particles to enter or exit the system at certain sites, as sketched in the lower part of Fig.\ \ref{fig:model}. This notion of dissipation is modeled by two different approaches.
\subsection{Lindblad master equation}
The LME \cite{Lindblad1976} results from the Markovian approximation \cite{Breuer2007OpenQuantumSystems} of the reservoir and describes the dissipative (trace and positivity preserving) evolution of the density matrix $\rho$,
\begin{equation}\label{eq:LindbladMaster}
\dot{\rho} = -\ii \big[ H, \rho \big] + \sum_{\mu} \big(2 L_{\mu}^{\phantom{\dagger}} \rho L_{\mu}^{\dagger} - \big\{ L_{\mu}^{\dagger}L_{\mu}^{\phantom{\dagger}}, \rho \big\} \big) \equiv \hat{\mathcal{L}} \rho,
\end{equation}
where the unitary evolution generated by $H$ is supplemented by the influence of collapse operators $L_\mu$ characterizing the dissipative coupling to the reservoir, and we have set \mbox{$\hbar \equiv 1$.}
Writing Eq.\ \eqref{eq:LindbladMaster} as an operator equation introduces the Liouville operator (or Liouvillean) $\hat{\mathcal{L}}$. In the long-time limit, the system converges to the \emph{non-equilibrium steady state} (NESS), which satisfies $\hat{\mathcal{L}} \rho_\text{ness} = 0$.
In this work, we choose the Lindblad operators
$L_{\mu}=\sqrt{\gamma} c_\mu^{\dagger}$ $(\sqrt{\gamma} c_\mu^{\phantom{\dagger}})$ to describe single-particle gain (loss) with a rate $\gamma$.
Consequently, the dissipative patterns presented in Fig.\ \ref{fig:model} are expressed by the following choice of Lindblad couplings ($\mu = 1, \dots, n$),
\begin{subequations}
\begin{align}
\label{eq:LindbladOpsU1}
\hspace*{-2mm} U_1: \quad \hspace*{0.3mm} L_1 &= \sqrt{\gamma} c_{1}^{\phantom{\dagger}}, \quad
L_n= \sqrt{\gamma} c_{n}^{\dagger}, \quad L_{\mu}=0 \ \text{(else)} ,
\\
\label{eq:LindbladOpsU2}
\hspace*{-2mm} U_2: \quad L_\mu &= \sqrt{\gamma} c_{\mu}^{\dagger} \ \text{($\mu$ odd)}, \quad
L_\mu = \sqrt{\gamma} c_{\mu}^{\phantom{\dagger}} \ \text{($\mu$ even)}.
\end{align}
\end{subequations}
We note that the reservoir $U_2$ does not spoil the translational symmetry and the appropriate system can still be naturally described in momentum space. \subsubsection*{\label{sec:3rdQZ}Third quantization}
Any fermionic dissipative system described by a master equation in Lindblad form \eqref{eq:LindbladMaster} with a quadratic Hamiltonian $H$ and linear collapse operators $L_\mu$ can be treated by means of a method named \emph{third quantization}, presented in references \cite{Prosen2008ThirdQuantizationFermions, Prosen2008XYChain}:
Both constituents of the Liouvillean $\hat{\mathcal{L}}$ are expanded in terms of $2n$ \emph{abstract Hermitian Majorana operators} $w_j$ (which are in our case related to the fermionic operators by $w_{2m-1} = c_m + c_m^\dagger, w_{2m} = \ii(c_m - c_m^\dagger)$ with $m = 1, \dots, n$) as $H = \sum_{j,k = 1}^{2n} w_j H_{jk} w_k$ and $L_\mu = \sum_{j = 1}^{2n}l_{\mu, j} w_j$.
The operator space $\mathcal{K}$ of the $w_j$ is spanned by the $2^{2n}$-dimensional orthonormal basis vectors $P_{\alpha_1,\dots,\alpha_{2n}} = w_1^{\alpha_1} \cdots w_{2n}^{\alpha_{2n}}$ with $\alpha_j \in \{0, 1\}$.
By introducing super-operators on $\mathcal{K}$ in terms of \emph{adjoint Fermi maps} $\hat{c}_j, \hat{c}_j^\dagger\ (j = 1, \dots, 2n)$ that act on the canonical basis as $\hat{c}_j \ket{P_{\alpha_1,\dots,\alpha_{2n}}} = \delta_{\alpha_j,1} \ket{w_j P_{\alpha_1,\dots,\alpha_{2n}}}, \hat{c}_j^\dagger \ket{P_{\alpha_1,\dots,\alpha_{2n}}} = \delta_{\alpha_j,0} \ket{w_j P_{\alpha_1,\dots,\alpha_{2n}}}$, the Liouvillean can be rewritten and becomes bilinear after a linear transformation to $4n$ \emph{adjoint Hermitian Majorana maps} $\hat{a}_{2j-1} = (\hat{c}_j + \hat{c}_j^\dagger)/\sqrt{2}$, $\hat{a}_{2j} = \ii (\hat{c}_j - \hat{c}_j^\dagger)/\sqrt{2}$, where $j = 1, \dots, 2n$.
The resul\-ting expression,
\begin{equation}
\hat{\mathcal{L}} = \sum_{i,j=1}^{4n} \hat{a}_i A_{ij} \hat{a}_j - A_0 \hat{\identity},
\end{equation}
introduces the antisymmetric $4n\times 4n$ \emph{shape matrix} $\bm{A} = - \bm{A}^T$, which contains all information about the system.
Its eigenvalues, referred to as \emph{rapidities}, come in pairs $\pm\beta_{1,\dots, 2n}$ with $\mathrm{Re}(\beta_j) \ge 0$.
By means of the shape matrices' eigenvectors, the dissipative system decomposes into \emph{normal master modes} (NMM) that are populated at an exponential rate given by $2\Re(\beta_j)$ \cite{Prosen2008ThirdQuantizationFermions}.
Hence, the NESS is unique whenever all rapidities $\beta_j$ satisfy $\Re(\beta_j) > 0$. As shown in reference \cite{Prosen2008ThirdQuantizationFermions}, NESS expectation values can be computed straightforwardly in this case.
Applying this procedure to an SSH ring with Lindblad operators $\sqrt{\gamma} c_\mu, \sqrt{\gamma}c_\mu^{\dagger}$ arranged in the alternating pattern of $U_2$, the shape matrix takes the banded form
\begin{subequations} \label{eq:ShapePeriodicBC}
\begin{align}
\bm{A} &= \frac{1}{2}\begin{pmatrix}
\gamma \bm{\Gamma}_\text{g} & -t_1 \bm{T}& & & & -t_2 \bm{T}
\\
-t_1 \bm{T} & \gamma \bm{\Gamma}_\text{l} & -t_2 \bm{T} &
\\
& -t_2 \bm{T} & \gamma \bm{\Gamma}_\text{g} & -t_1 \bm{T} &
\\
& & \ddots & \ddots & \ddots
\\
-t_2 \bm{T}
\end{pmatrix},
\end{align}
with $4\times 4$ matrices $\bm{\Gamma}_\text{g(l)} = - \identity_{2} \otimes \sigma_y \varpm \sigma_y \otimes \left( \ii \sigma_x + \sigma_z\right)$ and $\bm{T} = -\ii\sigma_y \otimes \identity_{2}$.
As the dissipative pattern does not spoil the system's unit cell structure, $\bm{A}$ is naturally expressed by partitioning the matrix into $8\times 8$ blocks labeled with $j = 1, \dots, n/2$, which themselves consist of $4\times4$ blocks $A,B$.
Adopting a projector notation the shape matrix reads
\begin{align}
\begin{split}
\bm{A} = \frac{1}{2}\sum_{j=1}^{n/2} &\Big( \gamma \big[ \ket{j, A}\bra{j, A} \otimes \bm{\Gamma}_\text{g}
+ \ket{j, B}\bra{j, B} \otimes \bm{\Gamma}_\text{l}\big]
\\
&-t_1 \big[ \ket{j,A}\bra{j,B} + {\t{h.c.}} \big] \otimes \bm{T}
\\
&-t_2 \big[ \ket{j,B}\!\bra{j+1,A} + {\t{h.c.}} \big] \otimes \bm{T}\Big),
\end{split}
\end{align}
which resembles the form of the $\mathcal{PT}$-symmetric Hamiltonian subject to $U_2$ mentioned below with matrices $\bm{T}, \bm{\Gamma}_\text{g}, \bm{\Gamma}_\text{l}$ replacing the scalar entries of tunneling amplitudes and gain and loss rates, respectively.
In analogy with the Hamiltonian case, the representation can be further compacted by transforming the external degree of freedom into momentum space with a Fourier transform, $\ket{k, X} = 1/\sqrt{(n/2)} \sum_{j = 1}^{n/2} \mathrm{e}^{\ii j k} \ket{j, X}$ with $k = 0, 2\gpi/(n/2), 4\gpi/(n/2), \dots, 2\gpi (n/2-1)/(n/2)$ and $X \in \{A, B\}$, resulting in
\begin{align}
\label{eq:BlochLiouvillean}
\begin{split}
\bm{A} = \frac{1}{2} \sum_k \ket{k}\!\bra{k} \otimes& \Big(\gamma \ket{A}\!\bra{A} \otimes \bm{\Gamma}_\text{g} + \gamma \ket{B}\!\bra{B} \otimes \bm{\Gamma}_\text{l}
\\
&- \big[(t_1 + \mathrm{e}^{\ii k} t_2) \ket{A}\!\bra{B} + \text{h.c.}\big] \otimes \bm{T}\Big)
\\
= \frac{1}{2} \sum_k \ket{k}\!\bra{k} \otimes& \begin{pmatrix}
\gamma \bm{\Gamma}_\text{g} & -(t_1 + t_2 \mathrm{e}^{\ii k}) \bm{T}
\\
-(t_1 + t_2 \mathrm{e}^{-\ii k}) \bm{T} & \gamma \bm{\Gamma}_\text{l}
\end{pmatrix},
\end{split}
\end{align}
\end{subequations}
where the additional factorization $\ket{k, X} = \ket{k} \otimes \ket{X}$ has been assumed.
Due to the similarity of this procedure and the derivation of the Bloch Hamiltonian, we will refer to the matrix in the last equation as \emph{Bloch Liouvillean}. \subsection{\label{subsec:effectiveTheory}\texorpdfstring{\bm{$\mathcal{PT}$}}{PT}-symmetric potentials (effective theory)}
Our second approach is given by a description of dissipation using complex on-site potentials that lead to an effective non-Hermitian Hamiltonian and in fact triggered the surge of the entire research area of non-Hermitian quantum mechanics \cite{Moiseyev2011NonHermitianQM, Brody2014BiorthogonalQuantumMechanics}.
This procedure has already been promisingly applied to bosonic systems \cite{Dast2014BalancedGainAndLoss}.
It can be motivated by considering a single site with $H=0$ subject to single-particle gain (loss) described by an LME \eqref{eq:LindbladMaster}, which reads $\dot{\rho}= \gamma (2 c^{\dagger} \rho c - c c^{\dagger} \rho - \rho c c^{\dagger})$ (with $c \leftrightarrow c^\dagger$ swapped for single-particle loss).
The populations $\rho_{ii}, i = 0, 1$ of the solution $\rho(t) = \sum_{i,j} \rho_{ij}(t) \ket{i}\!\bra{j}$ show an exponential decrease at the rate of $2\gamma$, that is $\rho_{11}(t) = \rho_{11}(0)\ee^{-2\gamma t}$ in the case of single-particle loss, and analogously for the gain scenario.
However, such an exponential increase (except for the maximum occupation) can also be implemented by a complex on-site potential $\pm \ii \gamma c^\dagger c$, which becomes exact in the mean-field limit of bosons \cite{Dast2014BalancedGainAndLoss}.
Hence, we account for single-particle gain (loss) at site $j$ via the extension of the Hamiltonian with a term $\varpm \ii \gamma c_j^\dagger c_j$ to obtain an effective description of dissipation.
This results in a non-Hermitian Hamiltonian $H_{\text{eff}}^{(U)} = H + U$. The complex potentials, which effectively describe the effects of the reservoirs shown in Fig.\ \ref{fig:model} are given by
\begin{subequations}
\label{equ:PotentialsPT}
\begin{align}
\label{eq:PotU1}
U_{1} &= \ii \gamma \left( c_{n}^{\dagger}c_{n}^{\phantom{\dagger}} - c_{1}^{\dagger}c_{1}^{\phantom{\dagger}}\right),
\\
\label{eq:PotU2}
U_{2} &=\sum \limits_{j=1}^{n/2} \ii \gamma \left( c_{2j-1}^{\dagger}c_{2j-1}^{\phantom{\dagger}} - c_{2j}^{\dagger}c_{2j}^{\phantom{\dagger}}\right).
\end{align}
\end{subequations}
A helpful property of both reservoirs in Eq.\ \eqref{equ:PotentialsPT} as well as the Hamiltonian in Eq.\ \eqref{eq:HermSSH} is their invariance under the combined action of parity and time inversion, $\left[ H_\text{eff}, \mathcal{PT} \right] = 0$, which causes the complex energy spectrum to be entirely real-valued in the $\mathcal{PT}$-unbroken parameter regime.
A further analogy between the two approaches can be revealed by following the physical interpretation of a prominent algorithm, which allows for the computation of the time evolution of observables of systems characterized by an LME.
The so-called \emph{Monte Carlo wave-function} approach combines time evolution with a non-Hermitian Hamiltonian and quantum jumps to determine steady states of an open quantum system described by an LME \cite{Molmer1993QuantumMonteCarlo}.
For $\mathcal{PT}$-symmetric systems the non-Hermitian Hamiltonian constructed within the numerical method is equivalent to the $\mathcal{PT}$-symmetric Hamiltonian of the effective approach in Sec.\ \ref{subsec:effectiveTheory}, except for a constant imaginary shift. Thus, our effective $\mathcal{PT}$-symmetric Hamiltonian is connected with the non-Hermitian Hamiltonian of the Monte Carlo wave-function algorithm. \section{\label{sec:ComplexBerry}Complex Berry phase}
In the scope of the effective $\mathcal{PT}$-symmetric theory the concept of the Berry phase can be generalized to dissipative systems \cite{Garrison1988, Nenciu1992, Mostafazadeh1999AdiabaticCyclicStatesNonHermitian, Esaki2011ZakPhaseNonHermitian}.
In case of spatially periodic systems like $U_2$ the \emph{complex Zak phase} can be defined with the help of pairs of biorthogonal eigenvectors of the non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian $H_\text{eff}^{(U_2)}$ belonging to real eigenvalues. To do so, one follows the same procedure as described in Sec.\ \ref{sec:ssh}, which results in
\begin{align}\label{eq:SSHU2Bloch}
H_{\text{eff}}^{(U_2)} & = \sum_{k}^{} \ket{k}\!\bra{k} \otimes
\begin{pmatrix}
\ii \gamma & -t_1-t_2 \ee^{\ii k}
\\
-t_1-t_2 \ee^{-\ii k} & -\ii \gamma
\end{pmatrix},
\end{align}
and energies $E_m(k) = \pm \sqrt{t_1^2 + t_2^2 + 2t_1t_2\cos(k) - \gamma^2}$, which are entirely real as long as $|t_1 - t_2| > \gamma$ \cite{Weimann2017EdgeStatesPhotonicCrystal}.
The $2\times 2$ matrix represents the $\mathcal{PT}$-symmetric Bloch Hamiltonian.
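The $\mathcal{PT}$-unbroken condition can be read off directly: $t_1^2 + t_2^2 + 2t_1t_2\cos(k)$ is minimized at $k=\gpi$, where it equals $(t_1-t_2)^2$, so all Bloch energies are real precisely when $|t_1 - t_2| > \gamma$.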
The structure of the Bloch Liouvillean found in Eq.\ \eqref{eq:BlochLiouvillean} is very similar to the one of the non-Hermitian Bloch Hamiltonian in Eq.\ \eqref{eq:SSHU2Bloch}.
In case of unbroken $\mathcal{PT}$ symmetry, that is if all eigenvalues of $H_{\text{eff}}$ are real-valued, the complex Zak phase which is picked up by the $m$th Bloch band, described by the left and right eigenvectors $\bra{\chi_m}$ and $\ket{\phi_m}$ of the Bloch Hamiltonian associated with eigenvalue $E_m$, is defined as \cite{Garrison1988, Mostafazadeh1999AdiabaticCyclicStatesNonHermitian}
\begin{equation}
\label{eq:complexBerry}
\nu_{m}= \ii \oint_{0}^{2\gpi} \!\!\!\!\!\! \bra{\chi_m(k)} \partial_k \ket{\phi_m(k)} \dd k.
\end{equation}
In the $\mathcal{PT}$-unbroken regime, the real part of the complex Zak phase is quantized, $\Re(\nu_m) = 0, \gpi \mod2\gpi$, as shown in App.\ \ref{app:quanizationBerry}.
In analogy with the line of argument of Hatsugai \cite{Hatsugai2006} for Hermitian systems we use the quantized real part of the complex Zak phase as topological invariant to characterize topological phases in the periodic lattice system described by Eq.\ \eqref{eq:SSHU2Bloch}.
The quantization of the real part of the complex Zak phase is ensured by $\mathcal{PT}$ symmetry, and thus the corresponding topological phases are protected by $\mathcal{PT}$ symmetry (see App.\ \ref{app:quanizationBerry}).
In addition to analytical results of the Berry phase defined by Eq.\ \eqref{eq:complexBerry} in Ref.\ \cite{Esaki2011ZakPhaseNonHermitian}, an algorithm to numerically evaluate the expression is described in Ref. \cite{we2017}.
Note, however, that the extension of Berry phases is limited to the $\mathcal{PT}$-unbroken regime and if the eigenvalues of $H_\text{eff}$ are complex, the adiabatic theorem which is used in the derivation of the complex Zak phase does not apply \cite{Nenciu1992} and a Berry phase is not well-defined.
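To illustrate how such an evaluation can be carried out in practice, the following NumPy sketch computes the real part of Eq.~\eqref{eq:complexBerry} for the Bloch Hamiltonian \eqref{eq:SSHU2Bloch} via a discretized biorthogonal product over the Brillouin zone. It is meant as an illustration only (band sorting by the real part of the energy and the chosen discretization are our assumptions), not as the algorithm of Ref.~\cite{we2017}.
\begin{verbatim}
import numpy as np

def bloch_h(k, t1, t2, gamma):
    h = -(t1 + t2 * np.exp(1j * k))
    return np.array([[1j * gamma, h],
                     [np.conj(h), -1j * gamma]])

def re_zak(t1, t2, gamma, band=0, nk=400):
    ks = np.linspace(0.0, 2 * np.pi, nk, endpoint=False)
    phi, chi = [], []
    for k in ks:
        H = bloch_h(k, t1, t2, gamma)
        Er, R = np.linalg.eig(H)             # right eigenvectors |phi>
        El, L = np.linalg.eig(H.conj().T)    # left eigenvectors  |chi>
        phi.append(R[:, np.argsort(Er.real)[band]])
        chi.append(L[:, np.argsort(El.real)[band]])
    prod = 1.0 + 0.0j                        # biorthogonal Wilson-loop product
    for i in range(nk):
        j = (i + 1) % nk
        prod *= np.vdot(chi[i], phi[j]) / np.vdot(chi[i], phi[i])
    return -np.angle(prod)                   # Re(nu) up to multiples of 2*pi

# PT-unbroken examples: ~ +/-pi (nontrivial) and ~ 0 (trivial dimerization)
print(re_zak(0.5, 1.5, 0.2), re_zak(1.5, 0.5, 0.2))
\end{verbatim}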
Note that the notion of a chiral symmetry $\Lambda = \sigma_z$ protecting the topology in the Hermitian case $\gamma = 0$ where $\left\{H_\text{B}, \sigma_z \right\} = 0$ cannot be directly carried over to the dissipative case due to the non-Hermiticity of the Hamiltonian. In fact, the underlying relations have to be modified for non-Hermitian Hamiltonians \cite{Martinez2018Symmetries} and it is nevertheless possible to find a symmetry operator that constrains the eigenvalue spectrum of $H_\text{eff}^{(U_2)}$ in the same way as the chiral symmetry does in the Hermitian SSH model.
To see this, consider the Hamiltonian of Eq. \eqref{eq:SSHU2Bloch} with open boundaries expanded in the $n$-dimensional single-particle basis,
\begin{align}
H_\text{eff}^{(U_2)} = \begin{pmatrix}
\ii \gamma & - t_1 & 0
\\
- t_1 & -\ii \gamma & -t_2 & \ddots
\\
0 & -t_2 & \ii \gamma & \ddots
\\
& \ddots & \ddots & \ddots
\end{pmatrix}.
\end{align}
Using the unitary $n$-dimensional operators
\begin{align}
\Sigma_x = \begin{pmatrix}
& & 1
\\
& 1 &
\\
\iddots & &
\end{pmatrix},
\qquad
\Sigma_z = \begin{pmatrix}
1 & &
\\
& -1 &
\\
& & \ddots
\end{pmatrix}
\end{align}
introduced in the Suppl.\ Mat.\ of Ref.\ \cite{Weimann2017EdgeStatesPhotonicCrystal} one can construct the \emph{non-Hermitian} operator $\Lambda = \Sigma_x \Sigma_z = -\Lambda^\dagger$ that satisfies the relation
\begin{align}
\label{eq:SymmetryProperty}
\Lambda^\dagger H_\text{eff}^{(U_2)} \Lambda = - H_\text{eff}^{(U_2)}.
\end{align}
Similarly to the chiral symmetry in the Hermitian scenario the symmetry property \eqref{eq:SymmetryProperty} constrains the eigenvalue spectrum of $H_\text{eff}^{(U_2)}$ to bands of opposite sign: Given left and right eigenvectors $\bra{\chi}, \ket{\phi}$ of $H_\text{eff}^{(U_2)}$ associated with an eigenvalue $E$, the symmetric partner states $\bra{\chi}\Lambda^\dagger, \Lambda \ket{\phi}$ are eigenvectors with an eigenvalue of $-E$.
By the same reasoning this property carries over to the spectrum of $H_\text{eff}^{(U_1)}$.
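The relation \eqref{eq:SymmetryProperty} is easily verified numerically; the following NumPy sketch builds the open-chain $H_\text{eff}^{(U_2)}$ and the operator $\Lambda = \Sigma_x\Sigma_z$ for arbitrarily chosen parameters.
\begin{verbatim}
import numpy as np

def h_eff_u2(n, t1, t2, gamma):
    """Open SSH chain with alternating on-site gain/loss +i*gamma, -i*gamma."""
    H = np.zeros((n, n), dtype=complex)
    for j in range(n - 1):
        H[j, j + 1] = H[j + 1, j] = -(t1 if j % 2 == 0 else t2)
    return H + 1j * gamma * np.diag([(-1.0) ** j for j in range(n)])

n, t1, t2, gamma = 8, 0.5, 1.5, 0.2
H = h_eff_u2(n, t1, t2, gamma)

Sx = np.fliplr(np.eye(n))                       # exchange (parity) matrix
Sz = np.diag([(-1.0) ** j for j in range(n)])   # staggered signs
Lam = Sx @ Sz
print(np.allclose(Lam.conj().T @ H @ Lam, -H))  # True -> spectrum in +/-E pairs
\end{verbatim}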
As a side mark, note that it is possible to construct non-Hermitian $\mathcal{PT}$-symmetric systems with alternating gain and loss and a centered defect that hosts topologically protected zero-energy edge states in the $\mathcal{PT}$-unbroken phase \cite{Weimann2017EdgeStatesPhotonicCrystal}. Their stability under disorder respecting the symmetry $\Lambda = \Sigma_x \Sigma_z$ has been verified analytically and numerically in the Suppl.\ Mat.\ of Ref.\ \cite{Weimann2017EdgeStatesPhotonicCrystal}.
Motivated by the analogy between the effective Hamiltonian \eqref{eq:SSHU2Bloch} and the shape matrix given in Eq.\ \eqref{eq:BlochLiouvillean}, we now formulate the complex Zak phase defined for the master bands of the Bloch Liouvillean.
The NMM bands obtained from a description via LME can be classified topologically in the same fashion as the bands of the Bloch Hamiltonian $H_\text{eff}^{(U_2)}$.
Therefore we define a Zak phase $\nu$ for the NMM bands by \mbox{using} the left- and right-hand eigenstates of the Bloch Liouvillean \eqref{eq:BlochLiouvillean} in Eq.\ \eqref{eq:complexBerry}.
\section{\label{sec:Comparison}Comparison of both descriptions}
\begin{figure}
\centering
\includegraphics{Fig2a.pdf}
\includegraphics{Fig2b.pdf}
\vspace{-4ex}
\caption{Complex single-particle energy spectrum of the $\mathcal{PT}$-symmetric Hamiltonian $H_\text{eff}^{(U_2)}$ for different dissipative strengths with highlighted differences between trivial ($\theta = 2\gpi/3$, dark blue points) and nontrivial ($\theta = \gpi/3$, additional light orange points) dimerization. Additional features in the nontrivial lattice configuration are caused by zero-energy edge modes.}
\label{fig:U2ptSpectrum}
\end{figure}
Having outlined the different approaches of modeling dissipation in the SSH model, a key aspect of this work shall be dedicated to a comparison of both methods in order to check their conformity.
While the $\mathcal{PT}$-unbroken regime in the effective theory yields stationary modes, which can also be understood as a non-Hermitian extension of Hermitian quantum mechanics, the interpretation of the $\mathcal{PT}$-broken regime with complex eigenvalues is questionable,
as the exponential change of the probability density resulting from a time evolution with the effective Hamiltonian may result in an unphysical behavior of the system.
The applied scheme in this work relies on the analogy between the effective theory and LME outlined in Sec.\ \ref{subsec:effectiveTheory}.
Starting from the dissipation-free many-body ground state of the Hamiltonian ($\gamma = 0$), where all single-particle modes with $\Re(E) \le 0$ (to include the zero-energy edge modes) are occupied, we expect these modes to remain occupied when dissipation is turned on, as long as their imaginary parts vanish exactly.
Whenever a mode breaks $\mathcal{PT}$ symmetry by acquiring a complex eigenvalue, the sign of the imaginary part determines whether the mode is filled up ($+$) or emptied ($-$) in the long-time limit.
Applying this interpretation we identify a NESS-like many-body state built up from the specified single-particle modes in the effective framework.
This state corresponds to the complex many-body energy with a maximum imaginary and minimum real part.
Thus, we refer to this state as \emph{maximally $\mathcal{PT}$-broken ground state} (MBS), which is unique if all single-particle modes possess non-zero energies.
The further investigation within this section will reveal a good agreement of the MBS and the NESS with respect to the expectation value of the lattice site occupation operators. This is a surprising observation as the construction of both states relies on very different methods.
The idea behind the MBS is a modification of the many-body ground state due to the effects of dissipation. As we are interested in the ground state the MBS is constructed by a conditional minimization of the energy. Some single-particle modes of the ground state will be affected by the dissipative terms and thus are filled or emptied due to dissipation.
By contrast the decision whether a master mode is occupied in the NESS is solely determined by the sign of the corresponding eigenvalue's real part.
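The occupation rule described above can be summarized in a few lines (a sketch; the numerical tolerance used to decide whether an imaginary part vanishes exactly is an assumption):
\begin{verbatim}
import numpy as np

def mbs_occupied_modes(energies, tol=1e-9):
    """Indices of single-particle modes occupied in the MBS: PT-unbroken
    modes (Im E = 0) are filled if Re(E) <= 0; PT-broken modes are filled
    if Im(E) > 0 and emptied if Im(E) < 0."""
    occupied = []
    for m, E in enumerate(np.asarray(energies, dtype=complex)):
        if abs(E.imag) < tol:
            if E.real <= tol:      # includes zero-energy edge modes
                occupied.append(m)
        elif E.imag > 0:
            occupied.append(m)
    return occupied
\end{verbatim}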
Hereinafter we parameterize the tunneling amplitudes accordingly to $t_{1/2}=t (1\mp\Delta\cos(\theta))$.
\begin{figure}
\centering
\includegraphics{Fig3a.pdf}
\includegraphics{Fig3b.pdf}
\vspace{-4ex}
\caption{Rapidity spectrum of the Liouvillean shape matrix with dissipative couplings according to $U_2$. The decay rates $\pm \Re(\beta)$ of the NMM shown in the left panel are \emph{independent} of the lattice dimerization (see App.\ \ref{app:proof}). By contrast, the ima\-ginary parts $\Im(\beta)$ depend only on the Hamiltonian and are presented in the right panel for different dimerizations $\theta$. We emphasize that the imaginary rapidity spectrum \emph{exactly} reproduces the energies of the Hermitian SSH model.}
\label{fig:U2rapiditySpectrum}
\end{figure}
\subsection{Alternating gain and loss}
First we consider an SSH chain ($n = 64$ and \mbox{$t = \Delta = 1$}) subject to the dissipative pattern $U_2$ and compute the complex energy spectrum of the corresponding $\mathcal{PT}$-symmetric Hamiltonian, which was previously discussed in \cite{Weimann2017EdgeStatesPhotonicCrystal, Klett2017PTSymmetry}, as well as the rapidity spectrum of the Liouvillean \eqref{eq:ShapePeriodicBC} (without periodic boundary conditions).
Fig.\ \ref{fig:U2ptSpectrum} shows the complex energies of the system for a varying gain-loss strength, with highlighted differences between the topologically trivial ($t_1 / t_2 = 3$, $\theta=2 \gpi /3$) and nontrivial ($t_1 / t_2 = 1/3$, $\theta=\gpi /3$) dimerization.
We note that the phase transition to the $\mathcal{PT}$-broken regime is in agreement with that of the analytical eigenvalues of the Bloch Hamiltonian given in Eq.\ \eqref{eq:SSHU2Bloch} for the bulk modes (blue points).
By contrast the nontrivial dimerization possesses $\mathcal{PT}$-broken modes for any non-zero $\gamma$ as the edge states can obviously not be eigenstates of parity-time reflection \cite{Klett2017PTSymmetry} and thus immediately break the symmetry.
Moreover, the imaginary energy of the edge modes is linear and given by $\pm \ii \gamma$, which follows from the fact that the edge modes are supported only on one of the sublattices $A,B$ \cite{Asboth2016TopologicalInsulators}.
The real parts of the Bloch bands containing the bulk modes bend towards zero for increasing dissipation, and eventually each mode breaks the $\mathcal{PT}$ symmetry with a purely imaginary eigenvalue.
This observation is exactly the expected behavior in the strongly dissipative scenario where the hopping can be considered as a weak perturbation, and in which all lattice sites effectively decouple and yield independent sites being subject to either gain or loss, that is $\lim_{\gamma \to \infty} \Re(E) = 0$ and $\lim_{\gamma \to \infty} \Im(E) = \pm\gamma$.
In Fig.\ \ref{fig:U2rapiditySpectrum} we show the rapidities of the analogue system formulated in the framework of the LME.
Whereas the presence of a $\mathcal{PT}$-symmetric region in the effective theory suggested a regime with stationary modes despite dissipation, the convergence rates in the Lindblad scenario are equal for each NMM, $\Re(\beta) = \gamma/2$, such that all modes are sensitive to the reservoir regardless of the lattice configuration.
Interestingly, we find that for the specific pattern with alternating gain and loss the features of the unitary Hamiltonian and the collapse operators decouple, since it can be shown analytically for the periodic system that the rapidities are two-fold degenerate and given by $\beta = \gamma/2 \pm \ii E_{m}/2$, where $E_{m}$ denotes the eigenvalues of the Hermitian SSH model from \mbox{Eq.\ \eqref{equ:sshBlochHamiltonian}} (compare App.\ \ref{app:proof}).
We will further comment on this remarkable property in the course of this section.
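This decomposition can be checked directly by assembling the Bloch Liouvillean of Eq.~\eqref{eq:BlochLiouvillean} from the $4\times4$ blocks defined above and diagonalizing it, as in the following NumPy sketch (the sign chosen for $\varpm$ in $\bm{\Gamma}_\text{g(l)}$ is an assumption and merely corresponds to one of the two equivalent gain/loss labelings).
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

G_g = -np.kron(I2, sy) + np.kron(sy, 1j * sx + sz)   # Gamma_gain (sign assumed)
G_l = -np.kron(I2, sy) - np.kron(sy, 1j * sx + sz)   # Gamma_loss
T   = -1j * np.kron(sy, I2)

def bloch_liouvillean(k, t1, t2, gamma):
    h = t1 + t2 * np.exp(1j * k)
    return 0.5 * np.block([[gamma * G_g,      -h * T     ],
                           [-np.conj(h) * T,  gamma * G_l]])

t1, t2, gamma, k = 0.5, 1.5, 0.8, 0.7
beta = np.linalg.eigvals(bloch_liouvillean(k, t1, t2, gamma))
E = np.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * np.cos(k))   # Hermitian SSH energy
print(np.sort_complex(beta))     # compare with +/- (gamma/2 +/- i E/2)
print(gamma / 2, E / 2)
\end{verbatim}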
The MBS and NESS lattice occupations derived from the spectra in Figs.\ \ref{fig:U2ptSpectrum}, \ref{fig:U2rapiditySpectrum} are compared in Fig.\ \ref{fig:U2ss}.
For increasing dissipation, the bulk makes a transition into a progressively staggered configuration, which ultimately leads to entirely filled/empty sites.
However, the edge modes occurring in the nontrivial lattice dimerization play an important role in both descriptions and are occupied/emptied for finite dissipation. From this it follows that a pronounced occupation at the edges can always be observed whenever the SSH Hamiltonian yields edge modes.
We note that the MBS qualitatively reproduces the NESS behavior to a good extent, especially the property of half filling.
Only in the $\mathcal{PT}$-unbroken regime, where the effective theory suggests a flat bulk, can a slight imbalance between gain and loss sites already be detected in the Lindblad description (compare the top row of Fig.\ \ref{fig:U2ss}).
In fact, the MBS does not show a staggered bulk in the absence of $\mathcal{PT}$-broken bulk modes (blue bands in Fig.\ \ref{fig:U2ptSpectrum}).
\begin{figure}
\centering
\includegraphics{Fig4.pdf}
\vspace{-1ex}
\caption{Lattice occupation corresponding to the NESS (blue dots) and MBS (open orange circles) of the SSH model with alternating gain and loss ($\gamma=0.5,1.4,2.5$ from top to bottom). Left panels: Nontrivial dimerization ($\theta = \gpi/3$). Right panels: Trivial dimerization ($\theta = 2 \gpi/3$).}
\label{fig:U2ss}
\end{figure}
Building on the remarkable agreement between our two approaches to modeling dissipation, we now address the question of how a topological invariant, which already exists for the effective theory \cite{Garrison1988}, transfers to the Liouvillean.
\subsubsection*{Complex Zak phase}
The question of how the notion of topological invariants of Hamiltonian states or bands can be extended to open quantum systems is a challenge for current research.
The complex Bloch bands of the non-Hermitian system can be classified topologically in the $\mathcal{PT}$-unbroken parameter regime by the real part of the complex Zak phase, which is shown in the left panel of Fig.\ \ref{fig:phaseU2}.
This phase diagram is obtained from a numerical evaluation of Eq.\ \eqref{eq:complexBerry} using the eigenvectors of the non-Hermitian Bloch Hamiltonian given by Eq.\ \eqref{eq:SSHU2Bloch} via the algorithm presented in Ref.\ \cite{we2017}.
The scenario is in full accordance with the topological band theory of Hermitian systems, and the complex Bloch bands possess a quantized Zak phase, which is well-defined as long as the bands are gapped.
The system is in a topologically trivial (nontrivial) phase in the $\mathcal{PT}$-unbroken parameter regime for $\theta>\gpi/2$ ($\theta<\gpi/2$).
In the $\mathcal{PT}$-broken parameter regime the complex Bloch bands are gapless and two exceptional points exist within the Brillouin zone \cite{Weimann2017EdgeStatesPhotonicCrystal}.
Since the quantization of the real part of the complex Zak phase collapses in the $\mathcal{PT}$-broken parameter regime that separates the $\mathcal{PT}$-unbroken regions, a sharp topological phase transition, indicated by a discontinuous change of the real part of the Zak phase by $\gpi$, cannot be observed for values of $\gamma>0$.
In the same fashion the NMM bands obtained from a description via an LME can be classified topologically in analogy with the Hermitian band theory.
Therefore we calculate the Zak phase of the NMM bands by evaluating Eq.\ \eqref{eq:complexBerry} \mbox{using} the left- and right-hand eigenstates of the Bloch Liouvillean \eqref{eq:BlochLiouvillean}.
A numerical evaluation yields the right panel of Fig.\ \ref{fig:phaseU2}.
For a given dimerization parameter $\theta$ and gain and loss strength $\gamma$ one finds all NMM bands to be characterized by the same quantized phase with a vanishing imaginary part.
The Zak phase of the NMM bands indicates a trivial (nontrivial) phase for a trivially (nontrivially) dimerized chain, which is in full agreement with the isolated SSH model.
This observation is found to be inherently linked to the structure of the shape matrix for $U_2$ given in Eq.\ \eqref{eq:BlochLiouvillean}, to which we provide further details in App.\ \ref{app:proof}.
In fact it turns out that the Liouvillean can be decomposed into outer products. One of the product spaces contains the entire structure of the Bloch Hamiltonian, such that the Berry phase of the transport along the Brillouin zone is inherited from the Hermitian case.
Thus, the NMM bands possess a Zak phase of $\gpi$ if the Hermitian SSH chain is dimerized in the topologically nontrivial configuration ($\theta<\gpi/2$), and a Zak phase of $0$ in the case of trivial dimerization ($\theta>\gpi/2$).
For $\theta=\gpi/2$ the NMM bands touch each other and the Zak phase is not well-defined.
In contrast to the complex Zak phase of the effective $\mathcal{PT}$-symmetric description, this quantization does not require a special structure of the eigenvalue spectrum of the Bloch Liouvillean; rather, it is ensured by the chiral symmetry of the Hermitian SSH model, from which the Zak phase is passed on to the eigenstates of the Bloch Liouvillean (see App.\ \ref{app:proof}).
Hence, in the case of alternating gain and loss we can define a dissipative analogue of the Zak phase for the NMM bands, which carries the information about the topology of the Hamiltonian and is not affected by the strength of dissipation.
This Zak phase of the NMM bands is in perfect agreement with the complex Zak phase obtained from the effective approach in the para\-meter regime where the complex energy spectrum of the non-Hermitian Hamiltonian is entirely real-valued ($\mathcal{PT}$-unbroken phase).
We comment on the stability of the complex Zak phase for the disordered case in App.\ \ref{App:disorder}.
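For illustration, such a discretized evaluation can be sketched in a few lines; the Python snippet below uses a Wilson-loop product of left/right-eigenvector overlaps for a generic Bloch matrix \texttt{bloch(k)}, which is one standard discretization of Eq.\ \eqref{eq:complexBerry} and not necessarily the algorithm of Ref.\ \cite{we2017} employed above.
\begin{verbatim}
import numpy as np

def complex_zak_phase(bloch, band=0, nk=400):
    # Wilson-loop discretization of the (complex) Zak phase
    #   nu = i * \oint dk <chi(k)| d_k psi(k)> / <chi(k)|psi(k)>,
    # with |psi> (|chi>) the right (left) eigenvectors of bloch(k).
    # Bands are matched by sorting eigenvalues by their real part;
    # the eigenvalues of H^dagger are the complex conjugates of those
    # of H, so the same ordering applies to the left eigenvectors.
    ks = np.linspace(0.0, 2.0 * np.pi, nk + 1)   # closed loop in k
    right, left = [], []
    for k in ks:
        H = bloch(k)
        w, vr = np.linalg.eig(H)                 # right eigenvectors
        wl, vl = np.linalg.eig(H.conj().T)       # left eigenvectors
        right.append(vr[:, np.argsort(w.real)[band]])
        left.append(vl[:, np.argsort(wl.real)[band]])
    total = 0.0 + 0.0j
    for j in range(nk):
        num = np.vdot(left[j], right[j + 1])     # <chi_j | psi_{j+1}>
        den = np.vdot(left[j], right[j])         # biorthogonal normalization
        total += np.log(num / den)
    return 1j * total  # real part quantized to 0 or pi (mod 2*pi) when gapped
\end{verbatim}
Away from band touchings, applying such a routine to the eigenvectors discussed above should reproduce the quantized values shown in Fig.\ \ref{fig:phaseU2}, up to the overall sign convention.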
\begin{figure}
\centering
\includegraphics{Fig5.pdf}
\caption{Complex Zak phase $\nu$ of the SSH model ($t=\Delta=1$) with alternating gain and loss. Left panel: Real part of the complex Zak phase obtained from the effective description using a complex $\mathcal{PT}$-symmetric potential. The hatched area marks the $\mathcal{PT}$-broken parameter regime in which the real part of the complex Zak phase is \emph{not} quantized (see App.\ \ref{app:quanizationBerry} for details). Right panel: Zak phase characterizing an NMM band of the Bloch Liouvillean obtained from a description of the system using an LME.}
\label{fig:phaseU2}
\end{figure} \subsection{Gain and loss at the boundaries}
In the following we compare the interpretation of the effective theory introduced in the beginning of this section with the LME description of an SSH chain ($n = 64$ and $t = \Delta = 1$) subject to a reservoir of the type $U_1$ (cf.\ Fig.\ \ref{fig:model} and Eq.\ \eqref{eq:PotU1} resp.\ Eq.\ \eqref{eq:LindbladOpsU1}).
Fig.\ \ref{fig:U1ptSpectrum} shows the single-particle spectrum of the non-Hermitian Bloch Hamiltonian in dependence of the dissipative strength $\gamma$ for a topologically trivial ($t_1 / t_2 = 3$, $\theta=2 \gpi /3$) and nontrivial ($t_1 / t_2 = 1/3$, $\theta=\gpi /3$) dimerization. In the latter, the edge states are again found to be $\mathcal{PT}$-broken for all values $\gamma>0$.
The imaginary part of the complex energy is restricted by $|\t{Im}(E)|<\gamma$ and the localisation of the edge modes increases with $\gamma$, such that one finds $\lim_{\gamma \to \infty} \Im(E) = \pm\gamma$ for their energy.
The same holds in the strongly dissipative scenario of the trivial SSH chain.
After the $\mathcal{PT}$ phase transition, the real parts of the complex energies of the modes located mainly at the boundary unit cells bend towards zero (orange points in Fig.\ \ref{fig:U1ptSpectrum}).
These modes decouple while going through a bifurcation at $\gamma=3$ and collapse into (i) modes hosted solely by the dissipative site at the boundary and (ii) edge modes in the nontrivially dimerized Hermitian subsystem in the limit $\gamma \rightarrow \infty$.
While the latter corres\-pond to the eigenvalue branches whose complex energy converges towards zero, the eigenvalues of the others are given by $\pm \ii \gamma$ in the strongly dissipative limit. The effect of the modes with approximately zero total energies for large values of $\gamma$ becomes visible in the MBS, which is discussed later, and illustrates that these states indeed correspond to internal edge states of the subsystem, i.e.\ the SSH chain without the dissipative sites.
Interestingly, one finds considerable resemblances between the complex energies and the rapidities following from a description via LME.
The imaginary part of the rapidity spectrum (cf.\ Fig.\ \ref{fig:U1rapiditySpectrum}) reproduces the real part of the complex energy spectrum (cf.\ Fig.\ \ref{fig:U1ptSpectrum}).
Also the real part of the rapidities (cf.\ left panel in Fig.\ \ref{fig:U1rapiditySpectrum}) and the imaginary part of the single-particle modes ($\t{Im}(E)$ cf.\ right side in Fig. \ref{fig:U1ptSpectrum}) show significant \mbox{similarities. Only the} exactly vanishing imaginary parts leading to stationary bulk modes in the effective description differ from their counterpart in the LME framework where bulk NMMs are always characterized by a small (but nonzero) relaxation rate that guarantees the uniqueness of the NESS.
Furthermore, the bifurcations in the rapidity spectrum and the complex energy spectrum approximately occur at the same parameter values. Generally, only modes hosted by sites coupled to the reservoir are strongly affected by dissipation, which also causes a change of the lattice site occupation. This is further investigated by comparing the NESS and the MBS.
\begin{figure}
\centering
\includegraphics{Fig6a.pdf}
\includegraphics{Fig6b.pdf}
\vspace{-4ex}
\caption{Complex single-particle eigenvalue spectrum of the $\mathcal{PT}$-symmetric Hamiltonian of the SSH model subject to the complex potential $U_1$ with $\theta = \gpi/3$ (blue and green points in the left panel, and right top) respectively $\theta = 2\gpi/3$ (blue and orange points in the left panel, and right bottom). The colors of the imaginary part branches on the right correspond to the real part of the complex energies shown on the left.}
\label{fig:U1ptSpectrum}
\end{figure}
We show MBS and NESS lattice occupations for a reservoir of the type $U_1$ in Fig.\ \ref{fig:U1ss}.
Due to its construction the MBS contains the single-particle edge modes which are occupied due to the effect of the gain and loss of particles.
For increasing dissipative strengths the edge mode localization at the lattice boundaries becomes more pronounced and the mode extends less into the bulk.
In the case of a trivial chain and values of $\gamma \lesssim 0.5$, the $\mathcal{PT}$-symmetric Hamiltonian is $\mathcal{PT}$-unbroken (see Fig.\ \ref{fig:U1ptSpectrum}, right bottom) and the corresponding MBS, which has a real-valued energy, is equivalent to a Mott state at half filling (cf.\ Fig.\ \ref{fig:U1ss}, right top) \cite{Muth2008HalfIntegerMI}.
In this parameter regime the MBS does not reproduce the NESS lattice occupations.
However, the trivial chain yields lattice occupation profiles of the NESS and the MBS, which match perfectly for large values of $\gamma$.
In this case the sites at the edges of the lattice act as a connection between the reservoir and the rest of the system.
For large values of $\gamma$ one can assign the edge sites (the occupations of which are approximately fixed to 0 resp.\ 1) to the reservoir and finds the NESS (MBS) in the subsystem to reproduce the NESS (MBS) of the non-trivial phase (cf.\ bottom of Fig.\ \ref{fig:U1ss}).
This effect is caused by the inversion of the sublattice's dimerization due to the cut-off of dissipative sites leaving a non-trivially dimerized system.
We note that this decoupling of dissipative sites from their surrounding was already observed in the strongly dissipative scenario of alternating gain and loss, where the lattice occupation is found to be completely staggered (cf.\ bottom panel of Fig.\ \ref{fig:U2ss}).
\begin{figure}
\centering
\includegraphics{Fig7a.pdf}
\includegraphics{Fig7b.pdf}
\vspace{-4ex}
\caption{Rapidity spectrum of the Liouvillean of the SSH model with gain and loss at the edges (Lindblad operators $L_{\mu}^{(U_1)}$) with $\theta = \gpi/3$ (left top, and blue and green points in the right panel) respectively $\theta = 2\gpi/3$ (left bottom, and blue and orange points in the right panel). The colors of the real part branches correspond to the imaginary part of the rapidities shown on the right.}
\label{fig:U1rapiditySpectrum}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics{Fig8.pdf}
\vspace{-1ex}
\caption{Comparison of the lattice site occupation profile of the NESS (blue circles) and the corresponding MBS (open orange circles) for $t=\Delta=1$ and $\theta = \gpi/3$ on the left respectively $\theta = 2 \gpi/3$ on the right and $\gamma=0.25,4$ from top to bottom.}
\label{fig:U1ss}
\end{figure} \section{Conclusion}
One of the main efforts of this work was a comparison between two approaches for describing dissipative quantum systems, viz.\ Lindblad master equations and an effective theory using $\mathcal{PT}$-symmetric on-site potentials.
For alternating gain and loss we have defined topological invariants that characterize the complex Bloch bands in the $\mathcal{PT}$-unbroken regime and the corresponding master bands of the Liouvillean, and found the two to match perfectly.
The topo\-logical invariants rely on the idea that both frameworks allow for a generalization of the Hermitian Zak phase to the dissipative scenario where the entire information is contained in a non-Hermitian Bloch Hamiltonian resp.\ Bloch Liouvillean.
In particular we showed that the quantization of the complex Zak phase's real part is protected by the $\mathcal{PT}$ symmetry in the $\mathcal{PT}$-unbroken parameter regime.
By contrast, the spectral properties of the Liouvillean yield a decoupled structure that contains the bands of the Hamiltonian as a subset. Hence, the quantization of the Zak phase of the master bands can be traced back to the topological properties of the Hermitian SSH Hamiltonian.
An investigation of the effect of hopping disorder that respects the symmetry properties of the system shows that the complex energy spectra undergo neither a $\mathcal{PT}$ phase transition nor a gap closing, such that the complex Zak phase is robust against sufficiently small disorder. The same holds for the Zak phase of the master bands, as it is inherited from the Hermitian SSH model, which is itself robust against hopping disorder.
Working with the effective theory we have introduced and justified an interpretation to obtain the long-term fixed-point of the system, by regarding imaginary parts of the energies in the $\mathcal{PT}$-broken regime as decay rates.
The resulting lattice occupation shows a remarkable resemblance to the non-equilibrium steady state extracted from the Liouvillean.
In the regime of weak dissipation, where the bulk modes do not break the $\mathcal{PT}$ symmetry in the effective description, the steady state of the effective description (MBS) and the non-equilibrium steady state (NESS) of the Lindblad master equation disagree, since the expectation values of the occupation operator in the effective framework are not influenced by the reservoir in the $\mathcal{PT}$-unbroken parameter regime.
Using a reservoir coupled to the edges of the SSH chain we have found that the edge modes are occupied or emptied with progressing time evolution of the system.
The dissipative sites effectively decouple from their surrounding in the strongly dissipative regime in such a way that by turning on dissipation in the trivial regime the steady state of the subsystem (where the two edge sites are ranked to belong to the reservoir) reproduces the steady state of the nontrivial SSH chain.
\section{Introduction}
\indent \rem{The asteroids orbiting in the main belt, between Mars and
Jupiter,}
\add{Main belt asteroids}
are the remnants of the building blocks that accreted to form
terrestrial planets, leftovers of the dynamical events that
shaped our planetary system.
Among them, large bodies (diameter larger than $\approx$100\,km) are
deemed primordial \citep{2009-Icarus-204-Morbidelli}, and contain a
relatively pristine record of their initial formation conditions.
\indent Decades of photometric and spectroscopic surveys
have provided \rem{a clear view} \add{an ever-improving picture} of the
distribution of material in the inner solar system
\citep[e.g.][]{1982-Science-216-Gradie,
1996-MPS-31-Burbine, 2002-AsteroidsIII-5.2-Burbine,
2002-Icarus-158-BusII,
2002-AsteroidsIII-2.2-Rivkin, 2006-Icarus-185-Rivkin,
2008-Nature-454-Vernazza, 2010-Icarus-207-Vernazza,
2014-ApJ-791-Vernazza,
2014-Nature-505-DeMeo}, yet
these studies have probed the composition of the surface only.
As such, they
\rem{failed to address}
\add{do not necessarily lead us to}
the original location and time
scales for the accretion of these blocks, which are key to
understand\add{ing} the \add{important} processes \rem{that occurred} in the disk of gas and dust
around the young Sun.
\indent \rem{Fortunately, t} \add{T}hese \rem{questions} \add{issues} can be addressed by studying
the internal structure of asteroids:
objects formed far from the Sun are expected to be composed \rem{by a}
\add{of various}
mixture\add{s} of \rem{rocks and ices} \add{rock and ice}, while
\rem{innermost} objects \add{closer to the Sun} are \rem{deemed}
\add{expected to be} volatile-free.
Depending on their formation time scale, the amount of radiogenic
heat \rem{ was different} \add{varied}, leading to \add{complete,}
partial\add{,} or \rem{complete} \add{no} differentiation\rem{,
or not at all}.
In that respect, density is \rem{maybe} \add{clearly} the \rem{main}
\add{most important} \rem{physical property} remotely
measurable \add{property} that \add{can} constrain\rem{s} internal structure
\citep{2015-AsteroidsIV-Scheeres}.
\indent Determination of \rem{the} density \rem{relies on the}
\add{requires} measurement of \rem{the}
mass and \rem{the} volume, and for that, large asteroids with satellites
are prime targets \citep{1999-Nature-401-Merline,
2002-AsteroidsIII-2.2-Merline,
2008-Icarus-195-Marchis, 2008-Icarus-196-Marchis,
2011-AA-534-Carry, 2015-AsteroidsIV-Margot}.
The study of \rem{their mutual orbit} \add{the orbits of satellites
within asteroid binaries or
multiple systems} is currently the most precise method
to estimate \rem{asteroid masses,} \add{the mass of the primary
asteroid.} \rem{while they usually} \add{If the primary also
happens to} have \add{an} angular
\rem{diameters} \add{diameter} large enough to be spatially resolved by large telescopes,
\rem{allowing the} \add{this also allows an} accurate determination
of \rem{their} \add{the primary's} volume. In addition, the orbits of the
satellites themselves offer a way to probe the gravity field,
related to mass distribution inside the asteroid
\citep{2014-Icarus-239-Berthier, 2014-ApJ-783-Marchis}.
\indent \rem{We focus in the present study}
\add{Here we focus} on the outer\add{-}main-belt asteroid (107)
Camilla,
\add{orbiting in the Cybele region} \rem{asteroid,} \add{and} discovered on November 17, 1868 from Madras,
India by N. R. Pogson. Its first satellite, \textsl{S/2001 (107) 1}~\add{(hereafter
S1)}, was
discovered in March 2001 by
\citet{2001-IAUC-Storrs}, using the Hubble Space Telescope (HST),
and its orbit first studied by \citet{2008-Icarus-196-Marchis} using
observations from large ground-based telescopes equipped with
adaptive-optics (AO) systems.
Its second satellite, \textsl{S/2016 (107) 1}~\add{(hereafter
S2)}, was discovered in 2016 by our team
\citep{2016-IAUC-Marsset}, using the European Southern Observatory
(ESO) Very Large Telescope (VLT).
\indent \add{
Camilla was originally classified as a C-type
based on its visible colors and albedo
\citep{1989-AsteroidsII-Tedesco}.
Later on, both \citet{2002-Icarus-158-BusII} and
\citet{2004-Icarus-172-Lazzaro} classified it as
X, based on visible spectra. More recently, based on a
near-infrared spectrum from NASA IRTF Spex,
\citet{2015-Icarus-247-Lindsay} classified Camilla as either Xe or
L. }
\indent \add{
The physical properties of Camilla have been extensively studied,
from its rotation period of \numb{4.8}\,h
\citep[e.g.,][]{1987-Icarus-70-Weidenschilling,
1987-Icarus-69-DiMartino} to its spin and 3D shape model
\citep[][]{2003-Icarus-164-Torppa,
2011-Icarus-214-Durech,
2013-Icarus-226-Hanus,
2017-AA-601-Hanus}.
Its diameter, however, was poorly constrained, with estimates
ranging from
185\,$\pm$\,9\,km \citep{2006-Icarus-185-Marchis} to
256\,$\pm$\,12\,km \citep{2012-Icarus-221-Marchis}. More
recent studies combining
images or stellar occultations with lightcurve-based 3D shape
modeling, are yielding
diameters in excess of 220\,km
(see Fig.~\ref{fig:diam} and Table~\ref{tab:diam} for the exhaustive
list of diameter estimates).
The mass estimates
also spanned a wide range, from
\numb{$2.25_{-2.25}^{+18.00} \times 10^{18}$\,kg} to
\numb{$(39 \pm 10) \times 10^{18}$\,kg} \citep{2011-AJ-142-Zielenbach}
(see Fig.~\ref{fig:mass} and Table~\ref{tab:mass} for the
exhaustive list of mass estimates).
With such a large spread of values, deriving an
accurate density requires substantial
improvements to these parameters.
}
\indent Gathering all the available disk-resolved and high-contrast images
from HST and AO-fed cameras, optical lightcurves, stellar occultations,
and visible and near-infrared spectra
(Section~\ref{sec:obs}), we present an extensive study of the dynamics of
the system (Section~\ref{sec:dyn}), of the surface properties of Camilla and
its main satellite \add{S1}~(Section~\ref{sec:spec}), and of Camilla's spin and 3-D
shape (Section~\ref{sec:phys}), all constraining its internal
\add{composition and} structure (Section~\ref{sec:discuss}).
\section{Observations\label{sec:obs}}
\subsection{Optical lightcurves\label{ssc:obs:lc}}
\indent We gather the 24 lightcurves used by
\citet{2003-Icarus-164-Torppa} to create a convex 3-D shape model of
Camilla\footnote{Available on DAMIT \citep{2010-AA-513-Durech}:\\
\href{http://astro.troja.mff.cuni.cz/projects/asteroids3D/}{http://astro.troja.mff.cuni.cz/projects/asteroids3D/}},
compiled from the Uppsala Asteroid Photometric
Catalog\footnote{\href{http://asteroid.astro.helsinki.fi/apc/asteroids/}{http://asteroid.astro.helsinki.fi/apc/asteroids/}}
\citep{PDS-SAPC-2011}. We also retrieve the three lightcurves reported by
\citet{2009-MPBu-Polishook}.
\indent In addition to these data, we acquired \numb{29}
lightcurves using the
60\,cm \textsl{Andr{\'e} Peyrot} telescope mounted at Les Makes
observatory on R{\'e}union Island, \rem{which is} operated as a
partnership among Les Makes Observatory and the IMCCE, Paris
Observatory.
We also extracted \numb{63} lightcurves from the data archive of
the SuperWASP survey \citep{2006-PASP-118-Pollacco} for the
period 2006-2009.
This survey aims \rem{at finding and characterizing} \add{to find
and characterize} exoplanets by
observation\add{s} of their transit\add{s of} \rem{in front of
their} \add{the} host star.
Its large field of view (8\ensuremath{^{\circ}}\,$\times$\,8\ensuremath{^{\circ}}) provides a
goldmine for asteroid lightcurves \citep{2005-EMP-97-Parley,
2017-ACM-Grice}.
\indent A total of \numb{127} lightcurves observed between
\numb{1981} and \numb{2016}
(Table~\ref{tab:lc}) are used in this work.
\subsection{High-angular-resolution imaging\label{ssc:obs:ao}}
\indent We compile here all the high-angular-resolution images of \rem{(107)}
Camilla taken with the HST and
large ground-based telescopes equipped with AO-fed
cameras: Gemini North, ESO VLT, and W. M. Keck, of which only a subset had already been published
\citep{2001-IAUC-Storrs, 2008-Icarus-196-Marchis}.
All of these data sets were acquired by the authors
of this paper. The data comprise \numb{62} different epochs, with
multiple images each, spanning \numb{15} years, from March 2001 to
August 2016.
\indent The images from the VLT were acquired with
both the first generation instrument NACO
\citep[NAOS-CONICA,][]{2003-SPIE-4841-Lenzen,2003-SPIE-4839-Rousset} and
SPHERE
\add{\citep[Spectro-Polarimetric High-contrast Exoplanet REsearch,][]{2006-OExpr-14-Fusco,2008-SPIE-Beuzit}}, the
second generation extreme-AO instrument designed for
exoplanet detection and characterization.
The images taken with SPHERE used its IRDIS differential imaging
camera sub-system
\add{\citep[InfraRed Dual-band Imager and Spectrograph,][]{2008-SPIE-Dohlen}}.
Images taken at Gemini North used the NIRI camera
\add{\citep[Near InfraRed Imager,][]{2003-PASP-115-Hodapp}}, fed by the ALTAIR AO system
\citep{2000-AOST-Herriot}. Finally, observations at Keck were
\rem{realized} \add{acquired}
with \add{NIRC2
\citep[Near-InfraRed Camera 2,][]{2004-AppOpt-43-vanDam,2000-SPIE-4007-Wizinowich}}.
We list in Table~\ref{tab:ao} the details of each observation.
\indent The basic data processing (sky subtraction, bad-pixel
removal, and flat-field
correction) was performed using
in-house routines developed in Interactive Data Language (IDL) to
reduce AO-imaging data
\citep[see][for more details]{2008-AA-478-Carry}.
\subsection{High-angular-resolution spectro-imaging\label{ssc:obs:ifu}}
In 2015 and 2016, we also used the \add{integral-field
spectrograph (IFS)} of the SPHERE instrument
at the ESO VLT, aiming \rem{at measuring} \add{to measure} the
reflectance spectrum of Camilla's largest satellite S1, and the
astrometry of the fainter satellite S2.
The observations were \rem{carried out} \add{made} in the IRDIFS\_EXT mode
\citep{2014-AA-572-Zurlo}, in
which both IRDIS \citep{2008-SPIE-Dohlen} and the IFS \citep{2008-SPIE-Claudi}
\add{data are acquired}
\rem{observe} simultaneously. In
this set-up, the IFS covers the wavelength range from 0.95 to
1.65\,$\mu$m (YJH bands) at a spectral resolving power of $\sim$30 in
a 1.7\ensuremath{^{\prime\prime}} $\times$1.7\ensuremath{^{\prime\prime}}~field of view (FoV),
while IRDIS \rem{observes} \add{operates} in the dual-band imaging mode
\citep[DBI,][]{2010-MNRAS-407-Vigan}
with $K_{12}$, a pair of filters in the {\it K} band ($\lambda_{K_1}$ =
2.110\,$\mu$m and $\lambda_{K_2}$ = 2.251\,$\mu$m, $\sim$0.1\,$\mu$m
bandwidth), within a 4.5\ensuremath{^{\prime\prime}}~FoV.
All observations were performed in the pupil-tracking mode,
where the pupil remains fixed while the field orientation varies during the
observations. This mode provides the best PSF stability and helps in reducing
and subtracting static speckle noise in the images.
\indent For the pre-processing of both the IFS and IRDIS data,
we \rem{made use of} \add{used} the preliminary release (v0.14.0-2) of the SPHERE
Data Reduction and Handling (DRH) software
\citep{2008-SPIE-7019-Pavlov}, as well as additional in-house tools written
in IDL, including parts of the public pipeline presented in
\citet{2015-MNRAS-454-Vigan}.
{See our recent works on (3) Juno and
(6) Hebe for more details
\citep{2015-AA-581-Viikinkoski, 2017-AA-604-Marsset}.}
We used the DRH for the creation of some of the basic
calibrations: master sky frames, master flat-field, IRDIFS spectra positions,
initial wavelength calibration and flat field. Before creating the data
cubes, we used IDL routines to subtract the background from each science frame
and correct for the bad pixels identified using the master dark and master
flat-field DRH products. This step was introduced as a substitute
\rem{to} \add{for} the bad
pixel correction provided by the DRH. Bad pixels were first identified using a
sigma-clipping routine, and then corrected using a bicubic pixel
interpolation with the \texttt{MASKINTERP} IDL routine. The resulting frames were then
injected into the DRH recipe to create the data cubes by interpolating the
data spectrally and spatially.
\subsection{Stellar occultations\label{sec:obs:occ}}
\indent Eleven stellar occultations by Camilla have been observed
in the last decade, mostly by amateur astronomers
\citep[see][]{2014-ExA-38-Mousis, 2016-IAU-Dunham}.
The timings of disappearance and reappearance of the
stars, together with the \rem{localization} \add{location} of each observing station
are compiled by \citet{PDSSBN-OCC}, and publicly available on the Planetary Data
System
(PDS\footnote{\href{http://sbn.psi.edu/pds/resource/occ.html}{http://sbn.psi.edu/pds/resource/occ.html}}).
We converted the disappearance and reappearance timings
(Table~\ref{tab:obsocc})
of the occulted stars into segments (called chords) on the plane
of the sky, using the location of the observers on Earth and the
apparent motion of Camilla following the recipes by \citet{1999-IMCCE-Berthier}.
Four stellar occultations had multiple chords\add{;}\rem{,
while the} other
events \add{had} only \rem{had} one or two positive chords, and
\rem{provided}
\add{contributed less to constraining} \rem{constraints on} the size and apparent shape of Camilla.
\add{In n}\rem{N}one of these eleven stellar occultations was
\add{there any evidence for} \rem{indicative of the
presence of} a companion.
We list in Table~\ref{tab:occ} the details of the seven events that
we used.
\subsection{Near-infrared spectroscopy\label{sec:obs:spex}}
\indent On November 1, 2010, we observed Camilla over
0.8--2.5\,$\mu$m with the
near-infrared spectrograph SpeX \citep{2003-PASP-115-Rayner}, on
the
3-meter NASA IRTF located on Mauna Kea, Hawaii,
using the low resolution Prism mode ($R$\,=\,100). We used the
standard \textsl{nodding} procedure for the observations, using
alternately two separated locations on the slit
\citep[e.g.,][]{2007-AA-470-Nedelcu} to estimate the sky background.
We used Spextool (SPectral EXtraction TOOL), an
IDL-based data reduction package written by
\cite{2004-PASP-116-Cushing} to reduce SpeX data.
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{CamillaAO-001.png}
\caption[Examples of AO images of Camilla]{
Examples of AO images from Gemini, Keck, and ESO VLT.
The first two panels (\add{1 \& 2}, August 13, 2003, from Keck) show a typical
AO image, before and after halo subtraction:
Camilla dominates the background and makes the satellites hard to
detect.
The remaining panels show halo-subtracted images from different
dates, with small circles indicating the positions of
\add{the bright satellite S1~and the fainter
S2~(frames 9 and 10 only)}.
On these panels, the images before subtraction are also
shown in the central circle to highlight the elongated shape of Camilla.
}
\label{fig:ao}
\end{figure*}
\section{Dynamical properties\label{sec:dyn}}
\subsection{Data processing\label{sec:dyn:data}}
\indent The main challenges in measuring the position and
apparent flux of the satellites of an
asteroid result from their sub-arcsecond angular separation and
high contrast (several magnitudes), combined with
\rem{non-perfect} \add{imperfect} AO correction.
A typical image of a binary asteroid (Fig.~\ref{fig:ao}) displays
a central peak (the asteroid itself, angularly resolved or not)
encompassed by a halo (its diffused light),
within which speckle patterns appear.
The faintness of these speckles, produced by
interference of the incoming light, makes them very similar in
appearance to a small moon with a contrast of up to several thousand,
and they can be misleading.
Speckles, however, \add{vary (in position and flux)} on
short timescales, depending on the ambient conditions
and AO performance (e.g., seeing, airmass, brightness of the AO
reference source).
These fluctuations can be used to distinguish genuine satellites from
speckles.
\indent As for the direct imaging of exoplanets, it is crucial to
subtract the halo that surrounds the primary
\citep[in a similar way to the digital coronography of][]{2008-PSS-56-Assafin}.
Because asteroids
are also marginally resolved, their light is not fully coherent,
and the speckle pattern is not as stable in time, nor simple, as
in the case of a star.
The tool we developed considers concentric annuli around the center
of light of the primary to evaluate its halo. Although the principle is
straightforward, great caution was taken in the implementation, especially
in the computation of the
intersection of the annulus with the pixels to allow the use of
annuli with a sub-pixel width. The contribution of each pixel to the
different annuli is thus computed first; the median flux of each
annulus is then evaluated and subtracted from each pixel accordingly.
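For illustration only, a whole-pixel version of this annular median-halo estimate can be written as the short Python sketch below; it omits the sub-pixel annulus weighting described above and is not the in-house IDL tool actually used in this work.
\begin{verbatim}
import numpy as np

def subtract_halo(image, xc, yc, dr=1.0):
    # Estimate the halo as the median flux in concentric annuli of
    # width dr (pixels) around the photocenter (xc, yc), then subtract
    # it from every pixel (whole-pixel approximation of the sub-pixel
    # scheme described in the text).
    y, x = np.indices(image.shape)
    r = np.hypot(x - xc, y - yc)
    ring = (r / dr).astype(int)              # annulus index of each pixel
    halo = np.zeros_like(image, dtype=float)
    for i in np.unique(ring):
        mask = ring == i
        halo[mask] = np.median(image[mask])  # median flux of that annulus
    return image - halo
\end{verbatim}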
\indent The position and flux of the satellite, relative to the
primary, are then measured by fitting a 2-D Gaussian function to
the halo-subtracted image.
\add{The satellites are distinguished from
speckles by comparing different images, taken both
close in time and over a range of times.}
To estimate the uncertainties on the
position and apparent flux of both the primary and the satellites,
we use different \add{integration} apertures for \add{each object.} \rem{which their size is determined by
the fit the} \add{The sizes of the apertures are
determined by fitting a} 2-D Gaussian \rem{function} \add{to
each}, \add{with diameters} typically \rem{from} \add{being}
5 to 150 pixels for the primary, and 3 to 15 \add{pixels} for the satellites.
The reported positions and apparent magnitudes
(Tables~\ref{tab:genoid1} and \ref{tab:genoid2})
are the average of all fits (after removal of outlier values), and the
reported uncertainties are the standard deviations.
\subsection{Orbit determination with \texttt{Genoid}\label{sec:dyn:genoid}}
\indent We use our algorithm \texttt{Genoid}~\citep[GENetic Orbit
IDentification,][]{2012-AA-543-Vachier}
to determine the orbit of the satellites. \texttt{Genoid}~is a genetic-based
algorithm that relies on a metaheuristic method to find the
best-fit (i.e., minimum $\chi^2$) suite of dynamical parameters
(\add{mass, semi-major axis, eccentricity, inclination, longitude
of the node, argument of pericenter, and time of passage to
pericenter}) by
refining, generation after generation, a grid of test values
(called \textsl{individuals}).
\indent The first generation is drawn randomly over a very
wide range for each parameter, \add{thus avoiding missing the
global minimum because of inadequate initial conditions}\rem{, which is always
a threat in minimization algorithms}.
For each individual (i.e., set of dynamical
parameters), the $\chi^2$ residual between the observed and predicted
positions is computed as
\begin{equation}
\label{eq:chi2}
\chi^2 = \sum_{i=1}^{N} \left[
\left(\frac{X_{o,i} - X_{c,i}}{\sigma_{x,i}} \right)^2 +
\left(\frac{Y_{o,i} - Y_{c,i}}{\sigma_{y,i}} \right)^2 \right]
\end{equation}
\noindent where $N$ is the number of observations, and $X_i$ and $Y_i$
are the relative positions between the satellite and Camilla along
the right ascension and declination respectively.
The indices $o$ and $c$ stand for observed and computed
positions, and $\sigma$ are the measurement uncertainties.
\indent A new generation of individuals is drawn
by mixing randomly the parameters of individuals with the lowest
$\chi^2$ from the former generation\rem{, in a survival of the fittest
fashion}.
This way, the entire parameter space is scanned, with the
density of evaluation points increasing toward low $\chi^2$
regions along the process.
At each generation, we also use the best
individual as initial condition to search for the
local minimum by gradient descent.
The combination of genetic grid-search and gradient descent thus
ensures finding \textsl{the} best solution.
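To illustrate the generation-refinement logic only, the Python sketch below fits a generic parameter vector by minimizing the $\chi^2$ of Eq.~(\ref{eq:chi2}); it is a deliberately simplified stand-in for \texttt{Genoid}, in which \texttt{predict} is a user-supplied function returning the modeled relative positions, and the gradient-descent refinement is omitted.
\begin{verbatim}
import numpy as np

def chi2(params, obs, sigma, predict):
    # Squared residuals in (X, Y), normalized by the measurement
    # uncertainties (cf. the equation above).
    return np.sum(((obs - predict(params)) / sigma) ** 2)

def genetic_fit(obs, sigma, predict, bounds, n_ind=200, n_gen=100, seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    # first generation: random draws over a wide range for each parameter
    pop = rng.uniform(lo, hi, size=(n_ind, lo.size))
    for _ in range(n_gen):
        scores = np.array([chi2(p, obs, sigma, predict) for p in pop])
        best = pop[np.argsort(scores)[: n_ind // 4]]          # fittest quarter
        # next generation: randomly mix the parameters of the best individuals
        pick = rng.integers(0, best.shape[0], size=(n_ind, lo.size))
        pop = best[pick, np.arange(lo.size)]
        pop += rng.normal(0.0, 0.01 * (hi - lo), pop.shape)   # small mutations
        pop = np.clip(pop, lo, hi)
    scores = np.array([chi2(p, obs, sigma, predict) for p in pop])
    return pop[np.argmin(scores)], scores.min()
\end{verbatim}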
\indent We then assess the confidence interval of the dynamical
parameters by considering all the individuals providing predictions
within 1, 2, and 3\,$\sigma$ of the observations. The range
spanned by these individuals provides the
confidence interval at the corresponding $\sigma$ level for each
parameter.
\indent The reliability of \texttt{Genoid}~has been assessed during a stellar
occultation by (87) Sylvia and its satellites Romulus and Remus on
January 6, 2013:
\add{\texttt{Genoid}~had been used to predict the position of Romulus
before the event, directing observers to locations specifically to
target the satellite. Four different observers detected
an occultation by Romulus at only 13.5\,km
off the predicted track
\citep[the cross-track uncertainty was 65\,km,][]{2014-Icarus-239-Berthier}. }
\subsection{Orbit of S1: \textsl{S/2001 (107) 1}\label{sec:dyn:S1}}
\indent We measured \numb{80} astrometric positions of the
satellite S1~relative to Camilla over a span of \numb{15} years,
corresponding to \numb{5642} days or \numb{1520} revolutions.
The orbit we derive with \texttt{Genoid}~fits all \numb{80} observed
positions of the satellite with a root mean square (RMS) residual
of \numb{7.8} milli-arcseconds (mas) only, which corresponds to
a sub-pixel accuracy.
\indent S1~orbits Camilla on a circular, prograde, equatorial
orbit, in \numb{3.71} days with a semi-major axis of \numb{1248}\,km.
We detail all the parameters of its orbit in
Table~\ref{tab:dyn}, with their confidence interval taken at
3\,$\sigma$. The distribution of residuals between the
observed and predicted positions, normalized by the uncertainty on
the measured positions, are plotted in Fig.~\ref{fig:bin:rms}.
The orbit we determine here is qualitatively similar to the one
\add{given} by
\citet{2008-Icarus-196-Marchis}, while much better constrained:
we fit \numb{80} astrometric positions over \numb{15} years
with \rem{a} \add{an} RMS residual
of \numb{7.8} mas, compared to their fit of
\numb{23} positions over less than \numb{3} years with \rem{a} \add{an} RMS
residual of \numb{22} mas.
\add{The much longer time span of observations provides a much
more stringent
constraint on the period
(\numb{3.712\,34\,$\pm$\,0.000\,04\,day}) of S1,
compared to the value of
\numb{3.722\,$\pm$\,0.009\,day} reported by
\citet{2008-Icarus-196-Marchis}.}
\indent \add{As a result, we determine a much more precise mass for Camilla of
\numb{$(1.12 \pm 0.01) \times 10^{19}$}\,kg (3\,$\sigma$
uncertainty), about 1\% of the mass of Ceres
\citep{2012-PSS-73-Carry}.
We list in Table~\ref{tab:mass} the reported values of the mass
of Camilla found in the literature. Our mass value agrees well
with the average value
\numb{$(1.10 \pm 0.69) \times 10^{19}$}\,kg we show in Table~\ref{tab:mass}, although
the mass estimates derived from orbital deflection and
solar system ephemerides have a large scatter
\citep[see][for a discussion on the precision and bias of mass
determination methods]{2012-PSS-73-Carry}.
Our determination significantly reduces the
uncertainty in the prior
value of \numb{$(1.12 \pm 0.09) \times 10^{19}$}\,kg, that also
used the orbit of S1~\citep{2008-Icarus-196-Marchis}.
}
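\indent As a simple consistency check (not part of the original derivation), this mass follows from Kepler's third law applied to the orbit of S1, the satellite mass being negligible:
\[
M \simeq \frac{4\pi^2 a^3}{G\,P^2}
  = \frac{4\pi^2\,(1.2478\times 10^{6}\,\mathrm{m})^3}
         {(6.674\times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})\,(3.712\,34\times 86\,400\,\mathrm{s})^2}
  \approx 1.12\times 10^{19}\,\mathrm{kg}.
\]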
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{s1-omc.png}
\caption[Residuals on predicted positions of S1]{
Distribution of residuals \add{for S1} between the
observed (index o)
and
predicted (index c) positions, normalized by the uncertainty on
the measured
positions ($\sigma$), and color-coded by observing epoch.
X stands for right ascension and Y for declination.
The three large gray circles represent the 1, 2, and 3 $\sigma$
limits.
The top panel shows the histogram of residuals along X, and the
right panel the residuals along Y. The light gray Gaussian in the
background has a standard deviation of \rem{1} \add{one}.
}
\label{fig:bin:rms}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{s2-omc.png}
\caption[Residuals on predicted positions of S2]{
Similar \rem{as} \add{to} Fig.~\ref{fig:bin:rms}, but for S2.
}
\label{fig:bin:rms2}
\end{figure}
\subsection{Orbit of S2: \textsl{S/2016 (107) 1}\label{sec:dyn:S2}}
\indent We measured \numb{11} astrometric positions of the
satellite S2~relative to Camilla \rem{between} \add{during} \numb{2015
and 2016}, corresponding to \numb{428} days or \numb{311} revolutions.
\rem{Unfortunately,} \rem{these} \add{These} observations correspond to \rem{only} three
well-separated epochs: 2015-May-29, 2016-Jul-12, and 2016-Jul-30,
providing \add{the minimum needed to constrain} \rem{little constraints on} the orbit.
\add{Thus, although the orbit we determine with \texttt{Genoid}~fits all
\numb{11} observed positions of S2~with an RMS
residual of only \numb{5.0} mas and yields
reliable values for the major orbital elements,
details of all orbital parameters will require further
observations.}
\indent \add{
S2~orbits Camilla in
\numb{1.38} days with a semi-major axis of \numb{644}\,km.
We detail all the parameters of its orbit in
Table~\ref{tab:dyn} and present the distribution of residuals
between the observed and predicted positions in Fig.~\ref{fig:bin:rms2}.
Unlike S1, its orbit seems neither equatorial nor
circular.
While cognizant of the larger uncertainties,
we favor an orbit inclined to the equator
of Camilla by an angle
$\Lambda$ of \numb{32\,$\pm$\,28}\ensuremath{^{\circ}}~(Fig.~\ref{fig:spins}),
and a more eccentric orbit (\numb{e=0.18$_{-0.18}^{+0.23}$}).
Although a circular orbit co-planar with S1~is marginally
within the range of uncertainty,
such a solution results in significantly higher residuals.
This configuration of an outer satellite on a circular and
equatorial orbit with an inner satellite
on an inclined and more eccentric orbit
has already been reported for other
triple systems:
(45) Eugenia, (87) Sylvia, and (130) Elektra
\citep{2010-Icarus-210-Marchis, 2012-AJ-144-Fang,
2014-Icarus-239-Berthier, 2016-ApJ-820-Yang,
2016-Icarus-276-Drummond}.
}
\begin{table*}
\begin{center}
\caption[Orbital elements of the satellites of Camilla]{Orbital elements of the satellites of Camilla\add{,}
S1~and S2, expressed in EQJ2000,
obtained with \texttt{Genoid}:
orbital period $P$, semi-major axis $a$,
eccentricity $e$, inclination $i$,
longitude of the ascending node $\Omega$,
argument of pericenter $\omega$, time of pericenter $t_p$.
The number of observations and RMS between predicted and
observed positions are also provided.
\rem{We finally} \add{Finally, we} report the derived primary mass $M$,
the ecliptic J2000 coordinates of the orbital pole
($\lambda_p,\,\beta_p$),
the equatorial J2000 coordinates of the orbital pole
($\alpha_p,\,\delta_p$), and the
\rem{angular tilt} \add{orbital inclination} ($\Lambda$) with respect to the equator of
Camilla. Uncertainties are given at 3-$\sigma$.}
\label{tab:dyn}
\begin{tabular}{l ll ll}
\hline\hline
& \multicolumn{2}{c}{S1} & \multicolumn{2}{c}{S2} \\
\hline
\noalign{\smallskip}
\multicolumn{2}{c}{Observing data set} \\
\noalign{\smallskip}
Number of observations & 80 & & 11 \\
Time span (days) & 5642 & & 428 \\
RMS (mas) & 7.8 & & 5.0 \\
\hline
\noalign{\smallskip}
\multicolumn{2}{c}{Orbital elements EQJ2000} \\
\noalign{\smallskip}
$P$ (day) & 3.712\,34 & $\pm$ 0.000\,04 & 1.376 & $\pm$ 0.016 \\
$a$ (km) & 1247.8 & $\pm$ 3.8 & 643.8& $\pm$ 3.9 \\
$e$ & 0.0 & $+$ 0.013 & 0.18 & $_{-0.18}^{+0.23}$ \\
$i$ (\ensuremath{^{\circ}}) & 16.0 & $\pm$ 2.3 & 27.7 & $\pm$ 21.8 \\
$\Omega$ (\ensuremath{^{\circ}}) & 140.1 & $\pm$ 4.9 & 219.9 & $\pm$ 67.0 \\
$\omega$ (\ensuremath{^{\circ}}) & 98.7 & $\pm$ 6.5 & 199.4 & $\pm$ 37.6 \\
$t_{p}$ (JD) & 2452835.902 & $\pm$ 0.067 & 2452835.31589 & $\pm$ 0.174 \\
\hline
\noalign{\smallskip}
\multicolumn{2}{c}{Derived parameters} \\
\noalign{\smallskip}
$M$ ($\times 10^{19}$ kg) & 1.12 & $\pm$ 0.01 \\
$\lambda_p,\,\beta_p$ (\ensuremath{^{\circ}}) & 73, +53 & $\pm$ 2, 2 & 114, +42 & $\pm$ 44, 18 \\
$\alpha_p,\,\delta_p$ (\ensuremath{^{\circ}}) & 50, +74 & $\pm$ 5, 2 & 130, +62 & $\pm$ 67, 22 \\
$\Lambda$ (\ensuremath{^{\circ}}) & 4 & $\pm$ 8 & 32 & $\pm$ 28 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{spins.png}
\caption[Spin location of the orbits and Camilla]{
Coordinates and 1 -- 2 -- 3 $\sigma$ contours of Camilla\add{'s} spin axis
(blue) and the orbital poles of S1~(gray) and S2~(red)
in ecliptic coordinates.
}
\label{fig:spins}
\end{figure}
\section{Surface properties\label{sec:spec}}
\subsection{Data processing\label{sec:spec:data}}
\indent We measured the near-infrared spectra of Camilla and its largest
satellite S1~using the SPHERE/IRDIFS data.
Telluric features were removed, and the reflectance spectra were
obtained by observing the nearby solar type star HD139380.
\indent Similarly to previous sections, the bright halo of Camilla that
contaminated the spectrum of the moon was removed.
This was achieved by measuring the
background at the location of the moon for each pixel
as the median value of the area defined as a
40$\times$1-pixel arc \rem{centred} \add{centered} on Camilla.
To estimate the uncertainty and potential bias on photometry
introduced by this method, we performed a
number of simulations in which we injected fake companions on the 39 spectral
images of the spectro-imaging cube, at a separation of $\approx$300
mas and random position angles from the
primary. The simulated sources were modeled as the PSF, from the
calibration star images, scaled in brightness.
\indent The halo from Camilla was then removed from these
simulated images using the method described above, and the flux of the simulated
companion measured by adjusting a 2D-Gaussian profile.
Based on a total of 500 simulated companions, we find
that the median loss of flux at each wavelength is
\numb{11$\pm$10\%}. A spectral gradient is also introduced by our
technique, but it is smaller than \numb{0.06$\pm$0.07\%\,$\cdot$$\mu$m$^{-1}$}.
The spectra of Camilla and S1\add{,} normalized at
1.1\,$\mu$m\add{,} are shown in Fig.~\ref{fig:spec}.
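A minimal Python sketch of the injection-recovery test described above is given below, purely for illustration; the integer-pixel injection, the \texttt{remove\_halo} and \texttt{measure\_flux} callables, and the fixed separation are simplifying assumptions, not the actual IDL implementation.
\begin{verbatim}
import numpy as np

def inject(frame, psf, dx, dy, flux):
    # Add a flux-scaled copy of the (same-sized) PSF frame, shifted by
    # whole pixels (dx, dy); a crude stand-in for sub-pixel injection.
    shifted = np.roll(np.roll(psf, int(round(dy)), axis=0),
                      int(round(dx)), axis=1)
    return frame + flux * shifted / psf.sum()

def injection_recovery(frame, psf, remove_halo, measure_flux,
                       sep_pix=24.0, flux_in=1.0, n_trials=500, seed=None):
    # Inject fake companions at a fixed separation and random position
    # angles, re-run the halo removal, and record the fractional flux loss.
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_trials):
        pa = rng.uniform(0.0, 2.0 * np.pi)
        dx, dy = sep_pix * np.cos(pa), sep_pix * np.sin(pa)
        cleaned = remove_halo(inject(frame, psf, dx, dy, flux_in))
        losses.append(1.0 - measure_flux(cleaned, dx, dy) / flux_in)
    return np.median(losses), np.std(losses)
\end{verbatim}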
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{spectra}
\caption[Near-infrared spectra of Camilla and its largest moon S1]{
Visible and near-infrared spectrum of Camilla from IRTF (green
and black dots) and
SPHERE (red squares, offset by +0.1),
and its moon S1~from SPHERE (blue triangles, offset by -0.15).
Gray areas represent the wavelength ranges affected by water
vapour in the atmosphere.
All spectra were normalize\add{d} to unity at one micron.
Overplotted on the IRTF spectra is the Bus-DeMeo Xk-class spectrum.
}
\label{fig:spec}
\end{figure}
\subsection{Spectrum of Camilla\label{sec:spec:107}}
\indent We combine the near-infrared spectrum we acquired at NASA IRTF
(Section~\ref{sec:obs:spex}) with the visible
spectrum from SMASS
\citep{2002-Icarus-158-BusII,2002-Icarus-158-BusI}
and analyze them with \add{the}
\texttt{M4AST}\footnote{\href{http://m4ast.imcce.fr/}{http://m4ast.imcce.fr/}}
\citep[Modeling for Asteroids, ][]{2012-AA-544-Popescu} suite of
Web tools to determine asteroid taxonomic classification,
mineralogy, and most-likely meteorite analog.
From this longer wavelength range, we found Camilla to be an Xk-type
asteroid \citep[using Bus-DeMeo taxonomic scheme,
Fig.~\ref{fig:spec},][]{2009-Icarus-202-DeMeo}.
The low albedo of Camilla
\citep[0.059\,$\pm$\,0.005, taken as the average of the
estimates by][]{PDSSBN-TRIAD, 2002-AJ-123-Tedesco-a,
2010-AJ-140-Ryan, 2011-PASJ-63-Usui,
2011-ApJ-741-Masiero}\add{,}
hints at a P-type classification, using the
\citet{1989-AsteroidsII-Tedesco} scheme.
\indent Although the best spectral match is formally found
for an Enstatite Chondrite EH5 meteorite (Queen
Alexandra Range, Antarctica origin, maximum size of 10\,$\mu$m),
the low albedo of Camilla argues for a different type of analog
material.
The composition of P-type asteroids is indeed \rem{hard} \add{difficult}, if not
impossible, to infer from their visible and near-infrared spectra
owing to the lack of absorption bands.
\indent Recently,
\citet{2015-ApJ-806-Vernazza} have shown
that anhydrous chondritic porous interplanetary dust particles
(IDPs) were
likely to originate from D- and P-types asteroids, based on
spectroscopic observations in the mid-infrared of outer\add{-}belt
D- and P-type asteroids, including Camilla. The mixture of
olivine-rich and pyroxene-rich IDPs they used was compatible with
the visible and near-infrared spectrum of Camilla.
As such, the surface of Camilla, and more generally of D- and
P-types, is very similar to that of
comets, as already reported by \citet{2006-Icarus-182-Emery}
from the spectroscopy of Jupiter Trojans in the mid-infrared,
revealing the presence of anhydrous silicates.
\subsection{Spectrum of S1\label{sec:spec:S1}}
\indent As visible \rem{on} \add{in} Fig.~\ref{fig:spec}\add{,}
the spectrum of S1~is \rem{very} similar to that of Camilla.
No significant difference in slope or absorption bands
can be detected. This implies that the two components
are spectrally identical from 0.95 to 1.65\,$\mu$m\add{,} within
the precision of our measurements.
Such a similarity between the components of multiple systems has
already been reported for
several other main-belt asteroids:
(22) Kalliope \citep{2009-Icarus-204-Laver},
(90) Antiope \citep{2009-MPS-44-Polishook,
2011-Icarus-213-Marchis},
(130) Elektra \citep{2016-ApJ-820-Yang}, and
(379) Huenna \citep{2011-Icarus-212-DeMeo}.
\indent Such spectral similarity, together with the main
characteristics of the orbit (prograde, equatorial, circular),
supports an origin of these satellites, here for S1~in
particular, by impact and reaccumulation of material in orbit
\citep[see][for a review]{2015-AsteroidsIV-Margot}.
Formation by \rem{rotation} \add{rotational} fission is \rem{here} unlikely owing to the
rotation period of Camilla (4.84\,h).
\section{Physical properties\label{sec:phys}}
\subsection{Data processing\label{sec:phys:data}}
\indent We used the optical lightcurves \rem{at their face
value}\add{without modification}, only
converting their heterogeneous formats from many observers to the
usual lightcurve inversion format \citep{2010-AA-513-Durech}.
\add{For the occultation observations, the} \rem{The} location of observers\add{,} together with their timings of
the disappearance and the reappearance of the star\add{,} were
converted into chords on the plane of the sky, using the recipes from
\citet{1999-IMCCE-Berthier}.
Finally, the 2-D profile of the apparent disk of Camilla was measured
on the AO images, which were deconvolved using the
\texttt{Mistral}~algorithm \citep{2000-PhD-Fusco, 2004-JOSAA-21-Mugnier},
the reliability of which has been demonstrated elsewhere
\citep{2006-JGR-111-Witasse}; the profiles were extracted
using the wavelet transform described in
\citet{2008-AA-478-Carry, 2010-AA-523-Carry}.
\subsection{3-D shape modeling with \texttt{KOALA}\label{sec:shape:koala}}
\indent We used the multi-data inversion algorithm Knitted Occultation,
Adaptive-optics, and Lightcurve Analysis (\texttt{KOALA}), which
determines the set of rotation period, spin-vector coordinates, and
3-D shape that provide the best fit to all observations
simultaneously \citep{2010-Icarus-205-Carry-a}.
\indent The \texttt{KOALA}~algorithm minimizes the total
$\chi^2 = \chi^2_{LC} + w_{AO}\ \chi^2_{AO}+ w_{Occ}\ \chi^2_{Occ}$
that \add{is composed of} \rem{composes} the individual contributions from light curves (LC),
profiles from disk-resolved images (AO), and occultation chords
(Occ). Adaptive optics and occultation data are weighted with
respect to the lightcurves with parameters $w_{AO}$ and $w_{Occ}$,
respectively.
\add{Within each type of data, all the epochs are weighted
uniformly.}
The optimum values of these weights can be
objectively obtained
following the approach of \cite{2011-IPI-5-Kaasalainen}.
\indent This method has been spectacularly validated by the images
taken by the
OSIRIS camera on-board the ESA Rosetta mission during its flyby of
the asteroid (21) Lutetia \citep{2011-Science-334-Sierks}.
Before the encounter, the spin and 3-D shape of Lutetia had been
determined with \texttt{KOALA}, using lightcurves and AO images
\citep{2010-AA-523-Carry, 2010-AA-523-Drummond}.
A comparison of the pre-flyby solution with the OSIRIS images
showed that the \rem{spin-vector} \add{spin vector}
\rem{coordinates were} \add{was} accurate to within
2\ensuremath{^{\circ}}\rem{,} and the diameter to within 2\%. The \add{RMS} residual\rem{s} in the
\add{surface topography} \rem{topographic profiles} between the \texttt{KOALA}~predictions and the OSIRIS
images \rem{were} \add{was} only 2\,km, for a
98\,km-diameter asteroid \citep{2012-PSS-66-Carry}.
\subsection{Spin and 3-D shape of Camilla\label{sec:shape:107}}
\indent We used
\numb{127} optical lightcurves,
\numb{34} profiles from disk-resolved imaging, and
\numb{7} stellar occultation events to reconstruct the spin axis and 3-D
shape of Camilla.
The model fits \rem{very} well the entire data set, with mean residuals of
only
\numb{0.03}\,mag for the lightcurves (\add{Figs.~\ref{fig:lc}, \ref{app:lc}}),
\numb{0.29}\,pixel for the images (Fig.~\ref{fig:prof}), and
\numb{0.35}\,s for the stellar occultations
(Fig.~\ref{fig:occ}).
\add{There are small local departures of the shape model from the
stellar occultation chords that can be due to local topography
not modeled with our low-resolution shape model.}
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{artlc-001}
\caption[Optical lightcurves of Camilla]{
\add{Examples of optical lightcurves of Camilla.
For each epoch, the upper panel presents the observed photometry
(grey spheres) compared with synthetic lightcurves generated with
the shape model (black lines). The lower panel shows the residuals
between the observed and synthetic flux.
The observing date, number of points, duration
of the lightcurve (in hours), phase angle ($\alpha$),
and RMS residuals between the
observations and the synthetic lightcurves are displayed.
In most cases, measurement uncertainties are not provided by the
observers but can be estimated from the spread of measurements.
See Fig.~\ref{app:lc} for the entire data set.}
}
\label{fig:lc}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=.9\textwidth]{ao-001}
\caption[Disk-resolved profiles of Camilla]{
\rem{Profiles of Camilla from disk-resolved images,}
\add{All 34 profiles of Camilla from disk-resolved images,
compared with the \add{projection of the shape model on the
plane of the sky}.
On each panel, corresponding to a different epoch,
the grey shaded areas correspond to the 1-2-3\,$\sigma$
confidence intervals of each profile, while the shape model is
represented by the wireframe mesh.}
}
\label{fig:prof}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=\textwidth]{occ-001.png}
\caption[Stellar occultations by Camilla]{
The seven stellar occultations by Camilla, compared with the \add{shape
model projected on the
plane of the sky for the times of the
occultations.}
\add{The observer of the northern chord in the first occultation, presenting a
clear mismatch with the shape model, reported the presence of
thin cirrus that may explain the discrepancy.}
}
\label{fig:occ}
\end{figure*}
\indent The rotation period and coordinates of the spin axis
(Table~\ref{tab:koala}) agree very well with previous results from
lightcurve-only inversion and convex shape modeling
\citep{2003-Icarus-164-Torppa, 2011-Icarus-214-Durech,
2016-AA-586-Hanus}, as well as
models obtained by combining lightcurves and smaller
subsets of \add{the} present AO data \citep[respectively 3 and 21 epochs,
see][]{2013-Icarus-226-Hanus, 2017-AA-601-Hanus}.
The shape of Camilla is far from a sphere, with a strong
ellipsoidal elongation along the equator (\add{a/b} axes ratio of
\numb{1.37\,$\pm$\,0.12}, see Table~\ref{tab:koala}).
Departures from the ellipsoid are, however, limited, and \add{mainly} consist
\rem{mainly} of two large circular basins, \rem{possibly} reminiscent of
impact craters (Fig.~\ref{fig:topo}).
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{map-koala}
\caption[Topography of Camilla]{
Topographic map of Camilla, with respect to its reference
ellipsoid (Table~\ref{tab:koala}). The main features are the two
deep and circular basins located at
(87\ensuremath{^{\circ}},-23\ensuremath{^{\circ}})
and
(278\ensuremath{^{\circ}},+33\ensuremath{^{\circ}}).
}
\label{fig:topo}
\end{figure}
\indent The \add{spherical-}volume-equivalent diameter of Camilla is found to be
\numb{254\,$\pm$\,36}\,km (3 $\sigma$), in \rem{perfect} \add{excellent} agreement with the recent
determination by \citet{2017-AA-601-Hanus} based on a similar data set.
Both estimates are high compared to diameter estimates from
infrared observations with IRAS, AKARI, or WISE
\citep[][see Table~\ref{tab:diam}]{PDSSBN-IRAS,
2010-AJ-140-Ryan, 2011-PASJ-63-Usui, 2011-ApJ-741-Masiero}.
However, diameter determinations by
mid-IR radiometry are based on disk-integrated fluxes.
In the case of highly elongated targets like Camilla,
\add{the projected area is often smaller than the average area as
shown in Table~\ref{tab:IRdiam}. Averaging
disk-integrated fluxes may thus underestimate the average
diameter.}
\indent The agreement of the 3-D models by
\citet{2017-AA-601-Hanus} and the one developed here with \rem{both
the} \add{lightcurves,} disk-resolved images\add{,} and \rem{the} stellar occultation timings,
providing direct size measurements, indeed
argues for Camilla being larger than previously thought.
The corresponding volume is
\numb{8.5\,$\pm$\,1.2 $\cdot 10^{6}$}\,km$^3$. The uncertainty
on the volume matches closely that of the diameter
($\delta V / V \approx \delta D / D$) in the case of 3-D shape
modeling, as shown by \citet{2012-AA-543-Kaasalainen},
because it derives from the uncertainties on the radii of the
vertices, which are correlated (unlike in the case of scaling a
sphere).
\begin{table}[ht]
\begin{center}
\caption[Parameters of the shape model of Camilla]{Sidereal rotation period,
spin-vector coordinates (longitude $\lambda$, latitude $\beta$ in
ECJ2000; and right ascension $\alpha$, declination $\delta$ in EQJ2000),
\add{spherical-}volume-equivalent diameter (D),
volume (V),
diameters along the principal axis of inertia (a, b, c), and
axes ratio of
Camilla
obtained with KOALA.
All uncertainties are reported at 3 $\sigma$.
\label{tab:koala}
}
\begin{tabular}{llll}
\hline\hline
Parameter & Value & Unc. & Unit \\
\hline
Period & 4.843927 & 4$\cdot$10$^{-5}$ & hour \\
$\lambda$ & 68.0 & 9.0 & deg. \\
$\beta$ & 58.3 & 7.0 & deg. \\
$\alpha$ & 35.8 & 9.0 & deg. \\
$\delta$ & 76.1 & 7.0 & deg. \\
T$_0$ & 2444636.00 & & \\
\hline
D & 254 & 36 & km \\
V & 8.55 $\cdot 10^{6}$ & 1.21 $\cdot 10^{6}$ & km$^3$ \\
a & 340 & 36 & km\\
b & 249 & 36 & km\\
c & 197 & 36 & km\\
a/b & 1.37 & 0.12 & \\
b/c & 1.26 & 0.12 & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=.85\textwidth]{3d-view.pdf}
\caption[Shape model of Camilla]{
\add{Views of the shape model along its principal axes (the x,y,z
axes in the plot are aligned with the principal moment of
inertia of the model).}
}
\label{fig:3d}
\end{figure*}
\subsection{Diameter of S1\label{sec:shape:S1}}
\indent We list in Table~\ref{tab:genoid1} and display in
Fig.~\ref{fig:dmag} the \numb{65} measured brightness
difference\add{s} with an uncertainty lower than \numb{1} magnitude
between Camilla and its largest satellite S1.
We found a normal distribution of measurements, as expected from
photon noise, and measure an average magnitude difference of
$\Delta m$\,=\,\numb{6.51\,$\pm$\,0.27}, similar to the value of
\numb{6.31\,$\pm$\,0.68} reported by
\citet{2008-Icarus-196-Marchis} on \numb{22} epochs.
\indent Using the diameter of
\numb{254\,$\pm$\,36}\,km for Camilla (Sect.~\ref{sec:shape:107})
and assuming S1~has the same albedo
as Camilla itself (supported by their spectral similarity, see
Section~\ref{sec:spec:S1}), this magnitude difference
implies a size of
\numb{12.7\,$\pm$\,3.5}\,km for S1, smaller than previously
reported.
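\indent As an illustration of this conversion, the short sketch below (in Python)
applies the equal-albedo relation $D_{\rm S1} = D_{\rm Camilla}\,10^{-\Delta m / 5}$
to the values quoted above; the quadrature error estimate it prints is only a
first-order indication, not the error budget adopted here.
\begin{verbatim}
import math

d_camilla, sig_d = 254.0, 36.0   # km, 3 sigma (this work)
dmag, sig_dmag   = 6.51, 0.27    # measured magnitude difference

# Equal albedo: flux ratio ~ (D_s/D_p)**2, hence D_s = D_p * 10**(-dmag/5)
d_s1 = d_camilla * 10.0 ** (-dmag / 5.0)

# Indicative first-order propagation of the two uncorrelated uncertainties
rel = math.hypot(sig_d / d_camilla, math.log(10.0) / 5.0 * sig_dmag)
print(f"D(S1) ~ {d_s1:.1f} km (+/- {d_s1 * rel:.1f} km, first order)")
\end{verbatim}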
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{s1-mag.png}
\caption[Magnitude of S1~with respect to Camilla]{
Distribution of the magnitude difference\add{s} between Camilla and its
largest satellite S1, compared with the previous report from
\citet{2008-Icarus-196-Marchis}.
The dashed black line represent\add{s} the normal distribution fit to our
results, with a mean and standard deviation of \numb{6.51\,$\pm$\,0.27}.
}
\label{fig:dmag}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{s2-mag.png}
\caption[Magnitude of S2~with respect to Camilla]{
Distribution of the magnitude difference\add{s} between Camilla and its
second satellite S2.
The dashed black line represent\add{s} the normal distribution fit to our
results, with a mean and standard deviation of \numb{9.0\,$\pm$\,0.3}.
}
\label{fig:dmag2}
\end{figure}
\subsection{Diameter of S2\label{sec:shape:S2}}
\indent We list in Table~\ref{tab:genoid2} and display in
Fig.~\ref{fig:dmag2} the \numb{11} measured brightness
difference\add{s} between Camilla and its smaller satellite S2.
We measure an average magnitude difference of
$\Delta m$\,=\,\numb{9.0\,$\pm$\,0.3}
\citep[already reported upon discovery, see][]{2016-IAUC-Marsset}.
\indent Using the diameter of
\numb{254\,$\pm$\,36}\,km for Camilla (Sect.~\ref{sec:shape:107})
and assuming S2~has the same albedo
as Camilla itself as we did for S1, this magnitude difference
implies a size of
\numb{4.0\,$\pm$\,1.2}\,km for S2.
\section{Discussion\label{sec:discuss}}
\subsection{Internal Structure\label{sec:dens}}
\indent Using the mass derived from the study of the \add{dynamics
of the satellites} and the volume from the 3-D shape modeling, we
infer a density of
\numb{1,280\,$\pm$\,130}\,kg$\cdot$m$^{-3}$ (3 $\sigma$ uncertainty),
in agreement with previous reports by
\citet{2008-Icarus-196-Marchis} and
\citet{2017-AA-601-Hanus}.
This highlights how critically the density relies on accurate
volume estimates: the summary of previous diameter
determinations (Table~\ref{tab:diam}),
mainly based on indirect techniques,
leads to a density of
\numb{1,750\,$\pm$\,1,400}\,kg$\cdot$m$^{-3}$
\citep[3 $\sigma$ uncertainty,][]{2012-PSS-73-Carry}.
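\indent Because the mass is fixed by the dynamics of the satellites, the density scales
as the inverse cube of the adopted diameter; the short sketch below makes this
sensitivity explicit (the alternative diameters are arbitrary illustration values,
not those of Table~\ref{tab:diam}).
\begin{verbatim}
rho_ref, d_ref = 1280.0, 254.0          # kg m-3, km (this work)
for d in (254.0, 240.0, 220.0, 200.0):  # arbitrary illustration diameters
    rho = rho_ref * (d_ref / d) ** 3    # rho ~ M / D**3 at fixed mass
    print(f"D = {d:3.0f} km  ->  rho ~ {rho:4.0f} kg m-3")
\end{verbatim}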
\indent
\add{The low density found here is comparable to that
of (87) Sylvia, a P-type of similar size, also
orbiting in the Cybele region
\citep{2014-Icarus-239-Berthier}, and the D-/P-type
Jupiter Trojans (617) Patroclus and (624) Hektor \citep{2010-Icarus-205-Mueller,
2014-ApJ-783-Marchis,2015-AJ-149-113-Buie}.
As mentioned above (\ref{sec:spec:107}), the most likely analog
material for this type of asteroid is IDPs
\citep{2015-ApJ-806-Vernazza}.
There is no laboratory measurement of the density of IDPs. However,
a density of 3,000\,kg$\cdot$m$^{-3}$ for the
silicate phase was reported by the StarDust mission
\citep{2006-Science-314-Brownlee}.
Because these silicates are mixed with organic carbonaceous
particles ($\approx$2,200\,kg$\cdot$m$^{-3}$),
the density of the bulk material is likely
$\approx$2,600\,kg$\cdot$m$^{-3}$
\citep{2000-EMP-82-Greenberg,2016-Nature-530-Paetzold}.
A macroporosity of \numb{50\,$\pm$\,9}\% would thus be required to explain
the density of Camilla, i.e., half of
its volume would be occupied by voids.
Because the pressure inside Camilla reaches 10$^5$\,Pa less
than 15\,km from its surface (90\% of the radius), it is
unlikely that its structure can sustain such large voids.
While silicate grains crush at 10$^7$\,Pa, larger structures
cannot withstand pressures significantly
smaller than this, as the compressive strength decreases as the $-1/2$ power
of the size \citep{1967-JRM-4-Lundborg, 2002-AsteroidsIII-4.2-Britt}.
}
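\indent The macroporosity quoted above follows from
$\mathcal{P} = 1 - \rho_{\rm bulk}/\rho_{\rm grain}$; a minimal sketch of this
estimate is given below. The $\pm$400\,kg$\cdot$m$^{-3}$ spread adopted for the
grain density is an assumed illustration value spanning the organics-to-silicates
range discussed above, not a number derived in our analysis.
\begin{verbatim}
rho_bulk, sig_bulk   = 1280.0, 130.0   # kg m-3 (3 sigma, this work)
rho_grain, sig_grain = 2600.0, 400.0   # kg m-3 (assumed spread, see text)

P = 1.0 - rho_bulk / rho_grain
sig_P = ((sig_bulk / rho_grain) ** 2
         + (rho_bulk * sig_grain / rho_grain ** 2) ** 2) ** 0.5
print(f"macroporosity ~ {100*P:.0f} +/- {100*sig_P:.0f} %")  # ~51 +/- 9 %
\end{verbatim}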
\begin{figure}[ht]
\includegraphics[width=.5\textwidth]{interior.png}
\caption[Dust and ice fractions in Camilla]{
\textbf{Top:} Dust density as a function of its volumetric fraction
for different porosities (10, 30, 50\%).
The range from pure
organics to pure silicates is represented in shaded gray,
and the expected range is highlighted in gold.
\textbf{Bottom:} Dust-to-ice mass ratios as a function of the
volumetric fraction of dust.
}
\label{fig:dustice}
\end{figure}
\indent \add{
An alternate explanation to the low density of Camilla may be
that it contains large amounts of water ice.
An absorption band due to hydration at 3\,$\mu$m was indeed reported by
\citet{2012-Icarus-219-Takir}; its shape is similar to those of
the nearby (24) Themis and (65) Cybele, and it was interpreted as water
frost coating surface grains \citep{2010-Nature-464-Campins,
2011-AA-525-Licandro}.
Because water ice sublimates on airless
surfaces at the heliocentric distance of Themis, Camilla, and Cybele,
the ice on the surface must be replenished from sub-surface
reservoir(s) \citep{2010-Nature-464-Rivkin},
as it occurs on (1) Ceres \citep{1992-Icarus-98-AHearn,
2011-AJ-142-Rousselot,
2014-Nature-505-Kueppers,
2016-Science-353-Combe}.
}
\indent \add{
We thus investigate the possible range of dust-to-ice mass ratio\add{s}
as a function of macroporosity in Camilla (Fig.~\ref{fig:dustice}).
As expected, the porosity decreases
with higher ice content and reaches \numb{10--30\%} for dust-to-ice mass
ratios of \numb{1--6}.
Therefore, the volume occupied by dust, ice, and voids would
be
\numb{33\,$\pm$\,10\%},
\numb{47\,$\pm$\,19\%}, and
\numb{20\,$\pm$\,10\%} respectively, the latter being preferentially
found in the
outer-most volume of the asteroid body.
}
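\indent A quick consistency check of this partition is sketched below: the central
volume fractions must reproduce the bulk density and yield a dust-to-ice mass
ratio within the quoted range (the water-ice density of 917\,kg$\cdot$m$^{-3}$
is an assumed standard value).
\begin{verbatim}
rho_dust = 2600.0   # kg m-3, compact dust (silicates + organics, see text)
rho_ice  = 917.0    # kg m-3, water ice (assumed standard value)
f_dust, f_ice, f_void = 0.33, 0.47, 0.20   # central volume fractions

rho_bulk    = f_dust * rho_dust + f_ice * rho_ice       # voids contribute 0
dust_to_ice = (f_dust * rho_dust) / (f_ice * rho_ice)   # mass ratio

print(f"bulk density  ~ {rho_bulk:.0f} kg m-3")   # ~1290, close to 1280
print(f"dust/ice mass ~ {dust_to_ice:.1f}")       # ~2, within the 1-6 range
\end{verbatim}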
\indent \add{To test this, we compute the
gravitational potential quadrupole
\numb{$J_2$\,=\,0.042\,$\pm$\,0.004}
of the 3-D shape model (Sect.~\ref{sec:shape:107})
under the assumption of a homogeneous interior
using the method of
\citet{1996-Icarus-124-Dobrovolskis}.
Because the orbit of S1~fits \numb{80}~astrometric
positions over 15 years to
measurement accuracy under the assumption of a
null $J_2$ (\ref{sec:dyn:S1}), the mass distribution in Camilla
must be more concentrated at the center, with a denser
\textsl{core}, than suggested by its
shape. A similar internal structure has already been suggested
for (87) Sylvia and (624) Hektor by
\citet{2014-Icarus-239-Berthier} and
\citet{2014-ApJ-783-Marchis}.
Considering a core of pure silicate, and an outer shell of
porous ice matching the masses above, the core radius would be
\numb{87\,$\pm$\,8}\,km or
\numb{68\,$\pm$\,7}\% of the radius of Camilla.
Additional observations of S2~to determine precisely its
orbit are now required to test further the internal structure of
Camilla.
}
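\indent The order of magnitude of such a core can be recovered with the crude sketch
below, which simply collects the dust of the previous estimate into a compact
central sphere (assumed here at the compact-dust density of
$\approx$2,600\,kg$\cdot$m$^{-3}$) and leaves the porous ice in the shell; it is
an illustration only, not the model actually adjusted above.
\begin{verbatim}
R_camilla = 254.0 / 2.0          # km, volume-equivalent radius
rho_core  = 2600.0               # kg m-3, assumed compact-dust core
dust_term = 0.33 * 2600.0        # dust contribution to the bulk density

x = (dust_term / rho_core) ** (1.0 / 3.0)     # r_core / R_camilla
print(f"core radius ~ {x * R_camilla:.0f} km ({100 * x:.0f} % of the radius)")
\end{verbatim}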
\subsection{Future characterization of Camilla triple system\label{sec:dyn:futur}}
\indent \add{Owing to the large magnitude difference between
Camilla and its satellites (6.5 and 9 mag.),
constraining the size and orbit of the satellites by photometric
observations of mutual events
\citep[eclipses and occultations, see,
e.g.,][]{2009-Icarus-200-Scheirich,
2015-Icarus-248-Carry} is not feasible.
Observations of S2~will therefore rely on direct imaging
such as presented here, or on stellar occultations, which can moreover
provide a direct measurement of the diameter of the satellites.
To this effect, we list in Table~\ref{tab:nextocc} a selection of stellar
occultations that will occur in the next three years.
}
\indent \add{
Similarly to our work on (87) Sylvia which led to the
observation of a stellar occultation by its satellite Romulus
\citep{2014-Icarus-239-Berthier}, we will continuously update
the occultation path of Camilla and of its satellites, for
these events.
The precision of such predictions will benefit from each
successive data release of the ESA Gaia astrometry catalogs
\citep{2007-AA-474-Tanga, 2016-AA-595-Prusti,
2017-AA-607-Spoto}, which will reduce the uncertainty on the path of
Camilla itself to a few kilometers.
The uncertainty on the occultation path of the satellites will
then mostly derive from the uncertainty on their orbital
parameters, and we provide them in Table~\ref{tab:nextocc}.
The orbit of
S2~being only weakly constrained, the uncertainty on its
position for upcoming occultations is very large.
Initial improvement must thus rely on direct
imaging of the system.
}
\section{Summary}
\indent In the present study, we have acquired and compiled
optical lightcurves, stellar occultations, visible and near-infrared
spectra, and high-contrast and
high-angular-resolution images and spectro-images from the Hubble
Space Telescope and large ground-based telescopes (Keck, Gemini,
VLT) equipped with adaptive-optics-fed cameras.
\indent Using \numb{80} positions spanning \numb{15} years,
we study the dynamics of the largest satellite, S1, and
determine its orbit around Camilla to be circular, equatorial, and
prograde. The residuals between our dynamical solution and the
observations are \numb{7.8}\,mas, corresponding to a sub-pixel
accuracy. Using \numb{11} positions of the second, smaller\add{,}
satellite S2~\add{that} we discovered in 2015, we determine a preliminary
orbit, marginally \rem{tilted} \add{inclined} from that of S1~and more eccentric.
Predictions of the relative position of the satellite with
respect to Camilla, critical for planning stellar occultations for
instance, are available to the community through our VO service
\texttt{Miriade}\,\footnote{\href{http://vo.imcce.fr/}{http://vo.imcce.fr/}}
\citep{2008-ACM-Berthier}.
\indent From the visible and near-infrared spectrum of Camilla, we
classify it as an Xk-type asteroid, in the Bus-DeMeo
taxonomy \citep{2009-Icarus-202-DeMeo}. Considering its low albedo, it would be classified as a
P-type in older taxonomic schemes such as Tedesco's \citep{1989-AsteroidsII-Tedesco}.
Using VLT/SPHERE integral-field spectrograph, we measure the
near-infrared spectrum of the largest satellite, S1, and
compare it with Camilla. No significant difference\add{s} are found.
This\add{,} together with its orbital parameters\add{,} argues for a formation of
the satellite by
excavation from impact, re-accumulation of ejecta in orbit, and
circularization by tides.
\indent Using optical lightcurves, profiles from disk-resolved imaging, and
stellar occultation events, we determine the spin-vector coordinates and 3-D
shape of Camilla. The model fits each data set \rem{very} well, and
\add{we find a \add{spherical-}volume-equivalent diameter of
\numb{254\,$\pm$\,36\,km}.
By combining the mass from the dynamics with the volume of the shape
model, we find a density of \numb{1,280\,$\pm$\,130}\,kg$\cdot$m$^{-3}$.
Considering that Camilla's most likely analog material is IDPs, this
implies a macroporosity of \numb{50\,$\pm$\,9}\%, likely too high to be sustained.
By considering a mixture of ice and silicate, the macroporosity
could be in the range
\numb{10--30\%} for a dust-to-ice mass ratio of \numb{1--6}, the denser
material being concentrated toward the center as suggested by the
dynamics of the system.
}
\section*{Acknowledgements}
\indent Based on observations collected at the European Organisation for Astronomical
Research in the Southern Hemisphere under ESO programmes
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=71.C-0669}{071.C-0669} (PI Merline),
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=73.C-0062}{073.C-0062} \&
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=74.C-0052}{074.C-0052} (PI Marchis),
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=88.C-0528}{088.C-0528} (PI Rojo),
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=95.C-0217}{095.C-0217} \&
\href{http://archive.eso.org/wdb/wdb/eso/sched_rep_arc/query?progid=297.C-5034}{297.C-5034} (PI Marsset).
\indent Some of the data presented herein were obtained at the W.M. Keck
Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California and the
National Aeronautics and Space Administration. The Observatory was made
possible by the generous financial support of the W.M. Keck
Foundation.
\indent This research has made use of the Keck Observatory Archive (KOA),
which is operated by the W. M. Keck Observatory and the NASA
Exoplanet Science Institute (NExScI), under contract with the
National Aeronautics and Space Administration.
\indent \add{Some of these observations were acquired
under grants from the National Science Foundation
and NASA to Merline (PI).}
\indent The authors wish to recognize and acknowledge the very significant
cultural role and reverence that the summit of Mauna Kea has always
had within the indigenous Hawaiian community. We are most fortunate
to have the opportunity to conduct observations from this mountain.
\indent Based on observations obtained at the Gemini Observatory, which is operated by
the Association of Universities for Research in Astronomy, Inc., under a
cooperative agreement with the NSF on behalf of the Gemini partnership: the
National Science Foundation (United States), the National Research Council
(Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e
Innovaci{\'o}n Productiva (Argentina), and Minist{\'e}rio da Ci{\^e}ncia,
Tecnologia e Inova{\c c}{\~a}o (Brazil).
\indent \add{We wish to acknowledge the support of NASA
Contract NAS5-26555 and STScI grant GO-05583.01 to
Alex Storrs (PI).}
\indent Visiting Astronomer at the Infrared Telescope Facility, which is operated by
the University of Hawaii under contract NNH14CK55B with the National
Aeronautics and Space Administration.
\indent We thank the AGORA association which administrates the
60\,cm telescope at Les Makes observatory, under a financial agreement
with Paris Observatory. Thanks to A. Peyrot, J.-P. Teng for local
support, and A. Klotz for helping with the robotizing.
\indent \add{Thanks to all the amateurs worldwide who
regularly observe asteroid lightcurves and stellar
occultations. Many co-authors of this study are amateurs who
observed Camilla, and provided crucial data.}
\indent We thank J. {\v D}urech for providing his implementation of
\citet{1996-Icarus-124-Dobrovolskis} method.
\indent The authors acknowledge the use of the Virtual Observatory
tools
\texttt{Miriade}\,\footnote{\href{http://vo.imcce.fr/}{http://vo.imcce.fr/}}
\citep{2008-ACM-Berthier},
\texttt{MP$^3$C}\,\footnote{\href{https://mp3c.oca.eu}{https://mp3c.oca.eu}}
\citep{2017-ACM-Delbo},
\texttt{TOPCAT}\,\footnote{\href{http://www.star.bristol.ac.uk/~mbt/topcat/}{http://www.star.bristol.ac.uk/~mbt/topcat/}},
and
\texttt{STILTS}\,\footnote{\href{http://www.star.bristol.ac.uk/~mbt/stilts/}{http://www.star.bristol.ac.uk/~mbt/stilts/}}
\citep{2005-ASPC-Taylor}.
This research used the facilities of the Canadian Astronomy Data Centre
operated by the National Research Council of Canada with the support of the
Canadian Space Agency \citep{2012-PASP-124-Gwyn}.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
For atmospheric reentry of space vehicles, it is important to estimate
the heat flux at the solid wall of the vehicle. In such hypersonic
flows, the temperature is very large, and the air flow, which is a
mixture of monoatomic and polyatomic gases, is modified by chemical
reactions. The characteristics of the mixture (viscosity and
specific heats) then depend on its temperature (see~\cite{nbk86}).
One way to take into account this variability is to use appropriate
constitutive laws for the air. For instance, quantum mechanics allows
to derive a relation between internal energy and temperature that
accounts for activation of vibrational modes of the molecules (see~\cite{anderson}). When
the temperature is larger, chemical reactions occur, and if the flow
is in chemical equilibrium, empirically tabulated laws can be used to
compute all the thermodynamical quantities (pressure, entropy,
temperature, specific heats) in terms of density and internal energy,
like the one
given in~\cite{anderson,hansen}. These laws give a closure of the
compressible Navier-Stokes equations, that are used for simulations
in the continuous regime, at moderate to low altitudes (see, for
example,~\cite{mykv88}).
In high altitude, the flow is in the rarefied or transitional regime,
and it is described by the
Boltzmann equation of Rarefied Gas Dynamics, also called the Wang-Chang-Uhlenbeck equation in
case of a reacting mixture. This equation is much too complex to be
solved by deterministic methods, and such flows are generally
simulated by the DSMC method~\cite{BS_2017}. However, it is
attractive to derive simplified kinetic models that account for high
temperature effects, in order to obtain alternative and deterministic
solvers{: for such computations, it is necessary to capture dense zones with high temperatures and very rarefied zones with low temperatures}. To the best of our knowledge, the first attempt to introduce non-ideal
constitutive laws into a kinetic model has recently been
published in~\cite{rahimi16}. In this article, the authors define the
constant volume specific heat~$c_v$ as a third-order polynomial
function of the temperature of the gas, and derive a mesoscopic model
based on the moment approach. {A similar approach is
proposed in~\cite{KKA_2019} that gives a correct Prandtl number.}
Simplified Boltzmann models for mixtures
of polyatomic gases have also been proposed in~\cite{andries2002,bisi2016,DESVILLETTES2005219},
however, high temperature effects are not addressed in these references.
In this paper, {our goal is to construct models that are able to
capture macroscopic effects as well as kinetic effects at a
reasonable numerical cost, for an application to reentry flows.} We propose two ways to include high
temperature effects (vibrational modes, chemical reactions) in a
generalized BGK model.
First, we show that vibrational modes can be
taken into account by using a temperature dependent number of degrees
of freedom. This can be used in a BGK model for polyatomic gases, but
we show that the choice of the variable used to describe the internal
energy of the molecules is fundamental here. This model allows us to
simulate a mixture of rarefied polyatomic gases (like the air) with
rotational and vibrational non equilibrium, {with a single
distribution function for the mixture}. As a consequence, we are
able to simulate a polyatomic gas flow with a non-constant specific
heat.
Then we propose a more general BGK model that can be used to describe
a rarefied flow with both vibrational excitation and chemical
reactions, at chemical equilibrium, based on arbitrary constitutive
laws for pressure and temperature. Our BGK model is shown to be consistent
with the corresponding Navier-Stokes model in the small Knudsen number
limit. Finally, the internal energy variable of our BGK model can be
eliminated by the standard reduced distribution
technique~\cite{HH_1968}: this gives a kinetic model for high
temperature polyatomic gases with a computational complexity which is
close to that of a simple monoatomic model.
{To the best of our knowledge, the model proposed in this work is the
first Boltzmann model equation that allows for realistic
equations of state and includes concentration effects in the thermal
flux. We point out that this article is a first step towards a
correct computation of the parietal heat flux: since we use a BGK
model, it is clear that our model does not have a correct Prandtl
number, as usual. This might be solved by using the ES-BGK
approach~\cite{esbgk_poly,Mathiaud2016,Mathiaud2017} to capture the
correct relaxation times for energy and fluxes~\cite{kustova}}.
The outline of our paper is the following. First we recall a standard
BGK model for polyatomic gases in
section~\ref{sec:polyatomicBGK}. Then, in section~\ref{sec:hightemp},
we explain how high temperature effects are taken into account to
define the internal number of degrees of freedom of molecules and
generalized constitutive laws.
A first BGK model is proposed to allow
for vibrational mode excitation with a temperature dependent number of
degrees of freedom in section~\ref{sec:deltaT}. This model is
extended to allow for arbitrary constitutive laws for pressure and
temperature in section~\ref{sec:BGK_EOS}, and this model is also
analyzed by the Chapman-Enskog expansion. {Some features of
our new model are illustrated by a few numerical simulations in
section~\ref{sec:num}.}
\section{Polyatomic BGK model}
\label{sec:polyatomicBGK}
For
standard temperatures, a polyatomic perfect gas can be described by
the mass distribution function $F(t,x,v,\varepsilon)$ that depends on time
$t$, position $x$, velocity $v$, and internal energy $\varepsilon$. The
internal energy is described with a continuous variable, and takes into
account rotational modes. The corresponding number of degrees of
freedom for rotational modes is $\delta$ (see~\cite{BL1975}).
The corresponding macroscopic quantities, the mass density $\rho$, velocity $u$,
and specific internal energy $e$, are defined through the first 5 moments of $F$
with respect to $v$ and $\varepsilon$:
\begin{align}
\rho(t,x) & = \cint{\cint{F}},\label{eq-rho} \\
\rho u (t,x) & = \cint{\cint{v F}}, \label{eq-rhou} \\
\rho e (t,x) & = \cint{\cint{(\frac{1}{2} |v-u|^2 + \varepsilon) F}} \label{eq-rhoe} ,
\end{align}
where
$\cint{\cint{\phi}} = \int_{{\mathbb{R}}^3}\int_0^{+\infty} \phi(v,\varepsilon)\,
dvd\varepsilon$
denotes the integral of any scalar or vector-valued function $\phi$
with respect to $v$ and $\varepsilon$. The specific internal energy takes into
account translational and rotational modes. Other macroscopic
quantities can be derived from these definitions. The temperature
$T$ is such that $e = \frac{3+\delta}{2}RT$, where $R$ is the gas
constant. The pressure is given by the perfect gas equation of state
(EOS) $p=\rho R T$.
For a gas in thermodynamic equilibrium, the distribution function
reaches a Maxwellian state, defined by
\begin{equation} \label{eq-Maxwpoly}
M[F] = \frac{\rho}{(2\pi R T)^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2RT} \right)
\Lambda(\delta)\left(\frac{\varepsilon}{RT}\right)^{\frac{\delta}{2}-1}
\frac{1}{RT}\exp\left( -\frac{\varepsilon}{RT} \right),
\end{equation}
where $\rho$, $u$, and $T$ are defined above. The constant
$\Lambda(\delta)$ is a normalization factor defined by
$\Lambda(\delta) = 1/\Gamma(\frac{\delta}{2})$, so that $M[F]$
has the same 5 moments as $F$ (see above).
The simplest BGK model that can be derived from this description is
the following
\begin{equation}\label{eq-BGKpoly}
\partial_t F + v\cdot \nabla_x F = \frac{1}{\tau}( M[F]-F),
\end{equation}
where $\tau$ is a relaxation time (see below).
The standard Chapman-Enskog expansion shows that this model is
consistent, \correct{with an error which is of second order with respect to the Knudsen number}, with
the following compressible Navier-Stokes equations
\begin{align*}
& \partial_t \rho + \nabla\cdot \rho u = 0 \\
& \partial_t \rho u + \nabla \cdot (\rho u \otimes u) + \nabla p = -\nabla
\cdot \sigma \\
& \partial_t E + \nabla\cdot ((E+p)u) = -\nabla \cdot q - \nabla \cdot
(\sigma u),
\end{align*}
where $\sigma = -\mu (\nabla u + (\nabla u)^T - \frac{2}{3+\delta}
\nabla \cdot u \, Id)$
is the shear stress tensor and $q = -\kappa \nabla T$ is the heat flux. The
transport coefficients $\mu$ and $\kappa$ are linked to the relaxation
time by the relations $\mu = \tau p$ and $\kappa = \mu c_p$, where the
specific heat at constant pressure is $c_p =
\frac{5+\delta}{2}R$. Actually, these relations define the correct
value that has to be given to the relaxation time $\tau$ of~(\ref{eq-BGKpoly}), which is
\begin{equation} \label{eq-deftau}
\tau = \frac{\mu}{p},
\end{equation}
where the viscosity is given by a standard temperature dependent law
like $\mu(T) = \mu_{ref}(\frac{T}{T_{ref}})^{\omega}$
(see~\cite{bird}). This implies that the Prandtl number
${\rm Pr} = \frac{\mu c_p}{\kappa}$ is equal to $1$. This incorrect
result (it should be close to $\frac{5}{7}$ for a diatomic gas, for
instance) is due to the fact that the BGK model contains only one
relaxation time. Instead it would be more relevant to include at least
three relaxation times in the model to allow for various different
time scales (viscous versus thermal diffusion time scale,
translational versus rotational energy relaxation rates). It is
possible to take these different time scales into account by using the
ESBGK polyatomic model (see~\cite{esbgk_poly}), or the Rykov model
(see~\cite{LYXZ_2014} and the references therein). See also multiple relaxation time BGK models developed for polyatomic gases in~\cite{ARS_2017,ARS_2018}. However, in this
work, the derivation of a model for high temperature gases is based on
this simple polyatomic BGK model (with a single relaxation time).
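As an illustration, the relaxation time~(\ref{eq-deftau}) is straightforward to
evaluate once a viscosity law is chosen; the short sketch below (in Python) does
so with indicative nitrogen-like constants that are assumed for the illustration
and are not taken from this work.
\begin{verbatim}
def relaxation_time(rho, T, R=296.8, mu_ref=1.66e-5, T_ref=273.0, omega=0.74):
    """tau = mu(T) / p with mu(T) = mu_ref * (T/T_ref)**omega and p = rho*R*T.
    Default constants are indicative nitrogen-like values (assumed)."""
    mu = mu_ref * (T / T_ref) ** omega
    return mu / (rho * R * T)

# Low density (rarefied regime) gives a large relaxation time
print(relaxation_time(rho=1.0e-4, T=300.0))   # ~2e-6 s
\end{verbatim}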
Note that this model is generally simplified by using the variable $I$ such
that the internal energy of a molecule is $\varepsilon =
I^{\frac{2}{\delta}}$ (see~\cite{esbgk_poly}). Then the corresponding distribution ${\cal F}(t,x,v,I)$ is
defined such that ${\cal F}dxdvdI = Fdxdvd\varepsilon$, which gives ${\cal F}
= I^{\frac{2}{\delta}-1}F$. The macroscopic quantities are defined by
\begin{equation*}
\rho(t,x) = \cint{\cint{{\cal F}}}, \qquad
\rho u (t,x) = \cint{\cint{v {\cal F}}},\qquad
\rho e (t,x) = \cint{\cint{(\frac{1}{2} |v-u|^2 + I^{\frac{2}{\delta}}) {\cal F}}},
\end{equation*}
where now
$\cint{\cint{\phi}} = \int_{{\mathbb{R}}^3}\int_0^{+\infty} \phi(v,I)\,
dvdI$. The corresponding Maxwellian, which is simpler, is
\begin{equation}\label{eq-MaxwpolyI}
{\cal M}[{\cal F}] = \frac{\rho}{(2\pi R T)^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2RT} \right)
\frac{2}{\delta}\Lambda(\delta)\frac{1}{(RT)^{\frac{\delta}{2}}}
\exp\left( -\frac{I^{\frac{2}{\delta}}}{RT} \right).
\end{equation}
The corresponding BGK equation is
\begin{equation}\label{eq-BGKpolyI}
\partial_t {\cal F} + v\cdot \nabla_x {\cal F} = \frac{1}{\tau}( {\cal M}[{\cal F}]-{\cal F}),
\end{equation}
which is equivalent to~\eqref{eq-BGKpoly}.
{Moreover, note that these models can be derived from~\cite{bourgat94}: in \correct{that} paper, the authors first give a Boltzmann collision operator for polyatomic gases deduced from the Borgnakke-Larsen model. In this model, the internal energy variable $\varepsilon$ is described by a variable $I$ such that $I=\sqrt{\varepsilon}$. By using the corresponding Maxwellian, it is easy to derive a single relaxation time BGK model. When this model is written with $\varepsilon$, we exactly get model~\eqref{eq-BGKpoly}.}
{In the same paper~\cite{bourgat94}, the authors propose a second Boltzmann collision operator based on a model for a monoatomic gas in higher dimension, with an internal variable $w$ that lives in a $\delta$-dimension space, where $\delta$ is the number of internal degrees of freedom. The internal energy of the model is $\varepsilon=|w|^2$. This model is written in polar coordinates $w=I\theta$, where $I$ is the norm of $w$ (and hence again the square root of $\varepsilon$), and $\theta$ is the polar angle, and then it is reduced by integration with respect to $\theta$. The authors get a Boltzmann collision operator in which the distribution function is multiplied by a weight function $\phi(I)=I^{\delta -1}$. Again, a BGK model can be derived from this formulation, but it is different from models~\eqref{eq-BGKpoly} and~\eqref{eq-MaxwpolyI}. The resulting model has been extended by several authors to get BGK models for non polytropic gases (see section~\ref{sec:hightemp}). However, in the case of polytropic gases (i.e. constant $\delta$), this model can easily be shown to be equivalent to our model~\eqref{eq-BGKpoly}.
}
\section{High temperature gases}
\label{sec:hightemp}
When the temperature of the gas is larger, new phenomena appear
(vibration, chemical reactions, ionization). For instance, for
dioxygen, at $800$K, the molecules begin to vibrate, and chemical
reactions occur for much larger temperatures (for instance, dissociation of $O_2$
into $O$ starts at $2500$K).
The next sections explain how some of these effects (vibrations and
chemical reactions) can be taken into account in terms of
EOS and number of internal degrees of freedom.
\subsection{Vibrations}
Of course, the definition of the specific internal energy must
account for vibrational energy. A possible way to do so is to
increase the number of internal degrees of freedom $\delta$, that
now accounts for rotational and vibrational modes. However, a
result of quantum mechanics implies that this number of
degrees of freedom is not an integer anymore, and that it is even
not a constant (it is temperature dependent), see the examples
below. Vibrating gases have other properties that make them quite
different from what is described by the standard kinetic theory of
monoatomic gases. For instance, the specific heat at constant
pressure $c_p$ becomes temperature dependent. However, vibrating
gases can still be considered as perfect gases, so that the perfect
EOS $p=\rho R T$ still holds (in fact, such gases are called
thermally perfect gases, see~\cite{anderson}).
Now we give two examples of gases with vibrational excitation, and
we explain how their number of internal degrees of
freedom is defined.
\subsubsection{Example 1: dioxygen}
\label{subsubsec:example1}
At equilibrium, translational $e_{tr}$ and rotational
$e_{rot}$ specific energies can be defined by
\begin{equation*}
e_{tr} = \frac{3}{2}RT \quad \text{ and } \quad e_{rot} = RT.
\end{equation*}
This shows that a molecule of dioxygen has $3$ degrees of freedom for
translation, and $2$ for rotation.
By using quantum mechanics~\cite{anderson}, vibrational specific energy $e_{vib}$ is
found to be
\begin{equation*}
e_{vib} = \frac{T^{vib}_{O_2}/T}{\exp(T^{vib}_{O_2}/T)-1} RT,
\end{equation*}
where $T^{vib}_{O_2}=2256$K is a reference temperature.
The number of ``internal'' degrees of freedom $\delta$, related to
rotation and vibration modes only, is defined such that the total
specific internal energy $e$ is
\begin{equation*}
e = e_{tr}+e_{rot}+e_{vib} = \frac{3+\delta}{2}RT.
\end{equation*}
By combining this relation with the relations above, we find that
$\delta$ is actually temperature dependent, and defined by
\begin{equation*}
\delta(T) = 2 + 2\frac{T^{vib}_{O_2}/T}{\exp(T^{vib}_{O_2}/T)-1}.
\end{equation*}
Accordingly, the specific heat at constant pressure $c_p$, which is
defined by $dh = c_pdT$, where the enthalpy is $h = e +
\frac{p}{\rho}$, can be computed as follows. Since $p=\rho R T$, we
find $h = \frac{5+\delta(T)}{2}RT$, and hence the enthalpy depends on
$T$ only, through a nonlinear relation. This means that $c_p= h'(T)$
is not a constant anymore, while we have $c_p = \frac{5+\delta}{2}R$
without vibrations. Finally, note that the relation that defines the
temperature $T$ through the internal specific energy $e =
\frac{3+\delta(T)}{2}RT $ now is nonlinear (it has to be
inverted numerically to find $T$).
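These relations are easy to evaluate and to invert numerically; a minimal sketch
is given below, where the gas constant of $O_2$ is computed from the universal
gas constant and the molar mass, and the bisection bracket is an assumed
illustration choice.
\begin{verbatim}
import math

T_VIB_O2 = 2256.0         # K (see text)
R_O2 = 8.314 / 0.032      # J kg-1 K-1, specific gas constant of O2

def delta_O2(T):
    """Temperature-dependent number of internal degrees of freedom of O2."""
    x = T_VIB_O2 / T
    return 2.0 + 2.0 * x / math.expm1(x)

def energy(T):
    """Specific internal energy e(T) = (3 + delta(T)) / 2 * R * T."""
    return 0.5 * (3.0 + delta_O2(T)) * R_O2 * T

def temperature(e, lo=200.0, hi=2.0e4):
    """Invert e = energy(T) by bisection (energy is increasing in T)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if energy(mid) < e else (lo, mid)
    return 0.5 * (lo + hi)

print(delta_O2(2000.0))              # > 2: vibration partially excited
print(temperature(energy(2000.0)))   # recovers ~2000 K
\end{verbatim}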
\subsubsection{Example 2: air}
\label{subsubsec:example2}
The air at moderately high temperatures ($T<2500$K) is a
non-reacting mixture of nitrogen $N_2$ and dioxygen $O_2$. To
simplify, assume that their mass concentrations are
$c_{N_2} = 75\%$ and $c_{O_2}=25\%$. These two species are perfect
gases with their own gas constants $R_{N_2}$ and $R_{O_2}$. {The
gas constant $R$ of the mixture can be defined by
$R = c_{N_2}R_{N_2} + c_{O_2}R_{O_2}$ (see~\cite{anderson}).}
The specific internal energy is defined by $e=
c_{O_2}e_{O_2}+c_{N_2}e_{N_2}$. The energy of each species can be
computed like in our first example (see
section~\ref{subsubsec:example1}), and we find:
\begin{equation*}
e_{N_2} = \frac{3+\delta_{N_2}(T)}{2}R_{N_2}T \qquad \text{and}
\qquad
e_{O_2} = \frac{3+\delta_{O_2}(T)}{2}R_{O_2}T,
\end{equation*}
where the number of internal degrees of freedom of each
species are
\begin{equation}\label{eq-deltaO2N2}
\delta_{N_2}(T) = 2 + 2\frac{T^{vib}_{N_2}/T}{\exp(T^{vib}_{N_2}/T)-1}
\qquad \text{and} \qquad
\delta_{O_2}(T) = 2 + 2\frac{T^{vib}_{O_2}/T}{\exp(T^{vib}_{O_2}/T)-1},
\end{equation}
with $T^{vib}_{N_2}=3373$K and $T^{vib}_{O_2}=2256$K. Then the specific
internal energy of the mixture is
\begin{equation*}
\begin{split}
e& =c_{O_2}\frac{3+\delta_{O_2}(T)}{2} R_{O_2}T+c_{N_2}\frac{3+\delta_{N_2}(T)}{2} R_{N_2}T \\
&=\frac32RT+\frac12(c_{O_2}\delta_{O_2}(T)R_{O_2}+c_{N_2}\delta_{N_2}(T)R_{N_2})T
\\
& = \frac{3+\delta(T)}{2}RT
\end{split}
\end{equation*}
with the number of internal degrees of freedom given by
\begin{equation}\label{eq-deltaair}
\begin{split}
\delta(T) & =\frac{c_{O_2}\delta_{O_2}(T) R_{O_2}+c_{N_2}\delta_{N_2}(T)R_{N_2}}{R} \\
& = 2+\frac2R\left(c_{O_2}R_{O_2}\frac{T^{vib}_{O_2}/T}{\exp(T^{vib}_{O_2}/T)-1}+c_{N_2}R_{N_2}\frac{T^{vib}_{N_2}/T}{\exp(T^{vib}_{N_2}/T)-1}\right).
\end{split}
\end{equation}
We show in figure~\ref{fig:delta} the number of internal degrees of
freedom for each species and for the whole mixture. For all gases,
$\delta$ is equal to $2$ below $500$K, which means that only the rotational
modes are excited: each species is a diatomic gas with 2 degrees of freedom of
rotation, and the mixture behaves like a diatomic gas too. Then the number
of degrees of freedom increases with the temperature, and is greater than
\correct{$3$} for $T=3000$K. At this temperature, the number of degrees of
freedom for vibrations is \correct{around $1$}. Note that in addition to this graphical
analysis, it can be analytically proved
that all the $\delta$ computed here are increasing functions of $T$.
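For completeness, formula~(\ref{eq-deltaair}) can be evaluated as in the sketch
below (with standard molar masses for the species gas constants); it reproduces
the behaviour discussed above, with $\delta\simeq 2$ at room temperature and
$\delta>3$ at $3000$K.
\begin{verbatim}
import math

c_N2, c_O2 = 0.75, 0.25
T_vib = {"N2": 3373.0, "O2": 2256.0}               # K
R_sp  = {"N2": 8.314 / 0.028, "O2": 8.314 / 0.032} # J kg-1 K-1
R_mix = c_N2 * R_sp["N2"] + c_O2 * R_sp["O2"]

def dvib(T, Tv):                  # vibrational degrees of freedom of a species
    x = Tv / T
    return 2.0 * x / math.expm1(x)

def delta_air(T):                 # mixture law, eq. (eq-deltaair)
    return 2.0 + (c_O2 * R_sp["O2"] * dvib(T, T_vib["O2"])
                  + c_N2 * R_sp["N2"] * dvib(T, T_vib["N2"])) / R_mix

for T in (300.0, 1000.0, 3000.0):
    print(T, round(delta_air(T), 2))   # ~2.0, ~2.3, ~3.1
\end{verbatim}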
\subsection{Chemical reactions}
\label{subsec:chemical}
When chemical reactions have to be taken into account (for the air,
this starts at $2500$K), the perfect gas EOS still holds for each species,
but the EOS for the reacting mixture is less simple. To avoid the
numerical solving of the Navier-Stokes equations for all the species,
in the case of an equilibrium chemically reacting gas, it is convenient
to use instead a Navier-Stokes model for the mixture (considered as a
single species), for which tabulated EOS $p = p(\rho,e)$ and even
a tabulated temperature law $T=T(\rho,e)$ are used
(see~\cite{anderson}, chapter 11).
More precisely, it can be proved that for a mixture of thermally
perfect gases in chemical equilibrium, with a constant atomic nuclei composition, two state variables, like
$\rho$ and $e$, are sufficient to uniquely define the chemical
composition of the mixture. Let us make precise what this means, with
notations that will be useful in the paper.
For each species of the mixture, numbered with index
$i$:
\begin{itemize}
\item its concentration $c_i$ depends on $\rho$ and $e$ only: $c_i =
c_i(\rho,e)$ ;
\item its pressure $p_i$ satisfies the usual perfect gas law: $p_i =
\rho_i R_i T$, where $R_i$ is the gas constant of the species and $\rho_i = c_i(\rho,e) \rho$, so that $p_i =
p_i(\rho,e)$ ;
\item its specific energy $e_i$ and enthalpy $h_i$ depend on $T$
only: $e_i = e_i(T)$ and $h_i=h_i(T)$,
where $e_i(T) = \frac{3+\delta_i(T)}{2} R_iT+e_i^{f,0}$, with
$e_i^{f,0}$ the energy of formation of the $i$th molecule and $\delta_i(T)$ the number of
activated internal degrees of freedom of the molecule that might
depend on the temperature, see the previous sections ($\delta_i
=0$ for monoatomic molecules).
\end{itemize}
For compressible Navier-Stokes equations for an equilibrium chemically
reacting mixture, these quantities are not necessary. Instead, it is
sufficient to define (with analytic formulas or tables):
\begin{itemize}
\item the total pressure $p = \sum_i p_i(\rho,e)$ so that $p =
p(\rho,e) = \rho R(\rho,e) T$, with $R(\rho,e)= \sum_i
c_i(\rho,e)R_i$ ;
\item the temperature $T$, through the relation $e = \sum_i
c_i(\rho,e)e_i(T)$, so that $T = T(\rho,e)$ ;
\item the total specific enthalpy $h = \sum_i c_ih_i$, so that $h =
h(\rho,e) = e + \frac{p(\rho,e)}{\rho}$.
\end{itemize}
We refer to~\cite{anderson} for details on this subject.
\section{BGK models for high temperature gases}
\subsection{A polyatomic BGK model for a variable number of degrees of freedom}
\label{sec:deltaT}
In this section, we propose an extension of the polyatomic BGK
model~(\ref{eq-BGKpoly}) to take into account temperature dependent
number of internal degrees of freedom, like in examples of
sections~\ref{subsubsec:example1} and~\ref{subsubsec:example2}.
This extension (already obtained in~\cite{hdr}) is quite obvious, since we just replace the constant
$\delta$ in~(\ref{eq-Maxwpoly}) by the temperature dependent
$\delta(T)$. For completeness, this model is given below:
\begin{equation}\label{eq-BGKpolydelta}
\partial_t F + v\cdot \nabla_x F = \frac{1}{\tau}( M[F]-F),
\end{equation}
with
\begin{equation} \label{eq-Maxwpolydelta}
M[F] = \frac{\rho}{(2\pi R T)^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2RT} \right)
\Lambda(\delta(T))\left(\frac{\varepsilon}{RT}\right)^{\frac{\delta(T)}{2}-1}
\frac{1}{RT}\exp\left( -\frac{\varepsilon}{RT} \right).
\end{equation}
The macroscopic quantities are defined
by~(\ref{eq-rho})--(\ref{eq-rhoe}), while the temperature $T$ is
defined by
\begin{equation}\label{eq-eT}
e = \frac{3+\delta(T)}{2}RT.
\end{equation}
Indeed, this implicit relation is invertible if, for instance, $\delta(T)$ is
an increasing function of $T$. This is true, at least for the
examples shown in section~\ref{subsubsec:example2}: it can easily be
shown that equations~(\ref{eq-deltaO2N2}) and~(\ref{eq-deltaair})
define increasing functions of $T$.
Finally, the relaxation time $\tau$ is given by~(\ref{eq-deftau})
with $p=\rho R T$.
The same model has been proposed independently in~\cite{KKA_2019}, and
extended to an ES-BGK version to have correct transport coefficients.
\correct{Note that in~\cite{KKA_2019}, the temperature dependent
number of degrees of freedom $\delta(T)$ is constructed through a
given law for $c_v$ (the specific heat at constant volume), which is different from our approach.}
{
Note that model~(\ref{eq-MaxwpolyI})--(\ref{eq-BGKpolyI}) cannot be used here. Indeed, the change of variables $I = \varepsilon^{\frac{\delta(T)}{2}}$ now depends on time and space through $T$, and the corresponding model written with variable $I$ contains many more terms than~(\ref{eq-MaxwpolyI})--(\ref{eq-BGKpolyI}).}
{Finally, we mention the alternate approach derived from the second polyatomic Boltzmann operator of~\cite{bourgat94}: in~\cite{ARS_2017, BRS_2018}, a weight function is used to fit any given non polytropic gas law. This requires to invert a Laplace transform, and is different from the approach presented here.
}
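A quick numerical sanity check of the Maxwellian~(\ref{eq-Maxwpolydelta}) for a
non-integer $\delta(T)$ is sketched below: on a simple quadrature grid (an
arbitrary discretization chosen for this illustration), the internal-energy
factor integrates to $1$ and carries the mean energy $\frac{\delta}{2}RT$, as
required for $M[F]$ to have the same moments as $F$.
\begin{verbatim}
import numpy as np
from math import gamma

R, T, delta = 259.8, 2000.0, 3.08    # illustrative O2-like values (assumed)
theta = R * T
eps, d_eps = np.linspace(1e-6, 60.0 * theta, 400_000, retstep=True)

# Internal-energy factor of the Maxwellian, Lambda(delta) = 1/Gamma(delta/2)
f_int = (eps / theta) ** (delta / 2.0 - 1.0) * np.exp(-eps / theta) \
        / (gamma(delta / 2.0) * theta)

print(f_int.sum() * d_eps)                  # ~1.0 : normalization
print((eps * f_int).sum() * d_eps / theta)  # ~delta/2 = 1.54
\end{verbatim}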
\subsection{A more general BGK model for arbitrary constitutive laws}
\label{sec:BGK_EOS}
In this section, we now want to extend the polyatomic BGK
model~(\ref{eq-BGKpoly}) so as to be consistent with arbitrary
constitutive laws $p = p(\rho,e)$ and $T=T(\rho,e)$ that can be used for
an equilibrium chemically reacting gas (see
section~\ref{subsec:chemical}).
We define the gas constant of the mixture by
\begin{equation} \label{eq-Rrhoe}
R(\rho,e) = \frac{p(\rho,e)}{\rho T(\rho,e)}
\end{equation}
so that the EOS of perfect gases $p(\rho,e) = \rho R(\rho,e) T(\rho,e)$
holds (note that a definition of $R$ from the concentrations and the gas
constant of each species can also be used, see section~\ref{subsec:chemical}). We also
note $\delta(\rho,e)$ the number of internal degrees of freedom
defined such that $e =
\frac{3+\delta(\rho,e)}{2}R(\rho,e)T(\rho,e)$.
Our BGK model is obtained by using the same approach as in
section~\ref{sec:deltaT}: we replace $R$ and $\delta$
in~\eqref{eq-Maxwpoly}--\eqref{eq-BGKpoly}
by their non constant values $R(\rho,e)$ and $\delta(\rho,e)$, so that
our model is
\begin{equation}\label{eq-BGK_EOS}
\partial_t F + v\cdot \nabla_x F = \frac{1}{\tau(\rho,e)}( M[F]-F),
\end{equation}
with
\begin{equation} \label{eq-Maxwchimie}
M[F] = \frac{\rho}{(2\pi \theta(\rho,e))^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2\theta(\rho,e)} \right)
\Lambda(\delta(\rho,e))\left(\frac{\varepsilon}{\theta(\rho,e)}\right)^{\frac{\delta(\rho,e)}{2}-1}
\frac{1}{\theta(\rho,e)}\exp\left( -\frac{\varepsilon}{\theta(\rho,e)} \right),
\end{equation}
where the macroscopic quantities are defined by
\begin{equation} \label{eq-moments_chimie}
\rho(t,x) = \cint{\cint{F}}, \qquad
\rho u (t,x) = \cint{\cint{v F}}, \qquad
\rho e (t,x) = \cint{\cint{(\frac{1}{2} |v-u|^2 + \varepsilon) F}},
\end{equation}
the variable $\theta(\rho,e)$ is
\begin{equation} \label{eq-theta}
\theta(\rho,e) = R(\rho,e)T(\rho,e),
\end{equation}
the number of internal degrees of freedom is
\begin{equation} \label{eq-deltarhoe}
\delta(\rho,e) = \frac{2e}{R(\rho,e)T(\rho,e)} -3.
\end{equation}
and the relaxation time is
\begin{equation} \label{eq-tauchimie}
\tau(\rho,e) = \frac{\mu(T(\rho,e))}{p(\rho,e)},
\end{equation}
while $p(\rho,e)$ and $T(\rho,e)$ are given
by analytic formulas or numerical tables.
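In practice, once the laws $p(\rho,e)$ and $T(\rho,e)$ are available (as formulas
or interpolated tables), the closure~(\ref{eq-Rrhoe})--(\ref{eq-tauchimie})
amounts to a few lines of code. The sketch below illustrates this; the
perfect-gas, constant-$\delta$ laws and the viscosity constants used in the
example are only a sanity check with assumed values, not the tabulated laws of
an actual reacting mixture.
\begin{verbatim}
def bgk_closure(p_of, T_of, mu_of):
    """Build R, theta, delta and tau from user-supplied laws p(rho, e),
    T(rho, e) and mu(T) (analytic formulas or interpolated tables)."""
    def closure(rho, e):
        p, T = p_of(rho, e), T_of(rho, e)
        R     = p / (rho * T)              # eq. (eq-Rrhoe)
        theta = R * T                      # eq. (eq-theta)
        delta = 2.0 * e / theta - 3.0      # eq. (eq-deltarhoe)
        tau   = mu_of(T) / p               # eq. (eq-tauchimie)
        return p, T, theta, delta, tau
    return closure

# Sanity check with perfect-gas, constant-delta laws (assumed constants)
R0, delta0 = 287.0, 2.0
chk = bgk_closure(lambda rho, e: rho * 2.0 * e / (3.0 + delta0),   # p = rho*R*T
                  lambda rho, e: 2.0 * e / ((3.0 + delta0) * R0),  # T(rho, e)
                  lambda T: 1.716e-5 * (T / 273.0) ** 0.77)        # mu(T)
print(chk(1.0, 2.0e5))   # delta comes back as delta0 = 2
\end{verbatim}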
\begin{remark} \label{rem:compat_modeles}
This model is more general than our previous
model~\eqref{eq-BGKpolydelta}--\eqref{eq-Maxwpolydelta} defined to
account for vibrations. In other
words, model~\eqref{eq-BGKpolydelta}--\eqref{eq-Maxwpolydelta} can
be written under the previous form. This is explained below.
First, relation~\eqref{eq-eT} defines the temperature $T$ as a function of
$e$, which can be written $T=T(\rho,e)$. Then, the perfect gas
EOS $p=\rho R T(\rho,e)$ gives $p=p(\rho,e)$.
Then, by definition of $T$, the number of internal degrees of freedom,
given by analytic laws~(\ref{eq-deltaO2N2}) or~(\ref{eq-deltaair}) for
instance, satisfies~\eqref{eq-eT}, and hence can be written $
\delta(T) = 2\frac{e}{RT} - 3 $,
which is exactly~(\ref{eq-deltarhoe}). Moreover, the
relaxation time $\tau$ given by~(\ref{eq-deftau}) is compatible
with definition~(\ref{eq-tauchimie}). Finally, the Maxwellian defined
by~(\ref{eq-Maxwpolydelta}) is clearly compatible with
definition~(\ref{eq-Maxwchimie}).
Consequently, the analysis given in the next sections will be made
with this more general
model~(\ref{eq-BGK_EOS})--(\ref{eq-tauchimie}) only.
\end{remark}
\subsection{Compressible Navier-Stokes asymptotics}
\label{subsec:CE}
In this section, we prove the following formal result.
\begin{proposition} \label{prop:CE}
The moments of $F$, solution of the BGK
model~\eqref{eq-BGK_EOS}--\eqref{eq-tauchimie}, satisfy the
following Navier-Stokes equations, up to $O({\rm Kn}^2)$:
\begin{equation} \label{eq:NS_chimie}
\begin{split}
& \partial_t \rho + \nabla \cdot \rho u = 0, \\
& \partial_t \rho u + \nabla\cdot (\rho u\otimes u) + \nabla p = -\nabla
\cdot \sigma , \\
& \partial_t E + \nabla \cdot ((E+p)u) = -\nabla\cdot q -
\nabla\cdot(\sigma u) ,
\end{split}
\end{equation}
where ${\rm Kn}$ is the Knudsen number (defined below), $E$ is the total
energy density defined by $E =
\frac{1}{2} \rho |u|^2 + \rho e$, and $\sigma$ and $q$ are the shear stress
tensor and heat flux vector defined by
\begin{equation} \label{eq-sigmaq}
\begin{split}
& \sigma = - \mu \left(\nabla u + (\nabla u)^T
-{\cal C} \nabla \cdot u \, {\it Id}\right) ,\\
& q = - \mu\nabla h,
\end{split}
\end{equation}
where $h= e + \frac{p(\rho,e)}{\rho}$ is the
enthalpy, and ${\cal C} =
\frac{\rho^2}{p(\rho,e)} \partial_{\rho}(\frac{p(\rho,e)}{\rho})
+ \partial_e(\frac{p(\rho,e)}{\rho})$.
\end{proposition}
Note that this result is consistent with the Navier-Stokes equations
obtained for non-reacting gases. For instance, in the case of
a thermally perfect gas, i.e., when the enthalpy depends only on the
temperature (see~\cite{anderson}), we find that the
heat flux is $q = - \kappa\nabla T(\rho,e)$, where the heat transfer
coefficient is $\kappa = \mu c_p$, with the heat capacity at constant
pressure $c_p = h'(T)$. In this case, the Prandtl number, defined
by ${\rm Pr} = \frac{\mu c_p}{\kappa}$, is 1, like in usual BGK
models.
Moreover, this result gives a volume viscosity (also called second
coefficient of viscosity or bulk viscosity) which is $\omega = \mu(\frac{2}{3}
- {\cal C})$. In the case of a gas with a constant $\delta$, like in a non
vibrating gas, this gives ${\cal C} = \frac{2}{3 + \delta}$, and hence
$\omega = \frac{2\delta}{3(\delta +3)}\mu$. For a monoatomic gas,
$\delta = 0$, and we find the usual result $\omega =0$.
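The value of ${\cal C}$ for a constant $\delta$ can be checked directly from its
definition: with $p = \rho R T$ and $e = \frac{3+\delta}{2}RT$, one has
$\frac{p}{\rho} = RT = \frac{2}{3+\delta}\, e$, which does not depend on $\rho$,
so that
\begin{equation*}
{\cal C} = \frac{\rho^2}{p}\,\partial_{\rho}\Big(\frac{p}{\rho}\Big)
+ \partial_e\Big(\frac{p}{\rho}\Big)
= 0 + \frac{2}{3+\delta} = \frac{2}{3+\delta}.
\end{equation*}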
This result is proved by using the standard Chapman-Enskog
expansion. The main steps of this proof are given in
sections~\ref{subsubsec:ndf} to~\ref{subsubsec:ns}, while some
technical details are given in appendix~\ref{app:gaussian}.
\subsubsection{Comments on this model}
\label{subsubsec:comments}
{For reacting gases, our model is consistent with the fact that the
energy flux accounts for energy transfer by diffusion of chemical
species. Indeed, if we assume that our constitutive laws satisfy the
relations given in section~\ref{subsec:chemical}, then
the enthalpy
$h$ that appears in the heat flux
in~\eqref{eq-sigmaq} is also $h=\sum_i c_i h_i$. Since $h_i$ is a function
of $T$ only, we have
\begin{align*}
q & = - \mu\nabla h = - \mu \sum_i c_i\nabla h_i
-\mu \sum_i h_i\nabla c_i \\
& = - \mu c_p\nabla T -\mu \sum_i h_i\nabla c_i,
\end{align*}
where $c_p = \sum_ic_i h'_i(T)$.}
{Some standard compressible Navier-Stokes solvers for reacting gases in
chemical equilibrium use the following heat flux
\begin{equation*}
q = - \kappa \nabla T + \sum_i \rho c_i U_i h_i,
\end{equation*}
in which the diffusion velocity $U_i$ can be modeled by the Fick law $\rho c_i
U_i=-\rho D_i \nabla c_i$ (see~\cite{anderson}), where $D_i$ is the
diffusion coefficient of the $i$th species. }
{Our heat flux can indeed be written in the same form, with
$\kappa = \mu c_p$, and $D_i = \mu / \rho$. Consequently, the Prandtl
number ${\rm Pr}= \mu c_p/\kappa$ and the Schmidt numbers
$S_i = \mu / \rho D_i$ are all equal to 1, which is a consequence of
the single relaxation time of our model. Usually the Schmidt and Prandtl
numbers are very close, and hence recovering a correct Prandtl number
with an ESBGK-like approach should also give more accurate Schmidt
numbers.}
\correct{However, note that the compressible Navier-Stokes model with heat flux
given by the formula above leads to a violation of the second law
of thermodynamics. Indeed, the classical theory of non-equilibrium
thermodynamics states that the heat flux can only be given by a
temperature gradient, so that the physical entropy production due to
the heat flux ($-\frac{q}{T^2}\cdot \nabla T$) is non negative
(see~\cite{Struchtrup_moments}). Here, the heat flux depends on
$c_i$, and hence on $\rho$. This implies that $q$ contains a
$\nabla\rho$ term that will induce a $\nabla\rho\cdot \nabla T$ term,
which has an undefined sign in the entropy production. This is clearly
in contradiction with the second law.}
\correct{This drawback} is consistent with
the fact that we are not able to prove an H-theorem for our
model. But we believe our model is still interesting, since in its
hydrodynamic limit, it is consistent with compressible
Navier-Stokes models that are used for atmospheric
reentry. These models are probably not compatible with the second
principle either, due to some terms in the thermal flux that are usually
neglected.
\correct{Another} drawback of this model is its physical inconsistency at
equilibrium, as can be seen with the following example. Consider
a mixture of two inert gases and suppose they are at collisional
equilibrium with each other: then the equilibrium distribution is
the sum of two Maxwellian distributions with different molar masses,
so that it cannot be reduced to a single Maxwellian distribution. On
the contrary, our model, which describes the mixture by a single
distribution, will necessarily give a single Maxwellian at
equilibrium. In the case of an air flow, the difference in molar mass
of nitrogen and dioxygen is small (around 12\%), and our single
Maxwellian should not be very different from the exact equilibrium.
Of course, the same problem occurs with reacting gases at
equilibrium, except if the concentrations of the products of chemical
reactions (like O, NO, etc.) are small enough.
\subsubsection{Non-dimensional form}
\label{subsubsec:ndf}
Now we start the proof of the result given in proposition~\ref{prop:CE}.
We choose a characteristic length $x_*$, mass density $\rho_*$, and
energy $e_*$. This induces characteristic values for
pressure $p_* =\rho_*e_*$, temperature $T_* = T(\rho_*,e_*)$,
molecular and bulk
velocities $v_* = u_* = \sqrt{e_*}$, time $t_* = x_*/v_*$, internal energy $\varepsilon_* =
e_*$, viscosity $\mu_* = \mu(T_*)$, relaxation time $\tau_* = \mu_*/p_*$, and distribution
$F_* = \rho_*/e_*^{5/2}$.
By using the non-dimensional variables $w' = w/w_*$ (where $w$ stands for any
variables of the problem),
model~\eqref{eq-BGK_EOS}--\eqref{eq-tauchimie} can be written
\begin{equation}\label{eq-BGKpolydelta_adim}
\partial_{t'} F' + v'\cdot \nabla_{x'} F' = \frac{1}{{\rm Kn}\, \tau'(\rho',e')}( M'[F']-F'),
\end{equation}
with
\begin{equation} \label{eq-Maxwchimie_adim}
M'[F'] = \frac{\rho'}{(2\pi \theta')^{\frac{3}{2}}}
\exp\left( - \frac{|v'-u'|^2}{2\theta'} \right)
\Lambda(\delta')\left(\frac{\varepsilon'}{\theta'}\right)^{\frac{\delta'}{2}-1}
\frac{1}{\theta'}\exp\left( -\frac{\varepsilon'}{\theta'} \right),
\end{equation}
where the macroscopic quantities are defined by
\begin{equation} \label{eq-moments_chimie_adim}
\rho'(t',x') = \cint{\cint{F'}}, \qquad
\rho' u' (t',x') = \cint{\cint{v' F'}}, \qquad
\rho' e' (t',x') = \cint{\cint{(\frac{1}{2} |v'-u'|^2 + \varepsilon') F'}},
\end{equation}
the variable $\theta'$ is
\begin{equation} \label{eq-theta_adim}
\theta' = R'T'
\end{equation}
the number of internal degrees of freedom is
\begin{equation} \label{eq-deltarhoe_adim}
\delta' = \delta = \frac{2e'}{R'T'} -3,
\end{equation}
and the relaxation time is
\begin{equation} \label{eq-tauchimie_adim}
\tau' = \frac{\mu'}{p'},
\end{equation}
while $p' = p(\rho_*\rho',e_*e')/\rho_*e_*$,
$T' = T(\rho_*\rho',e_*e')/T_*$, $R' = p'/\rho'T'$, and $\mu' =
\mu(T(\rho,e))/\mu_*$. Finally, the Knudsen number ${\rm Kn}$ that appears in~(\ref{eq-BGKpolydelta_adim})
is defined by
\begin{equation} \label{eq-Kn}
{\rm Kn} = \frac{\tau_*}{t_*} = \frac{\lambda_*}{x_*},
\end{equation}
where $\lambda_* = \tau_*v_*$ can be viewed as the mean free path.
Note that, to simplify the notations, the dependence of ($p'$,
$\theta'$, $R'$, $\delta'$, $\tau'$, $T'$) on $\rho'$ and $e'$ is
not made explicit anymore in the previous expressions. Moreover, in
the sequel, the primes will be dropped.
\subsubsection{Conservation laws}
\label{subsubsec:conslaw}
The conservation laws induced by the non-dimensional BGK
model~(\ref{eq-BGKpolydelta_adim}) are obtained by
multiplying~(\ref{eq-BGKpolydelta_adim}) by $1$, $v$, and $(\frac{1}{2}
|v|^2+ \varepsilon)$, and then by integrating it with respect to $v$ and
$\varepsilon$. By using the Gaussian integrals given in
appendix~\ref{app:gaussian}, we get
\begin{equation}\label{eq-cons}
\begin{split}
& \partial_t \rho + \nabla\cdot \rho u = 0, \\
& \partial_t \rho u + \nabla \cdot (\rho u\otimes u + \Sigma(F)) = 0, \\
& \partial_t E + \nabla \cdot (Eu + \Sigma(F) u + q(F)) = 0,
\end{split}
\end{equation}
where the stress tensor $\Sigma(F)$ and the heat flux vector $q(F)$
are defined by
\begin{align}
& \Sigma(F) = \cint{\cint{(v-u)\otimes (v-u)F}}, \label{eq-SigmaF} \\
& q(F) = \cint{\cint{(\frac{1}{2} |v-u|^2+\varepsilon)(v-u)F}} \label{eq-qF} .
\end{align}
\subsubsection{Euler equations}
\label{subsubsec:euler}
The Euler equations of compressible gas dynamics can be obtained as
follows. Equation~(\ref{eq-BGKpolydelta_adim}) implies the first order
expansion $F =
M[F]+O({\rm Kn})$, and hence $\Sigma(F) = \Sigma(M[F]) + O({\rm Kn})$ and $q(F)
= q(M[F])+O({\rm Kn})$. Using Gaussian integrals given in
appendix~\ref{app:gaussian} gives
\begin{equation*}
\Sigma(M[F]) = p {\it Id}, \quad \text{ and } \quad
q(M[F]) = 0.
\end{equation*}
Consequently, the conservation laws~(\ref{eq-cons}) yield
\begin{equation*}
\begin{split}
& \partial_t \rho + \nabla\cdot \rho u = 0, \\
& \partial_t \rho u + \nabla \cdot (\rho u\otimes u ) + \nabla p = O({\rm Kn}), \\
& \partial_t E + \nabla \cdot( (E+p)u) = O({\rm Kn}),
\end{split}
\end{equation*}
which are the Euler equations of compressible gas dynamics, up to
$O({\rm Kn})$ terms, with the given EOS $p=p(\rho,e)$.
For the following, it is useful to rewrite these equations as
evolution equations for non-conservatives variables $\rho$, $u$, and
$\theta$. After some algebra, we get
\begin{equation}\label{eq-cons-theta}
\begin{split}
& \partial_t \rho + u \cdot\nabla \rho = - \rho \nabla \cdot u, \\
& \partial_t u + (u \cdot \nabla) u = -\frac{1}{\rho}\nabla p + O({\rm Kn}), \\
& \partial_t \theta + u\cdot \nabla \theta = -\theta {\cal C} \nabla \cdot u + O({\rm Kn}),
\end{split}
\end{equation}
where ${\cal C}$ is given by
\begin{equation} \label{eq-C}
{\cal C} = \frac{\rho}{\theta}\partial_{\rho} \theta + \partial_e \theta.
\end{equation}
\subsubsection{Navier-Stokes equation}
\label{subsubsec:ns}
Navier-Stokes equations are obtained by using the higher order expansion $F =
M[F]+ {\rm Kn} \, G $. Introducing this expansion
in~(\ref{eq-SigmaF}) and~(\ref{eq-qF}) gives
\begin{equation*}
\Sigma(F) = p {\it Id} + {\rm Kn}\, \Sigma(G), \quad \text{ and } \quad
q(F) = {\rm Kn} \, q(G).
\end{equation*}
Then we have to approximate $\Sigma(G)$ and $q(G)$ up to
$O({\rm Kn})$. This is done by using the expansion of $F$
and~(\ref{eq-BGKpolydelta_adim}) to get
\begin{equation*}
G = -\tau(\partial_t M[F] + v \cdot \nabla_x M[F]) + O({\rm Kn}).
\end{equation*}
This gives the following approximations
\begin{equation} \label{eq-Sigmaq_M}
\begin{split}
& \Sigma(G) = -\tau \cint{\cint{(v-u)\otimes (v-u)(\partial_t M[F] + v \cdot \nabla_x M[F])}} +
O({\rm Kn}) , \\
& q(G) = -\tau \cint{\cint{(\frac{1}{2} |v-u|^2+\varepsilon)(v-u)(\partial_t M[F] + v \cdot \nabla_x M[F])}} + O({\rm Kn}).
\end{split}
\end{equation}
Now, we have to make some long computations to reduce
these expressions to those given in~(\ref{eq-sigmaq}). We start with
the stress tensor $\Sigma(G)$. First, note that the Maxwellian $M[F]$ given
by~(\ref{eq-Maxwchimie_adim}) can be separated into $M[F] = M_{tr}[F]M_{int}[F]$, with
\begin{equation*}
M_{tr}[F] = \frac{\rho}{(2\pi \theta)^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2\theta} \right)
, \quad \text{ and }
\quad M_{int}[F] = \Lambda(\delta)\left(\frac{\varepsilon}{\theta}\right)^{\frac{\delta}{2}-1}
\frac{1}{\theta}\exp\left( -\frac{\varepsilon}{\theta} \right).
\end{equation*}
It is useful to introduce the notations $\cint{\phi}_v =
\int_{{\mathbb{R}}^3}\phi(v)\, dv$ and $\cint{\psi}_{\varepsilon} =
\int_0^{+\infty}\psi(\varepsilon)\, d\varepsilon$ for any velocity (resp. energy)
dependent function $\phi$ (resp. $\psi$).
Then it can easily be seen that
\begin{equation*}
\begin{split}
\cint{M_{int}[F]}_{\varepsilon} = 1, \quad \cint{\partial_t M_{int}[F] }_{\varepsilon} =
0,
\quad \cint{\nabla_x M_{int}[F] }_{\varepsilon} = 0.
\end{split}
\end{equation*}
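For instance, the first of these identities can be checked directly (here we only use the fact that $\Lambda(\delta)$ normalizes the internal-energy part of the Maxwellian): with the change of variables $x = \varepsilon/\theta$,
\begin{equation*}
\cint{M_{int}[F]}_{\varepsilon}
= \Lambda(\delta) \int_0^{+\infty} \left(\frac{\varepsilon}{\theta}\right)^{\frac{\delta}{2}-1}
\frac{1}{\theta}\exp\left(-\frac{\varepsilon}{\theta}\right) d\varepsilon
= \Lambda(\delta) \int_0^{+\infty} x^{\frac{\delta}{2}-1} e^{-x}\, dx
= \Lambda(\delta)\, \Gamma\left(\frac{\delta}{2}\right),
\end{equation*}
which equals $1$ as soon as $\Lambda(\delta)$ is the corresponding normalization constant, namely $\Lambda(\delta) = 1/\Gamma(\delta/2)$. The other two identities then follow by differentiating $\cint{M_{int}[F]}_{\varepsilon} = 1$ with respect to $t$ and $x$.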
This implies that $\Sigma(G)$ reduces to
\begin{equation} \label{eq-SigmaGMtr}
\Sigma(G) = -\tau \cint{(v-u)\otimes (v-u)(\partial_t M_{tr}[F] + v \cdot \nabla_x M_{tr}[F])}_v +
O({\rm Kn}).
\end{equation}
Now it is standard to write $\partial_t M_{tr}[F]$ and $\nabla_x M_{tr}[F]$ as
functions of derivatives of $\rho$, $u$, and $\theta$, and then to use
Euler equations~(\ref{eq-cons-theta}) to write time derivatives as
functions of the space derivatives only. After some algebra, we get
\begin{equation*}
\partial_t M_{tr}[F] + v \cdot \nabla_x M_{tr}[F]
= \frac{\rho}{\theta^{\frac{3}{2}}}M_0(V)\left( A(V) \cdot
\frac{\nabla_x \theta}{\sqrt{\theta}}
+ B(V):\nabla_x u \right) + O({\rm Kn}),
\end{equation*}
where
\begin{equation*}
\begin{split}
& V = \frac{v-u}{\sqrt{\theta}}, \qquad M_0(V) =
\frac{1}{(2\pi)^{\frac32}}\exp(-\frac{|V|^2}{2}) , \\
& A(V) = \left(\frac{|V|^2}{2}- \frac52\right)V, \qquad
B(V) =
V\otimes V
- \left(
\left( \frac{|V|^2}{2}- \frac32\right) {\cal C} + 1
\right) {\it Id}.
\end{split}
\end{equation*}
Then, we introduce the previous relations in~(\ref{eq-SigmaGMtr}) to
get
\begin{equation*}
\Sigma_{ij}(G) = -\tau \rho \theta \cint{V_iV_j B(V)M_0}_V
\nabla_{x_j} u_i + O({\rm Kn}),
\end{equation*}
where we have used the change of variables $v\mapsto V$ in the
integral (the term with $A(V)$ vanishes due to the parity of
$M_0$). Then standard Gaussian integrals (see appendix~\ref{app:gaussian}) give
\begin{equation*}
\Sigma(G) = -
\tau \rho \theta \left(\nabla u + (\nabla u)^T
-{\cal C} \nabla \cdot u \, Id\right) + O({\rm Kn}),
\end{equation*}
which is the announced result, in a non-dimensional form.
For the heat flux, we use the same technique to reduce $q(G)$ as given
in~(\ref{eq-Sigmaq_M}) to
\begin{equation*}
\begin{split}
q_i(G) & =
-\tau \cint{(\frac{1}{2} |v-u|^2)(v_i-u_i)(\partial_t M_{tr}[F] + v_j \partial_{x_j}
M_{tr}[F])}_v \cint{M_{int}[F]}_{\varepsilon} \\
& \quad -\tau \cint{(v_i-u_i)(\partial_t M_{tr}[F] + v_j \partial_{x_j}
M_{tr}[F])}_v \cint{\varepsilon M_{int}[F]}_{\varepsilon} \\
& \quad - \tau \cint{(v_i-u_i)M_{tr}[F] v_j}_v \cint{\varepsilon \partial_{x_j} M_{int}[F]}_{\varepsilon} \\
& = - \tau \cint{\frac{1}{2} |V|^2V_i A_{j}M_0}_V\partial_{x_j}\theta
- \tau \cint{V_i A_{j}M_0}_V\frac{\delta}{2}\partial_{x_j}\theta
-\tau \rho \theta\cint{V_iV_jM_0}_V \partial_{x_j} (\frac{\delta}{2}\theta),
\end{split}
\end{equation*}
where we have used the relation $\cint{\varepsilon M_{int}[F]}_{\varepsilon} = \frac{\delta}{2}\theta$.
Using again Gaussian integrals, we get
\begin{equation*}
q(G) = -\tau \rho \theta \nabla h + O({\rm Kn}),
\end{equation*}
where $h=\frac{5+\delta}{2}\theta$ is indeed the enthalpy, since
definitions~(\ref{eq-theta_adim}) and~(\ref{eq-deltarhoe_adim}) imply
$h = e + p/\rho$.
To summarize, we have shown that the stress tensor and heat flux in
conservation laws~(\ref{eq-cons}) are
\begin{equation*}
\begin{split}
& \Sigma(F) = p {\it Id} - {\rm Kn} \tau \rho \theta \left(\nabla u + (\nabla u)^T
-{\cal C} \nabla \cdot u \, Id\right) + O({\rm Kn}^2)\\
& q(F) = -{\rm Kn} \tau \rho \theta \nabla h + O({\rm Kn}^2).
\end{split}
\end{equation*}
Now, we can go back to the dimensional variables, and we find
\begin{equation*}
\begin{split}
& \Sigma(F) = p(\rho,e) {\it Id} - \mu(T(\rho,e)) \left(\nabla u + (\nabla u)^T
-{\cal C} \nabla \cdot u \, Id\right) + O({\rm Kn}^2)\\
& q(F) = - \mu(T(\rho,e)) \nabla h(\rho,e) + O({\rm Kn}^2),
\end{split}
\end{equation*}
where $h(\rho,e) = e + \frac{p(\rho,e)}{\rho}$ is the enthalpy, and ${\cal C} = \frac{\rho^2}{p(\rho,e)} \partial_{\rho}(p(\rho,e)/\rho)
+ \partial_e(p(\rho,e)/\rho)$. This concludes the proof of the result
given at the beginning of this section.
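Note that, since the same relaxation time $\tau$ multiplies both the viscous stress and the heat flux, the model inherits the usual BGK limitation of a unit Prandtl number: whenever the enthalpy can be written as a function of the temperature alone, $q(F) \approx -\mu\, h'(T)\,\nabla T$, so that the thermal conductivity is $\kappa = \mu\, c_p$ and
\begin{equation*}
{\rm Pr} = \frac{\mu c_p}{\kappa} = 1.
\end{equation*}
This is consistent with the numerical comparisons of section~\ref{sec:num}, where the Navier-Stokes solver is run with the same viscosity and Prandtl number as the BGK model, and it motivates the ES-BGK extension mentioned in the conclusion.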
\subsection{Entropy}
Here, we prove that our model~\eqref{eq-BGK_EOS} satisfies a
local entropy dissipation property.
\begin{proposition}
Let~$F$ be the solution of
equation~\eqref{eq-BGK_EOS}--\eqref{eq-Maxwchimie}. Then the following inequality is satisfied:
\begin{equation}
\cint{\cint{ (M[F]-F) \ln
\left( \frac{2}{\delta} \varepsilon^{1-\frac \delta 2 } F\right) }} \leq 0 \, .
\label{eq:second_principe}
\end{equation}
\end{proposition}
\begin{proof}
The left-hand side can be decomposed into
$$
\cint{\cint{ (M[F]-F) \ln
\left( \frac{2}{\delta} \varepsilon^{1-\frac \delta 2 } F\right) }}
= \cint{\cint{ (M[F]-F) \ln \left( \frac F{M[F]} \right) }}
+\cint{\cint{ (M[F]-F) \ln
\left( \frac{2}{\delta} \varepsilon^{1-\frac \delta 2 } M[F]\right) }} \, .\\$$
The first term on the right-hand side is non-positive because the logarithm is non-decreasing: $(M[F]-F)$ and $\ln\left(F/M[F]\right)$ always have opposite signs.
The second term vanishes since $M[F]$ and $F$ have the same first 5
moments:
\begin{align*}
\cint{\cint{ (M[F]-F) \ln
\left( \frac{2}{\delta} \varepsilon^{1-\frac \delta 2 } M[F]\right) }}
&= \cint{\cint{ (M[F]-F) }} \ln ( c(\delta,\rho,\theta)) \\
& \quad - \frac 1\theta \cint{\cint{ (M[F]-F) \left( \frac{|v-u|^2}{2} +
\varepsilon \right) }}\\
&=0,
\end{align*}
with~$c(\delta,\rho,\theta) = \frac 2 \delta \frac{\rho \Lambda(\delta)}{\sqrt{2\pi \theta}^3\theta^{\delta/2}}$, which depends neither on~$v$ nor on~$\varepsilon$.
\end{proof}
\begin{remark}
This result does not imply the dissipation of a
global entropy, except, for example, if~$\delta$ is constant.
In such a case, we can define the entropy $H(f) = \cint{\cint{h(F)}}$,
where $h(F) = F\ln
\left(\frac{2}{\delta}\varepsilon^{1-\frac{\delta}{2}}F\right) - F$, and we have
\begin{equation*}
\begin{split}
\partial_t H(F) + \nabla \cdot \cint{\cint{v h(F)}} & = \cint{\cint{\partial_t h(F) +
v \cdot \nabla_x h(F)}} \\
& = \cint{\cint{ (\partial_t F + v \cdot \nabla_x F)h'(F)}} \\
& = \frac{1}{\tau}\cint{\cint{(M[F]-F)h'(F)}} \leq 0,
\end{split}
\end{equation*}
from~(\ref{eq:second_principe}), since $h'(F) = \ln
\left(\frac{2}{\delta}\varepsilon^{1-\frac{\delta}{2}}F\right)$.
In the general case, $\delta$ depends on $t$ and $x$; therefore, the relation
$\partial_t h(F) = \ln
\left(\frac{2}{\delta}\varepsilon^{1-\frac{\delta}{2}}F\right) \partial_t F $
does not hold, and the local property~(\ref{eq:second_principe}) cannot be used. It is not yet
clear whether our model satisfies a global dissipation
property. This issue was also noticed in~\cite{KKA_2019}.
\end{remark}
\subsection{Reduced model}
For computational reasons, it is interesting to reduce the complexity
of model~(\ref{eq-BGK_EOS}) by using the usual reduced
distribution technique~\cite{HH_1968}. We define
reduced distributions
$f(t,x,v) = \int_0^{+\infty}F(t,x,v,\varepsilon)\, d\varepsilon$ and
$g(t,x,v) = \int_0^{+\infty}\varepsilon F(t,x,v,\varepsilon)\, d\varepsilon$. By
integration of~(\ref{eq-BGK_EOS}) with respect to $\varepsilon$, we can
easily obtain the following closed system of two BGK equations
\begin{equation}\label{eq-reduced}
\begin{split}
& \partial_t f + v\cdot \nabla_x f = \frac{1}{\tau}(M[f,g]-f), \\
& \partial_t g + v\cdot \nabla_x g
= \frac{1}{\tau}( \frac{\delta}{2}
RT M[f,g]-g),
\end{split}
\end{equation}
where $M[f,g]$ is the translational part of $M[F]$ defined by
\begin{equation*}
M[f,g] = \frac{\rho}{(2\pi RT)^{\frac{3}{2}}}
\exp\left( - \frac{|v-u|^2}{2 RT} \right) ,
\end{equation*}
and the macroscopic quantities are defined by
\begin{equation} \label{eq-mtsfg}
\rho(t,x) = \int_{{\mathbb{R}}^3} f \, dv, \qquad
\rho u (t,x) = \int_{{\mathbb{R}}^3} v f \, dv, \qquad
\rho e (t,x) = \int_{{\mathbb{R}}^3}( \frac{1}{2} |v-u|^2 f + g) \, dv,
\end{equation}
while $\delta$, $R$ and $\tau$ are still defined
by~(\ref{eq-deltarhoe}),~\eqref{eq-Rrhoe} and~(\ref{eq-tauchimie}). This reduced system
is equivalent to~(\ref{eq-BGK_EOS}), in the sense that $F$ and $(f,g)$ have
the same moments. Moreover, the compressible Navier-Stokes asymptotics
obtained in section~\ref{subsec:CE} can also be derived from this
reduced system.
Consequently, \correct{this system is the one we use for our numerical
tests presented in the following section}.
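Before turning to the numerical results, the following minimal Python sketch (purely illustrative, and not the CEA-CESTA code described below; the grid, function names and quadrature rule are our own assumptions) shows how the macroscopic quantities~(\ref{eq-mtsfg}) and the translational Maxwellian $M[f,g]$ of the reduced model~(\ref{eq-reduced}) can be evaluated from the reduced distributions on a uniform velocity grid.
\begin{verbatim}
import numpy as np

def moments(f, g, V1, V2, V3, dv):
    """Macroscopic quantities (rho, u, e) of the reduced model.
    f, g: arrays on a 3D velocity grid (V1, V2, V3); dv: velocity cell size."""
    w = dv**3                                  # quadrature weight per velocity cell
    rho = np.sum(f) * w
    u = np.array([np.sum(V1 * f), np.sum(V2 * f), np.sum(V3 * f)]) * w / rho
    c2 = (V1 - u[0])**2 + (V2 - u[1])**2 + (V3 - u[2])**2
    e = np.sum(0.5 * c2 * f + g) * w / rho     # specific internal energy
    return rho, u, e

def maxwellian(rho, u, RT, V1, V2, V3):
    """Translational Maxwellian M[f,g] for given rho, u and RT = R(rho,e)*T(rho,e)."""
    c2 = (V1 - u[0])**2 + (V2 - u[1])**2 + (V3 - u[2])**2
    return rho / (2.0 * np.pi * RT) ** 1.5 * np.exp(-c2 / (2.0 * RT))

# One explicit relaxation step for the collision terms of the reduced system,
# with delta, R and tau supplied by the chosen constitutive laws:
#   f += dt / tau * (M - f)
#   g += dt / tau * (0.5 * delta * R * T * M - g)
\end{verbatim}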
\section{Numerical results}
\label{sec:num}
\subsection{Moderate temperature flow: vibrating molecules}
A numerical scheme for model~(\ref{eq-reduced}) has been implemented
in the code of CEA-CESTA. {This deterministic code, based
on the works presented in~\cite{mieussens99,bchm2013}, solves the BGK
equation in three dimensions of space and three dimensions of velocity with
a second-order finite volume scheme on structured meshes}. It is remarkable that the original code (for
non-reacting gases, with no high temperature effects), presented
in~\cite{bchm2013}, can be adapted to this new model very easily: only a
few modifications are necessary.
The goal of this section is to illustrate the capacity of our model
to account for some high temperature gas effects. We only consider the case
of a mixture of two vibrating, but non reacting, gases. A validation
of our model for reacting gases will be given in a further work.
Our test is a 2D hypersonic plane flow of air, considered as a mixture
of two vibrating gases (nitrogen and dioxygen), over a quarter of a
cylinder whose wall is assumed to be isothermal (see
figure~\ref{amont}). Gas-solid wall interactions are modeled by the usual
diffuse reflection.
At the inlet, the flow is defined by the data given in table~\ref{table:ci}.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|}
\hline
Mass concentration of $N_2$ ($c_{N_2}$) & $0.75$\\
\hline
Mass concentration of $O_2$ ($c_{O_2}$) & $0.25$\\
\hline
Mach number of the mixture & $10$\\
\hline
Velocity of the mixture & $2267 \ {\rm m.s^{-1}}$\\
\hline
Density of the mixture & $3.059\times 10^{-4} \ {\rm kg.m^{-3}}$\\
\hline
Pressure of the mixture & $11.22 \ {\rm Pa}$\\
\hline
Temperature of the mixture & $127.6 \ {\rm K}$\\
\hline
Temperature of the cylinder & $293 \ {\rm K}$\\
\hline
Radius of the cylinder & $0.1 \ {\rm m}$\\
\hline
\end{tabular}
\caption{Hypersonic flow around a cylinder: initial data}
\label{table:ci}
\end{table}
In this case, the vibrational energy is taken into account as
described in section~\ref{subsubsec:example2}. The corresponding
constitutive relations are obtained as explained in
remark~\ref{rem:compat_modeles}.
The flow conditions are such that molecules vibrate but
no chemical reactions are active (temperatures go up to $3000$K,
whereas chemical reactions occur at $5000$K at pressure $P=1$atm), so our
thermodynamical approach is reasonable. Since the test case is dense
enough (the Knudsen number is around $0.01$), we can compare the new model
with a Navier-Stokes code (a 2D finite
volume code with structured meshes) in which the same viscosity and conductivity are enforced as
in the compressible Navier-Stokes asymptotics derived from the BGK model
(see section~\ref{subsec:CE}). To validate the new model, we have performed
four different simulations:
\begin{itemize}
\item a Navier-Stokes simulation without taking into account vibrations (called $NS1$),
\item a Navier-Stokes simulation that takes into account vibrations (called $NS2$),
\item a BGK simulation without taking into account vibrations (called $BGK1$),
\item a BGK simulation that takes into account vibrations (called $BGK2$).
\end{itemize}
The first comparison is between $NS1$ and $BGK1$, in order to show
that the two models are consistent in this dense regime, when there
is no vibrational energy. As can be seen in figure \ref{NSBGK1}, the results agree very well.
The second comparison is between $NS2$ and $BGK2$, to show that we still
have good agreement when vibrations are taken into account. This is
what we observe in figure \ref{NSBGK2}. One can also observe that, due to
vibrations, the temperature decreases from $2682K$ to $2358K$ for
Navier-Stokes and from $2695K$ to $2365K$ for BGK.
The last comparison shows the influence of vibrational energy on
the results. We compare $BGK1$ and $BGK2$, and we observe that the shock is not
at the same position. Since there is a transfer of energy from
translational and rotational modes to vibrational modes, the maximum
temperature is lower and the shock is slightly closer to the cylinder
with $BGK2$ (see figure~\ref{bgkbgk2}). This difference is clearly visible
in the temperature profile along the stagnation line, see figure~\ref{stagnation}.
{To conclude this section, it can be said that when Navier-Stokes
and BGK are set with the same viscosity and Prandtl number, the results
agree very well. Of course, for more realistic test cases where
the Prandtl number is not equal to one, there will be a discrepancy
in the results that might be corrected with an ES-BGK extension of
our model. This will be presented in a further work.}
\correctp{
\subsection{High temperature flow: reacting gas}
\label{subsec:result_chemical}
In this section, we illustrate the ability of our model to account for
chemical reactions in a high temperature flow. In order to simplify
the analysis of our results, we consider here a single species flow of
dioxygen. The geometry of the test case is the same as in the previous
section, and the parameters of the upstream flow are the following: the Mach
number is $12$, the density is $10^{-3}$ kg.m$^{-3}$, so that
the flow is in the near continuum regime (Kn$=4.29\times 10^{-4}$),
the pressure is $33.15$ Pa, the temperature is $127.6$ K, and the
temperature of the cylinder is still $283$ K.
In this case, the chemical reactions are taken into account with
pressure and temperature laws as given by Hansen~\cite{hansen}, both
in our Navier-Stokes and BGK solvers. We obtain the comparison shown
in figure~\ref{NSchimie_BGKchimie} for the temperature field. The
results given by both codes are very close. A closer look at the
temperature profile along the stagnation line is also shown in
figure~\ref{T_profile}: this profile shows that BGK results are
excellent.
We are also able to obtain the concentration $c_O$ of monoatomic oxygen
(see section~\ref{subsec:chemical}), and this concentration is
plotted in figure~\ref{CO_NSchimie_BGKchimie}. Again both codes are in
very good agreement, and these results show that there is dissociation
of $O_2$ molecules in the highest temperature zones, since the concentration
rises up to 12\% there.
Finally, the importance of chemical reactions (dissociation) in this
test case can be seen as follows. In figure~\ref{BGKvibra_BGKchimie},
we compare the previous BGK results to a simulation made when
vibrations are taken into account but the chemical reactions are
not. This figure clearly shows that the non-reacting BGK results are
incorrect: the location of the shock is wrong, and the temperature is
too high.
}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed several generalized BGK models to
account for high temperature effects (vibrations and chemical
reactions). The first model is able to account for the fact that, for
polyatomic gases, some internal degrees of freedom are partially
excited with a level of excitation that depends on the temperature.
In other words, we have derived a model for a polyatomic gas with a
non-constant specific heat~$c_p = c_p(T)$.
This model has been extended to take into account general
constitutive laws for pressure and temperature, like in equilibrium
chemically reacting gases in high temperature flows. By using a
Chapman-Enskog analysis, we have derived compressible Navier-Stokes
equations from this model that are consistent with these constitutive
laws. This consistency has been illustrated by preliminary numerical
tests, in which the importance of taking vibrational modes into account is
clearly seen.
We point out that this new model can be reduced to a BGK system in which the
molecular velocity is the only kinetic variable. This makes it
possible to simulate a high temperature polyatomic gas for the
cost of a simple monoatomic rarefied gas flow simulation.
The model for chemically reacting gases \correct{has been
tested with a single species flow that shows its ability to account
for dissociation, at least in the near continuum regime. Our model
has still to be validated with comparisons to a full Boltzmann (DSMC)
solver in the rarefied regime. It should also be }extended to allow for
different time scales (viscous versus thermal diffusion time scales,
translational versus rotational energy relaxation rates). This might
be possible with the same approach as the one used to derive the ES-BGK
model for polyatomic gases (see~\cite{esbgk_poly}).
|
{
"timestamp": "2019-10-28T01:14:22",
"yymm": "1803",
"arxiv_id": "1803.02617",
"language": "en",
"url": "https://arxiv.org/abs/1803.02617"
}
|
\section{Introduction}
Topological data analysis (TDA) encapsulates a range of data analysis methods which investigate the topological structure of a dataset \citep{CompyTopo}.
One such method, persistent homology, describes the geometric structure of a given dataset and summarizes this information as a persistence diagram.
TDA, and in particular persistence diagrams, have been employed in several studies with topics ranging from classification and clustering \citep{tda_action, tda_number, tda_clustering2015,MaMa16-2} to the analysis of dynamical systems \citep{tda_windows,VM_CellMotion,tda_signal,tda_timeseries} and complex systems such as sensor networks \citep{persissensor,fullerene,persis_brain}.
In this work, we establish the probability density function (pdf) for a random persistence diagram.
Persistence diagrams offer a topological summary for a collection of ${\dim}$-dimensional data, say $\left\{ x_i \right\} \subset \R^{\dim}$, which focuses on the global geometric structure of the data.
A persistence diagram is a multiset of homological features $\left\{ (b_i,d_i,k_i) \right\}$, each representing a $k_i$-dimensional hole which appears at scale $b_i \in \R^+$ and is filled at scale $d_i \in (b_i, \infty)$.
In general, the dataset arises from any metric space, though restricting to $\left\{ x_i \right\} \subset \R^{\dim}$ guarantees $k_i \in \left\{ 0,..., \dim-1 \right\}$.
For example, if the data form a time series trajectory $x_i = f(t_i)$, the associated persistence diagram describes multistability through a corresponding number of persistent 0-dimensional features or periodicity through a single persistent 1-dimensional feature.
In a typical persistence diagram, few features exhibit long persistence (range of scales $d_i - b_i$), and such features describe important topological characteristics of the underlying dataset.
Moreover, persistent features are stable under perturbation of the underlying dataset \citep{tda_stability}.
Persistence diagrams have recently been the subject of intense research, including significant effort toward facilitating previously challenging computations;
these efforts include efficient evaluation of the Wasserstein distance \citep{geomhelps} and the creation of persistence diagrams with packages such as Dionysus \citep{tdaR} and Ripser \citep{ripser}, which take advantage of certain properties of simplicial complexes \citep{persistwist}.
Recently, various approaches have defined specific summary statistics such as center and variance \citep{tda_kernels,Wass_Structure,Frechet_Computation, MaMa16-2}, birth and death estimates \citep{tda_parametric}, and confidence sets \citep{tda_confidence}.
Here we introduce a nonparametric method to construct density functions for a distribution of persistence diagrams.
The development of these densities offers a consistent framework to understand the above summary statistic results through a single viewpoint.
We naturally think of a (random) persistence diagram as a random element which depends upon a stochastic procedure which is used to generate the underlying dataset that it summarizes.
Given that geometric complexes are the typical paradigms for application of persistent homology to data analysis (see, for example, the partial list \citep{persissensor, tda_parametric, tda_signal, MaMa16, tda_windows, tda_timeseries, fullerene, tda_action, tda_images, tda_wheeze}), we consider persistence diagrams which arise from a dataset and its associated $\cech$ filtration.
Thus, sample datasets yield sample persistence diagrams without direct access to the distribution of persistence diagrams.
In this sense, a distribution of persistence diagrams is defined by transforming the distribution of underlying data under the process used to create a
persistence diagram, as discussed in \citep{Wass_Structure}.
The persistence diagrams are created through a complex and nonlinear process which relies on the global arrangement of datapoints (see Section \ref{sect:TDA}); thus, the structure of a persistence diagram distribution remains unclear even for underlying data with a well-understood distribution.
Indeed, known results for the persistent homology of noise alone, such as \citep{tda_crackle}, primarily concern the asymptotics of feature cardinality at coarse scale.
With little previous knowledge, we study these distributions through nonparametric means.
Kernel density estimation is a well known nonparametric technique for random vectors in $\R^{\dim}$ \citep{KDE_book};
however, persistence diagrams lack a vector space structure and thus these techniques cannot be applied directly here.
There has been extensive work to devise various maps from persistence diagrams into Hilbert spaces, especially Reproducing Kernel Hilbert Spaces (RKHS).
For example, \citep{PersistImages} discretizes persistence diagrams via bins, yielding vectors in a high dimensional Euclidean space.
The works \citep{MSKernel} and \citep{PWGK_TDA} define kernels between persistence diagrams in a RKHS.
By mapping into a Hilbert space, these studies allow the application of machine learning methods such as principal component analysis, random forest, support vector machine, and more.
The universality of such a kernel is investigated in \citep{Stat_TDA}; this property induces a metric on distributions of persistence diagrams (by comparing means in the RKHS), as \citep{Stat_TDA} demonstrates with a two-sample hypothesis test.
Similarly, \citep{PD_rep} utilizes Gibbs distributions in order to replicate similar persistence diagrams, e.g., for use in MCMC-type sampling.
All of these previous approaches kernelize, mapping into a Hilbert space suitable for standard statistical learning techniques.
In a related vein, the studies \citep{tda_kernels} and \citep{tda_confidence} work with kernel density estimation on the underlying data to estimate a target diagram as the number of underlying datapoints goes to infinity.
In both cases, the target diagram is directly associated to the probability density function (pdf) of the underlying data (via the superlevel sets of the pdf).
The first work constructs an estimator for the target diagram, while the second defines a confidence set.
In either case, kernel density estimation is used to approximate the pdf of the underlying datapoints, assuming the data are independent and identically distributed (i.i.d.).
In contrast, our work considers a different kind of kernel density which directly estimates probability densities for a random persistence diagram from a sample of persistence diagrams.
This kernel density estimate converges to the true probability density as the number of persistence diagrams goes to infinity.
Instead of a transformed collection or a center diagram, the output of our method is an estimate of a probability density function (pdf) of a random persistence diagram.
Access to a pdf facilitates definition and application of many statistical techniques, including hypothesis testing, utilization of Bayesian priors, or likelihood methods.
The proposed kernel density is centered at a persistence diagram and describes each feature as having either short or long persistence;
by treating each long-persistence point individually and short persistence points collectively, the kernel density strikes a careful balance between accuracy and computation time.
Our method also enables expedient sampling of new persistence diagrams from the kernel density estimate.
In contrast to previous methodologies, our kernel density estimate has the potential to describe high probability features in a random persistence diagram, even if these features have \emph{brief} persistence.
Such features are typically indicative of the geometric structure, e.g., curvature, of the dataset rather than its topology.
The homological features $(b_i,d_i,k_i)$ in a persistence diagram come without an ordering and their cardinality is variable, being bounded but not defined by the cardinality of the underlying dataset.
Thus, any notion of density must be (i) invariant to the ordering of features and (ii) account for variability in their cardinality.
Indeed, the approach used to analyze a collection of persistence diagrams in \citep{persis_brain} is a good step toward understanding a random persistence diagram, but it requires a choice of ordering, considers only a fixed number of features, and is therefore unsuitable for creating probability densities.
In this work, we offer a kernel density with the desirable properties (i) and (ii), which also calls attention to the persistence of each feature.
A typical persistence diagram has many features with brief persistence and few with moderate or longer persistence; consequently, our kernel density groups features with short persistence together in order to combat the curse of dimensionality.
Indeed, the kernel density still considers features of short persistence, but simplifies their treatment in order to facilitate computation.
The kernel density is defined on a pertinent space of finite random sets which is equipped to describe pdfs for random persistence diagrams generated from associated data with bounded cardinality of topological features.
In this sense, our kernel density provides estimation of the distribution of persistence diagrams which in turn describes the geometry of the random underlying dataset.
The requirement of bounded feature cardinality is trivially satisfied for datasets with bounded cardinality, which is reasonable for application and theory.
Indeed, the creation of a persistence diagram from an infinite collection of data is often nonsensical (e.g., for anything with unbounded noise), and a scaling limit should be considered instead.
We establish the kernel density estimation problem through the lens of finite set statistics and we consequently begin with relevant backgrounds in topological data analysis in Section \ref{sect:TDA} and finite set statistics in Section \ref{sect:RPDs}.
For further details about these two subjects, the reader may refer respectively to \citep{CompyTopo} and \citep{Matheron}.
Our results are presented in Section \ref{sect:KDE}.
In Subsection \ref{subsect:KDE_construction}, we construct the kernel density associated to a center persistence diagram and kernel bandwidth parameter.
This consists of decomposing the center persistence diagram into lower and upper halves, finding pdfs associated to each half, and lastly determining the pdf for their union.
After the kernel density is defined and an explicit pdf is delivered in Thm. \ref{thm_construction}, its convergence is presented in Theorem \ref{thm_KDE}.
Next, Subsection \ref{subsect:Examples} presents in detail a specific example of the kernel density.
Additionally, an example of persistence diagram kernel density estimation and its convergence are demonstrated for persistence diagrams associated to underlying data with annular distribution.
In Subsection \ref{subsect:KDE_MoD}, we define the mean absolute deviation (MAD) as a measure of dispersion, and present the convergence of its kernel density estimator (Thm. \ref{thm_moment}).
Finally, we end with conclusions and discussion in Section \ref{sect:discussion}.
Further examples of KDE convergence and the proofs of the main theorems, Thm. \ref{thm_KDE} and Thm. \ref{thm_moment}, are given in the supplementary materials.
\section{Topological Data Analysis Background} \label{sect:TDA}
The topological background discussed here builds toward the definition of persistence diagrams, the pertinent objects in this work.
We begin by briefly discussing simplicial complexes and homology, an algebraic descriptor for coarse shape in topological spaces.
In turn, persistent homology, and its summary, persistence diagrams, are techniques for bringing the power and convenience of homology to describe subspace filtrations of topological spaces.
We first consider topological spaces of discernible dimension, called manifolds.
\begin{defn}
A topological space $X$ is called a $k$-dimensional manifold if every point $x \in X$ has a neighborhood which is homeomorphic to an open neighborhood in $k$-dimensional Euclidean space.
\end{defn}
We generalize the fixed-dimension notion of a manifold in order to define simplicial homology for simplicial complexes.
We then discuss the $\cech$ construction which is used to associate simplicial complexes to datasets.
\begin{defn} \label{simplex}
A $k$-simplex is a collection of $k+1$ affinely independent vertices along with all convex combinations of these vertices:
\begin{equation} \label{convex_combo}
\LP v_0,...,v_k \RP = \left\{ \sum_{i=0}^k \alpha_i v_i : \sum_{i=0}^k \alpha_i = 1 \textrm{ and } \alpha_i \geq 0 \, \forall i \right\}.
\end{equation}
Topologically, a $k$-simplex is treated as a $k$-dimensional manifold (with boundary).
An oriented simplex is typically described by a list of its vertices, such as $\LP v_0, v_1, v_2 \RP$.
The faces of a simplex consist of all the simplices built from a subset of its vertex set;
for example, the edge $(v_1, v_2)$ and vertex $(v_2)$ are both faces of the triangle $\LP v_0, v_1, v_2 \RP$.
\end{defn}
\begin{defn} \label{simplicial_complex}
A simplicial complex $\K$ is a collection of simplices wherein \\
(i) if $\sigma \in \K$, then all its faces are also in $\K$, and \\
(ii) the intersection of any pair of simplices in $\K$ is another simplex in $\K$. \\
We denote the collection of $k$-simplices within $\K$ by $\K^{[k]}$.
\end{defn}
A simplicial complex is realized by the union of all its simplices; an example is shown in Fig. \ref{simp_comp}.
Conditions (i) and (ii) in Defn. \ref{simplicial_complex} establish a unique topology on the realization of a simplicial complex which restricts to the subspace topology on each open simplex.
For finite simplicial complexes realized in $\R^{\dim}$, this topology is also consistent with the Euclidean subspace topology.
\begin{figure}[h]
\centering\includegraphics[scale=0.3]{Simp_Comp_3.png}
\caption{An example of a simplicial complex realized in $\R^3$.
This particular complex has one connected component and two cycles, which generate the 0-homology and 1-homology groups, respectively.
The other homology groups are trivial. }
\label{simp_comp}
\end{figure}
Here we define the homology groups for a simplicial complex through purely combinatorial means, which allows for automated computation.
\begin{defn} \label{defn_chain_group}
The chain group (over $\Z$) of dimension $k$ on a simplicial complex $\K$ is denoted by $C_k(\K)$ and is defined as formal sums of $k$-simplices in $\K$:
\begin{equation} \label{eqn_chain_group}
C_k(\K) = \left\{ \sum_{\sigma \in \K^{[k]}} n_\sigma \sigma : n_\sigma \in \Z \right\}.
\end{equation}
\end{defn}
\begin{defn} \label{defn_boundary_map}
The $k$-th boundary map is a homomorphism $\del{k} : C_k(\K) \goto C_{k-1}(\K)$ defined on each simplex as an alternating sum over the faces of one dimension less:
\begin{equation} \label{eqn_boundary_map}
\del{k}(v_0,...,v_k) = \sum_{n=0}^k (-1)^n (v_0,...,v_{n-1},v_{n+1},...,v_k).
\end{equation}
\end{defn}
\begin{remark}
Chain groups give an algebraic way to describe subsets of simplices as a formal sum.
Toward this viewpoint, the chain group is often defined over $\Z_2 = \left\{ 0 , 1 \right\}$ instead of $\Z$.
In this case, the boundary maps can be understood classically; e.g., the boundary of a triangle yields (the sum of) its three edges and the boundary of an edge yields (the sum of) its endpoints.
When viewed over $\Z$, the presence of sign specifies simplex orientation.
\end{remark}
Putting chain groups of every dimension together along with the boundary maps successively defined between them, we obtain a chain complex:
\begin{equation} \label{eqn_chain_complex}
\left\{ 0 \right\} \xleftarrow{\bm 0} C_0(\K) \xleftarrow{\del{1}} C_1(\K) \xleftarrow{\del{2}} C_2(\K) \xleftarrow{\del{3}} ...
\end{equation}
The composition of subsequent boundary maps yields the trivial map \citep{CompyTopo};
this property is typically rephrased as $\im(\del{k+1}) \subset \ker(\del{k})$ which enables definition of the following modular groups.
\begin{defn} \label{defn_homology_group}
The homology group of dimension $k$ is given by
\begin{equation} \label{eqn_homology_group}
H_k(\K) = \ker(\del{k}) / \im(\del{k+1}) = \left\{ [x] = x + \im(\del{k+1}) : x \in \ker(\del{k}) \right\}\!,
\end{equation}
where $[x] = \left\{ x + y : y \in \im(\del{k+1}) \right\}$ defines the coset equivalence class of $x$.
\end{defn}
The generators of the homology group correspond to topological features of the complex $\K$;
for example, generators for the $0$-homology group correspond to connected components, generators of $1$-homology group correspond to holes in $\K$, etc.
The interpretation of these features is exemplified by taking the topological boundary of a $(k+1)$-ball (that is, a $k$-sphere);
for example, the boundary of an interval is two (disconnected) points while the boundary of a disc is a loop.
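As a small illustration of Defn. \ref{defn_homology_group}, the following Python sketch (a toy computation over the rationals, not one of the TDA packages cited in the introduction) recovers the Betti numbers $\beta_0 = \beta_1 = 1$ of the hollow triangle from the ranks of its boundary matrices.
\begin{verbatim}
import numpy as np

# Hollow triangle: three vertices, three edges, no 2-simplex.
vertices = ["v0", "v1", "v2"]
edges = [("v0", "v1"), ("v0", "v2"), ("v1", "v2")]

# Boundary matrix del_1 : C_1 -> C_0; column j is the signed boundary of edge j.
del1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    del1[vertices.index(a), j] = -1.0
    del1[vertices.index(b), j] = 1.0

rank_del1 = np.linalg.matrix_rank(del1)
beta0 = len(vertices) - rank_del1      # dim ker(del_0) - rank(del_1), with del_0 = 0
beta1 = (len(edges) - rank_del1) - 0   # dim ker(del_1) - rank(del_2); no triangle face
print(beta0, beta1)                    # 1 connected component and 1 loop
\end{verbatim}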
We wish to extend the notion of homology to a discrete set of data $\bm x = \left\{ x_i \right\}_{i=1}^N$ within a metric space $(X,d_X)$.
Treating the set itself as a simplicial complex, its homology yields only the cardinality of the data points.
So, we utilize the metric to obtain more information.
Here we denote by $B(x_0,r_0)$ a metric ball centered at $x_0$ of radius $r_0$.
Fix a radius $r > 0$ and consider the collection of neighborhoods $U = \left\{ U_i \right\} = \left\{ B(x_i,r) \right\}$ along with its union $\mathcal{U}_r = \cup_i B(x_i,r)$.
The filtration of sets $\left\{ \mathcal{U}_r \right\}_{r \in \R^+}$ naturally yields information about the arrangement within $X$ of the dataset $\bm x$ at various scales.
To make homology computations more tractable for $\mathcal{U}_r$, we instead consider the associated nerve complexes.
\begin{defn} \label{defn_nerve_and_cech}
The nerve $\mathcal{N}(U)$ of a collection of open sets $U$ is the simplicial complex where a $k$-simplex $\LP v_{i_0},..., v_{i_k} \RP$ is in $\mathcal{N}(U)$ if and only if $\cap_{j=0}^k U_{i_j} \neq \emptyset$.
The nerve of the neighborhoods $U = \left\{ B(x_i,r) \right\}$ is called the $\cech$ complex on the data $\left\{ x_i \right\}$ at radius $r$ and is denoted by $\textrm{\v Cech}(\bm x, r)$.
\end{defn}
Examples of the $\cech$ complex for the same data at different radii are depicted in Fig. \ref{grow_cech}, where they are superimposed with the associated neighborhood space.
Any nerve complex trivially satisfies the requirements for a simplicial complex \citep{CompyTopo}.
Moreover, the nerve theorem states that the nerve and union of a collection of convex sets have similar topology (they are homotopy equivalent) \citep{Hatcher};
specifically, the $\cech$ complex and neighborhood space $\mathcal{U}$ have identical homology for any given radius.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{Cech_1.png} \quad
\includegraphics[scale=0.3]{Cech_2.png} \quad
\includegraphics[scale=0.3]{Cech_3.png}
\end{center}
\caption{The neighborhood space and $\cech$ complex of matching radius plotted at three different radii. Yellow indicates a triangle while orange indicates a tetrahedron. This family of simplicial complexes is the filtration utilized to compute and define persistent homology.} \label{grow_cech}
\end{figure}
A priori, it is unclear which choice of scale (radius) best describes the data,
and oftentimes different scales reveal different information.
Thus, to investigate the topology of our data, we consider the appearance and disappearance of homological features at growing scale.
This multiscale viewpoint, called persistent homology, is introduced in \citep{Persistence} and yields a topological summary of the data called a persistence diagram.
This is possible because we have a growing filtration of complexes, so each complex is included in the next (see Fig. \ref{grow_cech}).
These inclusion maps induce inclusion maps at the chain group level and in turn induce maps (though not typically inclusions) at the level of homology groups.
These induced maps $f_{r_1, r_2}:H_k(\textrm{\v Cech}(\bm{x},r_1)) \goto H_k(\textrm{\v Cech}(\bm{x},r_2))$ are referred to here as the persistence maps, and take features to features (i.e., generators to generators) or to zero \citep{bottleneck}.
Thus, each feature is tracked by how far the persistence maps preserve it.
In turn, tracking features boils down to a specific algorithm for obtaining the birth and death radii for each homological feature (e.g., see \citep{CompyTopo}).
Features which persist over a large range of scale are typically considered more important, and their presence is stable under small perturbations of the underlying data \citep{tda_stability}.
Persistent homology yields a multiset of homological features, each born at a scale $b_i$, lasting until its death scale $d_i$, with degree of homology $k_i$;
in short, it yields a persistence diagram $\mathscr{D} = \left\{ \xi_i \right\}_{i=1}^M = \left\{ (b_i,d_i,k_i) \right\}_{i=1}^M$.
We interpret the birth-death values as coordinate points with degree of homology as labels.
For clarity and simplicity, we ignore any features with death value $d_i = \infty$, since these features are generally a characteristic of the ambient space.
In particular, one homological feature with $(b,d,k) = (0,\infty,0)$ is expected from any $\cech$ filtration.
Specifically, for data in $\R^{\dim}$, we consider each feature as an element of
\begin{equation} \label{eqn_wedge}
\W = W \times \left\{ 0,...,\dim-1 \right\},
\end{equation}
where $W = \left\{ (b,d) \in \R^2 : d > b \geq 0 \right\}$ is the infinite wedge.
As a topological space, the ${\dim}$-fold multiwedge $\W$ is treated as ${\dim}$-disconnected copies of $W$, where $W$ has the Euclidean metric and topology.
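For concreteness, such diagrams are easy to generate numerically; the short sketch below (which uses the Ripser package mentioned in the introduction, assumed installed as the ripser Python bindings, and which builds the Vietoris-Rips rather than the $\cech$ filtration) samples a noisy annulus and prints the finite birth-death pairs by degree of homology.
\begin{verbatim}
import numpy as np
from ripser import ripser   # scikit-tda bindings for Ripser (assumed available)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
radius = 1.0 + 0.1 * rng.standard_normal(200)
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

dgms = ripser(X, maxdim=1)["dgms"]   # dgms[k] is the degree-k persistence diagram
for k, dgm in enumerate(dgms):
    finite = dgm[np.isfinite(dgm[:, 1])]   # keep finite deaths (H0 has one infinite feature)
    pers = finite[:, 1] - finite[:, 0]
    print(f"H{k}: {len(dgm)} features, max persistence {pers.max():.3f}")
# The degree-1 diagram should contain one feature of long persistence (the hole).
\end{verbatim}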
It is desirable to define a metric between persistence diagrams with which to measure topological similarity.
In TDA, Hausdorff distance is typically used to compare underlying datasets, while the bottleneck distance (Defn. \ref{defn_bottleneck}) is used to compare their associated persistence diagrams \citep{tda_confidence, tda_guide}.
\begin{defn} \label{defn_bottleneck}
The bottleneck distance between two persistence diagrams $D_1$ and $D_2$ is given by
\begin{equation} \label{eqn_bottleneck}
W_\infty(D_1,D_2) = \min_\gamma \max_{x \in D_1} \LN x - \gamma(x) \RN_\infty.
\end{equation}
where $\gamma$ ranges over all possible bijections between $D_1$ and $D_2$ which match in degree of homology.
The diagonal $\left\{ b = d \right\}$ is included in both persistence diagrams with infinite multiplicity so that any feature may be matched to the diagonal.
\end{defn}
\begin{remark}
Due to the unstable presence of features near the diagonal, typical metrics on persistence diagrams such as the bottleneck distance treat the diagonal as part of every persistence diagram \citep{Wass_Structure} in order to achieve stability with respect to Hausdorff perturbations of the underlying dataset \citep{bottleneck}.
Morally, one considers the diagonal as representing vacuous features which are born and die simultaneously.
For convenient computation, the definition of bottleneck distance can be applied to each degree of homology separately.
\end{remark}
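In practice, the bottleneck distance of Defn. \ref{defn_bottleneck} can be evaluated numerically; a minimal sketch using the persim package (assumed available; the two diagrams below are purely illustrative) is the following.
\begin{verbatim}
import numpy as np
import persim   # scikit-tda package providing a bottleneck distance routine (assumed)

# Two toy degree-1 diagrams; each row is a (birth, death) pair.
dgm1 = np.array([[0.10, 1.00], [0.20, 0.30]])
dgm2 = np.array([[0.15, 0.90]])

# Diagrams of different cardinality are comparable because features may be
# matched to the diagonal; here (0.20, 0.30) is matched to the diagonal.
d = persim.bottleneck(dgm1, dgm2)
print(d)   # about 0.10: the cost of matching (0.10, 1.00) to (0.15, 0.90)
\end{verbatim}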
\section{Random Persistence Diagrams} \label{sect:RPDs}
In this section we establish background to make the notion of probability density for a random persistence diagram explicit and well-defined.
A persistence diagram changes its feature cardinality under small perturbation of the underlying dataset, and these features have no intrinsic order.
Consequently, we cannot treat persistence diagrams as elements of a vector space.
Instead, we consider a random persistence diagram $D$ as a random multiset of features $D = \left\{ \xi_i \right\} \subset \W$ in the multiwedge defined in Eq. \eqref{eqn_wedge}.
For underlying datasets sampled from $\R^{\dim}$ with bounded cardinality, the affiliated $\cech$ persistence diagrams also have bounded feature cardinality and degree of homology.
Thus, we assume that the cardinality of a random persistence diagram is bounded above by some value $\Ln D \Rn \leq M \in \N$, and so consider the space $\mathcal{C}_{\leq M}(\W) = \left\{ D \textrm{ multiset in } \W : \Ln D \Rn \leq M \right\}$.
We view $\mathcal{C}_{\leq M}(\W)$ through a list of functions $h_N$ which each map the appropriate dimension of Euclidean space into its corresponding cardinality component, $\mathcal{C}_{N}(\W)$.
This viewpoint facilitates the definition of probability densities.
\begin{defn} \label{defn_hit-or-miss}
For each $N \in \left\{ 0 , ..., M \right\}$, consider the space of $N$ topological features, denoted $\mathcal{C}_N(\W) = \left\{ D \textrm{ multiset in } \W : \Ln D \Rn = N \right\}$, and the associated map $h_N: \W^N \goto \mathcal{C}_N(\W)$ defined by
\begin{equation} \label{eqn_E2HoM}
h_N(\xi_1,...,\xi_N) = \left\{ \xi_1,...,\xi_N \right\}.
\end{equation}
The map $h_N$ creates equivalence classes on $\W^N$ according to the action of the permutations $\Pi_N$;
specifically, $\LB Z \RB = \LB \LP \xi_1,...,\xi_N \RP \RB_{h_N} = \left\{ \LP \xi_{\pi(1)},...,\xi_{\pi(N)} \RP : \pi \in \Pi_N \right\}$ for each $Z = \LP \xi_1,...,\xi_N \RP \in \W^N$.
These equivalence classes yield the space
\begin{equation} \label{eqn_mod_space}
\W^N / \Pi_N = \left\{ \LB \bm{\xi} \RB_{h_N} : \bm{\xi} \in \W^N \right\},
\end{equation}
equipped with the quotient topology.
The topology on $\mathcal{C}_{\leq M}(\W)$ is defined so that each $h_N$ lifts to a homeomorphism between $\W^N / \Pi_N$ and $\mathcal{C}_N(\W)$.
\end{defn}
With a topology in hand, one can define probability measures on the associated Borel $\sigma$-algebra.
Thus, we define a random persistence diagram $D$ to be a random element distributed according to some probability measure on $\mathcal{C}_{\leq M}(\W)$ for a fixed maximal cardinality $M \in \N$.
We denote associated probabilities by $\P \LB \cdot \RB$ and expected values by $\E \LB \cdot \RB$.
Since $\W^N / \Pi_N \cong \mathcal{C}_N(\W)$, we work toward defining probability densities on the collection of Euclidean spaces $\cup_{N=0}^M \W^N$.
\begin{defn}[\citep{Matheron}] \label{defn_belief}
For a given random persistence diagram $D$ and any Borel subset $A$ of $\W$, the belief function $\beta_D$ is defined as
\begin{equation} \label{eqn_belief}
\beta_D(A) = \P \LB D \subset A \RB.
\end{equation}
\end{defn}
Since $A$ is a Borel subset of $\W$, the collection $O_A = \left\{ D \in \mathcal{C}_{\leq M}(\W) : D \subset A \right\}$ is the quotient of $\cup_{N=0}^M A^N \subset \cup_{N=0}^M \W^N$ under $h_N$; moreover, $A^N$ is clearly Borel in the Euclidean topology of $\cup_{N=0}^M \W^N$.
Therefore, since $h_N$ induces a homeomorphism (see Defn \ref{defn_hit-or-miss}), $O_A$ is a Borel subset of $\mathcal{C}_{\leq M}(\W)$.
The belief function of a random persistence diagram is similar to the joint cumulative distribution function for a random vector,
in particular by yielding a probability density function through Radon-Nikod\'{y}m type derivatives.
\begin{defn} \label{defn_set_derivative}
\citep{Matheron} Fix $\phi$ defined on Borel subsets of $\mathcal{C}_{\leq M}(\W)$ into $\R$.
For an element $\xi \in \W$ or a multiset $Z \subset \W$ with $Z = \left\{ \xi_1,...,\xi_N \right\}$, the set derivative (evaluated at $\emptyset$) is respectively given by
\begin{equation} \label{eqn_set_derivative}
\begin{split}
\frac{\delta \phi}{\delta \xi}(\emptyset) &= \lim_{n \goto \infty} \frac{\phi(B(\xi,1/n))}{\lambda(B(\xi,1/n))}, \\
\frac{\delta \phi}{\delta Z}(\emptyset) &= \frac{\delta^N \phi}{\delta \xi_1 ... \delta \xi_N} = \LB \frac{\delta}{\delta \xi_1} \cdots \frac{\delta}{\delta \xi_N}\phi \RB (\emptyset),
\end{split}
\end{equation}
where $B(\xi,1/n)$ are Euclidean balls and $\lambda$ indicates Lebesgue measure on $\W$.
\end{defn}
\begin{remark}
Defn. \ref{defn_set_derivative} for set derivatives at the empty set closely mirrors the Radon-Nikod\'{y}m derivative with respect to Lebesgue measure.
The definition of a set derivative evaluated on a nonempty set is more involved, and is found in \citep{Matheron}.
Here we are primarily concerned with evaluation at $\emptyset$, since this suffices for the definition of a probability density function.
Also, note that set derivatives satisfy the product rule.
\end{remark}
\begin{remark}
Restricting to a particular cardinality $N$, consider $\phi_N = \phi \circ h_N$, a function on Euclidean space which is invariant under the action of $\Pi_N$.
The viewpoint of $\phi_N$ elucidates the relationship between set derivatives and Radon-Nikod\'{y}m derivatives with respect to Lebesgue measure.
This viewpoint also shows that the iterated derivative given in Eq. \eqref{eqn_set_derivative} is independent of order and thus is well-defined for a multiset $Z$.
\end{remark}
As with typical derivatives, there is a complementary set integration operation for set derivatives.
Set derivatives (at $\emptyset$) are essentially Radon-Nikod\'{y}m derivatives with order tied to cardinality, and so the corresponding set integral acts like Lebesgue integration summed over each cardinality.
\begin{defn} \label{defn_set_integral}
Consider a Borel subset $A$ of $\W$ and a Borel subset $O$ of $\mathcal{C}_{\leq M}(\W)$.
For a set function $f:\mathcal{C}_{\leq M}(\W) \goto \R$, its set integrals over $A$ and $O$ are respectively defined according to the following sums of Lebesgue integrals:
\begin{subequations}
\begin{align}
\int_A f(Z) \delta Z &= \sum_{N=0}^M \frac{1}{N!} \int_{A^N} f(h_N(\xi_1,...,\xi_N)) d\xi_1 ... d\xi_N, \label{eqn_set_integral_a} \\
\int_O f(Z) \delta Z &= \sum_{N=0}^M \frac{1}{N!} \int_{h_N^{-1}(O)} f(h_N(\xi_1,...,\xi_N)) d\xi_1 ... d\xi_N, \label{eqn_set_integral_b}
\end{align}
\end{subequations}
where $Z = \left\{ \xi_1,...,\xi_N \right\} \subset \W$ is a persistence diagram.
\end{defn}
\noindent Dividing by $N!$ in Eqs. \eqref{eqn_set_integral_a} and \eqref{eqn_set_integral_b} accounts for integrating over $\W^N$ instead of $\W^N/\Pi_N \cong \mathcal{C}_N(\W)$.
It has been shown that set derivatives and integrals are inverse operations \citep{Matheron};
specifically, the set derivative of a belief function yields a probability density for a random diagram $D$ such that
\begin{equation} \label{eqn_belief_as_pdf}
\beta_D(A) = \int_A \frac{\delta \beta_D}{\delta Z}(\emptyset) \delta Z.
\end{equation}
Indeed, $A^N = h_N^{-1}(\left\{ D \subset A \right\})$ so that Eq. \eqref{eqn_set_integral_a} also holds as an integral over $O_A = \left\{ D \in \mathcal{C}_{\leq M} : D \subset A \right\}$ in the sense of Eq. \eqref{eqn_set_integral_b}.
\begin{defn} \label{defn_global_pdf}
For a random persistence diagram $D$, a global probability density function (global pdf) $f_D: \cup_{N\in\N} \W^N \goto \R$ must satisfy
\begin{equation} \label{eqn_global_pdf}
\sum_{\pi \in \Pi_N} f_D(\xi_{\pi(1)},...,\xi_{\pi(N)}) = \frac{\delta^N \beta_D}{\delta \xi_1 \cdots \delta\xi_N} (\emptyset),
\end{equation}
and is described by its layered restrictions $f_N = f_D\big{|}_{\W^N}:\W^N \goto \R$ for each $N$.
\end{defn}
\begin{remark} \label{rmk_local_vs_global}
It is necessary to make a distinction between local and global densities because the global pdf is not defined on a single Euclidean space, and is instead expressed as a collection of densities over a range of dimensions.
Specifically, while each local density $f_N$ (for input cardinality $N$) is defined on $\W^N$, the global pdf $f_D$ is defined on $\cup_{N=1}^M \W^N$ and restricts to a local density on each input dimension.
Each local density $f_N(Z) = f_D\big{|}_{\W^N}(Z)$ decomposes into the product of the conditional density $f_D( Z \big | \Ln Z \Rn = N)$ and the cardinality probability $\P[\Ln Z \Rn = N]$ (this follows from Prop. \ref{prop_belief_layers}).
Thus, each local density does not integrate to one, but instead to the associated probability $\P[\Ln Z \Rn = N]$.
Also, the global pdf is not a set function and does not require division by $N!$, leading to the following relation:
$\int_{A^N} f_D(\xi_1,...,\xi_N) d\xi_1...d\xi_N = \frac{1}{N!} \int_{A^N} \frac{\delta^N \beta_D}{\delta^N Z} (\emptyset) d\xi_1...d\xi_N.$
\end{remark}
\begin{remark} \label{rmk_unique_symmetric}
While the global pdf and its local constituents need not be symmetric with respect to $\Pi_N$,
there is a unique choice of global pdf (up to sets of Lebesgue measure 0) which satisfies Eq. \eqref{eqn_global_pdf} and is symmetric under the action of $\Pi_N$.
In this case, we safely abuse notation by denoting $f_D(\left\{ \xi_1, ..., \xi_N \right\}) := N! f_D(\xi_1,...,\xi_N)$ and often write $f_D(Z)$ and allow context to determine whether $Z$ denotes a set or a vector.
\end{remark}
The following proposition is critical to determine the global pdf for (i) the union of independent singleton diagrams (i.e., $\Ln D^j \Rn \leq 1$), (ii) a randomly chosen cardinality, $N$, followed by $N$ i.i.d. draws from a fixed distribution, and (iii) a random persistence diagram kernel density function.
The proof of this proposition follows similar arguments to \citep{Nonparametric_Fusion} (Theorem 17, pp. 155--156).
\begin{prop} \label{prop_belief_layers}
Let $D$ be a random persistence diagram with cardinality bounded by $M$ and let $\beta_D(S) = \P(D \subset S)$ be the belief function for $D$.
Then $\beta_D$ expands as
$$\beta_D(S) = a_0 + \sum_{m=1}^M a_mq_m(S),$$
where $a_m = \P(\Ln D \Rn = m)$ and $q_m(S)=\P[D \subset S \big{|} \Ln D \Rn = m]$.
\end{prop}
\begin{remark} \label{rmk_belief_layers}
The decomposition in Prop. \ref{prop_belief_layers} is often applied as a first step toward finding the local density constituents of the global pdf.
In particular, $f_N = f_D\big{|}_{\W^N} = 0$ for $N > M$.
\end{remark}
Lastly, we encounter a computationally convenient summary for a random persistence diagram called the probability hypothesis density (PHD).
The integral of the PHD over a subset $U$ in $\W$ gives the expected number of points in the region $U$;
moreover, any other function on $\W$ with this property is a.e. equal to the PHD \citep{Mahler}.
\begin{defn} \label{defn_PHD}
\citep{Matheron} The probability hypothesis density (PHD) for a random persistence diagram $D$ is defined as the set function $F_D(a) = \frac{\delta \beta_D}{\delta Z} \LP\left\{ a \right\}\RP$ and is expressed as a set integral as
\begin{equation} \label{eqn_PHD}
F_D(a) = \int_{\left\{ Z : \left\{ a \right\} \subset Z \right\}} \frac{\delta \beta}{\delta Z}(\emptyset) \delta Z.
\end{equation}
In particular, $\E(\Ln D \cap U \Rn) = \int_U F_D(u) \, du$ for any region $U$.
\end{defn}
\section{Kernel Density Estimation} \label{sect:KDE}
\subsection{Construction} \label{subsect:KDE_construction}
To estimate distributions of persistence diagrams, our goal is the creation of a kernel density function about a center persistence diagram $\mathscr{D}$ with a kernel bandwidth parameter $\sigma>0$, used for defining constituent Gaussians according to Definitions \ref{defn_upper_singletons} and \ref{defn_lower_cluster}.
Prop. \ref{prop_belief_layers} leads to the following lemma which is crucial for determining the kernel density.
We refer to a random persistence diagram $D$ with $\Ln D \Rn \leq 1$ as a singleton diagram, and such
singletons are indexed by superscripts.
\begin{lemma} \label{lemma_combination}
Consider a multiset of independent singleton random persistence diagrams $\left\{ D^j \right\}_{j=1}^M$.
If each singleton $D^j$ is described by the value $q^{(j)} = \P[D^j \neq \emptyset]$ and the subsequent conditional pdf, $p^{(j)}(\xi)$, given $\Ln D^j \Rn = 1$, then the global pdf for $D = \cup_{j=1}^M D^j$ is given by
\begin{equation} \label{eqn_combination}
f_D(\xi_1,...,\xi_N) = \sum_{\gamma \in I(N, M)} \mathcal{Q}(\gamma) \prod_{k=1}^N p^{(\gamma(k))}(\xi_{k}),
\end{equation}
for each $N \in \left\{ 0,..., M \right\}$ where
\begin{equation} \label{eqn_QQ}
\mathcal{Q}(\gamma) = \mathcal{Q}^*(\gamma) \prod_{k=1}^N q^{(\gamma(k))},
\end{equation}
$I(N,M)$ consists of all increasing injections $\gamma:\left\{ 1,...,N \right\} \goto \left\{ 1,...,M \right\}$, and
\begin{equation} \label{eqn_Qstar}
\mathcal{Q}^*(\gamma) = \frac{\prod_{j=1}^M (1-q^{(j)})}{\prod_{k=1}^N (1-q^{(\gamma(k))})}.
\end{equation}
\end{lemma}
\begin{proof}
Since the singleton events $D^j$ are independent, the belief function for $D = \cup_j D^j$ decomposes into $\beta_D(S) = \prod_{j=1}^M \beta_{D^j}(S)$.
Next, we employ the product rule for the set derivative (see Defn. \ref{defn_set_derivative}) to obtain the global pdf for $D$ in terms of the singleton belief functions and their first derivatives.
Higher derivatives of $\beta_{D^j}$ are zero since $D^j$ are singletons (see Remark \ref{rmk_belief_layers}).
Thus, the product rule yields first derivatives on all (ordered) subsets of the singleton belief functions:
\begin{align*}
\frac{\delta^N \beta_D}{\delta \xi_1 ... \delta \xi_N}(\emptyset) = \sum_{1 \leq j_1 \neq \cdots \neq j_N \leq M} \frac{\beta_{D^1}(\emptyset) \cdots \beta_{D^M}(\emptyset)}{\beta_{D^{j_1}}(\emptyset) \cdots \beta_{D^{j_N}}(\emptyset)}
\LB \frac{\delta \beta_{D^{j_1}}}{\delta \xi_{1}}(\emptyset) \cdots \frac{\delta \beta_{D^{j_N}}}{\delta \xi_{N}}(\emptyset)\RB.
\end{align*}
By Prop. \ref{prop_belief_layers}, we have that $\beta_{D^j}(\emptyset) = (1-q^{(j)})$ and $\dfrac{\delta \beta_{D^{j_i}}}{\delta\xi_{i}}(\emptyset) = q^{(j_i)}p^{(j_i)}(\xi_{i})$ and so
\begin{align*}
\frac{\delta^N \beta_D}{\delta \xi_1 ... \delta \xi_N}(\emptyset) = \sum_{1 \leq j_1 \neq \cdots \neq j_N \leq M}
\LB \frac{\prod_{j=1}^M (1-q^{(j)})}{\prod_{k=1}^N (1-q^{(j_k)})} \prod_{k=1}^N q^{(j_k)} \RB \prod_{k=1}^N p^{(j_k)}(\xi_{k}),
\end{align*}
which nearly resembles Eq. \eqref{eqn_combination}.
To bridge the gap, we describe the choice of indices $j_i$ by an injective function from $\left\{ 1,...,N \right\}$ into $\left\{ 1,...,M \right\}$.
In turn, each such injective function is uniquely determined by the composition of an increasing injection $\gamma \in I(N,M)$ which decides the range of the function and permutations on the domain, $\Pi_N$.
These permutations take into account the order of the range.
The value of $\mathcal{Q}$ is independent of order, and thus is determined by $\gamma$ as in Eq. \eqref{eqn_QQ}.
We reorder the product in order to shift these permutations onto the input variables, obtaining
\begin{equation} \label{eqn_precomb}
\frac{\delta^N \beta_D}{\delta \xi_1 ... \delta \xi_N}(\emptyset) = \sum_{\pi \in \Pi_N} \sum_{\gamma \in I(N, M)} \mathcal{Q}(\gamma) \prod_{k=1}^N p^{(\gamma(k))}(\xi_{\pi(k)}).
\end{equation}
Finally, the global pdf in Eq. \eqref{eqn_combination} follows directly from applying Eq. \eqref{eqn_global_pdf} to Eq.\eqref{eqn_precomb}.
\end{proof}
\begin{remark}
The global pdf in Eq. \eqref{eqn_combination}, and in particular the sum over $\gamma \in I(N,M)$, accounts for each possible combination of singleton presence.
Moreover, summing over permutations as in Eq. \eqref{eqn_precomb} and dividing by $N!$ yields a symmetric pdf with terms for every possible assignment between singletons and inputs.
The weights $\mathcal{Q}(\gamma)$ indicate the probability of each assignment occurring: each weight is the product, over all $j$, of the appropriate probability for singleton $j$ to be either present, $q^{(j)}$, or absent, $1-q^{(j)}$.
\end{remark}
\begin{example} \label{ex_double_single} \rm
Consider two 1-dimensional singleton diagrams, $D^1$ and $D^2$, with probabilities of being nonempty $q^{(1)} = 0.6$ and $q^{(2)} = 0.8$, respectively. The corresponding local densities when nonempty are given by $p^{(1)}(x) = \frac{1}{\sqrt{2\pi}} e^{-(x+1)^2/2}$ and $p^{(2)}(x) = \frac{1}{\sqrt{2\pi}} e^{-(x-1)^2/2}$.
Lemma \ref{lemma_combination} yields the global pdf for $D = D^1 \cup D^2$ through a set of local densities $\left\{} \newcommand{\RC}{\right\}}% Curly Braces { f_0, f_1(x),f_2(x,y)\RC$ such that $f_0 = \P[\Ln D \Rn = 0] = (1-q^{(1)})(1-q^{(2)}) = 0.08$, $f_1 = f_D\big{|}_{\R}$, and $f_2 = f_D\big{|}_{\R^2}$.
We sum over permutations and divide by $N!$ ($N=1,2$ is the input cardinality) to obtain a symmetric global pdf.
\begin{subequations}
\begin{align}
\begin{split}
f_1(x) &= (1-q^{(2)})q^{(1)}p^{(1)}(x) + (1-q^{(1)})q^{(2)}p^{(2)}(x) \\
&= \frac{0.12}{\sqrt{2\pi}}e^{-(x+1)^2/2} + \frac{0.32}{\sqrt{2\pi}} e^{-(x-1)^2/2} ,\label{eqn_012_A}
\end{split} \\[3mm]
\begin{split}
f_2(x,y) &= \frac{q^{(1)}q^{(2)}}{2} \LB p^{(1)}(x)p^{(2)}(y) + p^{(1)}(y)p^{(2)}(x) \RB \\
&= \frac{0.24}{2\pi} \LP e^{-((x-1)^2 + (y+1)^2)/2} + e^{-((x+1)^2 + (y-1)^2)/2} \RP. \label{eqn_012_B}
\end{split}
\end{align}
\end{subequations}
Accounting for each cardinality and following Eq. \eqref{eqn_012_A} and Eq. \eqref{eqn_012_B}, the total probability adds up to
\begin{align*}
\P[\Ln D \Rn = 0] + \P[\Ln D \Rn = 1] + \P[\Ln D \Rn = 2] &= f_0 + \int_\R f_1(x) dx + \int_{\R^2} f_2(x,y) dx dy \\
&= (0.08) + (0.12 + 0.32) + (0.24 + 0.24) = 1,
\end{align*}
as desired.
The local densities in Eq. \eqref{eqn_012_A} and Eq. \eqref{eqn_012_B} are plotted in Fig. \ref{fig_012}.
Though $f_1(x)$ is the sum of two Gaussians, in Fig. \ref{fig_012} (Left) we see that the Gaussian centered at $x=1$ dominates, while the Gaussian centered at $x=-1$ is only indicated by a heavy left tail.
This behavior occurs because the weight $(1-q^{(1)})q^{(2)} = 0.32$ of the second singleton is much larger than the weight $(1-q^{(2)})q^{(1)} = 0.12$ of the first.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.4]{ex_pdf_1F.png} \quad \quad
\includegraphics[scale = 0.4]{ex_pdf_2F.png}
\end{center}
\caption{\emph{Left:} Plot of the local density $f_1(x)$ in Eq. \eqref{eqn_012_A}. \emph{Right:} Contour plot of the local density $f_2(x,y)$ in Eq. \eqref{eqn_012_B}.
These pdfs cover the different possible input dimensions and are symmetric under permutations of the input.} \label{fig_012}
\end{figure}
\end{example}
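For concreteness, the combination rule of Lemma \ref{lemma_combination} can be sketched in a few lines of Python.
The snippet below is purely illustrative and not part of the formal development: the function name, the use of \texttt{numpy}, and the integration grid are our own choices, and the printed totals reproduce the values of Example \ref{ex_double_single}.
\begin{verbatim}
# Illustrative sketch of Eq. (eqn_combination): global pdf of a union of
# independent singleton diagrams, checked against Example (ex_double_single).
from itertools import combinations
import numpy as np

def global_pdf_singletons(q, p, xs):
    # q[j] = P[D^j nonempty]; p[j] = conditional pdf of the feature of D^j;
    # xs = ordered tuple of N inputs.  Returns the (non-symmetric) f_D(xs).
    M, N = len(q), len(xs)
    total = 0.0
    for gamma in combinations(range(M), N):      # increasing injections I(N, M)
        Q = 1.0                                  # Q(gamma) of Eq. (eqn_QQ)
        for j in range(M):
            Q *= q[j] if j in gamma else (1.0 - q[j])
        total += Q * np.prod([p[j](x) for j, x in zip(gamma, xs)])
    return total

q = [0.6, 0.8]
p = [lambda x: np.exp(-(x + 1.0) ** 2 / 2) / np.sqrt(2 * np.pi),
     lambda x: np.exp(-(x - 1.0) ** 2 / 2) / np.sqrt(2 * np.pi)]

grid = np.linspace(-8.0, 8.0, 401)
dx = grid[1] - grid[0]
f0 = global_pdf_singletons(q, p, ())                                   # 0.08
f1 = sum(global_pdf_singletons(q, p, (x,)) for x in grid) * dx         # ~0.44
f2 = sum(global_pdf_singletons(q, p, (x, y))
         for x in grid for y in grid) * dx * dx                        # ~0.48
print(round(f0 + f1 + f2, 3))                                          # ~1.0
\end{verbatim}
Summing the same evaluations over all orderings of the input and dividing by $N!$ recovers the symmetric pdf used in Fig. \ref{fig_012}.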
Now we turn toward defining the kernel density.
We first define a random persistence diagram as a union of simpler constituents, and then determine its global pdf by combination in a fashion similar to Lemma \ref{lemma_combination}.
Indeed, we define the desired kernel density as the global pdf for this composite random diagram.
To start, we fix a degree of homology $k$ and consider a center diagram $\mathscr{D} \subset \mathcal{W}_k = W \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { k \RC$ (see Eq. \eqref{eqn_wedge}).
Since $k$ is fixed, we treat $\mathscr{D} = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { \xi_i \RC_{i=1}^M = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_i,d_i) \RC_{i=1}^M$ within $W = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b,d) \in \R^2 : d > b \geq 0 \RC$.
Long persistence points in a persistence diagram represent prominent topological features which are stable under perturbation of underlying data, and so it is important to track each independently.
In contrast, we take the point of view that the small persistence features near the diagonal act collectively as a single geometric signature, rather than as individually important topological signatures.
Toward this end, features with short persistence are grouped together and interpreted through i.i.d. draws near the diagonal.
Since features cluster near the diagonal in a typical persistence diagram (see, e.g., Fig. \ref{fig_PD_UD_wob} (Right) in the supplementary materials), treating short persistence features collectively simplifies our kernel density and thus speeds up its evaluation.
It is imperative that these short persistence features are not ignored, because they still capture crucial geometric information for applications such as classification \citep{MaMa16,MaMa16-2, persissensor,fullerene,tda_phase,tda_geo_noise}.
Thus, we split $\mathscr{D}$ into upper and lower portions according to a bandwidth $\sigma$ as
\begin{equation} \label{eqn_split}
\mathscr{D}^u = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_i,d_i,k) \in \mathscr{D} : d_i-b_i \geq \sigma \RC \textrm{ and }
\mathscr{D}^\ell = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_i,d_i,k) \in \mathscr{D} : d_i-b_i < \sigma \RC.
\end{equation}
Now define random diagrams $D^u$ centered at $\mathscr{D}^u$ and $D^\ell$ centered at $\mathscr{D}^\ell$ such that $D = D^u \cup D^\ell$.
Ultimately, the global pdf of $D$ centered at $\mathscr{D}$ is our kernel density.
\begin{defn} \label{defn_upper_singletons}
Each feature $\xi_j = (b_j,d_j) \in \mathscr{D}^u$ yields an independent random singleton diagram $D^j$ defined by its chance to be nonempty $q^{(j)}$ (via Eq. \eqref{eqn_nonempty}) along with its potential position $(b,d)$ sampled according to a modified Gaussian distribution, denoted by $N^*((b_j,d_j), \sigma I)$.
The global pdf for $D^u$ is then determined by Lemma \ref{lemma_combination}, where each $p^{(j)}$ is given by the pdf associated with $N^*((b_j,d_j),\sigma I)$, which is given by
\begin{equation} \label{eqn_mod_normal}
p^{(j)}(b,d) = \dfrac{\varphi_j(b,d)}{\int_{W} \varphi_j(u,v) du \, dv} \one_{W}(b,d),
\end{equation}
where $\varphi_j$ is the pdf of the (unmodified) normal $N((b_j,d_j),\sigma I)$, and $\one_W(\cdot)$ is the indicator function for the wedge.
\end{defn}
The global pdf for each $D^j$ is readily obtained by a pair of restrictions.
First, we restrict the usual Gaussian distribution to the halfspace \hbox{$T = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b,d) \in \R^2 : b < d \RC$}.
Features sampled below the diagonal are considered to disappear from the diagram and thus we define the chance to be nonempty by
\begin{equation} \label{eqn_nonempty}
q^{(j)} = \P(D^j \neq \emptyset) = \int_{\left\{} \newcommand{\RC}{\right\}}% Curly Braces { v > u \RC} \varphi_j(u,v) \, du \, dv.
\end{equation}
Afterward, the Gaussian restricted to $T$ is further restricted to $W$ and renormalized to obtain a probability measure as in Eq. \eqref{eqn_mod_normal}.
This double restriction to both $T$ and $W$ is necessary for proper restriction of the Gaussian pdf and definition of $q^{(j)}= \P(D^j \neq \emptyset)$.
Indeed, restriction to $W$ alone causes points with small birth time to have an artificially high chance to disappear; while restriction to $T$ alone yields nonsensical features with negative radius (with $b < 0$).
In kernel density estimation, the effects of this distinction become negligible as the bandwidth goes to zero.
In practice, this distinction is important for features with small birth time relative to the bandwidth.
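To make Defn. \ref{defn_upper_singletons} concrete, the presence probability $q^{(j)}$ of Eq. \eqref{eqn_nonempty} and the modified Gaussian $p^{(j)}$ of Eq. \eqref{eqn_mod_normal} can be computed numerically as in the following illustrative Python sketch; the use of \texttt{scipy} and the grid-based approximation of the wedge mass are our own choices, not part of the construction.
\begin{verbatim}
# Illustrative sketch of Defn. (defn_upper_singletons): presence probability
# q^(j) of Eq. (eqn_nonempty) and modified Gaussian p^(j) of Eq. (eqn_mod_normal).
import numpy as np
from scipy.stats import norm

def presence_probability(bj, dj, sigma):
    # q^(j) = P[D > B] for (B, D) ~ N((b_j, d_j), sigma^2 I);
    # since D - B ~ N(d_j - b_j, 2 sigma^2), this is Phi((d_j - b_j)/(sqrt(2) sigma)).
    return norm.cdf((dj - bj) / (np.sqrt(2.0) * sigma))

def modified_gaussian_pdf(b, d, bj, dj, sigma):
    # p^(j)(b, d): the Gaussian restricted to the wedge W = {d > b >= 0} and
    # renormalized; the wedge mass is approximated on a grid (sketch assumption).
    phi = (np.exp(-((b - bj) ** 2 + (d - dj) ** 2) / (2 * sigma ** 2))
           / (2 * np.pi * sigma ** 2))
    u = np.linspace(max(0.0, bj - 6 * sigma), bj + 6 * sigma, 400)
    v = np.linspace(dj - 6 * sigma, dj + 6 * sigma, 400)
    U, V = np.meshgrid(u, v)
    vals = (np.exp(-((U - bj) ** 2 + (V - dj) ** 2) / (2 * sigma ** 2))
            / (2 * np.pi * sigma ** 2))
    wedge_mass = np.sum(vals[V > U]) * (u[1] - u[0]) * (v[1] - v[0])
    return (phi / wedge_mass) if (d > b >= 0) else 0.0

# Feature (1, 3) with sigma = 1/2, as in the later Example (ex1):
print(presence_probability(1.0, 3.0, 0.5))              # q^(1), close to 1
print(modified_gaussian_pdf(1.0, 3.0, 1.0, 3.0, 0.5))   # peak value of p^(1)
\end{verbatim}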
\begin{remark}
In the $\cech$ construction of a persistence diagram, a feature lies on the line $b = 0$ if and only if it has degree of homology $k=0$.
Consequently, for a feature $(0,d_j)$ with $k=0$, we instead take
$$ p^{(j)}(d) = \frac{\phi_j(d)}{\int_{\R^+} \phi_j(u) \, du} \one_{\R^+}(d) \textrm{ and } q^{(j)} = \int_{\R^+} \phi_j(u) \, du $$
where $\phi_j$ is the 1-dimensional Gaussian centered at $d_j$ with standard deviation $\sigma$.
\end{remark}
Whereas the large persistence features in $D^u$ have small chance to fall below the diagonal and disappear, the existence of the small persistence features in $D^\ell$ is volatile: these features disappear and appear fluidly under small changes in the underlying data.
The distribution of $D^\ell$ is described by a probability mass function (pmf) $\nu$ and lower density $p^\ell$.
\begin{defn} \label{defn_lower_cluster}
The lower random diagram $D^\ell$ is defined by choosing a cardinality $N$ according to a pmf $\nu$ followed by $N$ i.i.d. draws according to a fixed density $p^\ell$.
First, take $N_\ell = \Ln \mathscr{D}^\ell \Rn$ and define $\nu(\cdot)$ with mean $N_\ell$ and so that $\nu(n) = 0$ for $n > mN_\ell$ for some $m >0$ independent of $N_\ell$.
The subsequent density $p^\ell(b,d)$ is given by projecting the lower features $\mathscr{D}^\ell$ of the center diagram $\mathscr{D}$ onto the diagonal $b = d$, then creating a restricted Gaussian kernel density estimation for these features; specifically,
\begin{equation} \label{eqn_lower_density}
p^\ell(b,d) = \frac{1}{N_\ell} \sum_{(b_i,d_i) \in \mathscr{D}^\ell} \frac{1}{\pi\sigma^2} e^{-\LP\LP b - \frac{b_i+d_i}{2} \RP^2 + \LP d - \frac{b_i+d_i}{2} \RP^2\RP/2\sigma^2}.
\end{equation}
\end{defn}
Projecting the lower features $\mathscr{D}^\ell$ of the center diagram $\mathscr{D}$ onto the diagonal simplifies later analysis and evaluation of $p^\ell$;
without projecting, a unique normalization factor, similar to $q^{(j)}$ in Defn. \ref{defn_upper_singletons}, would be required for each Gaussian summand in Eq. \eqref{eqn_lower_density}.
By Prop. \ref{prop_belief_layers} and Eq. (\ref{eqn_global_pdf}), global pdfs of random persistence diagrams are described by a random vector pdf for each cardinality layer, resulting in the following global pdf for $D^\ell$:
\begin{equation} \label{eqn_lower_dist}
f_{D^\ell}(\xi_1,...,\xi_N) = \nu(N) \prod_{j=1}^N p^\ell(\xi_j).
\end{equation}
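As an illustration (again not part of the formal development), the lower density of Eq. \eqref{eqn_lower_density} and the layered global pdf of Eq. \eqref{eqn_lower_dist} can be sketched as follows; the pmf $\nu$ is passed in as a callable, e.g. the triangular pmf of Eq. \eqref{eqn_pmf_card} used later in the examples.
\begin{verbatim}
# Illustrative sketch of Defn. (defn_lower_cluster): lower density p^ell of
# Eq. (eqn_lower_density) and layered global pdf of Eq. (eqn_lower_dist).
import numpy as np

def lower_density(b, d, lower_features, sigma):
    # Project each lower feature onto the diagonal and average the Gaussian bumps.
    total = 0.0
    for (bi, di) in lower_features:
        m = 0.5 * (bi + di)                      # projection onto the diagonal b = d
        total += (np.exp(-((b - m) ** 2 + (d - m) ** 2) / (2 * sigma ** 2))
                  / (np.pi * sigma ** 2))
    return total / len(lower_features)

def lower_global_pdf(xs, lower_features, sigma, nu):
    # f_{D^ell}(xi_1,...,xi_N) = nu(N) * prod_j p^ell(xi_j); nu is any pmf
    # satisfying Defn. (defn_lower_cluster), supplied as a callable.
    out = nu(len(xs))
    for (b, d) in xs:
        out *= lower_density(b, d, lower_features, sigma)
    return out

# Lower features of Example (ex1) with sigma = 1/2 and the triangular pmf of
# Eq. (eqn_pmf_card) written out explicitly (an assumption of this sketch):
lower = [(1.0, 1.3), (3.0, 3.2)]
nu = lambda n: [1/9, 2/9, 3/9, 2/9, 1/9][n] if 0 <= n <= 4 else 0.0
print(lower_density(1.15, 1.15, lower, 0.5))     # ~2/pi, near a projected feature
print(lower_global_pdf([(1.1, 1.2), (3.0, 3.1)], lower, 0.5, nu))
\end{verbatim}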
Combining the expressions for $D^\ell$ and $D^u$, we arrive at the following proposition.
\begin{theorem} \label{thm_construction}
Fix a center persistence diagram $\mathscr{D}$ and bandwidth $\sigma>0$.
Split $\mathscr{D}$ into $\mathscr{D}^\ell$ and $\mathscr{D}^u$ according to Eq. \eqref{eqn_split}.
Define $D^\ell$ with global pdf from Eq. \eqref{eqn_lower_dist}, and $D^u$ with global pdf from Eq. \eqref{eqn_combination}.
Treating the random persistence diagrams $D^u$ and $D^\ell$ as independent, the kernel density centered at $\mathscr{D}$ with bandwidth $\sigma$ is given by
\begin{equation} \label{eqn_construction}
K_\sigma(Z, \mathscr{D}) = \sum_{j=0}^{N_u} \nu(N-j) \sum_{\gamma \in I(j,N_u)} \mathcal{Q}(\gamma) \prod_{k=1}^j p^{(\gamma(k))}(\xi_{k}) \prod_{k=j+1}^N p^\ell(\xi_{k}),
\end{equation}
where $Z= (\xi_1,...,\xi_N)$ is the input, $\xi_i = (b_i,d_i)$ for $i = 1, ..., N$ are the features, and $N_u = \Ln \mathscr{D}^u \Rn$ depends on both $\mathscr{D}$ and $\sigma$.
Here $\mathcal{Q}(\gamma)$ is given by Eq. \eqref{eqn_QQ}, each $p^{(j)}$ refers to the modified Gaussian pdf as shown in Eq. \eqref{eqn_mod_normal} for its matching feature $\xi_j$ in $D^u$, and $p^\ell$ is given by Eq. \eqref{eqn_lower_density}.
\end{theorem}
\begin{proof}
Since $D^u$ and $D^\ell$ are independent random persistence diagrams, the belief function decomposes into $\beta_D(S) = \beta_{D^u}(S) \beta_{D^\ell}(S)$.
Moreover, since derivatives above order $N_u$ vanish for $\beta_{D^u}$ (see Remark \ref{rmk_belief_layers}), the product rule and binomial-type counting yield
\begin{equation} \label{eqn_RND_KDE}
\begin{split}
\frac{\delta^N\beta_D}{\delta \xi_1 ... \delta \xi_N}(\emptyset) &= \sum_{j=0}^{N_u} \sum_{1\leq i_1 \neq ... \neq i_j \leq N} \frac{\delta^j \beta_{D^u}}{\delta \xi_{i_1} ... \delta \xi_{i_j}}(\emptyset) \frac{\delta^{N-j} \beta_{D^\ell}}{\delta \xi_1 ... \hat{\delta \xi_{i_1}} ... \hat{\delta \xi_{i_j}} ... \delta \xi_N}(\emptyset) \\
&= \sum_{\pi \in \Pi_N} \sum_{j=0}^{N_u} \frac{1}{j!(N-j)!} \frac{\delta^j \beta_{D^u}}{\delta \xi_{\pi(1)} ... \delta \xi_{\pi(j)}}(\emptyset) \frac{\delta^{N-j} \beta_{D^\ell}}{\delta \xi_{\pi(j+1)} ... \delta \xi_{\pi(N)}}(\emptyset)
\end{split}
\end{equation}
where $\delta\hat{\xi_i}$ indicates that the given index is skipped in the set derivative (having been allocated to the other factor).
Similar to the proof of Lemma \ref{lemma_combination}, the choice of indices $i_j$ is replaced with a permutation $\pi \in \Pi_N$;
however, the ordering within each derivative is unrelated to the choice of $i_j$, leading to $j!$-fold and $(N-j)!$-fold redundancy within each term.
Taking Eq. \eqref{eqn_lower_dist} together with Eq. \eqref{eqn_global_pdf} yields
$$\frac{\delta^{N-j} \beta_{D^\ell}}{\delta \xi_{\pi(j+1)} ... \delta \xi_{\pi(N)}}(\emptyset) = (N-j)! \, \nu(N-j) \prod_{k=j+1}^{N} p^\ell(\xi_{\pi(k)}). $$
Also, Eq. \eqref{eqn_combination} and Eq. \eqref{eqn_global_pdf} yield
$$ \frac{\delta^j \beta_{D^u}}{\delta \xi_{\pi(1)} ... \delta \xi_{\pi(j)}}(\emptyset) = \sum_{\pi^* \in \Pi_j} \sum_{\gamma \in I(j,N_u)} \mathcal{Q}(\gamma) \prod_{k=1}^j p^{(\gamma(k))}(\xi_{\pi(\pi^*(k))}). $$
We substitute these relations into the final expression of Eq. \eqref{eqn_RND_KDE}.
The first of these substitutions is straightforward, while the second introduces $j!$-fold redundant permutations on top of the existing permutations in $\Pi_N$.
These substitutions yield $\frac{\delta^N\beta_D}{\delta \xi_1 ... \delta \xi_N}(\emptyset) = \sum_{\pi\in\Pi_N} K_\sigma(Z,\mathscr{D})$ with $K_\sigma(Z,\mathscr{D})$ as described in Eq. \eqref{eqn_construction}, which shows that the kernel $K_\sigma(Z,\mathscr{D})$ satisfies the definition of a global pdf for $D$ (Defn. \ref{defn_global_pdf}).
Finally, the sum over permutations is removed according to Eq. \eqref{eqn_global_pdf} to obtain the expression for $f_D(Z) = K_\sigma(Z,\mathscr{D})$.
\end{proof}
\begin{remark} \label{rmk_KDE_addition}
A specific example of the component distributions provided for the kernel in Thm. \ref{thm_construction} is presented in Fig. \ref{heat}.
Since the kernel density $K_\sigma$ of Eq. \eqref{eqn_construction} is a probability density according to Defn. \ref{defn_global_pdf}, it is a function on $\cup_{N=0}^M \W^N$, and so the sum of several such kernels is defined by adding each local pdf layer separately.
\end{remark}
\vspace{-5mm}
\begin{remark} \label{rmk_split_reason}
Each feature in the upper random persistence diagram is described independently, while all the features in the lower random persistence diagram are described by a single density $p^\ell$.
Evaluation of the kernel density in Eq. \eqref{eqn_construction} is made rapid by factoring out the repeated (product) evaluation of $p^\ell$, a typical 2D Gaussian KDE, despite the kernel being defined on a very high-dimensional space.
Indeed, while the evaluation cost grows exponentially in the number of upper features (of which there should be few), it grows only linearly in the number of additional lower features.
Furthermore, in datasets with too many points, one typically subsamples, e.g. by min-max sampling, to reduce the computational burden of calculating the persistence diagram itself (e.g., see \citep{tda_subsample}), yielding a persistence diagram with fewer features.
\end{remark}
\vspace{-5mm}
\begin{remark} \label{rmk_split}
In the definition of our kernel, a single parameter $\sigma$ has been chosen for both the split of center diagrams, as well as the standard deviation used in the Gaussians which build our kernel.
Without loss of generality, this choice simplifies the presentation of the kernel density and the proof of kernel density estimate (KDE) convergence (Theorem \ref{thm_KDE}).
In general, the bandwidth parameter $\sigma_2$ which refers to the standard deviation used to define the Gaussians (as $\sigma$ appears in Defs. \ref{defn_upper_singletons} and \ref{defn_lower_cluster}) need not be equal to the splitting parameter $\sigma_1$ which determines which points are in $\mathscr{D}^u$ or $\mathscr{D}^\ell$ (as $\sigma$ appears in Eq. \eqref{eqn_split}).
Still, it is certainly desirable that $\sigma_1 = C \sigma_2$ when taking a limit of KDEs as the number of persistence diagrams grows to infinity (Theorem \ref{thm_KDE}).
For a fixed kernel bandwidth $\sigma_2$, increasing $C$ (and thus $\sigma_1$) moves more features into the lower portion of the diagram.
This choice may be useful in practice when underlying data are known to be noisy and more noise-related features are expected near the diagonal.
By the same token, for $\sigma_1 \gg \sigma_2$, projecting the lower features onto the diagonal may lead to significant error in the approximation.
On the other hand, taking $\sigma_1 \ll \sigma_2$ eliminates the computational benefit of splitting the diagram and is probably not useful in practice.
For most cases, taking $\sigma_1 = \sigma_2$ is a reasonable balance between KDE accuracy and evaluation computation.
\end{remark}
\vspace{-5mm}
\begin{remark} \label{rmk_computation}
Since the associations dictated by $\gamma \in I(j,N_u)$ in Thm. \ref{thm_construction} are known a priori, the calculation is embarrassingly parallelizable, and computation can be made rapid even for the evaluation of the global density function associated with a diagram $D$ with many features (see, e.g., Fig. \ref{fig_PD_UD_wob} (Right) in the supplementary materials).
Moreover, the density described in Eq. \eqref{eqn_construction} is well organized for approximate evaluation.
While Eq. \eqref{eqn_construction} is sufficient for set integration, it is not symmetric under permutations of the inputs $\xi_i$, and consequently does not represent the density at a set $\left\{} \newcommand{\RC}{\right\}}% Curly Braces { \xi_1,..., \xi_N \RC$.
A symmetric version is desirable for methods such as maximum likelihood or mode estimation \citep{bayesbook_2014}.
Indeed, a symmetric pdf is available by summing over $\Pi_N$ as per Eq. \eqref{eqn_global_pdf} to obtain the set derivative of the belief function.
At no loss of accuracy, the large sum over $\Pi_N$ need not range over all permutations; one may instead sum over compositions $\beta \circ \pi$ for $\beta \in I(j,N)$ and $\pi \in \Pi_j$ for each $j \in \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 0,...,\Ln D^u \Rn \RC$.
Since one expects that $\Ln D^u \Rn = N_u \ll N$, this reorganization significantly diminishes the number of computations.
\end{remark}
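To make the organization of Eq. \eqref{eqn_construction} explicit, the following Python sketch evaluates the (non-symmetric) kernel at an ordered input; the lists $q$ and $p$ of presence probabilities and modified Gaussians, the lower density \texttt{p\_lower}, and the pmf $\nu$ are assumed to be supplied as in the earlier sketches.  A symmetric version is obtained by averaging over permutations of the input as in Eq. \eqref{eqn_global_pdf}.
\begin{verbatim}
# Illustrative sketch of Eq. (eqn_construction): evaluate the (non-symmetric)
# kernel K_sigma(Z, D) from its upper and lower constituents.  Helper names are
# assumptions carried over from the earlier sketches, not defined in the paper.
from itertools import combinations

def kernel_density(Z, q, p, p_lower, nu):
    # Z: ordered list of input features xi_1, ..., xi_N.
    # q[j], p[j]: presence probability and modified Gaussian of upper feature j.
    # p_lower: lower density p^ell;  nu: pmf of the lower cardinality.
    N, Nu = len(Z), len(q)
    total = 0.0
    for j in range(0, min(N, Nu) + 1):            # inputs assigned to upper features
        for gamma in combinations(range(Nu), j):  # increasing injections I(j, Nu)
            Q = 1.0                               # Q(gamma) of Eq. (eqn_QQ)
            for i in range(Nu):
                Q *= q[i] if i in gamma else (1.0 - q[i])
            term = nu(N - j) * Q
            for k in range(j):                    # upper factors p^(gamma(k))(xi_k)
                term *= p[gamma[k]](Z[k])
            for k in range(j, N):                 # remaining inputs go to p^ell
                term *= p_lower(Z[k])
            total += term
    return total
\end{verbatim}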
\begin{figure}[h]
\centering{\includegraphics[width=0.3\textwidth]{Multi_Heat_PD.png} \hspace{3mm}
\includegraphics[width=0.3\textwidth]{Multi_Heat_Map.png}}
\caption{\emph{Left:} A persistence diagram split according to Eq. \eqref{eqn_split}.
The dashed black line, $d=b+\sigma$, separates the diagram into the red upper points of $\mathscr{D}^u$ and the yellow lower points of $\mathscr{D}^\ell$.
\emph{Right:} The red and blue gradients represent the upper singleton densities $p^{(1)}$ and $p^{(2)}$ given by Eq. \eqref{eqn_mod_normal}.
The green gradient represents the lower density $p^\ell$ defined in Eq. \eqref{eqn_lower_density}.
While each of these densities is defined on the wedge $W \subset \R^2$, the global kernel in Eq. \eqref{eqn_construction} is defined on $\bigcup_N W^N$ for each input-cardinality $N$.}
\label{heat}
\end{figure}
Since the kernel density is a probability density function for a random persistence diagram, it has an associated probability hypothesis density (See Defn. \ref{defn_PHD}).
\begin{cor} \label{cor_kernel_PHD}
Fix a center persistence diagram $\mathscr{D}$ and bandwidth $\sigma>0$.
Split $\mathscr{D}$ into $\mathscr{D}^\ell$ and $\mathscr{D}^u$ according to Eq. \eqref{eqn_split}.
Define $D^\ell$ with global pdf from Eq. \eqref{eqn_lower_dist}, and $D^u$ with global pdf from Eq. \eqref{eqn_combination}.
Treating the random persistence diagrams $D^u$ and $D^\ell$ as independent, the probability hypothesis density (PHD) associated with the kernel density centered at $\mathscr{D}$ with bandwidth $\sigma$ of Thm. \ref{thm_construction} is given by
\begin{equation} \label{eqn_kernel_PHD}
K_{\sigma,PHD}(\xi, \mathscr{D}) = N_\ell \, p^\ell(\xi)
+ \sum_{j=1}^{N_u} q^{(j)} p^{(j)}(\xi),
\end{equation}
where the feature $\xi$ is the input and $N_u = \Ln \mathscr{D}^u \Rn$ and $N_\ell = \Ln \mathscr{D}^\ell\Rn$ depend on both $\mathscr{D}$ and $\sigma$.
Here each $p^{(j)}$ refers to the modified Gaussian pdf as shown in Eq. \eqref{eqn_mod_normal} for its matching singleton feature $\xi_j$ in $D^u$, $q^{(j)}$ given by \eqref{eqn_nonempty} is the probability each singleton is present, and the lower density $p^\ell$ is given by Eq. \eqref{eqn_lower_density}.
\end{cor}
\begin{proof}
The PHD is uniquely defined by its integral over a region $U$, which yields the expected number of points in the region.
Consequently, the independent upper and lower random draws which build the kernel contribute additively to the PHD.
Within the sum, each singleton density $p^{(j)}$ is weighted by the chance for $D^j$ to be present, $q^{(j)}$, and the lower density $p^\ell$ is weighted according to the mean draw cardinality, which was chosen to be $\Ln \mathscr{D}^\ell \Rn$.
\end{proof}
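For completeness, the PHD of Eq. \eqref{eqn_kernel_PHD} admits an equally direct sketch; unlike the global kernel it takes a single feature as input, and integrating it over a region gives the expected number of features there.
\begin{verbatim}
# Illustrative sketch of Eq. (eqn_kernel_PHD): the PHD of the kernel density,
# evaluated at a single feature xi = (b, d).  The inputs q, p, and p_lower are
# assumed to be supplied as in the earlier sketches.
def kernel_phd(xi, q, p, p_lower, n_lower):
    # n_lower = |D^ell| weights the lower density; each upper modified Gaussian
    # p[j] is weighted by its presence probability q[j].
    out = n_lower * p_lower(xi)
    for qj, pj in zip(q, p):
        out += qj * pj(xi)
    return out
\end{verbatim}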
Notice that in Cor. \ref{cor_kernel_PHD}, the input for the PHD is a single feature $\xi$ as opposed to a list of features $Z = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { \xi_1, ..., \xi_N \RC$ for the global kernel in Thm. \ref{thm_construction}.
Furthermore, Thm. \ref{thm_construction} extends to the analogue result for a center persistence diagram with features of varied degree of homology.
\begin{cor} \label{cor_general_kernel}
Consider a persistence diagram $\mathscr{D} = \bigcup_{k=0}^{\dim-1} \mathscr{D}_k \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { k \RC$ split according to the degrees of homology with associated random persistence diagrams $D_k$ defined according to Eq. \eqref{eqn_construction} for each center diagram $\mathscr{D}_k$.
Treating each $D_k$ as independent, the full global pdf for $D = \bigcup D_k$ centered at $\mathscr{D}$ with bandwidth $\sigma$ is given by
\begin{equation} \label{eqn_general_kernel}
K_\sigma(Z,\mathscr{D}) = \Lambda(N) \prod_{k=0}^{\dim-1} K_\sigma(Z_k,\mathscr{D}_k),
\end{equation}
where $Z = \bigcup_{k=0}^{\dim-1} Z_k \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { k \RC \subset \W$ with each $Z_k \subset W$ of cardinality $\Ln Z_k \Rn = N_k$ within the multi-index $N = (N_0,...,N_{\dim-1})$ and
$$ \Lambda(N) = \frac{N!}{\Ln N \Rn !} := \frac{\prod N_k!}{\LP \sum N_k\RP !}.$$
\end{cor}
\begin{proof}
The result follows immediately from taking set derivatives of the full belief function $\beta_D(S) = \prod_k \beta_{D_k}(S)$.
In particular, the set derivatives $\frac{\delta\beta_{D_k}}{\delta Z}(\emptyset)$ are zero unless $Z \subset \mathcal{W}_k$.
Thus, the product rule leaves only the single term
$\frac{\delta\beta_{D}}{\delta Z}(\emptyset) = \prod_{k=0}^{\dim-1} \frac{\delta\beta_{D_k}}{\delta Z_k}(\emptyset)$.
In turn, each kernel global pdf $K_\sigma(Z_k,\mathscr{D}_k)$ is related to the associated belief function derivative by a sum over permutations $\Pi_{N_k}$ (see Eq. \eqref{eqn_global_pdf}).
Compositions of these permutations are $N_k!$-fold redundant against the $\Ln N \Rn !$ permutations in $\Pi_{\Ln N \Rn}$, yielding the coefficient $\Lambda(N)$.
\end{proof}
Next, to prove the convergence (to the target distribution) of the kernel density estimate defined via the kernel established in Thm. \ref{thm_construction}, we consider persistence diagrams $\left\{} \newcommand{\RC}{\right\}}% Curly Braces { \mathscr{D}_i \RC_{i=1}^n$ which are i.i.d. sampled from a target distribution with global pdf $f$.
Toward this end, we require the following assumptions on $f$:
\begin{align*}
(A&1) \,\, f(Z) = 0 \textrm{ for } \Ln Z \Rn > M \in \N \textrm{ (bounded cardinality).}\\
(A&2) \,\, \textrm{The local density } f_N:\mathcal{W}_k^N \goto \R \textrm{ is bounded for each } N \in \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 1,...,M \RC\!. \\
(A&3) \,\, \textrm{There exists } C_N >0 \textrm{ so that } f(\xi_1,...,\xi_N) \leq C_N \LN (\xi_1,...,\xi_N) \RN^{-2N} \textrm{ for each } N \in \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 1,...,M \RC.
\end{align*}
The assumptions (A1), (A2), and (A3) describe conditions on the target random persistence diagram pdf.
It is important that these assumptions also hold for a random persistence diagram associated with typical (random) underlying datasets.
For example, $(A1)$ trivially holds for underlying data in $\R^{\dim}$ of bounded cardinality.
The conditions $(A2)$ and $(A3)$ hold for underlying data sampled from a compact set $E \subset \R^{\dim}$ perturbed by Gaussian noise.
The work \citep{tda_crackle} (see Cor. 2.3 and Thm. 2.6 therein) studies the persistent homology of noise and describes a `core' neighborhood.
Specifically for Gaussian noise, features are retained in the `core', but then extreme decay occurs for features of arbitrary degree outside the `core'.
Intuitively, by bounding death values by the diameter of the underlying dataset, one expects that the decay will be at worst a polynomial times Gaussian decay, which is sufficient for (A3).
The following theorem shows that the kernel density estimate converges to the true global pdf of a random persistence diagram as the number of persistence diagrams increases.
The pdf tracks not only the birth and death of features, but also their prevalence.
In particular, the persistence diagram pdf tied to a random dataset can determine which geometric features are stable regardless of their persistence.
The proof of this theorem is delegated to the supplementary materials.
\begin{theorem} \label{thm_KDE}
Consider a random persistence diagram global pdf $f$ satisfying assumptions $(A1)$-$(A3)$.
Define the kernel $K_\sigma(Z,\mathscr{D})$ according to Thm. \ref{thm_construction} and consider the kernel density estimate $\hat{f}(Z) = \frac{1}{n} \sum_{i=1}^n K_\sigma(Z,\mathscr{D}_i)$, with centers $\mathscr{D}_i$ sampled i.i.d. according to global pdf $f$ and bandwidth $\sigma = O(n^{-\alpha})$ chosen with $0 < \alpha < \alpha_{2M}$.
Then, as $n \goto \infty$, $\hat f \goto f$ uniformly on compact subsets of $W$.
\end{theorem}
\begin{remark} \label{rmk_bandwidth}
The value of $\alpha_{2M}$ is inherited from bandwidth selection for $2M$-dimensional kernel density estimates \citep{KDE_book}.
While the scaling of the bandwidth in the limit is determined by the maximum cardinality $M$ (and thus, the largest dimension of the local pdfs), choosing a bandwidth for a specific sample is an important step in applying kernel density estimation.
If the bandwidth is too narrow, the estimate is overfitted and potentially biased;
if the bandwidth is too large, the estimate will be oversmoothed, resulting in accuracy loss.
Several methods for bandwidth selection in multivariate kernel estimation are discussed in \citep{silverman}.
As a general rule of thumb, \citep{silverman} recommends choosing the bandwidth as $\sigma_{opt} = A(K) n^{-1/({\dim}+4)}$, where $n$ is the sample size (i.e., the number of persistence diagrams), ${\dim}$ is the dimension, and $A(K)$ is a constant depending on the kernel, $K$.
In particular, one may choose $\alpha \approxeq 1/(2M+4)$ as an unbiased estimator for all local pdfs with cardinalities $m \leq M$ \citep{KDE_book}.
Silverman's rule of thumb works best for distributions which are nearly Gaussian;
for more general distributions, the bandwidth may be chosen empirically.
\end{remark}
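As a rough illustration of the scaling in this remark, a Silverman-type bandwidth can be computed from a sample of persistence diagrams as follows; the Gaussian-kernel constant and the use of a pooled coordinate spread are assumptions of this sketch, not prescriptions of the paper.
\begin{verbatim}
# Illustrative sketch of a Silverman-type bandwidth sigma ~ A * n^(-1/(dim+4))
# with dim = 2M, where M is the largest diagram cardinality among the centers.
import numpy as np

def silverman_type_bandwidth(diagrams):
    # diagrams: list of persistence diagrams, each an (n_i x 2) array of (b, d).
    n = len(diagrams)
    M = max(len(D) for D in diagrams)
    dim = 2 * M
    pts = np.vstack([np.asarray(D) for D in diagrams if len(D) > 0])
    scale = np.mean(np.std(pts, axis=0))          # pooled coordinate spread
    A = (4.0 / (dim + 2.0)) ** (1.0 / (dim + 4.0))
    return A * scale * n ** (-1.0 / (dim + 4.0))
\end{verbatim}
In practice this heuristic may be replaced by any of the multivariate selectors discussed in \citep{silverman}.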
\subsection{Examples} \label{subsect:Examples}
Here we provide detailed examples of the kernel density and kernel density estimation of an unknown pdf.
For simplicity, we restrict to a single degree of homology, say $k=1$.
Due to the intrinsic high dimension of the kernel, we present contour plots for slices of the kernel density.
Specifically, for inputs $\LP (b_1,d_1),...,(b_N,d_N) \RP$, we consider the kernel density evaluated at $(b_1,d_1) \in W$ with $(b_i,d_i)$ fixed for $i \geq 2$.
For clarity, the unique symmetric pdf $f_{sym}(\xi_1,...,\xi_N) = \frac{1}{N!} \sum_{\pi \in \Pi_N} f(\xi_{\pi(1)},...,\xi_{\pi(N)})$ is used in the contour plots (see Remark \ref{rmk_unique_symmetric}).
For explicit computation, we choose the probability mass function
\begin{equation} \label{eqn_pmf_card}
\nu(N) = \max \left\{} \newcommand{\RC}{\right\}}% Curly Braces { \frac{N_\ell + 1 - \Ln N_\ell - N \Rn}{(N_\ell + 1)^2}, 0 \RC
\end{equation}
when evaluating the lower density in Eq. \eqref{eqn_lower_dist}, where $N_\ell = \Ln \mathscr{D}^\ell \Rn$ is the lower cardinality of the center diagram.
This probability mass function is chosen to satisfy the requirements of Defn. \ref{defn_lower_cluster}, and specifically has the property that $\nu(N) > 0$ for $0 \leq N \leq 2\Ln \mathscr{D}^\ell \Rn$.
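For reference, the pmf of Eq. \eqref{eqn_pmf_card} is trivially implemented and checked as follows (an illustrative sketch).
\begin{verbatim}
# Illustrative sketch of the triangular pmf in Eq. (eqn_pmf_card).
def nu(n, n_lower):
    # Mean n_lower; positive exactly for 0 <= n <= 2 * n_lower.
    return max((n_lower + 1 - abs(n_lower - n)) / float((n_lower + 1) ** 2), 0.0)

nu2 = lambda n: nu(n, 2)      # one-argument version usable in the earlier sketches
# With n_lower = 2, as in Example (ex1): [1/9, 2/9, 3/9, 2/9, 1/9].
print([nu2(n) for n in range(5)])
\end{verbatim}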
\begin{example} \label{ex1} \rm
Consider the center persistence diagram $\mathscr{D} = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (1,3), (2,4), (1,1.3),(3,3.2) \RC \subset W$ and bandwidth $\sigma = 1/2$.
We construct the associated kernel density $K_\sigma(Z,\mathscr{D})$ according to Thm. \ref{thm_construction} and follow with some plots and analysis of the kernel density.
The random persistence diagram $D$ associated with the kernel density $K_\sigma(Z,\mathscr{D})$ has a variable number of features $N = \Ln D \Rn$;
consequently, the input diagram $Z = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { \xi_1,...,\xi_N\RC$ must have variable length and therefore the kernel density has local definitions (see Rmk. \ref{rmk_local_vs_global}) on $W^N$ for each possible input cardinality $N$.
Since each modified Gaussian $p^{(j)}$ (Defn. \ref{defn_upper_singletons}) and the lower density $p^\ell$ (Defn. \ref{defn_lower_cluster}) integrate to 1 over the wedge $W$, an expression for the probability mass function (pmf) $\P[\Ln D \Rn = N]$ can be expressed solely in terms of $\nu$ and $q^{(j)}$:
\begin{equation}
\begin{split}
\P[\Ln D \Rn = N] &= \LB q^{(1)}q^{(2)}\RB \nu(N-2) \\
&+ \LB q^{(1)}\LP 1-q^{(2)}\RP + q^{(2)}\LP 1-q^{(1)}\RP \RB \nu(N-1) \\
&+ \LB\LP 1-q^{(1)}\RP \LP 1-q^{(2)}\RP\RB \nu(N)
\end{split}
\end{equation}
The plot of this pmf is shown in Fig. \ref{pmf}.
Recall that $D = D^u \cup D^\ell$, so that $\Ln D \Rn = \Ln D^u \Rn + \Ln D^\ell \Rn$;
since $q^{(j)} \approx 1$ for $j=1,2$, $\Ln D^u \Rn = 2$ with high probability and the pmf $\P[\Ln D \Rn = N]$ is nearly the pmf for $\Ln D^\ell \Rn$, $\nu$, shifted up by 2 units.
Fig. \ref{pmf} suggests that understanding the kernel density requires investigation into higher cardinality inputs.
In general, it is important to consider input diagrams $Z$ with $\Ln Z \Rn \geq \Ln \mathscr{D}^u \Rn$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.25]{KDE_ex_pmf.png}
\end{center}
\vspace{-4mm}
\caption{Cardinality probabilities $\P[ \Ln D \Rn = N]$ for random diagram $D$ distributed according to global pdf $K_\sigma(\cdot,\mathscr{D})$ in Ex. \ref{ex1}.
In general, we have that $0 \leq \Ln D^u \Rn \leq \Ln \mathscr{D}^u \Rn$ and according to Eq. \eqref{eqn_pmf_card}, $\nu(N) \neq 0$ for $0 \leq N \leq 2 \Ln \mathscr{D}^\ell \Rn$.
Thus, the cardinality $\Ln D \Rn = \Ln D^u \Rn + \Ln D^\ell \Rn$ takes on values between $0$ and $6 = \Ln \mathscr{D}^u \Rn + 2 \Ln \mathscr{D}^\ell \Rn$.} \label{pmf}
\end{figure}
First, we describe the random diagram associated to the lower features $\mathscr{D}^\ell = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (1,1.3), (3,3.2) \RC$ of the center diagram $\mathscr{D}$.
The lower random diagram $D^\ell$ is described in Defn. \ref{defn_lower_cluster} according to a probability mass function (pmf) $\nu$ for the cardinality of $D^\ell$ and a single probability density $p^\ell(b,d)$ for the subsequent features' locations in the wedge $W$.
The pmf $\nu$ is defined according to Eq. \eqref{eqn_pmf_card} with $N_\ell = 2$; that is, $\nu(\left\{} \newcommand{\RC}{\right\}}% Curly Braces { 0,1,2,3,4 \RC) = \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 1/9, 2/9, 3/9, 2/9, 1/9 \RC$ respectively, and zero otherwise.
Following Defn. \ref{defn_lower_cluster}, we project the features of $\mathscr{D}^\ell$ onto the diagonal to obtain $\left\{} \newcommand{\RC}{\right\}}% Curly Braces { (1.15,1.15), (3.1, 3.1) \RC$.
Relying on Eq. \eqref{eqn_lower_density}, the resulting lower density, restricted to the wedge $W$, is given by
\begin{equation} \label{eqn_ex_pl}
p^\ell(b,d) = \frac{2}{\pi} \LB e^{- 2\LP (b-1.15)^2 + (d-1.15)^2 \RP} + e^{-2\LP (b-3.1)^2 + (d-3.1)^2 \RP} \RB.
\end{equation}
The coefficient $\frac{2}{\pi}$ is obtained by a direct substitution into Eq. \eqref{eqn_lower_density}.
Due to the flexible input cardinality, the kernel will be expressed and plotted separately for different input cardinalities.
For brevity, we present the local kernels on $W^N \subset \R^{2N}$ for cardinalities $N = 1, 2, 3$.
First, we consider the probability hypothesis density (or PHD, as defined in Eq. \eqref{eqn_PHD}) along with the kernel density evaluated at a single input feature in Fig. \ref{PHD_vs_1F}.
Recall that the integral of the PHD over a region $U$ yields the expected number of features in $U$ (see Defn \ref{defn_PHD}).
The kernel's corresponding PHD is a sum of Gaussians as described in Cor. \ref{cor_kernel_PHD}.
\begin{equation} \label{eqn_phd_exp}
\begin{split}
K_{\sigma,PHD}((b,d),\mathscr{D}) &= 2 p^\ell(b,d) + q^{(1)}p^{(1)}(b,d)+ q^{(2)}p^{(2)}(b,d) \\
&= 1.273 \LP e^{-2 \LP (b-3.1)^2 + (d-3.1)^2 \RP}+e^{-2 \LP (b-1.15)^2 + (d-1.15)^2 \RP} \RP \\
&\hspace{5mm} + 0.635 e^{-2 \LP (b-2)^2 + (d-4)^2 \RP} + 0.635 e^{-2 \LP (b-1)^2 + (d-3)^2 \RP}.
\end{split}
\end{equation}
Next, for input of cardinality $\Ln Z \Rn = 1$, we obtain an easily viewable 2-dimensional distribution.
Thm. \ref{thm_construction} yields the following expression:
\begin{equation} \label{eqn_pdf_f1}
\begin{split}
K_\sigma((b_1,d_1),\mathscr{D}) &= \nu(0)\LB(1 - q^{(2)})q^{(1)} p^{(1)}(b_1,d_1)
+ (1 - q^{(1)})q^{(2)} p^{(2)}(b_1,d_1) \RB \\
&\hspace{5mm} + \nu(1)\LB(1 - q^{(1)})(1 - q^{(2)})p^\ell(b_1,d_1)\RB \\
&= 7.74 \times 10^{-2} \LP e^{-2 \LP (b_1-2)^2 + (d_1-4)^2 \RP} + e^{-2 \LP (b_1-1)^2 + (d_1-3)^2 \RP} \RP \\
&\hspace{5mm} + 1.65 \times 10^{-4} p^\ell(b_1,d_1).
\end{split}
\end{equation}
The kernel is treated as a global pdf as in Defn. \ref{defn_global_pdf} and Rmk. \ref{rmk_local_vs_global}; thus, this 2-D density is only a local density for the whole kernel.
Each term is a weighted product corresponding to the combination of upper features considered (in order: $(2,4)$, $(1,3)$, or none).
Since the values of $q^{(j)}$ are very close to 1, terms which include the upper pdfs $p^{(j)}$ have much larger total mass.
Contour plots of the densities expressed in Eqs. \eqref{eqn_phd_exp} and \eqref{eqn_pdf_f1} (restricted to $W$) are respectively shown in Figs. \ref{PHD_vs_1F}(a) and \ref{PHD_vs_1F}(b).
In Fig. \ref{PHD_vs_1F}(a), the PHD indicates that in general, as many features will appear near the diagonal as will appear near the upper features.
According to the local kernel shown in Fig. \ref{PHD_vs_1F}(b), if only a single feature is present, this feature is far more likely to have long persistence.
Indeed, the kernel density is defined (see Eq. \eqref{eqn_construction}) so that the number of points near the diagonal is fluid (by our choice of $\nu$), whereas the probability of each feature in the upper diagram is nearly 1.
In essence, this demonstrates that the kernel density naturally considers features with long persistence to be stable or prominent in density estimation.
\begin{figure}[h!]
\begin{center}
\begin{multicols}{2}
\includegraphics[scale = 0.24]{KDE_ex_PHD.png} \\ (a) \\
\includegraphics[scale = 0.24]{KDE_ex_1F.png} \\ (b)
\end{multicols}
\end{center}
\vspace{-3mm}
\caption{Contour maps for (a) the probability hypothesis density associated to the kernel density (Eq. \eqref{eqn_phd_exp}) and (b) the kernel density restricted to a single input feature (Eq. \eqref{eqn_pdf_f1}).
The center diagram is indicated by red (upper) and green (lower) points.
Scale bars at the right of each plot indicate the range of probability density in each shaded region.} \label{PHD_vs_1F}
\end{figure}
Taking $Z = (\xi_1,\xi_2) = ((b_1, d_1), (b_2,d_2))$, we arrive at a more complex expression for the kernel density when considering 2 input features.
From Eq. \eqref{eqn_construction}, we obtain:
\begin{equation} \label{eqn_pdf_f2}
\begin{split}
K_\sigma((\xi_1,\xi_2), \mathscr{D}) &= \nu(0)q^{(1)}q^{(2)}p^{(1)}(b_1,d_1)p^{(2)}(b_2,d_2) \\
&\hspace{5mm} + \nu(1)\LB (1-q^{(2)})q^{(1)}p^{(1)}(b_1,d_1) + (1-q^{(1)})q^{(2)}p^{(2)}(b_1,d_1) \RB p^\ell(b_2,d_2) \\
&\hspace{5mm} + \nu(2) (1-q^{(1)})(1-q^{(2)}) p^\ell(b_1,d_1)p^\ell(b_2,d_2) \\
&= 4.5 \times 10^{-2} e^{-2 \LP (b_1-1)^2 + (d_1-3)^2 \RP}e^{-2 \LP (b_2-2)^2 + (d_2-4)^2 \RP} \\
&\hspace{5mm} + 2.11 \times 10^{-4} \LB e^{-2 \LP (b_1-2)^2 + (d_1-4)^2 \RP} + e^{-2 \LP (b_1-1)^2 + (d_1-3)^2 \RP} \RB p^\ell(b_2,d_2) \\
&\hspace{5mm} + 7.39 \times 10^{-7} p^\ell(b_1,d_1) p^\ell(b_2,d_2).
\end{split}
\end{equation}
Notice that this local kernel also decomposes into terms which describe presence of upper features: one term for both, one term for each of the two upper features, and the last term has no upper features.
Contour plots of slices of this local kernel are shown in Fig. \ref{2F};
a general description of slicing is given in Rmk. \ref{rmk_slices}.
\begin{remark} \label{rmk_slices}
Slices are used to view local pdfs defined on a high dimensional space $W^N \subset \R^{2N}$ for $N > 1$.
To obtain these slices, one fixes features $(b_j,d_j) = (b_j',d_j')$ for $j = 2,...,N$, and views the density on the corresponding hyperplane $W \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_2',d_2') \RC \times ... \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_N',d_N') \RC \subset W^N$.
In practice, the fixed features are chosen as modes of earlier (smaller $N$) slices in order to view important parts of the distribution.
We also sum over possible permutations in order to view a slice of the symmetric pdf, as was done for Ex. \ref{ex_double_single}.
\end{remark}
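A minimal sketch of this slicing procedure is given below; \texttt{kernel} stands for any callable evaluating a (non-symmetric) local pdf at an ordered tuple of features, such as the kernel sketch given after Remark \ref{rmk_computation}.
\begin{verbatim}
# Illustrative sketch of Rmk. (rmk_slices): evaluate a symmetric 2D slice of a
# local pdf of cardinality N by fixing xi_2', ..., xi_N' and averaging over
# permutations of the inputs.
from itertools import permutations
import numpy as np

def symmetric_slice(kernel, fixed, grid_b, grid_d):
    # kernel(Z): non-symmetric local pdf at an ordered list Z of (b, d) pairs;
    # fixed = [xi_2', ..., xi_N'] are the frozen features.
    out = np.zeros((len(grid_d), len(grid_b)))
    for i, d in enumerate(grid_d):
        for j, b in enumerate(grid_b):
            Z = [(b, d)] + list(fixed)
            vals = [kernel([Z[k] for k in perm])
                    for perm in permutations(range(len(Z)))]
            out[i, j] = np.mean(vals)             # (1/N!) sum over Pi_N
    return out
\end{verbatim}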
If we consider the density evaluated along slices as $K_\sigma\LP \LP(b,d),(1,3)\RP, \mathscr{D} \RP$ or $K_\sigma\LP \LP(b,d),(2,4)\RP, \mathscr{D} \RP$ (Fig. \ref{2F} (a) or (b), respectively), the restricted plot is a Gaussian centered at the other upper feature.
If the fixed feature is instead close to the diagonal, as in Fig. \ref{2F} (c), the density slice is close to a mixture between the two upper Gaussians $p^{(1)}$ and $p^{(2)}$.
\begin{figure}[h!]
\begin{center}
\begin{multicols}{3}
\includegraphics[scale = 0.22]{KDE_ex_2F_given13.png} \\ (a) \\
\includegraphics[scale = 0.22]{KDE_ex_2F_given24.png} \\ (b) \\
\includegraphics[scale = 0.22]{KDE_ex_2F_given_diag1.png} \\ (c)
\end{multicols}
\end{center}
\vspace{-3mm}
\caption{Contour maps for slices of the kernel density $K_\sigma((\xi,\xi_2'),\mathscr{D})$ with input cardinality 2.
A single feature $\xi_2'$, indicated by white crosshairs, is fixed to restrict to a 2D subspace as follows: (a) $\xi_2' = (1,3)$ (b) $\xi_2' = (2,4)$ and (c) $\xi_2' = (2.5,2.7)$.
The center diagram is indicated by red (upper) and green (lower) points.
Scale bars at the right of each plot indicate the range of probability density in each shaded region.} \label{2F}
\end{figure}
In a similar fashion, we also express the kernel density with input cardinality $\Ln Z \Rn = 3$.
Since there are only 2 upper features in $\mathscr{D}$, this and further expressions are not markedly more complicated than Eq. \eqref{eqn_pdf_f2}.
From Eq. \eqref{eqn_construction}, we obtain:
\begin{equation} \label{eqn_pdf_f3}
\begin{split}
K_\sigma((\xi_1,\xi_2,\xi_3),\mathscr{D}) &= \nu(1)\LB q^{(1)}q^{(2)}p^{(1)}(b_1,d_1)p^{(2)}(b_2,d_2) \RB p^\ell(b_3,d_3) \\
&\hspace{5mm} + \nu(2)(1-q^{(2)})q^{(1)}p^{(1)}(b_1,d_1) p^\ell(b_2,d_2) p^\ell(b_3,d_3)\\
&\hspace{5mm} + \nu(2)(1-q^{(1)})q^{(2)}p^{(2)}(b_1,d_1) p^\ell(b_2,d_2) p^\ell(b_3,d_3) \\
&\hspace{5mm} + \nu(3)(1-q^{(1)})(1-q^{(2)}) p^\ell(b_1,d_1) p^\ell(b_2,d_2) p^\ell(b_3,d_3) \\
&= \, \, 9.01 \times 10^{-2} p^\ell(b_3,d_3) e^{ -2\LP (b_1-1)^2 + (d_1-3)^2 \RP} e^{ -2\LP (b_2-2)^2 + (d_2-4)^2 \RP } \\
&\hspace{5mm} + 4.96 \times 10^{-4} p^\ell(b_2,d_2) p^\ell(b_3,d_3) e^{-2 \LP (b_1-2)^2 + (d_1-4)^2 \RP} \\
&\hspace{5mm} + 4.96 \times 10^{-4} p^\ell(b_2,d_2) p^\ell(b_3,d_3) e^{-2 \LP (b_1-1)^2 + (d_1-3)^2 \RP} \\
&\hspace{5mm} + 1.22 \times 10^{-6} p^\ell(b_1,d_1) p^\ell(b_2,d_2) p^\ell(b_3,d_3).
\end{split}
\end{equation}
One may notice that Eq. \eqref{eqn_pdf_f3} has the same 4 terms as Eq. \eqref{eqn_pdf_f2}, but with another factor of $p^\ell$ in each term.
Indeed, the local kernels for input cardinality $N = 4, 5, 6$ appear very similar as well, and with progressively more factors of $p^\ell$.
Contour plot slices of this local kernel are shown in Fig. \ref{3F}, following Rmk. \ref{rmk_slices}.
In this case, since the local pdf is defined in $W^3$, we must fix a pair of features in order to view a slice in $W \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_2',d_2') \RC \times \left\{} \newcommand{\RC}{\right\}}% Curly Braces { (b_3',d_3') \RC$.
In Eq. \eqref{eqn_pdf_f3}, the heaviest weighted term consists of both upper features' densities as well as the lower density $p^\ell(b_3,d_3)$.
Indeed, Fig. \ref{3F}(a) shows the slice $K_\sigma(((b,d),(1,3),(2,4)),\mathscr{D})$, which leaves both upper features fixed, and the resulting slice is nearly proportional to the lower density $p^\ell$.
Fig. \ref{3F} (b) shows the slice $K_\sigma(((b,d),(1,3),(2.5,3.5)),\mathscr{D})$, which fixes one of the upper features of $\mathscr{D}$ as well as a feature of moderate persistence.
This slice does not go through a mode of the local kernel, and so the geometry of the dataspace $W^3/\Pi_3$ makes the slice look multi-modal, depending on whether $(2.5,3.5)$ is assigned to $p^{(2)}$ or $p^\ell$.
Other assignments have negligible mass.
Thus, Fig. \ref{3F} (b) resembles a mixture of these two densities.
\begin{figure}[h!]
\begin{center}
\begin{multicols}{2}
\includegraphics[scale = 0.22]{KDE_ex_3F_both_upper.png} \\ (a) \\
\includegraphics[scale = 0.22]{KDE_ex_3F_left_upper.png} \\ (b)
\end{multicols}
\end{center}
\vspace{-3mm}
\caption{Contour maps for slices of the kernel density $K_\sigma((\xi,\xi_2',\xi_3'),\mathscr{D})$ with input cardinality 3.
A pair of features $\xi_2'$ and $\xi_3'$, indicated by white crosshairs, are fixed to restrict to a 2D subspace as follows: (a) $(\xi_2', \xi_3') = ((1,3),(2,4))$ and (b) $(\xi_2', \xi_3') = ((1,3),(2.5,3.5))$.
Since the symmetric version of the density is used, the order of these features is irrelevant.
The center diagram is indicated by red (upper) and green (lower) points.
Scale bars at the right of each plot indicate the range of probability density in each shaded region.}
\label{3F}
\end{figure}
The factors $(1-q^{(k)})$ within the $\mathcal{Q}^*$ expression (see Eq. \eqref{eqn_Qstar}) are very small and appear in the terms for which the corresponding upper feature is unassigned.
These factors are so small because both upper features have very long persistence in this example (four times the bandwidth), and so the terms in Eqs. \eqref{eqn_pdf_f1}, \eqref{eqn_pdf_f2}, and \eqref{eqn_pdf_f3} which do not include one or both upper Gaussians $p^{(1)}$ and $p^{(2)}$ have progressively smaller contribution to the overall local kernel.
Consequently, the kernel places much higher probability density near input diagrams with features nearby each upper feature in the center diagram.
This behavior is seen in Figs. \ref{PHD_vs_1F}, \ref{2F}, and \ref{3F} and their respective analyses, and is directly correlated to the ratio of persistence to bandwidth for each feature.
\end{example}
\begin{example} \label{ex2} \rm
Here we consider the random persistence diagram generated from a specific random dataset in $\R^2$.
Our goal in this example is to build and demonstrate convergence of the kernel density estimate for the pdf of the associated random persistence diagram.
Specifically, we generate sample datasets which each consist of 10 points sampled uniformly from the unit circle with additive Gaussian noise, $N((0,0),\LP\frac{1}{50}\RP^2I_2)$.
This toy dataset is prototypical for signal analysis (corresponding to the circular dynamics of a noisy sine curve), wherein the high dimensional point cloud is obtained through delay-embedding of the signal.
An in-depth analysis of using delay embedding alongside persistent homology is found in \citep{tda_windows}.
These datasets each yield a $\cech$ persistence diagram as described in Section \ref{sect:TDA} for degree of homology $k=1$.
A sample dataset and its associated $k=1$ persistence diagram are shown in Fig. \ref{ex2_sample_data}.
Since these datasets are sampled from the unit circle perturbed by relatively small noise, one expects the associated 1-homology to have a single persistent feature with $d \approx 1$ with possible brief features caused by noise.
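For reproducibility, the sampling scheme of this example can be sketched as follows; the snippet is our own illustration, and the optional call to the \texttt{ripser} package computes Vietoris--Rips (rather than \v Cech) persistence, so death scales differ somewhat from the values quoted in the text.
\begin{verbatim}
# Illustrative sketch of the data generation in Example (ex2): 10 points drawn
# uniformly from the unit circle with isotropic Gaussian noise of standard
# deviation 1/50.  Using ripser for the diagram is an assumption of this
# sketch; any Cech implementation could be substituted.
import numpy as np

def sample_noisy_circle(n_points=10, noise=0.02, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_points)
    X = np.column_stack([np.cos(theta), np.sin(theta)])
    return X + rng.normal(scale=noise, size=X.shape)

X = sample_noisy_circle()
try:
    from ripser import ripser                    # assumption: ripser is installed
    dgm1 = ripser(X, maxdim=1)['dgms'][1]        # degree-1 persistence diagram
    print(dgm1)                                  # typically one prominent loop
except ImportError:
    pass                                         # diagram computation is optional here
\end{verbatim}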
\begin{figure}
\begin{center}
\begin{multicols}{2}
\includegraphics[scale = 0.3]{UD_S.png} \\ (a) \\
\includegraphics[scale = 0.3]{PD_S.png} \\ (b)
\end{multicols}
\end{center}
\vspace{-3mm}
\caption{An example underlying dataset and its associated persistence diagram.
The persistence diagrams are used as the centers for the kernel density estimate.
For this example, persistence diagrams with more than one feature are relatively rare.}
\label{ex2_sample_data}
\end{figure}
\begin{table}
\begin{tabular}{|c | c | c | c | c |} \hline
KDE & (1) & (2) & (3) & (4) \\ \hline
n & 100 & 300 & 1000 & 5000 \\ \hline
$\sigma$ & 0.03 & 0.025 & 0.020 & 0.015 \\ \hline
\end{tabular} \\[2mm]
\caption{Choices of sample size $n$ (number of persistence diagrams) and bandwidth $\sigma$ for each kernel density estimate $\hat f_{n,\sigma}(Z)$ shown in Fig. \ref{fig_KDE_simp}.}
\label{table_Nsigma_simp}
\end{table}
We consider several KDEs as we simultaneously increase the number of persistence diagrams ($n$) and narrow the bandwidth ($\sigma$) as shown in Table \ref{table_Nsigma_simp}.
The bandwidth was chosen to scale according to Silverman's rule of thumb \citep{silverman} (see Rmk. \ref{rmk_bandwidth}).
Since the KDEs $\hat f_{n, \sigma}(Z)$ are defined on $\bigcup_N W^N$ for several input cardinalities $N$, we present them in multiple slices by fixing a cardinality and then fixing all but one input feature as described in Rmk \ref{rmk_slices}.
For example, $g(\xi) = \hat f_{n, \sigma}(\xi, \xi_2',...,\xi_N')$ for fixed $\xi_j'$ ($j = 2,..., N$) is a function on $W$ and represents a slice of the local KDE on $W^N$.
The progression of KDE slices can be seen in Fig. \ref{fig_KDE_simp}, wherein the same slices (i.e., the same features are fixed) are viewed for each choice of $(n,\sigma)$.
These plots demonstrate in practice the convergence of the kernel density estimator established in Theorem \ref{thm_KDE}.
Because the sample points for the underlying dataset lie so close to the unit circle, one expects the topological feature to die near scale $d = 1$, as is reflected in the KDEs shown in Fig. \ref{fig_KDE_simp} (left);
however, the distribution of points along the circle allows its birth scale to vary quite a lot.
Additional features with brief persistence are concentrated very close to the diagonal due to small noise.
These features tend to be either spurious holes near the edge (smaller $b$ and $d$) or a short split of the main topological loop in two (larger $b$ and $d$); this behavior is reflected in the two peaks for slices of the KDEs shown in Fig. \ref{fig_KDE_simp} (right).
Indeed, the persistence diagram shown in Fig. \ref{ex2_sample_data} is typical for this example.
Overall, by scanning from top to bottom, Fig. \ref{fig_KDE_simp} demonstrates the convergence of the KDEs as $n$ increases and $\sigma$ decreases.
The location and mass of each mode is as expected from underlying data sampled from the unit circle.
Moreover, very small spread in the limiting density arises from the small noise in the underlying data.
The shape and spread of each mode converges, and the densities for $n = 1000$ and $n = 5000$ are nearly the same.
\begin{figure}
\begin{multicols}{2}
\raisebox{29mm}{{\Large(1)}}\includegraphics[scale = 0.37]{SC_K100_h03_f0_max40.png} \\
\raisebox{29mm}{{\Large(2)}}\includegraphics[scale = 0.37]{SC_K300_h025_f0_max45.png} \\
\raisebox{29mm}{{\Large(3)}}\includegraphics[scale = 0.37]{SC_K1000_h02_f0_max50.png} \\
\raisebox{29mm}{{\Large(4)}}\includegraphics[scale = 0.37]{SC_K5000_h015_f0_max60.png} \\
\includegraphics[scale = 0.37]{SC_K100_h03_f1_max300.png} \\
\includegraphics[scale = 0.37]{SC_K300_h025_f1_max900.png} \\
\includegraphics[scale = 0.37]{SC_K1000_h02_f1_max900.png} \\
\includegraphics[scale = 0.37]{SC_K5000_h015_f1_max1200.png} \\
\end{multicols}
\caption{Plots of persistence diagram KDEs for Ex. \ref{ex2}.
Each plot is presented as a heat map where color indicates the probability density.
White regions above the diagonal indicate portions of very low probability density.
Each column is a particular slice, while each row is a particular global KDE with $n$ and $\sigma$ as indicated in Table \ref{table_Nsigma_simp}.
The left column shows the local KDEs $\hat f_{n,\sigma}((b,d))$ evaluated at a diagram with only one feature.
The mode of the converged density is approximately $(b_2',d_2') = (0.77,0.98)$.
The right column shows the local KDEs $\hat f_{n,\sigma}((b,d), (0.77,0.98))$ evaluated at a diagram with two features, one of which is fixed.
These slices have two modes which are very close to the diagonal at $(0,0)$ and $(1,1)$.
Overall, this figure demonstrates KDE convergence.}
\label{fig_KDE_simp}
\end{figure}
Two more examples of persistence diagram KDEs at increasing $n$ and decreasing $\sigma$, involving more complex underlying data, are given in the supplementary materials.
\end{example}
\subsection{A Measure of Dispersion} \label{subsect:KDE_MoD}
Theorem \ref{thm_KDE} has established the convergence of a kernel density estimator.
Along with density function estimation, one would like to verify the convergence of properties such as spread.
In the absence of vector space structure on the space of persistence diagrams, we turn to the bottleneck metric (Defn. \ref{defn_bottleneck}) to define a notion of spread.
Specifically, we measure dispersion with respect to a distribution of persistence diagrams through its mean absolute deviation in this metric.
\begin{defn} \label{defn_MAD}
The mean absolute bottleneck deviation (MAD) from origin diagram $\mathscr{D}$ with respect to a global pdf $f$ is given by
\begin{equation} \label{eqn_bottle_moment}
\textrm{MAD}_f(\mathscr{D}) = \int_{\W} W_\infty(\mathscr{D},Z) f(Z) \delta Z
\end{equation}
\end{defn}
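In practice, the set integral in Eq. \eqref{eqn_bottle_moment} can be approximated by Monte Carlo, since sampling a diagram from the kernel density is straightforward.  The following sketch is our own illustration: \texttt{sample\_diagram} is a hypothetical user-supplied sampler (e.g. one drawing from $K_\sigma(\cdot,\mathscr{D})$), and the \texttt{persim} package is assumed for the bottleneck distance.
\begin{verbatim}
# Illustrative Monte Carlo sketch of the MAD in Eq. (eqn_bottle_moment):
# average the bottleneck distance from the origin diagram to diagrams drawn by
# a user-supplied sampler.  persim is assumed for the bottleneck distance.
import numpy as np
from persim import bottleneck

def mad_monte_carlo(origin, sample_diagram, n_samples=1000, seed=0):
    origin = np.asarray(origin, dtype=float)
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_samples):
        Z = np.asarray(sample_diagram(rng), dtype=float).reshape(-1, 2)
        if len(Z) == 0:
            # Bottleneck distance to the empty diagram: match every feature of
            # the origin diagram to the diagonal.
            dists.append(0.5 * np.max(origin[:, 1] - origin[:, 0]))
        else:
            dists.append(bottleneck(origin, Z))
    return float(np.mean(dists))
\end{verbatim}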
The following proposition aids in proving convergence of MAD kernel estimates.
Proofs for this section are delegated to the supplementary materials.
\begin{prop} \label{prop_large_dev}
Consider $D$ distributed according to the kernel density $K_\sigma(\cdot,\mathscr{D})$ with center diagram $\mathscr{D}$ and bandwidth $\sigma$.
Fix $\delta \geq 1$. Then,
\begin{equation} \label{eqn_large_dev}
\P\LB W_\infty(D,\mathscr{D}) < \delta \sigma\RB \geq \LP \int_{B(\bm 0,\delta)} \frac{1}{2 \pi} e^{-(x^2+y^2)/2} \, dx \, dy \RP^M
\end{equation}
where $M$ is the maximal cardinality of $D$ (a multiple of $\Ln \mathscr{D} \Rn$).
Here $B(x,r)$ refers to a ball with respect to the infinity metric (as is used in bottleneck distance).
\end{prop}
Next, we relax assumption $(A2)$ by considering the entire multi-wedge $\W$ and tighten the decay control from assumption $(A3)$.
Formally,
\begin{align*}
(A&2)^* \,\, \textrm{The local density } f_N:\W^N \goto \R \textrm{ is bounded for each } N \in \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 1,...,M \RC\!. \\
(A&3)^* \,\, \textrm{There exists } C >0 \textrm{ so that } f(\xi_1,...,\xi_N) \leq C \LN (\xi_1,...,\xi_N) \RN^{-2N-2} \textrm{ for } N \in \left\{} \newcommand{\RC}{\right\}}% Curly Braces { 1,...,M \RC.
\end{align*}
These assumptions (and $(A1)$) are required for the subsequent lemma, which ensures that the mean absolute bottleneck deviation (MAD) is finite.
\begin{lemma} \label{lemma_bottleneck_control}
Consider a random persistence diagram $D$ distributed according to a global pdf $f$ satisfying assumptions $(A1)$, $(A2)^*$, and $(A3)^*$.
Then $D$ has finite MAD for any choice of origin diagram $\mathscr{D}$.
\end{lemma}
Similar to assumption $(A3)$ (given prior to Thm. \ref{thm_KDE}), $(A3)^*$ holds for a random persistence diagram associated with underlying data sampled from a compact set perturbed by Gaussian noise.
One may also replace Lemma \ref{lemma_bottleneck_control} and its assumptions by directly assuming that the maximal persistence moment is bounded; the result of the lemma then follows immediately from Eq. \eqref{eqn_minkowski} in the supplementary materials.
This assumption is weaker (it is implied by $(A1)$, $(A2)^*$, and $(A3)^*$), but may be difficult to verify directly in practice.
\begin{theorem} \label{thm_moment}
Consider a distribution of persistence diagrams with bounded global pdf, $f$, satisfying assumptions $(A1)$, $(A2)^*$, and $(A3)^*$.
Let $\hat f(Z) = \frac{1}{n} \sum_{i=1}^n K_\sigma(Z,\mathscr{D}_i)$ be a kernel density estimate with centers $\mathscr{D}_i$ sampled i.i.d. according to global pdf $f$ and bandwidth $\sigma = O(n^{-\alpha})$ chosen with $0 < \alpha < \alpha_{2M}$.
Then, the mean absolute bottleneck deviation estimate converges; in other words,
\begin{equation} \label{eqn_moment}
\int_{\W} W_\infty(\mathscr{D}_0,Z) \hat{f}(Z) \delta Z \goto \int_{\W} W_\infty(\mathscr{D}_0,Z) f(Z) \delta Z
\end{equation}
as $n \goto \infty$ for any origin diagram $\mathscr{D}_0$.
\end{theorem}
\section{Discussion and Conclusions} \label{sect:discussion}
A nonparametric approach to approximating density functions of finite random persistence diagrams has been presented.
This includes the introduction of a kernel density function, as well as proof that the kernel density itself and its mean absolute deviation converge to those of the target distribution.
Future work will investigate the convergence of powers of the absolute deviation (e.g., bottleneck variance) and deviations involving the Wasserstein metric (an $L^p$ generalization of bottleneck metric, see \citep{CompyTopo}).
Our framework is presented through the lens of geometric simplicial complexes, and in particular $\cech$ complexes.
The resulting persistence diagrams are based on underlying datasets in a metric space.
In general, one may define persistent homology for a function $f$ defined on a topological space \citep{CompyTopo}, and therefore random functions may also give rise to random persistence diagrams, see \citep{tda_fields} for an example.
A similar kernel density estimate approach can be formulated in this case, but perhaps different assumptions may be needed on the target pdf.
Our approach is fully data-driven, a necessary step since distributions of persistence diagrams were previously poorly understood.
The assumptions $(A1)$-$(A3)$, $(A2)^*$, and $(A3)^*$ are typical for kernel density estimators \citep{KDE_book}.
Similar assumptions on the underlying data are inherited by the random persistence diagram, because variation in \v Cech persistent homology is controlled by interpoint distances.
In particular, probability density decay follows the same trends as noise in the underlying data;
this is seen in Fig. \ref{fig_KDE_simp} (a) for Gaussian noise.
Thus, the kernel density estimates defined here can be reliably used for data analysis, adding a detailed tool to the methods used in topological data analysis.
In particular, this is the first result yielding probability density functions which directly analyze the full distribution information of a random persistence diagram.
For applications in machine learning such as classification, the kernel density estimates carry information for generating more sophisticated features than previously available;
e.g., the value of the global pdf at a specific input diagram, or the integral of the global pdf over a specified region.
Access to a pdf also provides a tool with which one can check for classification robustness in terms of likelihood or Bayes factors, providing a measure of the confidence in a particular outcome.
Lending credence to applicability in data analysis, an example of kernel density estimation is presented in Subsection \ref{subsect:Examples}.
In this example, underlying datasets are generated to lie on the unit circle with additive noise, a prototypical example for topological data analysis.
Our analysis yields detailed information about the distribution of diagrams, even though only two 2-dimensional slices of the kernel density estimate are shown.
This example demonstrates the convergence of the kernel density estimator in practice for large enough sample size (number of persistence diagrams).
This example, along with the supplementary examples, also demonstrates the detailed information contained in a persistence diagram KDE.
In the context of Fig. \ref{heat}, it is clear that sampling from the kernel density is straightforward, and in fact computation time scales linearly in the number of features in the center diagram $\mathscr{D}$.
In contrast, precise evaluation of the kernel global pdf at a diagram requires the more thorough computations shown in Eq. \eqref{eqn_construction}.
This evaluation is made tractable due to the separation of the center diagram into upper and lower portions: $\mathscr{D} = \mathscr{D}^u \cup \mathscr{D}^\ell$ as described in Eq. \eqref{eqn_split}.
In practice, diagrams should be split so that $\Ln \mathscr{D}^u \Rn$ is small while $\Ln \mathscr{D}^\ell \Rn$ is large.
Evaluation of individual feature pdfs on the multi-wedge $\W$ scales only quadratically in the cardinality $\Ln \mathscr{D} \Rn$, and higher-degree calculations are required only for combinatorics on the large persistence features in the upper diagram $\mathscr{D}^u$.
Consequently, these calculations are tractable so long as $\mathscr{D}^u$ does not grow too much in cardinality, while an increased cardinality for $\mathscr{D}^\ell$ has a lesser effect on computation time.
The kernel density presented here treats the small persistent features in $\mathscr{D}^\ell$ as a single group.
Since convergence (Thm. \ref{thm_KDE}) requires very little structure in the lower random diagram, it may be helpful in practice to cluster the lower portion of the center diagram and then define a random diagram centered at each cluster.
This approach somewhat complicates the expression and evaluation of the kernel density, but does not complicate sampling from the kernel density.
The goal of this approach is to more carefully capture the geometric features of the underlying random dataset, since such geometric features often correspond to briefly persistent homological features.
For example, geometric features are of paramount importance for classifying periodic signals through their delay embeddings, wherein the large persistent feature indicates periodicity and thus is expected to appear in every class.
\section*{Introduction}
Nestedness was originally proposed to uncover biogeographic meta-community patterns of occurrence of species and patterns of interaction among species in mutualistic ecological networks in which mutually beneficial interactions occur between participants of two distinct sets~\cite{atmar1993,almeida2008consistent,bascompte2003nested,bastolla2009architecture,patterson1986}. Typically, nestedness has been used to capture the extent to which more specialist species interact with proper subsets of species that, in turn, interact with more generalist ones. In ecological mutualistic networks, the nested architecture has been shown to minimize competition between species, and therefore to enable the system to foster greater biodiversity~\cite{bastolla2009architecture}. In particular, because in a nested system specialist species tend to interact only with generalist species, and because the latter tend to be less vulnerable than the former, nestedness is expected to amplify the chances of survival of rare species. A recent work, however, has suggested that decreased risk of extinction is distributed heterogeneously across the nodes of ecological networks. In particular, evidence has shown that there is a negative relationship between nodes' contributions to the nested architecture of the system and their individual survival benefits~\cite{saavedra2011strong}.
Applications of nestedness beyond biological ecosystems are not new~\cite{Erman,konig,leontief1965structure,saavedra2009simple,saavedra2011strong}. For instance, recent studies have suggested that the trajectories followed by the productive structures of countries and regions tend to be shaped by the underlying product space in which any two products are connected if they are exported by two or more countries~\cite{hidalgo2007}. Similarly, it has been suggested that the structure of the taxonomy network between products resulting from the export of countries is associated with countries' potential growth and development paths~\cite{zaccaria2014taxonomy}. Moreover, recent work has concentrated on the nestedness of economic systems to shed light on the economic geography of domestic and international trade~\cite{bustos2012dynamics}. In particular, it has been suggested that industry--location networks display a nested structure that tends to remain stable over time and can help predict the evolution of countries' product space and industrial reconfigurations. Since the diversity of products that countries export has direct bearing on economic growth~\cite{bustos2012dynamics}, understanding the underlying nested structure emerging from trade can therefore help design more effective policies aimed at strengthening and sustaining countries' economic prosperity.
Traditionally, scholars have investigated a system's nestedness by formalizing bipartite networks in which a node belonging to one group is linked with a node in the other group if there is an interaction between them~\cite{dormann2009indices,thebault2010stability,fortuna2010nestedness,jonhson2013factors,staniczenko2013ghost,cristelli2013measuring,saracco2015randomizing}. This has also been the case with most empirical studies of international trade between countries~\cite{bustos2012dynamics,Erman,konig}. Typically, in a bipartite trade network, a set of nodes represents the countries and another set includes the industries to (from) which the countries export (import)~\cite{bustos2012dynamics}. However, a bipartite network connecting countries to (exported or imported) products cannot account for the full extent of interactions that typically occur among countries in the international production network underpinning the global value chain.
Indeed, countries are traditionally involved in economic transactions within and across multiple industries. In addition, a large number of transactions can originate from, and terminate at, the same country, both within and across industries, thus contributing to the various production stages along which intermediary products are transformed into final ones~\cite{cingolani}. A bipartite network connecting countries to products would be unable to fully represent all such transactions. It neglects the possible transactions within and across industries, and does not disentangle transactions within the same country from those occurring between different countries. Even if we considered the one-mode projections of the bipartite network~\cite{saracco2015randomizing,cristelli2013measuring,mastrandrea2014reconstructing}, such as the product-to-product or country-to-country networks, we would only be able to focus on one type of interaction at a time, and in any case we would be unable to assess the assembly patterns among the various productions stages to which countries contribute. Moreover, even when trade networks are formalized as multiplex networks~\cite{menichetti}, in which the nodes are the countries and the layers are the industries in which countries trade with one another, connections between countries would only be allowed within layers, and any involvement of countries in relationships between industries would therefore be neglected.
Transactions of goods or services between industries within and across countries are related to the production stages into which the global value chain can be articulated~\cite{cingolani}. Thus, to properly evaluate the nestedness of countries with respect to production stages as well as the nestedness of production stages with respect to countries, a more comprehensive approach to trade would be needed that combines both international and domestic transactions occurring within and across the various stages of the global value chain. In this work, we take a step in this direction and investigate economic nestedness using a multi-layer representation of the worldwide production network~\cite{boccaletti,kivela}.
To build this multi-layer network, we consider the World Input-Output Database (WIOD)~\cite{timmer2015illustrated}, covering data on exchange of intermediate and final products and services among $43$ countries and $57$ different economic activities (industries) in the period from 2000 to 2014. In this multi-layer network, each economic activity is represented as a layer, and each layer is populated by the $43$ countries in our data set. Connections are directed from sellers (i.e., countries selling a product or a service) to buyers (i.e., countries purchasing a product or service). A connection is established between any two countries when there is an economic transaction between them either within the same industry (intra-layer connections) or across different economic industries (cross-layer connections)~\cite{timmer2015illustrated}. Moreover, in this multi-layer network, a given country can exchange products or services with itself when a transaction takes place from one industry to another or within the same industry, and the same country is involved both as a seller and as a buyer.
Based on the multi-layer network, to assess nestedness we construct the buyers' and sellers' participation matrices in which buyers' and sellers' involvement in transactions within and across layers can be measured. Our findings suggest that the nested structure of these matrices is similar to the one uncovered in ecological networks~\cite{atmar1993,patterson1986}. Unlike other studies of nestedness based on bipartite trade networks~\cite{bustos2012dynamics,Erman} or one-mode projections of bipartite networks~\cite{konig}, we draw on countries' involvement in the various stages of the global value chain, and investigate the nestedness of countries (with respect to production stages) and of production stages (with respect to countries). We show how values of country- and transaction-based nestedness vary over time, and distinguish between the cases in which suppliers or buyers are involved in the transactions. To assess the statistical significance of our findings, we compare the actual values of country- and transaction-based nestedness with the ones obtained using appropriate null multi-layer models in which links are reshuffled while the countries' or layers' degree distributions, respectively, are preserved. We further evaluate how individual countries and individual industries contribute to nestedness by drawing on null models in which the connections of each country or each industry, respectively, are reshuffled one at a time. We then argue in favor of the salience of our results for the study of economic stability and growth as well as the system's vulnerability to exogenous shocks. Finally, because multi-layer networks can be found across a variety of biological, technological and social systems, we discuss the implications that our proposed approach to measuring nestedness can have beyond trade, for a wide range of empirical domains.
\section*{Results}
\subsection*{The data set}
Our study draws on data from the WIOD (Release 2016) covering $28$ EU countries and $15$ other major countries in the world within the period from 2000 to 2014. For every year, a World Input-Output Table (WIOT) is provided in current prices, expressed in millions of US dollars (USD). Each table represents economic exchanges among the $56$ economic activities (industries) in each country and their respective final demand. The final demand is represented by five separate components: the final consumption expenditure, the final consumption expenditure by non-profit organizations serving households (NPISH), the final consumption expenditure by government, the gross fixed capital formation, and the changes in inventories and valuables. For the purpose of this work, the five final demand components were combined into a single aggregated component representative of all product consumption (i.e., individuals, non-profit organizations, and enterprises), capital formation, governmental expenditure, and changes in inventories.
Fig.~\ref{fig:map} shows a network representation of the aggregated economic interactions among countries in all sectors of activity in 2010. The nodes of the network in Fig.~\ref{fig:map} represent the countries, links refer to trade, and the intensity of the color of links between countries as well as their width vary as a function of the total amount of value exchanged between the connected countries across all economic activities. Finally, each node's size in Fig.~\ref{fig:map} reflects the value exchanged within the corresponding country. Indeed the international production network comprises both a domestic and a strictly international trade component. Thus, to account for this, the size of each node was made proportional to the sum of the corresponding country's internally exchanged products/services, i.e., the sum of the value of all intermediate exchanged inputs and/or consumption within the country.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{Fig1.jpg}
\caption{Network of the overall worldwide trade network in 2010. Network representation of the total amount exchanged between and within countries across all $57$ economic industries in 2010. The shade of the links between countries as well as their widths are proportional to the total amount (in millions of US dollars) exchanged between the connected countries. The darker the color, the greater the amount exchanged between the countries. Each node's size is proportional to the value exchanged and/or consumed within the corresponding country.}
\label{fig:map}
\end{figure}
Notice that, while the WIOD provides the most complete publicly available representation of trade between countries and industries on a large international scale, the data set is restricted to a limited sample of countries and industries. On the other hand, while other available data sets account for more countries and industries (e.g., the United Nations COMTRADE data), they do not provide as detailed information as the WIOD on single economic transactions, and cannot therefore be used for the analysis of the global value chain. Thus, the price of using a more detailed description of trade for studying the international production network is paid in terms of a coarse-grained description of the industrial sectors and of the incompleteness of economic transactions, which in principle may affect the results.
To address the shortcomings of using bipartite networks to study nestedness, in what follows we shall propose an approach based on a multi-layer trade network in which every layer represents an economic industry (i.e., a set of products or services classified as similar given their nature and economic function), populated by the trading countries that, unlike what happens in a multiplex network, can now trade with themselves or with other countries both within and across layers. Notice that, in what follows, we shall use the terms ``product'' (or ``service''), ``economic activity'', and ``industry'' interchangeably to refer to a single layer of the network, which in turn may represent either a single (intermediary or final) production stage at which an economic transaction can occur or the final consumption.
\subsection*{The multi-layer trade network}
Based on data from the WIOD, we start by constructing the multi-layer trade network reproducing the complex system of transactions within and between industries and within and between countries. To this end, we obtain a block matrix including: (i) $57$ diagonal sub-matrices, each referring to transactions within a single layer (i.e., 56 sub-matrices representing the economic activities shown in Table~S1 and one sub-matrix representing the sum of the five components resulting in the final demand); (ii) $3,192$ off-diagonal matrices representing transactions between pairs of distinct layers. Each of the square diagonal and off-diagonal sub-matrices has 43 rows (columns) corresponding to the countries shown in Table~S2.
Thus, each cell in the resulting block matrix provides the USD values of products and services exchanged within and across aggregated economic activities (industries) and within and across countries. The block matrix is very dense and not very informative visually. For this reason and for illustrative purposes, Fig.~\ref{fig:multilayer}A shows a simplified matrix displaying the data provided by the WIOD on a reduced scale. The matrix in Fig.~\ref{fig:multilayer}A shows the transactions among four countries $c_i$ concerned with three hypothetical industries $\alpha_i$. A cell $a^{\alpha_i}_{c_i,c_j}$ is black if there is a transaction (i.e., the USD value exchanged is different from zero) between country $i$ and country $j$ within industry $\alpha_i$, and white if there is no such transaction (i.e., value exchanged equal to zero). Notice that transactions are not symmetric, and thus the buyers (columns) and sellers (rows) are likely to play different roles in the structural organization of the worldwide production network. Fig.~\ref{fig:multilayer}B provides a visual representation of the adjacency matrix reported in Fig.~\ref{fig:multilayer}A. Notice that the three-layer network includes: (i) transactions between different countries within the same industry, represented by the intra-layer connections (e.g., transactions from Brazil to China in layer $\alpha_1$); (ii) transactions between different countries across different industries, represented by the cross-layer connections (e.g., transactions from Brazil in layer $\alpha_1$ to Spain in layer $\alpha_2$); (iii) transactions across industries involving the same country, represented by cross-layer connections departing from and pointing to the same node (e.g., transactions from layer $\alpha_1$ to layer $\alpha_2$ from and to the United States); and (iv) transactions within the same industry and involving the same country, represented by self-loops of length one (e.g., transactions in layer $\alpha_1$ from and to the United States). Notice that these self-loops are displayed as black diagonal cells in the diagonal matrices in Fig.~\ref{fig:multilayer}A, and as arrows departing from and pointing to the same node in Fig.~\ref{fig:multilayer}B.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{Fig2.jpg}
\caption{Schematic representation of a multi-layer trade network. A) The adjacency matrix containing four countries $c_i$ and three hypothetical products (layers) $\alpha_i$. The three diagonal sub-matrices represent the connections among countries in the same layer, whereas the other off-diagonal sub-matrices represent the cross-layer connections among countries. B) Visual representation of the corresponding multi-layer network built from the matrix. The cross-layer connections are represented by dashed lines and the intra-layer connections by solid ones. For both types of connections, the arrow represents the directionality of each economic exchange originating from a seller and pointing to a buyer. Self-loops of length one represent transactions occurring within the same country and the same layer.}
\label{fig:multilayer}
\end{figure}
We define a multi-layer network as a pair $M=(G,C)$, where $G=\{G_{\alpha};~\alpha~\in~\{1,\dots,k\}\}$ is a family of directed graphs $G_{\alpha}=(X_{\alpha},E_{\alpha})$ called layers of $M$, and $C$ is the set of interconnections between nodes belonging to different layers $G_{\alpha}$ and $G_{\beta}$ with $\alpha \neq \beta$. Formally,
\begin{equation}
C = \{ E_{\alpha \beta} \subseteq X_\alpha \times X_\beta;\; \alpha,\beta \in \{ 1,\dots,k\}, \alpha \neq \beta \}.
\end{equation}
The elements of $C$ are called ``cross--layer connections'', and the elements of each $E_\alpha$ are called ``intra--layer connections''. On the one hand, given a layer $G_\alpha$ corresponding to one of the $56$ economic industries or the final demand, the $N_\alpha=43$ nodes corresponding to the countries are denoted by $X_\alpha=\{ c_1^\alpha,\dots,c_{N_\alpha}^\alpha \}$, and the intra-layer adjacency matrix of each layer $G_\alpha$ will be denoted by $A^{[\alpha]}=(a_{ij}^\alpha)$, where:
\begin{equation}
a_{ij}^\alpha =
\begin{cases}
1 & \quad \text{if } (c_i^\alpha, c_j^\alpha) \in E_\alpha\\
0 & \quad \text{otherwise},\\
\end{cases}
\end{equation}
\noindent for $1 \leq i, \; j \leq N_\alpha$ and $1 \leq \alpha \leq k$. An intra-layer connection is established from node $i$ to node $j$ in layer $\alpha$ if there is at least one economic exchange from country $c_i$ to country $c_j$ in the same layer $\alpha$. Notice that $a_{ii}^\alpha=1$ would refer to transactions occurring within the same country $c_i$ and the same layer $\alpha$. On the other hand, the cross--layer adjacency matrix corresponding to $E_{\alpha \beta}$ is the matrix $A^{[\alpha,\beta]}=(a_{ij}^{\alpha \beta})$ given by:
\begin{equation}
a_{ij}^{\alpha \beta} =
\begin{cases}
1 & \quad \text{if } (c_i^\alpha, c_j^\beta) \in E_{\alpha \beta}\\
0 & \quad \text{otherwise.}\\
\end{cases}
\end{equation}
A cross-layer connection is established from node $i$ in layer $\alpha$ to node $j$ in layer $\beta$ when there is at least one economic exchange from country $c_i$ in layer $\alpha$ to country $c_j$ in layer $\beta$. Once again, $a_{ii}^{\alpha \beta}=1$ would imply that a transaction occurs within the same country $c_i$ from layer $\alpha$ to layer $\beta$.
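For illustration only, the binarization step described above can be sketched in a few lines of code; the array names, the layer-major indexing, and the stand-in random data are assumptions made for the example, not part of the analysis pipeline used in this study.
\begin{verbatim}
import numpy as np

# Hypothetical stand-in for the WIOT block matrix: flows[alpha, beta, i, j]
# is the USD value sold by country i in layer alpha to country j in layer
# beta (alpha == beta gives intra-layer exchange, i == j domestic exchange).
N_C, N_L = 43, 57
rng = np.random.default_rng(0)
flows = rng.exponential(size=(N_L, N_L, N_C, N_C))

# Binary adjacency: a link exists whenever a positive value is exchanged.
A = (flows > 0).astype(int)

A_intra = A[np.arange(N_L), np.arange(N_L)]  # the 57 intra-layer matrices A^[alpha]
A_cross_01 = A[0, 1]                         # e.g. the cross-layer matrix A^[0,1]
print(A_intra.shape, A_cross_01.shape)       # (57, 43, 43) (43, 43)
\end{verbatim}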
\subsection*{Nestedness of countries and products}
In economic systems, nestedness is akin to maximal possible diversification subject to the constraints of international competition. For instance, in the simplified case of international trade between countries with no domestic intra- and inter-layer exchange, an economic system can be regarded as nested when a number of countries export (import) a proper subset of the products exported (imported) by other countries, which in turn export (import) a (larger) proper subset of the products exported (imported) by other countries, and so forth (see Fig.~\ref{fig:bipartite}). Countries can, therefore, be hierarchically organized into progressively richer levels such that as countries move from an inner to an outer level they are involved in the trade of more products. Notice that, unlike the simplified case of international trade in this example, our multi-layer perspective enables us to capture the intricacies of both domestic and international trade as well as countries' involvement in multiple stages of the global value chain.
Moreover, a nested structure of the bipartite country-product network would imply that countries in outer levels (e.g., country $c_4$ in Fig.~\ref{fig:bipartite}) are less similar to countries in inner levels (e.g., country $c_1$ in Fig.~\ref{fig:bipartite}) with respect to the products traded than vice versa~\cite{tversky}. As a result, the countries belonging to the core of the system are those associated with the largest degree of similarity to all other countries with respect to the products exported (imported). Yet, while the countries in the core of the system are connected to fewer products than the countries in outer levels, the former do not necessarily concentrate their exports (imports) on any of the products they trade. Indeed their export (import) profile may be characterized by a homogeneous distribution of trade across a (relatively small) number of products. On the other hand, the countries in the outermost level are those with the most diverse trade profile in the system. That is, they are connected to all products traded in the system, and among these products there is at least one of which they are the sole traders. However, while these countries in the outermost level are connected to more products than the countries in inner levels, they do not necessarily spread their efforts uniformly across products. Indeed, among the many products they trade, they may well concentrate most of their economic transactions on a select minority of them.
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\textwidth]{Fig3.pdf}
\caption{Nestedness of country-product connections. A) Matrix representation of the connections between countries and exported products ordered by degree centrality. B) Bipartite network of country-product connections. Countries are hierarchically organized starting from the one ($c_1$) that exports products that all other countries export. Notice that such bipartite network representation does not distinguish between transactions occurring within the same country and transactions involving different countries. This network representation is also unable to distinguish between transactions across different industries and transactions within the same industry, and therefore does not capture the full organization of the global value chain.}
\label{fig:bipartite}
\end{figure}
The idea of hierarchically organizing the elements of an economic system into progressively richer subsets can also be applied to (imported or exported) products. In particular, the core products of a system are the smallest number of products imported (exported) by the largest number of countries. In turn, the core products represent a proper subset of the products imported (exported) by other countries, and so forth up to the final set of products imported (exported) by the countries in the outermost level. As products move from an inner to an outer level, they are traded by fewer countries and only in larger combinations with other products. At one extreme, an economy may be organized in such a way that all products except one are traded by all countries, while the remaining product is traded only by one country. In this case, the only product controlled by one country occupies the outer level, while all others belong to the core (most nested part) of the system.
Core products, being the ones with the highest country-level substitutability, are likely to be based on the most widespread know-how, technologies, and competences. Less nested and more ``peripheral'' products, on the other hand, are likely to be more country-specific and characterized by lower degrees of country-level substitutability. It may be speculated that, while a low degree of product nestedness may secure high returns to the producing countries, it may also prompt a large degree of market instability should an external shock affect the few countries that are the sole suppliers of the product.
\subsection*{Nestedness in the multi-layer network}
We now extend the notion of nestedness so far discussed in connection with a bipartite network to account for the complexity of a multi-layer network in which transactions can be both domestic and international and can originate from, and point to, different industries within different countries. More specifically, unlike the simplified case of the bipartite country-product network, in a multi-layer trade network a given country is not simply connected to products but, more properly, it buys or sells products within specific transactions that take place from an industry to another or even within the same industry. Moreover, these transactions may have different countries as suppliers and buyers, or they may even occur within the same countries. The multi-layer perspective, therefore, enables us to shift focus from products (industries) to transactions, and to draw on these transactions to evaluate the nestedness of both countries and production stages within the global value chain. For instance, the core suppliers can be defined as the largest set of countries involved as sellers in the smallest number of production stages in which all other countries are involved as suppliers. By contrast, the supplier in the outermost level would be the one involved as a seller in at least one production stage in which no other country is involved. Similarly, we can assess the nestedness of production stages in the global value chain, and measure the degree to which country-poor production stages are proper subsets of country-rich ones.
Before we can compute nestedness using the WIOD, we need to build a matrix, called the \textit{participation matrix}, in which rows are the countries and columns are all possible \textit{ordered combinations of any two layers} (i.e., production stages or economic transactions between or within industries). This means that, for every year, we have a participation matrix with $43$ rows and $57 \times 57 = 3,249$ columns. Moreover, in each layer, we can distinguish between buying countries, i.e., countries with incoming links, and selling countries, i.e., countries with outgoing links. Thus, for every year we built two matrices -- the buyers' and the sellers' participation matrices -- where a generic element, $B_{c_i,\alpha\beta}$ or $S_{c_i,\alpha\beta}$, is equal to $1$ if there is a link, respectively, ending at or starting from country $c_i$ in layer $\alpha$ and starting from or ending at layer $\beta$, and zero otherwise. Figs.~\ref{fig:nestedness}A and B show the buyers' and sellers' participation matrices, respectively, in which the black color refers to a value of $1$ in the corresponding matrix, while the white color refers to zero.
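The participation matrices can then be obtained from the layer-pair adjacency structure by a logical reduction over the counterpart country. The sketch below uses the same illustrative convention as the previous sketch (with \texttt{A[alpha, beta, i, j]} $=1$ iff country $i$ in layer $\alpha$ sells to country $j$ in layer $\beta$); it is one possible implementation, not the code used in this study.
\begin{verbatim}
import numpy as np

N_C, N_L = 43, 57
rng = np.random.default_rng(1)
# Stand-in adjacency: A[alpha, beta, i, j] = 1 iff country i in layer alpha
# sells to country j in layer beta (same convention as the previous sketch).
A = rng.random((N_L, N_L, N_C, N_C)) < 0.3

# Sellers: S[i, alpha*N_L + beta] = 1 iff country i in layer alpha has at
# least one outgoing link terminating in layer beta (at any country).
S = A.any(axis=3).transpose(2, 0, 1).reshape(N_C, N_L * N_L).astype(int)

# Buyers: B[i, alpha*N_L + beta] = 1 iff country i in layer alpha has at
# least one incoming link originating in layer beta.
B = A.any(axis=2).transpose(2, 1, 0).reshape(N_C, N_L * N_L).astype(int)

print(B.shape, S.shape)   # (43, 3249) (43, 3249)
\end{verbatim}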
\begin{figure}[!ht]
\centering
\includegraphics[width=0.92\textwidth]{Fig4.pdf}
\caption{Nested organization of buyers' and sellers' participation matrices. Buyers' (A) and sellers' (B) participation matrices in 2010. A generic element $B_{c,\alpha\beta}$ ($S_{c,\alpha\beta}$) of the matrix is equal to $1$ (black square) if there is a link ending (starting) at (from) country $c_i$ in layer $\alpha$ and starting (ending) from (at) layer $\beta$, and is equal to zero otherwise (white square). In a perfectly nested matrix, the black cells should fill in the upper triangular portion of the matrix (above the secondary diagonal) and the white cells should lie in the lower one (below the secondary diagonal). (C-D) Nestedness calculated over the years from 2000 to 2014 for buyers' (C) and sellers' (D) participation matrices.}
\label{fig:nestedness}
\end{figure}
Drawing on the buyers' and sellers' participation matrices, we computed the nestedness of the multi-layer network. To this end, first we reordered the columns and rows by decreasing degree centrality, i.e., respectively, the number of countries participating in transactions, and the number of transactions in which countries are involved~\cite{jonhson2013factors,dominguez2015ranking}. We then computed nestedness by using the measure proposed by Almeida-Neto \textit{et al.}~\cite{almeida2008consistent}, referred to as \textit{NODF} and here denoted by $N$. More specifically, this measure $N$ is based on two properties: decreasing fill ($DF$) and paired overlap ($PO$). Let us suppose that the above-defined participation matrices have $n$ rows and $m$ columns and consider a pair of rows $(i,j)$ such that $i < j$ and a pair of columns $(k,l)$ with $k < l$. Let $MT$ be the marginal total (i.e., the sum of ones) of any column or row. For any pair $(i,j)$ of rows, $DF_{ij}$ is defined as equal to $100$ if $MT_j < MT_i$ and zero otherwise. Similarly, for any pair of columns $(k,l)$, $DF_{kl}$ is equal to $100$ if $MT_l < MT_k$ and zero otherwise.\\
The paired overlap can be computed as follows. For any pair of columns $(k,l)$ such that $k < l$, $PO_{kl}$ is the percentage of ones in column $l$ that are located at the same row positions as the ones in column $k$. Similarly, for any two rows $(i,j)$ such that $i < j$, $PO_{ij}$ is the percentage of ones in row $j$ that are located at the same column positions as the ones in row $i$. Formally, given any left-to-right column pair and any up-to-down row pair, the degree of paired nestedness ($N_{paired}$) is defined as
\begin{equation}
N_{paired} =
\begin{cases}
0 & \text{if } DF_{paired}=0; \\
PO & \text{if } DF_{paired}=100.
\end{cases}
\end{equation}
The measure of row (column) nestedness $N_{row}$ ($N_{col}$) is calculated by averaging all values of paired row (column) nestedness. Notice that the total number of values of row and column paired nestedness for a matrix with $n$ rows and $m$ columns is $n(n-1)/2$ and $m(m-1)/2$, respectively. Thus, we define the nestedness of the whole matrix as
\begin{equation}
\centering
N = \frac{\sum N_{paired}}{\frac{n(n-1)}{2} + \frac{m(m-1)}{2}}.
\end{equation}
Defined in this way, the values of nestedness range between $0$ and $100$.
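For concreteness, a minimal and unoptimized implementation of this NODF measure is sketched below. It assumes, as in the procedure described above, that the binary matrix has already been ordered by decreasing marginal totals; it is meant only to illustrate the definition, not to replicate the exact implementation used for the analysis.
\begin{verbatim}
import numpy as np
from itertools import combinations

def nodf(M):
    """NODF nestedness of a binary matrix M, on a 0-100 scale.

    Rows and columns are assumed to be ordered by decreasing marginal totals.
    """
    M = np.asarray(M, dtype=bool)
    n, m = M.shape

    def paired(vectors):
        total = 0.0
        mt = vectors.sum(axis=1)                  # marginal totals
        for a, b in combinations(range(len(vectors)), 2):
            if mt[b] < mt[a] and mt[b] > 0:       # decreasing fill
                # paired overlap: share of ones in the poorer vector that
                # also appear in the richer one
                total += 100.0 * (vectors[a] & vectors[b]).sum() / mt[b]
        return total

    pairs = n * (n - 1) / 2 + m * (m - 1) / 2
    return (paired(M) + paired(M.T)) / pairs

# A perfectly nested 4 x 4 matrix gives NODF = 100.
perfect = np.array([[1, 1, 1, 1],
                    [1, 1, 1, 0],
                    [1, 1, 0, 0],
                    [1, 0, 0, 0]])
print(round(nodf(perfect), 1))   # 100.0
\end{verbatim}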
In our case, the total nestedness among rows is a measure of nestedness of countries with respect to production stages (i.e., economic transactions here defined as combinations of products or industries). It refers to the degree to which a subset of countries are involved in transactions between industries that are a proper subset of the more diverse transactions in which other countries are involved, and so forth. That is, the countries controlling country-poor stages of production (or transactions) constitute proper subsets of the countries involved in country-rich stages of production. Thus, in a perfectly (country-based) nested economic system, the core country would be the one involved in the smallest set of transactions within the international production network in which all other countries are involved. The most peripheral country lying in the outermost level, by contrast, would be the one involved in all transactions in which all other countries are involved and in at least one transaction in which no other country is involved. Notice that to have a perfectly (country-based) nested economic system, no pair of countries can be involved in the same (number of) production stages; yet, any pair of countries may differ by more than one associated production stage.
On the other hand, nestedness among columns quantifies the nestedness of stages of production or economic transactions with respect to countries. This refers to the degree to which a number of stages of production involve countries that constitute a proper subset of the countries involved in other production stages, and so forth up to the stage at which all countries are involved. Similarly, in a perfectly (transaction-based) nested system, the core production stage would be the one in which all countries are involved, while the most peripheral transaction lying in the outermost level would be the one controlled only by one country. Once again, to have a perfectly (transaction-based) nested system, no pair of production stages or transactions can be controlled by the same (number of) countries; yet, any two transactions may differ by more than one involved country.
Finally, the matrix nestedness is a measure of the nestedness of the whole multi-layer trade network in a given year. It thus combines country-based and transaction-based nestedness. In this sense, a perfectly nested economic system would be both perfectly country-based and transaction-based nested. Thus, no pair of countries can be involved in the same (number of) production stages, and no pair of production stages can be controlled by the same (number of) countries. In addition, any two adjacent rows (countries) may differ only by one production stage, and any two adjacent columns (production stages) may differ by only one country.
In our multi-layer network, each country can be the buyer or the supplier in each production stage, and therefore the measures of nestedness outlined above can be computed both for buying and for selling countries, in each year. Buyers' country-based nestedness refers to the degree to which the countries that act as buyers (of intermediary or final products) in buyer-poor stages of production constitute proper subsets of the countries that act as buyers in buyer-rich stages of production. Similarly, sellers' country-based nestedness refers to the degree to which the countries that act as suppliers in seller-poor stages of production constitute proper subsets of the countries acting as suppliers in seller-rich production stages. Finally, transaction-based nestedness from the buyers' (sellers') perspective refers to the degree to which stages of production in which few buyers (suppliers) are involved constitute proper subsets of the production stages in which more buyers (suppliers) are involved.
Fig.~\ref{fig:nestedness} shows the evolution over time of both country-based and transaction-based nestedness from the perspective of both buyers (Fig.~\ref{fig:nestedness}C) and sellers (Fig.~\ref{fig:nestedness}D). Results suggest that, while sellers' country-based nestedness remained constant over the years, buyers' country-based nestedness fluctuated between 2003 and 2011. Moreover, sellers' nestedness remained higher than buyers' nestedness constantly over the years. In particular, sellers' country-based nestedness was remarkably higher than buyers' country-based nestedness, thus suggesting a more structured organization of countries' involvement in production stages in which countries acted as sellers of intermediate and final products than in stages where countries acted as buyers. Finally, it is worth noting that the ordering between country-based nestedness and transaction-based nestedness varies depending on the role of countries as buyers or sellers. When countries participated in transactions as buyers, the nestedness of production stages with respect to countries was larger than the nestedness of countries with respect to production stages. Vice versa when countries acted as sellers. The structural organization of countries and productions stages was therefore affected by the nature of the economic transaction.
It is worth noting that even matrices with random entries and optimally reordered rows and columns can exhibit some degree of nestedness~\cite{dominguez2015ranking}. This is especially the case of matrices populated by a large number of ones, since density amplifies the probability of overlapping rows/columns. It is therefore essential to assess whether the values of nestedness computed with the real data statistically significantly differ from the ones that could be obtained using matrices with random entries. To this end, an appropriate null model for the multi-layer network is needed, on which nestedness can be calculated.
A full randomization of the matrix would not represent an appropriate null model, because it would completely destroy the countries' degree distribution in the multi-layer network as well as the degrees of countries in each layer. Besides, the number of connections between pairs of layers would not remain unchanged. On the one hand, an appropriate null model for country-based nestedness would preserve the degree of each country in each layer and across the whole network. On the other, an appropriate null model for transaction-based nestedness would preserve the number of connections linking each pair of layers, i.e., the number of connections linking each layer to itself and to other layers.
Here, we propose two null models that satisfy the above requirements. Fig.~S1 shows a simple example of how the two null models were constructed for testing sellers' country- and transaction-based nestedness. Following~[\citenum{saavedra2011strong}], the first null model (\textit{Model I}) aims to provide a benchmark for assessing the statistical significance of country-based nestedness. For each country $c_i$ and each layer $\alpha$, \textit{Model I} keeps unchanged the number of connections pointing to (for buyers) or departing from (for sellers) country $c_i$ in layer $\alpha$, but randomly reshuffles the layers from (at) which these connections originate (terminate). That is, a given country $c_i$ in layer $\alpha$ will remain involved in the same number of transactions pointing to (departing from) $\alpha$, but these transactions will originate from (terminate at) randomly chosen layers among the $57$ ones (i.e., layer $\alpha$ itself and the remaining others). In terms of the participation matrix, this is equivalent to reshuffling the ones in each row by blocks of columns that share the same importing (exporting) layer. By replicating this procedure for every row, the resulting matrix preserves the global degree of each node as well as the degree of each node in each layer (i.e., the number of links arriving at (departing from) each node in each layer). In summary, \textit{Model I} reshuffles both inter-layer and intra-layer connections while preserving the countries' degree distribution in each layer and across the whole multi-layer network.
The second null model (\textit{Model II}) aims to provide a benchmark for assessing the statistical significance of transaction-based nestedness. To this end, \textit{Model II} randomly assigns countries to production stages, while maintaining the same (number of) connections between pairs of layers (including connections of each layer with itself) and the same countries' degree distributions within each layer as in the real multi-layer network (i.e., it preserves the in-degree distributions for buyers and the out-degree distributions for sellers in each layer, but not across the whole network). To construct such model, for each layer $\alpha$, we kept the connections linking $\alpha$ to itself and to all other layers, but reshuffled the countries at (from) which these connections terminated (originated). That is, the connections to (from) layer $\alpha$ were randomly assigned to countries. In terms of the participation matrix, this is equivalent to swapping rows by blocks of columns defined by the same layer at (from) which connections terminate (depart). For instance, given layer $\alpha$, a block of columns, $\boldsymbol B_{\alpha}$ or $\boldsymbol S_{\alpha}$ for buyers or sellers, respectively, is defined by all pairs of layers exporting to (buyers) or importing from (sellers) layer $\alpha$ (i.e., \{$\alpha\alpha,\alpha\beta, \dots, \alpha\omega$\}). Given any two rows associated with countries $c_i$ and $c_j$, random assignment of countries to transactions would then be obtained by swapping the entire entries of row $c_i$ with the entries of row $c_j$ within block $\boldsymbol B_{\alpha}$ or $\boldsymbol S_{\alpha}$ (i.e., by reassigning to another country the whole set of participations of a given country in transactions in a given block of columns).
In summary, \textit{Model II} randomly reassigns countries to blocks of transactions while preserving the countries' in- or out-degree distribution in each layer (i.e., the distribution of connections arriving at, or departing from, countries in each layer) and the layers' degree distribution (i.e., the distribution of connections between pairs of layers).
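A minimal sketch of the two reshuffling schemes, acting directly on a sellers' participation matrix, is given below. The column ordering (blocks of $57$ consecutive columns sharing the same origin layer), the array names, and the stand-in data are illustrative assumptions; the same logic applies to the buyers' matrix after exchanging the roles of origin and destination layers.
\begin{verbatim}
import numpy as np

N_C, N_L = 43, 57
rng = np.random.default_rng(2)
S = (rng.random((N_C, N_L * N_L)) < 0.4).astype(int)  # stand-in participation matrix

def model_I(S, rng):
    """For every country and origin layer, permute which destination layers
    its connections reach (preserves each country's degree in each layer)."""
    out = S.copy()
    for c in range(N_C):
        for a in range(N_L):
            block = slice(a * N_L, (a + 1) * N_L)
            out[c, block] = rng.permutation(out[c, block])
    return out

def model_II(S, rng):
    """Permute country labels within every origin-layer block (preserves the
    number of connections between each pair of layers)."""
    out = S.copy()
    for a in range(N_L):
        block = slice(a * N_L, (a + 1) * N_L)
        out[:, block] = out[rng.permutation(N_C), block]
    return out

S1, S2 = model_I(S, rng), model_II(S, rng)
# Model I preserves each country's number of connections per origin layer:
assert (S.reshape(N_C, N_L, N_L).sum(2) == S1.reshape(N_C, N_L, N_L).sum(2)).all()
\end{verbatim}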
Drawing on the above null models, we generated an ensemble of matrices, both for buyers and for sellers, to evaluate whether nestedness measured using the real data is statistically significantly different from the values one would expect by chance (see methods). Fig.~S2 shows the evolution of country-, transaction-based and total nestedness compared with the values obtained using the appropriate null models. In all cases, nestedness computed in the null models is smaller than nestedness found in the real data (at the $5\%$ significance level). Notice that even in the case of sellers, country-based nestedness appears very close to, and yet remains statistically significantly different from, the values one would expect by chance on a comparable multi-layer network with the same countries' degree distributions within each layer and globally as in the real network.
To further explore the evolution of nestedness, we computed the growth rate of nestedness defined as
\begin{equation}
G(t+\Delta t) = \frac{N(t+\Delta t)-N(t)}{N(t)},
\end{equation}
where $N(t)$ is nestedness in a given year $t$ and $\Delta t=1$. Fig.~S3A shows the evolution over time of the growth rate of nestedness for buyers. We can observe an oscillatory behavior with a $2$-year period, as well as a remarkable increase in 2005 and a decline in 2008. On the other hand, the growth rate of nestedness for sellers (Fig.~S3B) displays a less clear oscillatory trend, with only small deviations from zero (Fig.~S3B).
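In code, this growth rate is a one-line operation; the values below are placeholders rather than the measured series.
\begin{verbatim}
import numpy as np

N = np.array([78.2, 78.9, 77.5, 79.1, 78.8])  # hypothetical yearly nestedness
G = np.diff(N) / N[:-1]                       # G(t+1) = (N(t+1) - N(t)) / N(t)
print(np.round(G, 4))
\end{verbatim}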
Next, we investigated whether variations in nestedness are associated with economic downturns and, more generally, with the global economic performance of countries. There is a growing body of evidence suggesting that countries' involvement in global value chains is associated with their economic growth and productivity \cite{OECD, worldbank}. Here we relied on our measures of nestedness to capture countries' participation in the international production network. We used data from the World Bank~\cite{WorldBank2017}, and computed the sum of the Gross Domestic Product (GDP) in USD at constant price (2010) of all countries included in our data set. We then examined the relationship between aggregated GDP and the nestedness of the worldwide trade multi-layer network over time. Fig.~S3C shows the buyers' total nestedness as well as country-based and transaction-based nestedness as a function of the GDP. Fig.~S3D, on the other hand, shows the sellers' total, country-based and transaction-based nestedness as a function of GDP. To quantify the association between nestedness $N(t)$ and total GDP, we evaluated the Pearson correlation coefficient, $\rho$, and its $95\%$ confidence intervals (CI) using bootstrapping~\cite{efron1994introduction} (see Table~\ref{t:regression}). Findings suggest that, except for buyers' transaction-based and total nestedness, there is a statistically significant and negative relationship between nestedness and GDP.
More formally, a simple linear regression model of the relationship between nestedness and GDP in year $t$, $N(t)$ and $GDP(t)$ respectively, can be written as
\begin{eqnarray}\label{eq:nestgdp}
N(t) = \gamma+\beta\,\log_{10}\text{GDP}(t)+\epsilon(t),
\end{eqnarray}
where $\epsilon(t)$ is the residual error term for year $t$ (assumed to be independent of the residuals for other years), and the parameters $\gamma$ (the intercept) and $\beta$ (the regression coefficient) can be estimated through ordinary least-squares (OLS) estimation. The estimated parameters and corresponding standard errors, $p-$values, coefficients of determination $R^2$, and Pearson correlation coefficients $\rho$ for sellers' and buyers' country-based nestedness ($N_c$), transaction-based nestedness ($N_t$) and total nestedness ($N_{total}$) are shown in Table~\ref{t:regression}. The estimated curves are displayed in Fig.~S3.
\begin{table}[h]
\centering
\caption{OLS estimates from regression models and Pearson correlation coefficients.}
\label{t:regression}
\begin{tabular}{lrrrrrr}
Model & $\gamma$ & $\beta$ & Std. error & $p-$value & $R^2$ & $\rho$ [$95\%$ CI] \\
\hline
Buyers' $N_{\text{total}}$ vs. GDP & 47 & 1.16 & 0.50 & 0.03 & 0.53 & 0.29 [0.05,0.79] \\
Buyers' $N_{\text{c}}$ vs. GDP & 319 & -18.79 & 7.27 & 0.02 & 0.33 & -0.60 [-0.83,-0.24] \\
Buyers' $N_{\text{t}}$ vs. GDP & 50 & 1.25 & 0.54 & 0.03 & 0.29 & 0.37 [0.08, 0.79]\\
Sellers' $N_{\text{total}}$ vs. GDP & 173 & -7.71 & 1.60& 0.0003 & 0.64 & -0.71 [-0.91,-0.58] \\
Sellers' $N_{\text{c}}$ vs. GDP & 251 & -11.28 & 3.20& 0.004 & 0.48 & -0.75 [-0.85,-0.53] \\
Sellers' $N_{\text{t}}$ vs. GDP & 186 & -8.27 & 1.72 & 0.0003 & 0.63 & -0.86 [-0.92, -0.58] \\
\hline
\end{tabular}
\end{table}
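The OLS fit of Eq. \eqref{eq:nestgdp} can be reproduced with standard numerical routines; in the sketch below the nestedness and GDP series are made-up placeholders, not the values underlying Table~\ref{t:regression}.
\begin{verbatim}
import numpy as np

years = np.arange(2000, 2015)
rng = np.random.default_rng(3)
N_t = 80 - 0.5 * (years - 2000) + rng.normal(0, 0.4, years.size)  # placeholder nestedness
gdp = 5.0e13 * 1.02 ** (years - 2000)                             # placeholder GDP (USD)

X = np.column_stack([np.ones(years.size), np.log10(gdp)])  # design matrix [1, log10 GDP]
(gamma, beta), *_ = np.linalg.lstsq(X, N_t, rcond=None)    # OLS estimates
resid = N_t - X @ np.array([gamma, beta])
r2 = 1.0 - resid.var() / N_t.var()
print(f"gamma = {gamma:.1f}, beta = {beta:.2f}, R^2 = {r2:.2f}")
\end{verbatim}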
Overall, these findings suggest that, as GDP increases, nestedness is expected to decline, and vice versa. It may be speculated that the observed negative association between nestedness and GDP is a reflection of the disorder induced in the system by the countries' freedom and ability to tap more economic opportunities and achieve a better allocation of resources. In this sense, higher economic prosperity can be achieved at the expense of an ordered organization of the production system. By contrast, results seem to suggest that a decline in economic prosperity might be associated with more restraints on transactions and stronger constraints on countries' involvement in production stages, thus yielding improvements in global nestedness. While these are simply broad-brush conjectures on associations between economic and structural variables, any attempt to explain any (causal) relationship between nestedness and GDP would clearly require further scrutiny and empirical investigation.
\subsection*{Contribution of countries and industries to nestedness}
To investigate the contribution of individual countries and industries to nestedness, here we propose to evaluate the nestedness that would result subsequent to the reshuffling of the connections involving each country and each industry individually (see methods section). The general idea is that each country and each industry can be associated with an induced variation in country- and transaction-based nestedness respectively, which in turn can be regarded as reflecting the salience of the country or industry to nestedness~\cite{saavedra2011strong}. Indeed this approach would enable us to assess the effects of potential external shocks, such as an unexpected variation in a buyer's or seller's involvement in production stages (here simulated through the reshuffling of the buyer's or seller's connections at the global level), or the unexpected variation in supply or demand of a specific product (here simulated through the reshuffling of connections originating from or terminating at the corresponding layer). In this sense, the effects caused by such reshuffling on global nestedness would shed light on the influence of countries and products/industries on the global structural organization of the worldwide production multi-layer network.
Formally, we define the contributions of country $c_i$ and layer $\alpha$ to country- and transaction-based nestedness as the $Z$-scores calculated, respectively, over an ensemble of multi-layer networks in which the connections involving country $c_i$ and layer $\alpha$ are randomly reshuffled. Specifically, we have
\begin{equation}
Z_{c_i} =(N_c-\langle N_{c_i} \rangle)/\sigma
\end{equation}
and
\begin{equation}
Z_{\alpha} = (N_t-\langle N_{\alpha} \rangle)/\sigma,
\end{equation}
\noindent where $N_c$ and $N_t$ are the values of country- and transaction-based nestedness, respectively, calculated on the original data, $\langle N_{c_i} \rangle$ and $\langle N_{\alpha} \rangle$ are the average values of country- and transaction-based nestedness calculated over the ensembles of the matrices resulting from the reshuffling of country $c_i$'s and layer $\alpha$'s connections respectively, and $\sigma$ is the standard deviation of nestedness across these ensembles. Thus, a positive (negative) value of $Z_{c_i}$ would imply a decline (increase) in country-based nestedness resulting from the reshuffling of the connections of country $c_i$ or, alternatively, a positive (negative) contribution of $c_i$ to country-based nestedness. Similarly, a positive (negative) value of $Z_{\alpha}$ implies a negative (positive) effect on transaction-based nestedness resulting from the reshuffling of connections of layer $\alpha$ or, alternatively, a positive (negative) contribution of layer $\alpha$ to transaction-based nestedness.
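A minimal sketch of how a single country's contribution can be computed is given below: only the row of country $c_i$ is reshuffled (Model-I style, within origin-layer blocks), nestedness is recomputed over a small ensemble, and the resulting $Z$-score is returned. The ensemble size, the use of the \texttt{nodf()} sketch above as the nestedness function, and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def country_z_score(S, c, nestedness_fn, n_layers=57, n_samples=100, seed=0):
    """Z-score of country c's contribution to nestedness.

    S is a participation matrix (countries x ordered layer pairs) and
    nestedness_fn is any nestedness measure, e.g. the nodf() sketch above.
    """
    rng = np.random.default_rng(seed)
    n_obs = nestedness_fn(S)
    ensemble = []
    for _ in range(n_samples):
        S_r = S.copy()
        for a in range(n_layers):            # reshuffle only row c, block by block
            block = slice(a * n_layers, (a + 1) * n_layers)
            S_r[c, block] = rng.permutation(S_r[c, block])
        ensemble.append(nestedness_fn(S_r))
    ensemble = np.asarray(ensemble)
    return (n_obs - ensemble.mean()) / ensemble.std()
\end{verbatim}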
First, we assessed the contributions of buyers and sellers to country-based nestedness over all years in our data set. Fig.~\ref{fig:country}A shows the influence of individual buyers on country-based nestedness in 2000 and 2014. Findings suggest that an external shock affecting buyers is likely to cause a negative impact on nestedness in both years, i.e., $Z_{c_i} (t) >0$. In addition, we computed the value of $Z_{c_i}(t)$ for each buyer $c_i$ in each year $t$. Fig.~\ref{fig:country}B shows the evolution of $Z_{c_i}(t)$ over time. Fig.~\ref{fig:country}A suggests that Korea Republic, United States, and Belgium are among the importing countries that most contributed to country-based nestedness. Over the whole observation period, these countries are also among those associated with the largest variability in contribution to country-based nestedness (i.e., with the largest standard deviation $\sigma[Z_{c_i}(t)]$). For instance, while Korea Republic is always ranked as one of the buyers with the largest positive influence on nestedness (Fig.~\ref{fig:country}A), it is also the country whose contribution is characterized by the largest variability over time.
In the case of sellers, Fig.~\ref{fig:country}C suggests that Luxembourg, Hungary, and Sweden are the exporters characterized by the largest contributions to country-based nestedness in 2014. However, the same ranking was not observed in 2000, unlike the buyers, which largely preserved their ranking in both years. Moreover, it is worth noting that the most influential sellers, ranked as top contributors to country-based nestedness, are not among the exporters that experienced the largest variability in such contribution over the years. Fig.~\ref{fig:country}D shows that Italy, United States, and Romania are the exporting countries associated with the largest standard deviation of $Z_{c_i}(t)$ (i.e., $\sigma[Z_{c_i}(t)]$), thus suggesting different production trajectories for buyers and sellers.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{Fig5.pdf}
\caption{Contribution of buyers and sellers to country-based nestedness. A) The bars represent the values of $Z_{c_i}$ for each buyer in 2000 (hollow bar) and 2014 (purple bar). The countries are ranked by the corresponding values of $Z_{c_i}$ in 2014 in an increasing order. All buyers contribute positively to country-based nestedness in both years (i.e., $Z_{c_i}(t)>0$). B) Evolution of $Z_{c_i}(t)$ over the years for buyers. The countries highlighted are the ones associated with the largest variability in contribution to country-based nestedness, i.e., with the largest standard deviation of influence, $\sigma[Z_{c_i}(t)]$, during the observation period. C) Similarly, the bars represent the values of $Z_{c_i}$ for each seller in 2000 (hollow bar) and 2014 (blue bar). The countries are ranked by the corresponding values of $Z_{c_i}$ in 2014 in an increasing order. All sellers have a positive effect on country-based nestedness (i.e., $Z_{c_i}(t)>0$). D) Evolution of $Z_{c_i}(t)$ over the years for sellers. The countries highlighted are the sellers associated with the largest variability in contribution to country-based nestedness, i.e., with the largest standard deviation of influence, $\sigma[Z_{c_i}(t)]$, during the observation period. }
\label{fig:country}
\end{figure}
To evaluate the contribution of individual economic activities to transaction-based nestedness in the worldwide production network, we followed a similar procedure to the one used for investigating the contributions of countries to country-based nestedness. In this case, we calculated the values of $Z_{\alpha}$ by randomly allocating to countries the connections departing from (sellers) or arriving at (buyers) each layer at a time (see methods). Fig.~\ref{fig:layer}A shows the effect of individual layers on the buyers' transaction-based nestedness in 2000 and 2014. Results suggest that the economic industries that have the largest influence on the buyers' transaction-based nestedness are the ones related to: sewerage; waste collection, treatment and disposal activities; materials recovery; remediation activities and other waste management services (E37-E39); manufacture of fabricated metal products, except machinery and equipment (C25); and publishing activities (J58). Notice that real estate activities (L68) have the largest negative influence on nestedness in both years.
Just as with the contributions of countries, we measured $Z_{\alpha}(t)$ for each layer $\alpha$ in each year $t$, and uncovered the layers with the largest variability in contribution to buyers' transaction-based nestedness over time, i.e., the layers with the greatest standard deviation $\sigma[Z_{\alpha}(t)]$. Fig.~\ref{fig:layer}B shows that the production sectors with the largest positive and negative contributions to nestedness are also among the ones with the greatest variability in contribution to nestedness over the years.
Finally, we investigated the contributions of individual layers to sellers' transaction-based nestedness. Fig.~\ref{fig:layer}C shows the extent to which the reshuffling of connections involving each industry at a time affects the sellers' transaction-based nestedness. In this case, all layers have a positive effect on transaction-based nestedness (i.e., $Z_{\alpha}(t)>0$). Fig.~\ref{fig:layer}D also suggests that the production sectors associated with the largest contributions to sellers' nestedness are among the ones associated with the greatest variability in such contribution over the years (e.g., manufacture of textiles, wearing apparel and leather products (C13-C15) and accommodation and food service activities (I)).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{Fig6.pdf}
\caption{Contribution of industries to buyers' and sellers' transaction-based nestedness. A) The bars represent the values of $Z_{\alpha}$ for each layer $\alpha$ in 2000 (hollow) and 2014 (purple). The layers are ranked by the corresponding values of $Z_{\alpha}$ in 2014 in increasing order. Almost half of the layers have a positive influence on buyers' transaction-based nestedness (i.e., $Z_{\alpha}(t)>0$), while several layers have no influence and a minority are characterized by a negative influence. B) Evolution of $Z_{\alpha}(t)$ over the years for buyers. The layers highlighted are the industries associated with the largest variability in contribution to buyers' transaction-based nestedness, i.e., with the largest standard deviation of influence, $\sigma[Z_{\alpha}(t)]$, during the observation period. C) All layers have a positive effect on the sellers' transaction-based nestedness (i.e., $Z_{\alpha}(t)>0$) both in 2000 (hollow bar) and 2014 (blue bar). D) Evolution of $Z_{\alpha}(t)$ over the years for sellers. The layers highlighted are the industries associated with the largest variability in contribution to sellers' transaction-based nestedness, i.e., with the largest standard deviation of influence, $\sigma[Z_{\alpha}(t)]$, during the observation period.}
\label{fig:layer}
\end{figure}
\section*{Discussion}
In this work, we argued that the bipartite network of connections between countries and industries cannot fully capture the economic interactions among countries underpinning the worldwide production network and the global value chain. Instead, we proposed to formalize the interactions between countries and between industries by constructing a multi-layer exchange network in which each layer represents an industry and the nodes represent the trading countries. In this network, countries can be connected within and across layers, and the same country can be involved in multiple stages of the global value chain as well as in transactions within the same industry. Thus, the network includes intra- and inter-layer connections that may depart from and point to any country (i.e., inter-layer connections from and to the same country and self-loops of length one are allowed). Based on the multi-layer network, we built two participation matrices, one for buyers and the other for sellers, in which rows are the (buying or selling) countries and columns are all possible ordered combinations of any two layers. We then used these matrices to compute both country-based and transaction-based nestedness, each from the perspective of buyers and sellers.
We showed that, as typically occurs within ecological networks, the structural organization of the participation matrices of buyers and sellers displays a nested signature. Such a nested structure is statistically significantly different from the structure one would expect in a multi-layer null model in which connections are randomly reshuffled while the countries' or layers' degree distributions are kept unaltered. Our results suggested that, while sellers' nestedness remained constant over the years, buyers' country-based nestedness fluctuated between 2003 and 2011. We also uncovered associations between nestedness and GDP, and found that, except for buyers' transaction-based nestedness, an increase in GDP is associated with a decline in nestedness.
Moreover, we investigated the contributions of individual countries and individual economic industries to country- and transaction-based nestedness, respectively, by reshuffling the connections of one country or one layer at a time, and by comparing the resulting nestedness with the original one. To this end, we computed $Z$-scores over an ensemble of multi-layer networks, both for buyers and sellers. Results indicated that, while all countries exerted a positive contribution to buyers' country-based nestedness, a minority of countries played no significant role in sustaining nestedness. These are the countries whose reshuffled connections did not induce any appreciable change in nestedness and whose $Z$-scores therefore did not differ significantly from zero. We further explored variations over time of countries' contributions to country-based nestedness, and found that the top-ranked contributors tend to be precisely the buyers associated with the largest variation of such contributions over the years. It is worth noting that, among the top contributors (both buyers and sellers), there are also relatively smaller, developing and emerging countries (e.g., Luxembourg, Korea Republic, and the Czech Republic among the buyers, and Luxembourg and Hungary among the sellers) as well as larger and more advanced countries (e.g., the United States and Belgium among the buyers and Japan and Germany among the sellers). Similarly, among the countries that contributed least to nestedness are not only smaller East European countries (e.g., Croatia among the buyers and Estonia and the Czech Republic among the sellers) and developing countries (e.g., India among the buyers), but also large advanced countries (e.g., Japan among the buyers and the United Kingdom and Belgium among the sellers). These findings might therefore suggest that a country's contribution to nestedness is not necessarily correlated with its economic size.
Results also indicated that, while all industries had a positive effect on the sellers' transaction-based nestedness, a number of them (e.g., mining and quarrying (B) and human health and social work activities (Q)) played no role in sustaining the buyers' transaction-based nested organization of the international production multi-layer network.
We believe that our multi-layer network perspective can shed new light on the structural foundations and dynamics of competition between countries and industries in the global value chain, on the global system's vulnerability to exogenous shocks, on the contribution that each component of the system can make to global nestedness, and ultimately on the role that nestedness can play in enhancing or inhibiting economic growth over time. Because a variety of empirical domains, from biological to technological and social ones, can be characterized as complex networks in which relationships have a multi-layer representation, our approach also has important implications beyond international trade, and can help gain a better understanding of the structural organization, stability, and growth mechanisms of a number of different systems.
\section*{Methods}
\subsection*{The data set}
We used data from the WIOD, which was developed within a project funded by the European Commission as part of the $7^{th}$ Framework Programme with the aim of developing new databases, accounting frameworks and models to increase our understanding of the dynamic interrelatedness of countries and industries.
The core of the database is a set of harmonized supply and use tables, as well as data on international trade in goods and services. These two sets of data have been integrated into sets of inter-country (world) input-output tables. For further information, please refer to Data Description in the ``Supplementary Information'' section.
\subsection*{Reordering of columns and rows}
To calculate nestedness, the rows (countries) of the participation matrix were sorted in decreasing order of the number of transactions (i.e., pairs of layers) in which each country was involved. Similarly, the columns (transactions) were sorted in decreasing order of the number of countries involved in each transaction.
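For illustration, the sketch below shows how such a reordering can be performed on a binary participation matrix. It is a minimal Python example (assuming NumPy; the function name is ours), not the exact code used in our analysis.
\begin{verbatim}
import numpy as np

def reorder_participation_matrix(M):
    # M: binary participation matrix; rows are countries,
    # columns are transactions (ordered pairs of layers).
    row_sums = M.sum(axis=1)   # transactions each country takes part in
    col_sums = M.sum(axis=0)   # countries involved in each transaction
    row_order = np.argsort(-row_sums)   # decreasing order
    col_order = np.argsort(-col_sums)
    return M[row_order][:, col_order]
\end{verbatim}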
\subsection*{Null models}
To test the hypothesis that countries and stages of production have a nested organization in the multi-layer network, we assessed whether the value of nestedness measured on our data is statistically significantly different from the value one would expect at random. To this end, we drew on, and extended, the ideas proposed by Saavedra {\it et al.}~\cite{saavedra2011strong}, and constructed appropriate null models (I and II) for the multi-layer network. We also used these models to assess the influence of individual countries and individual industries upon country- and transaction-based nestedness, respectively. All the results were obtained from an ensemble of $1,000$ realizations of the null models. We considered as statistically significant (at the 5\% level) all observed values of country-based, transaction-based, and total nestedness that lie outside the 95\% confidence interval of nestedness obtained from the corresponding null models over $1,000$ replicates.
\textbf{Null model I.} To test the statistical significance of country-based nestedness, we followed the approach proposed in [\citenum{saavedra2011strong}] and constructed \textit{Model I}. In particular, we randomized the rows of the participation matrix by blocks of columns defined by the common importing (for buyers) or exporting (for sellers) layer, while keeping unchanged the number of ones in each row (i.e., the countries' degree distribution across the whole multi-layer network) and also preserving the degree of each node in each layer (i.e., the intra-layer countries' degree distribution). For example, if node $c_i$ has two links pointing to (departing from) layer $\alpha$ and one departing from (pointing to) layer $\beta$ and the other from (to) layer $\gamma$, in \textit{Model I} node $c_i$ would still have two links pointing to (departing from) layer $\alpha$ but the origin (destination) of these links would be randomly reassigned to any other layer among all available ones (i.e., including layer $\alpha$ itself). Thus, for each country (buyer or seller) \textit{Model I} preserves the total number of links ending at (buyers) or departing from (sellers) layer $\alpha$. As a result of this constraint, the total number of links of each node across the whole network is also preserved. We used the same model to investigate the influence of individual countries on country-based nestedness. That is, for each country $c_i$ we randomized (subject to the above constraints) the corresponding row in the participation matrix, and then computed the $Z$-score over the ensemble produced by the realizations of the model.
\textbf{Null model II.} To test the statistical significance of transaction-based nestedness, we followed an approach similar to the one outlined above and constructed \textit{Model II}. In particular, we randomized the rows of the participation matrix by blocks of columns defined by the common importing (for buyers) or exporting (for sellers) layer, while preserving not only the countries' (in- or out-) degree distribution in each layer of the multi-layer network (i.e., the intra-layer in- or out-degree distribution), but also the layers' degree distribution (i.e., the number of connections from one layer to another and/or to itself did not change). This is equivalent to swapping the connections of country $c_i$ pointing to (originating from) layer $\alpha$ with the connections of another country $c_j$ pointing to (originating from) the same layer $\alpha$. We also used \textit{Model II} to investigate the impact of individual industries on transaction-based nestedness. To this end, for each individual layer $\alpha$, we randomized (subject to the above constraints) the corresponding block of columns (i.e., all columns in which layer $\alpha$ is the destination or origin of connections) in the participation matrix, and then computed the $Z$-score over the ensemble produced by the realizations of the model.
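The $Z$-scores reported above can be obtained from such ensembles as sketched below. The sketch assumes the standard definition of a $Z$-score (observed nestedness minus the ensemble mean, divided by the ensemble standard deviation) and treats \texttt{nestedness} and \texttt{randomize} as placeholders for the nestedness measure and for the constrained reshuffling of Model I or Model II; it is a Python illustration (assuming NumPy), not the exact implementation behind the reported results.
\begin{verbatim}
import numpy as np

def z_score(M, nestedness, randomize, n_null=1000, seed=0):
    """Z-score of the observed nestedness against a null-model ensemble.

    M           : observed participation matrix (binary array)
    nestedness  : function mapping a matrix to its nestedness value
    randomize   : function returning one constrained reshuffling of M
                  (e.g. the Model I or Model II randomization)
    """
    rng = np.random.default_rng(seed)
    observed = nestedness(M)
    null_values = np.array(
        [nestedness(randomize(M, rng)) for _ in range(n_null)])
    return (observed - null_values.mean()) / null_values.std()
\end{verbatim}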
|
{
"timestamp": "2019-09-10T02:29:22",
"yymm": "1803",
"arxiv_id": "1803.02872",
"language": "en",
"url": "https://arxiv.org/abs/1803.02872"
}
|
\section{Introduction}
The uniform sampling of graphs with fixed degree sequence has attracted a large research effort in network science~\cite{MolloyReed1995,NewmanStrogatzWatts2001,Artzy-Randrup2005,Rao1996}. Samples of random networks are used to determine the significance of properties of real-world networks: for instance, to study the clustering in social networks \cite{Roberts2000}, to understand which subgraphs form the important building blocks of a network~\cite{Milo2002}, or to find out if a network is expected to be connected given its degree sequence~\cite{NewmanStrogatzWatts2001}. The equivalent problem of uniformly sampling a 0--1 matrix with fixed row and column sums is considered one of the most useful `null model approaches' in ecology~\cite{gotelli1996null,Strona2014}.
Markov chain algorithms, such as the switch chain and Curveball chain, are a popular approach to the above sampling problem \cite{Rao1996,Kannan1999,Strona2014}. Here a graph is randomised by repeatedly applying small degree preserving changes. These algorithms are known to converge to the uniform distribution on the set of \emph{labelled} graphs with a given degree sequence \cite{Artzy-Randrup2005,Rao1996,Verhelst2008,Carstens2015}. Even though Markov chain algorithms are easy to implement and provide a flexible sampling framework, they have a serious drawback: in general it is unknown how many changes need to be applied to obtain a close-to-random graph.
Two largely separate communities appear to work on the Markov chain approach to this problem. The first one provides important theoretical insights to understand the problem in greater depth. It finds graph classes for which the Markov chain can be proven to be efficient~\cite{Cooper07,Greenhill2011,Miklos2013,Erdos2017,Greenhill2018,Amanatidis2018}. Unfortunately, the polynomial upper bounds for the running time have such large exponents that they can never be used to draw a graph at random in a real-world scenario. Furthermore, the graph classes considered (matrices with identical sums for rows or columns) rarely occur in the real world.
The second community uses implementations of these algorithms in practice~\cite{Milo2002,Artzy-Randrup2005,Strona2014,NewmanStrogatzWatts2001}, i.e. they need them for their research to create null models. Applied researchers often stop the Markov chains after a fixed number of steps, using some assessment to judge that this number was large enough such that the sampling was done according to an almost uniform distribution.
Even though existing theoretical results give impractically large limits, we are optimistic about the speed of this class of algorithms. Several experiments~\cite{Rechner2016,Strona2014,Carstens2017,Ray2015} indicate that both the well-known \emph{switch chain}~\cite{Ryser1957,Taylor1981,Kannan1999,Diaconis1995,Rao1996} and the lesser-known \emph{Curveball algorithm}~\cite{Verhelst2008,Strona2014,Carstens2015} are quite fast. The only problem is that we cannot prove how good they really are, i.e. the theory is lacking.
In this article we offer a partial explanation for the discrepancy between theoretical and experimental bounds. In most applications, the network statistic of interest only depends on the structure of the network, i.e. it is a \emph{topological} property. In practice, convergence of the statistic of interest is used as an indicator that a Markov chain has converged to its stationary distribution~\cite{Artzy-Randrup2005,Ray2015}. This approach ignores node labels when judging the convergence of the Markov chain. We formally show that ignoring node labels corresponds to projecting a Markov chain onto equivalence classes of isomorphic graphs. We prove that the projected chain converges at least as fast as the original chain and give several examples where convergence is much faster. The speed up is due to sampling from a (often much) smaller state space. In some applications node labels are important, for instance when determining the number of expected edges between certain individuals or communities. We show that faster sampling can be achieved by combining the projected Markov chain with a linear-time preprocessing step. The resulting improved run-time is of clear benefit to practitioners. Furthermore, our contribution opens new pathways for theoretical research, by reducing the size of the state space. As a result of independent interest, we prove that this combination of a projected Markov chain and preprocessing step results in an ergodic Markov chain for all directed graphs, that is, it removes the need for `hexagonal moves'\cite{Rao1996} (Theorem \ref{thm:hex_move}).
The remainder of this article is organised as follows. We start with a general description of projected Markov chains and prove that the mixing time of these chains is smaller than or equal to the mixing time of the original chain. We then briefly discuss well-known Markov chains used for the sampling of graphs: the switch and Curveball chain. We show that these Markov chains can be projected onto isomorphism classes of graphs and that this results in faster mixing. Furthermore we introduce a preprocessing step which allows us to speed-up the switch and Curveball chain. We give several explicit examples of the method. Finally we discuss limitations and potential extensions to this framework.
\section{Applications}
The following examples illustrate the main idea behind our speed-up of the switch and Curveball Markov chains.
\begin{example}
Motif finding is a popular tool in network analysis~\cite{Milo2002}. A motif is defined as a small subgraph which appears significantly more frequently in an observed (real-world) graph than in randomly generated graphs. The switch chain is often used to generate such random graphs.
It samples a graph uniformly at random from the space of all graphs with a given degree sequence. As a small example, Figure \ref{fig:example_labels} illustrates the six different graphs with degree sequence $(2,2,3,2,1)$. Note that the three graphs on the left have the same topology and the three graphs on the right have a second different topology. For motif finding, it is not necessary to generate a sample from all six labelled graphs. We only need to know the probability with which we sample each of these two classes $G$ and $H$, as this allows us to compute the expected number of occurrences of a given subgraph. For instance in this small example, we find a graph with topology $G$ with probability $\nicefrac{1}{2}$ and we find a graph with topology $H$ also with probability $\nicefrac{1}{2}$. Hence, the expected number of triangles equals $0.5$.
\end{example}
\begin{figure}[!htb]
\centering
\includegraphics[width=300px]{figure3.pdf}
\caption{\label{fig:example_labels}The top row shows the six simple undirected graphs with degree sequence $(2,2,3,2,1)$; the nodes are fixed in place. There are two sets of three graphs with the same topology. The bottom row shows the same six graphs, but here the different graphs are illustrated as relabellings of the nodes.}
\end{figure}
\begin{example}
\label{exmp:bip_prod_cons}
We are interested in the buying behaviour of customers. We may represent this as a bipartite graph $G$ where the primary nodes represent customers, the secondary nodes represent products and the edges indicate that a customer has bought an item. Say we have observed four customers who each bought two items. In total there are four items and each item has been bought twice. We want to know the probability that the customers can be divided into two groups (see Figure \ref{fig:example_2222}(a)) while fixing the number of items bought per customer and the number of times each item is bought. To do so, we may generate samples of bipartite graphs with the degree sequence $k = ((2,2,2,2),(2,2,2,2))$ using the switch or the Curveball chain, and estimate this probability. Both algorithms sample a graph uniformly at random (provided we run them for long enough) from the set of $90$ distinct \emph{labelled bipartite graphs} with degree sequence $k$. Only $18$ of these states correspond to the situation where we can split the customers into two types. Hence we will find a probability close to $0.2$ provided we take a large enough sample and run the chains for long enough.
But the property of interest, whether the customers can be split into two groups, is a topological property and does not depend on the labelling of the nodes. So in fact we are interested in sampling from a much smaller state space, that of unlabelled bipartite graphs with the given degrees. When removing the node labels, we find that there are only two distinct graphs (as illustrated in Figure \ref{fig:example_2222}(b)). We will later see that we can obtain the probabilities of sampling either of these two topologies by projecting the switch or Curveball chain. After projecting, fewer switches and trades are required to get close to the stationary distribution, largely due to the reduced number of graphs we are sampling from. Figure \ref{fig:example_2222}(c) illustrates this by showing the projected switch chain.
\end{example}
\begin{figure}[!htb]
\centering
\includegraphics[width=300px]{figure_2222_2.pdf}
\caption{\label{fig:example_2222}(a) A toy-example of a customer product network, the customers can be divided into two groups $\{A,B\}$ and $\{C,D\}$ based on the items they have bought. (b) The two unlabelled bipartite graphs with degrees $((2,2,2,2),(2,2,2,2))$. (c) The projected switch chain. It is not hard to see that it has stationary distribution $(\nicefrac{1}{5},\nicefrac{4}{5})$, implying that topology $G$ will be sampled with probability $0.2$.}
\end{figure}
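The counts quoted in Example \ref{exmp:bip_prod_cons} ($90$ labelled bipartite graphs with degrees $((2,2,2,2),(2,2,2,2))$, of which $18$ are disconnected, giving probability $0.2$) can be verified by brute force. The following Python sketch (assuming NumPy; names are ours) enumerates all bi-adjacency matrices with these row and column sums and counts the disconnected ones.
\begin{verbatim}
from itertools import combinations, product
import numpy as np

def is_connected(A):
    # breadth-first search on the bipartite graph with bi-adjacency A
    n, m = A.shape
    seen_rows, seen_cols = {0}, set()
    stack = [(0, True)]                 # (index, is_row_node)
    while stack:
        i, is_row = stack.pop()
        if is_row:
            for j in range(m):
                if A[i, j] and j not in seen_cols:
                    seen_cols.add(j)
                    stack.append((j, False))
        else:
            for k in range(n):
                if A[k, i] and k not in seen_rows:
                    seen_rows.add(k)
                    stack.append((k, True))
    return len(seen_rows) == n and len(seen_cols) == m

row_patterns = [np.array([1 if j in c else 0 for j in range(4)])
                for c in combinations(range(4), 2)]
total = disconnected = 0
for rows in product(row_patterns, repeat=4):
    A = np.vstack(rows)
    if (A.sum(axis=0) == 2).all():      # column sums must also equal 2
        total += 1
        disconnected += not is_connected(A)
print(total, disconnected)              # prints: 90 18
\end{verbatim}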
These two applications show that it is often unnecessary to sample from the set of labelled graphs. Instead only the topology of the sampled networks is important. The next example shows that even when the labels are important, sampling can be sped up by making use of the projection mechanism.
\begin{example}
\label{exmp:conn_specific_individuals}
We are studying a social network and want to know the probability that two specific individuals are connected, given the number of connections of all individuals in the network. In this case, the labels of the nodes, i.e. \emph{whom they represent}, are important. However, we can still benefit from the speed-up of sampling isomorphic graphs. To see this, we return to the toy example in Figure \ref{fig:example_labels}. If we generate a sample with the projected switch chain we obtain a given representative of class $G$, say $G_1$, roughly half the time and a representative of $H$, say $H_1$, the rest of the time. To obtain a uniform sample from the six labelled graphs we can use this sample and apply a random node relabelling to each of the sampled graphs. Note that this relabelling has to preserve the degrees of the nodes. Hence in this example we simply choose a random permutation of the node labels $v_1, v_2$ and $v_4$. Now we obtain a random sample from \emph{all} labelled graphs with degree sequence $(2,2,3,2,1)$.
\end{example}
\section{Projected Markov chains}\label{sec:MC_proj}
We now introduce the framework of projected Markov chains. We show that the projection of a Markov chain has two equivalent interpretations. Firstly, we can think of the projection as a Markov chain on equivalence classes. That is, the chain $X_0, X_1, \dots, X_t$ becomes $\overline{X_0}, \overline{X_1}, \dots, \overline{X_t}$ and the state space is reduced in size: $\overline{\Omega} := \Omega / \sim$. Secondly, we can interpret the projected chain as running the original chain with an alternative starting distribution: instead of starting in a single state, the chain starts from the uniform distribution on the states in a single equivalence class.
In this article we consider discrete time Markov chains $\mathcal{M}=(\Omega, P)$ with finite state space $\Omega$ and transition matrix $P$. It is well-known that such a Markov chain converges to a unique stationary distribution $\pi$ if it is ergodic. If the Markov chain is time-reversible then this distribution satisfies $\pi_i P_{ij} = \pi_j P_{ji}$ for all $X_i,X_j \in \Omega$. All Markov chains discussed in this article are finite, ergodic and time-reversible.
We will be interested in Markov chains that satisfy condition (\ref{eq:MC_nice_P}) as defined below, and their \emph{projected} or \emph{lumped} chain. We use the following result of Wilson~\cite[Theorem 2.5]{Levin2009}.
\begin{lemma}[Projected Markov chains]\label{thm:projectedChains}
Let $\mathcal{M}=(\Omega, P)$ be a Markov chain and let $\sim$ be an equivalence relation on $\Omega$ with equivalence classes $[x] \in \bar{\Omega}$. Assume that $P$ satisfies
\begin{equation}
P_{x[y]} = P_{x'[y]} \label{eq:MC_nice_P}
\end{equation}
whenever $x \sim x'$, and where $P_{x[y]}:=\sum_{z\in [y]}P_{xz}$. Then $\bar{\mathcal{M}}=(\bar{\Omega}, \bar{P})$ with $\bar{P}_{[x][y]} := P_{x[y]}$ is a Markov chain. $\bar{\mathcal{M}}$ is called the \emph{projected chain}.
\end{lemma}
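For a small state space, $\bar{P}$ can be built directly from $P$ and the equivalence classes, which also allows condition (\ref{eq:MC_nice_P}) to be checked explicitly. The following Python sketch (assuming NumPy; names are ours) does exactly this.
\begin{verbatim}
import numpy as np

def project_chain(P, classes):
    """Lump a transition matrix P onto equivalence classes.

    P       : (N, N) transition matrix of the original chain
    classes : list of lists of state indices, one list per class
    """
    K = len(classes)
    P_bar = np.zeros((K, K))
    for a, class_a in enumerate(classes):
        for b, class_b in enumerate(classes):
            # probability of moving from a state of class_a into class_b,
            # evaluated for every representative x of class_a
            probs = [P[x, class_b].sum() for x in class_a]
            assert np.allclose(probs, probs[0]), "lumpability violated"
            P_bar[a, b] = probs[0]
    return P_bar
\end{verbatim}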
The stationary distribution of the projected Markov chain is proportional to the sizes of the equivalence classes whenever the original chain has a uniform stationary distribution. The following lemma was proved by Grone et al. \cite[Proposition 3]{grone2008interlacing}.
\begin{lemma} The projected chain $\overline{\mathcal{M}}=(\overline{\Omega}, \overline{P})$ satisfies $\pi_{[x]} \overline{P}_{[x][y]} = \pi_{[y]} \overline{P}_{[y][x]}$ where $\pi_{[x]} = \sum_{x\in[x]} \pi(x)$ and hence has stationary distribution $\overline{\pi} = (\pi_{[x_1]}, \dots, \pi_{[x_n]})$. \label{lem:proj_stationary}
\end{lemma}
The mixing time of a Markov chain quantifies how quickly the chain approaches its stationary distribution. It is defined in terms of the variation distance between distributions.
\begin{definition}
Let $\mu, \nu: \Omega \rightarrow [0,1]$ be probability distributions on $\Omega$. Their variation distance is defined as
\[d_V(\mu,\nu) = \max_{A \subset \Omega} |\mu(A) - \nu(A)|\]
and can be shown to equal $\nicefrac{1}{2} \sum_{x \in \Omega} |\mu(x) - \nu(x)|$.
\end{definition}
Let $P^t_x$ be the distribution of the Markov chain at time $t$ when started from state $x$, that is when started from the one-point distribution $\mathbf{1}_x$. This distribution $\mathbf{1}_x(y)$ equals $1$ when $x=y$ and $0$ otherwise. When the complete transition matrix $P$ is known, the distribution $P^t_x$ can be computed by $t$ times right multiplying $\mathbf{1}_x$ with $P$, i.e. $P^t_x = \mathbf{1}_x P^t$. The mixing time of a Markov chain is defined as
\[\tau(\epsilon) = \max_{x \in \Omega} \min_{T} \{T | d_V(P^t_x, \pi) \leq \epsilon \mbox{ for all } t > T\}.\]
Informally, the mixing time is the maximum number of steps needed to get within distance $\epsilon$ of the stationary distribution regardless of the starting state. We now show that the mixing time $\overline{\tau}(\epsilon)$ of a projected chain is smaller than or equal to the mixing time $\tau(\epsilon)$ of the original chain.
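Before doing so, we note that for a chain whose transition matrix is explicitly available, both quantities can be evaluated numerically, which is only feasible for small state spaces. A Python sketch (assuming NumPy; names are ours) is given below; since $d_V(P^t_x,\pi)$ is non-increasing in $t$, it suffices to find the first $t$ at which the distance drops below $\epsilon$.
\begin{verbatim}
import numpy as np

def variation_distance(mu, nu):
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, pi, eps=1e-3, t_max=100000):
    """Smallest t such that d_V(P^t_x, pi) <= eps for every start x."""
    N = P.shape[0]
    worst = 0
    for x in range(N):
        mu = np.zeros(N)
        mu[x] = 1.0                       # one-point distribution 1_x
        for t in range(1, t_max + 1):
            mu = mu @ P                   # distribution after t steps
            if variation_distance(mu, pi) <= eps:
                worst = max(worst, t)
                break
        else:
            raise RuntimeError("chain did not mix within t_max steps")
    return worst
\end{verbatim}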
\begin{lemma}\label{thm:ori_comp_proj} Let $\mathcal{M}=(\Omega,P)$ be a finite, ergodic Markov chain with stationary distribution $\pi$ and satisfying (\ref{eq:MC_nice_P}). Then $\tau(\epsilon) \geq \overline{\tau}(\epsilon)$.
\begin{proof}
We will show that for any $x \in \Omega$, $t \in \mathbb{N}$ we have $d_V(P^t_x, \pi) \geq d_V(\overline{P}^t_{[x]}, \overline{\pi})$ which gives the result.
Let $f: \Omega \rightarrow \overline{\Omega}$ be the function that maps a state $x$ to its equivalence class $[x]$. Furthermore, for a probability distribution $\mu$ on $\Omega$ let $\mu f^{-1}$ be the probability distribution on $\overline{\Omega}$ given by:
\[(\mu f^{-1})([x]) := \mu(f^{-1}([x])) = \sum_{z \in [x]} \mu(z).\]
Notice that $\overline{\pi}([x]) = \sum_{x \in [x]} \pi(x) = \pi f^{-1}([x])$, i.e. $\overline{\pi}$ equals $\pi f^{-1}$. Furthermore the `one-point' starting distribution $\overline{P}_{[x]}^0 = \mathbf{1}_{[x]}$ equals $\mathbf{1}_x f^{-1}$ for any $x \in [x]$, that is $\overline{P}_{[x]}^0 = P_x^0 f^{-1}$. We now show that if $\overline{\mu} = \mu f^{-1}$ then also $(\overline{\mu} \overline{P}) = (\mu P) f^{-1}$.
We evaluate $(\overline{\mu} \overline{P})$ on a class $[y]$ and find
\[\overline{\mu}\overline{P}([y]) = \sum_{[x] \in \overline{\Omega}} \overline{\mu}([x]) \overline{P}_{[x][y]} = \sum_{[x] \in \overline{\Omega}} \sum_{z \in [x]} \mu(z) \overline{P}_{[x][y]} = \sum_{z \in \Omega} \mu(z) P_{z[y]}\]
where the first equality comes from writing out the matrix multiplication. The second equality uses $\overline{\mu} = \mu f^{-1}$ and the last equality uses the definition of $\overline{P}$. Next we obtain
\[\overline{\mu}\overline{P}([y]) = \sum_{y \in [y]} \sum_{z \in \Omega} \mu(z) P_{zy} = \sum_{y \in [y]} (\mu P)(y) = (\mu P) f^{-1}([y])\]
by using $P_{z[y]} = \sum_{y \in [y]} P_{zy}$ and again recognizing the matrix multiplication.
Thus we now know that $\overline{P}_{[x]}^t = P_{x}^t f^{-1}$ for all $t$ and $x$. The proof now follows from \cite[Lemma 7.9]{Levin2009} where it is shown that $d_V(\mu, \nu) \geq d_V(\mu f^{-1}, \nu f^{-1})$ for any $\mu$ and $\nu$.
\end{proof}
\end{lemma}
We may think of the projected chain $\overline{\mathcal{M}}$ as the original chain $\mathcal{M}$ started from the uniform distribution on an equivalence class $[x]$. That is with starting distribution
\[\overline{\mathbf{1}_{x}} = \begin{cases} \frac{1}{|[x]|} & \mbox{when } x \in [x] \\ 0 & \mbox{otherwise.} \end{cases}\]
We will denote by $P^t_{\overline{x}}$ the probability distribution of $\mathcal{M}$ at time $t$ with starting distribution $\overline{\mathbf{1}_{x}}$. We now show that the `mixing time', $\hat{\tau}(\epsilon)$, of $\mathcal{M}$ when started from $\overline{\mathbf{1}_{x}}$ is exactly equal to that of $\overline{\mathcal{M}}$; combined with Lemma \ref{thm:ori_comp_proj}, this means that starting $\mathcal{M}$ from $\overline{\mathbf{1}_x}$ is at least as fast as starting it from $\mathbf{1}_x$. To do so we define
\[\hat{\tau}(\epsilon) := \max_{[x] \in \overline{\Omega}} \min_{T} \{T | d_V(P^t_{\overline{x}}, \pi) \leq \epsilon \mbox{ for all } t > T\}.\]
\begin{lemma}\label{thm:proj_equivcl} Let $\mathcal{M}=(\Omega,P)$ be a finite, ergodic Markov chain with stationary distribution $\pi$ the uniform distribution and satisfying (\ref{eq:MC_nice_P}). Then $\hat{\tau}(\epsilon) = \overline{\tau}(\epsilon)$.
\begin{proof} To prove this statement we will show that $d_V(P^t_{\overline{x}}, \pi) = d_V(\overline{P}^t_{[x]}, \overline{\pi})$. \\
For a probability distribution $\overline{\mu}: \overline{\Omega} \rightarrow [0,1]$ on $\overline{\Omega}$, we define a probability distribution $g\overline{\mu}$ on $\Omega$ by $g\overline{\mu}(x) := \nicefrac{\overline{\mu}([x])}{|[x]|}$. Now clearly
\[d_V(g\overline{\mu}, g\overline{\nu}) = \frac{1}{2} \sum_{x\in \Omega} \left|\frac{\overline{\mu}([x])}{|[x]|} - \frac{\overline{\nu}([x])}{|[x]|}\right| = \frac{1}{2} \sum_{[x] \in \overline{\Omega}} |\overline{\mu}([x]) - \overline{\nu}([x])| = d_V(\overline{\mu}, \overline{\nu}).\]
Furthermore $P^0_{\overline{x}} = \overline{\mathbf{1}_{x}} = g \mathbf{1}_{[x]} = g \overline{P}^0_{[x]}$. We next show that if a probability distribution $\mu$ on $\Omega$ can be written as $\mu = g\overline{\mu}$ then $\mu P = g (\overline{\mu}\overline{P})$. Note that $P$ is symmetric, i.e. $P_{xy}=P_{yx}$, since $\mathcal{M}$ is time-reversible with uniform stationary distribution. We derive
\[\mu P (y) = \sum_{x \in \Omega} \mu(x) P_{yx} = \sum_{[x] \in \overline{\Omega}} \sum_{x \in [x]} \frac{\overline{\mu}([x])}{|[x]|} P_{yx} = \sum_{[x] \in \overline{\Omega}} \frac{\overline{\mu}([x])}{|[x]|} P_{y[x]} = \sum_{[x] \in \overline{\Omega}} \frac{\overline{\mu}([x])}{|[x]|} \overline{P}_{[y][x]} \]
Now using detailed balance for the projected chain, $\frac{|[x]|}{|\Omega|} \overline{P}_{[x][y]} = \frac{|[y]|}{|\Omega|} \overline{P}_{[y][x]}$, we obtain
\[\mu P (y) = \sum_{[x] \in \overline{\Omega}} \frac{\overline{\mu}([x])}{|[y]|} \overline{P}_{[x][y]} = \frac{(\overline{\mu}\overline{P})([y])}{|[y]|} = g(\overline{\mu}\overline{P})([y]).\]
This implies $P^t_{\overline{x}} = g \overline{P}^t_{[x]}$ for all $t$ and $x$ and thus $d_V(P^t_{\overline{x}}, \pi) = d_V(g \overline{P}^t_{[x]}, g\overline{\pi}) = d_V(\overline{P}^t_{[x]}, \overline{\pi})$.
\end{proof}
\end{lemma}
In this framework we are able to compare the mixing time of Markov chains \emph{directly} in terms of the variation distance. In the theoretical literature the mixing time of a Markov chain is often bounded using upper and lower bounds based on the \emph{spectral gap}. The spectral gap of an ergodic finite Markov chain $\mathcal{M}=(\Omega, P)$ is defined as follows. Let us denote the eigenvalues of the transition matrix $P$ (which are real since $\mathcal{M}$ is time-reversible) by $\lambda_N \leq \dots \leq \lambda_1$. It is a classical result that $-1 < \lambda_N$, $\lambda_1 =1$ and $\lambda_2 < 1$ (see for example \cite[Lemma 12.1]{Levin2009}). Let us denote by $\lambda^* := \max\{|\lambda_i| : 1 < i \leq N\}$ the largest absolute value among the eigenvalues other than $\lambda_1$. The value $1-\lambda^*$ is called the spectral gap. The main effort in bounding the mixing time of a Markov chain often goes into finding an expression or a bound for the spectral gap. We will later show that our method can sometimes be used to find an explicit expression for the spectral gap of the projected chain (see Example \ref{example:quadratic_to_constant}).
\section{Markov chains for sampling graphs}\label{sec:MC}
There are two commonly used Markov chain algorithms designed for the sampling of graphs; we briefly discuss the switch chain $\mathcal{M}^S$ and the Curveball chain $\mathcal{M}^C$. Both exist in several flavours: for the sampling of bipartite graphs, simple directed graphs, directed graphs and simple undirected graphs. We describe the algorithms in terms of bipartite graphs, in some sense the most general case. The bi-adjacency matrix of a bipartite graph is an $n \times n'$ matrix where $n$ is the number of primary nodes and $n'$ the number of secondary nodes. The $(i,j)$-th entry of this matrix equals one if there is an edge between primary node $p_i$ and secondary node $s_j$, otherwise it equals zero. We will describe the switch chain and Curveball chain as algorithms that randomise the bi-adjacency matrix of a bipartite graph while keeping its row and column sums fixed. Note that this corresponds exactly to sampling a bipartite graph with fixed degrees.
A Markov chain $\mathcal{M} = (\Omega, P)$ is described by its state space $\Omega$ and its transition probabilities $P$. Given a binary matrix $A$ with row and columns sums $k = ((r_1, \dots, r_n), (c_1, \dots, c_{n'}))$, both $\mathcal{M}^S$ and $\mathcal{M}^C$ have as their state space the set of all binary $n \times n'$ matrices with row and column sums $k$. We denote this state space by $\Omega_k$.
In practice, both Markov chains are started from a specific state $X_0$ (a binary matrix) and each transition corresponds to making a small change to the current state $X_i$. The switch chain applies \emph{switches}: replacing a submatrix
\begin{equation}
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \mbox{ by } \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \mbox{ or vice versa.} \label{eq:Switch}
\end{equation}
The Curveball algorithm applies \emph{trades}: a trade randomly exchanges `ones' between two selected rows. For instance, the rows
\begin{equation}
\begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 & 0 \end{pmatrix} \mbox{ can be replaced by } \begin{pmatrix} 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} \mbox{.} \label{eq:CB}
\end{equation}
Before the trade, the top row had `tradeable ones' (i.e., ones in columns where the bottom row equals zero) in columns 1, 2 and 6, and the bottom row had tradeable ones in columns 3 and 5. A trade corresponds to randomly selecting three of these five available columns to hold the ones of the top row. In this example columns 2, 3 and 5 were chosen. Notice that also columns 1, 2 and 3 could have been chosen, which corresponds to applying a switch. One trade can apply multiple switches at once.
There are several versions of the switch chain in the literature \cite{Kannan1999}; here we use the chain which randomly selects two non-zero matrix entries and applies a switch if possible, i.e. if the $(2\times 2)$-submatrix formed by the rows and columns corresponding to these entries is as in equation (\ref{eq:Switch}). The Curveball chain that we use in this article proceeds by selecting a pair of rows, $i$ and $j$, at random (with probability $\binom{n}{2}^{-1}$) and applying a uniformly random trade, each possible trade having probability $\binom{s_i+s_j}{s_i}^{-1}$, where $s_i$ is the number of columns where row $A_i$ equals $1$ and row $A_j$ equals $0$ and $s_j$ is the number of columns where row $A_i$ equals $0$ and row $A_j$ equals $1$; see \cite{Verhelst2008,Strona2014} for more information.
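Both chains can be summarised in a few lines of code. The Python sketch below (assuming NumPy; names are ours) implements one step of each chain as described above: a switch step draws two non-zero entries uniformly at random and applies (\ref{eq:Switch}) when possible, and a Curveball step draws a pair of rows and redistributes their tradeable ones uniformly at random. It is meant as an illustration rather than an optimised implementation.
\begin{verbatim}
import numpy as np

def switch_step(A, rng):
    """One step of the switch chain on a binary matrix A (in place)."""
    rows, cols = np.nonzero(A)
    i, j = rng.choice(len(rows), size=2, replace=False)
    r1, c1, r2, c2 = rows[i], cols[i], rows[j], cols[j]
    # the switch is possible only if the complementary entries are zero
    if r1 != r2 and c1 != c2 and A[r1, c2] == 0 and A[r2, c1] == 0:
        A[r1, c1] = A[r2, c2] = 0
        A[r1, c2] = A[r2, c1] = 1
    return A

def curveball_step(A, rng):
    """One step of the Curveball chain on a binary matrix A (in place)."""
    r1, r2 = rng.choice(A.shape[0], size=2, replace=False)
    only1 = np.where((A[r1] == 1) & (A[r2] == 0))[0]  # tradeable ones of r1
    only2 = np.where((A[r1] == 0) & (A[r2] == 1))[0]  # tradeable ones of r2
    pool = np.concatenate([only1, only2])
    rng.shuffle(pool)
    new1, new2 = pool[:len(only1)], pool[len(only1):]
    A[r1, only1] = 0
    A[r2, only2] = 0
    A[r1, new1] = 1
    A[r2, new2] = 1
    return A
\end{verbatim}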
To define the projected switch and Curveball chain we use the framework discussed in Section \ref{sec:MC_proj}. The equivalence relation that we will use is that of \emph{graph isomorphism}, that is, graphs are equivalent when they have the same \emph{topology}. Formally, two (bipartite) graphs $G$ and $H$ are isomorphic if there is a bijective map $\sigma$ between their nodes such that edges are preserved, i.e. such that the edge $(\sigma(u), \sigma(v))$ is present in $H$ if and only if the edge $(u,v)$ is present in $G$. Note that graph isomorphisms are degree-preserving maps. In general it can be hard to decide if two graphs are isomorphic, but creating isomorphic graphs is simple: pick a random labelling of the nodes of a given graph $G$ to obtain an isomorphic graph $H$.
\begin{figure}[!h]
\centering
\includegraphics[width=300px]{figure2_hgb.pdf}
\caption{\label{fig:iso_nbh}The switch-adjacent graphs of two equivalent graphs $G \sim H$ are identical up to a relabelling of nodes. In fact, the graph isomorphism $\sigma$ between $G$ and $H$ is a graph isomorphism for all graphs that are switch-adjacent to $G$ and $H$. In this case $\sigma$ maps $v_2$ to $v_4$ and $v_4$ to $v_2$ while mapping all other nodes to themselves.}
\end{figure}
We say two bipartite graphs $G \sim H \in \Omega$ are equivalent if and only if they are isomorphic as bipartite graphs, i.e. if there exists a graph isomorphism $\sigma = (\sigma_1, \sigma_2)$, where $\sigma_1$ ($\sigma_2$) maps the primary (secondary) nodes of $G$ to the primary (secondary) nodes of $H$. We need to show that the transition matrices $P^S$ and $P^C$ of the switch chain and the Curveball chain respectively are of form (\ref{eq:MC_nice_P}) under this equivalence relation. Intuitively, this holds because applying a specific switch or trade to isomorphic graphs will lead to isomorphic graphs. This implies that the probability of ending up in a given equivalence class is equal for graphs that are in the same equivalence class. In Figure \ref{fig:iso_nbh} we illustrate this for the switch chain of a simple directed graph on five nodes. The following lemma is a formal statement of the above argument in the setting of bipartite graphs, but can easily be generalized to other graph classes.
\begin{lemma} Let $\mathcal{M}^S=(\Omega,P^S)$, $\mathcal{M}^C=(\Omega,P^C)$ be the switch and Curveball chain for a given bipartite degree sequence. Both $P^S$ and $P^C$ are of the form (\ref{eq:MC_nice_P}).
\begin{proof}
Let $G=(P,S,E)$ and $G'=(P,S,E')$ be bipartite graphs in $\Omega$ such that $G \sim G'$. That is, there exists a degree-preserving isomorphism $\sigma = (\sigma_1, \sigma_2)$, with $\sigma_1: P \rightarrow P$ and $\sigma_2: S \rightarrow S$, such that for any $e=\{p,s\} \in E$ we have $\{\sigma(p),\sigma(s)\} \in E'$. We will write $\sigma(G) := G'$. Let $H$ be switch-adjacent to $G$ with regard to a specific switch, i.e. $H=(V,E \backslash \{\{p_i,s_j\}, \{p_k,s_l\}\} \cup \{\{p_i,s_l\}, \{p_k,s_j\}\})$. Then the graph $\sigma(H) = (V, E' \backslash \{\{\sigma(p_i),\sigma(s_j)\}, \{\sigma(p_k),\sigma(s_l)\}\} \cup \{\{\sigma(p_i),\sigma(s_l)\}, \{\sigma(p_k),\sigma(s_j)\}\})$ is switch-adjacent to $\sigma(G)$ and furthermore $H \sim \sigma(H)$ by definition. That is, all graphs that are switch-adjacent to $G$ are isomorphic to graphs that are switch-adjacent to $\sigma(G)$ under the graph isomorphism $\sigma$, and $P^S_{GH} = P^S_{\sigma(G)\sigma(H)}$. We now write $N_{G}$ for the set of graphs that are switch-adjacent to $G$.
For any equivalence class $[H]$ and $G \sim G'$ with graph isomorphism $\sigma$ we obtain
\begin{align*}
P^S_{G[H]} &= \sum_{K \in [H]}P^S_{GK} = \sum_{K \in N_G \cap [H]}P^S_{GK} \\
&= \sum_{\sigma(K) \in N_{G'} \cap [H]}P^S_{G'\sigma(K)} = \sum_{\sigma(K) \in [H]}P^S_{G'\sigma(K)} = P^S_{G'[H]}.
\end{align*}
Since any trade equals a sequence of switches, the result immediately follows for $P^C$.
\end{proof}
\end{lemma}
In particular, the \emph{projections} $\overline{\mathcal{M}}^S$ and $\overline{\mathcal{M}}^C$ of the switch chain and the Curveball chain with respect to $\sim$ are well-defined. Lemma \ref{lem:proj_stationary} now tells us that the stationary distribution of these projected chains is proportional to the size of the equivalence classes. Hence, if we generate a sample using the projected chains we obtain each topology with the correct probability: the probability of sampling a graph in $\Omega$ with the given topology. In practice, when we are running experiments where we are only interested in the topology of the sampled networks, we could argue that we already use the projected chain. To illustrate this, we elaborate on Example \ref{exmp:bip_prod_cons} with respect to the switch chain.
\begin{example}
In Example \ref{exmp:bip_prod_cons} we wanted to know the probability that a bipartite graph with degrees $k = ((2,2,2,2), (2,2,2,2))$ is disconnected. Due to the small size of $\Omega$ and $\overline{\Omega}$ we can explicitly compute $P^S$ and $\overline{P}^S$ (see Figure \ref{fig:example_2222}(c)) and hence determine the mixing time for a given $\epsilon$. For $\epsilon=0.001$ we find $\tau(\epsilon)=28$ and $\overline{\tau}(\epsilon)=6$. This means that after $28$ switches, the probability of obtaining any specific graph $G$ of the $90$ distinct labelled graphs with degrees $k$ is roughly $\frac{1}{90}$. However, the probability of obtaining a graph with topology $G$ is already roughly $\frac{1}{5}$ after $6$ steps of $\overline{P}^S$.
In general, if we know the mixing time of the switch chain theoretically and run the chain $N$ times for $\tau(\epsilon)$ steps to obtain a sample of size $N$, we could be taking much longer than necessary because the property of interest (and any other topological property) already converges after $\overline{\tau}(\epsilon)$ steps.
In fact, for any property of interest (motifs, number of connected components, diameter) we may try to project the chain to an even smaller state space. To see this, let the property be given as a function $f:\Omega \rightarrow \mathbb{R}$. We say two states $s,s'$ are equivalent if and only if $f(s)=f(s')$. Hence, $f$ decomposes $\Omega$ into equivalence classes. If this equivalence relation satisfies condition (\ref{eq:MC_nice_P}) we obtain a projected chain whose mixing time is at most that of the original chain. Moreover, since many properties of interest are topological measurements, we know that we can always project the chain down to the isomorphism classes (even if the equivalence relation given by $f$ does not satisfy condition (\ref{eq:MC_nice_P})). In practice the convergence of the switch chain is often estimated through the convergence of the property of interest. That is, the estimated convergence is the convergence of a projected chain. This can explain part of the difference between theoretically proven bounds and experimentally observed bounds.
\end{example}
In certain applications, e.g. Example \ref{exmp:conn_specific_individuals}, node labels are important. We now show how we can speed up sampling by adding a preprocessing step to the switch or Curveball chain. We define the preprocessing step in terms of the bi-adjacency matrix.
\begin{definition}[Preprocessing step]
\label{def:preproc}
Let $A$ be a binary $(n\times n')$-matrix in $\Omega_k$ for fixed row and column sums $k$. Let $R$ be the set of all permutations $\rho: \{1, \dots, n\} \rightarrow \{1, \dots, n\}$ such that the row sums satisfy $\sum_{j=1}^{n'} A_{\rho(i)j} = r_i$ for all $i$, and let $S$ be the set of all permutations $\sigma: \{1, \dots, n'\} \rightarrow \{1, \dots, n'\}$ such that the column sums satisfy $\sum_{i=1}^n A_{i\sigma(j)} = c_j$ for all $j$. We define a preprocessing step which randomly selects a $\rho \in R$ and $\sigma \in S$ to form the matrix $B_{ij} = A_{\rho(i)\sigma(j)}$. This preprocessing step can be implemented by choosing a random order for sets of rows with equal row sum and a random order for sets of columns with equal column sum in $O(n+n')$ \cite{Durstenfeld:1964}.
\end{definition}
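A possible implementation is sketched below in Python (assuming NumPy; the function name is ours): rows are shuffled within groups of equal row sum and columns within groups of equal column sum, after which the permuted matrix is returned. Starting a run of the switch or Curveball chain from \texttt{preprocess(A, rng)} corresponds to the starting distribution $\mathbf{1}_{\overline{A}}$ discussed next.
\begin{verbatim}
import numpy as np

def preprocess(A, rng):
    """Randomly permute rows within equal row-sum groups and columns
    within equal column-sum groups, as in the preprocessing step."""
    n, m = A.shape
    row_perm, col_perm = np.arange(n), np.arange(m)
    for sums, perm in ((A.sum(axis=1), row_perm),
                       (A.sum(axis=0), col_perm)):
        for s in np.unique(sums):
            idx = np.where(sums == s)[0]
            perm[idx] = rng.permutation(idx)
    return A[row_perm][:, col_perm]
\end{verbatim}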
We can think of including the preprocessing step as starting the Markov chain from the uniform distribution on a single equivalence class. That is with starting distribution
\[\mathbf{1}_{\overline{A}}(B) = \begin{cases} \frac{1}{|[A]|} &\mbox{if } B \in [A] \\ 0 & \mbox{otherwise.}\end{cases} \]
Importantly, adding the preprocessing step to the Curveball chain or the switch chain does not change their convergence to the uniform distribution, since the stationary distribution is independent of the starting distribution of the chains. However, it may speed up the convergence.
For undirected and directed graphs we can similarly introduce a preprocessing step.
\begin{definition}[Preprocessing step]
\label{def:preproc2}
Let $G$ be a (directed) graph with $n$ nodes and adjacency matrix $A$. Let $P$ be the partition of the nodes into classes of equal degree (equal in- and out-degree in the case of directed graphs). Let $R$ be the set of all permutations $\rho: \{1, \dots, n\} \rightarrow \{1, \dots, n\}$ which respect the partition $P$, i.e. nodes are permuted within each part separately. We define the following preprocessing step: randomly select a permutation $\rho \in R$ and apply it to the nodes of $G$, then return the resulting adjacency matrix.
\end{definition}
This finishes our discussion of how the framework of projected Markov chains may improve the mixing time in applications. We finish this section by proving that the inclusion of a preprocessing step removes the need for the `hexagonal moves' introduced by Rao et al.~\cite{Rao1996} in the switch and Curveball chain for directed graphs. The intuition behind this proof is simple: the reason that the state graph of $\mathcal{M}^S$ and $\mathcal{M}^C$ is disconnected for some directed graphs is that the direction of certain directed triangles cannot be reversed~\cite{Berger2010}; this reversal can be achieved with our preprocessing step.
\begin{theorem}\label{thm:hex_move} The switch chain $\mathcal{M}^S$ and the Curveball chain $\mathcal{M}^C$ have a connected state space when the preprocessing step of Definition \ref{def:preproc2} is included.
\begin{proof}
It has been shown that the above chains sample uniformly when a pre-sampling step is included which randomly assigns an orientation to each `induced cycle set'~\cite{Berger2010}. Our suggested preprocessing step does this too, since it permutes nodes with equal in- and out-degrees, and it was shown that the three nodes that form an induced cycle set always have the same in- and out-degree~\cite{Lamar2009}. Hence, our preprocessing also re-orients all induced cycle sets.
\end{proof}
\end{theorem}
In the next section we will discuss several examples where preprocessing improves the mixing time. Furthermore, we show that for certain families of graphs, the size of the state space $\overline{\Omega}$ is constant whereas the size of $\Omega$ grows quadratically in the number of rows of the matrices.
\section{Examples: Smaller Universes and Faster Sampling}\label{sec:examples}
In this section we discuss several examples where the state space is reduced significantly by using the framework of projected Markov chains. We start with an example where a quadratically growing state space is reduced to constant size. Furthermore, we show that our method makes it possible to explicitly compute the spectral gap of the projected chain, which would be very complicated for the original chain since its state space is growing in size. The spectral gap is often used to bound the mixing time of Markov chains \cite[Theorem 12.3]{Levin2009}.
\begin{example}
\label{example:quadratic_to_constant}
Let $r_n$ be a vector of length $n$ where all entries are equal to two, i.e. $r_n=(2,\dots,2)$, and let $c_n$ be the vector $(n-1,n-1,1,1)$. Then $k_n = (r_n, c_n)$ is a valid degree sequence for a bipartite graph with $n$ primary nodes and four secondary nodes $(s_1,s_2,s_3,s_4)$ whenever $n \geq 2$. Let $G$ be the bipartite graph consisting of $K_{n-1,2}$ and $K_{1,2}$, i.e. $s_1$ and $s_2$ are connected to the same $n-1$ primary nodes and $s_3$ and $s_4$ are connected to the same primary node. This graph belongs to the state space $\Omega_{k_n}$. The equivalence class $[G]$ of graph $G$ has size $n$ since there are $n$ choices for the label of the primary node in the disconnected $K_{1,2}$. There is one other equivalence class, the class $[H]$. The graphs in this class are connected: $s_1$ and $s_2$ share $n-2$ neighbours and are each connected to a single additional primary node, one of which is furthermore connected to $s_3$ and the other to $s_4$. The class $[H]$ has size $2n(n-1)$. Hence the size of the state space $\Omega_{k_n}$ equals $n(2n-1)$ and grows quadratically in $n$. On the other hand, the size of the state space of the projected chain is independent of $n$ and always equal to two.
We are able to explicitly compute the spectral gap for the projected switch chain. The transition probability $P^S_{[G][H]}$ equals the probability of selecting an edge in $K_{n-1,2}$ and an edge in $K_{1,2}$ and hence equals $\binom{m}{2}^{-1} 4 (n-1)$ with $m = 2n$ the total number of edges. The transition probability $P^S_{[H][G]}$ equals $\binom{2n}{2}^{-1} 2$, which can be seen by inspecting a specific graph $H \in [H]$. Let $H$ be the graph where $s_1$ is connected to $p_1, \dots, p_{n-1}$ and $s_2$ is connected to $p_1, \dots, p_{n-2}, p_n$. Furthermore $p_{n-1}$ is connected to $s_3$ and $p_{n}$ is connected to $s_4$. The only switches that will give us a graph in $[G]$ are $(p_{n-1}, s_1)$ with $(p_n, s_4)$ and $(p_{n-1},s_3)$ with $(p_n, s_2)$.
The eigenvalues of $\overline{P}^S$ can be symbolically computed and equal $\lambda_1=1$, $\lambda_2 = 1 - \nicefrac{2}{n}$. For $n\geq2$, the spectral gap is given by $1-\lambda_2=\nicefrac{2}{n}$, leading to an upper bound of the mixing time of $O(n \log(n/\epsilon))$ \cite{Sinclair1989}.
\end{example}
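These closed-form expressions can be checked numerically. The Python sketch below (assuming NumPy) builds the $2\times 2$ projected transition matrix of Example \ref{example:quadratic_to_constant} from the transition probabilities derived above and compares its second eigenvalue with $1-\nicefrac{2}{n}$ and its stationary distribution with the normalised class sizes $(n, 2n(n-1))$.
\begin{verbatim}
import numpy as np

def projected_switch_chain(n):
    """2x2 projected switch chain for the degrees k_n above."""
    pairs = (2 * n) * (2 * n - 1) / 2      # binom(2n, 2) pairs of edges
    p_gh = 4 * (n - 1) / pairs             # [G] -> [H]
    p_hg = 2 / pairs                       # [H] -> [G]
    return np.array([[1 - p_gh, p_gh],
                     [p_hg, 1 - p_hg]])

for n in (2, 5, 50):
    P_bar = projected_switch_chain(n)
    lam = np.sort(np.linalg.eigvals(P_bar).real)
    w, v = np.linalg.eig(P_bar.T)
    pi_bar = np.abs(v[:, np.argmax(w.real)])
    pi_bar /= pi_bar.sum()
    sizes = np.array([n, 2 * n * (n - 1)])
    print(n, lam[0], 1 - 2 / n)            # second eigenvalue vs 1 - 2/n
    print(pi_bar, sizes / sizes.sum())     # stationary vs class sizes
\end{verbatim}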
In the next example we show that for a family of matrices, using our preprocessing step reduces the size of the state space impressively: from growing exponentially in the number of columns to always consisting of a single state.
\begin{example}
Let $G_l$ be the bipartite graph with two primary nodes with degree $l$ and $2l$ secondary nodes with degree $1$. The size of the state space $\Omega_l$ of graphs with these degrees is $\binom{2l}{l}$ and grows exponentially in $l$. For the Curveball chain, a trade from $G_l$ reaches \emph{all realisations} of the degree sequence with probability $\binom{2l}{l}^{-1}$. Now consider the graphs $G_l$ and $H_l$ where the neighbours of node $p_1$ in $G_l$ are the neighbours of node $p_2$ in $H_l$ and vice versa. To go from state $G_l$ to state $H_l$ with the switch chain, at least $l$ switches are needed. In other words, at least $l$ steps of the switch chain are needed to reach every state with positive probability. This is a clear example where the Curveball chain is the better choice in terms of mixing time. Finally, if we use our preprocessing step and project either chain with respect to the equivalence relation $\sim$, we find that only a single state remains since all states are isomorphic. Hence we only need to apply the preprocessing step and are left with a uniformly sampled labelled graph.
\end{example}
\section{Discussion and conclusion}\label{sec:discussion}
In this article we introduce a projected version of the switch and Curveball Markov chains in which only the topology of the resulting graph is used. In many applications this is the main feature of interest, and projecting can significantly reduce the size of the state space and hence improve the mixing time of a Markov chain. We furthermore introduce a preprocessing step that can be used in combination with the projected chain to obtain a random sample from the set of labelled graphs.
Clearly we can find examples where projecting does not alter the size of the state space, that is, any graph where each node has a unique degree leads to equivalence classes of size one and hence no reduction in the size of the state space. However, such degree sequences have smaller state spaces to begin with, exactly due to the absence of this redundant symmetry. In \cite{Berger2018} an interesting relation is discussed between majorization of degree sequences and the size of the state space. This could turn out to be the key to showing \emph{all} state spaces are small after projection.
Most theoretical bounds on the mixing time of the switch chain give a bound on the \emph{spectral gap}. The spectral gap of a projected chain is larger than or equal to the spectral gap of the original chain, since it has been proven that the eigenvalues of a projected chain are a subset of the eigenvalues of the original chain \cite[Theorem 12.8(ii)]{Levin2009}. Focusing the analysis of spectral gap bounds on projected chains would therefore make an interesting area of future research.
\section*{Acknowledgement}
The authors would like to thank Pieter Kleer for insightful comments and discussions on the content of this work.
|
{
"timestamp": "2018-07-27T02:11:32",
"yymm": "1803",
"arxiv_id": "1803.02624",
"language": "en",
"url": "https://arxiv.org/abs/1803.02624"
}
|
\section{Introduction}
In today's digital world, communication networks such as the Internet have rapidly developed into a convenient channel for transferring all kinds of data, particularly multimedia data. As the amount of multimedia data transmitted over the Internet grows, the problem of copyright and integrity protection has become a serious issue and has attracted increasing attention \cite{ref11, ref8}. Digital data can be easily copied or maliciously tampered with using various tools, without any loss of quality perceivable by the human visual system. Therefore, secure strategies must be designed to address these challenges. Among the proposed solutions, digital watermarking techniques \cite{ref11, ref8, ref10} remain the most popular.
Digital watermarking is the science and art of imperceptibly hiding useful information in digital media for various purposes, such as copyright protection \cite{ref26, ref27, ref29, ref30}, broadcast monitoring, and authentication \cite{ref11, ref8, ref10}. In recent years, watermarking has attracted much attention as an effective way to guarantee the integrity and authenticity of digital images against illegal modification \cite{ref9, ref1}. To this end, watermark information, including an authentication pattern and an image digest, is embedded into the host image without severely affecting its perceptual quality, so that tampered regions can be detected and recovered at the receiver side. It should be noted that the host image refers to the original image without the embedded watermark, while the image obtained by embedding the watermark into the host image without seriously degrading its quality is called the watermarked image. Generally, these methods can be classified into two categories: fragile and semi-fragile techniques \cite{ref8, ref1}. A fragile scheme invalidates the hidden information after any modification of the watermarked image's content; in other words, the watermark is designed to become undetectable under even the slightest modification of the host signal. Therefore, fragile schemes are mainly used for authentication \cite{ref7, ref12, ref13, ref14, ref15, ref16, ref17, ref18, ref19, ref20, ref21, ref22, ref23}. In contrast, a semi-fragile technique aims at making the hidden information fragile to modifications of the signal content while remaining robust to content-preserving operations such as compression and common image processing.
Fragile watermarking has its own specific requirements, including imperceptibility, capacity, and security \cite{ref8, ref9}. Imperceptibility means that the embedded watermark must be invisible to the human visual system; in other words, the embedded information should preserve the visual quality of the image. Capacity denotes the amount of information that can be embedded in the host. Finally, security refers to the safety of the embedded watermark even when an attacker has full knowledge of the embedding and detection procedures. In this field, security has become one of the most important and challenging problems for watermarking schemes.
\subsection{Literature review}
In this subsection, a brief review of several fragile schemes proposed in the last decade is presented, and the advantages and weaknesses of each method are described and compared. Fragile watermarking authentication schemes have been developed extensively in recent years and can be divided into two types: some methods only locate the suspicious regions in the host image \cite{ref23, ref15, ref18, ref32}, while others can also recover the tampered parts using information embedded in the non-tampered regions \cite{ref7, ref12, ref13, ref14, ref16, ref17, ref19, ref20, ref21, ref22}.
In \cite{ref7}, an effective dual watermark method for image tamper detection and recovery was proposed. This scheme provides two chances across the entire image to recover tampered regions, so the recovery rate and the quality of the recovered image are noticeably better than in previous methods; hierarchical authentication is employed to detect the tampered regions. In \cite{ref15}, a probability-based tampering detection scheme for digital images was presented to reduce errors in the authentication phase; probability theory is used to enhance authentication accuracy, and the experimental results show good detection precision. In \cite{ref32}, an image authentication scheme based on absolute moment block truncation coding (AMBTC) was proposed, in which a hybrid mechanism hides the authentication watermark using AMBTC and improves the embedding efficiency. In the embedding phase, the watermark is embedded into the bitmap or the quantization levels depending on the texture of the block; the experimental results show that the scheme can effectively thwart the collage attack. Another self-embedding fragile watermarking scheme was presented in \cite{ref13} as a novel image tamper localization and recovery algorithm; its security is increased by using a non-linear chaotic sequence. To generate the digest, the DCT is applied to the coefficients of each 2$\times$2 block, and the result is embedded into another block according to a block mapping. A chaos-based fragile watermarking scheme for image tampering detection and self-recovery was presented in \cite{ref12}. To determine the block mapping, a new chaotic sequence generator, the cross chaotic map, is employed; security is increased because this map has many parameters that can be used as keys. Similarly, two chances are provided to recover 2$\times$2 modified blocks. An effective Singular Value Decomposition (SVD) based image tampering detection and self-recovery method using active watermarking was proposed in \cite{ref14}, where 12 bits of tamper detection data are generated and embedded in a random block after being encrypted. One advantage of this scheme over previous ones is its ability to detect tampered regions under security attacks such as vector-quantization and collage attacks.
In \cite{ref16}, the authors presented an efficient fragile watermarking scheme for image authentication and restoration based on the Discrete Cosine Transform. The host is divided into 2$\times$2 non-overlapping blocks; as in most schemes, a 12-bit watermark is generated for each block from the five Most Significant Bits of each pixel and embedded into the three Least Significant Bits of the pixels of the mapped block. In addition, the scheme uses two levels of encoding to generate the content correction bits. In \cite{ref19}, an image tamper detection and recovery scheme using adaptive embedding rules was presented. A major novelty of this method is the use of smoothness to characterize image blocks; different watermark embedding, tamper detection, and recovery strategies are then designed for the different block types. Hence, authentication and recovery information can be embedded effectively in a limited space, increasing the information hiding efficiency. Experimental results showed that the scheme causes less damage to the original image than most fragile schemes. In \cite{ref20}, a DCT-based self-embedding watermarking scheme for image tamper detection and localization with recovery capability was presented; as in most schemes, two authentication bits and ten recovery bits are generated for each 2$\times$2 non-overlapping block from the five Most Significant Bits of its pixels. The experimental results show that the scheme not only achieves high-quality restoration but also removes blocking artifacts. The authors of \cite{ref18} proposed an image tamper detection scheme based on fragile watermarking and the Faber-Schauder wavelet; the maximum coefficients of the FSDWT are combined with a logo to generate the watermark, which is embedded in the Least Significant Bit of specified pixels of the host. In \cite{ref23}, an efficient reversible image authentication method using improved PVO and LSB substitution techniques was presented. Instead of embedding block-independent authentication codes as in previous work, this scheme embeds the hashed value of block features, and a mechanism is provided to deal with overflow and underflow problems. The schemes of \cite{ref18, ref23} achieve high image quality and low computational complexity, but their main drawback is the inability to recover tampered regions.
An improved image tamper localization scheme with self-recovery based on chaotic maps was proposed in \cite{ref17}. The authentication bits of each 2$\times$2 image block are generated using chaotic maps; then, for each non-overlapping block, two different sets of recovery bits of lengths 5 and 3 are computed, and each set is embedded into a randomly selected distinct block. In \cite{ref21}, a fragile image watermarking scheme with pixel-wise recovery based on an overlapping embedding strategy was presented, using a block-wise mechanism for tampering localization and a pixel-wise mechanism for content recovery. Compared with other methods, this scheme achieves superior tampering recovery even for large tampering rates. To further improve recovery performance, the authors of \cite{ref22} proposed hierarchical recovery for tampered images based on watermark self-embedding, in which the higher MSB layers of tampered parts have higher priority to be corrected than the lower MSB layers. Hence, the quality of the recovered image is improved, especially for large tampering rates; experimental results demonstrate the effectiveness and superiority of this scheme over previous methods.
In \cite{ref1}, a fragile and blind dual watermarking scheme (TRLH) for image tamper detection and self-recovery based on the Lifting Wavelet Transform (LWT) and a halftoning technique was proposed. To improve the quality of the recovered image, two chances are provided by embedding a novel LWT-based digest and a halftone version of the image; to enhance the quality of the LWT-based digest, a new LSB$_{Rounding}$ technique was proposed. Experimental results prove the effectiveness, imperceptibility, and real-time capability of TRLH compared with the schemes reviewed above, especially in terms of the quality of the watermarked and recovered images and in terms of security. In addition, TRLH not only achieves high-quality restoration but also removes blocking artifacts and increases the accuracy of tamper localization thanks to the use of very small blocks.
Overall, the fragile methods proposed in recent years suffer from low visual quality of the watermarked and recovered images, low recovery rates under large tampering, weak localization, and poor security. Most schemes face a severe security threat because the watermark is independent of the image content, and many of them are vulnerable to vector-quantization, collage, and protocol attacks.
\subsection{Key contributions of TRLG}
In this paper, to achieve better visual quality for both the watermarked and the recovered image, to improve security, and to overcome the challenges mentioned above, an efficient fragile blind quad watermarking scheme for image tamper detection and recovery based on the Lifting Wavelet Transform (LWT) and the Genetic Algorithm (GA), named TRLG, is proposed. TRLG provides interesting extensions that address the most important limitations of previous state-of-the-art schemes.
In TRLG, the digests are classified into two categories: primary and secondary. The two primary digests are generated based on the LWT, and the two secondary digests are obtained with the Floyd kernel of the halftoning technique. LWT (Haar, integer) \cite{ref31, ref33} is used because it works with integer coefficients and requires less computation time and memory than the traditional wavelet transform. To improve and optimize the quality of the primary digests, GA \cite{ref4, ref3} is employed; using GA avoids exhaustive searching and allows the blocks of the image to be intelligently classified as flat or rough in terms of texture. Experimental results will show that the generated digests have better quality and produce fewer blocking artifacts in the recovered image than traditional digests obtained from averaged pixels, DCT coefficients, or MSB planes.
Furthermore, to increase the recovery rate and better guarantee the quality of the recovered image, TRLG uses a novel mapping strategy for shuffling the four digests. With this technique, the coefficients of each digest are embedded in the host image such that the maximum distance is achieved between the coefficients of the other digests and the initial positions of the original values. To enhance security and increase detection accuracy, a new chaotic map (CCS) is used; its irregular output is employed to shuffle the digests and to improve the security of the watermark. During the embedding process, the authentication bits and the digests are first combined to form the watermark data placed in the LSBs using LSB matching. Then, to resist special tamperings such as the vector-quantization, collage, and protocol attacks, the watermark is encrypted and permuted per block. Small non-overlapping blocks of size 2$\times$2 are used to improve localization accuracy.
Moreover, to improve the quality of the watermarked image and the security of the watermarks, the watermark embedded in each block of the image is encrypted with a key that is intelligently selected by GA. In other words, GA is applied to optimize and modify the watermark values of each block so as to decrease the difference between the watermarked and original values while achieving a high level of security. In general, applying optimization algorithms to watermarking techniques is practical and effective. Experimental comparisons with other state-of-the-art methods reveal that the proposed scheme attains excellent quality for the watermarked and recovered images as well as improved security.
In summary, TRLG makes three main contributions. First, to the best of the authors' knowledge, this is the first work that generates compact digests with quality optimization, and the first that provides more than two chances to recover tampered regions. Second, in TRLG, which is a fragile scheme, digest generation and watermark embedding are modeled as search and optimization problems. Third, chaotic maps and several keys are combined to enhance the security of the watermarking system.
\subsection{Road map}
The remainder of this paper is organized as follows: Section 2 briefly reviews background material for TRLG. Section 3 describes the design and implementation of TRLG in detail. Next, the experimental evaluation scenario and the comparison with fragile state-of-the-art methods are presented in Section 4. Finally, the conclusion and future scope of TRLG are given in Section 5.
\section{Background}
In this section, some background material for the subsequent section is presented. First, the Chebyshev-Chebyshev (CCS) chaotic map is introduced. Next, a brief review of the Genetic algorithm (GA) is described. Finally, a new inverse halftoning method is presented.
\subsection{Chebyshev-Chebyshev map (CCS)}
\label{sec:Chebyshev}
Chaotic maps are simple and efficient tools that are widely used in watermarking schemes for shuffling and encrypting the watermark.
The Logistic map is one of the most popular and simplest 1D chaotic maps used in this field. Its random sequence is generated by Eq. (\ref{eq:Logistic}):
\begin{equation}
x_{n+1} = \mu\times x_n\times (1-x_n)
\label{eq:Logistic}
\end{equation}
where $\mu\in(0, 4]$ is the control parameter and $x_0$ is the initial value of the map. This map has two main drawbacks: first, its chaotic range is limited to $[3.57, 4]$, and second, values of $\mu$ outside this range do not generate chaotic behavior \cite{ref2}.
To overcome these issues, a fusion of two chaotic maps, the Chebyshev and Logistic maps, with good performance was proposed as the Chebyshev-Chebyshev map (CCS) in \cite{ref2}. The CCS map is described by Eq. (\ref{eq:CCS}):
\begin{align}
x_{n+1} &= F(\mu, x_n, k) \nonumber \\
&= G(\mu, x_n) \times H(k) - \lfloor G(\mu, x_n) \times H(k) \rfloor \nonumber \\
G(\mu, x_n) &= \cos((\mu + 1) \times \arccos(x_{n})) \nonumber \\
H(k) &= 2^k, \quad 8 \leq k \leq 20
\label{eq:CCS}
\end{align}
where $\mu \in(0, 10]$ and $k$ are the control parameters, and $x_0$ is the initial value of the sequence. The chaotic performance of CCS is much better than that of a single map.
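To make the recursion concrete, the following Python sketch generates a CCS sequence directly from Eq. (\ref{eq:CCS}); the function name and the example parameters are illustrative and are not taken from \cite{ref2}.
\begin{verbatim}
import math

def ccs_sequence(x0, mu, k, length):
    """Generate `length` values of the CCS map of Eq. (eq:CCS).
    x0 : initial value in (0, 1)
    mu : control parameter in (0, 10]
    k  : control parameter with 8 <= k <= 20"""
    h = 2.0 ** k
    seq, x = [], x0
    for _ in range(length):
        g = math.cos((mu + 1.0) * math.acos(x))  # G(mu, x_n)
        x = g * h - math.floor(g * h)            # keep the fractional part
        seq.append(x)
    return seq

# Example: ten chaotic values, later sorted to obtain permutation positions
print(ccs_sequence(x0=0.37, mu=3.9, k=12, length=10))
\end{verbatim}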
\subsection{Genetic algorithm}
The Genetic Algorithm (GA) is one of the well-known optimization tools in artificial intelligence, introduced by Holland \cite{ref4, ref3}. It is a heuristic search algorithm based on the mechanisms of natural selection and genetics that finds globally good minimum or maximum solutions in a large search space. A GA-based optimization problem is modeled by defining the chromosome, the fitness function, and three main operators: selection, crossover, and mutation. The overall steps of GA are shown in Fig. \ref{fig:GA}.
The process starts with an initial population of chromosomes, each representing the variables of the problem as an encoded binary string. The initial population is selected randomly from the set of possible solutions, and the binary strings are adjusted to maximize or minimize the fitness values. A fitness function is used to measure the quality of each chromosome in the population; it should be chosen carefully according to the requirements of the optimization problem. GA then tries to produce further candidate solutions: the next generation is formed from the chromosomes whose fitness values are high enough to survive. To this end, three genetic operators recombine the genes to create new chromosomes over successive generations. The basic operators can be summarized as follows:
\begin{description}[font=$\bullet$\scshape\bfseries,leftmargin=0cm]
\item \textbf{Selection:} In this step, the fitter chromosomes are selected to generate the new population, much as in the natural world. A chromosome with a higher fitness value has a higher chance of surviving, while part of the low-fitness chromosomes is discarded in this natural-selection step.
\item \textbf{Crossover:} In this step, pairs of fit chromosomes among the survivors are chosen as parents to produce two new children; chromosomes with higher fitness values generate more children. A crossover point is selected between the first and last genes of the parent chromosomes, and two new children are generated by swapping the parts of each chromosome after the crossover point.
\item \textbf{Mutation:} Finally, to prevent GA from getting trapped in a local optimum and from converging too fast, the mutation operator is employed: some randomly chosen positions of the chromosomes are flipped by changing 0 to 1 and vice versa.
\end{description}
This GA cycle is repeated until the desired termination criterion is satisfied or the maximum number of iterations is reached.
\begin{figure}[t]
\center
\includegraphics[width=0.35\textwidth,trim=0cm 17.5cm 10cm 0.5cm,clip]{ga.pdf}
\caption{The block diagram of Genetic Algorithm (GA).}
\label{fig:GA}
\end{figure}
\subsection{Halftone technique}
Digital halftoning is a technique that generates a halftone version of an image by homogeneously distributing black and white pixels derived from the continuous-tone image \cite{ref28, ref25, ref24}. To generate the halftone version, the Floyd kernel (filter) is used \cite{ref33}; it is given in Eq. (\ref{eq:Filter}):
\begin{equation}
K = \frac{1}{16}\begin{bmatrix}
 & * & 7\\
3 & 5 & 1\\
\end{bmatrix}
\label{eq:Filter}
\end{equation}
where $*$ represents the current pixel.
One of the major applications of the halftone technique is inverse halftoning, in which a halftone version of the image is used to reconstruct its continuous-tone version. Several methods have been proposed for this task, but most of them produce an inverse image of low quality compared with the original. In TRLG, a novel and effective inverse halftoning technique based on a deep Convolutional Neural Network (CNN), proposed in \cite{ref5}, is used. To map a halftone image to a continuous-tone image, a deep CNN is used as a nonlinear transform; a pre-trained deep CNN acts as a feature extractor to build the objective function for training the transformation CNN. Experimental results show that it produces inverse-halftoned images of higher quality than WInHD \cite{ref6}, which is used in \cite{ref1}. For details of the halftoning and inverse-halftoning processes, refer to \cite{ref5}.
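As an illustration, the following Python sketch applies Floyd-Steinberg error diffusion with the kernel of Eq. (\ref{eq:Filter}) to one grayscale plane; in TRLG the same operation is applied to each resized colour plane separately. It is a minimal sketch, not the implementation used in the experiments.
\begin{verbatim}
import numpy as np

def floyd_steinberg_halftone(gray):
    """Binarize a grayscale plane (values 0..255) by diffusing the
    quantization error with the weights 7/16, 3/16, 5/16, 1/16."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0            # one digest bit per pixel
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
\end{verbatim}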
\begin{figure*}[t]
\center
\includegraphics[width=0.95\textwidth,trim=1cm 13.2cm 1cm 1cm,clip]{map.pdf}
\caption{The overall scheme of dividing digests. (a) Primary digest 2, (b) Secondary digest 1, (c) Secondary digest 2.}
\label{fig:map}
\end{figure*}
\section{Proposed method}
In this section, a fragile blind quad watermarking scheme for image tamper detection and recovery is proposed, which builds compact, quality-optimized digests based on the Lifting Wavelet Transform (LWT) and the Genetic Algorithm (GA). TRLG consists of two main phases, described in detail below:
\begin{description}[font=$\bullet$\scshape\bfseries,leftmargin=0cm]
\item\textbf{Generating and embedding watermark:} In this phase, four digests are first generated based on LWT and the Floyd kernel. Then, to improve the recovery rate and increase security, each digest is shuffled and arranged separately using the new chaotic map. Next, an authentication bit for each 2$\times$2 block is calculated from the relation between the pixels of the block and the digest bits that will be embedded in it. Finally, the digests and authentication bits are combined to form the watermark, which is then encrypted and embedded using the chaotic map, GA, and a modified LSB-matching technique. The block diagram of this phase is shown in Fig. \ref{fig:embedding}.
\item\textbf{Tamper detection and recovery:} In this phase, to verify the integrity of the watermarked image received over the communication channel, the watermark is first extracted and decrypted. Next, the tampered regions are marked by comparing the extracted and recomputed authentication bits. Finally, the four digests are reshuffled and reconstructed, and the tampered regions are recovered from their valid parts. Fig. \ref{fig:extracting} illustrates the block diagram of this phase.
\end{description}
\subsection{Generating and embedding watermark}
Let the cover image be denoted $host$, with size $M\times N$ (divisible by 4); TRLG is able to detect and recover 2$\times$2 modified blocks. [$R, G, B$] and [$Y, U, V$] represent the color components of $host$ in the RGB and YUV color spaces, respectively. If $host$ is a grayscale image, the chrominance components are meaningless and no further processing is needed for them. The procedure for generating and embedding the watermark is described in detail below:
\subsubsection{Generating digests}
As mentioned before, four digests are considered in TRLG to recover tampered regions. These digests are classified as primary and secondary digests. The two primary digests are generated based on LWT and GA. Also, the two secondary digests are obtained by using the halftoning technique.
The steps of generating primary digest are as follows:
\begin{enumerate}[1),itemsep=0mm]
\item The $Y$ component is resized to 50\% of original size, and a level of LWT is applied on the result to generate $LL$, $LH$, $HL$ and $HH$ bands.
\item The coefficients of each band are quantized by Eq. (\ref{eq:quantization}) (a short sketch of this step is given after the list):
\begin{align}
Coef_{i, j} &= sign(Coef_{i, j})\times \lfloor\frac{|Coef_{i, j}|}{\mu}\rfloor \nonumber\\
\forall & i \in [1, \frac{M}{4}], j \in [1, \frac{N}{4}]
\label{eq:quantization}
\end{align}
where $\mu (=2)$ and $Coef_{i, j}$ are quantization step and coefficients of wavelet bands, respectively.
\item In this step, a texture analysis is performed on each block so that the digest is generated intelligently for images of various types. The type of each 4$\times$4 block of $Y$ is classified into one of two classes, textured or flat, based on the Standard Deviation (STD) measure and GA. To this end, STD is first applied to $Y$ and the result is denoted $texture_{M\times N}$. Next, the optimal thresholds for separating the blocks are obtained by GA; the details of GA training are explained in the Thresholds Optimization subsection. At the end of GA training, a threshold matrix denoted $thresholds_{M \times N}$ is obtained, and the type of each block is marked as textured or flat by Eq. (\ref{eq:texture}):
\begin{align}
\Gamma_{i, j} &=
\begin{cases}
1 & \text{if } thresholds_{i, j} < texture_{i, j}\nonumber\\
0 & \text{otherwise}\\
\end{cases}\\
& \forall i \in [1, N], j \in [1,M]
\label{eq:texture}
\end{align}
where $\Gamma_{\rfrac{M}{4}\times \rfrac{N}{4}}$ ($\in$ [0, 1]) indicates the class of each block.
\item In this step, the coefficients of each band are modified according to Eq. (\ref{eq:modified}):
\begin{align}
LL_{i, j} &=
\begin{cases}
(LL_{i, j}+\vartheta)\&124_{(01111100)_{2}} & \text{if }\Gamma_{i, j} = 1\\
LL_{i, j} & \text{otherwise}\\
\end{cases} \nonumber \\
LH_{i, j} &=
\begin{cases}
(LH_{i, j}+\vartheta)\&60_{(00111100)_{2}} & \text{if }\Gamma_{i, j} = 1\\
(LH_{i, j}+\vartheta)\&28_{(00011100)_{2}} & \text{otherwise}\\
\end{cases} \nonumber \\
HL_{i, j} &=
\begin{cases}
(HL_{i, j}+\vartheta)\&60_{(00111100)_{2}} & \text{if }\Gamma_{i, j} = 1\\
(HL_{i, j}+\vartheta)\&28_{(00011100)_{2}} & \text{otherwise}\\
\end{cases} \nonumber \\
HH_{i, j} &=
\begin{cases}
(HH_{i, j}+\vartheta)\&56_{(00111000)_{2}} & \text{if }\Gamma_{i, j} = 1\\
(HH_{i, j}+\vartheta)\&28_{(00011100)_{2}} & \text{otherwise}\\
\end{cases} \nonumber \\
& \forall i \in [1, \rfrac{M}{4}], j \in [1, \rfrac{N}{4}]
\label{eq:modified}
\end{align}
where $\vartheta$ is the parameter of the $LSB_{Rounding}$ technique proposed in TRLH \cite{ref1}. This technique reduces the difference between the two corresponding coefficients, which increases the quality of the image digest: if the two (respectively three) LSBs of a coefficient that must be discarded are zeros, $\vartheta$ is set to 2 (respectively 4).
In total, 20 bits (19 bits describing the coefficients of the bands and 1 bit describing the type of the corresponding block) represent the $Y$ part of the digest.
\item If $host$ is a color image, the $U$ and $V$ components are resized to 25\% of their original size. Then, the values of the components are modified and updated by Eq. (\ref{eq:colormodified}):
\begin{align}
U_{i, j} &= (U_{i, j}\&254_{(11111110)_{2}}) >> 1 \nonumber \\
V_{i, j} &= (V_{i, j}\&254_{(11111110)_{2}}) >> 1 \nonumber \\
\forall i &\in [1, \rfrac{M}{4}], j \in [1, \rfrac{N}{4}]
\label{eq:colormodified}
\end{align}
where $>>$ is bitwise right shift operation.
In total, 14 bits are obtained to describe the 4$\times$4 block of the chrominance components.
\end{enumerate}
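The quantization of step 2 can be summarized by the short Python sketch below; it is a minimal illustration of Eq. (\ref{eq:quantization}) with the quantization step $\mu = 2$, not the full digest-generation code.
\begin{verbatim}
import numpy as np

def quantize_band(coef, mu=2):
    """Eq. (eq:quantization): signed quantization of a wavelet band
    with quantization step mu (element-wise on a NumPy array)."""
    coef = np.asarray(coef, dtype=np.float64)
    return np.sign(coef) * np.floor(np.abs(coef) / mu)

# Example: a few LL coefficients
print(quantize_band([7.3, -5.1, 2.0, -1.9]))   # -> [ 3. -2.  1. -0.]
\end{verbatim}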
Finally, 34 bits (20 bits for grayscale images) are used to represent each 4$\times$4 block of the color or gray image. Note that although the digest block size is 4$\times$4, in the inverse procedure TRLG recovers each block with 2$\times$2 precision. In other words, the digest proposed in TRLG has markedly better quality than traditional methods based on 2$\times$2 or larger blocks; this claim is verified in Sec. \ref{sec:Experimental}.
At the end, let the result be denoted $digest_{prim}$, which includes [$\Gamma, LL, LH, HL, HH, U, V$]$_{\rfrac{M}{4}\times\rfrac{N}{4}}$.
The novel primary digest proposed in TRLG is named DLG.
\begin{figure*}[t]
\center
\includegraphics[width=0.95\textwidth,trim=1cm 11.5cm 1cm 2cm,clip]{organize.pdf}
\caption{The structure of embedding 8 bits in each 2$\times$2 block of the host. (a) red plane (grayscale), (b) green plane, (c) blue plane. L and $\text{\{U, V\}}$ denote luminance and chrominance, T denotes the texture bit,
S denotes the secondary digests, and A denotes the authentication bit.}
\label{fig:organize}
\end{figure*}
Next, to generate the secondary digests, the Floyd kernel is used: $R$, $G$, and $B$ are resized to 50\% of their original size and the Floyd kernel is applied to each plane separately. In total, 3 halftone bits (one bit for grayscale images) are obtained for each 2$\times$2 block.
Let the result be denoted $digest_{sec}$.
\noindent\textbf{Thresholds Optimization:} As can be seen, the thresholding step plays an important role in the DLG algorithm: the key challenge is how to classify blocks by texture so as to obtain an efficient digest of good quality. Therefore, to guarantee the quality of the generated digest and to select optimal thresholds, GA, a well-known modern optimization algorithm, is employed. To this end, $Y$ is divided into non-overlapping blocks of size $W\times H$ (128$\times$128). The GA-based digest generation is summarized in three steps:
\begin{enumerate}[1),itemsep=0mm]
\item First, the initial thresholds population is randomly created and converted into chromosomes. Next, the digest of the current block is generated by using the solutions in population.
\item The fitness function is evaluated, for each candidate solution, between the current block of $Y$ and the reconstruction of the primary digest that corresponds to it, using Eq. (\ref{eq:fitness1}):
\begin{equation}
\text{Fitness function} = \text{SSIM}
\label{eq:fitness1}
\end{equation}
where SSIM is the Structural Similarity Index.
\item Finally, GA operators include selection, crossover, and mutation are applied on each chromosome to generate next generation.
\end{enumerate}
These steps are repeated for all blocks until a predefined condition is satisfied or a fixed number of generations is exceeded. The optimal thresholds are denoted $threshold_{\rfrac{M}{W}\times \rfrac{N}{H}}$; finally, $threshold$ is resized to the original size of $host$.
\subsubsection{Scrambling digests}
As mentioned above, TRLG uses four digests for tampering recovery. Therefore, four schemes are designed to shuffle the digests and to place each part of the four digests at the maximum possible distance from its original location in $host$ and from the corresponding parts of the other digests. With these strategies, the security and the recovery rate are increased: even if more than half of $host$ is manipulated, TRLG can still recover the tampered region efficiently. In Sec. \ref{sec:Experimental} we will see that TRLG recovers tampered parts with remarkable quality even when up to 80\% of the watermarked image is manipulated.
First, the coefficients of the primary digests are shuffled, using the chaotic map of Sec. \ref{sec:Chebyshev}, to improve security. The shuffling steps of the primary digests are explained in detail below:
\begin{enumerate}[1),itemsep=0mm]
\item Two copies of $digest_{prim}$ are made and named $\dot{d}_p$ and $\ddot{d}_p$.
\item Two sequences $\chi_1$ and $\chi_2$ are generated based on Eq. (\ref{eq:CCS}) with $key_1$ and $key_2$ by running ${\frac{M}{4}\times \frac{N}{4}}$ times.
\item The permutation position matrices $\chi^{\prime}_1$ and $\chi^{\prime}_2$ are obtained by sorting $\chi_1$ and $\chi_2$ in ascending order.
\item Each plane of $\dot{d}_p$ and $\ddot{d}_p$ generated in the previous section, i.e., [$\Gamma, LL, LH, HL, HH, U, V$], is converted into a 1D matrix:
\begin{align*}
\dot{d}_p &= \{\dot{d}^1_p, \dot{d}^2_p, ..., \dot{d}_p^{{\frac{M}{4}\times \frac{N}{4}}} \}\\
\ddot{d}_p &= \{\ddot{d}^1_p, \ddot{d}^2_p, ..., \ddot{d}_p^{{\frac{M}{4}\times \frac{N}{4}}} \}
\end{align*}
\item In this step, the shuffled digest matrices $\dot{d}_p^{s}$ and $\ddot{d}_p^{s}$ are obtained by Eq. (\ref{eq:permute1}) (a short sketch of this permutation and its inverse is given after the list):
\begin{align}
\dot{d}_p^{s}(i) &=\dot{d}_p(\chi^{\prime}_1(i)) \nonumber \\
\ddot{d}_p^{s}(i) &= \ddot{d}_p(\chi^{\prime}_2(i)) \nonumber \\
\forall i & \in [1,{\rfrac{M}{4}\times \rfrac{N}{4}}]
\label{eq:permute1}
\end{align}
\item Convert $\dot{d}_p^{s}$ and $\ddot{d}_p^{s}$ back to 2D matrices of size ${\frac{M}{4}\times \frac{N}{4}}$.
\end{enumerate}
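A minimal Python sketch of steps 2--6 follows; it uses \texttt{argsort} to obtain the permutation positions $\chi^{\prime}$ and, purely for the demonstration, substitutes a random sequence for the CCS output of Sec. \ref{sec:Chebyshev}.
\begin{verbatim}
import numpy as np

def chaotic_permutation(flat_plane, chi):
    """Shuffle one flattened digest plane with the positions obtained
    by sorting the chaotic sequence chi (Eq. (eq:permute1))."""
    order = np.argsort(chi)                  # permutation positions chi'
    return np.asarray(flat_plane)[order], order

def inverse_chaotic_permutation(shuffled, order):
    """Undo the shuffle during recovery (cf. Eq. (eq:reshuffling))."""
    restored = np.empty_like(shuffled)
    restored[order] = shuffled
    return restored

# Example with an illustrative 8-element plane
plane = np.arange(8)
chi = np.random.default_rng(0).random(8)     # stands in for a CCS sequence
shuffled, order = chaotic_permutation(plane, chi)
assert (inverse_chaotic_permutation(shuffled, order) == plane).all()
\end{verbatim}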
Next, to improve the recovery rate, the Shift-aside technique \cite{ref1} is used to reorder the coefficients of $\dot{d}_p^{s}$ once more. Accordingly, if the right or the left half of $host$ is completely tampered, the recovery phase can still recover the tampered region from the data embedded in the other half. In addition, each half is divided into two separate parts, which makes the recovery phase more effective when the tampered region lies at the center of $host$. These operations are applied to all planes of $\dot{d}_p^{s}$, i.e., [$\Gamma, LL, LH, HL, HH, U, V$], and $\dot{d}_p^{s}$ is updated accordingly.
To reorder the coefficients of $\ddot{d}_p^{s}$, a new technique called the Mirror-aside operation is proposed in TRLG. In the Mirror-aside scheme, the coefficients of the top and bottom halves of the digest are swapped: first, $\ddot{d}_p^{s}$ is divided into four non-overlapping blocks, and each block is divided into four non-overlapping blocks again. Fig. \ref{fig:map}(a) illustrates this division. As in the Shift-aside scheme, the locations determined by CCS are reordered and placed into the corresponding quarter. Finally, the reordered $\ddot{d}_p^{s}$ is formed and updated.
Subsequently, to reorder the coefficients (bits) of the secondary digest, two copies of $digest_{sec}$ are made, denoted $\dot{d}_s$ and $\ddot{d}_s$, and their coefficients are reordered according to Fig. \ref{fig:map}(b) and Fig. \ref{fig:map}(c), respectively; we call this strategy Partner-block. Unlike the primary digests, the secondary digests are not shuffled by any chaotic map. At the end, $\dot{d}_s^s$ and $\ddot{d}_s^s$ are obtained. As can be seen, the shuffling and reordering schemes for all digests in TRLG are designed to achieve the maximum recovery rate under large tampering rates.
\subsubsection{Generating authentication bits}
\label{sec:Generatingauthenticationbits}
In TRLG, one bit is used to authenticate each 2$\times$2 block. The process of generating the authentication bits is explained in detail below:
\begin{enumerate}[1),itemsep=0mm]
\item In the first step, each band of $\dot{d}_p^{s}$ and $\ddot{d}_p^{s}$, i.e., [$\Gamma, LL, LH, HL, HH, U, V$], is converted into binary form. Next, the results are combined by Eq. (\ref{eq:digest}):
\begin{align}
\bar{d}_p{(i, j)} &=\overline{\dot{d}_p^{s}(i, j, k)} \uplus \overline{\ddot{d}_p^{s}(i, j, k)} \nonumber \\
\forall i \in [1, &\rfrac{M}{4}], j \in [1, \rfrac{N}{4}], k \in [1, 7]
\label{eq:digest}
\end{align}
where $k$ and $\uplus$ denote the index of each band in the primary digest and the string concatenation operator, respectively. In this equation, $\bar{d}_p$ is a binary matrix of size $\frac{M}{4}\times \frac{N}{4}$ in which each cell contains the 68 bits of information of the two primary digests.
\item The bits of the primary digests are arranged into their designated positions in each 4$\times$4 block according to Fig. \ref{fig:organize}:
\begin{align*}
\Delta_{(i,j,1)} &= {\bar{d}_p^k{(i, j)}}, \forall k \in [1, 2, ..., 20]\\
\Delta_{(i,j,2)} &= {\bar{d}_p^k{(i, j)}}, \forall k \in [21, 22, ..., 44]\\
\Delta_{(i,j,3)} &= {\bar{d}_p^k{(i, j)}}, \forall k \in [45, 46, ..., 68]\\
\forall i &\in [1, \rfrac{M}{4}], j \in [1, \rfrac{N}{4}]
\end{align*}
where $\Delta_{(i, j, p)}$, $p \in [1, 2, 3]$, holds the primary bits of each plane.
\item Next, the primary bits to be embedded in each $2\times2$ block of every plane are denoted as:
\begin{align*}
\Omega_{(i, j)} &= \begin{bmatrix}
\omega_1&\omega_2\\
\omega_3&\omega_4
\end{bmatrix}\\
\forall i \in & [1,\rfrac{M}{4}], j \in [1, \rfrac{N}{4}]
\end{align*}
where $\Omega$ and $\omega$ denote a $4\times4$ block and its inner $2\times2$ sub-blocks, respectively, and $\omega$ is calculated by Eq. (\ref{eq:form}):
\begin{align}
\label{eq:form}
&\hspace{1cm}\Omega^n_{(i, j)}=\omega_n = \Delta^k_{(i,j,1)} \uplus \Delta^l_{(i,j,2)} \uplus \Delta^l_{(i,j,3)}\nonumber\\
&\hspace{1.5cm}\forall i \in [1,\rfrac{M}{4}], j \in [1, \rfrac{N}{4}]\\
&\begin{cases}
\forall k \in [1, 2, ..., 5], l \in [1, 2, ..., 6] & \quad \text{if } n = \text{1},\\
\forall k \in [6, 7, ..., 10], l \in [7, 8, ..., 12] & \quad \text{if } n = \text{2},\\
\forall k \in [11, 12, ..., 15], l \in [13, 14, ..., 18] & \quad \text{if } n = \text{3},\\
\forall k \in [16, 17, ..., 20], l \in [19, 20, ..., 24] & \quad \text{if } n = \text{4}
\nonumber
\end{cases}
\end{align}
where $\Omega^n_{(i, j)}$ and $\uplus$ denote the $n$th sub-block of $\Omega_{(i, j)}$ and the string concatenation operator, respectively.
At the end, $\Phi$ with size of $\frac{M}{2}\times\frac{N}{2}$ is generated as:
\begin{align*}
\Phi=
\begin{pmatrix}
\Omega^1_{1, 1}&\Omega^2_{1, 1} & \Omega^1_{1, 2}&\cdots&\Omega^1_{1, \frac{N}{4}}&\Omega^2_{1, \frac{N}{4}}\\
\Omega^3_{1, 1}&\Omega^4_{1, 1} & \Omega^3_{1, 2}&\cdots&\Omega^3_{1, \frac{N}{4}}&\Omega^4_{1, \frac{N}{4}}\\
\Omega^1_{2, 1}&\Omega^2_{2, 1} & \Omega^1_{2, 2}&\cdots&\Omega^1_{2, \frac{N}{4}}&\Omega^2_{2, \frac{N}{4}}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\
\Omega^1_{\frac{M}{4}, 1}&\Omega^2_{\frac{M}{4}, 1}&\Omega^1_{\frac{M}{4}, 2}&\cdots&\Omega^1_{\frac{M}{4}, \frac{N}{4}}&\Omega^2_{ \frac{M}{4}, \frac{N}{4}}\\
\Omega^3_{\frac{M}{4}, 1}&\Omega^4_{\frac{M}{4}, 1}&\Omega^3_{\frac{M}{4}, 2}&\cdots&\Omega^3_{\frac{M}{4}, \frac{N}{4}}&\Omega^4_{\frac{M}{4}, \frac{N}{4}}
\end{pmatrix}
\end{align*}
where each element of $\Phi$ contains 17 bits (5 bits for gray images) belonging to the data of the primary digests.
\item In this step, the authentication bit of each $2\times2$ block is calculated by Eq. (\ref{eq:auth}) (a short sketch of this computation is given after the list):
\begin{align}
\label{eq:auth}
\tilde{A}_{i,j} &=\delta_{i,j} \oplus \xi_{i,j}, \forall i \in [1, \rfrac{M}{2}], j \in [1, \rfrac{N}{2}] \nonumber\\
\delta_{i,j} &= [\displaystyle\sum_{n=1}^{k} \Phi^n_{i,j}\bmod2] \oplus [\displaystyle\sum_{n=1}^{len}\overline{\gamma^n}\bmod 2] \nonumber\\
\xi_{i,j} &= \dot{d}_s^s{(i, j)} \oplus \ddot{d}_s^s{(i, j)}
\end{align}
where $\overline{\gamma}$ is a binary form of $\gamma$ obtains by Eq. (\ref{eq:index}):
\begin{align}
\gamma &= \zeta_i \oplus \zeta_{i-1}, \forall i \in [1, 2, ..., len] \nonumber\\
\zeta &= f(\Phi_{i,j})
\label{eq:index}
\end{align}
Here, $f$ is the function
\begin{align*}
f(\bar{\chi}) &=
\begin{cases}
null & \quad \text{if } \bar{\chi}_i \text{ = 0}\\
i & \quad \text{if } \bar{\chi}_i \text{ = 1}
\end{cases}
\end{align*}
which returns the indices of the elements of $\bar{\chi}$ that are equal to 1.
\end{enumerate}
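For clarity, the following Python sketch shows how a single authentication bit of Eq. (\ref{eq:auth}) is assembled from parities; the helper names are illustrative, and the construction of the index-difference string $\gamma$ is simplified to whatever bit list the caller supplies.
\begin{verbatim}
def parity(bits):
    """Parity (sum mod 2) of an iterable of 0/1 values."""
    return sum(bits) % 2

def authentication_bit(primary_bits, gamma_bits, s1_bit, s2_bit):
    """Sketch of Eq. (eq:auth): combine the parity of the primary digest
    bits Phi, the parity of the binary index-difference string gamma,
    and the two secondary digest bits into one authentication bit."""
    delta = parity(primary_bits) ^ parity(gamma_bits)
    xi = s1_bit ^ s2_bit
    return delta ^ xi

# Illustrative call with made-up bits
print(authentication_bit([1, 0, 1, 1, 0], [0, 1, 1], s1_bit=1, s2_bit=0))
\end{verbatim}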
\begin{figure*}[t]
\center
\includegraphics[width=0.95\textwidth,trim=9cm 12cm 10cm 9cm,clip]{embedding.pdf}
\caption{Block diagram of generating and embedding watermarks.}
\label{fig:embedding}
\end{figure*}
\subsubsection{Combining watermarks bits}
After generating and shuffling the primary and secondary digests and computing the authentication bits, all bits are organized for embedding in $host$. In TRLG, 8 bits are embedded into the 2 LSBs of each 2$\times$2 block. The bit arrangement of the four digests and the authentication bit is shown in Fig. \ref{fig:organize}. Assuming $host$ is a color image, 20 bits are required for luminance (19+1 bits) and 14 bits for chrominance; to provide a second chance for the primary digest, 68 bits of space must therefore be reserved. In addition, 24 bits are required to embed two copies of the secondary digest, i.e., six bits per 2$\times$2 block. In total, 92 digest bits and 4 authentication bits are combined and embedded into the 2 LSBs of each 4$\times$4 block across the planes. For gray images, 20 bits (19+1 bits) are used for the primary digest and 8 bits are reserved for the two copies of the secondary digest, so 28 digest bits and 4 authentication bits are combined for embedding into the 2 LSBs of each 4$\times$4 block in the next phase.
Hence, let $\Psi^k_{i, j}$ denote the watermark to be embedded in each $2\times2$ block, obtained by Eq. (\ref{eq:watermark}):
\begin{align}
\Psi^k_{i, j}=&
\begin{cases}
\Phi^n_{i, j, k} \uplus \dot{d}_s^s{(i, j, k)} \uplus \ddot{d}_s^s{(i, j, k)} \uplus \tilde{A}_{i,j} &\quad \text{if } k= \text{1}\\
\Phi^n_{i, j, k} \uplus \dot{d}_s^s{(i, j, k)} \uplus \ddot{d}_s^s{(i, j, k)} &\quad \text{otherwise }\nonumber\\
\end{cases}\nonumber\\
&\begin{cases}
\forall n \in [1, 2, ..., 5] & \quad \text{if } k= \text{1},\nonumber\\
\forall n \in [6, 7, ..., 11] & \quad \text{if } k = \text{2},\nonumber\\
\forall n \in [12, 13, ..., 17] & \quad \text{if } k = \text{3}
\end{cases}\\
\forall i &\in [1, \rfrac{M}{2}], j \in [1, \rfrac{N}{2}], k \in [1, 2, 3]
\label{eq:watermark}
\end{align}
where $k$ and $\uplus$ denote the plane index and the string concatenation operator, respectively. At the end of this phase, the 8 bits to be embedded into each 2$\times$2 block are encapsulated in the corresponding element of $\Phi^n$, ready for the next phase.
\subsubsection{Encrypting and embedding watermark}
In this phase, the watermark of each block is first made dependent on the content of the current block and its neighbors. Thanks to this strategy, TRLG is able to detect security tamperings based on the collage, vector-quantization, and protocol attacks.
The details of this strategy are explained below:
\begin{enumerate}[1),itemsep=0mm]
\item Two sequences $\chi_1$ and $\chi_2$ are generated based on Eq. (\ref{eq:CCS}) with $key_3$ and $key_4$ by running ${\frac{M}{2}\times \frac{N}{2}}$ times.
\item The candidate pixel $p'_c$ in each blocks can be calculated using Eq. (\ref{eq:candidate1}):
\begin{equation}
p'_c(i) = (\lfloor\chi_1(i)\times10^{14}\rfloor \bmod 4) + 1, \forall i
\label{eq:candidate1}
\end{equation}
Next, the candidate plane $p''_c$ in each blocks can be calculated using Eq. (\ref{eq:candidate2}):
\begin{equation}
p''_c(i) = (\lfloor\chi_2(i)\times10^{14}\rfloor \bmod 3) + 1, \forall i
\label{eq:candidate2}
\end{equation}
Now, $p'_c$ and $p''_c$ are converted into 2D matrix with size of ${\frac{M}{2}\times \frac{N}{2}}$.
\item The candidate version of $host$ is obtained by Eq. (\ref{eq:candidate3}):
\begin{align}
\label{eq:candidate3}
h_c(i, j) &= host^{p''_c{(i, j)}}_{p'_c{(i, j)}}\& 252_{(11111100)_{2}} \nonumber\\
\forall i &\in [1,\rfrac{M}{2}], j \in [1, \rfrac{N}{2}]
\end{align}
This process should be repeated for all 2$\times$2 blocks.
\item In this step, the relations between the 2$\times$2 sub-blocks of each 4$\times$4 block of $host$ are computed. To this end, $h_c$ of size $\frac{M}{2}\times\frac{N}{2}$ is partitioned into $\frac{M}{4}\times\frac{N}{4}$ non-overlapping blocks of 2$\times$2 pixels, and the $i$th block $h^i_c$ is expressed as:
\begin{align*}
h^i_c = \begin{bmatrix}
h^i_c(1)&h^i_c(2)\\
h^i_c(3)&h^i_c(4)
\end{bmatrix}, \forall i \in [1, 2, 3, ..., \rfrac{M}{2}\times\rfrac{N}{2}]
\end{align*}
Now, the relations between the pixels of $h^i_c$ are calculated by Eq. (\ref{eq:relations}) (a short sketch of this computation is given after the list):
\begin{align}
\label{eq:relations}
R' &= \lfloor\arctan(\frac{h^i_c(1) - h^i_c(4)}{h^i_c(2) - h^i_c(3)}) \times 10^{14}\rfloor \bmod 256 \nonumber\\
R'' &= \lfloor\arctan(\frac{h^i_c(1) - h^i_c(3)}{h^i_c(2) - h^i_c(4)}) \times 10^{14}\rfloor \bmod 256 \nonumber\\
R''' &= \lfloor\arctan(\frac{h^i_c(1) - h^i_c(2)}{h^i_c(3) - h^i_c(4)}) \times 10^{14}\rfloor \bmod 256 \nonumber\\
R'''' &= \lfloor \text{DCT}(h^i_c)_\text{DC} \times 10^{14}\rfloor \bmod 256
\end{align}
\item Finally, the content-dependent watermark of each 2$\times$2 block is obtained by Eq. (\ref{eq:relwatermark}):
\begin{align}
\label{eq:relwatermark}
\Psi'^k_{i, j} &= R'_{i, j} \oplus R''_{i, j} \oplus R'''_{i, j} \oplus R''''_{i, j} \oplus \Psi^k_{i, j}\nonumber\\
\forall i &\in [1, \rfrac{M}{2}], j \in [1, \rfrac{N}{2}], k \in [1, 2, 3]
\end{align}
\end{enumerate}
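A hedged Python sketch of Eq. (\ref{eq:relations}) for one candidate 2$\times$2 block is given below; the guard against division by zero and the use of the orthonormal 2$\times$2 DCT (whose DC term equals half the pixel sum) are assumptions made for the illustration.
\begin{verbatim}
import math
import numpy as np

def block_relations(b):
    """Relations R', R'', R''', R'''' of Eq. (eq:relations) for a flat
    array b = [h(1), h(2), h(3), h(4)] of masked candidate pixels."""
    def rel(num, den):
        ratio = num / den if den != 0 else 0.0    # assumed zero-division guard
        return int(math.atan(ratio) * 10**14) % 256
    r1 = rel(b[0] - b[3], b[1] - b[2])
    r2 = rel(b[0] - b[2], b[1] - b[3])
    r3 = rel(b[0] - b[1], b[2] - b[3])
    dc = np.sum(b) / 2.0                          # DC term of the 2x2 DCT
    r4 = int(dc * 10**14) % 256
    return r1, r2, r3, r4

# Illustrative candidate block (already masked with 252)
print(block_relations(np.array([120, 124, 128, 132], dtype=float)))
\end{verbatim}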
Next, to improve security, guarantee the originality of the watermark, and prevent the arrangement and values of the bits from being predictable, further processing is applied to $\Psi'^k_{i, j}$. The watermark bits are encrypted and permuted according to the following steps:
\begin{enumerate}[1),itemsep=0mm]
\item Two sequences $\chi_1$ and $\chi_2$ are generated based on Eq. (\ref{eq:CCS}) with $key_5$ and $key_6$ by running ${\frac{M}{2}\times \frac{N}{2}}$ times.
\item The sequence of secret values to encrypt watermark bits is calculated by using Eq. (\ref{eq:encrypt}):
\begin{equation}
s'_v(i) = \lfloor\chi_1(i)\times10^{14}\rfloor \bmod 256, \forall i
\label{eq:encrypt}
\end{equation}
Next, the sequence of secret values to permute watermark bits is computed based on Eq. (\ref{eq:permute}):
\begin{equation}
s''_v(i) = (\lfloor\chi_2(i)\times10^{14}\rfloor \bmod 8) + 1, \forall i
\label{eq:permute}
\end{equation}
Finally, $s'_v$ and $s''_v$ are converted into 2D matrix with size of ${\frac{M}{2}
\times \frac{N}{2}}$.
\item In this step, the permuted and encrypted watermark is obtained according to Eq. (\ref{eq:pe}) (a short sketch of this step is given after the list):
\begin{equation}
\label{eq:pe}
\Psi''^k_{i, j} = f(\Psi'^k_{i, j} \oplus s'_v(i, j), s''_v(i, j)), \forall i, j, k
\end{equation}
where $\oplus$ is the exclusive-or operator and $f$ is the permutation function that reorders the bits of the watermark according to Eq. (\ref{eq:pe1}):
\begin{equation}
f(v, p) =v[f'(p)]
\label{eq:pe1}
\end{equation}
where $f'$ is a 1D simple mapping sequence generator algorithm \cite{ref7}, which calculated by Eq. (\ref{eq:pe2}):
\begin{equation}
f'(\chi_i) = [(\chi_{i-1} \times k) \bmod N)]+1, \forall i
\label{eq:pe2}
\end{equation}
where $\chi_i \in [1, N]$, $k$ is a secret key, and $N$ is the total number of bits ($k$ = 13, $N$ = 8).
\end{enumerate}
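A minimal Python sketch of Eqs. (\ref{eq:pe})--(\ref{eq:pe2}) is shown below. It assumes that the per-block value $s''_v$ seeds the mapping sequence of Eq. (\ref{eq:pe2}) (an interpretation of the text, not a statement from the original implementation); with $k=13$ and $N=8$ the mapping visits every bit position exactly once, so the permutation is invertible.
\begin{verbatim}
def bit_positions(seed, key=13, nbits=8):
    """Positions visited by chi_i = ((chi_{i-1} * key) mod nbits) + 1
    (Eq. (eq:pe2)), starting from the per-block seed in [1, nbits]."""
    positions, chi = [], seed
    for _ in range(nbits):
        chi = ((chi * key) % nbits) + 1
        positions.append(chi - 1)          # 0-based bit index
    return positions

def encrypt_and_permute(word, s_enc, s_perm):
    """Eq. (eq:pe): XOR the 8-bit watermark word with the chaotic secret
    value s'_v, then reorder its bits with the permutation seeded by s''_v."""
    xored = word ^ s_enc
    out = 0
    for dst, src in enumerate(bit_positions(s_perm)):
        out |= ((xored >> src) & 1) << dst
    return out

# Illustrative call: word = 0b10110010, s'_v = 0x5A, s''_v = 3
print(bin(encrypt_and_permute(0b10110010, 0x5A, 3)))
\end{verbatim}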
As stated earlier, to improve the quality of the watermarked image and the security of the watermarks, the watermark embedded in each block of $host$ is encrypted with a key, $key_7$, that is intelligently selected by GA. This strategy decreases the difference between the watermark and the original values (LSBs) while achieving a high level of security. The details of the GA training are explained in the Watermark Optimization subsection. At the end of the GA training, $key_7$ is obtained, and the watermark bits are encrypted again by Eq. (\ref{eq:encrypted2}):
\begin{equation}
\Psi'''^k_{i, j} = \Psi''^k_{i, j} \oplus key_7, \forall i, j, k
\label{eq:encrypted2}
\end{equation}
where $\oplus$ is exclusive-or operator.
Finally, the 24 watermark bits (8 bits for gray images) are embedded into the 2 LSBs of each 2$\times$2 non-overlapping block of $host$ in each plane. To decrease the difference between the watermarked and original pixels, TRLG uses a modified LSB-matching technique that takes a statistical parameter of the block into account. This strategy is applied to all pixels in every plane, except the candidate pixels and planes [$p'_c, p''_c$] chosen in the encryption phase. Algorithm \ref{ALG:LSBmathcing} gives the pseudo code of this technique.
\begin{algorithm}[t]
\caption{Modified LSB-Matching technique.}
\label{ALG:LSBmathcing}
\textbf{Input:} 1$\times$4 block, watermark (8 Bits)\\
\textbf{Output:} watermarked block
\begin{algorithmic}[1]
\Procedure{Matching}{$block$, $\Psi'''$}
\State $initial_{std}$ = STD$(block)$
\State $shift = 0$
\State $block_{w} = block$
\For{$i = 1$ to ${4}$}
\State $f = 1$
\If {$initial_{std} < \text{STD}(block_{w}$)}
\State $f = -1$
\EndIf
\State $\omega = (\Psi''' >> shift) \& 3$
\For{$j = 0$ to ${3}$}
\State $\xi = j \times f$
\State $p = (block(i) + \xi)\ \&\ 3$
\If {$p =\omega$}
\State $block_{w}(i) = block(i) + \xi$
\State $break$
\EndIf
\EndFor
\State $shift = shift + 2$
\EndFor
\State \Return $block_{w}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Finally, the watermarked image $host_w$ is obtained and can be transferred over communication channels.
\noindent\textbf{Watermark Optimization:} To maximize the similarity between the watermark bits and the LSBs of the pixels of each 2$\times$2 block, and to further enhance security, TRLG employs GA. This strategy effectively balances the difference between the watermark and the original bits: GA finds the optimal parameter $key_7$ that minimizes this difference. The overall GA-based watermark optimization is summarized below:
\begin{enumerate}[1),itemsep=0mm]
\item In the first step, the initial key population is randomly created and converted into chromosomes. Next, the watermarked image is generated for each solution in the population.
\item The fitness function value is evaluated between the $host$ and $host_w$ by Eq. (\ref{eq:fitness}):
\begin{equation}
\text{Fitness function} = \text{PSNR}
\label{eq:fitness}
\end{equation}
where PSNR is Peak Signal to Noise Ratio.
\item In the last step, the operators of selection, crossover, and mutation are applied on each chromosome to generate next generation.
\end{enumerate}
These steps are continued until a predefined condition is satisfied or a fixed number of generations is exceeded. Finally, the optimal key $key_7$ is obtained.
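The following Python sketch illustrates the key-selection loop with the PSNR fitness of Eq. (\ref{eq:fitness}); a plain random, elitist search stands in for the full GA, and \texttt{embed\_fn(host, key)}, which returns the watermarked image for a given key, is an assumed callable rather than part of TRLG.
\begin{verbatim}
import numpy as np

def psnr(a, b):
    """Peak Signal-to-Noise Ratio between two 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def search_key7(host, embed_fn, candidates=256, seed=0):
    """Evaluate candidate 8-bit keys by the PSNR fitness and keep the
    best one; a simplified stand-in for the GA-based selection of key_7."""
    rng = np.random.default_rng(seed)
    best_key, best_fit = None, -np.inf
    for key in rng.integers(0, 256, size=candidates):
        fit = psnr(host, embed_fn(host, int(key)))
        if fit > best_fit:
            best_key, best_fit = int(key), fit
    return best_key, best_fit
\end{verbatim}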
\begin{figure*}[t]
\center
\includegraphics[width=0.95\textwidth,trim=13cm 9cm 12cm 9cm,clip]{extracting.pdf}
\caption{Block diagram of extracting watermark and recovering tampered regions.}
\label{fig:extracting}
\end{figure*}
\subsection{Tamper detection and recovery}
After receiving the suspicious watermarked image $host_s$ over the public communication channel, the tampered regions are first located and marked with 2$\times$2 accuracy and are then recovered from the valid parts of the four digests embedded in $host$. The tamper detection and recovery procedure is described in detail below:
\subsubsection{Extracting and decrypting watermark}
In this phase, the watermark bits are extracted from the two LSBs of each 2$\times$2 block of $host_s$ and are then decrypted and de-permuted to obtain the initial watermark bits. These processes are explained in detail below:
\begin{enumerate}[1),itemsep=0mm]
\item First, the watermark bits are extracted from each 2$\times$2 block; denote the result $\Psi'''^k_{i, j}$.
\item The watermark bits are decrypted based on $key_7$ by Eq. (\ref{eq:decrypted2}):
\begin{equation}
\Psi''^k_{i, j} = \Psi'''^k_{i, j} \oplus key_7, \forall i, j, k
\label{eq:decrypted2}
\end{equation}
where $\oplus$ is exclusive-or operator.
\item To restore the watermark bits to their initial positions and values, the embedding-side process is performed in the inverse direction (a short sketch of this inversion is given after the list). The watermark is reconstructed by Eq. (\ref{eq:decrypted1}):
\begin{equation}
\Psi'^k_{i, j} = f(\Psi''^k_{i, j} \oplus s'_v(i, j), s''_v(i, j)), \forall i, j, k
\label{eq:decrypted1}
\end{equation}
where $\oplus$ is the exclusive-or operator, \{$s'_v$, $s''_v$\} are generated from $key_5$ and $key_6$ by Eqs. (\ref{eq:encrypt}) and (\ref{eq:permute}), and $f$ is the de-permutation function that restores the bit order according to Eq. (\ref{eq:depermuted}):
\begin{equation}
f(v, p) =f'(p)[v]
\label{eq:depermuted}
\end{equation}
where $f'$ is obtained from Eq. (\ref{eq:pe2}).
\item Finally, the content-independent watermark bits are recovered by Eq. (\ref{eq:relwatermark2}):
\begin{align}
\label{eq:relwatermark2}
\Psi^k_{i, j} &= R'_{i, j} \oplus R''_{i, j} \oplus R'''_{i, j} \oplus R''''_{i, j} \oplus \Psi'^k_{i, j}\nonumber\\
\forall i &\in [1, \rfrac{M}{2}], j \in [1, \rfrac{N}{2}], k \in [1, 2, 3]
\end{align}
where $\oplus$ is the exclusive-or operator and \{$R'$, $R''$, $R'''$, $R''''$\} are computed according to Eq. (\ref{eq:relations}). Note that $p'_c$ and $p''_c$ are generated from $key_3$ and $key_4$ by Eqs. (\ref{eq:candidate1}) and (\ref{eq:candidate2}).
\end{enumerate}
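For completeness, a Python sketch of the extraction-side inversion is given below; it reuses the \texttt{bit\_positions} and \texttt{encrypt\_and\_permute} helpers sketched earlier (illustrative names) and applies the exact inverse of the embedding-side XOR-and-permute step, which is how we interpret Eqs. (\ref{eq:decrypted2})--(\ref{eq:decrypted1}).
\begin{verbatim}
def decrypt_and_depermute(word, s_enc, s_perm, key7):
    """Invert the embedding-side encryption for one 8-bit watermark word:
    undo the key_7 XOR, move every bit back to its source position, and
    finally remove the chaotic secret value s'_v."""
    word ^= key7
    depermuted = 0
    for dst, src in enumerate(bit_positions(s_perm)):
        depermuted |= ((word >> dst) & 1) << src   # bit dst came from slot src
    return depermuted ^ s_enc

# Round trip with the earlier encrypt_and_permute sketch
original = 0b10110010
enc = encrypt_and_permute(original, 0x5A, 3) ^ 0x27    # key_7 = 0x27 (illustrative)
assert decrypt_and_depermute(enc, 0x5A, 3, 0x27) == original
\end{verbatim}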
\begin{table*}[t]
\centering
\footnotesize
\caption{The PSNR and SSIM values of watermarked images for TRLG and related works. \\ Note: ``-'' means the corresponding result is not available for that scheme.}
\label{TABLE:compare_psnr_ssim}
\renewcommand{\arraystretch}{1.5}
\setlength{\tabcolsep}{4pt}
\scalebox{1} {
\begin{tabular*}{\textwidth}{ @{\extracolsep{\fill}}l@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-18}
\multicolumn{1}{c}{\multirow{4}{*}{Image}} & \multicolumn{4}{c}{TRLG}&\cite{ref7}&\cite{ref12}&\cite{ref13}&\cite{ref14} &\cite{ref15}&\multicolumn{2}{c}{\cite{ref16}}&\cite{ref19}& \multicolumn{2}{c}{\cite{ref20}}&\cite{ref21}&\cite{ref22}&\cite{ref32}\\
\cline{2-5}\cline{6-18}&\multicolumn{2}{c}{Color}&\multicolumn{2}{c}{Gray} &Gray &Color & Gray &Gray&Gray&Gray&Color&Gray&Gray& Color&Gray&Gray&Gray\\
\cline{2-3}\cline{4-5}\cline{6-18}
&PSNR&SSIM&PSNR&SSIM&\multicolumn{13}{c}{PSNR}\\
\cline{1-18}
Baboon&46.2362&0.9991&45.7945&0.9959&40.73&40.71&44.30&-&-&39.03&39.55&40.92&37.49&-&-&44.17&-\\
Barbara&46.2322&0.9980&45.8124&0.9903&40.72&-&44.26&-&-&-&-&40.94&-&-&-&-&-\\
Lena&46.4529&0.9995&45.8231&0.9883&40.68&40.73&44.16&44.22&44.18&39.31&39.80&40.95&38.06&37.59&44.27&-&40.69\\
Pepper&46.0257&0.9992&45.7965&0.9881&40.73&-&44.28&-&-&-&40.20&40.92&-&-&-&-&41.50\\
Girl&46.2269&0.9970&45.7908&0.9899&-&-&-&-&44.16&-&-&40.82&-&-&-&-&41.94\\
Lake&46.2211&0.9982&45.7947&0.9910&40.70&-&-&-&44.17&-&-&40.94&-&38.52&42.49&-&38.10\\
F16&46.2325&0.9904&45.8104&0.9861&-&40.86&-&43.39&44.16&-&-&-&-&-&43.85&44.11&40.77\\
House&46.2229&0.9972&45.7982&0.9896&-&-&-&-&-&-&39.16&-&-&38.25&-&-&-\\
Elaine&-&-&45.7827&0.9902&-&-&-&44.17&-&-&-&-&37.49&-&-&-&-\\
Goldhill&-&-&45.2293&0.9930&-&-&-&-&44.16&-&-&40.99&-&-&-&44.16&41.35\\
Boat&-&-&45.7610&0.9909&-&40.58&44.22&-&44.12&-&39.93&-&-&-&-&44.11&39.46\\
Camera&-&-&45.8084&0.9840&-&-&-&-&-&39.00&39.45&-&37.17&-&-&-&-\\
Toys&-&-&45.7968&0.9854&-&-&-&-&44.16&-&-&-&-&-&-&-&41.18\\
Zelda&-&-&45.7942&0.9866&40.71&-&44.21&-&-&-&-&40.83&-&-&-&-&-\\
Crowd&-&-&45.8469&0.9901&-&-&-&-&-&-&-&-&-&-&43.32&-&-\\
\cline{1-18}
\end{tabular*}}
\end{table*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{2pt}
\begin{tabular*}{1\textwidth}{lll}
\includegraphics[width=0.165\textwidth]{1digest1.png}\includegraphics[width=0.165\textwidth]{1digest2.png} &
\includegraphics[width=0.165\textwidth]{2digest1.png}\includegraphics[width=0.165\textwidth]{2digest2.png}&
\includegraphics[width=0.165\textwidth]{3digest1.png}\includegraphics[width=0.165\textwidth]{3digest2.png}\\
\includegraphics[width=0.165\textwidth]{4digest1.png}\includegraphics[width=0.165\textwidth]{4digest2.png} &
\includegraphics[width=0.165\textwidth]{5digest1.png}\includegraphics[width=0.165\textwidth]{5digest2.png} &
\includegraphics[width=0.165\textwidth]{6digest1.png}\includegraphics[width=0.165\textwidth]{6digest2.png} \\
\includegraphics[width=0.165\textwidth]{7digest1.png}\includegraphics[width=0.165\textwidth]{7digest2.png} &
\includegraphics[width=0.165\textwidth]{8digest1.png}\includegraphics[width=0.165\textwidth]{8digest2.png} &
\includegraphics[width=0.165\textwidth]{9digest1.png}\includegraphics[width=0.165\textwidth]{9digest2.png} \\
\includegraphics[width=0.165\textwidth]{10digest1.png}\includegraphics[width=0.165\textwidth]{10digest2.png}&
\includegraphics[width=0.165\textwidth]{11digest1.png}\includegraphics[width=0.165\textwidth]{11digest2.png}&
\includegraphics[width=0.165\textwidth]{12digest1.png}\includegraphics[width=0.165\textwidth]{12digest2.png}\\
\includegraphics[width=0.165\textwidth]{13digest1.png}\includegraphics[width=0.165\textwidth]{13digest2.png}&
\includegraphics[width=0.165\textwidth]{14digest1.png}\includegraphics[width=0.165\textwidth]{14digest2.png}&
\includegraphics[width=0.165\textwidth]{15digest1.png}\includegraphics[width=0.165\textwidth]{15digest2.png}\\
\end{tabular*}
\caption{Primary digest based on LWT, Inverse of secondary digest \cite{ref5}. (Primary digest\{PSNR, SSIM\}, Secondary digest\{PSNR, SSIM\}), Baboon: (\{22.6487, 0.7969\}, \{21.0739, 0.6934\}), Barbara(\{25.1693, 0.8373\}, \{23.8361, 0.7615\}), Lena(\{31.4157, 0.9814\}, \{28.5038, 0.9651\}), Pepper(\{28.8269, 0.9733\}, \{27.2044, 0.9610\}), Girl(\{30.8184, 0.9367\}, \{27.6036, 0.8705\}), Lake(\{27.0669, 0.9269\}, \{25.0003, 0.8844\}), F16(\{29.6441, 0.9276\}, \{27.2846, 0.8600\}), House(\{27.0235, 0.9277\}, \{25.3203, 0.8806 \}), Elaine(\{32.8147, 0.8029\}, \{29.8457, 0.7071\}), Goldhill(\{30.7620, 0.8245\}, \{27.1399, 0.6414\}), Boat(\{29.6655, 0.8499\}, \{26.0845, 0.6762\}), Camera(\{34.2506, 0.9590\}, \{29.0340, 0.8307\}), Toys(\{33.9918, 0.9365\}, \{29.1781, 0.8243\}), Zelda(\{36.6406, 0.9220\}, \{31.4033, 0.8349\}), Crowd(\{32.4380, 0.9384\}, \{26.8611, 0.7786\}).}
\label{fig:dbdigest}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=1\textwidth,trim=0cm 0cm 0cm 0cm,clip]{1.png}
\includegraphics[width=1\textwidth,trim=0cm 0cm 0cm 0cm,clip]{2.png}
\caption{The comparison between the proposed digests TRLG (DLG) and other techniques in terms of quality (PSNR, SSIM).}
\label{fig:compared_digest}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{1texture.png} &
\includegraphics[width=0.19\textwidth]{2texture.png} &
\includegraphics[width=0.19\textwidth]{3texture.png} &
\includegraphics[width=0.19\textwidth]{4texture.png} &
\includegraphics[width=0.19\textwidth]{5texture.png} \\
\includegraphics[width=0.19\textwidth]{6texture.png} &
\includegraphics[width=0.19\textwidth]{7texture.png} &
\includegraphics[width=0.19\textwidth]{8texture.png} &
\includegraphics[width=0.19\textwidth]{9texture.png} &
\includegraphics[width=0.19\textwidth]{10texture.png} \\
\includegraphics[width=0.19\textwidth]{11texture.png} &
\includegraphics[width=0.19\textwidth]{12texture.png} &
\includegraphics[width=0.19\textwidth]{13texture.png} &
\includegraphics[width=0.19\textwidth]{14texture.png} &
\includegraphics[width=0.19\textwidth]{15texture.png} \\
\end{tabular}
\caption{The results of texture classification based on Standard Deviation and Genetic Algorithm for fifteen standard images.}
\label{fig:texture}
\end{figure*}
\subsubsection{Authenticate received image}
In this phase, the authentication bits are extracted from each block and compared with the bits regenerated by the procedure of the previous section. If the extracted and regenerated authentication bits of a block match, the block is marked as valid; otherwise it is marked as invalid. The authentication steps are explained in detail below:
\begin{enumerate}[1),itemsep=0mm]
\item Firstly, the authentication bit $\tilde{A}_{i,j}$ is computed for every 2$\times$2 block according to Sec. \ref{sec:Generatingauthenticationbits}. To do so, $\dot{d}_s^s$, $\ddot{d}_s^s$, and $\Phi$ are extracted and formed from $\Psi^k_{i, j}$. Then, $\tilde{A}_{i,j}$ is calculated using Eq. (\ref{eq:auth}). In addition, the authentication bit embedded earlier is extracted and denoted $\tilde{A}^e_{i,j}$.
\item Next, the tampered 2$\times$2 blocks are identified by comparing the extracted and generated authentication bits via Eq. (\ref{eq:tamperedregions}):
\begin{equation}
\label{eq:tamperedregions}
\varphi_{i, j} =
\begin{cases}
0 & \quad \text{if } \tilde{A}^e_{i,j} = \tilde {A}_{i,j}\\
1 & \quad \text{otherwise}
\end{cases}, \forall i, j
\end{equation}
\item Last, the morphological closing operator is applied to $\varphi$ as post-processing to fill gaps between tampered blocks that are incorrectly marked as valid. A 5$\times$5 square is used as the structuring element in this step. A brief sketch of the whole authentication check is given after this list.
\end{enumerate}
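The following is a minimal Python sketch of the block-wise authentication check described above. It assumes that the extracted bits $\tilde{A}^e$ and the regenerated bits $\tilde{A}$ are already available as 2D arrays (one entry per 2$\times$2 block); the function name and data layout are illustrative rather than part of TRLG.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_closing

def authenticate(extracted_bits, generated_bits):
    # extracted_bits, generated_bits: 2-D arrays holding the authentication
    # bit of every 2x2 block (A^e and A-tilde in the text).
    # phi = 0 marks a valid block, phi = 1 a tampered block.
    phi = (extracted_bits != generated_bits).astype(np.uint8)
    # Post-processing: morphological closing with a 5x5 square structuring
    # element fills gaps between tampered blocks wrongly marked as valid.
    phi = binary_closing(phi, structure=np.ones((5, 5))).astype(np.uint8)
    return phi
\end{verbatim}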
\subsubsection{Reconstruct digest, Recover tampered regions}
After the authentication phase, the tampered 2$\times$2 blocks can be recovered. To do so, first the four primary and secondary digests are reconstructed and reshuffled back to their initial positions. Then, for each invalid block of $host_s$ according to $\varphi$, the recovery steps are triggered to correct the tampered regions. The digest reconstruction and tamper recovery procedure includes the following steps:
\begin{enumerate}[1),itemsep=0mm]
\item First of all, the four primary and secondary digests are extracted from $\Psi^k_{i, j}$. Then, the digests are formed according to Fig. \ref{fig:organize}; the primary digests are denoted $\{\dot{d}_p^{s}, \ddot{d}_p^{s}\}$ and the secondary digests $\{\dot{d}_s^s, \ddot{d}_s^s\}$.
\item Next, the valid parts of $\{\dot{d}_p^{s}, \ddot{d}_p^{s}\}$ and $\{\dot{d}_s^s, \ddot{d}_s^s\}$ are marked and updated. In other words, $\varphi_{i, j}$ is checked during the extraction step so that each digest is reconstructed only from valid parts.
\item To place the coefficients back into their initial positions, inverse reshuffling is applied to the planes of $\{\dot{d}_p^{s}, \ddot{d}_p^{s}\}$, namely [$\Gamma, LL, LH, HL, HH, U, V$]. For this aim, each plane of $\{\dot{d}^s_p$, $\ddot{d}^s_p\}$ is converted into a 1D matrix. Then, the coefficients of each digest are reordered and reshuffled based on the Shift-aside and Mirror-aside operators and the two chaotic sequences generated by Eq. (\ref{eq:CCS}) with $key_1$ and $key_2$, using Eq. (\ref{eq:reshuffling}):
\begin{align}
&\dot{d}_p(\chi^{\prime}_1(i)) =\dot{d}_p^{s}(i) \nonumber \\
&\ddot{d}_p(\chi^{\prime}_2(i)) = \ddot{d}_p^{s}(i) \nonumber \\
&\forall i \in [1,{\rfrac{M}{4}\times \rfrac{N}{4}}]
\label{eq:reshuffling}
\end{align}
At the end, $\dot{d}_p$ and $\ddot{d}_p$ are converted to 2D matrices of size ${\frac{M}{4}\times \frac{N}{4}}$, and $\{\dot{d}^s_p$, $\ddot{d}^s_p\}$ are updated.
\item Also, the inverse Partner-block reordering process is applied to $\{\dot{d}_s^s, \ddot{d}_s^s\}$ according to Fig. \ref{fig:map}(b) and Fig. \ref{fig:map}(c), respectively. The results are denoted $\dot{d}_s$ and $\ddot{d}_s$.
\item Now, two unique digests are generated from the valid parts of the digests by Eq. (\ref{eq:merge}):
\begin{align}
\label{eq:merge}
d_p(i, j, k) &= \dot{d}_p(i, j, k) \cup \ddot{d}_p(i, j, k) \nonumber \\
d_s(i, j, k) &= \dot{d}_s(i, j, k) \cup \ddot{d}_s(i, j, k) \nonumber \\
\forall i &\in [1, \rfrac{M}{2}], j \in [1, \rfrac{N}{2}], k \in [1, 2, 3]
\end{align}
where $\cup$ denotes the union operator.
\item In this step, to reconstruct $d_p$, zero bits are padded to all coefficients [$LL, LH, HL$, and $HH$], based on $\Gamma$ and the LSBs that were discarded according to Eq. (\ref{eq:modified}). Also, zeros are padded to the chrominance components [$U, V$]. At the end, the coefficients are converted from binary to integer type.
\item Next, the invalid regions of the chrominance components [$U, V$] are reconstructed from valid neighbors and resized to $\frac{M}{2}\times\frac{N}{2}$.
\item Now, inverse quantization is applied to all coefficients of each band, which are updated by Eq. (\ref{eq:inversequantization}):
\begin{equation}
Coef_{i, j} = Coef_{i, j} \times \mu, \forall i, j
\label{eq:inversequantization}
\end{equation}
where $\mu$ and $Coef_{i, j}$ are the quantization step and the coefficients of the wavelet bands, respectively.
\item A one-level inverse LWT is applied to [$LL, LH, HL$, and $HH$] to reconstruct the luminance $Y$. Then, the primary digest is converted to RGB; the result is denoted $\Psi_p$.
\item The inverse halftoning algorithm is applied to $d_s$ to generate the secondary digest from its halftone version; the result is denoted $\Psi_s$ \cite{ref5}.
\item Finally, the unique digest is obtained by combining the valid parts of the digests according to Eq. (\ref{eq:seperate}):
\begin{equation}
\Psi_{i, j} = {\Psi_p}_{i, j} + {\Psi_s}_{i, j}, \quad \forall i, j
\label{eq:seperate}
\end{equation}
where $+$ denotes the combining operator. Thus, pixels that are not reconstructed in ${\Psi_p}_{i, j}$ can be reconstructed from ${\Psi_s}_{i, j}$.
\item $\Psi$ is resized to the original size of $host_s$ by Eq. (\ref{eq:resize1}):
\begin{equation}
\Psi = R(\Psi + R(host_s, 0.5)\&\neg\varphi, 2)
\label{eq:resize1}
\end{equation}
where $+$, \&, and $\neg$ are the combining, bitwise-and, and complement operators, respectively, and $R$ denotes the resize function based on bi-cubic interpolation.
\item Now, the pixels of $\Psi$ that are not reconstructed, due to the large size of the manipulation, can be recovered from neighboring pixels.
\item Last, the recovered image is obtained by Eq. (\ref{eq:resize2}) (a sketch of this final blending step is given after this list):
\begin{equation}
host_r = host_s\&\neg R(\varphi ,2) + \Psi \& R(\varphi ,2)
\label{eq:resize2}
\end{equation}
\end{enumerate}
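The following is a minimal sketch of the final blending step (Eqs. (\ref{eq:resize1}) and (\ref{eq:resize2})) for a grayscale image with even dimensions. The resize function $R$ is approximated by a cubic spline zoom, and the bitwise masking of the equations is replaced by array masking; all names are ours, and the routine only illustrates the recovery flow rather than the exact TRLG implementation.
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

def recover(host_s, psi_half, phi_blocks):
    # host_s     : suspicious watermarked image (M x N)
    # psi_half   : digest-based reconstruction at half resolution (M/2 x N/2)
    # phi_blocks : tamper map of the 2x2 blocks (M/2 x N/2), 1 = tampered
    # Eq. (resize1): keep digest pixels for tampered blocks, down-sampled
    # host pixels for valid blocks, then up-sample the result by 2.
    host_half = zoom(host_s, 0.5, order=3)
    psi = psi_half * phi_blocks + host_half * (1 - phi_blocks)
    psi_full = zoom(psi, 2, order=3)
    # Eq. (resize2): replace only the tampered regions of the host image.
    phi_full = zoom(phi_blocks.astype(float), 2, order=0) > 0.5
    return np.where(phi_full, psi_full, host_s)
\end{verbatim}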
\section{Experimental results}
\label{sec:Experimental}
In this section, a series of experiments is reported to demonstrate the efficiency and superiority of TRLG compared with other state-of-the-art schemes. All experiments were implemented on a computer with a 3.30 GHz Intel i5 processor, 4.00 GB of memory, and the Windows 10 operating system; the programming environment was Matlab R2016b. Furthermore, each watermarked image was modified with Adobe Photoshop CC 2015 to produce the forged images. The feasibility and effectiveness of TRLG are investigated using a set of standard 512$\times$512 images, including Baboon, Barbara, Lena, Pepper, Girl, Lake, F16, House, Elaine, Goldhill, Boat, Camera, Toys, Zelda, and Crowd. The number of objects and the variety of textures in these images, such as edges, smooth regions, and rough regions, make them challenging test cases for watermarking methods.
To demonstrate the superiority of TRLG, a first set of experiments illustrates the performance of TRLG in terms of the quality of the watermarked image and the generated digests. A second set demonstrates the performance of TRLG in terms of security and the detection of special tampering. A third set reports the recovery rates and the quality of the recovered image under various tampering. In addition, several types of tampering combined with security attacks are applied to the watermarked image to visually present the performance of TRLG in the context of tamper detection and recovery. Finally, TRLG is compared with state-of-the-art schemes in the various respects that play the main role in tamper detection and recovery schemes.
\subsection{Quality analysis of watermarked image and digests}
In this set of experiments, the quality of the watermarked image is first analyzed by comparison with state-of-the-art schemes. To do so, the PSNR and SSIM values of the watermarked images generated by TRLG and by schemes presented in recent years are reported and compared in Table \ref{TABLE:compare_psnr_ssim}. As seen, the PSNR values of the watermarked images produced by TRLG are considerably higher than those of the other schemes. Furthermore, besides PSNR, the SSIM metric is employed to assess quality. SSIM is based on characteristics of the Human Visual System (HVS) and compares structure, luminance, and contrast information for watermarked image quality assessment. The SSIM values of the watermarked images are close to one, which confirms the efficiency of TRLG in terms of watermarked image quality. It should be noted that, unfortunately, most schemes proposed in recent years focus only on grayscale images and report performance using PSNR alone, whereas for TRLG the SSIM metric is reported for fifteen standard color and gray images.
\begin{table}[b]
\footnotesize
\caption{Number of bits for each 4$\times$4 block (Accuracy 2$\times$2).\\
CR : Compression Ratio.}
\label{TABLE:volume}
\renewcommand{\arraystretch}{1.5}
\scalebox{1} {
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}@{}l@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-8}
&\multirow{2}{*}{Method}&Inverse&MSB&MSB&DCT&TRLG&TRLG\\
&&Halftone&5 Bits&6 Bits&DC&DCT&LWT\\
\cline{1-8}
\multirow{2}{*}{Bits}&Gray&4&20&24&40&20&20\\
&Color&12&60&72&120&34&34\\
\cline{2-8}
\multirow{2}{*}{CR}&Gray&32&6.4&5.3&3.2&6.4&6.4\\
&Color&32&6.4&5.3&3.2&11.3&11.3\\
\cline{1-8}
\end{tabular*}}
\end{table}
As said before, one of the novelties of TRLG is a compact, high-quality digest, DLG. In addition, a halftone version of the image is used to provide a further chance of recovering tampered regions. To demonstrate the superiority and efficiency of DLG, and also of the inverse-halftone version \cite{ref5}, the PSNR and SSIM values and zoomed views of the digests obtained by each technique are illustrated in Fig. \ref{fig:dbdigest}. As can be seen, the visual distortions are very low, and no blocky artifacts can be detected in the digests. In addition, the quality of the digests is compared with other traditional strategies, such as averaging-based and DCT-based techniques, in Fig. \ref{fig:compared_digest}. In the averaging technique, the digest is obtained by averaging each 2$\times$2 block and keeping the 5 or 6 Most Significant Bits of the result. The DCT-based digest is constructed by keeping the DC coefficient of each 2$\times$2 block. Furthermore, the proposed digest is also analyzed using DCT instead of LWT; in this case, the DC and first two AC coefficients are considered for each block. Finally, all digests are resized to the initial size (512$\times$512) and compared with the original image. It is observed that the essential metrics, PSNR and SSIM, are markedly higher for DLG than for the other techniques. There is only a slight degradation for highly textured images such as Baboon relative to the other schemes. Since all digests can recover tampered regions with 2$\times$2 accuracy, the comparison is fair.
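For reference, the following sketch shows how the two traditional baselines used in this comparison can be computed for an 8-bit grayscale image: a 2$\times$2 block-averaging digest that keeps only the most significant bits, and a DC-based digest. The function names are illustrative and not part of TRLG.
\begin{verbatim}
import numpy as np

def avg_msb_digest(img, msb=5):
    # Average each 2x2 block and keep only its `msb` most significant bits.
    h, w = img.shape
    blocks = img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
    mask = 256 - (1 << (8 - msb))        # e.g. 0b11111000 for 5 MSBs
    return blocks.astype(np.uint8) & mask

def dct_dc_digest(img):
    # The DC coefficient of a 2x2 DCT is, up to scaling, the block mean,
    # so the DC-based digest reduces to plain block averaging.
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
\end{verbatim}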
As described previously, GA is employed in TRLG to select the best thresholds for classifying each block in terms of texture. Accordingly, the main challenge of generating digests for various types of blocks is clearly resolved. In other words, an intelligent trade-off is achieved between the low and high frequencies of each block, which makes DLG more flexible for various types of images. The result of this strategy is shown in Fig. \ref{fig:texture}. Also, the volume of the digests generated by each technique is listed in Table \ref{TABLE:volume}. The low volume and high quality of the digests generated by DLG make TRLG more efficient for watermarking-based tamper detection. In other words, the low volume of embedded watermark bits leads to low degradation of the host image, which cannot be detected by the naked eye, and is also useful for watermarking schemes with low data payload or embedding capacity. These are the primary reasons why TRLG achieves better quality for the watermarked and recovered images than other schemes. Overall, the experimental results demonstrate the efficiency and superiority of DLG in terms of quality and compactness compared with other schemes.
\begin{figure}[t]
\center
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{digest1_1.png} &
\includegraphics[width=0.45\columnwidth]{digest1_2.png} \\
(a) & (b) \\
\includegraphics[width=0.45\columnwidth]{digest2_1.png} &
\includegraphics[width=0.45\columnwidth]{digest2_2.png} \\
(c) & (d)
\end{tabular}
\caption{Lena image scrambling. (a) Primary digest 1, (b) Primary digest 2, (c) Secondary digest 1, (d) Secondary digest 2.}
\label{fig:scramble}
\end{figure}
\subsection{Analysis the security of watermark and detect tampering}
In this subsection, two sets of experiments are reported to demonstrate the efficiency of TRLG in terms of security and the detection of special tampering.
Evidently, the security of watermarking schemes plays an important role in designing watermarking systems. The security of tamper detection and recovery methods falls into two categories: scrambling (shuffling) of the digests and security of the embedded watermark. In TRLG, a new chaotic method is applied for determining the block mapping, reducing the correlation between adjacent pixels, and encrypting the watermark. To shuffle the primary digests, CCS together with the Shift-aside and Mirror-aside operations is employed. CCS is a recent chaotic map proposed by Pak et al. \cite{ref2}. Compared with other chaotic maps, CCS has three advantages: it is one-dimensional; it has several control parameters, which enlarge the key space; and it is easy to implement with low computational cost. Generally, chaotic maps are a good option for securing watermarking methods because of their sensitivity to initial conditions and their unpredictable, non-statistical behavior. In other words, these methods follow a deterministic process but appear random. Hence, by using chaotic maps, the security of the watermarking system is improved, and the embedding locations of the digests appear random and unpredictable to an attacker, while remaining clear and meaningful to the intended recipient. As said before, the two secondary digests are scrambled by the Partner-block operation without using any chaotic map. The results of scrambling the four digests are illustrated in Fig. \ref{fig:scramble}, and a small illustrative sketch of a key-driven chaotic permutation is given below.
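The exact CCS map of Pak et al. \cite{ref2} is not reproduced here; the sketch below uses a plain logistic map as a stand-in to illustrate how a key-driven chaotic sequence can permute the digest coefficients, which is the role CCS plays in TRLG.
\begin{verbatim}
import numpy as np

def chaotic_permutation(n, x0=0.37, r=3.99):
    # Illustrative stand-in for the CCS sequence: iterate a chaotic map
    # from a secret initial condition (the key) and sort the trajectory
    # to obtain a key-dependent permutation of n coefficient positions.
    x = np.empty(n)
    v = x0
    for i in range(n):
        v = r * v * (1.0 - v)            # logistic map, NOT the actual CCS
        x[i] = v
    return np.argsort(x)

def shuffle_digest(coeffs, key):
    perm = chaotic_permutation(coeffs.size, x0=key)
    return coeffs.ravel()[perm]          # scrambled 1-D digest
\end{verbatim}
The receiver, knowing the key, regenerates the same permutation and applies its inverse (\texttt{np.argsort(perm)}) to restore the original coefficient order.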
\begin{table}[H]
\footnotesize
\caption{Analysis encryption measures for watermark - Test image Lena.}
\label{TABLE:security}
\renewcommand{\arraystretch}{1.5}
\scalebox{1} {
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}@{}l@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-7}
Measures&Entropy&STD&MAE&NPCR&UACI&EQ\\
\cline{1-7}
Initial&6.0475&66.2202&0&0.9961&0.3346&0\\
Dependent&7.8127&73.8491&19.7287&0.9961&0.3346&334.0078\\
Encrypt\&Permute&7.9972&73.7177&19.1009&0.9961&0.3346&312.6719\\
Encrypt (GA)&7.9972&73.8918&19.1541&0.9961&0.3346&313.0021\\
\cline{1-7}
\end{tabular*}}
\end{table}
The security of the watermark is a vital issue in tamper detection schemes. In particular, security of the embedded watermark implies that the watermark should be difficult to remove or modify without damaging the watermarked image, even with full knowledge of the embedding and extraction algorithms. As mentioned in the previous section, TRLG provides watermark security in three ways: first, the watermark of each block depends on the content of the block; second, the watermark is encrypted and permuted; and last, the watermark is encrypted with a key generated by GA.
To evaluate the performance of TRLG in terms of watermark security, statistical tests and a security analysis are reported in Table \ref{TABLE:security}. Various measures are reported in this table, including Entropy, Standard Deviation, Mean Absolute Error, Number of Pixels Change Rate (NPCR), Unified Average Changing Intensity (UACI), and Encryption Quality. The results of the security analysis show that TRLG has a high security level and excellent encryption performance; a short sketch of the two differential measures follows.
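The two differential measures in Table \ref{TABLE:security} can be computed from their standard definitions for 8-bit images, as in the generic sketch below (this is not TRLG-specific code).
\begin{verbatim}
import numpy as np

def npcr_uaci(img1, img2):
    # NPCR: fraction of pixel positions whose values differ.
    # UACI: mean absolute intensity difference, normalized by 255.
    d = img1.astype(np.int16) - img2.astype(np.int16)
    npcr = np.mean(d != 0)
    uaci = np.mean(np.abs(d)) / 255.0
    return npcr, uaci
\end{verbatim}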
\begin{figure}[t]
\center
\includegraphics[width=1\columnwidth,trim=0cm 0cm 0cm 0cm,clip]{tamperrate1.png}
\caption{The performance of TRLG in terms of detecting security tampering under various tampering rates. Test images Lena and Barbara. \\ Note: The thin bar shows post-processing (gap filling by morphology).}
\label{fig:typeoftamper}
\end{figure}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccc}
\includegraphics[width=0.33\textwidth,trim=2.2cm 0cm 3cm 0cm,clip]{nr1.png} &
\includegraphics[width=0.33\textwidth,trim=2.2cm 0cm 3cm 0cm,clip]{nr2.png} &
\includegraphics[width=0.33\textwidth,trim=2.2cm 0cm 3cm 0cm,clip]{nr3.png}\\
(a) & (b) & (c)\\
\end{tabular}
\caption{The portion of digests in recovery (a) Center \% (height, width), (b) Left to right \%width, (c): Up to bottom \%height - Test image Lena.}
\label{fig:recoveryrate}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{1pt}
\begin{tabular}{cc}
\includegraphics[width=0.49\textwidth,trim=2.5cm 0cm 3cm 0cm,clip]{rr1.png} &
\includegraphics[width=0.49\textwidth,trim=2.5cm 0cm 3cm 0cm,clip]{rr2.png}\\
(a) & (b)\\
\includegraphics[width=0.49\textwidth,trim=2.5cm 0cm 3cm 0cm,clip]{rr3.png} &
\includegraphics[width=0.49\textwidth,trim=2.5cm 0cm 3cm 0cm,clip]{rr4.png}\\
(c) & (d)\\
\end{tabular}
\caption{The quality of the recovered image with different tampering rates. (a)
Color-PSNR, (b) Color-SSIM, (c) Gray-PSNR, (d) Gray-SSIM.}
\label{fig:recoverypsnr}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{3colorWatermarked_Image.png} &
\includegraphics[width=0.19\textwidth]{recovered_image1.png} &
\includegraphics[width=0.19\textwidth]{recovered_image2.png} &
\includegraphics[width=0.19\textwidth]{recovered_image3.png} &
\includegraphics[width=0.19\textwidth]{recovered_image4.png}\\
(a) & (b) & (c) & (d) & (e)\\
\includegraphics[width=0.19\textwidth]{recovered_image5.png} &
\includegraphics[width=0.19\textwidth]{recovered_image6.png} &
\includegraphics[width=0.19\textwidth]{recovered_image7.png} &
\includegraphics[width=0.19\textwidth]{recovered_image8.png} &
\includegraphics[width=0.19\textwidth]{recovered_image9.png} \\
(f) & (g) & (h) & (i) & (j)
\end{tabular}
\caption{The quality of recovered image under various tampering rates - Center (Tampering rate, PSNR, SSIM). (a) Watermarked image, (b) \{10\%, 44.8281, 0.9992\}, (c) \{20\%, 41.7046, 0.9983\}, (d) \{30\%, 39.4957, 0.9971\}, (e) \{40\%, 37.1910, 0.9951\}, (f) \{50\%, 35.4515, 0.9926\}, (g) \{60\%, 33.9525, 0.9896\}, (h) \{70\%, 32.0112, 0.9846\}, (i) \{80\%, 29.7049, 0.9768\}, (j) \{90\%, 25.7410, 0.9574\}.}
\label{fig:visualleanrecover1}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{2pt}
\begin{tabular}{ccccc}
\includegraphics[width=0.19\textwidth]{3grayWatermarked_Image.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg1.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg2.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg3.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg4.png}\\
(a) & (b) & (c) & (d) & (e)\\
\includegraphics[width=0.19\textwidth]{recovered_imageg5.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg6.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg7.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg8.png} &
\includegraphics[width=0.19\textwidth]{recovered_imageg9.png} \\
(f) & (g) & (h) & (i) & (j)
\end{tabular}
\caption{The quality of recovered image under various tampering rates - Center (Tampering rate, PSNR, SSIM). (a) Watermarked image, (b) \{10\%, 44.7263, 0.9876\}, (c) \{20\%, 42.2214, 0.9841\}, (d) \{30\%, 40.3250, 0.9793\}, (e) \{40\%, 38.3121, 0.9709\}, (f) \{50\%, 36.6694, 0.9602\}, (g) \{60\%, 33.9932, 0.9371\}, (h) \{70\%, 31.5106, 0.8993\}, (i) \{80\%, 28.4436, 0.8470\}, (j) \{90\%, 22.1790, 0.7201\}.}
\label{fig:visualleanrecover2}
\end{figure*}
The various types of tampering applied to the watermarked image are divided into five categories, as follows:
\begin{enumerate}[1),itemsep=0mm]
\item \textbf{Normal tampering:} In this type of tampering, an object in the watermarked image is added, removed, or modified to produce the fake image.
\item \textbf{Copy move:} In copy-move tampering, a part of the watermarked image is copied and pasted somewhere else in the same image to generate the forged image.
\item \textbf{Collage:} In the collage attack, a forged image is generated by placing parts of a second watermarked image into the same spatial locations in the destination watermarked image, so the relative spatial locations are preserved. In this attack, both watermarked images are generated with the same keys.
\item \textbf{Vector quantization:} In this mode, a spurious image is constructed by copying a section from a second watermarked image and pasting it at the desired position in the watermarked image. As in the collage attack, both watermarked images are produced with the same keys. These types of tampering are effective against schemes that are block- or pixel-independent.
\item \textbf{Protocol:} A major form of tampering, ignored by most previous schemes, is the protocol attack. It consists of replacing a part of an extra image into the watermarked image while keeping intact the Least Significant Bits of the blocks that carry the watermark data (deactivation of the watermark). In other words, it exploits semantic deficits in the watermark's implementation.
\end{enumerate}
The tamper detection results of TRLG for the above tampering types under various rates are illustrated in Fig. \ref{fig:typeoftamper}. The experiments show the high accuracy of tamper detection under these security attacks.
\subsection{Analysis recovery rates and quality of recovered image}
In this subsection, various experiments are conducted to evaluate the performance of TRLG in terms of recovery rate and quality of the recovered image compared with state-of-the-art schemes. The PSNR and SSIM values of the recovered images are computed with respect to the original images.
In the first set, to show the performance of TRLG in terms of recovery rate, the number of blocks recovered by each digest is illustrated in Fig. \ref{fig:recoveryrate}. As seen, TRLG recovers tampered regions of up to 50\% of the image without using neighboring pixels. In other words, TRLG can recover the tampered regions using the primary digest alone even when the left, right, top, or bottom half of the image is completely modified. Moreover, TRLG is able to recover most tampered regions from the embedded digests at tampering rates of up to 80\%. Accordingly, the effectiveness of the Shift-aside, Mirror-aside, and Partner-block operations is confirmed.
Fig. \ref{fig:recoverypsnr} shows the curves of PSNR and SSIM values of the recovered images with respect to different tampering ratios. As seen, even at high tampering rates the recovered images have satisfactory quality. Hence, TRLG performs well in terms of recovery under various tampering rates for different types of images with extensive tampering.
Next, the recovered Lena image, together with its quality measures under various tampering rates, is shown in Figs. \ref{fig:visualleanrecover1} and \ref{fig:visualleanrecover2}. As is clear from these figures, TRLG produces excellent results. As mentioned before, TRLG provides four chances to recover the tampered regions. Meanwhile, attention to both high and low frequencies, and the intelligent classification of blocks by texture, removes blocky effects and increases the quality of the recovered image under large tampering rates. Hence, it is easier for an expert or any viewer to obtain perceptual information from the recovered image.
Next, the quality of the recovered image for various tampering sizes and locations is listed in Table \ref{TABLE:recoveryratelcation}. The results show that TRLG is clearly superior to the other schemes.
Finally, Table \ref{TABLE:compare_recovery} provides a comparison between TRLG and some recent fragile schemes in terms of the quality of the recovered image at various tampering rates. As seen, the PSNR values of the images recovered by TRLG are significantly higher than those of the other methods. In other words, TRLG has a much better tamper recovery capability, particularly when the tampered regions are extremely large. Thus, TRLG is more flexible than previously existing schemes.
Overall, the above extensive experiments demonstrate the satisfactory performance and superiority of TRLG in terms of recovery rate and quality of the recovered image. Both PSNR and SSIM, as well as the visual results, show that TRLG not only localizes tampering with high accuracy but also achieves a very high recovery rate. Moreover, blocking effects are removed thanks to the small block size and the intelligently generated high-quality digests.
\begin{table*}[t]
\footnotesize
\caption{Quality of recovery image under various tampering rates - Test image Lena.}
\label{TABLE:recoveryratelcation}
\renewcommand{\arraystretch}{1.5}
\scalebox{1} {
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}l@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-12}
\multirow{2}{*}{Metric}&\multirow{2}{*}{Mode}&\multirow{2}{*}{Location}&\multicolumn{9}{c}{Tampering Rates \%}\\
\cline{4-12}
&&&10&20&30&40&50&60&70&80&90\\
\cline{1-12}
\cline{3-12}
\multirow{6}{*}{PSNR}&\multirow{3}{*}{Color}&Center&44.305&41.316&39.039&37.111&35.396&33.937&31.981&29.553&25.728\\
&&Left to Right&42.539&38.853&36.195&34.238&33.455&30.718&26.849&18.969&16.739\\
&&Up to Bottom&42.221&39.765&37.642&35.911&34.799&29.654&26.157&23.071&21.342\\
\cline{2-12}
&\multirow{3}{*}{Gray}&Center&44.156&41.836&40.221&38.173&36.554&33.834&31.489&28.419&22.169\\
&&Left to Right&43.666&40.511&37.418&35.597&34.832&27.019&21.836&18.313&15.603\\
&&Up to Bottom&43.679&41.153&38.634&36.182&34.381&28.446&26.042&23.005&19.264\\
\cline{1-12}
\multirow{6}{*}{SSIM}&\multirow{3}{*}{Color}&Center&0.9992&0.9982&0.9969&0.9950&0.9926&0.9896&0.9845&0.9764&0.9572\\
&&Left to Right&0.9986&0.9970&0.9939&0.9906&0.9884&0.9813&0.9698&0.9140&0.8705\\
&&Up to Bottom&0.9989&0.9976&0.9959&0.9939&0.9920&0.9824&0.9690&0.9476&0.9123\\
\cline{2-12}
&\multirow{3}{*}{Gray}&Center&0.9875&0.9839&0.9792&0.9705&0.9599&0.9363&0.8989&0.8467&0.7198\\
&&Left to Right&0.9805&0.9705&0.9582&0.9447&0.9321&0.8820&0.8252&0.7368&0.6129\\
&&Up to Bottom&0.9785&0.9640&0.9493&0.9296&0.9085&0.8695&0.8249&0.7559&0.6347\\
\cline{1-12}
\end{tabular*}}
\end{table*}
\begin{table}[H]
\footnotesize
\caption{PSNR comparison of recovered image between TRLG and related works.
\\ Note: - means recovered image is unavailable for current tampering.}
\label{TABLE:compare_recovery}
\renewcommand{\arraystretch}{1.5}
\scalebox{1} {
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}l@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-7}
\multicolumn{1}{l}{\multirow{2}{*}{Image}} & \multicolumn{1}{c}{\multirow{2}{*}{Method}} &
\multicolumn{5}{c}{Tampering rate \%}\\
\cline{3-7}
&&10&20&30&40&50\\
\cline{1-7}
Baboon&TRLG&44.1772&42.5842&41.0369&38.4502&34.8195\\
&\cite{ref16}&39.92&37.00&35.22&34.12&33.16\\
&\cite{ref17}&32.69&29.93&28.29&27.19&26.28\\
&\cite{ref20}&38.05&35.26&33.61&32.51&31.67\\
Lena&TRLG&44.1560&41.8357&40.2214&38.1727&36.5542\\
&\cite{ref14}&43.02&37.92&33.01&32.23&31.14\\
&\cite{ref16}&45.09&40.58&38.25&36.84&35.79\\
&\cite{ref17}&40.20&36.57&33.38&32.14&29.32\\
&\cite{ref20}&39.57&36.15&34.35&33.00&31.94\\
&\cite{ref21}&34.53&31.95&30.75&29.90&-\\
Pepper&TRLG&44.0731&41.7370&40.3887&39.1925&38.0176\\
&\cite{ref22}&42.49&26.53&-&-&-\\
Lake&TRLG&42.0142&41.6306&39.7389&37.5469&36.2926\\
&\cite{ref17}&34.43&31.93&29.46&27.68&25.98\\
&\cite{ref21}&37.80&35.50&33.30&32.05&-\\
F16&TRLG&41.9860&40.2426&38.5662&36.9869&35.9469\\
&\cite{ref21}&36.52&34.60&33.40&32.32&-\\
Elaine&TRLG&43.2848&41.7539&40.2213&38.2445&37.2662\\
&\cite{ref20}&38.59&35.25&33.33&32.15&31.51\\
Goldhill&TRLG&43.8774&40.5553&38.7288&36.5216&34.8993\\
&\cite{ref22}&40.75&-&-&-&-\\
Boat&TRLG&43.3326&39.8584&37.4337&35.3961&34.0040\\
&\cite{ref22}&-&36.90&-&-&-\\
Camera&TRLG&43.8403&40.5883&38.4819&36.7889&36.5658\\
&\cite{ref16}&42.45&38.77&36.37&34.76&33.53\\
&\cite{ref17}&41.31&37.93&32.82&29.42&27.84\\
\cline{1-7}
\end{tabular*}}
\end{table}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccccc}
\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t17.png}}&
\includegraphics[width=0.148\textwidth]{t18.png} &
\includegraphics[width=0.148\textwidth]{t19.png} &
\includegraphics[width=0.148\textwidth]{t20.png}&\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t24.png}}\\
&(b) & (c) & (d)&\\
&\includegraphics[width=0.148\textwidth]{t21.png} &
\includegraphics[width=0.148\textwidth]{t22.png} &
\includegraphics[width=0.148\textwidth]{t23.png}&\\
(a)&(e) & (f) & (g)&(h)\\
\end{tabular}
\caption{Tampering test of 50\% image splicing by collage attack. (a) Tampered image, (b) Primary digest 1, (c) Primary digest 2, (d) Secondary digest 1, (e) Secondary digest 2, (f) Recovery map, (g) Tamper detection, (h) Recovered image (PSNR=33.4826, SSIM=0.9885).}
\label{fig:tamperdetection1}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccccc}
\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t1.png}}&
\includegraphics[width=0.148\textwidth]{t2.png} &
\includegraphics[width=0.148\textwidth]{t3.png} &
\includegraphics[width=0.148\textwidth]{t4.png}&\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t8.png}}\\
&(b) & (c) & (d)&\\
&\includegraphics[width=0.148\textwidth]{t5.png} &
\includegraphics[width=0.148\textwidth]{t6.png} &
\includegraphics[width=0.148\textwidth]{t7.png}&\\
(a)&(e) & (f) & (g)&(h)\\
\end{tabular}
\caption{Tampering test of 70\% image splicing by protocol attack. (a) Tampered image, (b) Primary digest 1, (c) Primary digest 2, (d) Secondary digest 1, (e) Secondary digest 2, (f) Recovery map, (g) Tamper detection, (h) Recovered image (PSNR=32.5198, SSIM=0.9862).}
\label{fig:tamperdetection2}
\end{figure*}
\begin{figure*}[t!]
\center
\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccccc}
\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t9.png}}&
\includegraphics[width=0.148\textwidth]{t10.png} &
\includegraphics[width=0.148\textwidth]{t11.png} &
\includegraphics[width=0.148\textwidth]{t12.png}&\multirow{3}{*}[0.393in]{\includegraphics[width=0.248\textwidth]{t16.png}}\\
&(b) & (c) & (d)&\\
&\includegraphics[width=0.148\textwidth]{t13.png} &
\includegraphics[width=0.148\textwidth]{t14.png} &
\includegraphics[width=0.148\textwidth]{t15.png}&\\
(a)&(e) & (f) & (g)&(h)\\
\end{tabular}
\caption{Tampering test of 10\% add object, copy move, and vector quantization. (a) Tampered image, (b) Primary digest 1, (c) Primary digest 2, (d) Secondary digest 1, (e) Secondary digest 2, (f) Recovery map, (g) Tamper detection, (h) Recovered image (PSNR=40.1968, SSIM=0.9893).}
\label{fig:tamperdetection3}
\end{figure*}
\begin{sidewaystable}
\footnotesize
\caption{Comparison between TRLG and recent schemes in various terms.\\
Note: Watermarked image (A:45$\leq$PSNR, B:42$\leq$PSNR$<$45, C:PSNR$<$42), RG and - means Random Generator and not supported, respectively.}
\label{TABLE:compressions_various_terms}
\renewcommand{\arraystretch}{1.5}
\scalebox{1} {
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill} }@{}l@{}c@{}c@{}c@{}c@{}c@{}c@
{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\cline{1-16}
Features &TRLG &\cite{ref1}&\cite{ref7}&\cite{ref12}&\cite{ref13}&\cite{ref14} &\cite{ref15}&\cite{ref16}&\cite{ref17}&\cite{ref18}&\cite{ref19}& \cite{ref20}&\cite{ref21}&\cite{ref22}&\cite{ref32}\\
\cline{1-16}
Block size&2$\times$2&2$\times$2&2$\times$2&2$\times$2&2$\times$2&4$\times$4&2$\times$2&2$\times$2&2$\times$2&4$\times$4&4$\times$4&2$\times$2&3$\times$3&2$\times$2&2$\times$2\\
Embedding technique&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&LSBs&Quantize\\
Support color&Yes&Yes&No&Yes&No&Yes&No&Yes&No&No&No&Yes&No&No&No\\
Watermarked image&A&A&C&C&B&B&B&C&C&A&C&C&B&B&C\\
Generating digest&W+H&W+H&AVG&AVG&-&AVG&-&DCT&AVG&-&AVG&DCT&AVG&MSBs&-\\
Number of digest&4&2&2&2&-&2&-&1&2&-&1&1&1&1&-\\
Shuffling digest&CCS&ACM&1-DT&CCM&-&RG&-&1-DT&ACM&-&-&1-DT&-&-&-\\
Payload (bpp)&2&2&3&3&2&2&2&3&3&0.5&2.5&3&2&2&1\\
Watermark security&Yes&Yes&No&No&No&Yes&No&Yes&No&No&Yes&Yes&Yes&Yes&Yes\\
Copy-move&Yes&Yes&No&No&No&Yes&No&No&No&No&Yes&No&No&No&No\\
Vector-quantization&Yes&Yes&No&No&No&Yes&No&No&Yes&No&No&No&No&No&No\\
Collage attack&Yes&No&No&No&No&Yes&No&No&Yes&Yes&No&No&Yes&No&Yes\\
Protocol attack&Yes&No&No&No&No&No&No&No&No&No&No&No&No&No&No\\
Extraction process&Blind&Blind&Blind&Blind&Blind&Blind&Blind&Blind&Blind&S-blind&Blind&Blind&Blind&Blind&Blind\\
\cline{1-16}
\end{tabular*}}
\end{sidewaystable}
\subsection{Visual presentation of TRLG performance}
In the last set of experiments, the tamper detection and recovery results are presented visually to demonstrate the excellent performance of TRLG. For this aim, several types of tampering, such as image splicing and object addition based on copy-move, collage, vector quantization, and protocol attacks, are applied to the watermarked images using Photoshop. These experiments are illustrated in Figs. \ref{fig:tamperdetection1}, \ref{fig:tamperdetection2}, and \ref{fig:tamperdetection3}. In these figures, the recovery map shows the regions recovered by primary digest 1 (red), primary digest 2 (green), secondary digest 1 (blue), secondary digest 2 (white), and neighboring pixels (cyan).
In Fig. \ref{fig:tamperdetection1}, the watermarked Lena is tampered with half of the watermarked Barbara using a collage attack. As can be seen, TRLG clearly detects and recovers the tampered parts. The PSNR and SSIM of the image recovered from the 50\% spliced image are 33.4826 dB and 0.9885, respectively. Next, to simulate the performance of TRLG against a protocol attack, 70\% of Baboon is pasted onto the watermarked Pepper in Fig. \ref{fig:tamperdetection2}. As said before, in a protocol attack the Least Significant Bits of the fake part are replaced by the Least Significant Bits of the destination image, which carry the watermark bits. Due to the novel and intelligent shuffling, the recovery rate in TRLG remains extremely high: the PSNR and SSIM of the recovered image are 32.5198 dB and 0.9862, respectively. In Fig. \ref{fig:tamperdetection3}, the watermarked Girl is modified with several kinds of tampering. First, an extra object is added; then, copy-move and vector quantization attacks are applied by copying parts of the watermarked Barbara and the watermarked Girl, respectively. It can be clearly seen that the tamper detection is highly accurate and that TRLG successfully recovers the tampered regions. The PSNR and SSIM between the original image and the recovered image are 40.1968 dB and 0.9893, respectively.
Generally, the excellence and superiority of TRLG are demonstrated by these experiments. The essential metrics, PSNR and SSIM, are high for both the watermarked and the recovered images, and TRLG detects special tampering accurately. In other words, the overall tampering recovery performance of TRLG is satisfactory.
\subsection{Comparison with recent schemes in various terms}
There are a number of important characteristics that a tamper detection and recovery scheme should satisfy. In this set of experiments, the various properties of TRLG are reported and compared with other fragile schemes in Table \ref{TABLE:compressions_various_terms}. These characteristics include block size (localization), embedding technique, color support, quality of the watermarked image, digest generation technique, number of recovery chances, digest shuffling, payload, watermark security, detection of special tampering, and extraction process. As illustrated, TRLG is superior, efficient, and effective compared with the other schemes.
\section{Conclusion and future works}
Digital watermarking is the science of hiding information in digital media. The embedded information can later be extracted for various purposes, such as ownership authentication and tamper detection and recovery. Most methods proposed in recent years offer only a single chance to recover tampered regions and suffer from poor quality of the watermarked and recovered images. In addition, previous methods generate the digest using traditional averaging techniques without considering the characteristics of each block. Designing an efficient digest for fragile tampering recovery schemes, whether working in the spatial or the frequency domain, therefore plays an important role.
In this paper, an efficient fragile blind quad watermarking scheme for image tamper detection and recovery based on the lifting wavelet transform and a genetic algorithm is proposed. A novel way of generating compact digests with very high quality is proposed in TRLG. For this purpose, four digests are generated based on LWT and a halftoning technique. Generating compact digests of acceptable quality can be cast as an optimization problem; hence, the genetic algorithm is used to optimize the data fetched from rough and smooth regions when generating the digest, by selecting the best thresholds for classifying the block types. In total, for each 2$\times$2 non-overlapping block, two different digests, primary and secondary, are used, each of which provides two chances to recover tampered regions. As expected, the quality of the recovered image is significantly improved, especially when the tampering rate is extremely high; the experimental results confirm this claim. To guarantee the security of TRLG, the combined modified Logistic map (CCS) is used to shuffle the digests and encrypt the watermarks. As said before, in TRLG the watermark embedding procedure is also modeled as a search problem that minimizes the difference between the original and watermarked values, and the genetic algorithm is applied again for this purpose.
Experimental results have demonstrated the superiority of TRLG compared with other state-of-the-art schemes, particularly at large tampering rates. The PSNR and SSIM values show that TRLG has good imperceptibility after the embedding process. Overall, TRLG achieves high quality for the watermarked and recovered images without blocky artifacts, low complexity, and good localization. Furthermore, TRLG performs well in terms of security and detects security tamperings such as copy-move, vector quantization, collage, and protocol attacks. Based on the advantages described above, TRLG is efficient, secure, and applicable to blind and fragile applications.
Because the watermark is embedded into the LSB planes, it may be destroyed by image processing operations or other attacks. In ongoing research, we will extend TRLG to a semi-fragile scheme. Since the capacity in the frequency domain is lower than in the spatial domain, a compact digest with very high quality can be used to address one of the main challenges there.
\input{behrouz_bolourian_haghighi.bbl}
\end{document}
|
{
"timestamp": "2018-03-08T02:07:38",
"yymm": "1803",
"arxiv_id": "1803.02623",
"language": "en",
"url": "https://arxiv.org/abs/1803.02623"
}
|
\section{ Introduction}
In this paper, we construct a reference score for online peer assessments based on HodgeRank~\cite{jiang2011statistical}. Peer assessment is a process in which students grade their peers’ assignments~\cite{falchikov2000student, topping1998peer}.
A peer assignment system is used to enhance students’ learning process, especially in higher education. Through such a system, students are given the opportunity to not only learn knowledge from textbooks and instructors, but also from the process of making judgements on assignments completed by their peers. This process helps them understand the weaknesses and strengths in the work of others, and then to review their own.
However, there are some practical issues associated with a peer assignment system. For example, students tend to give significantly higher grades than senior graders or professionals (see ~\cite{freeman2010accurate} for more details). Also, students have a tendency to give grades within a range, with the center of such a range often being based on the first grade they gave. Therefore, bias and heterogeneity can occur in a peer assignment system.
There are various ranking methods for the peer assessment problem, such as PeerRank~\cite{walsh2014} and the Borda-like aggregation algorithm~\cite{caragiannis2015}. PeerRank is a well-known method based on an iterative process that solves a fixed-point equation, and it has many interesting properties from the viewpoint of linear algebra. The Borda-like aggregation algorithm is a randomized method based on the theory of random graphs and voting theory, and it provides a probabilistic interpretation of the peer assessment problem.
In this paper, we propose another ranking scheme for peer assessment problems that uses HodgeRank, a statistical preference aggregation method based on pairwise comparison data. The purpose of HodgeRank is to find a global ranking from pairwise comparison data. HodgeRank can not only generate a ranking order, but also highlight inconsistencies in the comparisons (see~\cite{jiang2011statistical} for more detail). We apply HodgeRank to problems in online assessment and display the ranking results from HodgeRank and PeerRank in turn.
We briefly introduce HodgeRank and its useful properties in the next section.
\section{HodgeRank}
HodgeRank is a statistical ranking method based on combinatorial Hodge theory for finding a consistent ranking. Rigorously speaking, HodgeRank is the minimum Euclidean norm solution of a graph Laplacian problem.
We start with notation borrowed from graph theory.
Consider a connected graph $\mathcal{G} = (V, E)$, where $V=\{1, 2, \cdots, n\}$ is the set of alternatives to be ranked, and $E\subseteq V\times V$ consists of unordered pairs from $V$.
In this paper, $V$ represents the set of students to be ranked by their peers, and $E$ collects the information of pairwise comparisons, i.e., $(i, j)\in E$ if students $i$ and $j$ are compared at least once.
Let $\Lambda$ denote the set of assignments. For each assignment $\alpha\in\Lambda$, the pairwise comparison data of assignment $\alpha$ on the graph $\mathcal{G}$ is given by a skew-symmetric function $Y^\alpha:E\to\mathbb{R}$, i.e., $Y^\alpha_{ij} = - Y^\alpha_{ji}$ for all $i,j\in V$. Here $Y^\alpha_{ij}>0$ means that the grade of student $j$ is higher than that of student $i$ by $Y^\alpha_{ij}$ credits; for example, $Y^\alpha_{ij}\in[-100, 100]$ on a hundred-mark system.
For each $\alpha\in\Lambda$, a weight matrix $W^\alpha = [w_{ij}^\alpha]$ is associated as follows: $w_{ij}^\alpha>0$ if $Y_{ij}^\alpha\neq0$, and $0$ otherwise. Set $W = \sum\limits_{\alpha\in\Lambda}W^\alpha$.
Let $Y = \sum\limits_{\alpha\in\Lambda}Y^\alpha$ be an $n$-by-$n$ matrix. The goal of HodgeRank is to find a ranking $s:V\to\mathbb{R}$ such that
\begin{equation} \label{e1.1}
Y_{ij} = s_j - s_i\mbox{ for all }i,j\in V.
\end{equation}
However, equations (\ref{e1.1}) cannot always be satisfied. Consider the following example:
\[
Y = \begin{bmatrix}
0 & 1 & -1\\
-1 & 0 & -1\\
1 & 1 & 0
\end{bmatrix}
\]
If there existed $s:V\to\mathbb{R}$ such that (\ref{e1.1}) held, then
\[
1 = Y_{12} = s_2-s_1 = (s_2-s_3)+(s_3-s_1) = Y_{32} + Y_{13} = 0
\]
which is a contradiction. That is, it is not always possible to solve (\ref{e1.1}) for an arbitrary skew-symmetric matrix $Y$. Therefore, we consider the least squares solution of (\ref{e1.1}) instead. Before restating the problem, we introduce some notation.
\begin{Definition}{\rm ~\cite{jiang2011statistical} Denote
\[\mathcal{M}_G = \{X\in\mathbb{R}^{n\times n}~|~X_{ij} = s_j-s_i\mbox{ for some }s:V\to\mathbb{R}\},\] the space of global rankings,
and the combinatorial gradient operator
\[
\mbox{grad}: \mathcal{F}(V, \mathbb{R})\to \mathcal{M}_G
\]
is an operator defined from $\mathcal{F}(V, \mathbb{R})$, the set of all function from $V$ to $\mathbb{R}$ (or the space of all potential functions), to $\mathcal{M}_G$, as follows
\[
\big(\mbox{grad}s\big)(i, j) = s_j - s_i.
\]
}
\end{Definition}
From the example above, it is easy to see that if $X = \mbox{grad}(s)$ for some $s\in\mathcal{F}(V, \mathbb{R})$, then $X_{ij}+X_{jk}+X_{ki} = 0$ for any $(i, j), (j, k), (k, i)\in E$. However, the converse need not be true in general. That is, denote
\[
\mathcal{A}=\{X\in\mathbb{R}^{n\times n}~|~X^T=-X\},
\] the set of all skew-symmetric matrices, and let
\[
\mathcal{M}_T=\{X\in\mathcal{A}~|~X_{ij}+X_{jk}+X_{ki}=0\},
\]
then $\mathcal{M}_G\subseteq\mathcal{M}_T$.
With the notation above, the problem becomes the following optimization problem:
\[
\min\limits_{X\in\mathcal{M}_G}|| X - Y||^2_{2, w}
=
\min\limits_{X\in\mathcal{M}_G}\sum\limits_{(i, j)\in E}w_{ij}(X_{ij}-Y_{ij})^2
\]
That is, once a graph is given, the weights on the edges $E$ determine an optimization problem. Conversely, a graph arises naturally from the ranking data.
Let $\{Y^\alpha~|~\alpha\in\Lambda\}$ be a set of $n$-by-$n$ skew-symmetric matrices, and let $\{W^\alpha~|~\alpha\in\Lambda\}$ be the associated weight matrices as above.
Then an undirected graph $\mathcal{G}=(V, E)$ can be defined by $V = \{1, 2, \cdots, n\}$ and
\[
E = \{(i, j)\in V\times V~|~W_{ij}>0\}.
\]
In this case, we can treat $X$ as an edge flow on $\mathcal{G}$ in the sense of combinatorial vector calculus.
In conclusion, we have the following correspondence between the graph and the pairwise comparison data:
\[
\begin{tikzcd}
\mathcal{G}=(V, E)\arrow[rr, Leftrightarrow] & & \left\{\begin{tabular}{l}
$X^T = -X$\\
$W = \sum\limits_{\alpha\in\Lambda}W^{\alpha}$.
\end{tabular}\right.
\end{tikzcd}
\]
Hence, the skew-symmetric least squares problem can be viewed as an optimization problem for edge flows on a graph.
\begin{Definition}(Consistency){\rm ~\cite{jiang2011statistical}
Let $X:V\times V\to\mathbb{R}$ be a pairwise ranking edge flow on a graph $\mathcal{G}=(V, E)$.
\begin{itemize}
\item $X$ is called consistent on $\{i, j, k\}$, where
$(i,j), (j,k), (k,i)\in E$, if $X_{ij}+X_{jk}+X_{ki}=0$;
\item $X$ is called globally consistent if $X=\mbox{grad}(s)$ for some $s\in\mathcal{F}(V,\mathbb{R})$.
\end{itemize}
}
\end{Definition}
Note that if $X$ is globally consistent, then $X$ is consistent on every 3-clique $\{i, j, k\}$ with $(i,j), (j,k), (k,i)\in E$.
Now, consider the weighted inner product induced by $W$, i.e.,
\[
\langle X, Y\rangle=\mbox{tr}\big(X^T(W\odot Y)\big)=\sum\limits_{(i,j)\in E}W_{ij}X_{ij}Y_{ij}
\] for $X,Y\in\mathcal{A}$, where $\odot$ denotes the Hadamard (elementwise) product.
With this weighted inner product, we obtain two orthogonal decompositions of $\mathcal{A}$:
\[
\mathcal{A} = \mathcal{M}_G\oplus \mathcal{M}_G^{\perp}
= \mathcal{M}_T\oplus \mathcal{M}_T^{\perp}
\]
Since $\mathcal{M}_G\subseteq\mathcal{M}_T$, we have $\mathcal{M}_G^{\perp}\supseteq\mathcal{M}_T^{\perp}$, and we obtain a further orthogonal direct sum decomposition of $\mathcal{A}$:
\[
\mathcal{A} = \mathcal{M}_G\oplus \mathcal{M}_H\oplus \mathcal{M}_T^{\perp},
\]
where $\mathcal{M}_H=\mathcal{M}_T\cap\mathcal{M}_G^{\perp}$.
This decomposition is called the combinatorial Hodge decomposition; for the underlying theory, please refer to~\cite{jiang2011statistical}.
We now state a useful theorem from~\cite{jiang2011statistical}.
\begin{theorem}{\rm ~\cite{jiang2011statistical}\label{t2.1}
\begin{enumerate}
\item The minimum norm least squares solution $s$ of (\ref{e1.1}) satisfies the normal equation
\[
\Delta_0 s = -\mbox{div}~Y,
\]
where $(\Delta_0)_{ij}=\left\{\begin{tabular}{ll}
$\sum\limits_{k:(i,k)\in E}w_{ik}$ & if $i = j$\\
$-w_{ij}$ & if $(i,j)\in E$\\
$0$ & otherwise
\end{tabular}\right.$, and
\[
\mbox{div}(Y)(i)=\sum\limits_{j:(i,j)\in E}w_{ij}Y_{ij}
\]
is the combinatorial divergence of $Y$.
\item Moreover, this minimum norm solution is given explicitly by
\[
s^*=-\Delta_0^\dagger~\mbox{div}Y,
\]
where $\Delta_0^\dagger$ denotes the Moore--Penrose pseudoinverse of the matrix $\Delta_0$.
\end{enumerate}
}
\end{theorem}
The Hodge decomposition characterizes the solution of the least squares problem associated with $(\ref{e1.1})$, while Theorem~\ref{t2.1} shows how to compute the minimum norm solution by solving the normal equation (a small computational sketch is given below). In the next section, we show how to apply HodgeRank to the online peer assessment problem.
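As a concrete illustration of Theorem \ref{t2.1}, the following Python sketch aggregates the pairwise comparison data, forms the graph Laplacian $\Delta_0$ and the divergence, and returns the minimum norm ranking $s^*=-\Delta_0^\dagger\,\mbox{div}\,Y$. The variable names are ours, and the aggregation $Y=\sum_\alpha Y^\alpha$, $W=\sum_\alpha W^\alpha$ follows the text.
\begin{verbatim}
import numpy as np

def hodgerank(Y_list, W_list):
    # Y_list, W_list: lists of n x n skew-symmetric score-difference
    # matrices Y^alpha and their weight matrices W^alpha (one per assignment).
    Y = sum(Y_list)
    W = sum(W_list)
    # Graph Laplacian Delta_0 and divergence div(Y)(i) = sum_j w_ij * Y_ij.
    Delta0 = np.diag(W.sum(axis=1)) - W
    div = (W * Y).sum(axis=1)
    # Minimum norm least squares ranking (Theorem 2.1, item 2).
    return -np.linalg.pinv(Delta0) @ div
\end{verbatim}
For instance, applied to the three-student example above with unit weights on all three edges, this routine returns $s^*=(0,\,2/3,\,-2/3)$, the least squares compromise for those inconsistent comparisons.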
\section{Online peer assessment problem}
As previously mentioned, bias and heterogeneity can lead to unfair scoring in online peer assessments. Students usually grade others relative to the first score they gave, which causes bias. However, since scores are in effect compared with one another, we can use this comparison behavior to reconstruct the true ranking.
The data used in this section were collected from an undergraduate calculus course. In this course, 133 students were asked to upload their GeoGebra~\cite{hohenwarter2002geogebra} assignments. Each student was then asked to review five randomly chosen assignments completed by their peers in order to receive partial credit. There were 13 assignments during the semester.
Note that one key point of HodgeRank is the connectedness of the graph generated by the pairwise comparison data. From Table~\ref{table1.1}, we can see that after half of the semester the comparison data between students form a connected graph. Hence, we can apply HodgeRank to rank all the students from assignment 7 onwards. A small sketch of this connectivity check is given after Table~\ref{table1.1}.
\begin{table}[h]
\caption{Number of components with respect to the number of assignments}
\centering
\begin{tabular}{|c|ccccccc|}\hline
Assignment \# & 1 & 2 & 3 & 4 & 5 & 6 & $7\sim13$\\\hline
\# of components & 21 & 5 & 4 & 3 & 2 & 2 & 1\\\hline
\end{tabular}\label{table1.1}
\end{table}
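A quick way to reproduce the counts in Table~\ref{table1.1} is to track the number of connected components of the comparison graph as assignments accumulate. The sketch below assumes the comparisons are available as one list of $(i, j)$ pairs per assignment and uses the networkx package; both are our assumptions, not part of the original data pipeline.
\begin{verbatim}
import networkx as nx

def components_per_assignment(pairs_per_assignment, n_students):
    # pairs_per_assignment: one list per assignment of (i, j) pairs meaning
    # students i and j were compared at least once on that assignment.
    G = nx.Graph()
    G.add_nodes_from(range(n_students))
    counts = []
    for pairs in pairs_per_assignment:
        G.add_edges_from(pairs)
        counts.append(nx.number_connected_components(G))
    return counts   # expected to match the table, e.g. 21, 5, 4, 3, 2, 2, 1
\end{verbatim}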
The traditional method for finalizing peer assessment consists of either using an average cumulative score or a truncated average score. Although these approaches might have some statistical meaning, they cannot avoid bias and heterogeneity in peer assessment.
Figure~\ref{fig1.1} displays the results of the cumulative score, PeerRank, and HodgeRank. Here, $(\alpha, \beta) = (0.5, 0)$ in the PeerRank setting; for the meaning of these parameters, please refer to~\cite{walsh2014}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{results.png}
\caption{Final results using different ranking methods}\label{fig1.1}
\end{figure}
To compare these results, ranking results were normalized into the interval [0, 1] linearly and sorted in ascending order. In addition, to reveal the tendency of each ranking method, a steady line was plotted on the graph. There are some interesting implications that can be observed from this figure.
First, the cumulative score offers a ranking higher than the steady line, which reflects the existence of bias and heterogeneity in the cumulative average method. Second, PeerRank can be viewed as a modification of average scoring. Third, the sorted ranking result from HodgeRank resembles a normal distribution curve, which might explain why HodgeRank can eliminate bias and heterogeneity.
Note that HodgeRank and PeerRank show different results because their underlying data are entirely different: the former relies on pairwise comparison data, while the latter starts from the average score as an initial ranking. Hence, HodgeRank provides instructors with an objective scoring reference based on score differences rather than cumulative or average scores.
In conclusion, this is the first time HodgeRank has been applied in the field of education. While the numerical results in this study were obtained from real-world data, certain issues, such as how to integrate the HodgeRank ranking method into a peer assessment system, remain unsolved. This task will be attempted as part of our future work.
|
{
"timestamp": "2018-03-08T02:03:53",
"yymm": "1803",
"arxiv_id": "1803.02509",
"language": "en",
"url": "https://arxiv.org/abs/1803.02509"
}
|
\section{$F$-MAD families}
|
{
"timestamp": "2018-07-26T02:05:37",
"yymm": "1803",
"arxiv_id": "1803.02740",
"language": "eu",
"url": "https://arxiv.org/abs/1803.02740"
}
|
\subsubsection*{{\textsf{5.1 Collisions at 62.4 GeV and Higher}}}
Let us begin by considering the highest-energy collisions studied by the STAR collaboration, particularly those with an impact energy of 200 GeV per pair, for which the observation of vorticity \cite{kn:STARcoll2} is most unambiguous.
In \cite{kn:STARcoll,kn:STARcoll2}, the focus is on collisions with $20\%$ to $50\%$ centrality. This means \cite{kn:bron,kn:olli2} that the impact parameters vary from around 6.75 femtometres (fm) up to around 10.5 fm. On this domain, one finds \cite{kn:jiang} that the angular momentum imparted to the plasma in 200 GeV collisions steadily decreases, from about $110000$ (in natural units; in conventional units, $110000 \cdot \hbar \,$) at $b = 6.75$ fm to around $40000$ at $b = 10.5$ fm. However, the volume of the overlap region \emph{also} decreases as the impact parameter increases, and we find that, to a good approximation, the two effects cancel for collisions with $20\%$ to $50\%$ centrality: that is, the angular momentum density $\alpha$ is roughly independent of $b$ in this range of $b$ values. We therefore focus on collisions at $20\%$ centrality, since, in this range, these collisions are least affected by the variations of the nuclear density near the boundary of the nucleus, and by other effects associated with the very small volumes of the plasma produced by high-centrality collisions.
The volume of the plasma sample can then be computed in an elementary way, in a ``hard sphere'' model, using the formula \cite{kn:wolf} for the volume of the intersection of two spheres. However, as explained in \cite{kn:jiang}, this underestimates the effective volume, both because the ``sharp edge'' assumption is inadequate (a Woods-Saxon profile is used in \cite{kn:jiang}) and because, in reality, some nucleons outside the overlap zone contribute to the fireball, effectively increasing the volume. This effect is estimated in \cite{kn:jiang} to be of order 2 to 3, depending on the impact parameter: in our case it is around 2. In addition, we must of course take into account relativistic contraction, estimated in \cite{kn:phobos} to be roughly 7 for the equilibrated plasma.
Taking all this into account, we find that, for $\sqrt{s_{\m{NN}}} = 200$ GeV collisions at $\mathcal{C} = 20\%$ centrality, the angular momentum density is approximately given by
\begin{equation}\label{N}
\alpha\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right)\;\approx\; 758 \; \m{fm}^{- 3}.
\end{equation}
According to \cite{kn:sahoo}, the energy density in this case is approximately $10.55/\m{fm}^4$, and so we compute the maximal vorticity, according to equation (\ref{M}), as
\begin{equation}\label{O}
\omega_{\m{max}}\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right)\;\approx\; 0.00387 \;\m{fm}^{-1}.
\end{equation}
In order to compute a total polarization from this, we need to use the (initial) temperature of the plasma, and this is the point of greatest uncertainty, as mentioned above. Using a temperature of approximately 190 MeV \cite{kn:sahoo} (which may be an over-estimate, so our result may well be somewhat too low), we can express the vorticity bound in this case in the form
\begin{equation}\label{Q}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 0.402\%.
\end{equation}
When the first observations of polarization of $\Lambda$ and $\overline{\Lambda}$ hyperons were announced, such a value was too small to be detected (see the rightmost points in Figure 4 of \cite{kn:STARcoll}). Subsequent analysis of a much larger data set \cite{kn:STARcoll2} has however found evidence of such polarization, reporting values of $$\overline{\mathcal{P}}_{\Lambda'}\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right)\,\approx \, 0.277\,\pm\,0.040\,(+\,0.039\,-0.049)\%$$ and $$\overline{\mathcal{P}}_{\overline{\Lambda}'}\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right) \,\approx \, 0.240\,\pm\,0.045\,(+\,0.061\,-\,0.045)\%,$$ the uncertainties being statistical and systematic respectively.
This is the most precise vorticity observation thus far reported, and it is regarded as particularly trustworthy because the values of $\overline{\mathcal{P}}_{\Lambda'}\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right)$ and $\overline{\mathcal{P}}_{\overline{\Lambda}'}\left(\sqrt{s_{\m{NN}}} = 200\, \m{GeV},\,\mathcal{C} = 20\%\right)$ are found (in \cite{kn:STARcoll2}) to be essentially indistinguishable within these uncertainties, as expected on theoretical grounds. We see that, by the same measure, these results are consistent both with our vorticity bound (\ref{ALPHA}) \emph{and} with our conjectured equality, (\ref{BETA}).
If one repeats this calculation for collisions at an impact energy of 62.4 GeV, one finds that the energy density is of course lower (about $7.59/\m{fm}^4$), as is the temperature (about 179 MeV) and that the angular momentum density also drops, \emph{but more sharply}, to around $236.4/\m{fm}^3$ (it scales approximately linearly with $\sqrt{s_{\m{NN}}}\;$ \cite{kn:jiang}): this pattern is seen throughout these calculations. The result is a much less\footnote{That is, the vorticity is predicted to be larger for smaller angular momentum densities; this is clear from (\ref{ALPHA}) directly, and it is in fact in agreement with all of the reported data. See \cite{kn:jiang} for the physics of this.} stringent bound,
\begin{equation}\label{R}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 62.4\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 0.980\%,
\end{equation}
which might well be detectable in an analysis similar to that of \cite{kn:STARcoll2}; unfortunately, with the current data the error bars are large in this case (see the second-from-rightmost points in Figure 4 of \cite{kn:STARcoll}), and clear evidence of polarization is yet to be obtained. There is in any case no conflict with our claim that $\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}$ can be no larger than this, or indeed with the possibility that (\ref{BETA}) is valid here.
At the other extreme, one can consider the lead-lead collisions studied in the ALICE experiment at the LHC: here, in the collisions at 2.76 TeV, the energy density \cite{kn:aliceenergy} is about 2.3 times larger than in the 200 GeV collisions, but the angular momentum density is about 13.5 times larger for a given centrality; furthermore, the temperature is considerably higher, roughly 300 MeV. The ALICE investigation of peripheral collisions \cite{kn:bed} considered centrality in two ranges: $15\%$ to $50\%$, and also $5\%$ to $15\%$. As before, in the first case we can take $20\%$ to be representative, and then we obtain from (\ref{ALPHA}) an extremely severe bound:
\begin{equation}\label{S}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 2.76\, \m{TeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 0.046\%.
\end{equation}
The other range is interesting, since the data go down to a very low centrality. Here the angular momentum is enormous, but it does not vary monotonically with impact parameter, so this case merits separate investigation. The much larger overlap volume when the impact parameter is small (around 3.5 fm for $5\%$ centrality) makes itself felt here, and we find in this case a slightly \emph{less} stringent bound despite the higher angular momentum:
\begin{equation}\label{T}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 2.76\, \m{TeV},\,\mathcal{C} = 5\%\right) \;\leq \;\approx\, 0.055\%.
\end{equation}
This interesting relaxation of the bound at low centralities is characteristic of the holographic model, and we will discuss it in more detail elsewhere. For the present we merely note that this is still an extremely low value.
Even with the substantial (theoretical and observational) uncertainties here, it is clear that, in all cases, the vorticity bound implies that no hyperon polarization should (at present) be observable in these experiments (and of course this prediction is even more firm for the collisions at 5.02 TeV \cite{kn:ALICEoverview})\footnote{Hyperon polarization may, however, be observable at very high impact energies in future, perhaps in runs 3 or 4 of the LHC \cite{kn:future}.}. This is entirely consistent with the reported data, in which no evidence of $\Lambda/\overline{\Lambda}$ polarization was detected \cite{kn:bed}.
In summary, the vorticity bound asserts that global polarization of $\Lambda$ and $\overline{\Lambda}$ hyperons should certainly not be observable in current data at impact energies much above 200 GeV. It is consistent with a tiny total polarization at 200 GeV ---$\,$ now observed, at almost exactly the maximum value permitted by the bound. If the uncertainties can be very considerably reduced, and if (\ref{BETA}) continues to hold, we expect it to be observable in collisions at 62.4 GeV, at a total percentage about double the observed value at 200 GeV.
Let us turn, then, to much \emph{lower} impact energies.
\subsubsection*{{\textsf{5.2 Collisions at 39, 27, and 19.6 GeV }}}
The STAR collaboration took data at 39, 27, and 19.6 GeV impact energies. We pause our investigation at 19.6 GeV because, while data were also taken at still lower impact energies (to be discussed below), it is not completely clear that the QGP is actually formed in those cases; this is discussed in detail in \cite{kn:sahoo}. We will not take a stand on this issue, but we find it clearest to focus first on the cases which are not in doubt.
In the case of collisions at 39 GeV, with $20\%$ centrality, we find that the angular momentum density $\alpha$ has dropped to around $147.8/\m{fm}^3$, the energy density $\varepsilon$ to $7.25/\m{fm}^4$, the temperature to 178 MeV, and so the vorticity bound (\ref{ALPHA}) gives us
\begin{equation}\label{U}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 39\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 1.51\%;
\end{equation}
the corresponding collisions at 27 GeV have $\alpha \approx 102.3/\m{fm}^3$, $T \approx 172$ MeV, and $\varepsilon \approx 5.89/\m{fm}^4$, and so we have
\begin{equation}\label{UU}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 27\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 1.83\%.
\end{equation}
Finally, for collisions at 19.6 GeV we have a still lower angular momentum density of around $74.2/\m{fm}^3$, $T \approx 171$ MeV, and the energy density is about $5.6/\m{fm}^4$, leading to
\begin{equation}\label{V}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 19.6\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 2.42\%.
\end{equation}
The agreement with Figure 4 of \cite{kn:STARcoll} (sixth pair from the left for 39 GeV, fifth for 27 GeV, fourth for 19.6 GeV; as before, one has to add the two values shown there at each impact energy) is better than one is entitled to expect from a holographic model, for which agreement to within a factor of 2 is normally the best one can hope for. The rate at which the total polarization declines with increasing impact energy is reproduced particularly well.
In short: at these impact energies, the vorticity bound relaxes quite dramatically, to the point where global polarization of $\Lambda$ and $\overline{\Lambda}$ hyperons should be clearly observable; and so it has proved: these are the impact energies for which the evidence for hyperon polarization arising from QGP vorticity was most clear-cut in \cite{kn:STARcoll}.
Finally, we consider the collisions with the lowest impact energies.
\subsubsection*{{\textsf{5.3 Collisions at 14.5, 11.5, and 7.7 GeV}}}
The reported data \cite{kn:STARcoll} on the $\Lambda$ and $\overline{\Lambda}$ hyperon polarizations present a less clear picture than in the case just considered. In particular, the $\overline{\Lambda}$ polarization results appear to be significantly larger than those for $\Lambda$ hyperons, and this suggests that some additional effect may be at work here, making the interpretation of these results somewhat dubious: see \cite{kn:kolo} and particularly \cite{kn:csernkap}. In addition, at these impact energies (particularly for the 7.7 GeV case), it is open to doubt whether a QGP actually forms. If this is not the case, of course, then a gauge-gravity approach \emph{cannot be used}.
With these warnings noted, the predictions of the holographic model are as follows.
At 14.5 GeV, $\alpha \approx 54.96/\m{fm}^3$, $T \approx 168$ MeV, $\varepsilon \approx 4.56/\m{fm}^4$, and then
\begin{equation}\label{W}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 14.5\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 2.73\%;
\end{equation}
collisions at 11.5 GeV have $\alpha \approx 43.59/\m{fm}^3$, $T \approx 164$ MeV, and $\varepsilon \approx 3.97/\m{fm}^4$, leading to
\begin{equation}\label{X}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 11.5\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 3.04\%;
\end{equation}
and finally the 7.7 GeV collisions have $\alpha \approx 29.18/\m{fm}^3$, $T \approx 160$ MeV, and $\varepsilon \approx 3.00/\m{fm}^4$, giving
\begin{equation}\label{Y}
\left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}} = 7.7\, \m{GeV},\,\mathcal{C} = 20\%\right) \;\leq \;\approx\, 3.53\%.
\end{equation}
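The bounds quoted in this and the preceding subsections can be cross-checked numerically. The following sketch is illustrative only: it does not rederive the holographic constant, but simply assumes a scaling $\omega_{\m{max}} \propto \varepsilon/\alpha$ with the proportionality fixed from the 200 GeV values quoted earlier, and converts to a total polarization via $\omega_{\m{max}}\,\hbar c/T$; it reproduces the quoted percentages to within a few percent, the residual differences being attributable to rounding of the quoted inputs.
\begin{verbatim}
# Rough numerical cross-check of the quoted polarization bounds.
# Assumption (for this check only): omega_max = kappa * epsilon / alpha,
# with kappa fixed from the 200 GeV case, and the total polarization
# bound given by omega_max * hbar*c / T.

HBARC = 197.327  # MeV fm

# (sqrt(s_NN) [GeV], alpha [fm^-3], epsilon [fm^-4], T [MeV], quoted bound [%])
data = [
    (200.0, 758.0, 10.55, 190.0, 0.402),
    (62.4, 236.4, 7.59, 179.0, 0.980),
    (39.0, 147.8, 7.25, 178.0, 1.51),
    (27.0, 102.3, 5.89, 172.0, 1.83),
    (19.6, 74.2, 5.60, 171.0, 2.42),
    (14.5, 54.96, 4.56, 168.0, 2.73),
    (11.5, 43.59, 3.97, 164.0, 3.04),
    (7.7, 29.18, 3.00, 160.0, 3.53),
]

kappa = 0.00387 * 758.0 / 10.55   # fixes the constant from the 200 GeV values

for s, alpha, eps, T, quoted in data:
    omega = kappa * eps / alpha          # fm^-1
    bound = 100.0 * omega * HBARC / T    # per cent
    print(f"{s:6.1f} GeV: bound ~ {bound:.2f}%  (quoted {quoted}%)")
\end{verbatim}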
Except at 7.7 GeV, the agreement with \cite{kn:STARcoll} continues to be fairly good. In the 7.7 GeV case, the reported polarization for $\overline{\Lambda}$ hyperons is so much larger than that for $\Lambda$ hyperons that this case should be viewed with particular caution. In any event, in view of the large error bars in these cases, we can still assert that there is at least no contradiction to the vorticity bound.
It is noteworthy that, as one proceeds to higher impact energies, the difference between the reported $\Lambda$ and $\overline{\Lambda}$ hyperon polarizations grows steadily smaller, being quite negligible \cite{kn:STARcoll2} at 200 GeV; at the same time, the agreement of the vorticity bound, and of equation (\ref{BETA}), with the data becomes steadily better. This may not be a coincidence.
\begin{figure}[!h]
\centering
\includegraphics[width=1\textwidth]{PHI1.eps}
\caption{Theoretical upper bounds on the total $\Lambda$ and $\overline{\Lambda}$ hyperon polarization, that is, $ \left[\overline{\mathcal{P}}_{\Lambda'}\,+\,\overline{\mathcal{P}}_{\overline{\Lambda}'}\right]\left(\sqrt{s_{\m{NN}}},\,\mathcal{C} = 20\%\right) \,\leq \, \Phi\left(\sqrt{s_{\m{NN}}},\, \mathcal{C} = 20 \%\right)$, as a percentage, for collisions at $\sqrt{s_{\m{NN}}} = 7.7,\, 11.5,\, 14.5,\, 19.6,\, 27,\, 39,\, 62.4,\, 200$ GeV and $20\%$ centrality.}
\end{figure}
Our results are summarized in Figure 1, which should be compared with Figure 4 of \cite{kn:STARcoll} and Figure 4 of \cite{kn:STARcoll2} by adding together the values corresponding to the two points at each impact energy. The figures appear to be compatible.
A more broad-brush way of making a comparison with the results of \cite{kn:STARcoll} is to compute the vorticity itself, averaged over impact energies. As mentioned above, in \cite{kn:STARcoll} this is given as $(9\,\pm\, 1)\,\times 10^{21}\,$s$^{-1}$, but with a large systematic uncertainty of order 2. Here we find that the $\sqrt{s_{\m{NN}}}$-averaged value of $\omega$, computed using (\ref{BETA}), is approximately $5.3\,\times 10^{21}\,$s$^{-1}$, somewhat low, but in reasonable agreement with the data in view of the uncertainties. (The principal uncertainty is, once again, primarily associated with the difficulty \cite{kn:bus} of determining the temperatures; the temperature estimates used in \cite{kn:STARcoll} differ somewhat from those used here.)
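Under the same illustrative assumption as in the sketch above, the quoted average can be reproduced by converting $\omega$ from fm$^{-1}$ to s$^{-1}$ with $c \approx 2.998 \times 10^{23}$ fm/s and averaging over the eight impact energies:
\begin{verbatim}
# Energy-averaged vorticity in SI units, under the same illustrative assumption.
C_FM_PER_S = 2.998e23                      # fm / s, converts fm^-1 to s^-1
kappa = 0.00387 * 758.0 / 10.55
eps_over_alpha = [10.55 / 758.0, 7.59 / 236.4, 7.25 / 147.8, 5.89 / 102.3,
                  5.60 / 74.2, 4.56 / 54.96, 3.97 / 43.59, 3.00 / 29.18]
omegas = [kappa * r * C_FM_PER_S for r in eps_over_alpha]        # s^-1
print(f"average omega ~ {sum(omegas) / len(omegas):.2e} s^-1")   # ~ 5.3e21
\end{verbatim}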
Our claim, then, is that the relation (\ref{BETA}), inspired by the simplest possible holographic model of this system, approximately captures the actual relation between the vorticities and the angular momentum densities of the plasmas generated by peripheral collisions, at least for impact energies which are not very low (that is, not below 11.5 GeV).
We should also be cautious with regard to centralities. We have seen that both (\ref{ALPHA}) and (\ref{BETA}) are valid for collisions at $\sqrt{s_{\m{NN}}} = 11.5$ GeV and centrality $20\%$, with $\alpha \approx 44$ fm$^{-3}$. We should therefore not assume that the bound is attained at any angular momentum density below around $40$ fm$^{-3}$. This translates to an impact parameter no lower than 2.5 fm (or centrality about 2.5$\%$). On the other hand, all of our discussions have concerned collisions which are not very peripheral, with centrality not much greater than 20$\%$, corresponding to an impact parameter no greater than about 7 fm. This, then, is the domain in which we claim that (\ref{ALPHA}) and (\ref{BETA}) are valid.
\section* {\large{\textsf{6. Conclusion}}}
We have studied the AdS$_5$-Kerr spacetime from a holographic point of view. Such a black hole, with an angular momentum to mass ratio $\mathcal{A}$, corresponds to matter at conformal infinity with an angular momentum density to energy density ratio also equal to $\mathcal{A}$, and with an angular velocity which can at least be bounded above. We have conjectured that a more complete analysis, were it possible, would turn this bound into an equation, and we have argued that the data reported by the STAR collaboration are consistent with this conjecture; so are the corresponding results from ALICE at the LHC, in the sense that the non-observation of $\Lambda$/$\overline{\Lambda}$ hyperon polarization there is consistent with the small values predicted by equation (\ref{BETA}).
The applicability of holographic techniques to this problem is fundamentally limited: the no-hair theorems ensure that we have very few parameters at our disposal in the bulk. The ``universality'' of black hole physics is often cited \cite{kn:nat} as a virtue of the holographic approach, but in this case it severely restricts the number of properties of the ``peripheral plasma'' we can hope to represent\footnote{The only parameter we have not used is the angular momentum corresponding to rotation of the bulk black hole around a second axis; that is, one could use the most general metric given in \cite{kn:hawk}, the metric $g\left(\m{AdSK}_5^{(a,b)}\right)$ in our notation, where $b$ represents a second, independent angular momentum parameter. More speculatively, one could try to use five-dimensional rotating objects with non-spherical horizon topologies, if these can be found explicitly in the asymptotically AdS context \cite{kn:reall}.}.
An optimistic assessment of these results would assert that, within its domain of applicability, the holographic model works unexpectedly well. The agreement of Figure 1 with Figure 4 of \cite{kn:STARcoll} and Figure 4 of \cite{kn:STARcoll2}, apart from one possible outlier, is surprising. The fact that the model predicts, correctly, that hyperon polarization associated with QGP vorticity should be readily observable at impact energies up to around 39 GeV, observable only with difficulty at impact energy 200 GeV, and not at all (in current experiments) at higher energies, is very suggestive. A pessimistic assessment would assert that the predictions of the holographic model are at least not in blatant conflict with the data.
Even if one is sceptical regarding the holographic model, the results do make it reasonable to conjecture that vorticity in the QGP is subject to some kind of general constraint, and that its mathematical form is similar to that of (\ref{ALPHA}) or (\ref{BETA}). Our simple model produces a very specific value for the constant $\varkappa\,$ occurring in those relations; perhaps this can be improved or given a firmer basis by more sophisticated considerations. At least we have a concrete basis for further investigations by other methods.
We have seen that holography focuses our attention\footnote{It might be said \cite{kn:karch} that focusing attention on the ``right'' variables is one of the principal services that holography can render.} on a specific parameter, the ratio $\varepsilon/\alpha$. This quantity depends in a complicated but definite manner on the centrality of a peripheral collision, and the dependence is particularly marked for centralities much smaller than those considered here (or in \cite{kn:STARcoll,kn:STARcoll2}). Our considerations therefore allow predictions to be made regarding what one must expect to find if data can be taken at small centralities. This will be discussed elsewhere.
\addtocounter{section}{1}
\section*{\large{\textsf{Acknowledgements}}}
The author thanks Dr Soon Wanmei for valuable discussions.
|
{
"timestamp": "2019-07-19T02:05:51",
"yymm": "1803",
"arxiv_id": "1803.02528",
"language": "en",
"url": "https://arxiv.org/abs/1803.02528"
}
|
\subsection{Human Pose Estimation}
\label{subsec:human_pose_estimation}
We aim to estimate the human body keypoints $\mathbf{w} = (\vec{w}_1, \dots, \vec{w}_J) \in \mathbb{R}^{3 \times J}$ for $J$ keypoints in real-world coordinates relative to the Kinect sensor, given the color image $\mathbf{I} \in \mathbb{R}^{N\times M\times 3}$, the depth map $\mathbf{D}' \in \mathbb{R}^{N'\times M'}$ and their calibration. Additionally, we predict the hand normal vectors $\mathbf{n} \in \mathbb{R}^{3\times 2}$ for both hands of the person. Without loss of generality, we define the coordinate system in which our predictions live to be identical with the color sensor's frame.
For the Kinect, the color and depth sensors are located in close proximity, but the two frames still correspond to two distinct cameras. Our approach needs to collocate information from the two frames. Therefore, we transform the depth map into the color frame using the camera calibration. As a result, our approach operates on the warped depth map $\mathbf{D} \in \mathbb{R}^{N\times M}$. Due to occlusions, differences in resolution and noise, the resulting depth map $\mathbf{D}$ is sparse, but for better visualization a linear interpolation of $\mathbf{D}$ is shown in \figref{fig:overview_pose}.
\subsubsection{Color Keypoint Detector}
The keypoint detector is applied to the color image $\mathbf{I}$, which yields score maps $\mathbf{s_{\text{2D}}} \in \mathbb{R}^{N\times M\times J}$ encoding the likelihood of a specific human keypoint being present.
The maxima of the score maps $\mathbf{s}_{\text{2D}}$ correspond to the predicted keypoint locations $\mathbf{p} = (\vec{p}_1, \dots, \vec{p}_J) \in \mathbb{R}^{2\times J}$ in the image plane.
Thanks to the many datasets with annotated color frames for human pose estimation \cite{lin2014microsoft, andriluka_2d_2014}, robust detectors are available. We use the OpenPose library \cite{cao2017realtime, simon2017hand, wei2016cpm} with fixed weights in this work.
\subsubsection{VoxelPoseNet}
Given the warped depth map $\mathbf{D}$, a voxel occupancy grid $\mathbf{V} \in \mathbb{R}^{K\times K\times K}$ is calculated with $K=64$. For this purpose the depth map $\mathbf{D}$ is transformed into a point cloud and we calculate a 3D coordinate $\vec{w}_{\text{r}}$, which is the center of $\mathbf{V}$. We calculate $\vec{w}_{\text{r}}$ as the back-projection of the predicted 2D 'neck' keypoint $\vec{p}_{\text{r}}$ using the median depth $d_r$ extracted from the neighborhood of $\vec{p}_{\text{r}}$ in $\mathbf{D}$:
\begin{equation}
\vec{w}_r = d_r \cdot \mathbf{K}^{-1} \cdot \vec{p}_r \text{.}
\end{equation}
Here $\mathbf{K}$ denotes the intrinsic calibration matrix of the camera and $\vec{p}_r$ is in homogeneous coordinates. We pick the value $d_r$ from the depth map as the median of the $3$ closest neighboring valid depth values around $\vec{p}_r$.
We calculate $\mathbf{V}$ by setting an element to $1$ when at least one point of the point cloud lies in the interval it represents, and to zero otherwise. We choose the resolution of the voxel grid to be approximately \SI{3}{cm}.
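As an illustration of this step, a minimal sketch is given below; the variable names and the handling of border cases are our own assumptions and are not taken from the published implementation.
\begin{verbatim}
import numpy as np

def back_project(p_r, d_r, K):
    """Back-project the 2D root keypoint p_r = (u, v) with median depth d_r
    (in metres) into the color camera frame: w_r = d_r * K^{-1} * [u, v, 1]^T."""
    p_h = np.array([p_r[0], p_r[1], 1.0])
    return d_r * np.linalg.inv(K) @ p_h

def voxel_occupancy(points, w_r, K_vox=64, res=0.03):
    """Binary occupancy grid of size K_vox^3 centred at w_r with ~3 cm voxels.
    points: (N, 3) point cloud obtained from the warped depth map (metres)."""
    V = np.zeros((K_vox, K_vox, K_vox), dtype=np.float32)
    idx = np.floor((points - w_r) / res).astype(int) + K_vox // 2
    valid = np.all((idx >= 0) & (idx < K_vox), axis=1)
    V[idx[valid, 0], idx[valid, 1], idx[valid, 2]] = 1.0  # at least one point -> 1
    return V
\end{verbatim}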
\textit{VoxelPoseNet} gets $\mathbf{V}$ and a volume of tiled score maps $\mathbf{s}_{\text{2D}}$ as input and processes them with a series of 3D convolutions. We propose to tile $\mathbf{s}_{\text{2D}}$ along the z-axis, which is equivalent to an orthographic projection approximation.
\textit{VoxelPoseNet} estimates score volumes $\mathbf{s}_{\text{3D}} \in \mathbb{R}^{K\times K\times K\times J}$, which encode keypoint likelihoods in the same way as their 2D counterpart:
\begin{equation}
\mathbf{w}_{\text{VPN}} = \argmax_{x, y, z}(\mathbf{s_{\text{3D}}}) \text{.}
\end{equation}
We use the following heuristic to assemble our final prediction: on the one hand, $\mathbf{w}_{\text{VPN}}$ is predicted by \textit{VoxelPoseNet}. On the other hand, we take the z-component of $\mathbf{w}_{\text{VPN}}$ and the predicted 2D keypoints $\mathbf{p}_{\text{2D}}$ to calculate another set of world coordinates $\mathbf{w}_{\text{projected}}$. For these coordinates the accuracy in the x- and y-directions is no longer limited by the choice of $K$. We choose our final prediction $\mathbf{w}$ from $\mathbf{w}_{\text{projected}}$ and $\mathbf{w}_{\text{VPN}}$ based on the 2D network's prediction confidence, which is the score of $\mathbf{s}_{\text{2D}}$ at $\mathbf{p}$.
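A sketch of this heuristic is given below; it is illustrative only, and the confidence threshold and variable names are our own assumptions.
\begin{verbatim}
import numpy as np

def fuse_predictions(w_vpn, p_2d, s_2d, K, conf_thresh=0.5):
    """Combine VoxelPoseNet estimates with back-projected 2D keypoints.
    w_vpn: (J, 3) 3D keypoints from the score volumes (metres).
    p_2d:  (J, 2) 2D keypoints, s_2d: (H, W, J) score maps, K: intrinsics."""
    K_inv = np.linalg.inv(K)
    w_final = np.empty_like(w_vpn)
    for j in range(w_vpn.shape[0]):
        u, v = np.round(p_2d[j]).astype(int)
        conf = s_2d[v, u, j]                 # 2D detector confidence at p_j
        z = w_vpn[j, 2]                      # keep the depth from VoxelPoseNet
        w_proj = z * K_inv @ np.array([p_2d[j, 0], p_2d[j, 1], 1.0])
        # Trust the re-projected x/y only if the 2D detection is confident.
        w_final[j] = w_proj if conf > conf_thresh else w_vpn[j]
    return w_final
\end{verbatim}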
\figref{fig:overview_pose} shows the network architecture used for \textit{VoxelPoseNet}, which is an encoder-decoder architecture inspired by the U-net \cite{ronneberger2015u} that uses dense blocks \cite{huang2017densely} in the encoder. While decoding to the full-resolution score map, we incorporate multiple intermediate losses denoted by $\mathbf{s}^i_\text{3D}$, which are discussed in Section \ref{subsec:network_loss}.
\subsection{Hand Normal Estimation}
The approach presented in Section \ref{subsec:human_pose_estimation} yields locations for the human hands, which are used to crop the input image centered around the predicted hand keypoint.
For \textit{HandNormalNet} we adopt our previous work on hand pose estimation \cite{zb2017hand}. We exploit the fact that the network from \cite{zb2017hand} estimates the relative transformation between the depicted hand pose and a canonical frame, which gives us the normal vector. We use that network without further retraining.
\subsection{Network training}
\label{subsec:network_loss}
We train \textit{VoxelPoseNet} using a sum of squared $L_2$ losses:
\begin{equation}
\text{L} = \sum_{i} \norm{\mathbf{s}^\text{gt}_\text{3D} - \mathbf{s}^{i \text{, pred}}_\text{3D}}^2
\end{equation}
with a batch size of $2$. Datasets used for training are discussed in Section \ref{sec:datasets}.
The networks are implemented in Tensorflow \cite{abadi2016tensorflow} and we use the ADAM solver \cite{adam_kingsma}. We train for $40000$ iterations with an initial learning rate of $10^{-4}$, which drops by the factor $0.1$ every $10000$ iterations.
Ground truth score volumes $\mathbf{s}^\text{gt}_\text{3D}$ are calculated from the ground truth keypoint location within the voxel grid $\mathbf{V}$. A Gaussian function is placed at the ground truth location and normalized such that its maximum is equal to $1$.
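For illustration, a ground truth score volume of this form can be generated as follows (a sketch; the Gaussian width $\sigma$ is an assumption, as its value is not specified here):
\begin{verbatim}
import numpy as np

def gt_score_volume(gt_idx, K_vox=64, sigma=2.0):
    """Gaussian blob centred at the ground-truth voxel index, peak value 1."""
    grid = np.indices((K_vox, K_vox, K_vox))               # shape (3, K, K, K)
    d2 = sum((grid[i] - gt_idx[i]) ** 2 for i in range(3))
    return np.exp(-d2 / (2.0 * sigma ** 2))                # max equals 1 at gt_idx
\end{verbatim}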
\subsection{Action learning}
With the ability to record the human motion trajectories, action learning requires them to be transferred to the robot. Due to its deviating kinematics and grasping capabilities, the robot cannot directly reproduce the human motions. For the necessary adaptation and the action model generation, we use the learning-from-demonstration approach presented in our previous work~\cite{twelsche16iros,twelsche17iros}. Here, the robot motion is designed to follow the teacher's demonstrations as closely as possible, while deviating as much as necessary to fulfill constraints posed by its geometry. We pose it as a graph optimization problem, in which trajectories of the manipulated object and the teacher's hand and torso serve as input. We account for the robot's grasping skills and kinematics as well as occlusions in the observations and collisions with the environment. We assume that the grasp on the object is fixed during manipulation and all trajectories are smooth in the sense that consecutive poses should be near each other. These constraints are addressed via the graph's edges. During optimization, the teacher's demonstrations are adapted towards trajectories that are feasible for robot execution. For details on the graph structure and the implementation we refer to Welschehold~\textit{et al}.~\cite{twelsche16iros,twelsche17iros}.
\subsection{Multi View Kinect Dataset (MKV)}
\label{subsec:kinect_dataset}
For training our neural network we therefore recorded a new dataset, which comprises 5 actors, 3 locations, and up to 4 viewpoints. There are 2 female and 3 male actors, and the locations correspond to different indoor setups. Some examples are depicted in \figref{fig:mkv_samples}. The poses include various upright and sitting poses as well as walking sequences.
Short sequences were recorded simultaneously by multiple calibrated Kinect v2 devices at a frame rate of \SI{10}{Hz}, while also storing the skeletal predictions of the Kinect SDK. In a post-processing step we applied state-of-the-art human keypoint detectors \cite{cao2017realtime, simon2017hand, wei2016cpm} and used standard triangulation techniques to lift the 2D predictions into 3D.
This results in a dataset with $22406$ samples. Each sample comprises a color image, a depth map, an infrared image, the SDK prediction and a ground truth skeleton annotation obtained through triangulation. The skeleton annotation comprises $18$ keypoints that follow the COCO definitions \cite{lin2014microsoft}.
We apply data augmentation techniques and split the set into an evaluation set of $3546$ samples (\textit{MKV-e}) and a training set with $18860$ samples (\textit{MKV-t}). We divide the two sets by actors and assign both female actors to the evaluation set, which also leaves one location unique to this set.
Additionally, this dataset contains annotated hand normals for a small subset of the samples. The annotations stem from detected and lifted hand keypoints, which were used to calculate the hand normal ground truth. Because detection accuracy was much lower and bad samples were discarded afterwards, this subset is much smaller and provides a total of $129$ annotated samples.
\subsection{Captury Dataset}
Due to the limited number of cameras in the \textit{MKV} setup and the necessity to avoid occluding too many camera views at the same time, we are limited in the amount of possible object interaction of the actors. Therefore we present a second dataset that was recorded using a commercial marker-less motion capture system called Captury\footnote{http://www.thecaptury.com}. It uses $12$ cameras to track the actor at \SI{120}{Hz}, and we calibrated a Kinect v2 device with respect to the Captury. The skeleton tracking provides $23$ keypoints, of which we use $13$ for comparison. We recorded three actors, who performed simple actions like pointing, walking, sitting and interacting with objects like a ball, a chair or an umbrella. One actor of this setting had already been recorded for the \textit{MKV} dataset; these recordings constitute the set used for training. Two previously unseen actors were recorded and form the evaluation set. There are $1535$ samples for training (\textit{CAP-t}) and $1505$ samples for evaluation (\textit{CAP-e}).
The definition of human keypoints is compatible between the two datasets, except for the "head" keypoint, which lacks a suitable counterpart in the \textit{MKV} dataset. This keypoint is excluded from evaluation to avoid a systematic error in the comparison.
\section{EXPERIMENTS - POSE ESTIMATION}
\subsection{Datasets for training}
\tabref{tab:train_sets} shows that the proposed \textit{VoxelPoseNet} already reaches good results on the evaluation split of both datasets when trained only on \textit{MKV-t}. Training a network only on \textit{CAP-t} leads to inferior performance, which is due to the severely limited variation in the training split of the Captury dataset, which contains only a single actor and scene. Training jointly on both sets performs roughly on par with training exclusively on \textit{MKV-t}. Therefore we use \textit{MKV-t} as the default training set for our networks and evaluate on \textit{CAP-e} in the following experiments. Furthermore, we confirm generalization of our \textit{MKV-t}-trained approach on the \textit{InOutDoor} dataset \cite{mees2016choosing}. Because this dataset does not contain pose annotations, we present qualitative results in the supplemental video.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Training set & \textit{CAP-e} full & \textit{CAP-e} subset & \textit{MKV-e} \\
\hline\hline
\textit{MKV-t} & $0.627$ & $0.618$ & $0.793$\\
\textit{CAP-t} & $0.603$ & $0.588$ & $0.665$\\
\textit{CAP-t} \& \textit{MKV-t} & $0.633$ & $0.625$ & $0.794$\\
\hline
\end{tabular}
\caption{Performance measured as area under the curve (AUC) for different training sets of \textit{VoxelPoseNet}. \textit{CAP-t} does not generalize to \textit{MKV-e}, whereas \textit{MKV-t} provides sufficient variation to generalize to \textit{CAP-e}. Training jointly on \textit{CAP-t} and \textit{MKV-t} does not improve results much further. }\label{tab:train_sets}
\end{center}
\end{table}
\subsection{Comparison to literature}
In \tabref{tab:epe_results} we compare our approach with common baseline methods.
The first baseline is the skeleton tracker integrated in Microsoft's Software Development Kit\footnote{https://www.microsoft.com/en-us/download/details.aspx?id=44561} (Kinect SDK). We show that its performance drops heavily on the more challenging subset and therefore argue that it is unsuitable for many robotics applications. Furthermore, \figref{fig:pck_over_dist} shows that the Kinect SDK is unable to predict keypoints farther away than a certain distance. The qualitative examples in \figref{fig:qualitative_results} reveal that the SDK is led astray by objects and is unable to distinguish whether a person is facing towards or away from the camera, which manifests itself in mixing up the left and right sides.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Captury full & Captury subset & Multi Kinect \\
\hline\hline
Kinect SDK & $13.5$ & $16.4$ & $8.9$\\
Naive Lifting & $14.7$ & $15.2$ & $8.8$\\
Tome \textit{et al}.~\cite{tome_lifting_2017} & $22.7$ & $21.9$ & $15.1$\\
Proposed & $\mathbf{11.2}$ & $\mathbf{11.6}$ & $\mathbf{6.1}$\\
\hline
\end{tabular}
\caption{Average mean end point error per keypoint of the predicted 3D pose for different approaches in \SI{}{cm}. For the Captury dataset we additionally report results on the subset of non-frontal scenes and with object interaction.}\label{tab:epe_results}
\end{center}
\end{table}
The baseline named Naive Lifting uses the same keypoint detector for color images as our proposed approach and simply picks the corresponding depth value from the depth map. It chooses the depth value as the median of the $3$ closest neighbors. The approach shows reasonable performance, but is prone to picking bad depth values from the noisy depth map. Also, any kind of occlusion results in an error, as seen in \figref{fig:qualitative_results}.
Tome \textit{et al}.~\cite{tome_lifting_2017} predict scale- and translation-normalized poses. In order to compare the results with the other approaches, we provide the algorithm with ground truth scale and translation. For every prediction we seek the scale and translation that minimize the reconstruction error between ground truth and prediction. \tabref{tab:epe_results} shows that the approach reaches competitive results, but performs worst in our comparison, which is reasonable given the lack of depth information. In \figref{fig:auc_curves_cap} the approach stays far behind, which partly lies in the fact that it fails to provide predictions in $8.7 \%$ of the frames of \textit{CAP-e}, compared with $12.4 \%$ for the Kinect SDK and $\approx 0 \%$ for Naive Lifting and our approach.
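The alignment used for this comparison amounts to a least-squares fit of scale and translation; a minimal sketch is given below (our own formulation of that fit, not code from \cite{tome_lifting_2017}).
\begin{verbatim}
import numpy as np

def align_scale_translation(pred, gt):
    """Least-squares scale s and translation t minimising ||s*pred + t - gt||^2,
    followed by the mean per-keypoint error. pred, gt: (J, 3) arrays."""
    pred_c = pred - pred.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    s = (pred_c * gt_c).sum() / (pred_c ** 2).sum()
    t = gt.mean(axis=0) - s * pred.mean(axis=0)
    aligned = s * pred + t
    return aligned, np.linalg.norm(aligned - gt, axis=1).mean()
\end{verbatim}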
\textit{VoxelPoseNet} outperforms its baseline methods, because it exploits both modalities. On the one hand, color information helps to disambiguate left and right side, which is infeasible from depth alone. On the other hand, the depth map provides valuable information to exactly infer the 3D keypoint. Furthermore, the network learns a prior about possible body part configurations, which makes it possible to infer 3D locations even for completely occluded keypoints (see \figref{fig:qualitative_results}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth, height=.6\columnwidth]{./figures/pose/auc_curve_new2.tikz}
\caption{Performance of different algorithms on \textit{CAP-e} measured as percentage of correct keypoints (PCK) on the more challenging subset of non-frontal poses and object interaction.}\label{fig:auc_curves_cap}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth, height=.6\columnwidth]{./figures/pose/auc_pck_dist_new2.tikz}
\caption{Percentage of correct keypoints (PCK) over their distance to the camera. Most approaches are only mildly affected by the keypoint distance to the camera, but the Kinect SDK can only provide predictions in a limited range.}\label{fig:pck_over_dist}
\end{figure}
\begin{figure*}
\centering
\includegraphics[]{./figures/pose/quali_tikz_new.tikz}
\caption{Typical failure cases of the algorithms evaluated for samples from \textit{CAP-e}. The first row shows the scene and the other two rows depict the ground truth skeleton in dashed blue and the prediction in solid green and red. Green color indicates the persons right side. Predictions of our proposed approach are shown in the last row, whereas the middle row shows predictions by other algorithms. The first two columns correspond to predictions of the Kinect SDK, the next two are by the Naive Lifting approach and the last two by the approach presented by Tome \textit{et al}.~\cite{tome_lifting_2017}. Typical failures for the SDK are caused by objects and or people that face away from the camera. Naive Lifting fails when any sort of keypoint occlusion is present. }
\label{fig:qualitative_results}
\end{figure*}
\subsection{HandNormalNet}
We use the annotated samples of \textit{MKV} to evaluate the accuracy of the normal estimation we achieve with the adopted network from \cite{zb2017hand}. For the $129$ samples we get an average angular error of $60.3$ degrees, which is sufficient for the task learning application, as shown in the next section.
\section{EXPERIMENTS - ACTION LEARNING}
\label{exp_action_learning}
We evaluate our recently proposed graph-based approach~\cite{twelsche17iros} for learning a mobile manipulation task from human demonstrations on data acquired with the approach for 3D human pose estimation presented in this work. We evaluate the methods on the same four tasks as in our previous work~\cite{twelsche17iros}: one task of opening and moving through a room door and three tasks of opening small furniture pieces. The tasks will be referred to as room door, swivel door, drawer, and sliding door. Each consists of three parts. First a specific part of the object is grasped, \textit{e}.\textit{g}., a handle or a knob, then the object is manipulated according to its geometry, and lastly it is released. The demonstrations were recorded with a Kinect v2 at $\SI{10}{Hz}$. As we need to track both the manipulated object and the human teacher, the actions were recorded from a perspective that shows the human from the side or the back, making pose estimation challenging. For an example of the setup see \figref{fig:robotaction}.
\subsection{Adapting Human Demonstrations to Robot Requirements}
First we evaluate the adaptation of the recorded demonstrations to the robot's capabilities. Specifically, we compare the optimization for all aforementioned tasks for two different teacher pose estimation methods. The first relies on detecting markers attached to the teacher's hand and torso and was conducted for our previous work~\cite{twelsche17iros}. The second follows the approach presented in this work. In Table~\ref{T:grasp_comparison} we summarize the numerical evaluation for both recording methods. The table shows that the offset between a valid robot grasp and the demonstrated grasp pose is higher for the 3D human pose estimation than for the estimation with markers for all tasks. The highest difference occurs for the room door task, because the hand is occluded in many frames, resulting in fewer predictions. Nevertheless, our graph-based optimization is still able to shift the human hand trajectory to reproduce the intended grasp, see \figref{fig:GraspAdaption}. This is reflected in higher distances, both Euclidean and angular, between gripper and recorded hand poses after the optimization. Next we compare the standard deviation of the transformations between the object and the gripper, and between the object and the hand, in the manipulation segment. These transformations correspond to the robot and human grasps. We see that for both the translational and the rotational part we have comparable values for the two pose estimation methods. This indicates that, although not as accurate as using markers, the pose estimation is still highly robust, meaning that the error is systematic and the relative measurements are consistent with little deviation. After the optimization we obtain low standard deviations for both the human and the robot grasp, which corresponds, as desired, to a fixed grasp during manipulation. On the one hand, the results show that our graph optimization approach is both capable of adapting noisy human teacher demonstrations into robot-friendly trajectories and strictly necessary for doing so. On the other hand, they also demonstrate that our approach for pose estimation without markers is sufficiently accurate for action learning.
\setlength{\tabcolsep}{5pt}
\begin{table*}
\centering
\begin{tabular}{l | c c | c c | c c | c c |}
&\multicolumn{2}{c|}{Room Door} &\multicolumn{2}{c|}{Swivel Door} &\multicolumn{2}{c|}{Drawer} &\multicolumn{2}{c|}{Sliding Door}\\
& Before Opt. & After Opt. & Before Opt. & After Opt. & Before Opt. & After Opt. & Before Opt. & After Opt.\\
\hline
& \multicolumn{8}{c|}{Human Pose Estimation with AR-Marker}\\
&\multicolumn{2}{c|}{10 demos, 1529 poses} &\multicolumn{2}{c|}{4 demos, 419 poses} &\multicolumn{2}{c|}{6 demos, 656 poses} &\multicolumn{2}{c|}{10 demos, 1482 poses}\\
\hline
Euclidean distance gripper-grasp & $\SI{2.82}{cm}$ & $\SI{0.55}{cm}$ & $\SI{2.36}{cm}$ & $\SI{0.49}{cm}$ & $\SI{6.33}{cm}$ & $\SI{0.37}{cm}$ & $\SI{3.23}{cm}$ & $\SI{0.60}{cm}$\\
Angular distance gripper-grasp & $18.3\degree$ & $8.0\degree$ & $5.3 \degree$ & $0.7 \degree$ & $5.4\degree$ & $1.6\degree$ & $6.5\degree$ & $0.5\degree$\\
Euclidean distance gripper-hand & $-$ & $\SI{2.2}{cm}$ & $-$ & $\SI{2.68}{cm}$ & $-$ & $\SI{5.54}{cm}$ & $-$ & $\SI{3.13}{cm}$\\
Angular distance gripper-hand & $-$ & $13.5 \degree$ & $-$ & $3.0 \degree$ & $-$ & $2.8\degree$ & $-$ &$5.8\degree$\\
Std dev on gripper-object trans. & $\SI{1.7}{cm}$ & $\SI{0.53}{cm}$ & $\SI{2.35}{cm}$ & $\SI{0.21}{cm}$ & $\SI{2.66}{cm}$ & $\SI{0.18}{cm}$ & $\SI{0.51}{cm}$ & $\SI{0.12}{cm}$\\
Std dev on gripper-object rot. & $20.5\degree$ & $2.4 \degree$ & $19.3\degree$ & $1.6\degree$ & $0.88\degree$ & $0.21\degree$ & $3.4\degree$ & $0.34\degree$\\
Std dev on hand-object trans. & $\SI{1.7}{cm}$ & $\SI{0.5}{cm}$ & $\SI{2.35}{cm}$ & $\SI{0.16}{cm}$ & $\SI{2.66}{cm}$ & $\SI{0.28}{cm}$ & $\SI{0.51}{cm}$ & $\SI{0.16}{cm}$\\
Std dev on hand-object rot. & $20.5\degree$ & $4.6\degree$ & $19.3\degree$ & $0.9\degree$ & $0.88\degree$ & $0.3\degree$ & $3.4\degree$ & $0.6\degree$\\
Map collision free poses & $\SI{89.2}{\%}$ & $\SI{99.74}{\%}$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
Kinematically achievable & $\SI{69.8}{\%}$ & $\SI{96.86}{\%}$ & $\SI{85.9}{\%}$ & $\SI{99.52}{\%}$ & $\SI{87.3}{\%}$ & $\SI{100}{\%}$ & $\SI{63.2}{\%}$ & $\SI{99.93}{\%}$\\
\hline
& \multicolumn{8}{c|}{3D Human Pose Estimation from RGBD}\\
& \multicolumn{2}{c|}{10 demos, 1215 poses} & \multicolumn{2}{c|}{5 demos, 330 poses} & \multicolumn{2}{c|}{5 demos, 370 poses} &\multicolumn{2}{c|}{5 demos, 451 poses}\\
\hline
Euclidean distance gripper-grasp & $\SI{31.17}{cm}$ & $\SI{0.40}{cm}$ & $\SI{9.77}{cm}$ & $\SI{0.53}{cm}$ & $\SI{16.18}{cm}$ & $\SI{0.32}{cm}$ & $\SI{5.64}{cm}$ & $\SI{0.19}{cm}$\\
Angular distance gripper-grasp & $130.27\degree$ & $0.24\degree$ & $102.9\degree$ & $0.4\degree$ & $108.89\degree$ & $0.06\degree$ & $149.30\degree$ & $0.07\degree$\\
Euclidean distance gripper-hand & $-$ & $\SI{32.78}{cm}$ & $-$ & $\SI{14.19}{cm}$ & $-$ & $\SI{24.18}{cm}$ & $-$ & $\SI{19.26}{cm}$\\
Angular distance gripper-hand & $-$ & $63.94\degree$ & $-$ & $92.87\degree$ & $-$ & $103.39\degree$ & $-$ &$121.69\degree$\\
Std dev on gripper-object trans. & $\SI{17.40}{cm}$ & $\SI{0.25}{cm}$ & $\SI{1.75}{cm}$ & $\SI{0.15}{cm}$ & $\SI{1.08}{cm}$ & $\SI{0.12}{cm}$ & $\SI{1.03}{cm}$ & $\SI{0.10}{cm}$\\
Std dev on gripper-object rot. & $34.31\degree$ & $0.14\degree$ & $23.39\degree$ & $0.76\degree$ & $14.28\degree$ & $0.06\degree$ & $15.86\degree$ & $0.03\degree$\\
Std dev on hand-object trans. & $\SI{17.40}{cm}$ & $\SI{8.01}{cm}$ & $\SI{1.75}{cm}$ & $\SI{1.18}{cm}$ & $\SI{1.08}{cm}$ & $\SI{0.83}{cm}$ & $\SI{1.03}{cm}$ & $\SI{0.73}{cm}$\\
Std dev on hand-object rot. & $34.31\degree$ & $0.50\degree$ & $23.39\degree$ & $0.89\degree$ & $14.28\degree$ & $0.18\degree$ & $15.86\degree$ & $0.27\degree$\\
Map collision free poses & $\SI{89.14}{\%}$ & $\SI{99.51}{\%}$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$\\
Kinematically achievable & $\SI{38.10}{\%}$ & $\SI{96.05}{\%}$ & $\SI{80.0}{\%}$ & $\SI{97.27}{\%}$ & $\SI{60.81}{\%}$ & $\SI{98.92}{\%}$ & $\SI{63.64}{\%}$ & $\SI{95.12}{\%}$\\
\end{tabular}
\caption{Results for the optimization for all four trained tasks. The upper half of the table summarizes the results from experiments conducted in~\cite{twelsche17iros}. There the human pose estimation was obtained using markers. The lower half shows the results of the experiments carried out with the human pose estimation presented in this work. The total number of recorded poses refers to the length after interpolating missing ones. The shown distance between gripper and grasp poses is a mean over the endpoints of the reaching segments of the demonstrations. For the distance between gripper and hand as well as the collisions and the kinematic feasibility all pose tuples are considered. Kinematic feasibility expresses the lookup in the inverse reachability map. For the relation between object and robot gripper respectively human hand a mean over all poses in the manipulation segments is calculated. Since gripper poses are initialized with the measured hand poses no meaningful distance before optimization can be given. For the three furniture operating tasks no collisions with the map are considered.
}
\label{T:grasp_comparison}
\end{table*}
\setlength{\tabcolsep}{6pt}
\setlength{\tabcolsep}{1pt}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.239\textwidth ,trim={10.0cm 2.5cm 5.5cm 3.5cm},clip]{figures/robotics/graspAdaptionDrawer4} &
\includegraphics[width=0.239\textwidth ,trim={17.0cm 8.5cm 9.5cm 1.5cm},clip]{figures/robotics/graspAdaptionKallaxDoor1}
\end{tabular}
\caption{Adaptation of the recorded human teacher trajectory to the robot's grasping capabilities for grasping the handle of the drawer (left) and the swivel door (right). The gripper poses (magenta dots) are shifted towards the handle of the drawer and the door, respectively, leading to a successful robot grasp. By just imitating the human hand motion (orange dots), the grasps would fail.}
\label{fig:GraspAdaption}
\end{figure}
\setlength{\tabcolsep}{6pt}
\subsection{Action Imitation by the Robot}
In a follow-up experiment we used the adapted demonstrations from our pose estimation approach shown in Table~\ref{T:grasp_comparison} to learn action models that our PR2 robot can use to imitate the demonstrated actions in real world settings. These time-driven models are learned as in our previous work~\cite{twelsche17iros} using mixtures of Gaussians~\cite{Calinon12Hum}. We learn combined action models for robot gripper and base in Cartesian space. The models are used to generate trajectories for the robot in the frame of the manipulated object.
With the learned models we reproduced each action five times. For opening the swivel door we had one failure due to localization problems during the grasping. For the drawer and the room door all trials of grasping and manipulating were successful. The sliding door was always grasped successfully but due to the small door knob and the tension resulting from the combined gripper and base motion, the knob was accidentally released during the manipulation process. We ran five successful trials of opening the sliding door by keeping the robot base steady. A visualization of the teaching process and the robot reproducing the action demonstration can be seen in \figref{fig:robotaction}.
\setlength{\tabcolsep}{1pt}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.239\textwidth]{figures/robotics/OpenKallaxTraj-1} &
\includegraphics[width=0.239\textwidth]{figures/robotics/PR2_OpenKallax-1}
\end{tabular}
\caption{On the left image the teacher demonstrates the task of opening the swivel door. Superimposed on the image we see the recorded trajectories for hand (orange), torso (green) and manipulated object (blue) which serve as the input for the action learning. The right image shows the robot reproducing the action using a model learned from the teacher demonstration.}
\label{fig:robotaction}
\end{figure}
\setlength{\tabcolsep}{6pt}
\section{INTRODUCTION}
\input{1_Introduction.tex}
\section{RELATED WORK}
\input{2_RelatedWork.tex}
\section{APPROACH}
\input{3_Approach.tex}
\section{DATASETS}
\label{sec:datasets}
\input{4_Datasets.tex}
\input{5_Experiments.tex}
\section{CONCLUSIONS}
We propose a CNN-based system that jointly uses color and depth information in order to predict 3D human pose in real-world units. This allows us to exceed the performance of existing methods. Our work introduces two RGBD datasets, which can be used by future approaches. We show how our approach for 3D human pose estimation is applied in a task learning application that allows non-expert users to teach tasks to service robots. This is demonstrated in real-world experiments in which our PR2 robot reproduces human-demonstrated tasks without any markers on the human teacher.
{\small
\bibliographystyle{ieee}
|
{
"timestamp": "2018-03-14T01:08:59",
"yymm": "1803",
"arxiv_id": "1803.02622",
"language": "en",
"url": "https://arxiv.org/abs/1803.02622"
}
|
\section{INTRODUCTION}
The task of incrementally building a semantically annotated 3D map is a challenging research topic for both the robotics and computer vision communities. It has a wide range of applications including autonomous grasping and manipulation of objects, scene understanding, robotics navigation and augmented reality. For this reason, a considerable research effort is currently under way in the literature with the aim of developing efficient systems that can scale to mobile/embedded architectures while being robust enough to generalize to unseen environments.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{teaser2.eps}
\caption{Our method achieves accurate semantic mapping (comparable to the state of the art \cite{mccormac2017semanticfusion}, bottom row) while being more efficient and scalable. It relies on a geometric segmentation that takes semantic information into account, thus providing more meaningful segments than the method of Tateno et al. \cite{tateno2015real} (middle row).
\label{fig:teaser}}
\end{center}
\end{figure}
Motivated by the recent developments of deep learning and Convolutional Neural Networks (CNNs) for 3D data, recent methods have mostly focused on increasing the accuracy of the semantic segmentation map \cite{mccormac2017semanticfusion,li2016semi,yang2017semantic}. At the same time, they still face the critical issue of yielding real-time performance, since such systems are built on a set of computationally demanding processing stages, including 3D reconstruction, camera pose estimation and CNN-based semantic segmentation. This becomes even more relevant with regards to embedded and mobile architectures that are typically employed for the aforementioned applications of robotics navigation/grasping and augmented reality.
To achieve real-time performance, some of these methods suggested to only extract semantic information on a subset of the input frames.
For example, the methods proposed by Hermans et al. \cite{hermans2014dense} and McCormac et al. (SemanticFusion) \cite{mccormac2017semanticfusion} achieved, respectively, an output frame-rate of 4Hz and 25.3Hz, by running semantic segmentation, respectively, every 6 and every 10 frames.
While such frame skipping strategy can improve run-time performance, it limits their range of application, since it tends to bring in inaccuracies under fast camera motions.
In this paper, we propose a novel incremental semantic mapping approach that aims at overcoming such issues by yielding highly accurate semantic scene reconstruction (see bottom row of Fig. \ref{fig:teaser}) in real-time. The framework relies on effectively combining a reliable camera pose tracking (InfiniTAM v3 \cite{InfiniTAM_V3_Report_2017}), an incremental segmentation approach \cite{tateno2015real}, and an efficient CNN-based semantic segmentation method. In particular, the 3D map of the scene is built through the fast and robust surfel-based SLAM approach in \cite{keller2013real}, and geometric segmentation labels are assigned to each surfel based on the approach of \cite{tateno2015real}. Class probabilities of each label are updated through a specifically designed CNN.
We introduce a new probabilistic strategy to deal with one of the most delicate stages, i.e. class probability assignment. According to this strategy, and in contrast to conventional semantic mapping methods which assign class probabilities to each surfel \cite{hermans2014dense,mccormac2017semanticfusion,li2016semi}, we assign class probabilities to each segment. This notably reduces the time complexity, since at each new frame probability distributions need to be updated only for those segments which are visible on the image plane from the current camera pose, in contrast to conventional methods which need to update such probabilities for all surfels on the image plane.
This strategy also notably reduces the space complexity, since probability distributions need to be stored only per segment rather than per surfel.
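To see why the cost scales with the number of visible segments rather than with the number of surfels, a minimal sketch of a per-segment update is given below; it is illustrative only, does not reproduce the exact update rule used in our framework, and the variable names are our own.
\begin{verbatim}
import numpy as np

def update_segment_probabilities(seg_probs, visible_ids, cnn_probs, seg_map):
    """Illustrative per-segment class-probability update.
    seg_probs: dict segment_id -> (N_classes,) class distribution.
    cnn_probs: (h, w, N_classes) CNN output; seg_map: (h, w) rendered segment ids."""
    for seg_id in visible_ids:                      # loop over visible segments only
        mask = (seg_map == seg_id)
        if not mask.any():
            continue
        evidence = cnn_probs[mask].mean(axis=0)     # average CNN prediction on the segment
        updated = seg_probs[seg_id] * evidence      # simple multiplicative fusion
        seg_probs[seg_id] = updated / updated.sum() # renormalise
    return seg_probs
\end{verbatim}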
In return, the semantic information also improves the geometry-based segmentation from \cite{tateno2015real}: taking semantic information into account provides additional edges that better represent the semantic structure of the scene, hence allowing us to obtain accurate segment regions (see middle row of Fig. \ref{fig:teaser}). Since smoothing of semantic labels is carried out at the geometric fusion stage, this allows us to utilize a CNN with a low-resolution (i.e. $40\times30$) output, with a forward pass requiring only 19ms on an off-the-shelf GPU (i.e., a GeForce GTX 1080).
The overall framework is capable of working in real time on off-the-shelf architectures, while requiring a lower computational complexity than the state of the art.
In addition, differently from other methods such as \cite{hermans2014dense,mccormac2017semanticfusion,li2016semi,yang2017semantic,kundu2014joint}, our approach does not require any post-processing based on, e.g., Conditional Random Field, to refine the output of the semantic mapping.
We demonstrate the effectiveness and efficiency of our approach on a common benchmark, i.e. the NYUv2 dataset \cite{silberman2012indoor}, reporting accuracy comparable to state-of-the-art approaches while being notably faster and scaling better in terms of memory requirements. In addition, we also report an analysis of the time and space complexity of our method, demonstrating its advantages with respect to conventional approaches.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{flow.eps}
\caption{Flow of the proposed framework. Efficient CNN-based semantic segmentation is exploited to refine the geometric edges on frame-wise segmentation, then it is efficiently fused in the SLAM-based model using the rendered viewpoint according to the estimated camera pose. \label{fig:flow}}
\end{center}
\end{figure*}
\section{RELATED WORK}
\subsection{Semantic mapping}
Related works aimed at incrementally computing a semantic 3D map of the environment are mostly built on top of the following three main stages: (i) frame-wise segmentation to estimate the per-pixel class probabilities of the input frame; (ii) 2D-3D label transfer to fuse the 2D semantic segmentation labels into the 3D map; and (iii) 3D refinement to denoise the class probabilities of the 3D map \cite{hermans2014dense,mccormac2017semanticfusion,li2016semi,yang2017semantic,vineet2015incremental,kundu2014joint}.
Notably, \cite{hermans2014dense} employed Random Decision Forests (RDF), a Bayesian framework and Conditional Random Field (CRF) respectively to carry out the three above-mentioned stages.
Since the CRF works on each element of the 3D map reconstructed via SLAM, it is effective in refining the semantic model and obtaining high accuracy. Nevertheless, it is computationally heavy, requiring 400 to 1800ms just for the CRF stage and yielding a frame rate of 3.9 to 4.6Hz, even though the method computes the RDF only once every 6 input frames and the CRF once every 30 frames.
SemanticFusion \cite{mccormac2017semanticfusion} employs the CNN model proposed by Noh et al. \cite{noh2015learning} for 2D semantic segmentation, a Bayesian framework for 2D-3D label transfer, and a CRF for 3D refinement.
By using a CNN to carry out semantic segmentation of each input frame, the method can achieve a better runtime performance. However, the CNN still requires 51.2ms and the Bayesian update scheme requires a further 41.1ms, eventually running at 25.3Hz by applying these stages once every 10 input frames.
Other related works include \cite{sengupta2013urban,koppula2011semantic,zhao2016building} that aim at building a semantic 3D map, although not incrementally.
\cite{sengupta2013urban} firstly builds a 3D map of a scene through RGB-D SLAM framework, then assigns class probabilities to each point of the 3D map by means of a Dense CRF.
\cite{koppula2011semantic} exploits relational information derived from the full-scene 3D map for object labeling relying on a Markov-Random-Field (MRF)-based model.
In addition, several methods for recognizing only a part of the 3D map without making a dense semantic 3D map have been proposed \cite{salas2013slam++,bowman2017probabilistic,galvez2016real,fioraio2013joint}.
SLAM++ \cite{salas2013slam++} maps indoor scenes at the level of semantically defined objects.
Bowman et al. \cite{bowman2017probabilistic} improved the RGB SLAM performance in terms of camera pose and scale estimation by utilizing not only low-level geometric features such as points, lines, and planes but also detected objects as landmarks.
\subsection{2D semantic segmentation}
Several CNN models \cite{long2015fully,badrinarayanan2017segnet,noh2015learning,chen2017rethinking} for semantic segmentation have been proposed, sometimes yielding impressive results.
To achieve a highly precise semantic segmentation map, such methods aim at exploiting global information and context to improve the features extracted by the convolutional layers. In particular, Fully Convolutional Network (FCN) \cite{long2015fully} proposed a skip architecture that combines semantic information from a deep layer with appearance information from a shallow layer to perform accurate and detailed segmentation.
\subsection{3D geometric segmentation}
On the other hand, 3D geometric segmentation algorithms have been developed to extract geometrically separated segments from 3D data in an unsupervised fashion. Real-time segmentation of depth maps has been investigated in the works of Uckermann et al. \cite{Uckermann2013, Uckermann2012}, Pieropan et al. \cite{Pieropan2014} and Abramov et al. \cite{Abramov2012}. As a consequence, in addition to frame-wise segmentation, \cite{Finman2013, tateno2015real} have addressed the problem of real-time geometric segmentation of 3D point clouds or 3D meshes reconstructed via dense SLAM using an incremental approach.
\section{METHOD}
Fig. \ref{fig:flow} shows the flow diagram of our framework.
The input is represented by RGB and depth frames obtained from a moving RGB-D sensor, which are processed individually.
Our method has four components: SLAM framework, 2D semantic segmentation with a specifically designed CNN, incrementally building a geometric 3D map, and updating class probabilities assigned to each segment of the geometric 3D map.
Firstly, SLAM and semantic segmentation with the CNN are performed simultaneously.
In the segmentation stage, a geometric edge map is generated from the current depth frame and improved with edges extracted from the semantic segmentation result so as to obtain a semantic-aware representation.
The geometric 3D map is updated through this edge map and rendered onto the current image plane.
Finally, class probabilities assigned to each segmented region are updated with the rendered segmentation map.
The following section describes these components in more detail.
\subsection{SLAM}
To carry out SLAM in terms of camera pose estimation and fusion, we employ the dense approach of InfiniTAM v3 \cite{InfiniTAM_V3_Report_2017}, relying on the efficient and scalable data representation proposed by Keller et al. \cite{keller2013real}, which uses a set of \emph{surfels} $s_k$ to build the 3D map.
As per this method, at the $t$-th incoming RGB-D frame, the current camera pose $\bm{T}_t \in \mathbb{SE} (3)$ is estimated through Iterative Closest Point \cite{low2004linear} and RGB alignment.
The new surfels generated from the current depth map are fused into the 3D map by means of the estimated camera pose and are used to refine the 3D coordinates and normals associated with the existing surfels.
\subsection{CNN architecture}
The details of the CNN architecture proposed in our framework, Low-Res Net, are shown in Fig. \ref{fig:flow} (g).
The architecture combines concepts from state-of-the-art CNN models, i.e. Deep Residual Networks (ResNet) \cite{he2016deep} and FCN \cite{long2015fully}.
Specifically, the original FCN architecture \cite{long2015fully} utilizes the VGG model \cite{simonyan2014very} to extract features and outputs a semantic segmentation result at the same resolution of the input image.
On the other hand, Low-Res Net employs the ResNet architecture \cite{he2016deep}, which achieved higher accuracy than the VGG model \cite{simonyan2014very} in ImageNet \cite{krizhevsky2012imagenet}, and employs skip connections as done by FCN \cite{long2015fully}.
To achieve a fast forward pass, we do not incorporate multi-layered upsampling and design the decoder with only two deconvolution layers with a stride of two.
Therefore, given the input image $\mathcal{I}_t(\bm{u}), \bm{u}=(x, y) \in \mathbb{Z}^2, 0 \leq x < W, 0 \leq y < H$, Low-Res Net outputs a semantic segmentation map in Fig. \ref{fig:flow} (h) as a set of semantic class probabilities, i.e.
\begin{equation}
\tilde{\mathcal{S}}(\bm{v})=\mathcal{P}(c|\mathcal{I}_t)
\end{equation}
where $\bm{v}=(s, t) \in \mathbb{Z}^2, 0 \leq s < W/8, 0 \leq t < H/8$. Here, $\mathcal{P}(c)$ denotes a class probability, where $\mathcal{P}(c) \in \mathbb{R}, 0 \leq \mathcal{P}(c) \leq 1, c \in \mathbb{Z}, 0 \leq c < N$ with $N$ being the number of categories.
Hereinafter, the symbol $\tilde{\ }$ denotes a map of size $H / 8 \times W / 8$.
In our implementation, $H = 240$, $W = 320$, and the number of channels of the input image $\mathcal{I}$ is 3 as in ResNet \cite{he2016deep}.
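For clarity on the tensor shapes involved, the listing below is a minimal PyTorch-style sketch of such an output head; the ResNet-18 backbone depth, the channel widths, the single $1/16$ skip connection, and the final softmax are illustrative assumptions rather than the exact configuration trained in our implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class LowResNetSketch(nn.Module):
    # Input : B x 3 x 240 x 320 RGB frame
    # Output: B x N x 30 x 40 class probabilities (H/8 x W/8)
    def __init__(self, num_classes):
        super().__init__()
        backbone = torchvision.models.resnet18()
        # stem + layer1..layer3 -> features at 1/16 resolution
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool,
                                  backbone.layer1, backbone.layer2,
                                  backbone.layer3)
        self.layer4 = backbone.layer4            # features at 1/32 resolution
        self.score32 = nn.Conv2d(512, num_classes, 1)
        self.score16 = nn.Conv2d(256, num_classes, 1)
        # two deconvolution layers with stride two: 1/32 -> 1/16 -> 1/8
        self.up1 = nn.ConvTranspose2d(num_classes, num_classes, 4, 2, 1)
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, 2, 1)

    def forward(self, x):
        f16 = self.stem(x)
        f32 = self.layer4(f16)
        y = self.up1(self.score32(f32))
        # align sizes (needed when H or W is not a multiple of 32),
        # then add the FCN-style skip connection
        y = F.interpolate(y, size=f16.shape[2:], mode='bilinear',
                          align_corners=False)
        y = y + self.score16(f16)
        y = self.up2(y)                          # H/8 x W/8 logits
        return torch.softmax(y, dim=1)           # P(c | I_t)
\end{verbatim}
A forward pass on a $240 \times 320$ frame then yields a $30 \times 40$ probability map, matching the $\tilde{\ }$ convention introduced above.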
\subsection{Segmentation}
Our geometrical segmentation scheme is based on the method proposed by Tateno et al. \cite{tateno2015real}.
The method incrementally builds up a geometric 3D map, where a segmentation label $l_i$ is associated with each surfel $s_k$, by properly propagating and merging segments extracted from the current depth map.
As a result, we obtain a binary geometric edge map $\mathcal{B}^g$ in Fig. \ref{fig:flow} (c) from the input depth frame by comparing neighboring normal angles and vertex distances and by relying on a vertex and normal map as proposed in \cite{tateno2015real}.
Here, $\mathcal{B}^g(\bm{u})$ takes $1$ if $\bm{u}$ is on an edge and $0$ otherwise.
It is important to point out that, while $ \mathcal{B}^g $ is stable since its edges are extracted geometrically, edges between objects that do not present notable geometric characteristics (e.g., two nearby flat objects) cannot be extracted.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{ours_vs_inseg.eps}
\caption{Example results of our segmentation improvement scheme. \label{fig:vsinseg}}
\end{center}
\end{figure*}
Differently from the geometric segmentation from \cite{tateno2015real}, we introduce semantic information into the segments.
First, we generate a class map $\tilde{\mathcal{S}}^c$, where each component $\tilde{\mathcal{S}}^c(\bm{v})$ has a class category $c$, with
\begin{equation}
\tilde{\mathcal{S}}^c(\bm{v}) = \mathop{\rm arg~max}\limits_{c}\ \mathcal{P}(c|\mathcal{I}_t).
\end{equation}
After applying a median filter to $\tilde{\mathcal{S}}^c$ to remove isolated points, we resize $\tilde{\mathcal{S}}^c$ to $\mathcal{S}^c$ with a nearest neighbor interpolation. We would like to point out that the choice of such an efficient interpolation approach over a higher quality resizing such as bilinear interpolation is motivated by the fact that contours of a CNN-based semantic segmentation map are often imprecise, hence a better interpolation method would not yield benefits in terms of accuracy. At the same time, noise in the segment contours is eventually averaged out by the employed confidence-based label fusion approach. Then, we generate a binary semantic edge map $\mathcal{B}^s$ with the following scheme:
\begin{equation}
\label{eq:semedge}
\mathcal{B}^s (\bm{u} = (x, y)) =
\left\{%
\begin{array}{ll}
1 & \mbox{if}\ \ \text{\parbox{0.4\textwidth}{%
$\mathcal{S}^c (x, y) \neq \mathcal{S}^c (x + 1, y) \lor \\ \mathcal{S}^c (x, y) \neq \mathcal{S}^c (x, y + 1) \lor \\ \mathcal{S}^c (x, y) \neq \mathcal{S}^c (x + 1, y + 1)$}} \\
0 & \mbox{otherwise}
\end{array}%
\right.
\end{equation}
The final binary semantic-aware edge map $\mathcal{B}$, (d) in Fig. \ref{fig:flow}, is obtained by applying a binary $OR$ operator between $\mathcal{B}^g$ and $\mathcal{B}^s$.
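As a compact reference for these per-pixel operations, the following NumPy sketch reproduces the steps of this subsection (class map, median filtering, nearest-neighbour upsampling by a factor of $8$, the semantic edge test of Eq. (\ref{eq:semedge}), and the final $OR$ with the geometric edges); the function name and the $3\times 3$ median filter size are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def semantic_aware_edge_map(S_prob, B_g):
    # S_prob: (H/8, W/8, N) class probabilities from Low-Res Net
    # B_g   : (H, W) binary geometric edge map from the depth frame
    # returns the binary semantic-aware edge map B of size (H, W)
    S_c_low = np.argmax(S_prob, axis=2).astype(np.int32)
    S_c_low = median_filter(S_c_low, size=3)     # remove isolated points
    # nearest-neighbour resize by a factor of 8
    S_c = np.repeat(np.repeat(S_c_low, 8, axis=0), 8, axis=1)
    # semantic edge: class differs from the right, bottom,
    # or bottom-right neighbour
    B_s = np.zeros_like(B_g, dtype=bool)
    B_s[:-1, :-1] = ((S_c[:-1, :-1] != S_c[:-1, 1:]) |
                     (S_c[:-1, :-1] != S_c[1:, :-1]) |
                     (S_c[:-1, :-1] != S_c[1:, 1:]))
    return B_g.astype(bool) | B_s
\end{verbatim}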
In Fig. \ref{fig:vsinseg}, the geometric edge map in (c) and the semantic-aware edge map in (d) show the benefit of our segmentation improvement scheme.
Edges between objects with poor geometric characteristics (i.e., wall and picture in the upper row, and desk and paper in the bottom row) are successfully incorporated into the edge map.
Similar to \cite{tateno2015real}, segments of the semantic-aware edge map $\mathcal{B}$ are properly extracted by means of a connected component algorithm and utilized for incrementally propagating and merging into the geometric 3D map with the estimated camera pose $\bm{T}_t$.
\subsection{Probability fusion}
\label{sec:pf}
Conventional methods assign class probabilities to each element that composes the 3D map \cite{hermans2014dense,mccormac2017semanticfusion,li2016semi,yang2017semantic,vineet2015incremental,kundu2014joint}.
Conversely, we propose to assign class probabilities to each segmentation label $l_i$ associated to each region constituting the geometric 3D map.
With our approach, each label $l_i$ is assigned a discrete probability distribution $\mathcal{P}(c|\mathcal{I}_{1...t})$ and a probability confidence $\Gamma$.
Both $\mathcal{P}(c|\mathcal{I}_{1...t})$ (over all classes) and $\Gamma$ are initialized to $0$.
Therefore, the space complexity for storing class probabilities is $O(N \cdot N_l)$, where $N_l$ denotes the number of segmentation labels, in contrast to conventional methods \cite{hermans2014dense,mccormac2017semanticfusion} which require $O(N \cdot N_s)$, where $N_s$ is the number of elements of the 3D map (e.g., the number of surfels). This is an important difference in terms of scalability since typically $N_s \gg N_l$.
This also appears as a more natural approach, since it could be argued that humans recognize objects by assigning semantic labels in a region-wise manner rather than element-wise.
In order to fuse the output of the Low-Res CNN properly with the 3D map, we update class probabilities assigned to each segmentation label $l_i$ using a confidence-based approach.
Firstly, we render the updated geometric 3D map onto the current image plane using the estimated camera pose $\bm{T}_t$ and the 3D position $\bm{x}(k)$ associated with each surfel $s_k$.
The rendered segmentation map $\mathcal{L}(\bm{u})$, where each component is associated to a segmentation label $l_i$, is generated with $\mathcal{L}(\pi(\bm{T}^{-1}_t \bm{x}(k))) = l_i(k)$ by denoting the segmentation label $l_i$ of a surfel $s_k$ with $l_i (k)$.
Here, $\mathcal{L}(\bm{u})$ takes the value $\phi$ at pixels $\bm{u}$ that are not filled with any label $l_i$.
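The rendering step can be sketched as follows; the pinhole intrinsics $K$ and the depth test needed to resolve occlusions are omitted assumptions, and the variable names are ours.
\begin{verbatim}
import numpy as np

def render_label_map(points_w, labels, T_t, K, H, W, phi=-1):
    # points_w: (S, 3) surfel positions x(k) in world coordinates
    # labels  : (S,)   segmentation labels l_i(k)
    # T_t     : (4, 4) estimated camera pose (camera-to-world)
    # K       : (3, 3) pinhole intrinsics (assumed known)
    # returns L of shape (H, W); pixels hit by no surfel keep phi
    T_inv = np.linalg.inv(T_t)                   # T_t^{-1}
    p = (T_inv[:3, :3] @ points_w.T + T_inv[:3, 3:4]).T
    z = p[:, 2]
    ok = z > 1e-6                                # surfels in front of the camera
    u = np.round(K[0, 0] * p[ok, 0] / z[ok] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * p[ok, 1] / z[ok] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    L = np.full((H, W), phi, dtype=np.int64)
    # no z-buffering here; in practice the nearest surfel should win
    L[v[inside], u[inside]] = labels[ok][inside]
    return L
\end{verbatim}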
Although the CNN-based semantic segmentation used in our framework is fast, its output $ \tilde{\mathcal{S}} $ has a low resolution.
Using the rendered segmentation map $\mathcal{L}$ whose size is $H \times W$ (i.e. the size of input image), detailed information is introduced to $ \tilde{\mathcal{S}} $ to update the class probabilities of each label $l_i$ with the following update scheme.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{expl.eps}
\caption{Example definition of set $\mathcal{C}_{\bm{v}}$ and $\mathcal{C}_{\bm{v}, l_i}$. \label{fig:expl}}
\end{center}
\end{figure}
\begin{table*}[t]
\begin{center}
\caption{Quantitative results for the NYUv2 dataset \cite{silberman2012indoor}. These results were captured immediately after processing the frame. All accuracy evaluations were performed at $320 \times 240$ resolution. We calculated these accuracies with the same strategies as \cite{mccormac2017semanticfusion}. Ours-Geometric-Only denotes the method of building the geometric 3D map without our segmentation improvement scheme. \label{tab:pre}}
\scalebox{1.05}{
\begin{tabular}{l|rrrrrrrrrrrrr|r|r} \bhline{1pt}
Method &
\multicolumn{1}{|c|}{ \cellcolor{bed}\color{white} \rotatebox{90}{bed}} &
\multicolumn{1}{c|}{ \cellcolor{books} \rotatebox{90}{books}} &
\multicolumn{1}{c|}{ \cellcolor{ceiling}\color{white} \rotatebox{90}{ceiling}} &
\multicolumn{1}{c|}{ \cellcolor{chair}\color{white} \rotatebox{90}{chair}} &
\multicolumn{1}{c|}{ \cellcolor{floor} \rotatebox{90}{floor}} &
\multicolumn{1}{c|}{ \cellcolor{furniture} \rotatebox{90}{furniture}} &
\multicolumn{1}{c|}{ \cellcolor{objects} \rotatebox{90}{objects}} &
\multicolumn{1}{c|}{ \cellcolor{painting}\color{white} \rotatebox{90}{painting}} &
\multicolumn{1}{c|}{ \cellcolor{sofa} \rotatebox{90}{sofa}} &
\multicolumn{1}{c|}{ \cellcolor{table} \rotatebox{90}{table}} &
\multicolumn{1}{c|}{ \cellcolor{tv} \rotatebox{90}{tv}} &
\multicolumn{1}{c|}{ \cellcolor{wall} \rotatebox{90}{wall}} &
\multicolumn{1}{c|}{ \cellcolor{window} \rotatebox{90}{window}} &
\multicolumn{1}{c|}{ {\bf \rotatebox{90}{class avg.\ }}} &
{\bf \rotatebox{90}{pixel avg.\ }} \\ \hline
Hermans et al. \cite{hermans2014dense} & 68.4 & 45.4 & {\bf 83.4} & 41.9 & 91.5 & 37.1 & 8.6 & 35.8 & 28.5 & 27.7 & 38.4 & 71.8 & 46.1 & 48.0 &54.3 \\
RGBD-SF \cite{mccormac2017semanticfusion} & 61.7 & {\bf 58.5} & 43.4 & 58.4 & 92.6 & 63.7 & {\bf 59.1} & 66.4 & 47.3 & 34.0 & 33.9 & 86.0 & 60.5 & 58.9 &67.5 \\
RGBD-SF-CRF \cite{mccormac2017semanticfusion} & 62.0 & 58.4 & 43.3 & 59.5 & {\bf 92.7} & 64.4 & 58.3 & 65.8 & 48.7 & 34.3 & 34.3 & 86.3 & 62.3 & 59.2 & 67.9 \\
Eigen-SF \cite{mccormac2017semanticfusion} & 47.8 & 50.8 & 79.0 & 73.3 & 90.5 & 62.8 & 46.7 & 64.5 & 45.8 & 46.0 & 70.7 & 88.5 & 55.2 & 63.2 & 69.3 \\
Eigen-SF-CRF \cite{mccormac2017semanticfusion} & 48.3 & 51.5 & 79.0 & {\bf 74.7} & 90.8 & 63.5 & 46.9 & 63.6 & 46.5 & 45.9 & {\bf 71.5} & {\bf 89.4} & 55.6 & {\bf 63.6} & 69.9 \\
Li et al. \cite{li2016semi} & 64.9 & 34.6 & 72.0 & 67.5 & 90.5 & 65.0 & 17.2 & {\bf 67.3} & 59.3 & 41.3 & 60.0 & 85.1 & 57.0 & 60.3 & 70.3 \\ \hline
Ours-Geometric-Only & {\bf 83.7} & 6.4 & 32.0 & 52.8 & 83.1 & 73.5 & 40.0 & 4.3 & 75.3 & {\bf 56.6} & 53.1 & 75.0 & 50.2 & 52.8 & 66.9 \\
Ours & {\bf 83.7} & 15.6 & 24.4 & 56.7 & 83.3 & {\bf 76.1} & 52.5 & 40.8 & {\bf 77.7} & 53.0 & 57.3 & 75.3 & {\bf 64.4} & 58.5 & {\bf 70.7} \\ \bhline{1pt}
\end{tabular}
}
\end{center}
\end{table*}
First, a set $\mathcal{C}_{\bm{v}}$ and a set $\mathcal{C}_{\bm{v}, l_i}$ are defined as
\begin{equation}
\label{eq:set_cv}
\begin{split}
\mathcal{C}_{\bm{v}=(s, t)} = \bigl\{ \bm{u} = (x, y) \in \mathbb{Z}^2 | \mathcal{L}(\bm{u}) \neq \phi \land \\ 8s \leq x < 8(s + 1) \land 8t \leq y < 8(t + 1) \bigr\}
\end{split}
\end{equation}
and
\begin{equation}
\label{eq:set_cvli}
\begin{split}
\mathcal{C}_{\bm{v}, l_i} = \bigl\{ \bm{u} \in \mathcal{C}_{\bm{v}} | \mathcal{L}(\bm{u}) = l_i \bigr\} \mbox{.}
\end{split}
\end{equation}
In words, $\mathcal{C}_{\bm{v}}$ is the set of coordinates within the region of $\mathcal{L}(\bm{u})$ corresponding to $\tilde{\mathcal{S}}(\bm{v})$ that carry a label, while $\mathcal{C}_{\bm{v}, l_i}$ is the subset of those coordinates to which the label $l_i$ is assigned (see Fig. \ref{fig:expl}).
When the set $\mathcal{U}_{\bm{v}}$ of labels $l_i$ which are included in the region of $\mathcal{L}(\bm{u})$ corresponding to $\tilde{\mathcal{S}}(\bm{v})$ is defined as
\begin{equation}
\label{eq:set_label}
\begin{split}
\mathcal{U}_{\bm{v} = (s, t)} = \bigl\{ l_i = \mathcal{L}(x, y) \in \mathbb{Z} | 8s \leq x < 8(s+1) \land \\ 8t \leq y < 8(t+1) \bigr\} \mbox{,}
\end{split}
\end{equation}
the class probabilities $\mathcal{P}(c|\mathcal{I}_{1...t})$ and the probability confidence $\Gamma$ of each element $l \in \mathcal{U}_{\bm{v}}$ are updated through
\begin{equation}
\label{eq:update}
\begin{split}
\mathcal{P}(c|\mathcal{I}_{1...t}) \gets \frac{1}{Z} \cdot \frac{\Gamma \mathcal{P}(c|\mathcal{I}_{1...t-1}) + {\gamma} \mathcal{P}(c|\mathcal{I}_{t})}{\Gamma + \gamma} \\
\Gamma \gets \Gamma + \gamma, \ \gamma = \frac{| \mathcal{C}_{\bm{v}, l} |}{| \mathcal{C}_{\bm{v}} |}
\end{split}
\end{equation}
which is applied to all class probabilities.
Here, the constant $Z$ normalizes the class probabilities so that they form a proper distribution.
With this scheme, the weight given to CNN predictions whose region crosses over two or more segments (e.g., wall and object in Fig. \ref{fig:expl}) is reduced.
By applying the same strategy to all $ \bm{v} $ constituting $ \tilde{\mathcal{S}}(\bm{v}) $, we update the class probabilities of all labels included in the rendered segmentation map $\mathcal{L}(\bm{u})$.
Therefore, letting $\tilde{H} \times \tilde{W}$ denote the size $H/8 \times W/8$ of $ \tilde{\mathcal{S}}(\bm{v}) $, the time complexity for updating class probabilities is $\mathcal{O}( \tilde{H} \tilde{W} (8 \times 8 + | \mathcal{U}_{\bm{v}} | N))$: for each $\bm{v}$, computing the sets $\mathcal{C}_{\bm{v}}$, $\mathcal{C}_{\bm{v}, l_i}$, and $\mathcal{U}_{\bm{v}}$ costs $8 \times 8$ operations, and updating the $N$ class probabilities assigned to each label in $\mathcal{U}_{\bm{v}}$ costs $|\mathcal{U}_{\bm{v}}|N$.
Note that conventional methods \cite{hermans2014dense,mccormac2017semanticfusion,yang2017semantic,vineet2015incremental} take $\mathcal{O}(HWN)$ for updating class probabilities of the 3D map with a frame-wise recognition.
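A direct transcription of the above update scheme is given below; the dictionary-based bookkeeping of $\mathcal{P}$ and $\Gamma$ per label is a choice made for the sketch and does not necessarily reflect the data structures of our actual implementation.
\begin{verbatim}
import numpy as np

def fuse_probabilities(P, Gamma, L, S_prob, phi=-1):
    # P     : dict  label l_i -> (N,) class probabilities P(c | I_{1..t-1})
    # Gamma : dict  label l_i -> probability confidence
    # L     : (H, W) rendered segmentation map, phi marks empty pixels
    # S_prob: (H/8, W/8, N) CNN output P(c | I_t)
    Ht, Wt, N = S_prob.shape
    for t in range(Ht):
        for s in range(Wt):
            block = L[8 * t:8 * (t + 1), 8 * s:8 * (s + 1)]
            C_v = block[block != phi]            # labelled coordinates C_v
            if C_v.size == 0:
                continue
            for l in np.unique(C_v):             # the set U_v
                P.setdefault(l, np.zeros(N))     # probabilities start at zero
                Gamma.setdefault(l, 0.0)
                gamma = np.count_nonzero(C_v == l) / C_v.size
                fused = (Gamma[l] * P[l] + gamma * S_prob[t, s]) \
                        / (Gamma[l] + gamma)
                P[l] = fused / fused.sum()       # normalisation by Z
                Gamma[l] += gamma
    return P, Gamma
\end{verbatim}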
\section{EXPERIMENTS}
\subsection{Dataset and implementation}
We evaluate our system on the common NYUv2 dataset \cite{silberman2012indoor}.
The dataset contains 206 test video sequences; for a fair comparison, we selected the 140 test sequences with a frame-rate above 2Hz, as done in \cite{mccormac2017semanticfusion}.
Since our Low-Res CNN outputs semantic segmentation with the size of $W/8 \times H/8$, we resized the ground truth $\mathcal{S}_{gt}$ to $\tilde{\mathcal{S}}_{gt}$ by filling $\tilde{\mathcal{S}}_{gt} (\bm{v})$ with the label that occupies most of the area of $\mathcal{S}_{gt} (\bm{u})$ corresponding to $\tilde{\mathcal{S}}_{gt} (\bm{v})$.
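A simple way to perform this majority-vote resizing is sketched below; the block size of $8$ follows from the $W/8 \times H/8$ output resolution, while the function name is ours.
\begin{verbatim}
import numpy as np

def downsample_gt(S_gt, factor=8):
    # Majority-vote downsampling of a ground-truth label map,
    # used to evaluate the H/8 x W/8 output of Low-Res Net.
    H, W = S_gt.shape
    Ht, Wt = H // factor, W // factor
    blocks = S_gt[:Ht * factor, :Wt * factor].reshape(Ht, factor, Wt, factor)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(Ht, Wt, factor * factor)
    S_gt_low = np.zeros((Ht, Wt), dtype=S_gt.dtype)
    for i in range(Ht):
        for j in range(Wt):
            vals, counts = np.unique(blocks[i, j], return_counts=True)
            S_gt_low[i, j] = vals[np.argmax(counts)]   # most frequent label
    return S_gt_low
\end{verbatim}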
After training our Low-Res Net with the MS COCO dataset \cite{lin2014microsoft} for 10 epochs, we fine-tuned the network with the training dataset of the NYUv2 dataset \cite{silberman2012indoor} for 50 epochs.
These evaluations are conducted on an Intel Core i7-5557U 3.1GHz CPU, GeForce GTX 1080 GPU, and 16GB RAM.
\subsection{Accuracy}
\label{sec:res}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{map.eps}
\caption{Qualitative results of our dense 3D semantic mapping on two scenes (left: {\it bedroom\_0112}, right: {\it dining\_room\_0017}). See Table \ref{tab:pre} for class colors. \label{fig:map}}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{compare.eps}
\caption{Qualitative results for the NYUv2 dataset \cite{silberman2012indoor}. As with Table \ref{tab:pre}, Ours-Geometric-Only denotes the method of building the geometric 3D map without our segmentation improvement scheme. See Table \ref{tab:pre} for class colors. \label{fig:comp}}
\end{center}
\end{figure*}
In this section, we experimentally demonstrate the accuracy of our method by quantitatively comparing it with other state-of-the-art methods in Table \ref{tab:pre}. Additionally, Fig. \ref{fig:map} and Fig. \ref{fig:comp} show qualitative results of our dense semantic mapping.
As shown in Table \ref{tab:pre}, our method achieves 0.8\% higher average pixel accuracy compared to SemanticFusion \cite{mccormac2017semanticfusion} and 0.4\% higher average pixel accuracy compared to Li et al. \cite{li2016semi}.
As can be noted, our method is particularly effective at outperforming other semantic mapping methods on large object categories.
For the class {\it bed}, there is a significant accuracy increase of 15.3\% over the state of the art, while for the classes {\it furniture} and {\it sofa} we achieve 11.1\% and 18.4\% improvements, respectively.
The reason why we achieve high accuracy especially on such categories is that our segmentation strongly relies on geometric information, and geometric boundaries between these categories and their surroundings (e.g., between {\it bed} and {\it wall}, or between {\it floor} and {\it furniture}) are often quite clear.
Fig. \ref{fig:comp} shows the benefit of the segmentation improvement from the viewpoint of accuracy compared with ``Ours-Geometric-Only'', where we build the geometric 3D map without our segmentation improvement scheme.
Particularly in the upper three rows, the {\it paintings} and the {\it window} on the {\it wall}, which are difficult to distinguish only with the geometric-based segmentation, are also segmented and annotated correctly.
The geometric 3D map in Fig. \ref{fig:map} also shows the validity of the segmentation improvement especially on the above-mentioned regions.
Example results of building a geometric 3D map with and without the segmentation improvement are shown in Fig. \ref{fig:vsinseg}: (e) the geometric 3D map of Tateno et al. \cite{tateno2015real} and (f) the geometric 3D map of our method.
Our method achieves a semantic-aware representation, in contrast to the geometric-only incremental segmentation of \cite{tateno2015real}.
This improved segmentation scheme allows our method to achieve a higher pixel-average accuracy than state-of-the-art methods.
As shown in Table \ref{tab:pre}, between ``Ours-Geometric-Only'' and ``Ours'' the accuracies of the classes {\it painting} and {\it window} improve significantly, by 36.5\% and 14.2\% respectively, and the pixel average improves by 3.8\%.
The lower two rows of Fig. \ref{fig:comp} show failure cases.
Since our method mainly extracts edges from the vertex and normal maps obtained from the incoming depth image, it struggles to segment distant objects, where depth values tend to be unstable (third row of Fig. \ref{fig:comp}), and to manage scenes where many small objects are lined up and vertices and normals are cluttered (fourth row of Fig. \ref{fig:comp}).
This is also the reason why small-object categories such as {\it books} and {\it objects} score low accuracies in Table \ref{tab:pre}.
We leave addressing these limitations to future work.
\subsection{Computational cost}
\label{sec:exp_cc}
\begin{table}[tb]
\begin{center}
\caption{Comparison of run-time performance. FQ denotes the frequency to perform a recognition of the input frame and update class probabilities of the 3D map. \label{tab:runtime}}
\scalebox{1.0}{
\begin{tabular}{lrrr} \bhline{1pt}
Method & 3D map & FQ & FPS \\ \hline
Hermans et al. \cite{hermans2014dense} & Dense & every 6 frames & 3.9 - 4.6 Hz \\
SemanticFusion \cite{mccormac2017semanticfusion} & Dense & every 10 frames & 25.3 Hz \\
Yang et al. \cite{yang2017semantic} & Dense & every frame & 2 Hz \\
Li et al. \cite{li2016semi} & Semi-Dense & every key-frame & 10 Hz \\ \hline
Ours & Dense & every frame & {\bf 30.9 Hz} \\ \bhline{1pt}
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[tb]
\begin{center}
\caption{Average time spent on each processing stage. The stages related to segmentation are in rows 2-4 and those related to recognition in rows 5-7. Note that the stages marked with * and those marked with ** can be processed simultaneously. \label{tab:ana}}
\scalebox{1.0}{
\begin{tabular}{lr} \bhline{1pt}
Component & Consumed time \\ \hline
SLAM * & 8.13 ms \\ \hline
Generate a binary geometric edge map $\mathcal{B}^g$ * & 1.04 ms\\
Segmentation improvement & 0.39 ms \\
Update the geometric 3D map & 8.74 ms \\ \hline
Low-Res CNN ** & 19.32 ms \\
Generate a rendered segmentation map $\mathcal{L}$ & 2.52 ms \\
Probability fusion & 1.37 ms \\ \hline
Total & 32.34 ms \\ \bhline{1pt}
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{space.pdf}
\caption{Comparison of memory usage for storing class probabilities with SemanticFusion \cite{mccormac2017semanticfusion} on the scene {\it bedroom\_0112} of the NYUv2 dataset \cite{silberman2012indoor}. \label{fig:space}}
\end{center}
\end{figure}
In this section, we demonstrate the advantage of reducing the computational complexity, i.e. one of the main contributions of this method.
We quantitatively compare the run-time performance with state-of-the-art approaches through Table \ref{tab:runtime}.
As shown in Table \ref{tab:runtime}, we achieved real-time performance (i.e., over 30Hz) while performing all processing components on every input frame.
As analyzed in the last paragraph of Sec. \ref{sec:pf}, the time complexity for updating class probabilities of the 3D map (i.e., Probability fusion) is $\mathcal{O}( \tilde{H} \tilde{W} (8 \times 8 + | \mathcal{U}_{\bm{v}} | N))$.
Considering that the average value of $| \mathcal{U}_{\bm{v}} |$ was $1.28$ throughout the experiments, the average time complexity $\mathcal{O}( \tilde{H} \tilde{W} (8 \times 8 + | \mathcal{U}_{\bm{v}} | N))$ effectively reduces to $\mathcal{O}( \tilde{H} \tilde{W} (8 \times 8 + N)) = \mathcal{O}(HW + \tilde{H} \tilde{W} N)$, in contrast to the $\mathcal{O}(HWN)$ of conventional methods \cite{hermans2014dense,mccormac2017semanticfusion,yang2017semantic,vineet2015incremental}.
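For a rough sense of scale, taking for illustration $N = 13$ classes as listed in Table \ref{tab:pre} and the resolution used in our experiments, a frame-wise update of the conventional form costs $HWN = 240 \times 320 \times 13 \approx 10^6$ operations, whereas $HW + \tilde{H} \tilde{W} N = 76800 + 1200 \times 13 = 92400$, i.e., roughly an order of magnitude fewer operations per frame.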
Therefore, as shown in Table \ref{tab:ana}, updating class probabilities of the 3D map only took 1.37ms on average, whereas SemanticFusion \cite{mccormac2017semanticfusion} spent 41.1ms for the processing.
Furthermore, the 2D recognition stage (i.e., the Low-Res CNN) only took 19.32ms while maintaining high final accuracy, as discussed in Section \ref{sec:res}.
Lastly, we discuss the reduction in space complexity shown in Fig. \ref{fig:space}.
As shown there, the memory usage of our method is significantly lower than that of SemanticFusion \cite{mccormac2017semanticfusion} over all frames.
The average memory usage of our method is 0.08\% of that of SemanticFusion \cite{mccormac2017semanticfusion}.
The reason for this significant improvement is that, as mentioned in Sec. \ref{sec:pf}, the space complexity of our method is $O(N \cdot N_l)$ whereas SemanticFusion takes $O(N \cdot N_s)$, where $N_l$ and $N_s$ were $1032$ and $844260$ at the end of the scene, respectively.
\section{CONCLUSION}
In this paper, we proposed an efficient semantic mapping approach by assigning class probabilities to each region of the geometric 3D map which is incrementally built up through a robust SLAM framework and a geometric-based incremental segmentation.
Through our experiments, we demonstrated that our approach notably reduces the computational complexity in terms of both time and space while achieving accuracy comparable to state-of-the-art approaches without any post-processing of the semantic 3D map.
Furthermore, we confirmed that our strategy improves the incremental segmentation framework from a geometric-only to a semantic-aware representation.
\section*{ACKNOWLEDGMENT}
This research was supported in part by a research assistantship of a Grant-in-Aid to the Program for Leading Graduate Schools ``Science for Development of Super Mature Society'' from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
\bibliographystyle{ieeetr}
\section{Introduction}
The theory of (dual) strongly relative Rickart objects developed in the companion paper \cite{CO1} is systematically
used in the present paper in order to study strongly relative regular objects and
(dual) strongly relative Baer objects in abelian categories. We also give applications to Grothendieck categories,
(graded) module categories and comodule categories.
For an introduction and motivation of the topic as well as for all needed concepts and notation the reader is referred to \cite{CO1}.
Usually the statements of our results have two parts, out of which we only prove the first one, the second one following by
the duality principle in abelian categories.
In Section 2 we define strongly relative regular objects in abelian categories.
Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $N$ is called
\emph{strongly $M$-regular} if $N$ is strongly $M$-Rickart and dual strongly $M$-Rickart.
Also, $N$ is called \emph{strongly self-regular} if $N$ is strongly $N$-regular. We show that $M$
is strongly self-regular if and only if ${\rm End}_{\mathcal{A}}(M)$ is a strongly regular ring
if and only if $M$ is self-regular and weak duo. Also, we prove that $N$ is strongly $M$-regular if and only if
$N$ is strongly $M$-Rickart and $M$ is direct $N$-injective if and only if $N$ is dual strongly $M$-Rickart and
direct $M$-projective.
In Section 3 we study (co)products of strongly relative regular objects. We prove that if
$M$, $N_1,\dots,N_n$ are objects of an abelian category $\mathcal{A}$,
then $\bigoplus_{i=1}^n N_i$ is strongly $M$-regular
if and only if $N_i$ is strongly $M$-regular for every $i\in \{1,\dots,n\}$.
We show that if $M=\bigoplus_{i\in I}M_i$ is a direct sum decomposition of
an object $M$ of an abelian category $\mathcal{A}$, then $M$ is strongly self-regular if and only if
$M_i$ is strongly self-regular for every $i\in I$, and ${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$.
We derive a corollary on the structure of strongly self-regular modules over a Dedekind domain.
In Section 4 we deal with the transfer of the strong relative regular property via functors.
We show various results involving fully faithful functors, adjoint pairs and adjoint triples of functors.
Let $(L,R)$ be an adjoint pair of covariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories such that $L$ is exact,
and let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Stat}(R)$.
Then we prove that the following are equivalent: $(i)$ $N$ is strongly $M$-regular in $\mathcal{B}$;
$(ii)$ $R(N)$ is strongly $R(M)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)$ is $M$-cyclic;
$(iii)$ $R(N)$ is strongly $R(M)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)\in {\rm Stat}(R)$.
In Section 5 we define (dual) strongly relative Baer objects in abelian categories.
Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $N$ is called
\emph{strongly $M$-Baer} if for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}
{\rm Ker}(f_i)$ is a fully invariant direct summand of $M$. Also, $N$ is called \emph{strongly self-Baer} if
$N$ is strongly $N$-Baer. We show that $M$ is strongly self-Baer if and only if $M$ is self-Baer and weak duo.
Also, if there exists the product $M^I$ for every set $I$, then $M$ is strongly self-Baer
if and only if $M$ is strongly self-Rickart and has the strong summand intersection property.
If there exists the product $N^I$ for every set $I$, then we prove that
$N$ is strongly $M$-Baer and $M$-$\mathcal{K}$-cononsingular if and only if
$M$ is strongly extending and $N$ is $M$-$\mathcal{K}$-nonsingular.
In Section 6 we study (co)products of (dual) strongly relative Baer objects. We prove that if
$M$, $N_1,\dots,N_n$ are objects of an abelian category $\mathcal{A}$,
then $\bigoplus_{i=1}^n N_i$ is strongly $M$-Baer
if and only if $N_i$ is strongly $M$-Baer for every $i\in \{1,\dots,n\}$.
We show that if $M=\bigoplus_{i\in I}M_i$ is a direct sum decomposition of
an object $M$ of an abelian category $\mathcal{A}$, then $M$ is strongly self-Baer if and only if
$M_i$ is strongly self-Baer for every $i\in I$, ${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$,
and $N=\bigoplus_{i\in I}(N\cap M_i)$ for every direct summand $N$ of $M$.
We derive a corollary on the structure of strongly self-Baer modules over a Dedekind domain.
In Section 7 we study the transfer of the strong relative Baer property via functors, similarly to Section 4.
For a right $R$-module $M$ with $S={\rm End}_R(M)$, we show that the following are equivalent:
$(i)$ $M$ is a strongly self-Baer right $R$-module;
$(ii)$ $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is $M$-cyclic;
$(iii)$ $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm Ker}(f_i)\in {\rm Stat}({\rm Hom}_R(M,-))$;
$(iv)$ $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is a locally split submodule;
$(v)$ $S$ is a strongly self-Baer right $S$-module and $M$ is quasi-retractable.
\section{Strongly relative regular objects}
In this section we begin to systematically apply our theory of (dual) strongly relative Rickart objects
to the study of some corresponding regular-type objects of an abelian category, called strongly relative regular objects.
Let us first recall the concept of relative regular object in a category.
\begin{defn} \cite[Definition~2.1]{DNTD} \rm Let $M$ and $N$ be objects of an arbitrary category $\mathcal{C}$. Then $N$ is called:
\begin{enumerate}
\item \emph{$M$-regular} if every morphism $f:M\to N$ in $\mathcal{C}$ has a generalized inverse,
in the sense that there exists a morphism $g:N\to M$ in $\mathcal{C}$ such that $fgf=f$.
\item \emph{self-regular} if $N$ is $N$-regular.
\end{enumerate}
\end{defn}
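For instance, in the category of finite-dimensional vector spaces over a field $K$, every object $N$ is $M$-regular for every $M$: given a linear map $f:M\to N$, choose decompositions $M={\rm Ker}(f)\oplus M'$ and $N={\rm Im}(f)\oplus N'$; then $f$ restricts to an isomorphism $M'\to {\rm Im}(f)$, and the morphism $g:N\to M$ which inverts this isomorphism on ${\rm Im}(f)$ and vanishes on $N'$ satisfies $fgf=f$. For real or complex matrices, the Moore--Penrose pseudoinverse provides one such generalized inverse.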
Relative regular objects of abelian categories are characterized as follows.
\begin{theo} \cite[Proposition~3.1]{DNTD}, \cite[Corollary~2.3]{CK}
Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then
$N$ is $M$-regular if and only if for every morphism $f:M\to N$, ${\rm Ker}(f)$ is a direct summand of $M$
and ${\rm Im}(f)$ is a direct summand of $N$, that is, $N$ is $M$-Rickart and dual $M$-Rickart.
\end{theo}
In a similar way, we introduce the following concept.
\begin{defn} \label{d:strreg} \rm Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $N$ is called:
\begin{enumerate}
\item \emph{strongly $M$-regular} if for every morphism $f:M\to N$, ${\rm Ker}(f)$ is a fully invariant direct summand of $M$ and
${\rm Im}(f)$ is a fully invariant direct summand of $N$, that is, $N$ is strongly $M$-Rickart and dual strongly $M$-Rickart.
\item \emph{strongly self-regular} if $N$ is strongly $N$-regular.
\end{enumerate}
\end{defn}
As relative regularity has its root in von Neumann regularity of rings, soon we shall
see that strong relative regularity is related to strong regularity of rings.
Recall that a ring $R$ is called \emph{strongly regular} if
for every $a\in R$ there exists an element $b\in R$ such that $a=a^2b$ (equivalently,
for every $a\in R$ there exists an element $b\in R$ such that $a=ba^2$) \cite{AK}.
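For instance, every Boolean ring is strongly regular, since each element $a$ satisfies $a=a^2=a^2\cdot 1$; more generally, every commutative von Neumann regular ring is strongly regular, because $a=aba$ can then be rewritten as $a=a^2b$.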
We also recall the following well known characterization of strongly regular rings.
\begin{prop} \label{p:strreg} A ring $R$ is strongly regular if and only if $R$ is von Neumann regular and abelian.
\end{prop}
\begin{prop} \label{p:Mstrreg} Let $M$ be an object of an abelian category $\mathcal{A}$.
Then $M$ is strongly self-regular if and only if its endomorphism ring ${\rm End}_{\mathcal{A}}(M)$ is strongly regular.
\end{prop}
\begin{proof} Assume that $M$ is strongly self-regular. Then $M$ is self-regular,
and so ${\rm End}_{\mathcal{A}}(M)$ is von Neumann regular.
Let $e\in {\rm End}_{\mathcal{A}}(M)$ be an idempotent and $h\in {\rm End}_{\mathcal{A}}(M)$. Since every idempotent splits,
there exists an object $K$ and morphisms $k:K\to M$ and $p:M\to K$ such that $kp=e$ and $pk=1_K$.
Since $k$ is a kernel and $M$ is strongly self-Rickart, $hek=k\alpha$ for some morphism $\alpha:K\to K$. It follows that
$ehek=ek\alpha=kpk\alpha=k\alpha=hek$, hence $ehe=ehekp=hekp=he$. Thus, $e$ is left semicentral.
Hence ${\rm End}_{\mathcal{A}}(M)$ is abelian. Then ${\rm End}_{\mathcal{A}}(M)$ is strongly regular by Proposition \ref{p:strreg}.
Conversely, assume that ${\rm End}_{\mathcal{A}}(M)$ is strongly regular.
Then ${\rm End}_{\mathcal{A}}(M)$ is von Neumann regular by Proposition \ref{p:strreg},
and so $M$ is self-regular. It follows that $M$ is self-Rickart and dual self-Rickart.
We claim that $M$ is weak duo. To this end, let $k:K\to M$ be a section and $p:M\to K$ the canonical projection.
Then $pk=1_K$ and $e=kp\in {\rm End}_{\mathcal{A}}(M)$ is idempotent. By Proposition \ref{p:strreg},
${\rm End}_{\mathcal{A}}(M)$ is abelian, and so $e$ is central.
It follows that $hkp=he=eh=kph$, hence $hk=kphk$. Thus, $k$ is a fully invariant section, and so $M$ is weak duo.
Finally, $M$ is strongly self-Rickart and dual strongly self-Rickart by \cite[Corollary~2.10]{CO1}.
Hence $M$ is strongly self-regular.
\end{proof}
\begin{coll} \label{c:wduo-reg} Let $M$ be an object of an abelian category $\mathcal{A}$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is strongly self-regular.
\item $M$ is self-regular and weak duo.
\item $M$ is self-regular and ${\rm End}_{\mathcal{A}}(M)$ is abelian.
\item For every endomorphism $f:M\to M$, $M={\rm Ker}(f)\oplus {\rm Im}(f)$.
\end{enumerate}
\end{coll}
\begin{proof} The equivalence of the first three conditions follows by \cite[Corollary~2.10]{CO1} and \cite[Proposition~2.14]{CO1}.
The equivalence $(i)\Leftrightarrow (iv)$ follows in the same way as \cite[Theorem~2.2]{LRR13},
whose proof works in abelian categories.
\end{proof}
\begin{ex} \label{e:strreg} \rm (a) Consider the ring $A=\mathbb{Z}_2^{\mathbb{N}}$, and its subrings
$T=\{(a_n)_{n\in \mathbb{N}}\mid a_n \textrm{ is eventually constant}\}$ and
$I=\{(a_n)_{n\in \mathbb{N}}\mid a_n=0 \textrm{ eventually}\}=\mathbb{Z}_2^{(\mathbb{N})}$.
Let $R=\begin{pmatrix}T&T/I\\0&T/I\end{pmatrix}$, the idempotent $e=\begin{pmatrix}(1,1,\dots)&I\\0&I\end{pmatrix}\in R$
and $M=eR=\begin{pmatrix}T&T/I\\0&0\end{pmatrix}$.
Then $M$ is a self-Rickart right $R$-module \cite[Example~2.18]{LRR11}
and a dual self-Rickart right $R$-module \cite[Example~4.1]{LRR10},
hence $M$ is self-regular. But ${\rm End}_R(M)=\begin{pmatrix}T&0\\0&0\end{pmatrix}$ is commutative,
hence $M$ is strongly self-regular by Corollary \ref{c:wduo-reg}.
(b) The full $2\times 2$ matrix ring $R=M_2(K)$ over a field $K$ is a self-regular right $R$-module,
which is not strongly self-regular.
\end{ex}
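To see concretely that $M_2(K)$ is not strongly regular, take the matrix unit $a=E_{12}$; then $a^2=0$, so $a=a^2b$ fails for every $b\in M_2(K)$, even though $M_2(K)$, being semisimple, is von Neumann regular. By Proposition \ref{p:Mstrreg}, the right $R$-module $R$ in (b) is therefore not strongly self-regular.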
\begin{theo} \label{t:epimono-reg} Let $r:M\to M'$ be an epimorphism and $s:N'\to N$ a monomorphism in an abelian category
$\mathcal{A}$. If $N$ is strongly $M$-regular, then $N'$ is strongly $M'$-regular.
\end{theo}
\begin{proof} This follows by \cite[Theorem~2.17]{CO1}.
\end{proof}
\begin{coll} \label{c:summand-reg} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$, $M'$ a direct summand
of $M$ and $N'$ a direct summand of $N$. If $N$ is strongly $M$-regular, then $N'$ is strongly $M'$-regular.
\end{coll}
\begin{proof} This follows by \cite[Corollary~2.18]{CO1}.
\end{proof}
Next we explore further relationships between our concepts and strong relative regularity.
Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. We recall some generalizations of injectivity and
projectivity that are useful in the study of relative regular objects. Following \cite[p.~220]{NZ}, $M$ is called
\emph{direct $N$-injective} if every subobject of $N$ isomorphic to a direct summand of $M$ is a direct summand of $N$.
Dually, $N$ is called \emph{direct $M$-projective} if for every factor object $M/K$ of $M$ isomorphic to a direct summand of $N$,
$K$ is a direct summand of $M$. For $M=N$ the above notions particularize to direct injectivity and direct projectivity
respectively. In this case, a direct injective object is also called a \emph{$C_2$-object}, while a direct projective
object is also called a \emph{$D_2$-object}.
\begin{theo} \label{t:char} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then the following are
equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-regular.
\item $N$ is strongly $M$-Rickart and $M$ is direct $N$-injective.
\item $N$ is dual strongly $M$-Rickart and direct $M$-projective.
\end{enumerate}
\end{theo}
\begin{proof} (i)$\Rightarrow$(ii) Assume that $N$ is strongly $M$-regular. Then $N$ is strongly $M$-Rickart by
Proposition~\ref{p:Mstrreg} and $N$ is $M$-regular. It follows that $M$ is direct $N$-injective by \cite[Theorem~5.3]{CK}.
(ii)$\Rightarrow$(i) Assume that $N$ is strongly $M$-Rickart and $M$ is direct $N$-injective.
Then $N$ is $M$-Rickart and $M$ is weak duo by \cite[Proposition~2.9]{CO1}. By \cite[Theorem~5.3]{CK}, $N$ is dual $M$-Rickart.
Then $N$ is dual strongly $M$-Rickart by \cite[Proposition~2.9]{CO1}. Hence $N$ is strongly $M$-regular.
The equivalence (i)$\Leftrightarrow$(iii) follows by duality.
\end{proof}
\begin{coll} Let $M$ be an object of an abelian category $\mathcal{A}$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is strongly self-regular.
\item $M$ is strongly self-Rickart and direct injective.
\item $M$ is dual strongly self-Rickart and direct projective.
\end{enumerate}
\end{coll}
Theorem~\ref{t:char} allows one to use properties of strongly relative Rickart objects and direct relative injectivity (or
equivalently, properties of dual strongly relative Rickart objects and direct relative projectivity) in order to deduce
properties of strongly relative regular objects. We shall show several results which underline this technique.
\begin{theo} \label{t:extensions-reg} Let $\mathcal{A}$ be an abelian category.
\begin{enumerate} \item Consider a short exact sequence $$0\to N_1\to N\to N_2\to 0$$ and an object $M$ of $\mathcal{A}$
such that $N_1$ and $N_2$ are strongly $M$-regular. Then $N$ is strongly $M$-regular.
\item Consider a short exact sequence $$0\to M_1\to M\to M_2\to 0$$ and an object $N$ of $\mathcal{A}$ such that $N$ is
dual strongly $M_1$-regular and dual strongly $M_2$-regular. Then $N$ is dual strongly $M$-regular.
\end{enumerate}
\end{theo}
\begin{proof} This follows by \cite[Theorem~2.19]{CO1} and \cite[Lemma~5.2]{CK}.
\end{proof}
Next we give some applications to graded rings and modules.
\begin{defn} \rm A $G$-graded ring $R=\bigoplus_{\sigma\in G}R_{\sigma}$ is called \emph{strongly gr-regular}
if for every $x_{\sigma}\in R_{\sigma}$ there exists $y\in R$ (which can be assumed to be in $R_{\sigma^{-1}}$)
such that $x_{\sigma}=x^2_{\sigma}y$.
\end{defn}
Now we may easily give an analogue of \cite[Theorem~5.2]{DNTD} for strongly gr-regular rings.
\begin{theo} \label{t:grring} Let $R=\bigoplus_{\sigma\in G}R_{\sigma}$ be a $G$-graded ring.
Then $R$ is strongly gr-regular if and only if
$R(\sigma)$ is a strongly $R$-regular graded right $R$-module for every $\sigma\in G$.
\end{theo}
\begin{proof} Assume first that $R$ is strongly gr-regular. Let $\sigma\in G$ and let $f:R\to R(\sigma)$ be a homomorphism
of graded right $R$-modules. For $x_{\sigma}=f(1)$ there exists $y\in R_{\sigma^{-1}}$ such that
$x_{\sigma}=x^2_{\sigma}y$. Then $g:R(\sigma)\to R$ defined by $g(r)=yr$ is a homomorphism of graded right $R$-modules,
and $f=f^2g$. Hence $R(\sigma)$ is a strongly $R$-regular graded right $R$-module.
Conversely, assume that $R(\sigma)$ is a strongly $R$-regular graded right $R$-module for every $\sigma\in G$.
Let $\sigma\in G$ and $x_{\sigma}\in R_{\sigma}$. Consider the homomorphism $f:R\to R(\sigma)$
of graded right $R$-modules defined by $f(r)=x_{\sigma}r$. Then there exists a homomorphism $g:R(\sigma)\to R$
of graded right $R$-modules such that $f=f^2g$. Then $x_{\sigma}=x^2_{\sigma}g(1)$, which shows that $R$ is strongly gr-regular.
\end{proof}
\begin{coll} Let $R=\bigoplus_{\sigma\in G}R_{\sigma}$ be a $G$-graded ring.
\begin{enumerate}[(i)]
\item If $R$ is strongly gr-regular, then $R_{\sigma}$ is strongly $R_e$-regular for every $\sigma\in G$.
In particular, $R$ is strongly $R_e$-regular.
\item If $R$ is strongly graded and $R_{\sigma}$ is strongly $R_e$-regular for every $\sigma\in G$,
then $R$ is strongly gr-regular.
\end{enumerate}
\end{coll}
\begin{proof} (i) Let $\sigma\in G$ and let $f:R_e\to R_{\sigma}$ be a homomorphism of right $R_e$-modules.
Let $x_{\sigma}=f(1)$. There exists $y\in R_{\sigma^{-1}}$ such that $x_{\sigma}=x^2_{\sigma}y$.
Then $g:R_{\sigma}\to R_e$ defined by $g(r)=yr$ is a homomorphism of right $R_e$-modules,
and $f=f^2g$. Hence $R_{\sigma}$ is strongly $R_e$-regular.
(ii) This follows by Theorem \ref{t:grring}.
\end{proof}
\section{(Co)products of strongly relative regular objects}
We show several results on (co)products of (dual) strongly relative regular objects in abelian categories.
They are naturally obtained by using the two main ways of deducing results of strongly relative regular objects from
the theory of (dual) strongly relative Rickart objects, namely Definition \ref{d:strreg} and Theorem \ref{t:char}.
\begin{theo} \label{t:dsreg} Let $\mathcal{A}$ be an abelian category.
\begin{enumerate}
\item Let $M$ and $N_1,\dots,N_n$ be objects of $\mathcal{A}$. Then $\bigoplus_{i=1}^n N_i$ is strongly $M$-regular
if and only if $N_i$ is strongly $M$-regular for every $i\in \{1,\dots,n\}$.
\item Let $M_1,\dots,M_n$ and $N$ be objects of $\mathcal{A}$. Then $N$ is strongly $\bigoplus_{i=1}^n M_i$-regular
if and only if $N$ is strongly $M_i$-regular for every $i\in \{1,\dots,n\}$.
\end{enumerate}
\end{theo}
\begin{proof} (1) By \cite[Theorem~3.1]{CO1}, Theorem \ref{t:char} and \cite[Lemma~5.2]{CK},
$\bigoplus_{i=1}^n N_i$ is strongly $M$-regular
if and only if [$\bigoplus_{i=1}^n N_i$ is strongly $M$-Rickart, and $M$ is direct $\bigoplus_{i=1}^n N_i$-injective]
if and only if [$N_i$ is strongly $M$-Rickart, and $M$ is direct $N_i$-injective for every $i\in \{1,\dots,n\}$]
if and only if $N_i$ is strongly $M$-regular for every $i\in \{1,\dots,n\}$.
\end{proof}
\begin{rem} \rm \cite[Example~3.3 (a)]{CO1} also shows that Theorem~\ref{t:dsreg} does not hold in general for arbitrary coproducts.
\end{rem}
As an application of Theorem~\ref{t:dsreg}, we give the following property involving coproducts of (dual) strongly self-Rickart
objects.
\begin{theo} \label{t:59} Let $M_1,\dots,M_n$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that $M_i$ is direct $M_j$-injective for every $i,j\in \{1,\dots,n\}$. Then $\bigoplus_{i=1}^n M_i$ is
strongly self-Rickart if and only if $M_i$ is strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$.
\item Assume that $M_i$ is direct $M_j$-projective for every $i,j\in \{1,\dots,n\}$. Then $\bigoplus_{i=1}^n M_i$ is
dual strongly self-Rickart if and only if $M_i$ is dual strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$.
\end{enumerate}
\end{theo}
\begin{proof} (1) If $\bigoplus_{i=1}^n M_i$ is strongly self-Rickart, then $M_i$ is strongly $M_j$-Rickart for every $i,j\in
\{1,\dots,n\}$ by \cite[Corollary~2.18]{CO1}.
Conversely, assume that $M_i$ is strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$. Since $M_i$ is direct $M_j$-injective
for every $i,j\in \{1,\dots,n\}$, it follows by Theorem~\ref{t:char} that $M_i$ is strongly $M_j$-regular for every
$i,j\in \{1,\dots,n\}$. Then $\bigoplus_{i=1}^n M_i$ is strongly self-regular by Theorem~\ref{t:dsreg}. Finally,
$\bigoplus_{i=1}^n M_i$ is strongly self-Rickart by Theorem~\ref{t:char}.
\end{proof}
We also have the following result on arbitrary (co)products under some finiteness conditions.
\begin{coll} \label{c:fg-reg} Let $\mathcal{A}$ be an abelian category.
\begin{enumerate} \item Assume that $\mathcal{A}$ has coproducts, let $M$ be a finitely generated object of
$\mathcal{A}$, and let $(N_i)_{i\in I}$ be a family of objects of $\mathcal{A}$. Then $\bigoplus_{i\in I}
N_i$ is strongly $M$-regular if and only if $N_i$ is strongly $M$-regular for every $i\in I$.
\item Assume that $\mathcal{A}$ has coproducts, let $N$ be a finitely cogenerated object of $\mathcal{A}$, and let
$(M_i)_{i\in I}$ be a family of objects of $\mathcal{A}$ such that $N$ is strongly $M_i$-regular for every $i\in I$. Then
$N$ is strongly $\prod_{i\in I} M_i$-regular if and only if $N$ is strongly $M_i$-regular for every $i\in I$.
\end{enumerate}
\end{coll}
\begin{proof} (1) By \cite[Corollary~3.2]{CO1}, Theorem~\ref{t:char} and the immediate analogue of
\cite[16.2]{Wis} for abelian categories, $\bigoplus_{i\in I} N_i$ is strongly $M$-regular
if and only if [$\bigoplus_{i\in I} N_i$ is strongly $M$-Rickart, and $M$ is direct $\bigoplus_{i\in I} N_i$-injective]
if and only if [$N_i$ is strongly $M$-Rickart, and $M$ is direct $N_i$-injective for every $i\in I$]
if and only if $N_i$ is strongly $M$-regular for every $i\in I$.
\end{proof}
Let $G$ be a finite group and let $R=\bigoplus_{\sigma\in G}R_{\sigma}$ be a $G$-graded ring.
Then one may associate to $R$ a ring $R\# G^*$, called \emph{smash product}, defined as follows \cite[Chapter 7]{Nasta-04}:
$R\# G^*=\bigoplus_{x\in G} Rp_x$, where the family $(p_x)_{x\in G}$ is a basis of $R\# G^*$, and
the multiplication is defined by $(r p_x)(s p_y)=rs_{xy^{-1}}p_y$ for every $r,s\in R$ and $x,y\in G$.
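For instance, if $G=\{e,g\}$ is the group of order two, then for $r,s\in R$ one computes $(rp_e)(sp_g)=rs_{eg^{-1}}p_g=rs_gp_g$ and $(rp_g)(sp_g)=rs_{gg^{-1}}p_g=rs_ep_g$, so only the homogeneous component of $s$ in degree $xy^{-1}$ contributes to the product.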
Now we can give an analogue of \cite[Corollary~5.4]{DNTD} for strongly gr-regular rings.
\begin{coll} Let $G$ be a finite group and let $R=\bigoplus_{\sigma\in G}R_{\sigma}$ be a $G$-graded ring.
Then $R$ is strongly gr-regular if and only if the smash product $R\#G^*$ is a strongly regular ring.
\end{coll}
\begin{proof} Assume first that $R$ is strongly gr-regular. Denote $U=\bigoplus_{\sigma\in G}R(\sigma)$.
Since $R$ is strongly gr-regular, $R(\sigma)$ is a strongly $R$-regular graded right $R$-module
for every $\sigma\in G$ by Theorem \ref{t:grring}. But $R$ is a finitely generated graded right $R$-module,
hence $U$ is a strongly $R$-regular graded right $R$-module by Corollary \ref{c:fg-reg}.
Consider the $\sigma$-suspension functor $T_{\sigma}:{\rm gr}(R)\to {\rm gr}(R)$ defined by
$T_{\sigma}(M)=M(\sigma)$ for every graded right $R$-module $M$. Since $T_{\sigma}$ is an isomorphism of categories
for every $\sigma\in G$, $U=U(\sigma)$ is a strongly $R(\sigma)$-regular graded right $R$-module.
Since $G$ is finite, it follows that $U$ is a strongly self-regular graded right $R$-module by Theorem \ref{t:dsreg}.
Then $R\#G^*\cong {\rm End}_{{\rm gr}(R)}(U)$ \cite[Theorem~7.2.1]{Nasta-04} is a strongly regular ring by Proposition \ref{p:Mstrreg}.
Conversely, assume that $R\#G^*$ is a strongly regular ring. Then ${\rm End}_{{\rm gr}(R)}(U)\cong R\#G^*$ is
a strongly regular ring, hence $U$ is a strongly self-regular graded right $R$-module by Proposition \ref{p:Mstrreg}.
Using again the $\sigma$-suspension functor $T_{\sigma}$, it follows that $U$ is a strongly $R$-regular graded right $R$-module.
Then $R(\sigma)$ is a strongly $R$-regular graded right $R$-module for every $\sigma\in G$ by
Corollary \ref{c:summand-reg}. Finally, $R$ is strongly gr-regular by Theorem \ref{t:grring}.
\end{proof}
The next result gives a necessary condition for an infinite (co)product of objects to be a strongly self-regular object.
\begin{prop} \label{p:relreg} Let $(M_i)_{i\in I}$ be a family of objects of an abelian category $\mathcal{A}$
such that $\prod_{i\in I} M_i$ or $\bigoplus_{i\in I} M_i$ is a strongly self-regular object.
Then $M_i$ is strongly $M_j$-regular for every $i,j\in I$.
\end{prop}
\begin{proof} This follows by \cite[Theorem~2.17]{CO1}, \cite[Proposition~3.4]{CO1} and the immediate analogues of
\cite[16.2, 18.2]{Wis} for abelian categories.
\end{proof}
\begin{theo} \label{t:homzero-reg}
Let $\mathcal{A}$ be an abelian category. Let $M=\bigoplus_{i\in I}M_i$ be a direct sum decomposition of
an object $M$ of $\mathcal{A}$. Then $M$ is strongly self-regular if and only if
$M_i$ is strongly self-regular for every $i\in I$, and ${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$.
\end{theo}
\begin{proof} This follows by \cite[Theorem~3.6]{CO1}.
\end{proof}
Finally, we deduce a result on the structure of strongly self-regular modules over a Dedekind domain,
and in particular, on the structure of strongly self-regular abelian groups.
\begin{coll} \label{c:dede-reg} Let $R$ be a Dedekind domain.
\begin{enumerate}[(i)]
\item A non-zero torsion $R$-module $M$ is strongly self-regular if and only if
$M\cong \bigoplus_{i\in I} R/P_i$ for some distinct maximal ideals $P_i$ of $R$.
\item A non-zero finitely generated $R$-module $M$ is strongly self-regular if and only if
$M\cong \bigoplus_{i=1}^k R/P_i$ for some distinct maximal ideals $P_i$ of $R$.
\item A non-zero injective $R$-module $M$ is strongly self-regular if and only if $M\cong K$.
\end{enumerate}
\end{coll}
\begin{proof} This follows by \cite[Corollary~3.8]{CO1}.
\end{proof}
\begin{coll} \label{c:abgr-reg}
\begin{enumerate}[(i)]
\item A non-zero torsion abelian group $G$ is strongly self-regular if and only if
$G\cong \bigoplus_{i\in I} \mathbb{Z}_{p_i}$ for some distinct primes $p_i$.
\item A non-zero finitely generated abelian group $G$ is strongly self-regular if and only if
$G\cong \bigoplus_{i=1}^k \mathbb{Z}_{p_i}$ for some distinct primes $p_i$.
\item A non-zero injective abelian group $G$ is strongly self-regular if and only if $G\cong \mathbb{Q}$.
\end{enumerate}
\end{coll}
\begin{proof} This follows by Corollary \ref{c:dede-reg}.
\end{proof}
\begin{ex} \rm The abelian group $\mathbb{Z}_p\oplus \mathbb{Z}_p$ (for some prime $p$) is self-regular,
but not strongly self-regular by \cite[Example~3.10]{CO1}.
\end{ex}
\section{Strongly relative regular objects: transfer via functors}
Our first result on the transfer of strongly relative regular property via (additive) functors
involves a fully faithful covariant functor.
\begin{theo} \label{t:ff-reg} Let $F:\mathcal{A}\to \mathcal{B}$ be an exact fully faithful covariant functor between abelian
categories, and let $M$ and $N$ be objects of $\mathcal{A}$. Then $N$ is strongly $M$-regular in $\mathcal{A}$
if and only if $F(N)$ is strongly $F(M)$-regular in $\mathcal{B}$.
\end{theo}
\begin{proof} This follows by \cite[Theorem~4.1]{CO1}.
\end{proof}
In the case of strong self-regularity, let us see that one can remove the exactness of the functor from Theorem \ref{t:ff-reg}.
\begin{theo} \label{t:ff-selfreg} Let $F:\mathcal{A}\to \mathcal{B}$ be a fully faithful covariant functor between abelian
categories, and let $M$ be an object of $\mathcal{A}$. Then $M$ is strongly self-regular in $\mathcal{A}$
if and only if $F(M)$ is strongly self-regular in $\mathcal{B}$.
\end{theo}
\begin{proof} Assume that $M$ is strongly self-regular in $\mathcal{A}$. Let $u:F(M)\to F(M)$ be a morphism in $\mathcal{B}$.
Since $F$ is full, we have $u=F(f)$ for some morphism $f:M\to M$ in $\mathcal{A}$.
Since $M$ is strongly self-regular, there exists a morphism $g:M\to M$ such that $f=f^2g$. Then $u=F(f)=F(f)^2F(g)=u^2F(g)$.
This shows that $F(M)$ is strongly self-regular.
Conversely, assume that $F(M)$ is strongly self-regular in $\mathcal{B}$. Let $f:M\to M$ be a morphism in $\mathcal{A}$.
Then there exists a morphism $v:F(M)\to F(M)$ such that $F(f)=F(f)^2v$.
Since $F$ is full, we have $v=F(g)$ for some morphism $g:M\to M$.
Then $F(f)=F(f)^2F(g)=F(f^2g)$, hence $f=f^2g$, because $F$ is faithful. This shows that $M$ is strongly self-regular.
\end{proof}
For Grothendieck categories we have the following corollary.
\begin{coll} \label{c:gp-reg} Let $\mathcal{A}$ be a Grothendieck category with a generator $U$, $R={\rm
End}_{\mathcal{A}}(U)$, $S={\rm Hom}_{\mathcal{A}}(U,-):\mathcal{A}\to {\rm Mod}(R)$, and let $M$ be an object
of $\mathcal{A}$. Then $M$ is a strongly self-regular object of $\mathcal{A}$ if and only if
$S(M)$ is a strongly self-regular right $R$-module.
\end{coll}
\begin{proof} By the Gabriel-Popescu Theorem \cite[Chapter~X, Theorem~4.1]{St}, $S$ is a fully faithful functor.
Then the conclusion follows by Theorem \ref{t:ff-selfreg}.
\end{proof}
It is well-known that von Neumann regularity is a Morita invariant property.
Next we show that this is still true for strong regularity.
\begin{coll} Strong regularity is a Morita invariant property.
\end{coll}
\begin{proof} Let $R$ and $S$ be two Morita equivalent rings via inverse equivalences $F:{\rm Mod}(R)\to {\rm Mod}(S)$ and
$G:{\rm Mod}(S)\to {\rm Mod}(R)$. We show that strong regularity of one of the rings implies strong regularity of the other.
Assume that $S$ is a strongly regular ring. Then $S$ is a strongly self-regular right $S$-module by Proposition \ref{p:strreg}.
If $Q=G(S)$, then it is well known that $Q$ is a progenerator in ${\rm Mod}(R)$ and $F(Q)\cong S$. Hence
$F(Q)$ is a strongly self-regular right $S$-module, and so $Q$ is a strongly self-regular right $R$-module
by Theorem \ref{t:ff-selfreg}. Since $Q$ is a generator in ${\rm Mod}(R)$, $R$ is isomorphic to a direct summand of
$Q^n$ for some natural number $n\geq 1$. Since $S$ is a strongly self-regular right $S$-module,
$Q=G(S)$ is a strongly self-regular right $R$-module by Theorem \ref{t:ff-selfreg}. It follows that
$Q^n$ is a strongly self-regular right $R$-module by Theorem \ref{t:dsreg}. Finally,
$R$ is a strongly self-regular right $R$-module by Corollary \ref{c:summand-reg},
and so $R$ is a strongly regular ring by Proposition \ref{p:strreg}.
\end{proof}
\begin{coll} \label{c:tripleff-reg} Let $(L,F,R)$ be an adjoint triple of covariant functors $F:\mathcal{A}\to \mathcal{B}$
and $L,R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Let $M$ and $N$ be objects of $\mathcal{A}$, and assume that $F$ is fully faithful. Then $N$ is strongly
$M$-regular in $\mathcal{A}$ if and only if $F(N)$ is strongly $F(M)$-regular in $\mathcal{B}$.
\item Let $M$ and $N$ be objects of $\mathcal{B}$, and assume that $L$ (or $R$) is fully faithful.
Then $M$ is strongly self-regular in $\mathcal{B}$ if and only if $R(M)$ is strongly self-regular in $\mathcal{A}$
if and only if $L(M)$ is strongly self-regular in $\mathcal{A}$.
\end{enumerate}
\end{coll}
\begin{proof} This follows by Theorem \ref{t:ff-selfreg}.
\end{proof}
\begin{coll} Let $\varphi:R\to S$ be a ring epimorphism, and let $M$ and $N$ be right $S$-modules. Then $N$ is a
strongly $M$-regular right $S$-module if and only if $N$ is a strongly $M$-regular right $R$-module.
\end{coll}
\begin{proof} Since $\varphi:R\to S$ is a ring epimorphism, the restriction of scalars $\varphi_*:{\rm
Mod}(S)\to {\rm Mod}(R)$ is an exact fully faithful functor \cite[Chapter~XI, Proposition~1.2]{St}. Then use Theorem \ref{t:ff-reg}.
\end{proof}
\begin{coll} Let $R$ be a strongly $G$-graded ring, and let $M$ and $N$ be right $R_e$-modules.
Then $N$ is a strongly $M$-regular right $R_e$-module if and only if
${\rm Ind}(N)$ is a strongly ${\rm Ind}(M)$-regular graded right $R$-module.
\end{coll}
\begin{proof} Since $R$ is a strongly $G$-graded ring, the functors ${\rm Ind},{\rm Coind}:{\rm Mod}(R_e)\to {\rm gr}(R)$
are functorially isomorphic. Now use \cite[Corollary~4.5]{CO1}. Alternatively, since $R$ is a strongly $G$-graded ring,
${\rm Ind},{\rm Coind}:{\rm Mod}(R_e)\to {\rm gr}(R)$ are equivalences of categories \cite[Theorem~3.1.1]{Nasta-04}
and use Theorem \ref{t:ff-reg}.
\end{proof}
\begin{coll} \label{c:rc-reg} Let $\mathcal{A}$ be an abelian category, $\mathcal{C}$ an abelian full subcategory of
$\mathcal{A}$ and $i:\mathcal{C}\to \mathcal{A}$ the inclusion functor.
\begin{enumerate}
\item Assume that $\mathcal{C}$ is a coreflective subcategory of $\mathcal{A}$.
Let $M$ and $N$ be objects of $\mathcal{C}$. Then $N$ is strongly $M$-regular in $\mathcal{C}$ if and only if
$i(N)$ is strongly $i(M)$-regular in $\mathcal{A}$.
\item Assume that $\mathcal{C}$ is a reflective subcategory of $\mathcal{A}$.
Let $M$ be an object of $\mathcal{C}$. Then $M$ is strongly self-regular in $\mathcal{C}$
if and only if $i(M)$ is strongly self-regular in $\mathcal{A}$.
\end{enumerate}
\end{coll}
\begin{proof} (1) Note that $i$ is exact fully faithful and use Theorem \ref{t:ff-reg}.
(2) Note that $i$ is fully faithful and use Theorem \ref{t:ff-selfreg}.
\end{proof}
\begin{coll} \label{c:com1-reg} Let $C$ be a coalgebra over a field, and
let $M$ and $N$ be left $C$-comodules. Then $N$ is strongly $M$-regular if and only if
$N$ is strongly $M$-regular as a right $C^*$-module.
\end{coll}
\begin{proof} Note that ${}^C\mathcal{M}$ is a coreflective abelian subcategory of ${\rm Mod}(C^*)$
and use Corollary \ref{c:rc-reg}.
\end{proof}
In order to discuss the transfer of the strong relative regular property to endomorphism
rings, we establish some general results involving adjoint functors.
First, we need the following property on the transfer of direct relative injectivity (projectivity).
\begin{prop} \label{p:transferdip} Let $(L,R)$ be an adjoint pair of covariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Assume that $L$ is exact. Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Stat}(R)$.
Then $M$ is direct $N$-injective in $\mathcal{B}$ if and only if $R(M)$ is direct $R(N)$-injective in $\mathcal{A}$.
\item Assume that $R$ is exact. Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Adst}(R)$.
Then $N$ is direct $M$-projective in $\mathcal{A}$ if and only if $L(N)$ is direct $L(M)$-projective in $\mathcal{B}$.
\end{enumerate}
\end{prop}
\begin{proof} (1) Let $\varepsilon:LR\to 1_{\mathcal{B}}$ and $\eta:1_{\mathcal{A}}\to RL$ be the counit and the unit of
adjunction respectively.
Assume that $M$ is direct $N$-injective. Let $\alpha:P\to R(N)$ be a monomorphism with $P$ isomorphic to a
direct summand of $R(M)$, and let $\beta:P\to R(M)$ be a morphism. Since $L$ is left exact and $M,N\in {\rm Stat}(R)$,
$\varepsilon_NL(\alpha):L(P)\to N$ is a monomorphism, and $L(P)$ is isomorphic to a direct summand of $M$.
By the direct $N$-injectivity of $M$, there exists a morphism $h:M\to N$ such that
$h\varepsilon_NL(\alpha)=\varepsilon_ML(\beta)$ \cite[Lemma~5.1]{CK}.
Let $\gamma=R(h):R(N)\to R(M)$. Since $(L,R)$ is an adjoint pair,
we have $R(\varepsilon_N)\eta_{R(N)}=1_{R(N)}$ and $\eta$ is a natural transformation. It follows that:
\begin{align*} \gamma\alpha&=R(h)\alpha=R(h)R(\varepsilon_N)\eta_{R(N)}\alpha=R(h)R(\varepsilon_N)RL(\alpha)\eta_P \\
& =R(h\varepsilon_NL(\alpha))\eta_P=R(\varepsilon_ML(\beta))\eta_P=
R(\varepsilon_M)RL(\beta)\eta_P=R(\varepsilon_M)\eta_{R(M)}\beta=\beta.
\end{align*}
This shows that $R(M)$ is direct $R(N)$-injective.
Conversely, assume that $R(M)$ is direct $R(N)$-injective. Let $\alpha:P\to N$ be a monomorphism with $P$ isomorphic to a
direct summand of $M$, and let $\beta:P\to M$ be a morphism. Since $R$ is left exact, $R(\alpha):R(P)\to R(N)$ is a
monomorphism, and $R(P)$ is clearly isomorphic to a direct summand of $R(M)$. By the direct $R(N)$-injectivity of $R(M)$,
there exists a morphism $h:R(N)\to R(M)$ such that $hR(\alpha)=R(\beta)$ \cite[Lemma~5.1]{CK}.
Since $P$ is isomorphic to a direct summand of $M$, there exists a split epimorphism $p:M\to P$. Since $\varepsilon_PLR(p)=p\varepsilon_M$ is a split epimorphism,
so is $\varepsilon_P$. Hence there exists a morphism $r:P\to LR(P)$ such that $\varepsilon_Pr=1_P$.
Since $(L,R)$ is an adjoint pair, $\varepsilon$ is a natural transformation.
Also, $\varepsilon_N$ is an isomorphism, because $N\in {\rm Stat}(R)$.
Let $\gamma=\varepsilon_ML(h)\varepsilon_N^{-1}:N\to M$. It follows that:
\begin{align*} \gamma\alpha&=\gamma\alpha\varepsilon_Pr=\varepsilon_ML(h)\varepsilon_N^{-1}\alpha\varepsilon_Pr=
\varepsilon_ML(h)\varepsilon_N^{-1}\varepsilon_NLR(\alpha)r \\
&=\varepsilon_ML(hR(\alpha))r=\varepsilon_MLR(\beta)r=\beta\varepsilon_Pr=\beta.
\end{align*}
This shows that $M$ is direct $N$-injective.
\end{proof}
\begin{theo} \label{t:equiv-reg} Let $(L,R)$ be an adjoint pair of covariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Assume that $L$ is exact. Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Stat}(R)$.
Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-regular in $\mathcal{B}$.
\item $R(N)$ is strongly $R(M)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)$ is $M$-cyclic.
\item $R(N)$ is strongly $R(M)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)\in {\rm Stat}(R)$.
\end{enumerate}
\item Assume that $R$ is exact. Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Adst}(R)$.
Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-regular in $\mathcal{A}$.
\item $L(N)$ is strongly $L(M)$-regular in $\mathcal{B}$ and for every morphism $f:M\to N$, ${\rm Coker}(f)$ is
$N$-cocyclic.
\item $L(N)$ is strongly $L(M)$-regular in $\mathcal{B}$ and for every morphism $f:M\to N$, ${\rm Coker}(f)\in {\rm
Adst}(R)$.
\end{enumerate}
\end{enumerate}
\end{theo}
\begin{proof} This follows by \cite[Theorem~4.9]{CO1}, Theorem \ref{t:char} and Proposition \ref{p:transferdip}.
\end{proof}
Now we can extend Corollary \ref{c:gp-reg} from self-regularity to relative regularity.
\begin{coll} \label{c:gp-reg2} Let $\mathcal{A}$ be a Grothendieck category with a generator $U$, $R={\rm
End}_{\mathcal{A}}(U)$, $S={\rm Hom}_{\mathcal{A}}(U,-):\mathcal{A}\to {\rm Mod}(R)$, and let $M$ and $N$ be objects
of $\mathcal{A}$. Then $N$ is a strongly $M$-regular object of $\mathcal{A}$ if and only if
$S(N)$ is a strongly $S(M)$-regular right $R$-module and for every morphism $f:M\to N$, ${\rm Ker}(f)$ is $M$-cyclic.
\end{coll}
\begin{proof} By the Gabriel-Popescu Theorem \cite[Chapter~X, Theorem~4.1]{St}, $S$ is a fully faithful functor,
hence $M\in {\rm Stat}(S)$ for every object $M$ of $\mathcal{A}$. Also, $S$ has an exact left adjoint
$T:{\rm Mod}(R)\to \mathcal{A}$. Then the conclusion follows by Theorem \ref{t:equiv-reg}.
\end{proof}
For contravariant functors we have the following results.
\begin{prop} \label{p:transferdip2} Let $(L,R)$ be an adjoint pair of contravariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Assume that $(L,R)$ is left adjoint and $L$ is exact.
Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Refl}(R)$.
Then $M$ is direct $N$-injective in $\mathcal{B}$ if and only if $R(M)$ is direct $R(N)$-projective in $\mathcal{A}$.
\item Assume that $(L,R)$ is right adjoint and $R$ is exact.
Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Refl}(L)$.
Then $N$ is direct $M$-projective in $\mathcal{A}$ if and only if $L(N)$ is direct $L(M)$-injective in $\mathcal{B}$.
\end{enumerate}
\end{prop}
\begin{theo} \label{t:dual-reg} Let $(L,R)$ be a pair of contravariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Assume that $(L,R)$ is left adjoint and $L$ is exact.
Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Refl}(R)$.
Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-regular in $\mathcal{B}$.
\item $R(M)$ is strongly $R(N)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)$ is $M$-cyclic.
\item $R(M)$ is strongly $R(N)$-regular in $\mathcal{A}$ and for every morphism $f:M\to N$, ${\rm Ker}(f)\in {\rm Refl}(R)$.
\end{enumerate}
\item Assume that $(L,R)$ is right adjoint and $R$ is exact.
Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Refl}(L)$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-regular in $\mathcal{A}$.
\item $L(M)$ is strongly $L(N)$-regular in $\mathcal{B}$ and for every morphism $f:M\to N$, ${\rm Coker}(f)$ is $N$-cocyclic.
\item $L(M)$ is strongly $L(N)$-regular in $\mathcal{B}$ and for every morphism $f:M\to N$, ${\rm Coker}(f)\in {\rm Refl}(L)$.
\end{enumerate}
\end{enumerate}
\end{theo}
\begin{proof} This follows by \cite[Theorem~4.10]{CO1}, Theorem \ref{t:char} and Proposition \ref{p:transferdip2}.
\end{proof}
Next we deduce the transfer of the strong self-regular property to endomorphism rings of (graded) modules.
The following theorem generalizes \cite[Proposition~4.3]{LRR13}.
\begin{theo} \label{t:end-reg} Let $M$ be a right $R$-module, and let $S={\rm End}_R(M)$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a strongly self-regular right $R$-module.
\item $S$ is a strongly self-regular right $S$-module.
\item $S$ is a strongly self-regular left $S$-module.
\end{enumerate}
\end{theo}
\begin{proof} This follows by Proposition \ref{p:Mstrreg}. Alternatively,
if $S$ is a strongly self-regular right (left) $S$-module, then $S$ is a strongly regular ring.
It follows that $S$ is a unit regular ring, that is, for every $f\in S$, there exists an automorphism $g\in S$
such that $f=fgf$ \cite{Ehrlich}. By \cite[Theorem~1]{Ehrlich}, $S$ is unit regular if and only if
$S$ is von Neumann regular and for every $f\in S$, one has ${\rm Ker}(f)\cong {\rm Coker}(f)$. Hence for every $f\in S$,
${\rm Ker}(f)$ is $M$-cyclic and ${\rm Coker}(f)$ is $M$-cocyclic. Now conclude by \cite[Theorem~4.12]{CO1}.
\end{proof}
\begin{coll} \label{c:endgr-reg} Let $M$ be a graded right $R$-module, and let $S={\rm END}_R(M)$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a strongly self-regular graded right $R$-module.
\item $S$ is a strongly self-regular graded right $S$-module.
\item $S$ is a strongly self-regular graded left $S$-module.
\end{enumerate}
\end{coll}
\begin{proof} This follows in a similar way as the alternative proof of Theorem \ref{t:end-reg} by using \cite[Corollary~4.13]{CO1}.
\end{proof}
\section{(Dual) strongly relative Baer objects}
In this section we begin to systematically apply our theory of (dual) strongly relative Rickart objects
to the study of some corresponding Baer-type objects of an abelian category, called (dual) strongly relative Baer objects.
Let us first recall the following definition.
\begin{defn} \cite[Definition~6.1]{CK} \rm Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $N$ is called:
\begin{enumerate}
\item \emph{$M$-Baer} if for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}
{\rm Ker}(f_i)$ is a direct summand of $M$.
\item \emph{dual $M$-Baer} if for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in
I} {\rm Im}(f_i)$ is a direct summand of $N$.
\item \emph{self-Baer} if $N$ is $N$-Baer.
\item \emph{dual self-Baer} if $N$ is dual $N$-Baer.
\end{enumerate}
\end{defn}
Now we introduce and study a particular instance of both the (dual) strongly relative Rickart property and
the (dual) relative Baer property.
\begin{defn} \rm Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $N$ is called:
\begin{enumerate}
\item \emph{strongly $M$-Baer} if for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}
{\rm Ker}(f_i)$ is a fully invariant direct summand of $M$.
\item \emph{dual strongly $M$-Baer} if for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in
I} {\rm Im}(f_i)$ is a fully invariant direct summand of $N$.
\item \emph{strongly self-Baer} if $N$ is strongly $N$-Baer.
\item \emph{dual strongly self-Baer} if $N$ is dual strongly $N$-Baer.
\end{enumerate}
\end{defn}
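Let us note that, by considering one-element families of morphisms, every strongly $M$-Baer object is both $M$-Baer and strongly $M$-Rickart, and every dual strongly $M$-Baer object is both dual $M$-Baer and dual strongly $M$-Rickart; these immediate observations will be used freely in what follows.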
\begin{ex} \rm Consider the right $R$-module $M$ from Example \ref{e:strreg}.
We have seen that $M$ is both strongly self-Rickart and dual strongly self-Rickart.
But $M$ is neither self-Baer \cite[Example~2.18]{LRR11}, nor dual self-Baer \cite[Example~4.1]{LRR10},
and so it is neither strongly self-Baer, nor dual strongly self-Baer.
\end{ex}
\begin{lemm} \label{l:Bwduo} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that every direct summand of $M$ is isomorphic to a subobject of $N$.
Then $N$ is strongly $M$-Baer if and only if $N$ is $M$-Baer and $M$ is weak duo.
\item Assume that every direct summand of $N$ is isomorphic to a factor object of $M$.
Then $N$ is dual strongly $M$-Baer if and only if $N$ is dual $M$-Baer and $N$ is weak duo.
\end{enumerate}
\end{lemm}
\begin{proof} (1) Assume that $N$ is strongly $M$-Baer. Clearly, $N$ is $M$-Baer. Also, $N$ is strongly $M$-Rickart.
Then $M$ is weak duo by \cite[Proposition~2.9]{CO1}. The converse is clear.
\end{proof}
We immediately have the following corollaries.
\begin{coll} \label{c:Bwduo} Let $M$ be an object of an abelian category $\mathcal{A}$. Then:
\begin{enumerate}
\item $M$ is strongly self-Baer if and only if $M$ is self-Baer and weak duo.
\item $M$ is dual strongly self-Baer if and only if $M$ is dual self-Baer and weak duo.
\end{enumerate}
\end{coll}
\begin{coll} \label{c:Bindec} Let $M$ be an indecomposable object of an abelian category $\mathcal{A}$. Then:
\begin{enumerate}
\item $M$ is strongly self-Baer if and only if $M$ is self-Baer.
\item $M$ is dual strongly self-Baer if and only if $M$ is dual self-Baer.
\end{enumerate}
\end{coll}
\begin{prop} \label{p:Bstrring} Let $M$ be an object of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item $M$ is strongly self-Baer if and only if $M$ is self-Baer and ${\rm End}_{\mathcal{A}}(M)$ is abelian.
\item $M$ is dual strongly self-Baer if and only if $M$ is dual self-Baer and ${\rm End}_{\mathcal{A}}(M)$ is abelian.
\end{enumerate}
\end{prop}
\begin{proof} (1) If $M$ is strongly self-Baer, then $M$ is clearly self-Baer and ${\rm End}_{\mathcal{A}}(M)$ is abelian
by \cite[Proposition~2.14]{CO1}.
Conversely, assume that $M$ is self-Baer and ${\rm End}_{\mathcal{A}}(M)$ is abelian.
Then $M$ is self-Rickart and ${\rm End}_{\mathcal{A}}(M)$ is abelian, and so
$M$ is strongly self-Rickart by \cite[Proposition~2.14]{CO1}. Then $M$ is weak duo by \cite[Corollary~2.10]{CO1}.
Finally, $M$ is strongly self-Baer by Corollary \ref{c:Bwduo}.
\end{proof}
We also have the following connection between (dual) strongly relative Baer objects and (dual) strongly relative Rickart objects.
\begin{lemm} \label{l:BR} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $N^I$ for every set $I$. Then $N$ is strongly $M$-Baer if and only if $N^I$ is
strongly $M$-Rickart for every set $I$.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then $N$ is dual strongly $M$-Baer if and only
if $N$ is dual strongly $M^{(I)}$-Rickart for every set $I$.
\end{enumerate}
\end{lemm}
\begin{proof} (1) This is immediate by using \cite[Lemma~3.11]{CO}.
\end{proof}
\begin{coll} \label{c:epimonobaer} Let $r:M\to M'$ be an epimorphism and $s:N'\to N$ a monomorphism in an abelian
category $\mathcal{A}$.
\begin{enumerate}
\item Assume that $\mathcal{A}$ has products. If $N$ is strongly $M$-Baer, then $N'$ is strongly $M'$-Baer.
\item Assume that $\mathcal{A}$ has coproducts. If $N$ is dual strongly $M$-Baer, then $N'$ is dual strongly $M'$-Baer.
\end{enumerate}
\end{coll}
\begin{proof} (1) If $N$ is strongly $M$-Baer, then $N^I$ is strongly $M$-Rickart for every set $I$ by Lemma~\ref{l:BR}.
Then $N'^I$ is strongly $M'$-Rickart for every set $I$ by \cite[Theorem~2.17]{CO1}.
Hence $N'$ is strongly $M'$-Baer by Lemma~\ref{l:BR}.
\end{proof}
\begin{coll} \label{c:dsbaer} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$, $M'$ a direct summand
of $M$ and $N'$ a direct summand of $N$.
(1) If $N$ is strongly $M$-Baer, then $N'$ is strongly $M'$-Baer.
(2) If $N$ is dual strongly $M$-Baer, then $N'$ is dual strongly $M'$-Baer.
\end{coll}
\begin{coll} \label{c:extensionsbaer} Let $\mathcal{A}$ be an abelian category.
\begin{enumerate} \item Consider a short exact sequence $$0\to N_1\to N\to N_2\to 0$$ and an object $M$ in $\mathcal{A}$
such that $N_1$ and $N_2$ are strongly $M$-Baer. Then $N$ is strongly $M$-Baer.
\item Consider a short exact sequence $$0\to M_1\to M\to M_2\to 0$$ and an object $N$ in $\mathcal{A}$ such that $N$ is
dual strongly $M_1$-Baer and dual strongly $M_2$-Baer. Then $N$ is dual strongly $M$-Baer.
\end{enumerate}
\end{coll}
\begin{proof} (1) Since $N_1$ and $N_2$ are strongly $M$-Baer, $N_1^I$ and $N_2^I$ are strongly $M$-Rickart for every set $I$.
We have an induced short exact sequence $0\to N_1^I\to N^I\to N_2^I\to 0$ for every set $I$.
Then $N^I$ is strongly $M$-Rickart for every set $I$ by \cite[Theorem~2.19]{CO1}. Hence $N$ is strongly $M$-Baer by Lemma~\ref{l:BR}.
\end{proof}
(Dual) strongly relative Baer objects and (dual) strongly relative Rickart objects are also related as follows.
\begin{theo} \label{t:SS} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $N^I$ for every set $I$, and every direct summand of $M$ is isomorphic to
a subobject of $N$. Then $N$ is strongly $M$-Baer if and only if $N$ is strongly $M$-Rickart and $M$ has SSIP.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$, and every direct summand of $N$ is isomorphic
to a factor object of $M$. Then $N$ is dual strongly $M$-Baer if and only if $N$ is dual strongly $M$-Rickart and
$N$ has SSSP.
\end{enumerate}
\end{theo}
\begin{proof} (1) Assume that $N$ is strongly $M$-Baer. Then $N$ is strongly $M$-Rickart.
Also, $N$ is $M$-Baer, and so $M$ has SSIP by \cite[Theorem~6.3]{CK}.
Conversely, assume that $N$ is strongly $M$-Rickart and $M$ has SSIP. Then $N$ is $M$-Rickart and $M$ is weak duo
by \cite[Proposition~2.9]{CO1}. It follows that $N$ is $M$-Baer by \cite[Theorem~6.3]{CK}. Finally,
$N$ is strongly $M$-Baer by Lemma \ref{l:Bwduo}.
\end{proof}
\begin{coll} \label{c:BR} Let $M$ be an object of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $M^I$ for every set $I$. Then $M$ is strongly self-Baer if and only if $M$ is
strongly self-Rickart and has SSIP. In particular, an indecomposable object is strongly self-Baer
if and only if it is strongly self-Rickart.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then $M$ is dual strongly self-Baer if and only if $M$
is dual strongly self-Rickart and has SSSP. In particular, an indecomposable object is dual strongly self-Baer
if and only if it is dual strongly self-Rickart.
\end{enumerate}
\end{coll}
Recall that a {\it (monotone) Galois connection} between two partially ordered sets $(A,\leq)$ and $(B,\leq)$ consists
of a pair $(\alpha, \beta)$ of two order-preserving functions $\alpha:A\to B$ and $\beta:B\to A$ such that for every
$a\in A$ and $b\in B$, we have $\alpha(a)\leq b \Leftrightarrow a\leq \beta(b)$ (e.g., see \cite{Erne}).
Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$, and denote $U={\rm Hom}_{\mathcal{A}}(M,N)$.
We introduce and use the following notation (see \cite{AN}).
For every subobject $X$ of $M$ and every subobject $Z$ of $U$, we denote:
\[l_U(X)=\{f\in U\mid X\subseteq {\rm Ker}(f)\}, \quad r_M(Z)=\bigcap_{f\in Z}{\rm Ker}(f).\]
For every subobject $Y$ of $N$ and every subobject $Z$ of $U$, we denote:
\[l'_U(Y)=\{f\in U\mid {\rm Im}(f)\subseteq Y\}, \quad r'_N(Z)=\sum_{f\in Z}{\rm Im}(f).\]
For a lattice $A$ we denote by $A^{\rm op}$ its dual lattice. One may extend \cite[Propositions~3.1, 3.2 and 3.4]{AN} to
the following theorem in abelian categories, which will be freely used.
\begin{theo} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$. Then $(r_M,l_U)$ is a Galois
connection between the subobject lattices $L(U)$ and $L(M)^{\rm op}$, and $(r'_N,l'_U)$ is a Galois connection between
the subobject lattices $L(U)$ and $L(N)$.
\end{theo}
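For the reader's convenience, let us record explicitly what these Galois connection conditions amount to: for all subobjects $Z$ of $U$, $X$ of $M$ and $Y$ of $N$ one has
\[Z\subseteq l_U(X) \Longleftrightarrow X\subseteq r_M(Z), \qquad\qquad Z\subseteq l'_U(Y) \Longleftrightarrow r'_N(Z)\subseteq Y,\]
because each of the left-hand conditions says that $X\subseteq {\rm Ker}(f)$, respectively ${\rm Im}(f)\subseteq Y$, for every $f\in Z$.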
Recall the following concepts adapted from module theory \cite{CLVW,DHSW} (extending and lifting modules),
\cite{Atani,WangY} (strongly extending and strongly lifting modules).
\begin{defn} \rm Let $\mathcal{A}$ be an abelian category, $M$ an object of $\mathcal{A}$ and $K$ a subobject of
$M$.
Then $K$ is called:
\begin{enumerate}
\item an \emph{essential} subobject of $M$ if for any subobject $X$ of $M$, $K\cap X=0$ implies $X=0$.
\item a \emph{superfluous} subobject of $M$ if for any subobject $X$ of $M$, $K+X=M$ implies $X=M$.
\end{enumerate}
Then $M$ is called:
\begin{enumerate}
\item[(3)] \emph{(strongly) extending} if every subobject of $M$ is essential in a (fully invariant) direct summand of $M$.
\item[(4)] \emph{(strongly) lifting} if every subobject $L$ of $M$ contains a (fully invariant) direct summand $K$ of $M$
such that $L/K$ is superfluous in $M/K$.
\end{enumerate}
\end{defn}
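To illustrate these notions in the category of abelian groups: every non-zero subgroup $n\mathbb{Z}$ is an essential subobject of $\mathbb{Z}$, since it intersects every non-zero subgroup $m\mathbb{Z}$ non-trivially (their intersection contains $mn\mathbb{Z}\neq 0$), whereas the only superfluous subobject of $\mathbb{Z}$ is $0$, because for $n\geq 2$ one has $n\mathbb{Z}+(n+1)\mathbb{Z}=\mathbb{Z}$ with $(n+1)\mathbb{Z}\neq \mathbb{Z}$, and $\mathbb{Z}+0=\mathbb{Z}$.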
Next we introduce relative versions of some notions which were used, under different names, in the theory of
(dual) Baer modules \cite{KT,RR04}, and which were slightly modified in \cite{GO} to fit the theory of Baer-Galois
connections.
\begin{defn} \rm \cite[Definition~9.4]{CK} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$.
Then $N$ is called:
\begin{enumerate}
\item \emph{$M$-$\mathcal{K}$-nonsingular} if for any morphism $f:M\to N$ in $\mathcal{A}$, ${\rm Ker}(f)$ essential in
$M$ implies $f=0$.
\item \emph{$M$-$\mathcal{K}$-cononsingular} if for any subobject $X$ of $M$, $l_U(X)=0 $ implies that $X$ is
essential in $M$.
\end{enumerate}
Then $M$ is called:
\begin{enumerate}
\item[(3)] \emph{$N$-$\mathcal{T}$-nonsingular} if for any morphism $f:M\to N$ in $\mathcal{A}$, ${\rm Im}(f)$ superfluous in
$N$ implies $f=0$.
\item[(4)] \emph{$N$-$\mathcal{T}$-cononsingular} if for any subobject $Y$ of $N$, $l_U'(Y)=0$ implies that $Y$ is
superfluous in $N$.
\end{enumerate}
\end{defn}
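For instance, taking $M=N=\mathbb{Z}$ in the category of abelian groups, $\mathbb{Z}$ is $\mathbb{Z}$-$\mathcal{K}$-nonsingular and $\mathbb{Z}$-$\mathcal{T}$-nonsingular: a morphism $f:\mathbb{Z}\to \mathbb{Z}$ whose kernel is essential in $\mathbb{Z}$ has non-zero kernel and hence is zero, and a morphism whose image is superfluous in $\mathbb{Z}$ has zero image and hence is zero.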
Now we may establish a result connecting the (dual) strongly relative Baer property to
the strongly extending (strongly lifting) property. It is similar to \cite[Theorem~9.5]{CK}
(also see \cite[Corollaries 3.1 and 3.2]{GO}), which relates the (dual) relative Baer property to the extending (lifting) property.
\begin{theo} \label{t:khuri} Let $M$ and $N$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $N^I$ for every set $I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-Baer and $M$-$\mathcal{K}$-cononsingular.
\item $M$ is strongly extending and $N$ is $M$-$\mathcal{K}$-nonsingular.
\end{enumerate}
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is dual strongly $M$-Baer and $M$ is $N$-$\mathcal{T}$-cononsingular.
\item $M$ is strongly lifting and $N$-$\mathcal{T}$-nonsingular.
\end{enumerate}
\end{enumerate}
\end{theo}
\begin{proof} (1) $(i)\Rightarrow(ii)$ Assume that $N$ is strongly $M$-Baer and $M$-$\mathcal{K}$-cononsingular.
Let $L$ be a subobject of $M$. Denote $$K=r_M(l_U(L))=\bigcap\{{\rm Ker}(f)\mid f\in U \textrm{ and } L\subseteq {\rm
Ker}(f)\}.$$ Clearly, we have $L\subseteq K$. Since $N$ is strongly $M$-Baer,
$K$ is a fully invariant direct summand of $M$, say $M=K\oplus K'$ for some subobject $K'$ of $M$. Now one shows that
$L$ is essential in $K$ as in the proof of \cite[Theorem~9.5]{CK}. Hence $L$ is essential in
the fully invariant direct summand $K$ of $M$, and so $M$ is strongly extending.
Finally, $N$ is $M$-$\mathcal{K}$-nonsingular also by \cite[Theorem~9.5]{CK}.
$(ii)\Rightarrow(i)$ Assume that $M$ is strongly extending and $N$ is $M$-$\mathcal{K}$-nonsingular.
Let $I$ be a set, and let $f:M\to N^I$ be a morphism in $\mathcal{A}$. Since $M$ is strongly extending, ${\rm Ker}(f)$ is
essential in a fully invariant direct summand $L$ of $M$, say $M=L\oplus L'$. Now one shows that ${\rm Ker}(f)=L$
as in the proof of \cite[Theorem~9.5]{CK}. Then ${\rm Ker}(f)$ is a fully invariant direct summand of $M$.
Hence $N^I$ is strongly $M$-Rickart. It follows that $N$ is strongly $M$-Baer by Lemma~\ref{l:BR}.
Finally, $N$ is $M$-$\mathcal{K}$-cononsingular also by \cite[Theorem~9.5]{CK}.
\end{proof}
\begin{coll} Let $M$ be an object of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $M^I$ for every set $I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is strongly self-Baer and $M$-$\mathcal{K}$-cononsingular.
\item $M$ is strongly extending and $M$-$\mathcal{K}$-nonsingular.
\end{enumerate}
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is dual strongly self-Baer and $M$-$\mathcal{T}$-cononsingular.
\item $M$ is strongly lifting and $M$-$\mathcal{T}$-nonsingular.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} (1) This follows by Theorem \ref{t:khuri}. Alternatively, note that $M$ is strongly self-Baer
if and only if $M$ is self-Baer and weak duo by Corollary \ref{c:Bwduo},
while $M$ is strongly extending if and only if $M$ is extending and weak duo.
Now the result follows by \cite[Theorem~9.5]{CK}.
\end{proof}
\section{(Co)products of (dual) strongly relative Baer objects}
We continue to apply our theory of (dual) strongly relative Rickart objects in order to obtain corresponding results for
(co)products of (dual) strongly relative Baer objects.
\begin{coll} Let $\mathcal{A}$ be an abelian category.
\begin{enumerate}
\item Let $M$ and $N_1,\dots,N_n$ be objects of $\mathcal{A}$. Then $\bigoplus_{i=1}^n N_i$ is strongly $M$-Baer if and only if
$N_i$ is strongly $M$-Baer for every $i\in \{1,\dots,n\}$.
\item Let $M_1,\dots,M_n$ and $N$ be objects of $\mathcal{A}$. Then $N$ is dual strongly $\bigoplus_{i=1}^n M_i$-Baer if and only
if $N$ is dual strongly $M_i$-Baer for every $i\in \{1,\dots,n\}$.
\end{enumerate}
\end{coll}
\begin{proof} (1) By \cite[Theorem~3.1]{CO1} and Lemma~\ref{l:BR}, $\bigoplus_{i=1}^n N_i$ is strongly $M$-Baer if and only if
$(\bigoplus_{i=1}^n N_i)^I\cong \bigoplus_{i=1}^n N_i^I$ is strongly $M$-Rickart for every set $I$ if and only if
$N_1^I,\dots,N_n^I$ are strongly $M$-Rickart for every set $I$ if and only if $N_1,\dots,N_n$ are strongly $M$-Baer.
\end{proof}
\begin{lemm} \label{l:selfbaer} Let $\mathcal{A}$ be an abelian category, and let $(M_i)_{i\in I}$ be a family of
objects of $\mathcal{A}$.
\begin{enumerate}
\item Assume that $\prod_{i\in I} M_i$ is strongly self-Baer.
Then $M_i$ is strongly self-Baer and strongly $M_j$-Rickart for every $i,j\in I$.
\item Assume that $\bigoplus_{i\in I} M_i$ is dual strongly self-Baer.
Then $M_i$ is dual strongly self-Baer and dual strongly $M_j$-Rickart for every $i,j\in I$.
\end{enumerate}
\end{lemm}
\begin{proof} (1) Assume that $\prod_{i\in I} M_i$ is strongly self-Baer.
Then $M_i$ is strongly self-Baer and strongly $M_j$-Baer for every $i,j\in I$ by Corollary~\ref{c:epimonobaer}.
By Theorem~\ref{t:SS}, $M_i$ is strongly $M_j$-Rickart for every $i,j\in I$.
\end{proof}
\begin{theo} Let $M_1,\dots,M_n$ be objects of an abelian category $\mathcal{A}$.
\begin{enumerate}
\item Assume that $M_i$ is direct $M_j$-injective for every $i,j\in \{1,\dots,n\}$. Then $\bigoplus_{i=1}^n M_i$ is
strongly self-Baer if and only if $M_i$ is strongly self-Baer and strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$.
\item Assume that $M_i$ is direct $M_j$-projective for every $i,j\in \{1,\dots,n\}$. Then
$\bigoplus_{i=1}^n M_i$ is dual strongly self-Baer if and only if $M_i$ is dual strongly self-Baer and
dual strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$.
\end{enumerate}
\end{theo}
\begin{proof} (1) The direct implication follows by Lemma~\ref{l:selfbaer}.
Conversely, assume that $M_i$ is strongly self-Baer and strongly $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$.
Since $M_i$ is direct $M_j$-injective, it follows that $M_i$ is strongly $M_j$-regular for every $i,j\in \{1,\dots,n\}$
by Theorem~\ref{t:char}. Furthermore, $\bigoplus_{i=1}^n M_i$ is strongly self-regular by Theorem~\ref{t:dsreg},
and so $\bigoplus_{i=1}^n M_i$ is strongly self-Rickart by Proposition~\ref{p:Mstrreg}.
Since $M_i$ is self-Baer and $M_j$-Rickart for every $i,j\in \{1,\dots,n\}$,
it follows that $\bigoplus_{i=1}^n M_i$ is self-Baer by \cite[Theorem~7.3]{CK}.
Then $\bigoplus_{i=1}^n M_i$ has SSIP by \cite[Corollary~6.4]{CK}.
Finally, $\bigoplus_{i=1}^n M_i$ is strongly self-Baer by Corollary~\ref{c:BR}.
\end{proof}
\begin{coll} Let $M_1,\dots,M_n$ be objects of an abelian category $\mathcal{A}$ such that $M_i$ is strongly $M_j$-regular for
every $i,j\in \{1,\dots,n\}$. Then $\bigoplus_{i=1}^n M_i$ is (dual) strongly self-Baer if and only if
$M_i$ is (dual) strongly self-Baer for every $i\in \{1,\dots,n\}$.
\end{coll}
\begin{theo} \label{t:Bhomzero}
Let $\mathcal{A}$ be an abelian category. Let $M=\bigoplus_{i\in I}M_i$ be a direct sum decomposition of
an object $M$ of $\mathcal{A}$.
\begin{enumerate}
\item Then $M$ is strongly self-Baer if and only if $M_i$ is strongly self-Baer for every $i\in I$,
${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$, and
$N=\bigoplus_{i\in I}(N\cap M_i)$ for every direct summand $N$ of $M$.
\item Then $M$ is dual strongly self-Baer if and only if $M_i$ is dual strongly self-Baer for every $i\in I$,
${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$, and
$N=\bigoplus_{i\in I}(N\cap M_i)$ for every direct summand $N$ of $M$.
\end{enumerate}
\end{theo}
\begin{proof} (1) Assume first that $M$ is strongly self-Baer. Then $M$ is strongly self-Rickart,
hence we have ${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$ by \cite[Theorem~3.6]{CO1}.
Also, $M$ is self-Baer and weak duo by Corollary \ref{c:Bwduo}. It follows that
$M_i$ is self-Baer for every $i\in I$ by \cite[Proposition~3.20]{RR09}.
Also, we have $N=\bigoplus_{i\in I}(N\cap M_i)$ for every direct summand $N$ of $M$ by \cite[Theorem~2.7]{OHS}.
Conversely, assume that $M_i$ is strongly self-Baer for every $i\in I$,
${\rm Hom}_{\mathcal{A}}(M_i,M_j)=0$ for every $i,j\in I$ with $i\neq j$, and
$N=\bigoplus_{i\in I}(N\cap M_i)$ for every direct summand $N$ of $M$.
Then $M_i$ is self-Baer and weak duo for every $i\in I$ by Corollary \ref{c:Bwduo}.
It follows that $M$ is self-Baer and weak duo by \cite[Proposition~3.20]{RR09}
and \cite[Theorem~2.7]{OHS}. Finally, $M$ is strongly self-Baer by Corollary \ref{c:Bwduo}.
(2) This is not completely dual to (1), but it follows similarly, using images instead of kernels
in order to prove an analogue of \cite[Proposition~3.20]{RR09} for dual Baer modules.
\end{proof}
\begin{coll} \label{c:Bdede} Let $R$ be a Dedekind domain with quotient field $K$.
\begin{enumerate}
\item
\begin{enumerate}[(i)]
\item A non-zero torsion $R$-module $M$ is strongly self-Baer if and only if
$M\cong \bigoplus_{i\in I} R/P_i$ for some distinct maximal ideals $P_i$ of $R$.
\item A finitely generated $R$-module $M$ is strongly self-Baer if and only if
$M\cong J$ for some ideal $J$ of $R$ or $M\cong \bigoplus_{i=1}^kR/P_i$
for some distinct maximal ideals $P_i$ of $R$.
\item A non-zero injective $R$-module $M$ is strongly self-Baer if and only if $M\cong K$.
\end{enumerate}
\item An $R$-module $M$ is dual strongly self-Baer if and only if $M\cong \bigoplus_{i\in I}M_i$
for some distinct $R$-modules $M_i$ which are either $K$ or $E(R/P_i)$ for some maximal ideals $P_i$ of $R$, or
$M\cong \bigoplus_{i\in I}M_i'$ for some distinct $R$-modules $M_i'$ which are either $K$ or $R/P_i$
for some maximal ideals $P_i$ of $R$.
\end{enumerate}
\end{coll}
\begin{proof} (1) Since every strongly self-Baer $R$-module is strongly self-Rickart, it follows by
\cite[Corollary~3.8]{CO1} that every strongly self-Baer $R$-module that is non-zero torsion, finitely generated or non-zero injective
has an indecomposable direct sum decomposition. But indecomposable strongly self-Baer $R$-modules coincide with
indecomposable strongly self-Rickart $R$-modules by Corollary \ref{c:BR}.
Then the conclusion follows by \cite[Corollary~3.8]{CO1} and Theorem \ref{t:Bhomzero}.
(2) Every dual self-Baer $R$-module has an indecomposable direct sum decomposition \cite[Corollary~2.6]{KT}.
The indecomposable dual self-Baer $R$-modules are $K$, $E(R/P)$ and $R/P$ for some prime ideal $P$ of $R$
by \cite[Theorem~3.4]{KT}. Using also the structure theorem of torsion weak duo $R$-modules,
it follows that the indecomposable dual strongly self-Baer $R$-modules are $K$, $E(R/P)$ and $R/P$
for some maximal ideal $P$ of $R$. Alternatively, one can deduce the same by using Corollary \ref{c:BR} and \cite[Corollary~3.8]{CO1}.
Now the conclusion follows by \cite[Theorem~3.6]{CO1}.
\end{proof}
\begin{coll} \label{c:Babgr}
\begin{enumerate}
\item
\begin{enumerate}[(i)]
\item A non-zero torsion abelian group $G$ is strongly self-Baer if and only if
$G\cong \bigoplus_{i\in I} \mathbb{Z}_{p_i}$ for some distinct primes $p_i$.
\item A finitely generated abelian group $G$ is strongly self-Baer if and only if
$G\cong \mathbb{Z}$ or $G\cong \bigoplus_{i=1}^k \mathbb{Z}_{p_i}$ for some distinct primes $p_i$.
\item A non-zero injective abelian group $G$ is strongly self-Baer if and only if $G\cong \mathbb{Q}$.
\end{enumerate}
\item An abelian group $G$ is dual strongly self-Baer if and only if $G\cong \bigoplus_{i\in I}M_i$
for some distinct abelian groups $M_i$ which are either $\mathbb{Q}$ or $\mathbb{Z}_{p_i^{\infty}}$
for some primes $p_i$, or $G\cong \bigoplus_{i\in I}M_i'$ for some distinct abelian groups $M_i'$
which are either $\mathbb{Q}$ or $\mathbb{Z}_{p_i}$ for some primes $p_i$.
\end{enumerate}
\end{coll}
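\begin{proof} This is the case $R=\mathbb{Z}$ of Corollary \ref{c:Bdede}: $\mathbb{Z}$ is a Dedekind domain with quotient field $\mathbb{Q}$, every non-zero ideal of $\mathbb{Z}$ is isomorphic to $\mathbb{Z}$, and for every prime $p$ one has $\mathbb{Z}/p\mathbb{Z}\cong \mathbb{Z}_{p}$ and $E(\mathbb{Z}/p\mathbb{Z})\cong \mathbb{Z}_{p^{\infty}}$.
\end{proof}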
\begin{ex} \rm The abelian group $\mathbb{Z}\oplus \mathbb{Z}$ is self-Baer by \cite[Proposition~2.19]{RR04},
but not strongly self-Baer by Corollary \ref{c:Babgr}.
The abelian group $\mathbb{Q}\oplus \mathbb{Q}$ is dual self-Baer by \cite[Theorem~3.4]{KT},
but not dual strongly self-Baer by Corollary \ref{c:Babgr}.
\end{ex}
\section{(Dual) strongly relative Baer objects: transfer via functors}
Our first result on the transfer of the (dual) strongly relative Baer property
via (additive) functors involves a fully faithful covariant functor.
\begin{coll} \label{c:ffbaer} Let $F:\mathcal{A}\to \mathcal{B}$ be a fully faithful functor between abelian categories,
and let $M$ and $N$ be objects of $\mathcal{A}$.
\begin{enumerate}
\item Assume that there exists the product $N^I$ for every set $I$, $F$ is left exact and $F$ preserves products. Then
$N$ is strongly $M$-Baer in $\mathcal{A}$ if and only if $F(N)$ is strongly $F(M)$-Baer in $\mathcal{B}$.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$, $F$ is right exact and $F$ preserves
coproducts. Then $N$ is dual strongly $M$-Baer in $\mathcal{A}$ if and only if $F(N)$ is dual strongly $F(M)$-Baer in $\mathcal{B}$.
\end{enumerate}
\end{coll}
\begin{proof} This follows by \cite[Theorem~4.1]{CO1} and Lemma \ref{l:BR}.
\end{proof}
For Grothendieck categories we have the following corollary.
\begin{coll} Let $\mathcal{A}$ be a Grothendieck category with a generator $U$, $R={\rm End}_{\mathcal{A}}(U)$, $S={\rm
Hom}_{\mathcal{A}}(U,-):\mathcal{A}\to {\rm Mod}(R)$. Let $M$ and $N$ be objects of $\mathcal{A}$. Then $N$ is a strongly
$M$-Baer object of $\mathcal{A}$ if and only if $S(N)$ is a strongly $S(M)$-Baer right $R$-module.
\end{coll}
\begin{proof} By the Gabriel-Popescu Theorem \cite[Chapter~X, Theorem~4.1]{St}, $S$ is a fully faithful functor which
has a left adjoint $T:{\rm Mod}(R)\to \mathcal{A}$. Since $S$ is a right adjoint, it is left exact and preserves
products. Then use Corollary \ref{c:ffbaer}.
\end{proof}
\begin{coll} \label{c:tripleffbaer} Let $(L,F,R)$ be an adjoint triple of covariant functors $F:\mathcal{A}\to
\mathcal{B}$ and $L,R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Let $M$ and $N$ be objects of $\mathcal{A}$, and assume that $F$ is fully faithful.
\begin{enumerate}[(i)]
\item Assume that there exists the product $N^I$ for every set $I$. Then $N$ is strongly $M$-Baer in $\mathcal{A}$ if and only if
$F(N)$ is strongly $F(M)$-Baer in $\mathcal{B}$.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then $N$ is dual strongly $M$-Baer in $\mathcal{A}$ if
and only if $F(N)$ is dual strongly $F(M)$-Baer in $\mathcal{B}$.
\end{enumerate}
\item Let $M$ and $N$ be objects of $\mathcal{B}$, and assume that $L$ (or $R$) is fully faithful.
\begin{enumerate}[(i)]
\item Assume that there exists the product $N^I$ for every set $I$. Then $N$ is strongly $M$-Baer in $\mathcal{B}$ if and only if
$R(N)$ is strongly $R(M)$-Baer in $\mathcal{A}$.
\item Assume that there exists the coproduct $M^{(I)}$ for every set $I$. Then $N$ is dual strongly $M$-Baer in $\mathcal{B}$ if
and only if $L(N)$ is dual strongly $L(M)$-Baer in $\mathcal{A}$.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} This follows by \cite[Corollary~4.3]{CO1}, Lemma \ref{l:BR} and the facts that $F$ preserves products
and coproducts as a left and right adjoint, $R$ preserves products, and $L$ preserves coproducts.
\end{proof}
\begin{coll} Let $\varphi:R\to S$ be a ring epimorphism, and let $M$ and $N$ be right $S$-modules. Then $N$ is a (dual)
strongly $M$-Baer right $S$-module if and only if $N$ is a (dual) strongly $M$-Baer right $R$-module.
\end{coll}
\begin{proof} Since $\varphi:R\to S$ is a ring epimorphism, the restriction of scalars functor $\varphi_*:{\rm
Mod}(S)\to {\rm Mod}(R)$ is fully faithful \cite[Chapter~XI, Proposition~1.2]{St}. Then use Corollary
\ref{c:tripleffbaer} for the adjoint triple of functors $(\varphi^*,\varphi_*,\varphi^!)$.
\end{proof}
\begin{coll} Let $R$ be a $G$-graded ring, and let $M$ and $N$ be right $R_e$-modules. Then:
\begin{enumerate}
\item $N$ is a strongly $M$-Baer right $R_e$-module if and only if
${\rm Coind}(N)$ is a strongly ${\rm Coind}(M)$-Baer graded right $R$-module.
\item $N$ is a dual strongly $M$-Baer right $R_e$-module if and only if
${\rm Ind}(N)$ is a dual strongly ${\rm Ind}(M)$-Baer graded right $R$-module.
\end{enumerate}
\end{coll}
\begin{proof} As in the proof of \cite[Corollary~4.5]{CO1}, the functors ${\rm Ind}$ and ${\rm Coind}$ are fully faithful.
Now use Corollary \ref{c:tripleffbaer}.
\end{proof}
\begin{coll} \label{c:rcbaer} Let $\mathcal{A}$ be an abelian category, $\mathcal{C}$ an abelian full subcategory of
$\mathcal{A}$ and $i:\mathcal{C}\to \mathcal{A}$ the inclusion functor.
\begin{enumerate}
\item Assume that $\mathcal{C}$ is a reflective subcategory of $\mathcal{A}$.
Also, assume that there exists the coproduct $M^{(I)}$ for every set $I$.
Let $M$ and $N$ be objects of $\mathcal{C}$. Then $N$ is strongly $M$-Baer in $\mathcal{C}$
if and only if $i(N)$ is strongly $i(M)$-Baer in $\mathcal{A}$.
\item Assume that $\mathcal{C}$ is a coreflective subcategory of $\mathcal{A}$.
Also, assume that there exists the product $N^I$ for every set $I$.
Let $M$ and $N$ be objects of $\mathcal{C}$. Then $N$ is (dual) strongly $M$-Baer in $\mathcal{C}$
if and only if $i(N)$ is (dual) strongly $i(M)$-Baer in $\mathcal{A}$.
\end{enumerate}
\end{coll}
\begin{proof} This follows by Lemma \ref{l:BR} and \cite[Corollary~4.6]{CO1}.
\end{proof}
For comodule categories we have the following corollary.
\begin{coll} \label{c:com3} Let $C$ be a coalgebra over a field, and let $M$ and $N$ be left $C$-comodules.
Then $N$ is (dual) strongly $M$-Baer if and only if $N$ is (dual) strongly $M$-Baer as a right $C^*$-module.
\end{coll}
\begin{proof} Note that ${}^C\mathcal{M}$ is a coreflective abelian subcategory of ${\rm Mod}(C^*)$
and use Corollary \ref{c:rcbaer}.
\end{proof}
In order to discuss the transfer of the (dual) strong relative Baer property to endomorphism
rings, we first give some general results involving adjoint functors.
\begin{coll} \label{c:equivbaer} Let $(L,R)$ be an adjoint pair of covariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Stat}(R)$ and for every set $I$ there exists
the product $N^I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-Baer in $\mathcal{B}$.
\item $R(N)$ is strongly $R(M)$-Baer in $\mathcal{A}$ and for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm
Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is $M$-cyclic.
\item $R(N)$ is strongly $R(M)$-Baer in $\mathcal{A}$ and for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm
Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}{\rm Ker}(f_i)\in {\rm Stat}(R)$.
\end{enumerate}
\item Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Adst}(R)$ and for every set $I$ there exists
the coproduct $M^{(I)}$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is dual strongly $M$-Baer in $\mathcal{A}$.
\item $L(N)$ is dual strongly $L(M)$-Baer in $\mathcal{B}$ and for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm
Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in I} {\rm Im}(f_i)$ is $N$-cocyclic.
\item $L(N)$ is dual strongly $L(M)$-Baer in $\mathcal{B}$ and for every family $(f_i)_{i\in I}$ with each $f_i\in {\rm
Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in I} {\rm Im}(f_i)\in {\rm Adst}(R)$.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} This follows by \cite[Theorem~4.9]{CO1}, Lemma \ref{l:BR}, \cite[Lemma~3.11]{CO} and the facts that $R$
preserves products and $L$ preserves coproducts.
\end{proof}
\begin{coll} Let $(L,R)$ be a pair of contravariant functors $L:\mathcal{A}\to \mathcal{B}$ and
$R:\mathcal{B}\to \mathcal{A}$ between abelian categories.
\begin{enumerate}
\item Assume that $(L,R)$ is left adjoint. Let $M$ and $N$ be objects of $\mathcal{B}$ such that $M,N\in {\rm Refl}(R)$
and for every set $I$ there exists the product $N^I$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is strongly $M$-Baer in $\mathcal{B}$.
\item $R(M)$ is dual strongly $R(N)$-Baer in $\mathcal{A}$ and for every set $I$ and for every family $(f_i)_{i\in
I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is $M$-cyclic.
\item $R(M)$ is dual strongly $R(N)$-Baer in $\mathcal{A}$ and for every set $I$ and for every family $(f_i)_{i\in
I}$ with each $f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\bigcap_{i\in I}{\rm Ker}(f_i)\in {\rm Refl}(R)$.
\end{enumerate}
\item Assume that $(L,R)$ is right adjoint. Let $M$ and $N$ be objects of $\mathcal{A}$ such that $M,N\in {\rm Refl}(L)$
and for every set $I$ there exists the coproduct $M^{(I)}$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $N$ is dual strongly $M$-Baer in $\mathcal{A}$.
\item $L(M)$ is strongly $L(N)$-Baer in $\mathcal{B}$ and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in I} {\rm Im}(f_i)$ is $N$-cocyclic.
\item $L(M)$ is strongly $L(N)$-Baer in $\mathcal{B}$ and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in {\rm Hom}_{\mathcal{A}}(M,N)$, $\sum_{i\in I} {\rm Im}(f_i)\in {\rm Refl}(L)$.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} (2) This follows by \cite[Theorem~4.10]{CO1}, Lemma \ref{l:BR}, \cite[Lemma~3.11]{CO} and the fact that $L$ converts
coproducts into products.
\end{proof}
Finally, we discuss the transfer of the (dual) strong relative Baer property to endomorphism rings of (graded) modules.
\begin{coll} Let $M$ be a right $R$-module, and let $S={\rm End}_R(M)$.
\begin{enumerate}
\item The following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a strongly self-Baer right $R$-module.
\item $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is $M$-cyclic.
\item $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm Ker}(f_i)\in {\rm Stat}({\rm Hom}_R(M,-))$.
\item $S$ is a strongly self-Baer right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each $f_i\in
S$, $\bigcap_{i\in I}{\rm ker}(f_i)$ is a locally split monomorphism.
\item $S$ is a strongly self-Baer right $S$-module and $M$ is quasi-retractable.
\end{enumerate}
\item The following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a dual strongly self-Baer right $R$-module.
\item $S$ is a strongly self-Baer left $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\sum_{i\in I} {\rm Im}(f_i)$ is $M$-cocyclic.
\item $S$ is a strongly self-Baer left $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\sum_{i\in I} {\rm Im}(f_i)\in {\rm Adst}({\rm Hom}_R(M,-))$.
\item $S$ is a strongly self-Baer left $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\sum_{i\in I} {\rm im}(f_i)$ is a locally split epimorphism.
\item $S$ is a strongly self-Baer left $S$-module and $M$ is quasi-coretractable.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} The equivalences (i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii) follow by \cite[Theorem~4.12]{CO1},
Lemma \ref{l:BR} and \cite[Lemma~3.11]{CO}.
The other equivalences follow in a similar way as the corresponding ones from \cite[Theorem~4.12]{CO1}
with endomorphisms replaced by families of endomorphisms, kernels replaced by intersections of kernels, and
cokernels replaced by sums of images.
\end{proof}
\begin{coll} Let $M$ be a graded right $R$-module, and let $S={\rm END}_R(M)$.
\begin{enumerate}
\item The following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a strongly self-Baer graded right $R$-module.
\item $S$ is a strongly self-Baer graded right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\bigcap_{i\in I}{\rm Ker}(f_i)$ is $M$-cyclic.
\item $S$ is a strongly self-Baer graded right $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\bigcap_{i\in I}{\rm Ker}(f_i)\in {\rm Stat}({\rm HOM}_R(M,-))$.
\end{enumerate}
\item The following are equivalent:
\begin{enumerate}[(i)]
\item $M$ is a dual strongly self-Baer graded right $R$-module.
\item $S$ is a strongly self-Baer graded left $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\sum_{i\in I} {\rm Im}(f_i)$ is $M$-cocyclic.
\item $S$ is a strongly self-Baer graded left $S$-module and for every set $I$ and for every family $(f_i)_{i\in I}$ with each
$f_i\in S$, $\sum_{i\in I} {\rm Im}(f_i)\in {\rm Refl}({\rm HOM}_R(-,M))$.
\end{enumerate}
\end{enumerate}
\end{coll}
\begin{proof} This follows by \cite[Corollary~4.13]{CO1}, Lemma \ref{l:BR} and \cite[Lemma~3.11]{CO}.
\end{proof}
\section{Introduction}
\label{sec:intro}
Supersymmetry (SUSY)~\cite{Golfand:1971iw,Volkov:1973ix,Wess:1974tw,Wess:1974jb,Ferrara:1974pu,Salam:1974ig,Martin:1997ns} is one of the most studied extensions
of the Standard Model (SM).
In its minimal realization (the Minimal Supersymmetric Standard Model, or MSSM)~\cite{Fayet:1976et,Fayet:1977yc},
it predicts new fermionic (bosonic) partners of the fundamental SM bosons (fermions)
and an additional Higgs doublet. These new SUSY particles, or sparticles,
can provide an elegant solution to the gauge hierarchy problem~\cite{Sakai:1981gr,Dimopoulos:1981yj,Ibanez:1981yh,Dimopoulos:1981zb}.
In $R$-parity-conserving models~\cite{Farrar:1978xj}, sparticles can only be produced in pairs and
the lightest supersymmetric particle (LSP) is stable. This is typically assumed to be the \ninoone
neutralino\footnote{The SUSY partners of the Higgs field (known as higgsinos) and of the electroweak gauge fields (the bino for the U(1) gauge field and winos for the $W$ fields) mix to form the mass eigenstates known as charginos and neutralinos.}, which can then provide a natural candidate for dark matter~\cite{Goldberg:1983nd,Ellis:1983ew}.
If produced in proton--proton collisions, a neutralino LSP would escape detection
and lead to an excess of events with large missing transverse momentum above the expectations from SM processes,
a characteristic that is exploited to search for SUSY signals in analyses presented in this paper.
The production cross-sections of SUSY particles at the Large Hadron Collider (LHC)~\cite{1748-0221-3-08-S08001} depend both on the type of interaction involved
and on the masses of the sparticles.
The coloured sparticles (squarks and gluinos) are produced in strong interactions with significantly larger production cross-sections
than non-coloured sparticles of equal masses, such as the charginos ($\tilde{\chi}^{\pm}_{i}$, $i = 1, 2$) and neutralinos ($\tilde{\chi}^{0}_{j}$, $j = 1, 2, 3, 4$) and the
sleptons ($\slepton$ and $\snu$). The direct production of charginos and neutralinos or slepton pairs can dominate SUSY production at the LHC if the masses of the gluinos and the squarks are significantly larger.
With searches performed by the ATLAS~\cite{PERF-2007-01} and CMS~\cite{CMS-TDR-08-001} experiments
during LHC Run 2, the exclusion limits on coloured sparticle masses extend up to approximately $2\,$TeV~\cite{Aaboud:2017vwy,Sirunyan:2017cwe,Sirunyan:2017kqq}, making electroweak production an increasingly important probe for SUSY signals at the LHC.
This paper presents a set of searches for the electroweak production of charginos, neutralinos and sleptons decaying into final states with two or
three electrons or muons using 36.1 fb$^{-1}$ of proton--proton collision data delivered by the LHC at a centre-of-mass energy of $\sqrt{s}=13$~TeV.
The results build on studies performed during LHC Run 1 at $\sqrt{s}=7$~TeV and 8~TeV by the ATLAS Collaboration~\cite{SUSY-2013-11,SUSY-2013-12,Aad:2015eda}.
Analogous studies by the CMS Collaboration are presented in Refs.~\cite{Sirunyan:2017lae,CMS-SUS-12-006,Khachatryan:2014mma,Khachatryan:2014qwa}.
After descriptions of the SUSY scenarios considered (Section~\ref{sec:signal}), the experimental apparatus (Section~\ref{sec:detector}),
the simulated samples~(Section~\ref{sec:dataMCsamples}) and the event reconstruction (Section~\ref{sec:objects}), the analysis search strategy is discussed in Section~\ref{sec:analysis}.
This is followed by Section~\ref{sec:backgrounds}, which describes the estimation of
SM contributions to the measured yields in the signal regions, and by Section~\ref{sec:systematics}, which
discusses systematic uncertainties affecting the searches. Results are presented in Section~\ref{sec:result}, together with
the statistical tests used to interpret them in the context of relevant SUSY benchmark scenarios.
Section~\ref{sec:conclusion} summarizes the main conclusions.
\section{SUSY scenarios and search strategy}
\label{sec:signal}
This paper presents searches for the direct pair-production of $\chinoonep\chinoonem$, $\chinoonepm\ninotwo$ and
$\slepton\slepton$ particles, in final states with exactly two or
three electrons and muons, two $\ninoone$ particles, and possibly additional jets or neutrinos.
Simplified models~\cite{Alwall:2008ag}, in which the masses of the relevant sparticles are the only free parameters,
are used for interpretation and to guide the design of the searches.
The pure wino $\chinoonepm$ and $\ninotwo$ are taken to be
mass-degenerate, and so are the scalar partners of the left-handed charged leptons and neutrinos ($\tilde{e}_{\textup{L}}$, $\tilde{\mu}_{\textup{L}}$, $\tilde{\tau}_{\textup{L}}$ and $\tilde{\nu}$).
Intermediate slepton masses, when relevant, are chosen to be midway between the mass of the heavier chargino and neutralino and that of the lightest neutralino, which is pure bino,
and equal branching ratios for the three slepton flavours are assumed. The analysis sensitivity is not expected to depend strongly on the slepton mass hypothesis for a broad range
of slepton masses, while it degrades as the slepton mass approaches that of the heavier chargino and neutralino, leading to lower \pt values for the leptons produced in the heavy
chargino and neutralino decays~\cite{Aad:2015eda}.
Lepton flavour is conserved in all models.
Diagrams of processes considered are shown in Figure~\ref{fig:TreeDiagramsPhysScenarios}.
For models exploring $\chinoonep\chinoonem$ production, it is assumed that the sleptons are also light and thus accessible in the sparticle decay chains, as illustrated in Figure~\ref{fig:C1C1tree}.
Two different classes of models are considered for $\chinoonepm\ninotwo$ production: in one case, the $\chinoonepm$ chargino and $\ninotwo$ neutralino can decay into final-state SM particles and a $\ninoone$ neutralino via
an intermediate $\slepton_{\textup{L}}$ or $\tilde{\nu}_{\textup{L}}$, with a branching ratio of 50\% to each (Figure~\ref{fig:C1N2tree}); in the other case the $\chinoonepm$ chargino and $\ninotwo$ neutralino decays
proceed via SM gauge bosons ($W$ or $Z$).
For the gauge-boson-mediated decays, two distinct final states are considered: three-lepton (where lepton refers to an electron or muon) events where both
the $W$ and $Z$ bosons decay leptonically (Figure~\ref{fig:C1N2treeWZ}) or events with two opposite-sign leptons and two jets where the
$W$ boson decays hadronically and the $Z$ boson decays leptonically (Figure~\ref{fig:C1N2treeWZj}).
In models with direct $\slepton\slepton$ production, each
slepton decays into a lepton and a $\ninoone$ with a 100\% branching ratio (Figure~\ref{fig:sleptontree}), and $\tilde{e}_{\textup{L}}$, $\tilde{e}_{\textup{R}}$, $\tilde{\mu}_{\textup{L}}$, $\tilde{\mu}_{\textup{R}}$, $\tilde{\tau}_{\textup{L}}$ and $\tilde{\tau}_{\textup{R}}$ are assumed to be mass-degenerate.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}\includegraphics[width=\textwidth]{fig_01a.pdf}\caption{}\label{fig:C1C1tree}\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}\includegraphics[width=\textwidth]{fig_01b.pdf}\caption{}\label{fig:C1N2tree}\end{subfigure} \\
\begin{subfigure}[t]{0.3\textwidth}\includegraphics[width=\textwidth]{fig_01c.pdf}\caption{}\label{fig:C1N2treeWZ}\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}\includegraphics[width=\textwidth]{fig_01d.pdf}\caption{}\label{fig:C1N2treeWZj}\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}\includegraphics[width=\textwidth]{fig_01e.pdf}\caption{}\label{fig:sleptontree}\end{subfigure} \\
\caption{Diagrams of physics scenarios studied in this paper:
(a) $\chinoonep\chinoonem$ production with $\slepton$-mediated decays into final states with two leptons,
(b) $\chinoonepm\ninotwo$ production with $\slepton$-mediated decays into final states with three leptons,
(c) $\chinoonepm\ninotwo$ production with decays via leptonically decaying $W$ and $Z$ bosons into final states with three leptons,
(d) $\chinoonepm\ninotwo$ production with decays via a hadronically decaying $W$ boson and a leptonically decaying $Z$ boson into final states with two leptons and two jets, and
(e) slepton pair production with decays into final states with two leptons.
}
\label{fig:TreeDiagramsPhysScenarios}
\end{figure}
Events are recorded using triggers requiring the presence of at least two leptons and assigned to one of three mutually exclusive analysis channels depending on the
lepton and jet multiplicity. The 2$\ell$+0jets channel targets chargino- and slepton-pair production,
the 2$\ell$+jets channel targets chargino-neutralino production with gauge-boson-mediated decays,
and the 3$\ell$ channel targets chargino-neutralino production with slepton- or gauge-boson-mediated decays.
For each channel, a set of signal regions (SR), defined in Section~\ref{sec:analysis}, use requirements on \MET and other kinematic quantities, which are
optimized for different SUSY models and sparticle masses. The analyses employ ``inclusive'' SRs to quantify significance without assuming a particular
signal model and to exclude regions of SUSY model parameter space, as well as sets of orthogonal ``exclusive'' SRs that are considered simultaneously
during limit-setting to improve the exclusion sensitivity.
\section{ATLAS detector}
\label{sec:detector}
The ATLAS experiment~\cite{PERF-2007-01} is a multi-purpose particle detector with a forward-backward symmetric cylindrical
geometry and nearly $4\pi$ coverage in solid angle.\footnote{ATLAS uses
a right-handed coordinate system with its origin at the nominal
interaction point (IP) in the centre of the detector and the
$z$-axis along the beam direction. The $x$-axis points from the IP to the
centre of the LHC ring, and the $y$-axis points upward. Cylindrical
coordinates ($r$, $\phi$) are used in the transverse plane, $\phi$
being the azimuthal angle around the beam direction. The pseudorapidity
is defined in terms of the polar angle $\theta$ as $\eta = -\ln
\tan(\theta/2)$.
Angular distance is measured in units of $\Delta R \equiv \sqrt{ (\Delta \eta)^2 + (\Delta \phi)^2 }$.
The transverse momentum, \pt, and energy, \et, are defined with respect to the beam axis ($x$--$y$ plane).
}
The interaction point is surrounded by an inner detector (ID), a
calorimeter system, and a muon spectrometer.
The ID provides precision tracking of charged particles for
pseudorapidities $|\eta| < 2.5$ and is surrounded by a superconducting solenoid providing a \SI{2}{T} axial magnetic field.
The ID consists of silicon pixel and microstrip detectors inside a
transition radiation tracker.
One significant upgrade for the $\sqrt{s}=13$~TeV running period is the installation of the
insertable B-layer~\cite{ATLAS-TDR-19}, an additional pixel layer close to the interaction point which provides high-resolution hits at small radius to improve the tracking performance.
In the pseudorapidity region $|\eta| < 3.2$, high-granularity lead/liquid-argon (LAr)
electromagnetic (EM) sampling calorimeters are used.
A steel/scintillator tile calorimeter measures hadron energies for
$|\eta| < 1.7$.
The endcap and forward regions, spanning $1.5<|\eta| <4.9$, are
instrumented with LAr calorimeters, for both the EM and hadronic
measurements.
The muon spectrometer consists of three large superconducting toroids
with eight coils each,
and a system of trigger and precision-tracking chambers,
which provide triggering and tracking capabilities in the
ranges $|\eta| < 2.4$ and $|\eta| < 2.7$, respectively.
A two-level trigger system is used to select events \cite{Aaboud:2016leb}. The first-level
trigger is implemented in hardware and uses a subset of the detector
information. This is followed by the software-based high-level trigger,
which runs offline reconstruction and
calibration software, reducing the event rate to about \SI{1}{kHz}.
\section{Data and simulated event samples}
\label{sec:dataMCsamples}
This analysis uses proton--proton collision data delivered by the LHC at $\sqrt{s}=13\TeV$ in 2015 and 2016.
After fulfilling data-quality requirements, the data sample amounts to an integrated luminosity of 36.1~fb$^{-1}$. This value is derived using a methodology similar to that detailed in Ref.~\cite{DAPR-2013-01}, from a calibration of the luminosity scale using $x$--$y$ beam-separation scans performed in August 2015 and May 2016.
Various samples of Monte Carlo (MC) simulated events are used to model the SUSY signal and help
in the estimation of the SM backgrounds. The samples include an ATLAS detector simulation~\cite{SOFT-2010-01}, based on {\sc Geant4}~\cite{Agostinelli:2002hh}, or a fast simulation~\cite{SOFT-2010-01} that uses a parameterization of the calorimeter response~\cite{ATL-PHYS-PUB-2010-013} and {\sc Geant4} for the other parts of the detector. The simulated events are reconstructed in the same manner as the data.
Diboson processes were simulated with the \SHERPA~v2.2.1 event
generator~\cite{Gleisberg:2008ta, ATL-PHYS-PUB-2016-002}
and normalized using next-to-leading-order (NLO) cross-sections~\cite{diboson1,diboson2}. The matrix
elements containing all diagrams with four electroweak vertices with
additional hard parton emissions were calculated with {\sc Comix}~\cite{Gleisberg:2008fv} and
virtual QCD corrections were calculated with {\sc OpenLoops}~\cite{Cascioli:2011va}. Matrix element
calculations were merged with the \SHERPA parton shower~\cite{Schumann:2007mg} using the
ME+PS@NLO prescription~\cite{Hoeche:2012yf}. The NNPDF3.0 NNLO parton distribution function (PDF) set~\cite{Ball:2014uwa} was used in
conjunction with dedicated parton shower tuning developed by the
\SHERPA authors.
The fully leptonic channels were
calculated at NLO in the strong coupling constant with up to one additional parton for $4\ell$ and
$2\ell+2\nu$, at NLO with no additional parton for $3\ell+\nu$, and at leading order (LO) with up to three additional partons.
Processes with one of the
bosons decaying hadronically and the other leptonically were calculated
with up to one additional parton at NLO and up to three additional
partons at LO.
Diboson processes with six electroweak vertices, such as
same-sign $W$ boson production in association with two
jets, $W^\pm W^\pm jj$, and triboson processes were simulated as above
with \SHERPA~v2.2.1 using the NNPDF3.0 PDF set. Diboson
processes with six vertices were calculated at LO with up to one additional parton.
Fully leptonic triboson processes ($WWW$, $WWZ$, $WZZ$ and $ZZZ$) were
calculated at LO with up to two additional partons and at NLO for the
inclusive processes and normalized using NLO cross-sections.
Events containing \Zboson bosons and associated jets ($Z/\gamma^*$+jets, also referred to as $Z$+jets in the following) were also produced using the \SHERPA v2.2.1 generator with massive $b/c$-quarks to improve the treatment of the associated production of $\Zboson$ bosons with jets containing $b$- and $c$-hadrons~\cite{ATL-PHYS-PUB-2016-003}. Matrix elements were calculated with up to two additional partons at NLO and up to four additional partons at LO, using {\sc Comix}, {\sc OpenLoops}, and \SHERPA parton shower with ME+PS@NLO in a way similar to that described above. A global $K$-factor was used to normalize the $Z$+jets events to the next-to-next-to-leading-order (NNLO) QCD cross-sections~\cite{Catani:2009sm}.
For the production of \ttbar and single top quarks in the $Wt$ channel, the \POWHEGBOX v2~\cite{Re:2010bp,Frixione:2007nw} generator with the CT10 PDF set~\cite{CT10pdf} was used, as discussed in Ref.~\cite{ATL-PHYS-PUB-2016-004}.
The top quark mass was set at 172.5~GeV for all MC samples involving top quark production. The \ttbar events were normalized using the NNLO+next-to-next-to-leading-logarithm (NNLL) QCD~\cite{Czakon:2011xx} cross-section, while the cross-section for single-top-quark
events was calculated at NLO+NNLL~\cite{Kidonakis:2010ux}.
Samples of $t\bar{t} V$ (with $V=W$ and $Z$, including non-resonant $Z/\gamma^*$ contributions)
and $t\bar{t}WW$ production were generated at LO with \MGMCatNLO v2.2.2~\cite{Alwall:2014hca} interfaced to \PYTHIA 8.186~\cite{Sjostrand:2007gs} for parton showering, hadronization and the description of the underlying event, with up to two ($\ttbar W$), one ($\ttbar Z$) or no ($\ttbar WW$) extra partons included in the matrix element,
as described in Ref.~\cite{ATL-PHYS-PUB-2016-005}.
\MADGRAPH was also used to simulate the $tZ$, $\ttbar\ttbar$ and $\ttbar t$ processes.
A set of tuned parameters called the {\sc A14} tune~\cite{ATL-PHYS-PUB-2014-021} was used together with the {\sc NNPDF2.3LO} PDF set~\cite{Ball:2012cx}.
The $\ttbar W$, $\ttbar Z$, $\ttbar WW$ and $\ttbar\ttbar$ events were normalized using their NLO cross-section~\cite{ATL-PHYS-PUB-2016-005}
while the generator cross-section was used for $tZ$ and $\ttbar t$.
Higgs boson production processes (including gluon--gluon fusion, associated $VH$ production and vector-boson fusion) were generated
using \POWHEGBOX v2~\cite{Alioli:2010xd} and \PYTHIA 8.186 and normalized using cross-sections calculated at NNLO
with soft gluon emission effects added at NNLL accuracy~\cite{deFlorian:2016spz}, whilst $\ttbar H$
events were produced using \MGMCatNLO 2.3.2 + \Herwigpp~\cite{Bahr:2008pv} and normalized using the NLO cross-section~\cite{ATL-PHYS-PUB-2016-005}.
All samples assume a Higgs boson mass of 125 GeV.
The SUSY signal processes were generated from LO matrix elements with up to two extra partons,
using the \MADGRAPH v2.2.3 generator interfaced to \PYTHIA 8.186 with the {\sc A14} tune for the modelling of the SUSY decay chain,
parton showering, hadronization and the description of the underlying event.
Parton luminosities were provided by the {\sc NNPDF2.3LO} PDF set.
Jet--parton matching was realized following the CKKW-L prescription~\cite{Lonnblad:2011xx}, with a matching scale set to one quarter of the pair-produced superpartner mass.
Signal cross-sections were calculated at NLO,
with soft gluon emission effects added at next-to-leading-logarithm (NLL) accuracy~\cite{Beenakker:1996ch,Kulesza:2008jb,Kulesza:2009kq,Beenakker:2009ha,Beenakker:2011fu}.
The nominal cross-section and its uncertainty were taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales,
as described in Ref.~\cite{Borschensky:2014cia}.
The cross-section for $\chinoonep\chinoonem$ production, each with a mass of $600$~GeV, is $9.50\pm0.91$~fb,
while the cross-section for $\chinoonepm\ninotwo$ production, each with a mass of $800$~GeV, is $4.76\pm0.56$~fb.
In all MC samples, except those produced by \SHERPA, the {\sc EvtGen}~v1.2.0 program~\cite{Lange:2001uf} was used to model the properties of $b$- and $c$-hadron decays.
To simulate the effects of additional $pp$ collisions per bunch crossing (pile-up),
additional interactions were generated using the soft QCD processes of \PYTHIA 8.186
with the A2 tune~\cite{ATLAS-PHYS-PUB-2012-003} and the MSTW2008LO PDF set~\cite{Martin:2009iq},
and overlaid onto the simulated hard-scatter event.
The Monte Carlo samples were reweighted so that the distribution of the number of pile-up interactions matches the distribution in data.
\section{Event reconstruction and preselection}
\label{sec:objects}
Events used in the analysis were recorded during stable data-taking conditions and must have a reconstructed primary vertex~\cite{ATL-PHYS-PUB-2015-026} with at least two
associated tracks with $\pt>400~\MeV$. The primary vertex of an event is identified as the vertex with the highest $\Sigma \pt^2$ of associated tracks.
Two identification criteria are defined for the objects used in these analyses, referred to as ``baseline'' and ``signal'' (with the signal objects being a subset of the baseline ones).
The former are defined to disambiguate between overlapping physics objects and to perform data-driven estimations of non-prompt leptonic backgrounds (discussed in Section~\ref{sec:backgrounds})
while the latter are used to construct kinematic and multiplicity discriminating variables needed for the event selection.
Baseline electrons are reconstructed from isolated electromagnetic
calorimeter energy deposits matched to ID tracks and are required to have $|\eta|<2.47$,
$\pT>\SI{10}{\GeV}$,
and to pass a loose likelihood-based identification requirement~\cite{ATL-PHYS-PUB-2015-041,PERF-2016-01}.
The likelihood input variables include measurements of calorimeter shower shapes and track properties from the ID.
Baseline muons are reconstructed in the region $|\eta|<2.7$
from muon spectrometer tracks matching ID tracks.
All muons must have $\pT>\SI{10}{\GeV}$ and must pass the ``medium identification'' requirements defined in Ref.~\cite{PERF-2015-10},
based on selection of the number of hits
and curvature measurements in the ID and muon spectrometer systems.
Jets are reconstructed with the anti-$k_t$
algorithm~\cite{Cacciari:2008} as implemented in the FastJet package~\cite{Cacciari:2011ma}, with radius parameter $R=0.4$, using three-dimensional energy
clusters in the calorimeter~\cite{Aad:2016upy} as input.
Baseline jets must have $\pT>\SI{20}{\GeV}$ and $|\eta|<4.5$ and signal jets have the tighter requirement of $|\eta|<2.4$.
Jet energies are calibrated as described in Refs.~\cite{ATL-PHYS-PUB-2015-015,Aaboud:2017jcu}.
In order to reduce the effects of pile-up, jets with $\pt<\SI{60}{\GeV}$ and $|\eta|<2.4$ must have
a significant fraction of their associated tracks compatible with originating from the primary vertex, as defined by the jet vertex tagger~\cite{PERF-2014-03}.
Furthermore, for all jets the expected average energy contribution from pile-up is subtracted according to the jet area~\cite{Cacciari:2008gn,PERF-2014-03}.
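Schematically, and as an illustration of the cited area-based method rather than an additional correction, this subtraction amounts to
\[
\pt^{\mathrm{corr}} \simeq \pt - \rho \, A ,
\]
where $\rho$ is the median pile-up transverse-momentum density of the event and $A$ is the area of the jet.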
Events are discarded if they contain any jet that is judged by basic quality criteria to be detector noise or non-collision background.
Identification of jets containing $b$-hadrons ($b$-jets), so called $b$-tagging, is performed with the MV2c10 algorithm,
a multivariate discriminant making use of track impact parameters
and reconstructed secondary vertices~\cite{ATL-PHYS-PUB-2016-012,PERF-2012-04}.
A requirement is chosen corresponding to a 77\% average efficiency
obtained for $b$-jets in simulated $\ttbar$ events.
The corresponding rejection factors against jets originating from $c$-quarks, from $\tau$-leptons, and from light quarks and gluons in the same sample
at this working point are 6, 22 and 134, respectively.
Baseline photon candidates are required to meet the ``tight'' selection criteria of Ref.~\cite{PERF-2013-04} and satisfy $\pt>25$~\GeV\ and $|\eta|<2.37$, excluding the transition region $1.37<|\eta|<1.52$,
where the calorimeter performance is degraded.
After object identification, an ``object-removal procedure'' is performed on all baseline objects to remove possible double-counting in the reconstruction:
\vspace{-5pt}
\begin{enumerate}[nolistsep]
\item Any electron sharing an ID track with a muon is removed.
\item If a $b$-tagged jet (identified using the 85\% efficiency working point of the MV2c10 algorithm) is within $\Delta R = 0.2$ of an electron candidate, the electron is rejected, as it is likely to be from a semileptonic
$b$-hadron decay; if the jet within $\Delta R = 0.2$ of the electron is not $b$-tagged, the jet itself is discarded, as it likely originates from an electron-induced shower.
\item Electrons within $\Delta R=0.4$ of a remaining jet candidate are discarded, to further suppress electrons from semileptonic decays of $b$- and $c$-hadrons.
\item Jets with a nearby muon that carries a significant fraction of the transverse momentum of the jet
($\pt^\mu > 0.7 \sum \pt^{\textrm{jet tracks}}$, where $\pt^\mu$ and $\pt^{\textrm{jet tracks}}$ are the transverse momenta of the muon and the tracks associated with the jet, respectively) are discarded either if the candidate muon is within $\Delta R=0.2$ of the jet or if the muon is matched to a track associated with the jet. Only jets with fewer than three associated tracks can be discarded in this step.
\item Muons within $\Delta R=0.4$ of a remaining jet candidate are discarded to suppress muons from semileptonic decays of $b$- and $c$-hadrons.
\end{enumerate}
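As an illustration only, and not the implementation used in this analysis, the $\Delta R$-based steps of the procedure above can be sketched as follows; the object attributes and function names are purely illustrative, and steps 2 and 4 are omitted for brevity:
\begin{verbatim}
import math

def delta_r(a, b):
    """Angular distance between two objects with eta and phi attributes."""
    dphi = math.remainder(a.phi - b.phi, 2.0 * math.pi)
    return math.hypot(a.eta - b.eta, dphi)

def remove_overlaps(electrons, muons, jets):
    """Simplified sketch of the baseline object-removal steps."""
    # Step 1: drop electrons that share an ID track with a muon.
    electrons = [e for e in electrons
                 if not any(e.track_id == m.track_id for m in muons)]
    # Step 3: drop electrons within dR < 0.4 of a remaining jet.
    electrons = [e for e in electrons
                 if not any(delta_r(e, j) < 0.4 for j in jets)]
    # Step 5: drop muons within dR < 0.4 of a remaining jet.
    muons = [m for m in muons
             if not any(delta_r(m, j) < 0.4 for j in jets)]
    return electrons, muons, jets
\end{verbatim}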
Signal electrons must satisfy a ``medium'' likelihood-based identification requirement~\cite{ATL-PHYS-PUB-2015-041} and the track associated with the electron must have a significance of the transverse impact parameter relative to the reconstructed primary vertex, $d_0$, of $\vert d_0\vert/\sigma(d_0) < 5$, with $\sigma(d_0)$ being the uncertainty in $d_0$. In addition, the longitudinal impact parameter (again relative to the reconstructed primary vertex), $z_0$, must satisfy $\vert z_0 \sin\theta\vert < 0.5$~mm. Similarly, signal muons must satisfy the requirements of $\vert d_0\vert/\sigma(d_0) < 3$, $\vert z_0 \sin\theta\vert < 0.5$~mm, and additionally have $|\eta|<2.4$. Isolation requirements are also applied to both the signal electrons and muons to reduce the contributions of ``fake'' or non-prompt leptons, which originate from misidentified hadrons, photon conversions, and hadron decays. These $\pt$- and $\eta$-dependent requirements use track- and calorimeter-based information and have efficiencies in $Z\rightarrow\ee$ and $Z\rightarrow\mumu$ events that rise from 95\% at 25 GeV to 99\% at 60 GeV.
The missing transverse momentum \pTmiss, with magnitude \MET, is the negative vector sum of the transverse momenta of
all identified physics objects (electrons, photons, muons, jets) and an additional soft term. The soft term is constructed
from all tracks that are not associated with any physics object and that are associated with the primary vertex, to
suppress contributions from pile-up interactions. The \MET value is adjusted for the calibration of the jets and the
other identified physics objects above~\cite{ATL-PHYS-PUB-2015-027}.
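Schematically, this definition reads
\[
\pTmiss \;=\; -\Bigl(\;\sum_{\textrm{electrons}}\mathbf{p}_{\textrm{T}}
\;+\; \sum_{\textrm{photons}}\mathbf{p}_{\textrm{T}}
\;+\; \sum_{\textrm{muons}}\mathbf{p}_{\textrm{T}}
\;+\; \sum_{\textrm{jets}}\mathbf{p}_{\textrm{T}}
\;+\; \sum_{\textrm{soft tracks}}\mathbf{p}_{\textrm{T}}\Bigr),
\qquad \MET = \bigl|\pTmiss\bigr| ,
\]
where the last sum runs over the tracks entering the soft term described above.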
Events considered in the analysis must pass a trigger selection requiring either two electrons, two muons or an electron plus a muon.
The trigger-level thresholds on the \pt value of the leptons involved in the trigger decision are in the range 8--22 GeV and are looser than
those applied offline to ensure that trigger efficiencies are constant in the relevant phase space.
Events containing a photon and jets are used to estimate the $Z$+jets background in events with two leptons and jets. These events are selected with a set of prescaled single-photon
triggers with $\pt$ thresholds in the range 35--100~\GeV\ and an unprescaled single-photon trigger with threshold $\pt=140$ GeV.
Signal photons in this control sample must
have $\pt>37$~\GeV\ to be in the efficiency plateau of the lowest-threshold single-photon trigger,
fall outside the barrel-endcap transition region defined by $1.37 < |\eta| < 1.52$,
and pass ``tight'' selection criteria described in Ref.~\cite{PERF-2013-05}, as well as
\pt- and $\eta$-dependent requirements on both track- and calorimeter-based isolation.
Simulated events are corrected to account for small differences in the signal lepton trigger, reconstruction, identification, isolation, as well as
$b$-tagging efficiencies between data and MC simulation.
\section{Signal regions}
\label{sec:analysis}
In order to search for the electroweak production of supersymmetric particles, three different search channels that target different SUSY processes are defined:
\begin{itemize}
\item \textbf{2$\ell$+0jets channel: } targets $\chinoonep\chinoonem$ and $\slepton\slepton$ production (shown in Figures~\ref{fig:C1C1tree} and~\ref{fig:sleptontree}) in signal regions with a jet veto and defined using the ``stransverse mass'' variable, $m_{\rm{T2}}$~\cite{Lester:1999tx,Barr:2003rg}, and the dilepton invariant mass \mll;
\item \textbf{2$\ell$+jets channel: } targets $\chinoonepm\ninotwo$ production with decays via gauge bosons (shown in Figure~\ref{fig:C1N2treeWZj}) into two same-flavour opposite-sign (SFOS) leptons (from the $Z$ boson) and at least two jets (from the $W$ boson);
\item \textbf{3$\ell$ channel: } targets $\chinoonepm\ninotwo$ production with decays via intermediate $\slepton$ or gauge bosons into three-lepton final states (shown in Figures~\ref{fig:C1N2tree} and~\ref{fig:C1N2treeWZ}).
\end{itemize}
In each channel, inclusive and/or exclusive signal regions (SRs) are defined that require exactly two or three signal leptons, with vetoes on any additional baseline leptons.
In the 2$\ell$+0jets channel only, this additional baseline lepton veto is applied before considering overlap-removal. The leading and sub-leading
leptons are required to have \pt$>25$ GeV and 20 GeV, respectively; however, in the 2$\ell$+jets and 3$\ell$ channels, tighter lepton \pt\
requirements are applied to the sub-leading leptons.
\subsection{Signal regions for 2$\ell$+0jets channel}
In the 2$\ell$+0jets channel the leptons are required to be of opposite sign and events are separated into ``same flavour'' (SF) events (corresponding to dielectron, $e^{+}e^{-}$, and dimuon, $\mu^{+}\mu^{-}$, events) and ``different flavour'' (DF) events (electron--muon, $e^{\pm}\mu^{\mp}$). This division is driven by the different background compositions in the two classes of events. All events used in the SRs are required to have a dilepton invariant mass $m_{\ell\ell}>40$ GeV and
not contain any $b$-tagged jets with \pt$>20$ GeV or non-$b$-tagged jets with \pt$>60$~GeV.
After this preselection, exclusive signal regions are used to maximize exclusion sensitivity across the simplified model parameter space for $\chinoonep\chinoonem$ and $\slepton\slepton$ production.
In the SF regions a two-dimensional binning in \mttwo\ and $m_{\ell\ell}$ is used as high-$m_{\ell\ell}$ requirements provide strong suppression of the $Z$+jets background,
whereas in the DF regions, where the $Z$+jets background is negligible, a one-dimensional binning in \mttwo\ is sufficient. The stransverse mass \mttwo\ is defined as:
\[\mttwo = \min_{\qTvec}\left[\max\left(\mT(\pTell{1},\qTvec),\mT(\pTell{2},\pTmiss-\qTvec)\right)\right],\]
where $\pTell{1}$ and $\pTell{2}$ are the transverse momentum vectors of the two leptons,
and $\qTvec$ is a transverse momentum vector that minimizes the larger
of $\mT(\pTell{1},\qTvec)
$ and $\mT(\pTell{2},\pTmiss-\qTvec)$, where:
\[
\mT(\pTvec,\qTvec) = \sqrt{2(\pT\qT-\pTvec\cdot\qTvec)}.
\]
For SM backgrounds of $\ttbar$ and $WW$ production in which the missing transverse momentum and the pair of selected leptons originate from two $W\to\ell\nu$ decays and all momenta are accurately measured,
the \mttwo\ value must be less than the $W$ boson mass $m_W$, and requiring the \mttwo\ value to significantly exceed $m_W$ thus strongly suppresses these backgrounds
while retaining high efficiency for many SUSY signals.
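As a purely illustrative numerical sketch of this definition (a coarse brute-force scan over the splitting of \pTmiss, rather than the dedicated \mttwo\ algorithms used in practice; all names and the grid choice are assumptions made for the sketch, with momenta in GeV):
\begin{verbatim}
import numpy as np

def transverse_mass(p, q):
    """mT of two massless transverse-momentum vectors p and q (2D arrays, GeV)."""
    pt, qt = np.linalg.norm(p), np.linalg.norm(q)
    return np.sqrt(max(2.0 * (pt * qt - np.dot(p, q)), 0.0))

def mt2_scan(p1, p2, met, grid=np.linspace(-300.0, 300.0, 121)):
    """Coarse brute-force approximation of mT2: scan the transverse vector qT
    that splits the missing momentum and keep the smallest maximum mT."""
    best = float("inf")
    for qx in grid:
        for qy in grid:
            q = np.array([qx, qy])
            best = min(best, max(transverse_mass(p1, q),
                                 transverse_mass(p2, met - q)))
    return best
\end{verbatim}
For a dileptonic $t\bar{t}$- or $WW$-like event with accurately measured momenta, such a scan returns a value at or below approximately $m_W$, up to the granularity of the grid.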
When producing model-dependent exclusion limits in the $\chinoonep\chinoonem$ simplified models, all signal regions are statistically combined, whereas only the
same-flavour regions are used when probing $\slepton\slepton$ production. In addition, a set of inclusive signal regions is also defined, and these are used to provide a more model-independent test for an excess of events. The definitions of both the exclusive and inclusive signal regions are provided in Table~\ref{tab:exclusiveSR}.
\begin{table}[t]
\centering
\caption{The definitions of the exclusive and inclusive signal regions for the 2$\ell$+0jets channel. Relevant kinematic variables are defined in the text.
The bins labelled ``DF'' or ``SF'' refer to signal regions with different-flavour or same-flavour lepton pair combinations, respectively.
\label{tab:exclusiveSR}}
\begin{tabular}{|c | c | c| c|}
\hline
\multicolumn{4}{|c|}{{ \bf{2$\ell$+0jets exclusive signal region definitions }}} \\
\hline
\mttwo\ [GeV] & $m_{\ell\ell}$ [GeV] & SF bin & DF bin \\
\hline
\multirow{4}{*}{ 100--150} & 111--150 & SR2-SF-a & \multirow{4}{*}{SR2-DF-a} \\
& 150--200 & SR2-SF-b & \\
& 200--300 & SR2-SF-c & \\
& $>300$ & SR2-SF-d & \\
\hline
\multirow{4}{*}{ 150--200} & 111--150 & SR2-SF-e & \multirow{4}{*}{SR2-DF-b} \\
& 150--200 & SR2-SF-f & \\
& 200--300 & SR2-SF-g & \\
& $>300$ & SR2-SF-h & \\
\hline
\multirow{4}{*}{ 200--300} & 111--150 & SR2-SF-i & \multirow{4}{*}{SR2-DF-c} \\
& 150--200 & SR2-SF-j & \\
& 200--300 & SR2-SF-k & \\
& $>300$ & SR2-SF-l & \\
\hline
$>300$ & $>111$ & SR2-SF-m & SR2-DF-d \\
\hline
\multicolumn{4}{|c|}{{\bf{ 2$\ell$+0jets inclusive signal region definitions }}} \\
\hline
$>$ 100 & $>$ 111 & SR2-SF-loose & -\\
$>$ 130 & $>$ 300 & SR2-SF-tight & - \\
\hline
$>$ 100 & $>$ 111 &- & SR2-DF-100 \\
$>$ 150 & $>$ 111 &- & SR2-DF-150\\
$>$ 200 & $>$ 111 &- & SR2-DF-200\\
$>$ 300 & $>$ 111 &- & SR2-DF-300 \\
\hline
\end{tabular}
\end{table}
\subsection{Signal regions for 2$\ell$+jets channel}
In the 2$\ell$+jets channel, two inclusive signal regions differing only in the \MET\ requirement, denoted SR2-int and SR2-high,
are used to target intermediate and large mass splittings between the $\chinoonepm /\ninotwo$ chargino/neutralino and the
$\ninoone$ neutralino. In addition to the preselection used in the 2$\ell$+0jets channel, with the exception of the veto requirement on non-$b$-tagged jets, the sub-leading lepton is also required to have \pt$>25$ GeV and events must have at least two jets, with the leading two jets satisfying \pt$>30$ GeV. The $b$-jet veto is applied in the same way as in the 2$\ell$+0jets channel.
Several kinematic requirements are applied to select two leptons consistent with an on-shell $Z$ boson and two jets consistent with a $W$ boson. A tight requirement of $\mttwo>100$~GeV
is used to suppress the $t\bar{t}$ and $WW$ backgrounds
and $\MET>150~(250)$ GeV is required for SR2-int (SR2-high).
An additional region in the 2$\ell$+jets channel, denoted SR2-low, is optimized for the region of parameter space where the mass splitting
between the $\chinoonepm /\ninotwo$ and the $\ninoone$ is similar to the $Z$ boson mass and the signal becomes kinematically similar to the diboson ($VV$)
backgrounds.
It is split into two orthogonal subregions for performing background estimation and validation, and these are merged when presenting the results in Section~\ref{sec:result}. SR2-low-2J requires exactly two jets, with $\pT>30$ GeV, that are both assumed to originate from the $W$ boson, while SR2-low-3J requires 3--5 signal jets (with the leading two jets satisfying \pt$>30$ GeV) and assumes the $\chinoonepm\ninotwo$ system recoils against initial-state-radiation (ISR) jet(s). In the latter case, the two jets originating from the $W$ boson are selected to be those closest in $\Delta \phi$ to the $Z(\rightarrow\ell\ell) + \MET$ system. This is different from SR2-int and SR2-high, where the two jets with the highest \pt in the event are used to define the $W$ boson candidate. The rest of the jets that are not associated with the $W$ boson are collectively defined as ISR jets.
All regions use variables, including angular distances and the $W$ and $Z$ boson transverse momenta, to select the signal topologies of interest.
The definitions of the signal regions in the 2$\ell$+jets channel are summarized in Table~\ref{tab:2L2J_SR_def}.
\begin{table}[t]
\centering
\caption{
Signal region definitions used for the 2$\ell$+jets channel. Relevant kinematic variables are defined in the text. The symbols $W$ and $Z$ correspond to the reconstructed $W$ and $Z$ bosons in the final state. The $Z$ boson is always reconstructed from the two leptons, whereas the $W$ boson is reconstructed from the two jets leading in \pt for SR2-int, SR2-high and the 2-jets channel of SR2-low, whilst for the 3--5 jets channel of SR2-low it is reconstructed from the two jets which are closest in $\Delta \phi$ to the $Z$ ($\rightarrow\ell\ell$) + \MET system. The $\Delta R_{(jj)}$ and $m_{jj}$ variables are calculated using the two jets assigned to the $W$ boson. ISR refers to the vectorial sum of the initial-state-radiation jets in the event (i.e. those not used in the reconstruction of the $W$ boson) and jet1 and jet3 refer to the leading and third leading jet respectively. The variable $n_{\text{non-} b \text{-tagged jets}}$ refers to the number of jets with $\pT>30$ GeV that do not satisfy the $b$-tagging criteria.
}
\label{tab:2L2J_SR_def}
\begin{tabular}{|l|c|c|c|c| }
\hline
\multicolumn{5}{|c|}{\bf {2$\ell$+jets signal region definitions}} \\
\hline
& SR2-int & SR2-high & SR2-low-2J & SR2-low-3J \\
\hline
$n_{\text{non-} b \text{-tagged jets}}$ & \multicolumn{2}{c|}{$\geq 2$} & 2 & 3--5 \\[0.05cm]
$m_{\ell\ell}$ [GeV] & \multicolumn{2}{c|}{81--101} & 81--101 & 86--96 \\[0.05cm]
$m_{jj}$ [GeV] & \multicolumn{2}{c|}{70--100} & 70--90 & 70--90 \\[0.05cm]
\MET [GeV] & $>150$ &$>250$ & $>100$ & $>100$\\[0.05cm]
$p^{Z}_{\rm T}$ [GeV] & \multicolumn{2}{c|}{$>80$} & $>60$ & $>40$ \\[0.05cm]
$p^{W}_{\rm T}$ [GeV] & \multicolumn{2}{c|}{$>100$} & & \\[0.05cm]
$m_{\rm T2}$ [GeV] & \multicolumn{2}{c|}{$>100$} & & \\[0.05cm]
$\Delta R_{(jj)}$ & \multicolumn{2}{c|}{$<1.5$} & & $<2.2$ \\[0.05cm]
$\Delta R_{(\ell\ell)}$ & \multicolumn{2}{c|}{$<1.8$} & & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,Z)}$ & \multicolumn{2}{c|}{} & $<0.8$ & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,W)}$ & \multicolumn{2}{c|}{0.5--3.0 } & $>1.5$ & $<2.2$ \\[0.05cm]
$\MET/p^{Z}_{\rm T}$ & \multicolumn{2}{c|}{} & 0.6--1.6 & \\[0.05cm]
$\MET/p^{W}_{\rm T}$ & \multicolumn{2}{c|}{} & $<0.8$ & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,\rm{ISR})} $ & \multicolumn{2}{c|}{} & & $>2.4$\\[0.05cm]
$\Delta\phi_{(\pTmiss ,\rm{jet1})}$ & \multicolumn{2}{c|}{} & & $>2.6$ \\[0.05cm]
$\MET/p^{\rm {ISR}}_{\rm T}$ & \multicolumn{2}{c|}{} & & 0.4--0.8 \\[0.05cm]
$|\eta(Z)|$ & \multicolumn{2}{c|}{} & & $<1.6$ \\[0.05cm]
$p^{\rm{jet3}}_{\rm T}$ [GeV] & \multicolumn{2}{c|}{} & & $>30$ \\[0.05cm]
\hline
\end{tabular}
\end{table}
\FloatBarrier
\subsection{Signal regions for 3$\ell$ channel}
The 3$\ell$ channel targets $\chinoonepm\ninotwo$ production and uses kinematic variables such as \MET and the transverse mass $\mT$, which were used in the Run 1 analysis~\cite{SUSY-2013-12}.
Events are required to have exactly three signal leptons and no additional baseline leptons, as well as zero $b$-tagged jets with $\pT>20$ GeV. In addition, two of the leptons must form an SFOS pair (as expected in $\ninotwo\rightarrow \ell^{+}\ell^{-}\ninoone$ decays).
To resolve ambiguities when multiple SFOS pairings are present, the transverse mass is calculated using the unpaired lepton and \pTmiss\ for each possible SFOS pairing, and the lepton that yields the minimum transverse mass is assigned to the $W$ boson. This transverse mass value is denoted by
$m_{\rm{T}}^{\rm{min}}$, and is
used alongside \MET, jet multiplicity (in the gauge-boson-mediated scenario) and other relevant kinematic variables to define exclusive signal regions that have sensitivity to $\slepton$-mediated and gauge-boson-mediated decays. The definitions of these exclusive regions are provided in Table~\ref{tab:threelepSRs}. The bins denoted ``slep-a,b,c,d,e'' target $\slepton$-mediated decays and consequently have a veto on SFOS pairs with an invariant mass consistent with the $Z$ boson (this suppresses the $WZ$ background). The invariant mass of the SFOS pair, \mll, the magnitude of the missing transverse momentum, \MET, and the \pT\ value of the third leading lepton, $p_{\textrm{T}}^{\ell_{3}}$, are used to define the SR bins.
Conversely, the bins denoted ``WZ-0Ja,b,c'' and ``WZ-1Ja,b,c'' target gauge-boson-mediated decays and thus require the SFOS pair to have an invariant mass consistent with an on-shell $Z$ boson. The 0-jet and $\geq$ 1-jet channels are considered separately and the regions are
binned in $\mT^{\rm{min}}$ and \MET.
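As an illustration only (not the analysis implementation; the lepton representation and function names are assumptions made for the sketch), the $\mT^{\rm{min}}$ assignment described above could be written as:
\begin{verbatim}
import itertools, math

def transverse_mass(lep, met):
    """mT built from a lepton and the missing transverse momentum (pt in GeV)."""
    return math.sqrt(2.0 * lep["pt"] * met["pt"]
                     * (1.0 - math.cos(lep["phi"] - met["phi"])))

def mt_min(leptons, met):
    """Minimum mT over all SFOS pairings of exactly three leptons; the lepton
    left unpaired by the minimizing combination is the W-boson candidate."""
    best = None
    for i, j in itertools.combinations(range(3), 2):
        a, b = leptons[i], leptons[j]
        if a["flavour"] == b["flavour"] and a["charge"] * b["charge"] < 0:
            unpaired = ({0, 1, 2} - {i, j}).pop()
            mt = transverse_mass(leptons[unpaired], met)
            if best is None or mt < best:
                best = mt
    return best
\end{verbatim}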
\FloatBarrier
\begin{table}[t]
\centering
\caption{Summary of the exclusive signal regions used in the 3$\ell$ channel. Relevant kinematic variables are defined in the text. The bins labelled ``slep'' target slepton-mediated decays whereas those labelled ``WZ'' target gauge-boson-mediated decays. The variable $n_{\text{non-} b \text{-tagged jets}}$ refers to the number of jets with $\pT>20$ GeV that do not satisfy the $b$-tagging criteria. Values of $p^{\ell_{3}}_{\rm T}$ refer to the \pt\ of the third leading lepton and $\pt^{\rm{jet1}}$ denotes the \pt\ of the leading jet. \label{tab:threelepSRs}}
\begin{tabular}{|c |c |c |c |c |c |c |c | }
\hline
\multicolumn{8}{|c|}{\bf{3$\ell$ exclusive signal region definitions}} \\
\hline
$m_{\rm{SFOS}}$ & \MET & $p^{\ell_{3}}_{\rm T}$ & $n_{\text{non-} b \text{-tagged jets}}$ & $\mT^{\rm{min}}$& $p^{\ell\ell\ell}_{\rm T}$ & $\pt^{\textup{jet1}}$ & Bins \\
$\rm{[GeV]}$ & $\rm{[GeV]}$ & $\rm{[GeV]}$ & & $\rm{[GeV]}$ & $\rm{[GeV]}$ & $\rm{[GeV]}$ & \\
\hline
\multirow{2}{*}{\textless 81.2} & \multirow{2}{*}{$>130$} & 20--30 & \multirow{2}{*}{} & \multirow{2}{*}{$>110$} & \multirow{2}{*}{}& \multirow{2}{*}{} & SR3-slep-a \\
& & $>30$ & & & & & SR3-slep-b \\
\hline
\multirow{3}{*}{\textgreater 101.2} & \multirow{3}{*}{$>130$} & 20--50 &\multirow{3}{*}{} &\multirow{3}{*}{$>110$} & \multirow{3}{*}{}& \multirow{3}{*}{} & SR3-slep-c \\
& & 50--80 & & & & & SR3-slep-d \\
& & $>80$ & & & & & SR3-slep-e\\
\hline
\multirow{3}{*}{81.2--101.2} & 60--120 & & \multirow{3}{*}{0} & \multirow{3}{*}{$>110$} & \multirow{3}{*}{} & \multirow{3}{*}{} & SR3-WZ-0Ja \\
& 120--170 & & & & & & SR3-WZ-0Jb \\
& $>170$ & & & & & & SR3-WZ-0Jc \\
\hline
\multirow{3}{*}{81.2--101.2} & 120--200 & & \multirow{3}{*}{$\geq$ 1} & $>110$ & $<120$ & $>70$ & SR3-WZ-1Ja\\
& \multirow{2}{*}{$>200$} & & & 110--160 & \multirow{2}{*}{} &\multirow{2}{*}{} & SR3-WZ-1Jb\\
& & $>35$ & & $>160$ & & & SR3-WZ-1Jc \\
\hline
\end{tabular}
\end{table}
\section{Background estimation and validation}
\label{sec:backgrounds}
The SM backgrounds can be classified into irreducible backgrounds with prompt leptons and genuine \MET\ from neutrinos,
and reducible backgrounds that contain one or more ``fake'' or non-prompt (FNP) leptons or
where experimental effects (e.g., detector mismeasurement of jets or leptons or imperfect removal of object double-counting) lead to significant ``fake'' \MET.
A summary of the background estimation techniques used in each channel is provided in Table~\ref{tab:bgestimationmethods}.
In the 2$\ell$+0jets and 3$\ell$ channels only, the dominant backgrounds are estimated from MC simulation and normalized in dedicated control regions (CRs)
that are included, together with the SRs, in simultaneous likelihood fits to data, as described further in Section~\ref{sec:result}. In addition, all channels employ validation regions (VRs)
with kinematic requirements that are similar to the SRs but with smaller expected signal-to-background ratios, which are used to validate the background
estimation methodology. In the 2$\ell$+jets channel, the MC modelling of diboson processes is studied in dedicated VRs and found to accurately reproduce data.
\begin{table}[t]
\centering
\caption[]{Summary of the estimation methods used in each search channel. Backgrounds denoted CR have a dedicated control region that is included in a simultaneous likelihood fit to data to extract a data-driven normalization factor that is used to scale the MC prediction. The $\gamma$+jet template method is used in the 2$\ell$+jets channel to provide a data-driven estimate of the $Z$+jets background. Finally,
MC stands for pure Monte Carlo estimation.
}
\label{tab:bgestimationmethods}
\begin{tabular}{| c | c | c | c |}
\hline
\multicolumn{4}{|c|}{ \bf{Background estimation summary}} \\
\hline
Channel & 2$\ell$+0jets & 2$\ell$+jets & 3$\ell$\\
\hline
Fake/non-prompt leptons & \multicolumn{2}{c|}{Matrix method } & Fake-factor method \\
\hline
$t\bar{t} + Wt$ & CR & MC & Fake-factor method \\
$VV$ & CR & MC & CR (WZ-only)\\
$Z$+jets & MC & $\gamma$+jet template & Fake-factor method \\
\hline
Higgs/ $VVV$/ top+$V$ & \multicolumn{3}{c|}{MC}\\
\hline
\end{tabular}
\end{table}
For the 2$\ell$+0jets channel the dominant backgrounds are irreducible processes from SM diboson production ($WW$, $WZ$, and $ZZ$) and dileptonic $t\bar{t}$ and $Wt$ events.
MC simulation is used to predict kinematic distributions for these backgrounds, but the $t\bar{t}$ and diboson backgrounds are then normalized to data in dedicated control regions. For the diboson
backgrounds, SF and DF events are treated separately and two control regions are defined. The first one (CR2-VV-SF) selects SFOS lepton pairs with an invariant mass consistent with the $Z$ boson mass and has a tight requirement of \mttwo\ $>130$ GeV to reduce the $Z$+jets contamination. This region is dominated by $ZZ$ events, with subdominant contributions from $WZ$ and $WW$ events. The DF diboson control region (CR2-VV-DF) selects events with a different-flavour opposite-sign pair and further requires $50<m_{\rm{T2}}<75$ GeV. This region is dominated by $WW$ events, with a subdominant contribution from $WZ$ events. The $t\bar{t}$ control region (CR2-Top) uses DF events with at least one $b$-tagged jet to obtain a high-purity sample of $t\bar{t}$ events. The control region definitions are summarized in Table~\ref{tab:CRdef_2LOS}.
The $Z$+jets and Higgs boson contributions are expected to be small in the 2$\ell$+0jets channel and are estimated directly from MC simulation.
The three control regions are included in a simultaneous profile likelihood fit to the observed data which provides data-driven normalization factors for these backgrounds, as described in Section~\ref{sec:result}. The results are propagated
to the signal regions, and to dedicated VRs that are defined in Table~\ref{tab:CRdef_2LOS}.
The normalization factors returned by the fit for the $t\bar{t}$, VV-DF and VV-SF backgrounds are $0.95\pm 0.03$, $1.06\pm 0.18$ and $0.96\pm 0.11$, respectively. Figures~\ref{figbody:metVRVVSF} and \ref{figbody:mt2VRVVSF} show the \MET and $m_{\mathrm{T2}}$ distributions, respectively, for data and
the estimated backgrounds in VR2-VV-SF with these normalization factors applied.
\begin{table}[t]
\begin{center}
\caption{\label{tab:CRdef_2LOS}
Control region and validation region definitions for the 2$\ell$+0jets channel. The
DF and SF labels refer to different-flavour or same-flavour lepton pair combinations, respectively.
The \pT\ thresholds placed on the requirements for $b$-tagged and non-$b$-tagged jets correspond to 20~GeV and 60~GeV, respectively.
}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{ \bf{2$\ell$+0jets control and validation region definitions}} \\
\hline
Region & CR2-VV-SF & CR2-VV-DF & CR2-Top & VR2-VV-SF (DF) & VR2-Top \\
\hline
Lepton flavour & SF & DF & DF & SF (DF) & DF \\
$n_{\text{non-} b \text{-tagged jets}}$ & 0 & 0 & 0 & 0 & 0 \\
$n_{b \text{-tagged jets}}$ & 0 &0 & $\ge1$ & $0$ & $\ge1$ \\
$|m_{\ell\ell}-m_Z|$ [GeV] & $<20$ & --- & --- & $>20$ (--) & --- \\
$m_{\mathrm{T2}}$ [GeV] & $>130$ & 50--75 & 75--100 & 75--100 & $>100$ \\
\hline
\end{tabular}
\end{center}
\end{table}
In the 2$\ell$+jets channel, the largest background contribution is also from SM diboson production. In addition,
$Z$+jets events can enter the SRs due to fake \MET\ from jet or lepton mismeasurements or genuine \MET\ from neutrinos in semileptonic decays of $b$- or $c$-hadrons. These effects are difficult to model in MC simulation, so instead $\Gjet$ events in data are used to extract the \MET\ shape in $\Zjet$ events, which have a similar topology and \MET\ resolution.
Similar methods have been employed in searches for SUSY in events with two leptons, jets, and large \MET\
in ATLAS~\cite{SUSY-2016-05}
and CMS~\cite{CMS-SUS-11-021,CMS-SUS-14-014}.
The \MET shape is extracted from a data control sample of $\Gjet$ events
using a set of single-photon triggers and weighting each event by the trigger prescale factor.
Corrections to account for differences in the $\gamma$ and $Z$ boson $\pt$ distributions, as well as
different momentum resolutions for electrons, muons and photons, are applied.
Backgrounds of $W\gamma$ and $Z\gamma$ production, which contain a photon and genuine \MET\ from neutrinos, are subtracted using MC samples that are normalized to data in a $V\gamma$ control region containing a selected lepton and photon.
For each SR separately, the \MET\ shape is then normalized to data in a corresponding control region with $\MET<100$ GeV but all other requirements the same as in the SR.
To model quantities that depend on the individual lepton momenta, an \mll\ value is assigned to each \Gjet\ event by sampling from \mll\ distributions
(parameterized as functions of boson \pt\ and \METl, the component of \MET\ that is parallel to the boson's transverse momentum vector) extracted from a \Zjet\ MC sample. With this \mll\ value assigned to the photon, each \Gjet\ event is boosted to the rest frame of the hypothetical $Z$ boson and the photon is split into two pseudo-leptons, assuming isotropic decays in the rest frame.
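A minimal sketch of the core of this procedure is given below; it covers only the prescale weighting and the normalization to the low-\MET\ region, omits the boson-\pt\ and resolution corrections, the $V\gamma$ subtraction and the pseudo-lepton splitting, and all names are illustrative:
\begin{verbatim}
import numpy as np

def met_template(gjet_met, prescales, pt_reweights, bins):
    """Histogram the MET of photon+jets events, weighting each event by its
    trigger prescale and a photon-to-Z pT reweighting factor."""
    weights = np.asarray(prescales) * np.asarray(pt_reweights)
    hist, edges = np.histogram(np.asarray(gjet_met), bins=bins, weights=weights)
    return hist, edges

def normalise_template(hist, edges, data_yield_low_met, met_cut=100.0):
    """Scale the template so that its integral below met_cut matches the yield
    observed in the corresponding MET < 100 GeV control region."""
    low_met_integral = hist[edges[:-1] < met_cut].sum()
    return hist * (data_yield_low_met / low_met_integral)
\end{verbatim}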
To validate the method, two sets of validation regions, ``tight'' and ``loose'', are defined for each SR. The definitions of these regions are provided in Table~\ref{tab:2L2J_VR_def}.
The selections in the ``tight'' regions are identical to the SR selections with the exception of the dijet mass $m_{jj}$ requirement, which is replaced by the requirement ($m_{jj}<60$ GeV
or $m_{jj}>100$ GeV) to suppress signal. These ``tight'' regions
are used to verify the expectation from the $\Gjet$ method that the residual $\Zjet$ background after applying the SR selections is very small.
The ``loose'' validation regions are instead defined by
removing several other kinematic requirements used in the SR
definition (\mttwo, all $\Delta\phi$ and $\Delta R$
quantities, and the ratios of \MET\ to $W$ \pT, $Z$ \pT, and
\pT of the system of ISR jets). These samples
have enough $\Zjet$ events to perform comparisons of kinematic distributions, which validate
the normalization and kinematic modelling of the $\Zjet$ background.
The data distributions are consistent with the expected background in these validation regions, as shown in
Figure~\ref{figbody:metvr2int} for the \MET\ distribution in VR2-int-loose.
Once the signal region requirements are applied, the dominant background in the 2$\ell$+jets channel is the diboson background. This is taken from MC simulation, but the modelling is verified in two dedicated validation regions, one for signal regions with low mass-splitting (VR2-VV-low) and one for the intermediate and high-mass signal regions (VR2-VV-int).
Requiring high \MET\ and exactly one signal jet (compared to at least two jets in the SRs) suppresses the \ttbar\ background and enhances the purity of diboson events containing an ISR jet,
in which each boson decays leptonically.
Figure~\ref{figbody:mt2vr2vvint} shows the \mttwo\ distribution in VR2-VV-int for data and the expected backgrounds.
\begin{table}[t]
\centering
\caption{Validation region definitions used for the 2$\ell$+jets channel.
Symbols and abbreviations are analogous to those in Table~\ref{tab:2L2J_SR_def}.}
\label{tab:2L2J_VR_def}
\begin{tabular}{|c|c|c|c|c| }
\hline
\multicolumn{5}{|c|}{\bf{ 2$\ell$+jets validation region definitions}} \\
\hline
& VR2-int(high) & VR2-low-2J(3J) & VR2-VV-int & VR2-VV-low \\
\hline
\multicolumn{5}{|c|}{Loose selection} \\
\hline
$n_{\text{non-} b \text{-tagged jets}}$ &$\geq 2$ & 2 (3--5)& 1 & 1\\[0.05cm]
\MET [GeV] & $>150$ ($>250$) & $>100$ & $>150$ & $>150$\\[0.05cm]
$m_{\ell\ell}$ [GeV] & 81--101 & 81--101 (86--96) & & 81--101\\[0.05cm]
$m_{jj}$ [GeV] & $\notin [60,100]$ & $\notin [60,100]$ & & \\[0.05cm]
$p^{Z}_{\rm T} $ [GeV] & $>80$ & $>60$ ($>40$) & & \\[0.05cm]
$p^{W}_{\rm T} $ [GeV] & $>100$ & & &\\[0.05cm]
$|\eta(Z)|$ & & $(<1.6 )$ & & \\
$p^{\rm{jet}3}_{\rm T}$ [GeV] & & ($>30$) & &\\[0.05cm]
$\Delta\phi_{(\pTmiss ,\rm{jet})}$ & & & $>0.4$ & $>0.4$ \\[0.05cm]
$m_{\rm T2}$ [GeV] & & & $>100$ &\\[0.05cm]
$\Delta R_{(\ell\ell)}$ & & & & $<0.2$\\[0.05cm]
\hline
\multicolumn{5}{|c|}{Tight selection} \\
\hline
$\Delta R_{(jj)}$ & $<1.5$ & ($<2.2$) & &\\[0.05cm]
$\Delta\phi_{(\pTmiss ,W)}$ & 0.5--3.0 & $>1.5$ ($<2.2$) & & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,Z)}$ & & $ <0.8$ $(-)$ & & \\[0.05cm]
$\MET/p^{W}_{\rm T}$ & & $ <0.8$ $(-)$ & & \\[0.05cm]
$\MET/p^{Z}_{\rm T}$ & & 0.6--1.6 $(-)$ & & \\[0.05cm]
$\MET/p^{\rm ISR}_{\rm T}$ & & (0.4--0.8) & & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,\rm{ISR})}$ & & $(>2.4)$ & & \\[0.05cm]
$\Delta\phi_{(\pTmiss ,\rm{jet1})}$ & & $(>2.6)$ & & \\[0.05cm]
$m_{\rm T2}$ [GeV] & $>100$ & & &\\[0.05cm]
$\Delta R_{(\ell\ell)}$ & $<1.8$ & & & \\[0.05cm]
\hline
\end{tabular}
\end{table}
For both the 2$\ell$+0jets and 2$\ell$+jets channels, reducible backgrounds with one or two FNP leptons arise from multijet, $W$+jets and single-top-quark production events.
For both analyses, the FNP lepton background is estimated from data using the matrix method (MM)~\cite{TOPQ-2010-01}. This method uses two types of lepton identification criteria: ``signal'', corresponding to leptons passing the full analysis selection, and ``baseline'', corresponding to candidate electrons and muons as defined in Section~\ref{sec:objects}.
Probabilities for real leptons satisfying the baseline selection to also satisfy the signal selection are measured as a function of \pt\ and $\eta$ in dedicated regions enriched in $Z$ boson processes;
similar probabilities for FNP leptons are measured in events dominated by leptons from heavy flavour decays and photon conversions.
The method uses the number of observed events containing baseline--baseline, baseline--signal, signal--baseline and signal--signal lepton pairs in a given SR
to extract data-driven estimates for the FNP lepton background in the CRs, VRs, and SRs for each analysis.
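A minimal sketch of the dilepton matrix method is shown below; the \pt- and $\eta$-dependent efficiencies are replaced by constants and the event-by-event weighting used in practice is reduced to a single matrix inversion, so the names and structure are illustrative only:
\begin{verbatim}
import numpy as np

def fnp_estimate(n_obs, r1, r2, f1, f2):
    """Dilepton matrix method sketch.  n_obs = [N_TT, N_TL, N_LT, N_LL] are the
    observed counts with each lepton either passing the signal ("tight", T)
    selection or only the baseline ("loose", L) one; r and f are the real- and
    fake-lepton efficiencies of the leading and sub-leading lepton.
    Returns the estimated fake/non-prompt yield in the tight-tight category."""
    M = np.array([
        [r1*r2,         r1*f2,         f1*r2,         f1*f2        ],
        [r1*(1-r2),     r1*(1-f2),     f1*(1-r2),     f1*(1-f2)    ],
        [(1-r1)*r2,     (1-r1)*f2,     (1-f1)*r2,     (1-f1)*f2    ],
        [(1-r1)*(1-r2), (1-r1)*(1-f2), (1-f1)*(1-r2), (1-f1)*(1-f2)],
    ])
    n_rr, n_rf, n_fr, n_ff = np.linalg.solve(M, np.asarray(n_obs, dtype=float))
    return r1*f2*n_rf + f1*r2*n_fr + f1*f2*n_ff
\end{verbatim}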
For the 3$\ell$ channel, the irreducible background is dominated by SM $WZ$ diboson processes. As in the 2$\ell$+0jets channel, the shape of this background is taken from MC simulation but normalized to data in a dedicated control region. The signal regions shown in Table~\ref{tab:threelepSRs} include a set of exclusive regions inclusive in jet multiplicity which target $\slepton$-mediated decays, and a set of exclusive regions separated into 0-jet and $\geq 1$ jet categories which target gauge-boson-mediated decays. To reflect this, three control regions are defined in order to extract the normalization of the $WZ$ background: an inclusive region (CR3-WZ-inc) and two exclusive control regions (CR3-WZ-0j and CR3-WZ-1j). The results of the background estimations are validated in a set of dedicated validation regions.
This includes two validation regions that are binned in jet multiplicity (VR3-Za-0J and VR3-Za-1J),
and a set of inclusive validation regions (VR3-Za, VR3-Zb, VR3-offZa and VR3-offZb) targeting different regions of phase space considered in the analysis (i.e.\ within and outside the $Z$ boson mass window, high and low \MET, and vetoing events with a trilepton invariant mass within the $Z$ boson mass window). The definitions of the control and validation regions used in the 3$\ell$ analysis are shown in Table~\ref{tab:3lepCRVR}. The normalization factors extracted from the fit for inclusive $WZ$ events, $WZ$ events with zero jets, and $WZ$ events with at least one jet are $0.97\pm0.06$, $1.08\pm0.06$ and $0.94\pm0.07$, respectively. Other small
background sources such as $VVV$, $tV$ and Higgs boson production processes contributing to the irreducible background are taken from MC simulation.
\begin{table}[t]
\centering
\caption{Control and validation region definitions used in the 3$\ell$ channel.
The $m_{\mathrm{SFOS}}$ quantity is the mass of the same-flavour opposite-sign lepton pair and $m_{\mathrm{\ell \ell \ell}}$ is the trilepton invariant mass.
Other symbols and abbreviations are analogous to those in Table~\ref{tab:threelepSRs}.}
\label{tab:3lepCRVR}
\setlength{\tabcolsep}{0.0pc}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}|l c c c c c c c |}
\hline
\multicolumn{8}{|c|}{\bf{ 3$\ell$ control and validation region definitions}} \\
\hline
~ & $p^{\ell_{3}}_{\rm T}$ & $m_{\mathrm{\ell \ell \ell}}$ & $m_{\mathrm{SFOS}}$ & \MET & $\mTmin$ & $n_{\text{non-} b \text{-tagged~jets}}$ & $n_{ b \text{-tagged jets}}$ \\
~ & [GeV] & [GeV] & [GeV] & [GeV] & [GeV] & & \\
\hline
CR3-WZ-inc & $>20$ & -- & 81.2--101.2 & $>120$ & $<110$ & -- & 0 \\
CR3-WZ-0j & $>20$ & -- & 81.2--101.2 & $>60$ & $<110$ & 0 & 0 \\
CR3-WZ-1j & $>20$ & -- & 81.2--101.2 & $>120$ & $<110$ & $>0$ & 0 \\
\hline
VR3-Za & $>30$ & \multirow{2}{*}{$\notin [81.2,101.2]$} & 81.2--101.2 & 40--60 &-- & -- & -- \\
VR3-Zb & $>30$ & & 81.2--101.2 & $>$60 & -- &-- & $>0$ \\
\hline
VR3-offZa & $>30$ & \multirow{2}{*}{$\notin [81.2,101.2]$} & \multirow{2}{*}{$\notin [81.2,101.2]$} & 40--60 & -- &-- &-- \\
VR3-offZb & $>20$ & & & $>$ 40 & -- &-- & $>0$ \\
\hline
VR3-Za-0J & \multirow{2}{*}{$>20$} & \multirow{2}{*}{$\notin [81.2,101.2]$} & \multirow{2}{*}{81.2--101.2} & 40--60 & -- & 0 & 0 \\
VR3-Za-1J & & & & 40--60 & -- & $>0$ & 0
\\
\hline
\end{tabular*}
\end{table}
In addition to processes contributing to the reducible backgrounds in the 2$\ell$ channels, the reducible backgrounds in the 3$\ell$ channel also include $Z$+jets, \ttbar, $WW$ and in general any physics process leading to fewer than three prompt and isolated leptons. The reducible backgrounds in the 3$\ell$ channel are estimated using a data-driven fake-factor (FF) method~\cite{STDM-2015-19}. This method uses two sets of lepton
identification criteria: the tight, or ``ID'', criteria corresponding to the signal lepton selection used in the analysis and the orthogonal loose, or ``anti-ID'', criteria which are designed to yield an enrichment in FNP leptons. In particular, for the anti-ID leptons the isolation and identification requirements applied to signal leptons are reversed.
The $Z$+jets background events in the signal, control and validation regions are estimated using lepton \pT-dependent fake factors, defined as the ratio of the numbers of ID to anti-ID leptons in an FNP-dominated region. These fake factors are then applied to events passing selection requirements identical to those in the signal, control or validation region in question but where one of the ID leptons is replaced by an anti-ID lepton. The ``top-like'' contamination, which includes $t\bar{t}$, $Wt$, and $WW$, is subtracted from these anti-ID regions along with contributions from any remaining MC processes, to avoid double-counting. The top-like reducible background contributions are then estimated differently: data-to-MC scale factors derived with DF opposite-sign events are applied to simulated SF events.
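Schematically (a sketch only: in practice the fake factors are \pt-dependent and the subtraction is performed per contributing process, with names below being illustrative), the estimate in a given region takes the form:
\begin{verbatim}
def fake_factor_estimate(n_id_meas, n_anti_id_meas,
                         n_anti_id_region, n_prompt_mc_anti_id):
    """Fake-factor method in outline: F = N(ID)/N(anti-ID) measured in an
    FNP-enriched selection is applied to the anti-ID events of the target
    region after subtracting the prompt (MC-estimated) contribution."""
    fake_factor = n_id_meas / n_anti_id_meas
    return fake_factor * (n_anti_id_region - n_prompt_mc_anti_id)
\end{verbatim}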
Figures~\ref{figbody:3lSlepBinBMET} and \ref{figbody:3lSlepBinAMET} show the \MET distribution in VR3-Zb and the $m_{T}^{\mathrm{min}}$ distribution in VR3-Za, respectively.
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02a.pdf}
\caption{\MET\ distribution in VR2-VV-SF}\label{figbody:metVRVVSF}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02b.pdf}
\caption{\mttwo\ distribution in VR2-VV-SF}\label{figbody:mt2VRVVSF}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02c.pdf}
\caption{\MET\ distribution in VR2-int-loose}\label{figbody:metvr2int}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02d.pdf}
\caption{ \mttwo\ distribution in VR2-VV-int.}\label{figbody:mt2vr2vvint}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02e.pdf}
\caption{\MET\ distribution in VR3-Zb}\label{figbody:3lSlepBinBMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_02f.pdf}
\caption{$m_{T}^{\mathrm{min}}$ distribution in VR3-Za}\label{figbody:3lSlepBinAMET}\end{subfigure}
\caption{ Distributions of \MET, $m_{T}^{\mathrm{min}}$, and \mttwo\ for data and the estimated SM backgrounds in the (top) 2$\ell$+0jets channel,
(middle) 2$\ell$+jets channel, and (bottom) 3$\ell$ channel. Simulated signal models are overlaid for comparison.
For the 2$\ell$+0jets (3$\ell$) channel, the normalization factors extracted from the corresponding CRs are used to rescale the $t\bar{t}$ and $VV$ ($WZ$) backgrounds. For the 2$\ell$+0jets channel the ``top'' background includes \ttbar and $Wt$, the ``other'' backgrounds include Higgs bosons, $t\bar{t}V$ and $VVV$, and the ``reducible'' category corresponds to the data-driven matrix method estimate.
For the 2$\ell$+jets channel, the ``top'' background includes \ttbar, $Wt$ and $t\bar{t}V$, the ``other'' backgrounds include Higgs bosons and $VVV$,
the ``reducible'' category corresponds to the data-driven matrix method estimate, and the $Z$+jets contribution is evaluated with
the data-driven $\gamma$+jet template method.
For the 3$\ell$ channel, the ``reducible'' category corresponds to the data-driven fake-factor estimate.
The uncertainty band includes all systematic and statistical sources and the final bin in each histogram also contains the events in the overflow bin.
}
\end{figure}
\clearpage
\section{Systematic uncertainties}
\label{sec:systematics}
Several sources of experimental and theoretical systematic uncertainty are considered in the SM background estimates and signal
predictions. These uncertainties are included in the profile likelihood fit described
in Section~\ref{sec:result}. The primary sources of systematic uncertainty are related to the jet energy scale (JES) and resolution (JER), theory uncertainties in the MC modelling, the reweighting procedure applied to the simulation to match the distribution of the number of reconstructed vertices observed in data, the uncertainties in the data-driven estimates of the non-prompt lepton background, and the theoretical cross-section uncertainties. The statistical uncertainty of the simulated event samples is taken into account as well. The effects of these uncertainties were evaluated for all signal samples and background processes. In the 2$\ell$+0jets and 3$\ell$ channels the normalizations of the MC predictions for the dominant background processes are extracted in dedicated control regions, and the systematic uncertainties thus only affect the extrapolation to the signal regions in these cases.
The JES and JER uncertainties are derived as a function of jet \pT and $\eta$, as well as of the pile-up conditions and the jet flavour composition of the selected jet sample. They are determined using a combination of data and simulation, through measurements of the jet response balance in dijet, $Z$+jets and $\gamma+$jets events \cite{ATL-PHYS-PUB-2015-015,Aaboud:2017jcu}.
The systematic uncertainties related to the \MET modelling in the simulation are estimated by propagating the uncertainties in the energy or momentum scale of each of the physics objects, as well as the uncertainties in the soft term's resolution and scale~\cite{ATL-PHYS-PUB-2015-023}.
The remaining detector-related systematic uncertainties, such as those in the lepton reconstruction efficiency, energy scale and energy resolution, in the $b$-tagging efficiency and in the modelling of the trigger~\cite{ATL-PHYS-PUB-2015-041,PERF-2015-10}, are included but were found to be negligible in all channels.
The uncertainties coming from the modelling of diboson events in MC simulation are estimated by varying the renormalization, factorization and merging scales used to generate the samples, and the PDFs.
In the 2$\ell$+0jets channel the impact of these uncertainties in the modelling of $Z$+jets events is also considered, as well as
uncertainties
in the modelling of $t\bar{t}$ events due to parton shower simulation
(by comparing samples generated with \POWHEG\ +\ \PYTHIA to \POWHEG\ +\ \Herwigpp~\cite{Bahr:2008pv}), ISR/FSR modelling
(by comparing the predictions from an event sample generated by \POWHEG\ +\ \PYTHIA with those from two samples where the radiation settings are varied), and the PDF set.
In the 2$\ell$+jets channel, uncertainties in the data-driven $\Zjet$ estimate are calculated following the methodology used in Ref.~\cite{SUSY-2016-05}. An additional uncertainty is based on the difference between the expected background yield from the nominal method and a second method implemented as a cross-check, which extracts the dijet mass shape from data validation regions, normalizes the shape to the sideband regions of the SRs, and extrapolates the background into the $W$ mass region.
For the matrix-method and fake-factor estimates of the FNP background, systematic uncertainties are assigned to account for
differences in FNP lepton composition between the SR and the CR used to derive the fake rates and fake factors.
An additional uncertainty is assigned to the MC subtraction of prompt leptons from this CR.
The exclusive SRs in the 2$\ell$+0jets and 3$\ell$ channels are dominated by statistical uncertainties in the background estimates (which range from 10\% to 70\% in the higher mass regions in the 2$\ell$+0jets channel and from 5\% to 30\% in the 3$\ell$ channel). The largest systematic uncertainties are those related to diboson modelling, the JES and JER uncertainties and those associated with the \MET\ modelling.
In the 2$\ell$+jets channel the dominant uncertainties are those associated with the data-driven estimate of the $\Zjet$ background, which range from approximately 45\% to 75\%.
\FloatBarrier
\section{Results}
\label{sec:result}
The HistFitter framework~\cite{HistFitter} is used for the statistical interpretation of the results, with the CRs (for the 2$\ell$+0jets and 3$\ell$ channels) and SRs both participating in a simultaneous likelihood fit. The likelihood is built as the product of a Poisson probability density function describing the observed number of events in each CR/SR and Gaussian distributions that constrain the nuisance parameters associated with the systematic uncertainties and whose widths correspond to the sizes of these uncertainties; Poisson distributions are used instead for MC statistical uncertainties. Correlations of a given nuisance parameter among the different background sources and the signal are taken into account when relevant.
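Schematically, the likelihood takes the form
\[
\mathcal{L}(\mu,\boldsymbol{\theta}) \;=\; \prod_{i\,\in\,\mathrm{CRs,\,SRs}}
\mathrm{Pois}\!\left(n_i \,\big|\, \mu\, s_i(\boldsymbol{\theta}) + b_i(\boldsymbol{\theta})\right)
\;\times\; \prod_{j} G(\theta_j),
\]
where $n_i$ are the observed yields, $s_i$ and $b_i$ the signal and background predictions, $\mu$ the signal strength, and $G(\theta_j)$ the Gaussian constraint terms for the nuisance parameters $\boldsymbol{\theta}$; the background normalization factors enter as freely floating parameters, and Poisson constraint terms replace the Gaussian ones for the MC statistical uncertainties, as noted above.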
In the 2$\ell$+0jets and 3$\ell$ channels, a background-only fit which uses data in the CRs is performed to constrain the nuisance parameters of the likelihood function (these include the normalization factors for dominant backgrounds and the parameters associated with the systematic uncertainties). In all channels the background estimates are also used to evaluate how well the expected and observed numbers of events agree in the validation regions, and good agreement is found.
In the 2$\ell$+0jets, 2$\ell$+jets, and 3$\ell$ channels, the number of considered VRs is 3, 8, and 6, respectively,
and the most significant deviations observed are 0.4$\sigma$, 1.4$\sigma$, and 0.8$\sigma$, respectively.
The precision of the expected background yields in the VRs is significantly better than in the corresponding SRs
and the dominant sources of systematic uncertainty in the VRs and corresponding SRs are similar.
For the 2$\ell$+0jets channel, the results for the exclusive signal regions are shown in Tables~\ref{tab:SR0jetResultsA}, \ref{tab:SR0jetResultsB} and~\ref{tab:SR0jetResultsC} for SR2-SF-a to SR2-SF-g, SR2-SF-h to SR2-SF-m and SR2-DF-a to SR2-DF-d, respectively. The results for the 2$\ell$+0jets inclusive signal regions are shown in Table~\ref{tab:SR0jetInc}, while Table~\ref{tab:SR2LjetResults} summarizes the expected SM background and observed events in the 2$\ell$+jets SRs. For the 3$\ell$ channel, the results are shown in Table~\ref{tab:SR3LResultsWZ} for SR3-WZ-0Ja to SR3-WZ-0Jc and SR3-WZ-1Ja to SR3-WZ-1Jc (which target gauge-boson-mediated decays) and Table~\ref{tab:SR3LResultsSlep} for SR3-slep-a to SR3-slep-e. A summary of the observed and expected yields in all of the signal regions considered in this paper is provided in Figure~\ref{fig:summaryPlot}. No significant excess above the SM expectation is observed in any SR.
\begin{table}[t]
\centering
\caption{Background-only fit results for SR2-SF-a to SR2-SF-g in the 2$\ell$+0jets channel. All systematic and statistical uncertainties are included in the fit. The ``other'' backgrounds include all processes producing a Higgs boson, $VVV$ or $t\bar{t}V$. A ``--'' symbol indicates that the background contribution is negligible.}
\label{tab:SR0jetResultsA}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR2- & SF-a & SF-b & SF-c & SF-d & SF-e & SF-f & SF-g \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $56$ & $28$ & $19$ & $13$ & $10$ & $6$ & $6$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM &$ 47 \pm 12 $ & $ 25 \pm 5\;\,\,$ & $ 25 \pm 4\;\,\, $ & $ 14 \pm 7\;\,\, $ & $ 5.2 \pm 1.4 $ & $ 1.9 \pm 1.2 $ & $ 3.8 \pm 1.9 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$t\bar{t}$ & $ 10 \pm 4\;\, $ & $ 7.4 \pm 3.5 $ & $ 7.3 \pm 3.0 $ & $ 2.7 \pm 1.7 $ & -- & -- & $0.11_{-0.11}^{+0.21}\,\,$ \\
$Wt$ & $ \,1.0 \pm 1.0 $ & $ 1.3 \pm 0.7 $ & $ 1.6 \pm 0.6 $ & $ 1.1 \pm 1.1 $ & -- & -- & -- \\
$VV$ & $ 21 \pm 4\;\, $ & $ 11.3 \pm 2.9\;\, $ & $ 12.6 \pm 2.4\;\, $ & $ 3.9 \pm 2.4 $ & $ 4.4 \pm 1.3 $ & $ 1.8 \pm 1.2 $ & $ 2.8 \pm 1.6 $ \\
FNP & $2.1_{-2.1}^{+2.9}\,$ & $0.0^{+0.4}_{-0.0}$ & $0.0^{+0.5}_{-0.0}$ & $5 \pm 4$ & $0.0^{+0.1}_{-0.0}$ & $0.00^{+0.01}_{-0.00}$ & $0.9 \pm 0.4$ \\
$Z$+jets & $ 13 \pm 9\;\, $ & $ 4.7 \pm 2.6 $ & $ 3.3 \pm 3.2 $ & $1.2_{-1.2}^{+1.7}\,$ & $ 0.7 \pm 0.6 $ & $0.02_{-0.02}^{+0.21}\,\,$ & -- \\
Other & $ \,0.18 \pm 0.08 $ & $ 0.12 \pm 0.05 $ & $ 0.11 \pm 0.04 $ & $ 0.09 \pm 0.05 $ & $ 0.05 \pm 0.03 $ & $ 0.03 \pm 0.01 $ & $ 0.05 \pm 0.02 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}[t]
\centering
\caption{Background-only fit results for SR2-SF-h to SR2-SF-m in the 2$\ell$+0jets channel. All systematic and statistical uncertainties are included in the fit. The ``other'' backgrounds include all processes producing a Higgs boson, $VVV$ and $t\bar{t}V$. A ``--'' symbol indicates that the background contribution is negligible.}
\label{tab:SR0jetResultsB}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR2- & SF-h & SF-i & SF-j & SF-k & SF-l & SF-m \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $0$ & $1$ & $3$ & $2$ & $2$ & $7$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM & $ 3.1 \pm 1.0 $ & $ 1.9 \pm 0.9 $ & $ 1.6 \pm 0.5 $ & $ 1.5 \pm 0.6 $ & $ 1.8 \pm 0.8 $ & $ 2.6 \pm 0.9 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$t\bar{t}$ & -- & -- & -- & -- & -- & -- \\
$Wt$ & -- & -- & -- & -- & -- & -- \\
$VV$ &$ 3.0 \pm 1.0 $ & $ 1.5 \pm 0.8 $ & $ 1.6 \pm 0.5 $ & $ 1.4 \pm 0.6 $ & $ 1.7 \pm 0.8 $ & $ 2.6 \pm 0.9 $ \\
FNP & $0.00^{+0.02}_{-0.00}$ & $0.0^{+0.1}_{-0.0}$ & $0.00^{+0.01}_{-0.00}$ & $0.00^{+0.01}_{-0.00}$ & $0.00^{+0.02}_{-0.00}$ & $0.00^{+0.01}_{-0.00}$ \\
$Z$+jets & $0.02_{-0.02}^{+0.11}\,\,$ & $0.42 \pm 0.20$ & -- & $0.02_{-0.02}^{+0.20}\,\,$ & -- & $0.02_{-0.02}^{+0.06}\,\,$ \\
Other & $0.03 \pm 0.01$ & $0.03 \pm 0.02$ & -- & $0.04 \pm 0.02$ & $0.02 \pm 0.01$ & $0.02 \pm 0.02$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}[t]
\centering
\caption{Background-only fit results for SR2-DF-a to SR2-DF-d in the 2$\ell$+0jets channel. All systematic and statistical uncertainties are included in the fit. The ``other'' backgrounds include all processes producing a Higgs boson, $VVV$ or $t\bar{t}V$. A ``--'' symbol indicates that the background contribution is negligible.}
\label{tab:SR0jetResultsC}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR2- & DF-a & DF-b & DF-c & DF-d \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $67$ & $5$ & $4$ & $2$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM & $ 57 \pm 7\;\,\, $ & $ 9.6 \pm 1.9 $ & $1.5_{-1.5}^{+1.7}\,\,$ & $0.6 \pm 0.6$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$t\bar{t}$ & $ 24 \pm 8\;\, $ & -- & -- & -- \\
$Wt$ & $4.5 \pm 1.0$ & -- & -- & -- \\
$VV$ & $ 26 \pm 6\;\, $ & $ 8.8 \pm 1.8 $ & $1.5_{-1.5}^{+1.7}\,\,$ & $0.6 \pm 0.6$ \\
FNP &$ 1.75 \pm 0.18 $ & $ 0.57 \pm 0.23 $ & $0.00^{+0.01}_{-0.00}$ & $0.00^{+0.01}_{-0.00}$ \\
$Z$+jets & -- & -- & -- & -- \\
Other &$ 0.40 \pm 0.09 $ & $ 0.17 \pm 0.07 $ & $ 0.07 \pm 0.07 $ & $ 0.02 \pm 0.02 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}
\centering
\caption{Background-only fit results for the inclusive signal regions in the 2$\ell$+0jets channel. All systematic and statistical uncertainties are included in the fit. The ``other'' backgrounds include all processes producing a Higgs boson, $VVV$ and $t\bar{t}V$. A ``--'' symbol indicates that the background contribution is negligible. }
\label{tab:SR0jetInc}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR2- & SF-loose & SF-tight & DF-100 & DF-150 & DF-200 & DF-300 \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $153$ & $9$ & $78$ & $11$ & $6$ & $2$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM &$ 133 \pm 22\;\,\, $ & $ 9.8 \pm 2.9 $ & $ 68 \pm 7\;\,\, $ & $ 11.5 \pm 3.1\;\,\, $ & $ 2.1 \pm 1.9 $ & $ 0.6 \pm 0.6 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$t\bar{t}$ & $27 \pm 11$ & -- & $24 \pm 8\;\,$ & -- &-- & -- \\
$Wt$ & $5.0 \pm 2.2$ &-- & $4.5 \pm 1.0$ & -- &-- & -- \\
$VV$ &$ 70 \pm 11 $ & $ 9.6 \pm 3.0 $ & $ 37 \pm 8\;\, $ & $ 10.8 \pm 3.0\;\, $ & $ 2.0 \pm 1.9 $ & $0.6 \pm 0.6$ \\
FNP & $ 6 \pm 4 $ & $ 0.0 \pm 0.0 $ & $ 2.17 \pm 0.29 $ & $ 0.42 \pm 0.23 $ & $0.00^{+0.01}_{-0.00}$ & $0.00^{+0.01}_{-0.00}$ \\
$Z$+jets & $23 \pm 14$ & $0.09_{-0.09}^{+0.34}\,\,$ &-- & -- & -- & -- \\
Other & $ 0.79 \pm 0.23 $ & $ 0.09 \pm 0.01 $ & $ 0.67 \pm 0.16 $ & $ 0.26 \pm 0.08 $ & $ 0.09 \pm 0.07 $ & $ 0.02 \pm 0.02 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}[t]
\centering
\caption{ SM background results in the 2$\ell$+jets SRs. All systematic and statistical uncertainties are included. The ``top'' background includes
all processes producing one or more top quarks and the
``other'' backgrounds include all processes producing a Higgs boson or $VVV$. A ``--'' symbol indicates that the background contribution is negligible. }
\label{tab:SR2LjetResults}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR2- & int & high & low (combined) \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $2$ & $0$ & $11$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM & $4.1^{+2.6}_{-1.8}\,\,$ & $ 1.6^{+1.6}_{-1.1} \,\,$ & $ 4.2^{+3.4}_{-1.6}\,\,$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$VV$ & $ 4.0 \pm 1.8 $ & $ 1.6 \pm 1.1 $ & $ 1.7 \pm 1.0 $ \\
Top & $ 0.15 \pm 0.11 $ & $ 0.04 \pm 0.03 $ & $ 0.8 \pm 0.4 $ \\
FNP & $0.0^{+0.2}_{-0.0}\,$ & $0.0^{+0.1}_{-0.0}\,$ & $0.7^{+1.8}_{-0.7}\,$ \\
$\Zjet$ & $0.0^{+1.8}_{-0.0}\,$ & $0.0^{+1.2}_{-0.0}\,$ & $1.0^{+2.7}_{-1.0}\,$ \\
Other & -- & -- & -- \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}[t]
\centering
\caption{Background-only fit results for SR3-WZ-0Ja to SR3-WZ-0Jc and SR3-WZ-1Ja to SR3-WZ-1Jc in the 3$\ell$ channel. All systematic and statistical uncertainties are included in the fit.
}
\label{tab:SR3LResultsWZ}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR3- & WZ-0Ja & WZ-0Jb & WZ-0Jc & WZ-1Ja & WZ-1Jb & WZ-1Jc \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $21$ & $1$ & $2$ & $1$ & $3$ & $4$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM & $21.7 \pm 2.9\;\,$ & $ 2.7 \pm 0.5 $ & $ 1.56 \pm 0.33 $ & $ 2.2 \pm 0.5 $ & $1.82 \pm 0.26$ & $1.26 \pm 0.34$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$WZ$ & $ 19.5 \pm 2.9\;\,$ & $ 2.5 \pm 0.5 $ & $ 1.33 \pm 0.31 $ & $ 1.8 \pm 0.5 $ & $ 1.49 \pm 0.22 $ & $ 0.92 \pm 0.28 $ \\
$ZZ$ & $ 0.81 \pm 0.23 $ & $ 0.06 \pm 0.03 $ & $ 0.05 \pm 0.01 $ & $ 0.05 \pm 0.02 $ & $ 0.02 \pm 0.01 $ & -- \\
$VVV$ & $0.31 \pm 0.07$ & $0.13 \pm 0.04$ & $0.13 \pm 0.03$ & $0.11 \pm 0.02$ & $0.12 \pm 0.03$ & $0.23 \pm 0.05$ \\
$t\bar{t}V$ &$0.04 \pm 0.02$ & $0.01 \pm 0.01$ & $0.01 \pm 0.01$ & $0.14 \pm 0.04$ & $0.12 \pm 0.02$ & $0.08 \pm 0.02$ \\
Higgs & -- & -- &-- & $0.01 \pm 0.00$ & -- &-- \\
FNP & $1.1 \pm 0.5 $ & $0.02 \pm 0.01$ & $0.04 \pm 0.02$ & $0.11 \pm 0.06$ & $0.07 \pm 0.04$ & $0.01 \pm 0.00$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{table}[t]
\centering
\caption{
Background-only fit results for SR3-slep-a to SR3-slep-e in the 3$\ell$ channel. All systematic and statistical uncertainties are included in the fit.
}
\label{tab:SR3LResultsSlep}
\setlength{\tabcolsep}{0.0pc}
{\small
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccc}
\noalign{\smallskip}\hline\noalign{\smallskip}
SR3- & slep-a & slep-b & slep-c & slep-d & slep-e \\[-0.05cm]
\noalign{\smallskip}\hline\noalign{\smallskip}
Observed & $4$ & $3$ & $9$ & $0$ & $0$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Total SM & $ 2.2 \pm 0.8 $ & $ 2.8 \pm 0.4 $ & $ 5.4 \pm 0.9 $ & $ 1.4 \pm 0.4 $ & $ 1.14 \pm 0.23 $ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$WZ$ & $ 1.1 \pm 0.4 $ & $ 1.98 \pm 0.31 $ & $ 3.9 \pm 0.7 $ & $ 0.91 \pm 0.26 $ & $ 0.76 \pm 0.17 $ \\
$ZZ$ &$0.02 \pm 0.01$ & $0.01 \pm 0.01$ & $0.13 \pm 0.03$ & $0.06 \pm 0.02$ & $0.03 \pm 0.01$ \\
$VVV$ & $0.26 \pm 0.08$ & $0.34 \pm 0.05$ & $0.72 \pm 0.12$ & $0.36 \pm 0.10$ & $0.25 \pm 0.05$ \\
$t\bar{t}V$ & $0.07 \pm 0.03$ & $0.09 \pm 0.02$ & $0.20 \pm 0.04$ & $0.07 \pm 0.02$ & $0.02 \pm 0.01$ \\
Higgs & $0.01 \pm 0.00$ & $0.01 \pm 0.01$ & $0.03 \pm 0.02$ & $0.01 \pm 0.00$ & -- \\
FNP & $0.80 \pm 0.46$ & $0.36 \pm 0.18$ & $0.48 \pm 0.25$ & -- & $0.08 \pm 0.04$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
}
\end{table}
\begin{figure}[htb!]
\centering
\includegraphics[width=1.0\textwidth]{fig_03.pdf}
\caption{The observed and expected SM background yields in the signal regions considered in the 2$\ell$+0jets, 2$\ell$+jets and 3$\ell$ channels. The statistical uncertainties in the background prediction are included in the uncertainty band, together with the experimental and theoretical uncertainties. The bottom plot shows the difference in standard deviations between the observed and expected yields.
Here $n_{\mathrm{obs}}$ and $n_{\mathrm{exp}}$ are the observed data and expected background yields, respectively,
$\sigma_{\mathrm{tot}}=\sqrt{n_{\mathrm{exp}}+\sigma^{2}_{\mathrm{exp}}}$, and $\sigma_{\mathrm{exp}}$ is the total background uncertainty. }
\label{fig:summaryPlot}
\end{figure}
Figure~\ref{fig:Plot_SR2L0jets} shows a selection of kinematic distributions for data and the estimated SM backgrounds with their associated statistical and systematic uncertainties for the loosest inclusive SRs in the 2$\ell$+0jets channel: SR2-SF-loose and SR2-DF-100. The normalization factors extracted from the corresponding CRs are propagated to the $VV$ and $t\bar{t}$ contributions.
Figure~\ref{fig:Plot_SR2L2J} shows the \MET\ distribution in SR2-int and SR2-high, which differ only in the \MET requirement, and in SR2-low of the 2$\ell$+jets channel. In the 3$\ell$ channel, distributions of \MET\ and the third leading lepton \pt\ are shown for the SR bins targeting $\slepton$-mediated decays in Figure~\ref{fig:Plot_3LSRslep} while Figure~\ref{fig:Plot_3LSRWZ} shows distributions of \MET\ in the bins targeting gauge-boson-mediated decays. Good agreement between data and expectations
is observed in all distributions within the uncertainties.
\begin{figure}[htb!]
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_04a.pdf}
\caption{}\label{fig:SRloosemll}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_04b.pdf}
\caption{}\label{fig:SRloosemt2}\end{subfigure} \\
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_04c.pdf}
\caption{}\label{fig:2L0JDFmt2}\end{subfigure}
\caption{The (a) $m_{\ell\ell}$ and (b) \mttwo\ distributions for data and the estimated SM backgrounds in the 2$\ell$+0jets channel for SR2-SF-loose and (c) the \mttwo~distribution for the SR2-DF-100 selection. The normalization factors extracted from the corresponding CRs are used to rescale the $t\bar{t}$ and $VV$ contributions. The ``top'' background includes \ttbar\ and $Wt$, and the ``other'' backgrounds include Higgs bosons, $t\bar{t}V$ and $VVV$. The ``reducible'' category corresponds to the data-driven matrix method's estimate. The uncertainty bands include all systematic and statistical contributions. Simulated signal models for slepton (a,b) or chargino (c) pair production are overlaid for comparison. The final bin in each histogram also contains the events in the overflow bin. The vertical red arrows indicate bins where the ratio of data to SM background, minus the uncertainty on this quantity, is larger than the $y$-axis maximum. }
\label{fig:Plot_SR2L0jets}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_05a.pdf}
\caption{}\label{fig:2L2JSRmed}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_05b.pdf}
\caption{}\label{2L2JSRlow}\end{subfigure} \\
\caption{\label{fig:Plot_SR2L2J} Distributions of \MET\ for data and the expected SM backgrounds in the 2$\ell$+jets channel for (a) SR2-int/high and (b) SR2-low, without the final $\MET$ requirement applied. The ``top'' background includes \ttbar, $Wt$ and $t\bar{t}V$, and the ``other'' backgrounds include Higgs bosons and $VVV$. The $Z$+jets contribution is evaluated using the data-driven $\gamma$+jet template method and the ``reducible'' category corresponds to the data-driven matrix method's estimate. The uncertainty bands include all systematic and statistical contributions.
Simulated signal models for chargino/neutralino production are overlaid for comparison. The final bin in each histogram also contains the events in the overflow bin.
The vertical red arrows indicate bins where the ratio of data to SM background, minus the uncertainty on this quantity, is larger than the $y$-axis maximum. }
\end{figure}
\begin{figure}[htb!]
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_06a.pdf}
\caption{}\label{fig:3lSlepBinAMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_06b.pdf}
\caption{}\label{fig:3lSlepBinBMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_06c.pdf}
\caption{}\label{fig:3lSlepBinDEFpt}\end{subfigure}
\caption{Distributions of \MET\ for data and the estimated SM backgrounds in the 3$\ell$ channel for (a) SR3-slep-a and (b) SR3-slep-b and (c) distributions of the third leading lepton \pt\ in SR3-slep-c,d,e. The normalization factors extracted from the corresponding CRs are used to rescale the $WZ$ background. The ``reducible'' category corresponds to the data-driven fake-factor estimate. The uncertainty bands include all systematic and statistical contributions. Simulated signal models for chargino/neutralino production are overlaid for comparison. The final bin in each histogram also contains the events in the overflow bin. }
\label{fig:Plot_3LSRslep}
\end{figure}
\begin{figure}[htb!]
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_07a.pdf}
\caption{}\label{fig:3lWZBinABCMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_07b.pdf}
\caption{}\label{fig:3lWZBin1JaMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_07c.pdf}
\caption{}\label{fig:3lWZBin1JbMET}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_07d.pdf}
\caption{}\label{fig:3lWZBin1JcMET}\end{subfigure}
\caption{Distributions of \MET\ for data and the estimated SM backgrounds in the 3$\ell$ channel for (a) SR3-WZ-0Ja,b,c, (b) SR3-WZ-1Ja, (c) SR3-WZ-1Jb and (d) SR3-WZ-1Jc. The normalization factors extracted from the corresponding CRs are used to rescale the 0-jet and $\geq 1$-jet $WZ$ background components. The ``reducible'' category corresponds to the data-driven fake-factor estimate. The uncertainty bands include all systematic and statistical contributions. Simulated signal models for chargino/neutralino production are overlaid for comparison. The final bin in each histogram also contains the events in the overflow bin.
The vertical red arrows indicate bins where the ratio of data to SM background, minus the uncertainty on this quantity, is larger than the $y$-axis maximum.}
\label{fig:Plot_3LSRWZ}
\end{figure}
In the absence of any significant excess, two types of exclusion limits for new physics scenarios are calculated using the CL$_\mathrm{s}$ prescription~\cite{Read:2002hq}. First, exclusion limits are set on the masses of the charginos, neutralinos, and sleptons for the simplified models in Figure~\ref{fig:TreeDiagramsPhysScenarios},
as shown in Figure~\ref{fig:Results_LimitsAll}.
Figures~\ref{figa} and \ref{figb} show the limits in the 2$\ell$+0jets channel in the models of direct chargino pair production with decays
via sleptons and direct slepton pair production, respectively.
Limits are calculated by statistically combining the mutually orthogonal exclusive SRs.
For the chargino pair model, all SF and DF bins are used and chargino masses up to 750 GeV are excluded at 95\% confidence level for a massless $\ninoone$ neutralino.
In the region with large chargino mass, the observed limit is weaker than expected because the data exceed the expected background in SF-e, SF-f, and SF-g.
For the slepton pair model, which assumes mass-degenerate $\slepton_{\textup{L}}$ and $\slepton_{\textup{R}}$ states (where $\slepton=\tilde{e},\tilde{\mu},\tilde{\tau}$), only SF bins are used and slepton masses up to 500 GeV are excluded for a massless $\ninoone$ neutralino.
Figure~\ref{figc} shows the limits from the 3$\ell$ channel in the model of mass-degenerate chargino--neutralino pair production with decays via sleptons,
calculated using a statistical combination of the five SR3-slep regions. In this model, chargino and neutralino masses up to 1100~GeV are excluded for $\ninoone$ neutralino masses less than 550~GeV.
Figure~\ref{figd} shows the limits from the 3$\ell$ and 2$\ell$+jets channels in the model of mass-degenerate chargino--neutralino pair production with decays via $W/Z$ bosons.
The 3$\ell$ limits are calculated using a statistical combination of the six SR3-WZ regions.
Since the SRs in the 2$\ell$+jets channel are not mutually exclusive, the observed CL$_\mathrm{s}$ value is taken from the signal region with the best expected CL$_\mathrm{s}$ value.
The 3$\ell$ and 2$\ell$+jets channels are then combined, using the channel with the best expected CL$_\mathrm{s}$ value for each point in the model parameter space.
In this model, chargino and neutralino masses up to 580 GeV are excluded for a massless $\ninoone$ neutralino.
\begin{figure}[htb!]
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_08a.pdf}
\caption{\label{figa} 2$\ell$+0jets channel:\\ $\chinoonep\chinoonem$ production with $\slepton$-mediated decays}\label{fig:limits_C1C1slep}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_08b.pdf}
\caption{\label{figb} 2$\ell$+0jets channel:\\ $\slepton\slepton$ production}\label{fig:limits_directslepton}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_08c.pdf}
\caption{\label{figc} 3$\ell$ channel:\\ $\chinoonepm\ninotwo$ production with $\slepton$-mediated decays}\label{fig:limits_C1N2slep}\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}\includegraphics[width=\textwidth]{fig_08d.pdf}
\caption{\label{figd} 2$\ell$+jets and 3$\ell$ channels:\\ $\chinoonepm\ninotwo$ production with decays via $W/Z$}\end{subfigure}
\caption{
Observed and expected exclusion limits on SUSY simplified models for (a) chargino-pair production, (b) slepton-pair production, (c) chargino--neutralino production with slepton-mediated decays,
and (d) chargino--neutralino production with decays via $W/Z$ bosons.
The observed (solid thick red line) and expected (thin dashed blue line) exclusion contours are indicated.
The shaded band corresponds to the $\pm$1$\sigma$ variations in the expected limit, including all uncertainties except theoretical
uncertainties in the signal cross-section.
The dotted lines around the observed limit illustrate the change in the observed limit as the nominal signal cross-section is scaled up and down by the theoretical uncertainty.
All limits are computed at 95\% confidence level. The observed limits obtained from ATLAS in Run 1 are also shown~\cite{SUSY-2013-11}.}
\label{fig:Results_LimitsAll}
\end{figure}
Second, model-independent upper limits are set on the visible signal cross-section ($\langle\epsilon{\rm \sigma}\rangle_{\rm{obs}}^{95}$) as well as on the observed ($S^{95}_{\rm{obs}}$) and expected ($S^{95}_{\rm{exp}}$) number of events from processes beyond the SM in the signal regions considered in this analysis.
The $p$-value and the corresponding significance for the background-only hypothesis are also evaluated.
For the 2$\ell$+0jets channel the inclusive signal regions defined in Table~\ref{tab:exclusiveSR} are considered whereas for the 3$\ell$ channel the calculation is performed for each bin separately. All the limits are at 95\% confidence level. The results can be found in Table~\ref{tab:upperlimits}.
\begin{table}[t]
\centering
\caption[Breakdown of upper limits.]{Summary of results and model-independent limits in the inclusive 2$\ell$+0jets, 2$\ell$+jets, and 3$\ell$ SRs.
The observed ($N_{\rm{obs}}$) and expected background ($N_{\rm{exp}}$) yields in the signal regions are indicated.
Signal model-independent upper limits at 95\% confidence level on the visible signal cross-section ($\langle\epsilon{\rm \sigma}\rangle_{\rm{obs}}^{95}$),
and the observed and expected upper limit on the number of BSM events ($S^{95}_{\rm{obs}}$ and $S^{95}_{\rm{exp}}$, respectively) are also shown.
The $\pm$1$\sigma$ variations of the expected limit originate from the statistical and systematic uncertainties in the background prediction.
The last two columns show the $p$-value and the corresponding significance for the background-only hypothesis.
For SRs where the data yield is smaller than expected, the $p$-value is truncated at 0.5 and the significance is set to 0.
\label{tab:upperlimits}}
\setlength{\tabcolsep}{0.0pc}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llrccScSS}
\noalign{\smallskip}\hline\noalign{\smallskip}
{Signal channel} & Region & $N_{\rm{obs}}$ & $N_{\rm{exp}}$ & $\langle\epsilon{\rm \sigma}\rangle_{\rm{obs}}^{95}$[fb] & $S_{\rm{obs}}^{95}$ &
$S_{\rm{exp}}^{95}$ & \text{\textit{p}(\textit{s}=0)} & $Z$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
2$\ell$+0jets & DF-100 & $78$ & $68 \pm 7\;\,\,$ & $0.88$ & 32 & ${ 27}^{ +11 }_{ -8 }\;$ & 0.22 & 0.77 \\
& DF-150 & $11$ & $11.5 \pm 3.1\;\,\,$ & $0.32$ & 11.4 & $ { 12 }^{ +5 }_{ -4 }\;\,\,$ & 0.5 & 0 \\
& DF-200 & $6$ & $2.1\pm1.9$ & $0.33$ & 12.0 & $ { 10.3 }^{ +2.9 }_{ -1.9 }\;\,\,$ & 0.06 & 1.5 \\
& DF-300 & $ 2$ & $0.6\pm0.6$ & $0.18$ & 6.6 & $ { 5.6 }^{ +1.1 }_{ -0.9 }$ & 0.10 & 1.3 \\
& SF-loose & $153$ & $133\pm22\;\,\,$ & $2.02$ & 73 & $ { 53 }^{ +21 }_{ -16 }$ & 0.16 & 1.0 \\
& SF-tight & $9$ & $9.8\pm2.9$ & $0.29$ & 10.5 & $ { 12}^{ +4 }_{ -3 }\;\,\,$ & 0.5 & 0 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
2$\ell$+jets & SR2-int & $2$ & $4.1^{+2.6}_{-1.8}\;\,$ & $0.13$ & 4.5 & $ { 5.6 }^{ +2.2 }_{ -1.4 }$ & 0.5 & 0 \\
& SR2-high & $0$ & $1.6^{+1.6}_{-1.1}\;\,$ & $0.09$ & 3.1 & $ { 3.1 }^{ +1.4 }_{ -0.1 }$ & 0.5 & 0 \\
& SR2-low & $11$ & $4.2^{+3.4}_{-1.6}\;\,$ & $0.43$ & 15.7 & $ { 12 }^{ +4 }_{ -2 }\;\,\,$ & 0.06 & 1.6 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
3$\ell$ & WZ-0Ja & $21$ & $21.7 \pm 2.9\;\,$ & $0.35$ & 12.8 & $ { 14 }^{ +3 }_{ -5 }\;\,\,$ & 0.5 & 0 \\
& WZ-0Jb & $1$ & $2.7 \pm 0.5$ & $0.10$ & 3.7 & $ { 4.6 }^{ +2.1 }_{ -0.9 }$ & 0.5 & 0 \\
& WZ-0Jc & $2$ & $1.6 \pm 0.3$ & $0.13$ & 4.8 & $ { 4.1 }^{ +1.7 }_{ -0.7 }$ & 0.28 &0.57 \\
& WZ-1Ja & $1$ & $2.2 \pm 0.5$ & $0.09$ & 3.2 & $ { 4.5 }^{ +1.6 }_{ -1.3 }$ & 0.5 & 0 \\
& WZ-1Jb & $ 3$ & $1.8 \pm 0.3$ & $0.16$ & 5.6 & $ { 4.3 }^{ +1.7 }_{ -0.9 }$ & 0.18 & 0.91 \\
& WZ-1Jc & $4$ & $1.3 \pm 0.3$ & $0.20$ & 7.2 & $ { 4.2 }^{ +1.7 }_{ -0.4 }$ & 0.03 & 1.8\\
& slep-a & $4$ & $2.2 \pm 0.8$ & $0.19$ & 6.8 & $ { 4.7 }^{ +2.3 }_{ -0.5 }$ & 0.23 & 0.72 \\
& slep-b & $3$ & $2.8 \pm 0.4$ & $0.14$ & 5.2 & $ { 5.1 }^{ +1.9 }_{ -1.2 }$ & 0.47 & 0.08 \\
& slep-c & $9$ & $5.4 \pm 0.9$ & $0.29$ & 10.5 & $ { 6.8 }^{ +2.9 }_{ -1.3 }$ & 0.09 & 1.4\\
& slep-d & $0$ & $1.4 \pm 0.4$ & $0.08$ & 3.0 & $ { 3.6 }^{ +1.2 }_{ -0.6 }$ & 0.5 & 0 \\
& slep-e & $0$ & $1.1 \pm 0.2$ & $0.09$ & 3.3 & $ { 3.6 }^{ +1.3 }_{ -0.5 }$ & 0.5 & 0 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular*}
\end{table}
\FloatBarrier
\section{Conclusion}
\label{sec:conclusion}
Searches for the electroweak production of neutralinos, charginos and sleptons decaying into final states with exactly two or three electrons or muons and missing transverse momentum are performed using 36.1 fb$^{-1}$ of $\sqrt{s}=13$~TeV proton--proton collisions recorded by the ATLAS detector at the Large Hadron Collider. Three different search channels are considered. The 2$\ell$+0jets channel targets direct $\chinoonep\chinoonem$ production where each $\chinoonepm$ decays via an intermediate $\slepton$, and direct $\slepton\slepton$ production. The 2$\ell$+jets channel targets associated $\chinoonepm\ninotwo$ production where each sparticle decays via an SM gauge boson giving a final state with two leptons consistent with a $Z$ boson and two jets consistent with a $W$ boson. Finally, the 3$\ell$ channel targets associated $\chinoonepm\ninotwo$ production with decays via either intermediate $\slepton$ or gauge bosons.
No significant excess above the SM expectation is observed in any of the signal regions considered across the three channels, and the results are used to calculate exclusion limits at 95\% confidence level in several simplified model scenarios. For associated $\chinoonepm\ninotwo$ production with $\slepton$-mediated decays, masses up to 1100~GeV are excluded for $\ninoone$ neutralino masses less than 550~GeV. Both the 2$\ell$+jets and 3$\ell$ channels place exclusion limits on associated $\chinoonepm\ninotwo$ production with gauge-boson-mediated decays. For a massless $\ninoone$ neutralino, $\chinoonepm$/$\ninotwo$ masses up to approximately 580~GeV are excluded.
In the 2$\ell$+0jets channel, for direct $\chinoonep\chinoonem$ production with decays via
an intermediate $\slepton$, masses up to 750 GeV are excluded for a massless $\ninoone$ neutralino.
For $\slepton\slepton$ production, masses up to 500 GeV are excluded for a massless $\ninoone$ neutralino, assuming mass-degenerate $\slepton_{\textup{L}}$ and $\slepton_{\textup{R}}$ (where $\slepton=\tilde{e},\tilde{\mu},\tilde{\tau}$).
These results significantly improve upon previous exclusion limits based on Run 1 data.
\section*{Acknowledgements}
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, the Canada Council, CANARIE, CRC, Compute Canada, FQRNT, and the Ontario Innovation Trust, Canada; EPLANET, ERC, ERDF, FP7, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, R{\'e}gion Auvergne and Fondation Partager le Savoir, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; BSF, GIF and Minerva, Israel; BRF, Norway; CERCA Programme Generalitat de Catalunya, Generalitat Valenciana, Spain; the Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-GEN-PUB-2016-002}.
\printbibliography
\clearpage
\input{atlas_authlist}
\end{document}
\section{Introduction}
Near the end of his remarkable career in both pure and applied mathematics (see
\cite{O} for highlights of major achievements), Konstantin Ivanovich Babenko
(1919--1987) published four brief notes \cite{B,BB,BP,BPR} (the last two appeared
posthumously) on a classical nonlinear problem in the mathematical theory of water
waves, namely, the two-dimensional problem of steady, periodic waves on infinitely
deep water. In this paper dedicated to the centenary of Babenko's birth, we extend
the approach developed in \cite{B} to the case of water of finite depth and deduce a
single pseudo-differential operator equation (Babenko's equation) equivalent to the
corresponding free-boundary problem in some sense explained below (see Section~3.3).
Moreover, using the spectral decomposition of a linear operator involved in the
equation, we transform it to a form convenient for discretisation and then apply a
very robust numerical method that allows us to produce convincing results concerning
global bifurcation branches, secondary bifurcations and free surface profiles
including those of the extreme form.
It was Stokes \cite{S} who initiated studies in this field. On
the basis of approximations developed for waves with a single crest per wavelength
(now referred to as {\it Stokes waves}), he made some conjectures about
the behaviour of such waves on deep water. To a great extent, these conjectures
determined the development of research in the 20th century; see the paper \cite{PT}
and references cited therein to get an idea of how these conjectures were proved. In
particular, the so-called Nekrasov equation was essential for this purpose. The
first version of this nonlinear integral equation for waves on deep water was
derived in \cite{N1} (see also \cite{N3}, Part~1). Soon after that, Nekrasov
generalized his equation to the case of finite depth (see \cite{N2} and \cite{N3},
Part~2). Much later, Amick and Toland \cite{AT2} proposed and investigated a more
sophisticated version of the latter equation.
Levi-Civita \cite{LC} and Struik \cite{St} considered (independently of Nekrasov)
the problem of periodic waves on deep water and on water of finite depth
respectively. The hodograph transform allowed them to reduce the question of
existence of waves to that of finding a pair of conjugate harmonic functions
satisfying nonlinear Neumann boundary conditions. The existence proofs given in
\cite{LC} and \cite{St} are based on a majorant method for demonstrating the
convergence of power series that provide formal solutions. In his book \cite{Z2},
Chapter~71, Zeidler writes about these proofs that they are `very complicated' in
view of `voluminous computations' involved. By now, both techniques (Nekrasov's
equations and the method of Levi-Civita and Struik) are investigated in detail by
means of analytic bifurcation theory. An account of this theory can be found in the
books \cite{Z1} and \cite{BT}, whereas many authors studied its application to
equations describing steady water waves; these results are summarised in \cite{T}
(deep water) and in \cite{Z2}, Chapter~71 (water of finite depth), where one also
finds detailed historical remarks. It should be mentioned that Krasovskii
\cite{Kras} extended studies of water waves to the case of a periodic wavy bottom.
Another method for periodic waves on deep water (with and without surface tension)
was developed in detail by Buffoni, Dancer and Toland \cite{BDT1,BDT2}. In the
absence of surface tension, it is based on the so-called Babenko equation:
\begin{equation}
\mu \, \mathcal{C} (v') = v + v \, \mathcal{C} (v') + \mathcal{C} (v' v) , \quad t \in
(-\pi, \pi) . \label{bid}
\end{equation}
Here $\mu$ is a dimensionless bifurcation parameter (it is related to the speed of
wave pro\-pagation), which must be found along with the $2 \pi$-periodic function
$v (t)$ that describes the wave profile parametrically in certain dimensionless
coordinates; $'$ stands for differentiation with respect to $t$ and ${\cal C}$ is
the $2 \pi$-periodic Hilbert transform (the conjugation operator in the theory of
Fourier series; see, for example, \cite{Z}). It is defined on $L^2 (-\pi, \pi)$ by
linearity from the following relations:
\begin{equation}
{\cal C} (\cos n t) = \sin n t \ \ \mbox{for} \ n \geq 0 , \quad {\cal C} (\sin n t)
= - \cos n t \ \ \mbox{for} \ n \geq 1 . \label{HT}
\end{equation}
The original form of \eqref{bid} was announced by Babenko \cite{B} (see also
\cite{OS}, Section~3.7), where the equation is derived and expressed in terms of the
self-adjoint operator $J_0 = \mathcal{C} \, \D / \D t$. In his second note
\cite{BB}, Babenko outlines how to prove the local existence theorem for his
equation in a neighbourhood of the first bifurcation point equal to unity. He
demonstrates that $\mu$ must be equal to $1 + \epsilon^2$, and, after changing the
unknown function by applying the invertible operator $I + J_0$ to the original one,
a solution is sought in the form of an expansion in powers of $\epsilon$. It is shown
that the expansion converges for $\epsilon \leq 1 / 25$. Some numerical computations
related to Babenko's version of equation \eqref{bid} are presented in
\cite{BP,BPR}.
A generalization of Babenko's equation was later studied in \cite{ST}. Besides,
Longuet-Higgins \cite{LH1} derived an infinite system of algebraic equations
equivalent to \eqref{bid} (see also \cite{BS} and \cite{OS}, Section~3.6). He used
this system for numerical computations of Stokes-wave bifurcations (see \cite{LH2}
and also \cite{OS}, Sections~3.2 and 3.8). It is worth mentioning that this system
naturally appears from the Lagrangian formalism developed in \cite{Ba}. In the paper
\cite{LH3}, a similar system of quadratic relations between the Fourier coefficients
of the wave profile was obtained in the case of water having finite depth.
Interesting results concerning the secondary or sub-harmonic bifurcations from
branches describing Stokes-wave solutions of \eqref{bid} are proved in the articles
\cite{BDT1} and \cite{BDT2}. In the first of these, it is shown that such
bifurcations do not occur in a neighbourhood of those points where Stokes waves
bifurcate from the trivial solution. On the other hand, significant numerical evidence
for the existence of steady periodic waves that differ from Stokes waves appeared
in the 1980s. These new waves have more than one crest per period and
bifurcate from Stokes waves. Branches of sub-harmonic bifurcations for deep water
were first computed by Chen and Saffman \cite{CS}, whereas Vanden-Broeck \cite{VB}
obtained similar result for water of finite depth by solving numerically an
integro-differential system arising after the hodograph transform; this system was
proposed in \cite{VBS}. Craig and Nicholls \cite{CN} used a different method for
computing numerical results generalising those of Vanden-Broeck. Moreover, it was
shown in \cite{Zuf1}, by applying a weakly nonlinear Hamiltonian model, that
non-symmetric periodic waves exist on water of finite depth.
References to other works containing numerical results on sub-harmonic bifurcations
can be found in \cite{OS} and \cite{BM}. In the latter paper, some theoretical
insights concerning these bifurcations are also given. The above mentioned numerical
observations were confirmed rigorously in \cite{BDT2}, where it was `concluded that
the sub-harmonic bifurcations [\dots] are {\it an inevitable consequence} of the
formation of Stokes highest wave'. A characteristic property of the latter wave is
the angle equal to $2\pi/3$ formed at the crest by two smooth, symmetric curves.
Concerning this wave see \cite{PT} and references cited therein.
Apart from the numerical approaches mentioned above, the direction of studies was quite
different for water of finite depth. Of course, Nekrasov's equation and the approach
of Struik were both developed for waves of a given wavelength. On the other hand, a
pseudo-differential equation in terms of variables arising after the hodograph
transform was derived in \cite{KK}. This equation describes all waves for which the
rate of flow per unit span and the Bernoulli constant are prescribed and serves for
justifying the Benjamin--Lighthill conjecture for near-critical waves. However, it
is not suitable for studying the Stokes-wave and sub-harmonic bifurcations. The
results obtained for equation \eqref{bid} in \cite{BDT1,BDT2} show that Babenko's
equation serves better for this purpose. Here the analysis of an equation similar to
\eqref{bid}, but describing waves on water of finite depth, is initiated and new
numerical results are obtained.
Besides, Constantin, Strauss and V\u{a}rv\u{a}ruc\u{a} investigated the problem of
water waves with constant vorticity in their recent paper \cite{CSV}. In the case of
finite depth, this problem is reduced to a quasilinear pseudo-differential system
provided the vorticity is non-zero. In the irrotational case (that is, for zero
vorticity) and for some particular value of a parameter involved in the system, the
latter turns into a single equation that has the same form as \eqref{bid} with
$\mathcal C$ changed to the so-called periodic Hilbert transform on a strip. In
Section~5, we compare this equation with Babenko's equation derived in this paper;
see \eqref{37} below.
The plan of the paper is as follows. For the problem describing steady, periodic
waves on water of finite depth two equivalent statements\,---\,dimensional and
non-dimensional\,---\,are formulated in Section~2. Babenko's equation is derived
from the non-dimensional for\-mulation in Section~3.1 and the existence of local
bifurcation branches for this equation is proved on the basis of the
Crandall--Rabinowitz theorem in Section~3.2. In Section~3.3, we outline how to
obtain a solution of the non-dimensional problem from a given solution of Babenko's
equation. Numerical procedure applied for solving Babenko's equation is presented in
Section~4 along with various bifurcation curves and wave profiles obtained with its
help. Section~5 contains concluding remarks.
\section{Statements of the problem}
In its simplest form, the problem of steady surface waves concerns the
two-dimensional, irrotational motion of an inviscid, incompressible, heavy fluid,
say water, bounded above by a free surface and below by a rigid horizontal bottom.
(For example, this kind of motion occurs in water occupying an infinitely long
channel with rectangular cross-section and having uniform width.) In an appropriate
frame of reference the velocity field of steady motion is time-independent as well
as the free-surface profile, and they can be described in two equivalent ways that
differ by prescribed parameters.
\subsection{The Benjamin--Lighthill statement for steady waves}
The classical formulation proposed by Benjamin and Lighthill is convenient for
justification of their conjecture (see \cite{Ben} and \cite{KK}, where it was
justified for Stokes waves and for all near-critical waves, respectively). In this
formulation, $Q$\,---\,the constant rate of flow per channel's unit span\,---\,is
prescribed along with the total head $R$ also referred to as the Bernoulli constant.
Let Cartesian coordinates $(X,Y)$ be chosen so that the bottom coincides with the
$X$-axis and gravity acts in the negative $Y$-direction, whereas the wave profile
has a crest on the $Y$-axis (this does not restrict generality). Thus, the profile
is given by the graph of an unknown positive function $\xi$ (that is, $Y = \xi (X)$,
$X \in \RR$), attaining its maximum at $X=0$. Moreover, we suppose that $\xi$ is
continuously differentiable and even. In the longitudinal section of the water
domain ${\cal D} = \{ X \in \RR,\ 0 < Y < \xi (X) \}$, the velocity field is
described by the stream function $\Psi (X, Y)$, that is, the projections of the
velocity vector at $(X, Y)$ on the $X$- and $Y$-axis are $\Psi_Y$ and $-\Psi_X$
respectively. It is assumed that $\Psi$ belongs to $C^2 ({\cal D}) \cap C^1
(\bar{\cal D})$ and is an even function of $X$ (hence it is bounded on $\bar{\cal
D}$).
If the surface tension is neglected, then $\Psi$ and $\xi$ must satisfy the
following free-boundary problem:
\begin{eqnarray}
&& \Psi_{XX} + \Psi_{YY} = 0, \quad (X,Y) \in {\cal D}; \label{1} \\ && \Psi (X, 0)
= - Q, \quad X \in \RR; \label{2} \\ && \Psi (X, \xi (X)) = 0, \quad X \in \RR;
\label{3} \\ && \frac{1}{2} |\nabla \Psi (X, \xi (X))|^2 + g \, \xi (X) = R , \quad
X \in \RR . \label{4}
\end{eqnarray}
In the left-hand side of the last relation, usually referred to as Bernoulli's
equation, $g > 0$ is the constant acceleration due to gravity. It is known that
non-trivial solutions of problem (\ref{1})--(\ref{4}) exist only when $Q \neq 0$ and
$R > R_c = \frac{3}{2} (Q g)^{2/3}$. For the proof of the first relation see
Proposition 1.1 in \cite{KK1}, whereas the last inequality is proved in \cite{KK2}
under weaker assumptions than listed above. In what follows, these restrictions on
$Q$ and $R$ hold; moreover, we suppose (without loss of generality) that $Q > 0$.
\subsection{A non-dimensional statement for periodic waves}
Let us assume that $\xi$ is a $2 \ell$-periodic function ($\ell > 0$), whereas $\Psi
(X, Y)$ is $2 \ell$-periodic in $X$, but the constant $R$ is to be found along with
these functions from problem (\ref{1})--(\ref{4}). In order to reduce the
reformulated problem to a non-dimensional form, we average Bernoulli's equation over
$(-\ell, \ell)$. Since $\Psi$ is constant on the free surface, we obtain that $c^2 =
2 (R - g H)$, where
\begin{equation}
H = \frac{1}{2 \ell} \int_{-\ell}^\ell \xi (X) \, \D X \quad \mbox{and} \quad c^2 =
\frac{1}{2 \ell} \int_{-\ell}^\ell \left| \frac{ \partial \Psi}{\partial n} (X, \xi
(X)) \right|^2 \D X . \label{abe}
\end{equation}
Here $n$ is the unit normal to $Y = \xi (X)$ directed out of ${\cal D}$. One can
show that the last equality \eqref{abe} is true with the same constant $c^2$ when
this curve is changed to $Y = \tilde \xi (X)$\,---\,an arbitrary
streamline\,---\,and $n$ is changed to $\tilde n$\,---\,the unit normal to this
streamline. Therefore, $c > 0$ is the unknown mean velocity of flow in the positive
direction of the $X$-axis.
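Indeed, averaging Bernoulli's equation \eqref{4} over one wavelength and noting that the tangential derivative of $\Psi$ vanishes along the free surface (so that $|\nabla \Psi| = |\partial \Psi / \partial n|$ there), one obtains
\[ \frac{1}{2 \ell} \int_{-\ell}^\ell \frac{1}{2} \left| \frac{\partial \Psi}{\partial n} (X, \xi (X)) \right|^2 \D X + g H = R \, , \]
that is, $\frac{1}{2} c^2 + g H = R$, which is the equality used above.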
It is convenient to introduce the following non-dimensional quantities: $h = \pi H /
\ell$ (the mean depth of flow) and $Q_0 = Q / \sqrt{ g (\ell / \pi)^3}$ (the rate
of flow). Now we scale the dimensional variables and shift the vertical variables
downwards as follows:
\begin{equation}
x = \frac{\pi}{\ell} X ,\ y = \frac{\pi}{\ell} Y - h ; \quad \eta (x) = \frac{\pi}
{\ell} \, \xi (X) - h ; \quad \psi (x,y) = \frac{Q_0}{Q} \Psi (X,Y) . \label{dlv}
\end{equation}
This is advantageous because the new unknown $\eta$ is a $2 \pi$-periodic and even
function of $x$ satisfying the following condition:
\begin{equation}
\int_{-\pi}^\pi \eta (x) \, \D x = 0 . \label{eta}
\end{equation}
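This condition is an immediate consequence of the choice $h = \pi H / \ell$: indeed, by \eqref{dlv} and the first formula \eqref{abe},
\[ \int_{-\pi}^\pi \eta (x) \, \D x = \frac{\pi}{\ell} \int_{-\ell}^\ell \left[ \frac{\pi}{\ell} \, \xi (X) - h \right] \D X = 2 \pi \left( \frac{\pi H}{\ell} - h \right) = 0 \, . \]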
Furthermore, the function $\psi$ has the same properties on $\bar D$ as $\Psi$ has
on $\bar{\cal D}$; namely,
\[ \psi \in C^1 (\bar{D}) \cap C^2 (D), \quad \mbox{where} \
D = \{ x \in \RR, -h < y < \eta (x) \},
\]
and is a $2 \pi$-periodic and even function of $x$. Moreover, the change of
variables (\ref{dlv}) reduces relations (\ref{1})--(\ref{4}) to the following
\begin{eqnarray}
&& \psi_{xx} + \psi_{yy} = 0, \quad (x,y) \in D; \label{lapp} \\ && \psi (x, -h) =
-Q_0 , \quad x \in \RR; \label{bcp} \\ && \psi (x, \eta (x)) = 0, \quad x \in \RR;
\label{kcp} \\ && |\nabla \psi (x, \eta (x))|^2 + 2 \eta (x) = \mu , \quad x \in \RR .
\label{bep}
\end{eqnarray}
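To verify the last relation, note that \eqref{dlv} gives $|\nabla \psi|^2 = \frac{\pi}{g \ell} \, |\nabla \Psi|^2$ and $2 \eta = \frac{2 \pi}{g \ell} \, g \xi - 2 h$, and so
\[ |\nabla \psi (x, \eta (x))|^2 + 2 \eta (x) = \frac{2 \pi}{g \ell} \left[ \frac{1}{2} \, |\nabla \Psi (X, \xi (X))|^2 + g \, \xi (X) \right] - \frac{2 \pi H}{\ell} = \frac{2 \pi}{g \ell} \, (R - g H) = \frac{\pi c^2}{g \ell} \, , \]
which is \eqref{bep} with $\mu = \pi c^2 / (g \ell)$; the other relations follow immediately from \eqref{dlv}.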
In the non-dimensional Bernoulli equation, the parameter $\mu = \pi c^2 / (g \ell)$
is the Froude number squared which must be found along with $\eta$ and $\psi$.
Besides, $\mu / 2$ serves as an upper bound for $\eta$ independent of $h$, with
equality achieved only for the wave of extreme form with its Lipschitz crest; see
\cite{CN}. Thus, the non-dimensional statement of the problem is as follows.
\begin{definition}
{\rm Let $Q_0$ and $h$ be given positive numbers, then problem P$(Q_0,h)$ is to find
a triple $(\mu, \eta, \psi)$ from relations (\ref{lapp})--(\ref{bep}) so that $\mu$
is positive, $\eta$ satisfies condition \eqref{eta}, whereas all other properties of
$\eta$ and $\psi$ (smoothness, $2 \pi$-periodicity and symmetry) are as described above.}
\end{definition}
On the other hand, having a solution of problem P$(Q_0,h)$, formulae (\ref{dlv})
yield a $2 \ell$-periodic solution of problem (\ref{1})--(\ref{4}) for any $\ell >
0$, for which purpose one has to put $Q = Q_0 \sqrt{ g (\ell / \pi)^3}$ and to
determine $R$ from the equality $c^2 = 2 (R - g H)$ with $c^2 = \mu g \ell / \pi$
and $H = h \ell / \pi$.
\section{Babenko's equation}
The aim of this section is to derive a single nonlinear pseudo-differential equation
that has the same form as (\ref{bid}), but $\mathcal{C}$ is replaced by the sum of
$\mathcal{C}$ and some compact operator depending on a real parameter. The equation
is related to the family of problems P$(Q_0,h)$ in the following sense. The value of
the operator's parameter together with the equation's solution allows us to determine $h$ and
to obtain some solution of the water-wave problem.
\subsection{Derivation of Babenko's equation}
First, we follow considerations in Section~8 of \cite{N3}; see also the rather
recent paper \cite{Bod}. By $w (z) = \varphi + \ii \psi$ $(z = x + \ii y)$ we denote
the complex potential restricted to the one-wave domain $D_{2 \pi} = \{ -\pi < x <
\pi , -h < y < \eta (x) \}$ of some periodic wave on water of a certain depth $h$.
Here $\varphi (x, y)$ is the harmonic conjugate to $\psi$ that is odd in $x$; for this
purpose the additive constant must be chosen properly. For some $r \in (0,1)$ we
consider a conformal mapping, say $u (z)$, from the $z$-plane to the auxiliary
$u$-plane; it maps $D_{2 \pi}$ onto
\begin{equation}
A_r = \{ r < |u| < 1 ; \ \Re\,u \notin (-1 , -r) \ \mbox{when} \ \Im\,u =0 \} .
\label{A_r}
\end{equation}
This annular domain has a cut which makes it simply connected and the map is such
that the images of the upper and bottom parts of $\partial D_{2 \pi}$ are
\[ \{ |u|=1 ; \Re\,u \neq -1 \} \quad \mbox{and} \quad \{ |u|=r ; \Re\,u \neq -r \}
\]
respectively, whereas the left (right) side of $\partial D_{2 \pi}$ is mapped onto
the upper (lower respectively) side of the cut $\{ \Re\,u \in (-1 , -r) \
\mbox{when} \ \Im\,u =0 \}$. Putting
\begin{equation}
u = \E^{-\ii w} \quad \mbox{and} \quad \frac{\D z}{\D u} = \ii \left[ u^{-1} + f (u)
\right] \, , \label{zu}
\end{equation}
where $f (u)$ is a certain Laurent series, one obtains that
\begin{equation}
- \frac{\D w}{\D z} = \left[ 1 + u f (u) \right]^{-1} \, . \label{wu}
\end{equation}
In \cite{N3}, Section 8, this formula serves as the basis for deriving Nekrasov's
equation in the case of finite depth. An equivalent form of this equation is given
in \cite{Bod}; see equation (1.1) there.
According to the second equality \eqref{zu}, the general form of the inverse
conformal map\-ping $A_r \ni u \mapsto z \in D_{2 \pi}$ is as follows:
\begin{equation}
z (u) = \ii \Big[ \log u - a_0 + \sum_{k=1}^\infty a_k \left( u^k - r^{2 k} u^{-k}
\right) \Big] \, , \quad \mbox{where} \ a_k \in \RR , \ k = 0,1,2,\dots \, .
\label{z_u}
\end{equation}
Here we put a minus sign in front of $a_0$ because this is convenient in what follows. The fact that
all coefficients $a_k$ are real is a consequence of equality \eqref{wu} because
$\psi$ is equal to a real constant on the bottom part of $\partial D_{2 \pi}$ which
corresponds to $\{ |u|=r ; \Re\,u \neq -r \}$\,---\,the circumference cut on the
negative real axis.
Let us derive some relations for the coefficients from \eqref{z_u}. First, for $u=r$
we obtain
\begin{equation}
a_0 = h + \log r . \label{a_0}
\end{equation}
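Indeed, since $u = \E^{-\ii w}$ and $\psi = - Q_0$ on the bottom, the bottom is mapped onto the circumference $|u| = \E^{-Q_0} = r$, and the point $u = r$, where $\varphi = 0$, corresponds to the bottom point $z = - \ii h$ lying below the crest. All terms $u^k - r^{2 k} u^{-k}$ in \eqref{z_u} vanish at $u = r$, and so $- \ii h = \ii \, (\log r - a_0)$, which gives \eqref{a_0}.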
Substituting $u = \E^{\ii t}$, $t \in (- \pi, \pi)$, into \eqref{z_u} and separating
the real and imaginary parts, we arrive at the following parametric representation
of the free surface profile:
\begin{equation}
x (t) = - t - \sum_{k=1}^\infty a_k \left( 1 + r^{2 k} \right) \sin kt \, , \quad
\eta (t) = - a_0 + \sum_{k=1}^\infty a_k \left( 1 - r^{2 k} \right) \cos kt \, .
\label{x_eta_t}
\end{equation}
Now we see that another relation is equivalent to condition \eqref{eta} written in
terms of the last two expressions, namely:
\begin{equation}
\int_{-\pi}^\pi \!\! \eta (t) x' (t) \, \D t = 0 \quad \Longleftrightarrow \quad a_0 =
\frac{1}{2} \sum_{k=1}^\infty k \, a_k^2 \left( 1 - r^{4 k} \right) \, .
\label{etat}
\end{equation}
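Indeed, substituting \eqref{x_eta_t} and using the orthogonality of the system $\{\cos kt\}_{k \geq 0}$ on $(-\pi, \pi)$, one finds that
\[ \int_{-\pi}^\pi \!\! \eta (t) \, x' (t) \, \D t = 2 \pi a_0 - \pi \sum_{k=1}^\infty k \, a_k^2 \left( 1 - r^{2 k} \right) \left( 1 + r^{2 k} \right) , \]
and equating this expression to zero yields the second relation \eqref{etat}.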
It follows from the last equality that $a_0 > 0$ in the non-trivial case. Then
equality \eqref{a_0} shows that the value of $r$ is related not only to the depth
$h$, but also to a particular solution of problem P$(Q_0,h)$.
It is worth mentioning that both expressions \eqref{x_eta_t} are similar to those
for the infinite depth (cf.~\cite{OS}, Section~3.7, where Babenko's results are
outlined), and in that case, a consequence is the relation $x_t = - 1 - \mathcal{C}
\eta_t$ with
\[ (\mathcal{C} v) (t) = \frac{1}{2 \pi} \int_{-\pi}^\pi v (\tau) \cot \frac{t-\tau}{2}
\D \tau \, ,
\]
which is the form of the $2 \pi$-periodic Hilbert transform alternative to formulae
\eqref{HT}.
The crucial point for obtaining a similar relation in the case of finite depth is to
introduce the operator $\mathcal{B}_r = \mathcal{C} + \mathcal{K}_r$ for $r \in (0,
1)$, where
\begin{equation}
(\mathcal{K}_r v) (t) = \frac{2}{\pi} \int_{-\pi}^\pi v (\tau) K_r (t-\tau) \, \D
\tau \quad \mbox{with} \ \ K_r (t-\tau) = \sum_{n=1}^\infty \frac{r^{2 n}}{1 - r^{2
n}} \sin n (t-\tau) . \label{K_r}
\end{equation}
It is straightforward to calculate that $\mathcal{B}_r$ can also be defined on $L^2
(-\pi, \pi)$ by linearity from the following relations
\begin{equation}
\mathcal{B}_r (\cos n t) = \frac{1 + r^{2 n}}{1 - r^{2 n}} \sin n t \ \ \mbox{for} \
n \geq 0 , \quad \mathcal{B}_r (\sin n t) = - \frac{1 + r^{2 n}}{1 - r^{2 n}} \cos n
t \ \ \mbox{for} \ n \geq 1 \label{HTB_r}
\end{equation}
that are similar to \eqref{HT}. Combining these formulae and \eqref{x_eta_t} yields
that
\begin{equation}
x_t = - 1 - \mathcal{B}_r \eta_t \quad \mbox{for} \ t \in (- \pi, \pi) \, .
\label{30}
\end{equation}
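This follows by applying \eqref{HTB_r} term by term to the second formula \eqref{x_eta_t}:
\[ \mathcal{B}_r \eta_t = - \sum_{k=1}^\infty k \, a_k \left( 1 - r^{2 k} \right) \mathcal{B}_r (\sin kt) = \sum_{k=1}^\infty k \, a_k \left( 1 + r^{2 k} \right) \cos kt = - 1 - x_t \, . \]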
An important fact about the operator $\mathcal{B}_r$ is that it is a conjugation in
the following sense. If $F (u)$ is analytic in $A_r$ and $\Im F$ vanishes
identically on $\{ |u|=r ; \Re\,u \neq -r \}$, then
\begin{equation}
\Re F (\E^{\ii t}) + [ \mathcal{B}_r (\Im F) ] (t) = 0 \quad \mbox{for all} \ t
\in (- \pi, \pi) . \label{32}
\end{equation}
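For instance, \eqref{32} is easily verified for $F (u) = u^n + r^{2 n} u^{-n}$ with $n \geq 1$: this function is real on $\{ |u| = r \}$, while on $\{ |u| = 1 \}$ its real and imaginary parts are $(1 + r^{2 n}) \cos n t$ and $(1 - r^{2 n}) \sin n t$ respectively, so that $\mathcal{B}_r (\Im F) = - (1 + r^{2 n}) \cos n t = - \Re F$ by \eqref{HTB_r}; the functions $\ii \, (u^n - r^{2 n} u^{-n})$ are treated in the same way. Let us emphasise that the constant term in the Laurent expansion of $F$ has to vanish here, which is the reason why \eqref{32} is applied below to $z_\varphi^{-1} - \mu$ rather than to $z_\varphi^{-1}$ itself.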
Let us calculate the derivative $z_\varphi$ of the mapping inverse to the complex
potential. In view of the first equality \eqref{zu}, we have
\[ z_\varphi = z_u \, u_\varphi = - \ii z_u \, \E^{-\ii w} w_\varphi = - \ii u \, z_u .
\]
Combining this and \eqref{z_u}, we obtain that
\begin{equation}
z_\varphi = 1 + \sum_{k=1}^\infty k a_k \left( u^k + r^{2 k} u^{-k} \right) \, ,
\label{33}
\end{equation}
and the function on the right-hand side is analytic in $A_r$. Since $z_\varphi$ does
not vanish in the closure of $A_r$, we have that $z_\varphi^{-1} = |\nabla
\varphi|^2 \, \overline{z_\varphi}$ is also analytic in $A_r$. Moreover, the
Bernoulli equation \eqref{bep} implies that
\begin{equation}
z_\varphi^{-1} = (\mu - 2 \eta) (x_\varphi - \ii y_\varphi) = (\mu - 2 \eta) (\ii
\eta_t - x_t) \quad \mbox{when} \ u = \E^{\ii t} ,
\label{34}
\end{equation}
(cf.~formula (3.38) in \cite{OS}). Here the second equality is a consequence of the
Cauchy--Riemann equations. Then equality \eqref{30} yields that
\begin{equation}
z_\varphi^{-1} = (\mu - 2 \eta) (1 + \mathcal{B}_r \eta_t + \ii \eta_t) \quad
\mbox{for} \ t \in (-\pi, \pi) .
\label{36}
\end{equation}
It follows from previous considerations that the constant in the Laurent expansion
of $z_\varphi^{-1}$ is equal to $\mu$. Furthermore, $\Im \{z_\varphi^{-1} - \mu\}$
vanishes identically on $\{ |u|=r ; \Re\,u \neq -r \}$, which allows us to apply
formula \eqref{32} to the function $z_\varphi^{-1} - \mu$, whose trace on $\{u =
\E^{\ii t}\}$ is equal to
\[ (\mu - 2 \eta) \, \mathcal{B}_r (\eta') - 2 \eta + \ii (\mu - 2 \eta) \eta' .
\]
Here again $'$ stands for differentiation with respect to $t$. Thus, we arrive at
\[ (\mu - 2 \eta) \, \mathcal{B}_r (\eta') - 2 \eta + \mathcal{B}_r \, [(\mu - 2 \eta)
\eta'] = 0 \quad \mbox{for} \ t \in (-\pi, \pi)\, ,
\]
which simplifies to Babenko's equation for waves on water of finite depth:
\begin{equation}
\mu \, \mathcal{B}_r (\eta') = \eta + \eta \, \mathcal{B}_r (\eta') + \mathcal{B}_r \,
(\eta' \eta) \quad \mbox{for} \ t \in (-\pi, \pi) .
\label{37}
\end{equation}
This equation is similar to \eqref{bid} and the derivation procedure yields that for
each $r \in (0, 1)$ it is related to some solution of problem P$(Q_0,h)$.
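The relations \eqref{HTB_r} show that $\mathcal{B}_r$ acts on Fourier modes as a
multiplier, and this observation underlies the numerical method of Section~4. For
readers who wish to experiment with the operator, the following Python sketch is
purely illustrative (the function name, the grid size and the use of NumPy's real
FFT are our own choices and are not part of the analysis); it applies
$\mathcal{B}_r$ to samples of a $2\pi$-periodic function and checks the first
relation \eqref{HTB_r}.
\begin{verbatim}
import numpy as np

def apply_B_r(v, r):
    # By the relations defining B_r, it acts on the mode exp(i n t), n >= 1, as
    # multiplication by -i * (1 + r**(2n)) / (1 - r**(2n)); the mean is sent to 0.
    N = v.size
    c = np.fft.rfft(v)                       # coefficients for n = 0, ..., N//2
    n = np.arange(c.size)
    m = np.zeros(c.size, dtype=complex)
    m[1:] = -1j * (1 + r**(2*n[1:])) / (1 - r**(2*n[1:]))
    return np.fft.irfft(m * c, n=N)

# check: B_r(cos 3t) = (1 + r**6)/(1 - r**6) * sin 3t
r, N = 0.8, 256
t = -np.pi + 2*np.pi*np.arange(N)/N
err = apply_B_r(np.cos(3*t), r) - (1 + r**6)/(1 - r**6)*np.sin(3*t)
print(np.max(np.abs(err)))                   # of the order of machine precision
\end{verbatim}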
\subsection{Local bifurcation branches for Babenko's equation}
To show the existence of small solutions of equation \eqref{37}, bifurcating from
the zero solution, we apply the Crandall--Rabinowitz theorem (see \cite{CR},
Theorem~1.7) that deals with bifurcation from simple eigenvalues of the linearised
equation; its formulation is as follows.
\begin{theorem}
Let ${\cal X}$, ${\cal Y}$ be real Banach spaces with the continuous embedding
${\cal X} \subset {\cal Y}$. If a continuous map ${\cal F} (\mu, v): \RR \times
{\cal X} \mapsto {\cal Y}$ has the following properties:
\vspace{2mm}
{\rm (i)} the equality ${\cal F} (\mu, 0) = 0$ holds for all $\mu \in \RR$,
{\rm (ii)} the operators ${\cal F}_\mu$, ${\cal F}_v$ and ${\cal F}_{\mu v}$ exist
and are continuous,
{\rm (iii)} for some $\mu^*$ the operator ${\cal F}_v (\mu^*, 0)$ is a Fredholm one
with zero index and its null-space is one-dimensional,
{\rm (iv)} if the null-space of ${\cal F}_v (\mu^*, 0)$ is generated by $v^{(0)}$,
then ${\cal F}_{\mu v} (\mu^*, 0) \, v^{(0)}$ does not belong to the range of ${\cal
F}_v (\mu^*, 0)$.
\vspace{2mm}
\noindent Then a sufficiently small $\varepsilon > 0$ exists and a continuous curve
\[ \{ (\mu (s), \, v (s)) : |s| < \varepsilon \} \subset \RR \times {\cal X} ,
\]
bifurcates from $(\mu^*, 0);$ for pairs belonging to this curve
\[ \mu (s) = \mu^* + o (s) \quad \mbox{and} \quad v (s) = s \, v^{(0)} + o (s)
\quad \mbox{when} \ 0 < |s| < \varepsilon .
\]
Moreover, if ${\cal F}_{v v}$ is continuous, then the curve is of class $C^1$.
\end{theorem}
As in \cite{BDT1}, we say that a real-valued function $v$ belongs to the Sobolev
space $H_0$ provided it is absolutely continuous on $[-\pi, \pi]$, $v (-\pi) = v
(\pi)$, and its weak derivative $v'$ belongs to $L^2 (-\pi, \pi)$. Let $\hat H_0$ be
the subspace of $H_0$ consisting of even functions.
In terms of the map ${\cal F}: \RR \times \hat H_0 \mapsto L^2 (-\pi, \pi)$ defined
by
\begin{equation}
{\cal F} (\mu, v) = \mu \mathcal{B}_r (v') - v - v \, \mathcal{B}_r (v') -
\mathcal{B}_r \, (v' v) ,
\label{calF}
\end{equation}
equation \eqref{37} takes the following form:
\begin{equation}
{\cal F} (\mu, v) = 0 , \quad (\mu, v) \in \RR \times \hat H_0 .
\label{eqF}
\end{equation}
Let us apply the Crandall--Rabinowitz theorem to this equation to obtain local
branches of Stokes-wave solutions of small amplitude, for which purpose we have to
check conditions (i)--(iv) for ${\cal F} (\mu, v)$.
It is clear that (i) and (ii) are fulfilled and ${\cal F}_{v} (\mu, 0) = \mu
\mathcal{B}_r \, (\D / \D t) - I$, where $I$ is the identity operator. Hence the set
of bifurcation points of equation \eqref{eqF} is $\{ \mu_n \}_{n=1}^\infty$, where
\begin{equation}
\mu_n = \frac{1 - r^{2n}}{n (1 + r^{2n})} , \quad n=1,2,\dots \, ,
\label{lambda_n}
\end{equation}
are the characteristic values of $\mathcal{B}_r \, (\D / \D t)$. Furthermore, ${\cal
F}_{v} (\mu_n, 0)$ is a Fredholm operator, its index is equal to zero for every
$\mu_n$, and the corresponding null-space in $\hat H_0$ is one-dimensional being
generated by $v^{(0)}_n (t) = \cos n t$, thus yielding condition (iii). Since ${\cal
F}_{\mu v} (\mu_n, 0) = \mathcal{B}_r \, (\D / \D t)$, we see that ${\cal F}_{\mu v}
(\mu_n, 0) \, v^{(0)}_n (t) = \mu_n^{-1} \cos n t$, and so condition (iv) for
$n=1,2,\dots$ is a consequence of the fact that the equation
\[ \mu_n \mathcal{B}_r (v') - v = \cos n t
\]
has no solution (the nonzero constant factor $\mu_n^{-1}$ on the right-hand side
clearly does not affect solvability). Indeed, a solution of this equation exists if
and only if its right-hand side is orthogonal to the null-space of the adjoint operator
\[ [\mu_n \mathcal{B}_r \, (\D / \D t) - I]^* = \mu_n (\D / \D t) \mathcal{B}_r - I .
\]
Since its null-space is one-dimensional and generated by $\cos n t$, the
orthogonality condition does not hold for $\cos n t$. This completes verification
of condition (iv).
Then the Crandall--Rabinowitz theorem yields the following.
\begin{theorem}
For every $n=1,2,\dots$ there exists $\varepsilon_n > 0$ such that for $0 < |s| <
\varepsilon_n$ there is the family $\big( \mu_n^{(s)}, \, v_n^{(s)} \big)$ of
Stokes-wave solutions to equation \eqref{eqF}. Together with the bifurcation point
$(\mu_n , 0)$, where $\mu_n$ is given by formula \eqref{lambda_n}, the points of
this family form the continuous curve
\[ C_n = \big\{ \big( \mu_n^{(s)}, \, v_n^{(s)} (t) \big) : |s| < \varepsilon_n
\big\} \subset \RR \times \hat H_0 , \quad n=1,2,\dots \, .
\]
Moreover, the asymptotic formulae
\begin{equation}
\mu_n^{(s)} = \mu_n + o (s) \, , \quad v_n^{(s)} (t) = s \cos n t + o (s)
\label{s_a}
\end{equation}
hold for these solutions as $|s| \to 0$. Finally, each curve $C_n$ is of class
$C^1$.
\end{theorem}
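The bifurcation values \eqref{lambda_n} quoted in the figure captions below are
easily reproduced; the following lines are purely illustrative (the function name is
ours) and evaluate \eqref{lambda_n} for the cases plotted in Figs~1, 7 and 9.
\begin{verbatim}
def mu_n(n, r):
    # bifurcation points of Babenko's equation for water of finite depth
    return (1 - r**(2*n)) / (n * (1 + r**(2*n)))

print(mu_n(1, 0.8))                       # 0.2195121951..., cf. Fig. 1
print(mu_n(3, 0.8))                       # 0.1948684143..., cf. Fig. 9
print([mu_n(n, 0.0) for n in (2, 3, 4)])  # [0.5, 0.3333..., 0.25], cf. Fig. 7
\end{verbatim}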
\begin{figure}
\vspace{-4mm}
\begin{center}
\SetLabels
\L (0.5*0.01) $\mu$\\
\L (0.56*0.41) $C_1$\\
\L (-0.06*0.5) $\| v \|_\infty$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=68mm]{B1_Diagram_N=2048_r=0,8.eps}}
\end{center}
\vspace{-3mm} \caption{The branch of solutions of equation \eqref{37} with $r=4/5$,
bifurcating from the zero solution at $\mu_1 (4/5) = 0.219512195122$. The upper bound
mentioned prior to Definition~1 is also included.} \vspace{-4mm}
\label{fig:1}
\end{figure}
The last assertion is a consequence of the obvious fact that ${\cal F}_{v v}$ is
continuous. This theorem is illustrated in Fig.~1, which shows a plot of the
bifurcation branch $C_1$ in terms of $\mu$ and the norm $\|v\|_\infty$ of the
solution in the space $L^\infty (-\pi, \pi)$. The plotted branch bifurcating from
$\mu_1 (4/5)$ has no secondary bifurcation points, similarly to the analogous branch
for equation \eqref{bid}; see \cite{BDT1,BDT2} for the rigorous proof and detailed
discussion.
Moreover, it exhibits the phenomenon of a turning point at the largest value of
$\mu$ attained on $C_1$, occurring high on the branch; see further details in
Section~4.3. (The fastest traveling wave of given period corresponds to this point.)
By means of a different method this property was demonstrated by \cite{CN}, whereas
our method shows that it also takes place for equation \eqref{bid} on the branch
bifurcating from $\mu_1 (0)$. This phenomenon is related to the `Tanaka instability'
found numerically in \cite{Tan}, and later investigated analytically in \cite{Saf}.
\subsection{Solutions of Babenko's equation define periodic waves}
Let us outline a procedure demonstrating how to obtain a solution of problem
\eqref{lapp}--\eqref{bep} from that of Babenko's equation; that is, if equation
\eqref{37} with $r \in (0, 1)$ is satisfied by some $\mu > 0$ and an even function
$v$ (the existence of such pairs\,---\,at least in the form
\eqref{s_a}\,---\,follows from Theorem~2), then one can find the following:
(1) a $2 \, \pi$-periodic, symmetric curve with zero mean and a negative number
$-h$, which define the wave profile and the level of horizontal bottom,
respectively, thus giving a one-period water domain, say $\Omega$, on the $(x,
y)$-plane;
(2) a function $\psi$ harmonic in $\Omega$ and vanishing on its top side and two
positive constants serving as the right-hand side terms in the boundary conditions
\eqref{bcp} and \eqref{bep}.
Suppose that we have an even, $2 \pi$-periodic solution $v$ of equation \eqref{37}, whose
Fourier coefficients we denote $b_0, b_1, b_2, \dots$ to distinguish these
coefficients from those in \eqref{z_u}, and let the periodic extension of $v$ to
$\RR$ be real-analytic. Using these coefficients, we define the following
holomorphic function on $A_r$:\\[-3mm]
\begin{equation}
z (u) = \ii \Big[ \log u - B + \sum_{k=1}^\infty b_k \left( u^k - r^{2 k} u^{-k}
\right) \Big] \, . \label{z_u'}
\end{equation}
Here $B$ is a real number that will be determined below in terms of the Fourier
coefficients of $v$. Let us consider the images that correspond under this mapping
to the curves and segments of $\partial A_r$. First, we see that $z (\E^{\ii t}) = x
(t) + \ii y (t)$ for $t \in (-\pi, \pi)$, where\\[-3mm]
\begin{equation}
x (t) = - t - \sum_{k=1}^\infty b_k \left( 1 + r^{2 k} \right) \sin kt \, , \ \ y
(t) = - B + \sum_{k=1}^\infty b_k \left( 1 - r^{2 k} \right) \cos kt \, .
\label{x_y}
\end{equation}
Since this curve given parametrically serves as the upper part of $\partial \Omega$,
we require its mean value to vanish. This gives that\\[-3mm]
\begin{equation}
B = \frac{1}{2} \sum_{k=1}^\infty k b_k^2 \left( 1 - r^{4 k} \right) \, ,
\label{d}
\end{equation}
where the series converges because the Fourier coefficients of the real-analytic $v$
decay faster than any power of $k$.
Now we are in a position to determine the mean depth of flow $h$. In view of
symmetry we have that $z (r)$ is the mid-point of the bottom; that is, $z (r) = -
\ii h$. Then putting $u=r$ into \eqref{z_u'}, we find that\\[-3mm]
\begin{equation}
h = B - \log r = \frac{1}{2} \sum_{k=1}^\infty k b_k^2 \left( 1 - r^{4 k} \right) -
\log r \, , \label{h}
\end{equation}
and so is positive; here the last equality is a consequence of \eqref{d}. Thus, the
second expression \eqref{x_y} takes the following form:\\[-3mm]
\begin{equation}
y (t) = - (h + \log r) + \sum_{k=1}^\infty b_k \left( 1 - r^{2 k} \right) \cos kt \,
. \label{_y}
\end{equation}
Hence the curve $z_s = \{ x = x (t) = - t - (\mathcal{B}_r \, y) (t) , \ \ y = y
(t) ; \ \ t \in [-\pi, \pi] \}$ has the zero mean value. Here, the first formula
\eqref{HTB_r} is applied to express $x (t)$ in terms of $y (t)$.
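The reconstruction just described is straightforward to carry out numerically. The
following Python sketch is illustrative only (the function name and the sample
coefficients are hypothetical): it computes $B$ and $h$ from \eqref{d} and \eqref{h}
and evaluates the parametrisation \eqref{x_y} of the free surface; note that $b_0$
does not enter these formulae.
\begin{verbatim}
import numpy as np

def surface_curve(b, r, t):
    # b[1], b[2], ... are Fourier coefficients of a solution v of Babenko's
    # equation; B and h follow from the zero-mean and mid-bottom conditions
    k = np.arange(1, len(b))
    bk = np.asarray(b[1:], dtype=float)
    B = 0.5 * np.sum(k * bk**2 * (1 - r**(4*k)))
    h = B - np.log(r)
    x = -t - np.sum((bk*(1 + r**(2*k)))[:, None]*np.sin(np.outer(k, t)), axis=0)
    y = -B + np.sum((bk*(1 - r**(2*k)))[:, None]*np.cos(np.outer(k, t)), axis=0)
    return x, y, h

# hypothetical small-amplitude data on the branch C_1 (b_1 = 0.05, others zero)
t = np.linspace(-np.pi, np.pi, 513)
x, y, h = surface_curve([0.0, 0.05], 0.8, t)
\end{verbatim}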
Furthermore, we have\\[-3mm]
\begin{equation}
z (|u| \E^{\pm \ii \pi}) = \mp \pi + \ii \Big[ \log (|u|/r) - h + \sum_{k=1}^\infty
(-1)^k b_k \left( u^{2 k} - r^{2 k} \right) \! / |u|^{k} \Big] \quad \mbox{for} \ u
\in [-1, -r] \, , \label{right}
\end{equation}
thus obtaining two vertical segments $z_-$ and $z_+$ on the lines $x = -\pi$ and $x
= \pi$ respectively.
Taking into account \eqref{d} and \eqref{h}, we see that\\[-3mm]
\begin{equation}
z (r \E^{\ii t}) = - \ii h - t - 2 \sum_{k=1}^\infty b_k r^k \sin k t \quad
\mbox{for} \ t \in [-\pi, \pi] \label{horiz}
\end{equation}
on the inner circumference. This defines a horizontal segment $z_b$ on the line $y =
-h$.
\begin{figure}
\begin{center}
\leavevmode \strut\AffixLabels{\includegraphics[width=48mm]{annulus_with_cut.eps}}
\end{center}
\vspace{-2mm} \caption{A sketch of the annular domain $A_r$ with several points on
its boundary marked in the counter-clockwise order.}
\label{fig:2}
\end{figure}
It is clear that the curve $\Gamma = z_+ \cup z_s \cup z_- \cup z_b$ constructed
above is closed and one can check (for example, numerically) that the set $\Omega$
enclosed within $\Gamma$ is a domain. The next step is to show that $z (u)$ defined
with the help of the Fourier coefficients of $v$ is a conformal mapping of $A_r$
onto $\Omega$. For this purpose one can use the boundary correspondence principle;
its form relevant for our case (see, for example, \cite{E}, Chapter~5, Theorem~1.3)
is formulated for the convenience of the reader.
\begin{theorem}
[The boundary correspondence principle] Let $D$ and $D^*$ be two bounded simply
connected domains with piecewise smooth boundaries and let $f$ be holomorphic in $D$
and continuous in $\bar D$. If $f (p)$ parametrises $\partial D^*$ counter-clockwise
provided $p$ is a counter-clockwise parametrisation of $\partial D$, then $f$ is a
conformal mapping of $D$ onto~$D^*$.
\end{theorem}
According to this theorem $z (u)$ maps $A_r$ onto $\Omega$ conformally provided one
can show (for example, numerically) that the map $\partial A_r \ni u \mapsto z \in
\Gamma$ is a homeomorphism. Moreover, condition \eqref{eta} is fulfilled for $\eta
(x) = y (t (x))$; here $t (x)$ is the inverse of $x = - t - (\mathcal{B}_r \, y)
(t)$, existing provided the curve $z_s$ is not self-intersecting. Thus, the curve $y
= \eta (x)$ defines the upper side of $\Omega$.
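A numerical check of the boundary correspondence amounts to evaluating $z (u)$ of
\eqref{z_u'} on the four parts of $\partial A_r$ and inspecting the orientation of
the images. A minimal sketch follows (again with hypothetical coefficients; the
function name is ours).
\begin{verbatim}
import numpy as np

def z_map(u, b, r):
    # the holomorphic function built from the Fourier coefficients b[1], b[2], ...
    u = np.asarray(u, dtype=complex)
    k = np.arange(1, len(b))
    bk = np.asarray(b[1:], dtype=float)
    B = 0.5 * np.sum(k * bk**2 * (1 - r**(4*k)))
    series = np.sum(bk[:, None] * (u[None, :]**k[:, None]
                    - (r**(2*k))[:, None] * u[None, :]**(-k[:, None])), axis=0)
    return 1j * (np.log(u) - B + series)

b, r = [0.0, 0.05], 0.8                      # illustrative data
t = np.linspace(-np.pi + 1e-6, np.pi - 1e-6, 401)
z_s = z_map(np.exp(1j*t), b, r)              # image of the outer circumference
z_b = z_map(r*np.exp(1j*t), b, r)            # image of the inner circumference
# the vertical sides are the images of the two edges of the cut, u in [-1, -r]
\end{verbatim}
Marking several points on $\partial A_r$ and on their images, as in Figs~2 and 3,
then allows one to verify that the counter-clockwise order is preserved.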
\begin{figure}
\begin{center}
\SetLabels
\L (0.58*0.66) $\Gamma$\\
\L (0.4*0.3) $\Omega$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=64mm]{solution.eps}}
\end{center}
\vspace{-3mm} \caption{The curve $\Gamma$ corresponding to $\partial A_r$ through
the mapping $z (u)$ defined by \eqref{z_u'} and \eqref{d}, where the sequence
$\{b_k\}_{k=0}^\infty$ consists of the Fourier coefficients of $v$. The latter
solves \eqref{37} with $r=4/5$ and $\mu \approx 0.32671$, and $(\mu, v)$ belongs to
the branch bifurcating from $\mu_1$. The marked points on $\Gamma$ correspond to
those having the same numbers on $\partial A_r$ in Fig.~2. The mean depth of the
one-wave domain $\Omega$ is $h \approx 0.22739$, whereas the wave amplitude is
approximately equal to 0.17326.} \vspace{-6mm}
\label{fig:3}
\end{figure}
Figs~2--6 illustrate how the boundary correspondence principle works numerically in
recovering Stokes waves from solutions of \eqref{37}. We consider the equation with
$r=4/5$ and take the solution $(\mu, v)$ with $\mu \approx 0.32671$. This solution
belongs to the branch bifurcating from $\mu_1$ (equal to $0.219512195122$ for
$r=4/5$), and this value of $\mu$ is close to the critical one on this
branch (see Fig.~1 and Fig.~8). Substituting the Fourier coefficients of $v$ into
\eqref{z_u'} and \eqref{d}, we define $z (u)$ which is holomorphic in $A_r$ and maps
$\overline {A_r}$ onto $\overline \Omega$ continuously; the latter set is the
closure of the prospective one-wave domain. To demonstrate that $z (u)$ is a
conformal mapping we choose several points on $\partial A_r$, numbering them
counter-clockwise (see Fig.~2), and calculate their images on $\partial \Omega$,
assigning to each the same number as the object point has on $\partial A_r$. It
occurs that the images are also numbered counter-clockwise in agreement with the
boundary correspondence principle (see Fig.~3).
To be sure that the counter-clockwise boundary correspondence is not violated
between the chosen points, we provide three more figures. In Fig.~4, the graph
of\\[-3mm]
\begin{equation}
x_h (t) = - t - 2 \sum_{k=1}^\infty b_k r^k \sin k t \label{horiz'}
\end{equation}
is plotted for $r=4/5$ and $t$ varying from 0 to $\pi$ (this parametrises the upper
half of the inner circumference clockwise provided it is considered as a part of
$\partial A_r$; see Fig.~2). According to \eqref{horiz}, this gives the left-hand
half of the bottom shown in Fig.~3 also parametrised clockwise. Since
\eqref{horiz'} is a monotonic function, there is no violation of the boundary
correspondence on the bottom because, by symmetry, it is sufficient to check this on
its right-hand half only.
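The same monotonicity can also be confirmed directly from the derivative of
\eqref{horiz'}; a short illustrative computation (with hypothetical coefficients,
whereas in practice one uses the computed $b_k$) is
\begin{verbatim}
import numpy as np

# derivative of the bottom parametrisation: x_h'(t) = -1 - 2*sum_k k b_k r^k cos(kt)
b, r = [0.0, 0.05], 0.8                      # illustrative data
k = np.arange(1, len(b)); bk = np.asarray(b[1:], dtype=float)
t = np.linspace(0.0, np.pi, 401)
dxh = -1.0 - 2.0*np.sum((k*bk*r**k)[:, None]*np.cos(np.outer(k, t)), axis=0)
print(bool(dxh.max() < 0.0))                 # True: x_h decreases strictly on (0, pi)
\end{verbatim}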
\begin{figure}
\vspace{-4mm}
\begin{center}
\SetLabels
\L (0.5*0.02) $t$\\
\L (0.0*0.5) $x_h$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=68mm]{half_bottom.eps}}
\end{center}
\vspace{-4mm} \caption{The graph of \eqref{horiz'} with $r=4/5$; its monotonicity
confirms that the boundary correspondence is not violated on the bottom part of
$\Gamma$.} \vspace{-4mm}
\label{fig:4}
\end{figure}
\begin{figure}
\begin{center}
\SetLabels
\L (0.486*0.0) $|u|$\\
\L (0.0*0.5) $y_+$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=68mm]{right_vertical.eps}}
\end{center}
\vspace{-2mm} \caption{The graph of \eqref{right'} with $r=4/5$; its monotonicity
confirms that the boundary correspondence is not violated on the right-hand side of
$\Gamma$.} \vspace{-4mm}
\label{fig:5}
\end{figure}
In Fig.~5, the graph of
\begin{equation}
y_+ (u) = \log (|u|/r) - h + \sum_{k=1}^\infty (-1)^k b_k \left( u^{2 k} - r^{2 k}
\right) \! / |u|^{k} \label{right'}
\end{equation}
is plotted for $r=4/5$ and $u$ varying from $-r$ to $-1$ (this parametrises the lower
side of the cut counter-clockwise provided it is considered as a part of $\partial
A_r$; see Fig.~2). According to \eqref{right}, this gives the right-hand side of
$\Gamma$ shown in Fig.~3. Since \eqref{right'} is a monotonic function, there is no
violation of the boundary correspondence on the right-hand side of $\Gamma$.
\begin{figure}
\vspace{-4mm}
\begin{center}
\SetLabels
\L (0.5*0.0) $t$\\
\L (0.02*0.5) $x$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=68mm]{half_surface.eps}}
\end{center}
\vspace{-4mm}
\caption{The graph of the first function \eqref{x_y} with $r=4/5$; its monotonicity
confirms that the boundary correspondence is not violated on the left-hand half of
the upper part of $\Gamma$.} \vspace{-4mm}
\label{fig:6}
\end{figure}
Finally, the graph of the first function \eqref{x_y} is plotted in Fig.~6 for
$r=4/5$ and $t$ varying from $0$ to $\pi$ (this parametrises the upper half of the
exterior circumference of $\partial A_r$ counter-clockwise; see Fig.~2). According
to the first equation \eqref{x_y}, this parametrises the left-hand part of the upper
side of $\Gamma$ shown in Fig.~3. Since \eqref{x_y} is a monotonic function, there
is no violation of the boundary correspondence on this part of $\Gamma$.
It remains to check that $\Omega$ is a one-wave domain for some Stokes wave; that
is, there exists a stream function $\psi$ defined on $\overline \Omega$ so that it
satisfies conditions \eqref{bcp}--\eqref{bep} with some constant serving as the
right-hand side term in \eqref{bcp}, whereas $\mu$ stands in \eqref{bep}. For this
purpose we map $\Omega$ conformally on an auxiliary rectangle
\[ R^* = \{ (\varphi^*, \psi^*) : -\pi < \varphi^* < \pi , -\psi_0 < \psi^* < 0 \}
\]
so that the images of $z_s$ and $z_b$ are the top and bottom parts of $\partial R^*$
respectively, whereas the value $\psi_0 > 0$ will be chosen later. Thus, there
are harmonic functions $\varphi^*$ and $\psi^*$ defined on $\Omega$, and for every
$\psi_0$ the image of $R^*$ under the mapping $\E^{-\ii (\varphi^* + \ii \psi^*)}$
is the annular domain $A_\rho$ with some $\rho$. It is clear that the value of
$\rho$ decreases from unity to zero as $\psi_0$ characterising $R^*$ increases from
zero to infinity. Requiring $\rho$ to be equal to $r$, we fix the value of $\psi_0$,
thus determining $\psi^*$ which, in its turn, gives the constant value $-Q_*$ that
stands on the right-hand side of condition \eqref{bcp}; here the sign is chosen so
that $Q_*$ is positive. It should be noted that this procedure guarantees that
condition \eqref{kcp} is also fulfilled. It remains to use $\varphi^*$ and $\psi^*$
for determining $\psi$ so that it satisfies condition \eqref{bep} along with
\eqref{bcp} and \eqref{kcp}.
Using the Fourier coefficients $b_1, b_2, \dots$ of $v$ in formula \eqref{33}, we
obtain the function $z_{\varphi^*}$ holomorphic in $A_r$ and non-vanishing there.
According to equation \eqref{37}, we have that
\[ \left[ \{1 - 2 \mu^{-1} y (u)\} \overline{z_{\varphi^*} (u)} - 1 \right]_{|u|=1}
\]
is the limit as $|u| \to 1$ of some holomorphic function given in $A_r$ and having
its imaginary part equal to zero on $\partial A_r \cap \{ |u| = r \}$. Besides, the
same property holds for $z_{\varphi^*}$, and so it is also true for the function
whose limit as $|u| \to 1$ is equal to
\[ \left[ \{1 - 2 \mu^{-1} y (u)\} |z_{\varphi^*} (u)|^2 \right]_{|u|=1} \, .
\]
Therefore, we have that\\[-3mm]
\[ 1 - 2 \mu^{-1} \eta (x) = q^2 |\nabla \psi^* (x, \eta (x))|^2 , \quad x \in (-\pi,
\pi) ,
\]
with some $q > 0$ and $\eta$ defined above. For $\psi = q \sqrt \mu \, \psi^*$ the
last relation coincides with \eqref{bep}.
Thus, the triple $(\mu, \eta, \psi)$ satisfies problem P$(Q_0,h)$ with $h$ defined
by \eqref{h}, whereas $Q_0 = q \sqrt \mu \, Q_*$ and $Q_*$ depends on $r$ implicitly
as described above. This completes the description of a procedure for obtaining a
solution of problem \eqref{lapp}--\eqref{bep} from the given solution of Babenko's
equation.
\section{Numerical solution of Babenko's equation}
In this section, we describe a numerical method for solving equation \eqref{37} in
the class of even, periodic functions on $(-\pi, \pi)$. The existence of small
solutions of this kind is proved in Section~3.2, whereas general solutions are
discussed in Section~5. The essence of our method is to calculate the solution's
Fourier coefficients $b_0, b_1, \ldots$, which allows us to restore the conformal
mapping $z(u)$ (see Section~3.3), thus demonstrating numerically the equivalence of
Babenko's equation and problem P$(Q_0,h)$.
\subsection{Transformation of \eqref{37} to a form convenient for discretisation}
Let $r \in [0, 1)$ be fixed; then $J_r = \mathcal B_r \D / \D t$ is a self-adjoint
operator on the space $L^2_{per} (-\pi, \pi)$ of $2 \pi$-periodic, square integrable functions.
Its domain is $H_0$ (see Section~3.2 for the definition), and it can also be defined
by linearity from $J_r \cos n t = \lambda_n \cos nt$ for $n=0,1,\dots$ and $J_r \sin
n t = \lambda_n \sin nt$ for $n=1,2,\dots$; here the eigenvalues are $\lambda_n =
\mu_n^{-1}$ for $n \geq 1$ and $\lambda_0 = 0$; cf. \eqref{lambda_n}. Since the
corresponding eigenfunctions form a basis in $L^2 (-\pi, \pi)$, the following
spectral decomposition holds:\\[-3mm]
\begin{equation}
J_r = \sum_{n = 1}^{\infty} \lambda_n ( \hat P_n + \tilde P_n ) .
\label{spect_decomp}
\end{equation}
Here $\hat P_n$ $(\tilde P_n)$ is the projector onto the subspace spanned by $\cos
nt$ ($\sin nt$, respectively).
Seeking solutions of \eqref{37} in $\hat H_0$, it is convenient to write the
equation in an equivalent form to accelerate numerical calculations. This form
arises after replacing $J_r$ in \eqref{37} by the right-hand side of
\eqref{spect_decomp} with the terms $\tilde P_n$ omitted, which is possible in view of the
bijection between $\hat H_0$ and the Sobolev space $W^{1,2} (0, \pi)$; indeed, for
every $w \in W^{1,2} (0, \pi)$ its even extension $v$ belongs to $\hat H_0$ and vice
versa. Therefore, it is convenient to put $\mathcal J_r = \sum_{n = 1}^{\infty}
\lambda_n P_n$, where $P_n$ is the projector onto the subspace of $L^2 (0, \pi)$
spanned by $\cos nt$. Then $\mathcal J_r$ is defined on $W^{1,2} (0, \pi)$ and
$\mathcal J_r w = J_r v (= \mathcal B_r v')$ almost everywhere on $(0, \pi)$, and so
\eqref{37} takes the following equivalent form\\[-3mm]
\begin{equation}
\mu \mathcal J_r w = w + w \mathcal J_r w + \frac 12 \mathcal J_r (w^2) \, , \quad t
\in (0, \pi) \, , \label{spectralBabenko}
\end{equation}
where $w (t)$ is sought in $W^{1,2} (0, \pi)$. To solve this equation numerically, a
modified version of the software SpecTraVVave is applicable; the latter is available
freely at the site indicated in \cite{MVK}, whereas its detailed description can be
found in \cite{KMV}.
For the reason made clear below, we amend \eqref{spectralBabenko} further; namely,
we set $\mu_0 = 1$ and put $\mathcal L_r = \sum_{n = 0}^{\infty} \mu_n P_n$. Hence
$\mathcal L_r$ is invertible and ${\mathcal L_r}^{-1} = P_0 + \mathcal J_r$; that
is, $\mathcal L_r \mathcal J_r = I - P_0$, where $I$ is the identity operator.
Applying $\mathcal L_r$ to both sides of \eqref{spectralBabenko}, we obtain the
following equation:
\begin{equation}
\mu (I - P_0) w = \mathcal L_r w + \mathcal L_r ( w \mathcal J_r w ) + \frac 12 (I -
P_0) w^2 \, , \quad t \in (0, \pi) \, . \label{inverse_spectralBabenko}
\end{equation}
It should be noted that the unbounded operator $\mathcal J_r$ is present in the
nonlinear part of the last equation only, and so one can expect that
\eqref{inverse_spectralBabenko} would demonstrate better numerical stability.
Finally, equations \eqref{inverse_spectralBabenko} and \eqref{37} are equivalent in
the following sense. The sets $\{ b_n (w) \}_{n=0}^\infty$ and $\{ b_n (v)
\}_{n=0}^\infty$ of the Fourier coefficients coincide for solutions
of \eqref{inverse_spectralBabenko} and \eqref{37}, respectively, provided the value
of $\mu$ is the same for both solutions.
For equation \eqref{inverse_spectralBabenko} the existence of small solutions
follows from its equivalence to \eqref{37}. It can also be established directly with
the help of the Crandall--Rabinowitz theorem; see Section~3.2, which yields the
asymptotic formulae \eqref{s_a} for the branch of solutions of
\eqref{inverse_spectralBabenko} bifurcating from $\mu_n$ and trivial $w$. This can
serve as an initial guess for the numerical procedure.
\vspace{-1mm}
\subsection{Discretisation of equation \eqref{inverse_spectralBabenko}}
We use the standard cosine collocation method, according to which solutions of
\eqref{inverse_spectralBabenko} are sought in the form of linear combinations of
$\cos mx$, $m = 0, 1, \dots$\,---\,a basis in $L^2(0, \pi)$. For the discretisation
the subspace $\mathcal S_N$ spanned by the first $N$ cosines is used, and its elements
are represented by their values at the collocation points $x_n = \pi \frac{2n - 1}{2N}$ for
$n = 1, \ldots, N$. Thus, for any $f \in W^{1,2} (0, \pi)$ the vector $f^N$ given by
its coordinates\\[-2mm]
\[ f^N_n = \sum_{k=0}^{N-1} (P_k f) (x_n) \, , \quad n = 1, \dots, N ,
\]
is considered. The operator $\mathcal L_r^N$, discretising $\mathcal L_r$, is
defined as follows:\\[-2mm]
\[ ( \mathcal L_r^N f^N )_n = \sum_{k=0}^{N-1} (P_k \mathcal L_r f) (x_n) \, , \quad
n = 1, \dots, N .
\]
Furthermore, $\mathcal J_r^N$ and $P_0^N$ are introduced as the discretisations of
$\mathcal J_r$ and $P_0$ respectively.
These definitions are correct because $f^N$ defines the function $f$ with values
$f(x_n) = f^N_n$ uniquely up to a projection on the subspace orthogonal to $\mathcal
S_N$. It is clear that each of these discrete operators is a composition of the
discrete cosine transform, some diagonal matrix and the inverse discrete cosine
transform. The diagonal matrix for $\mathcal L_r^N$ is $\{ 1, \mu_1, \ldots, \mu_{N-1} \}$,
whereas the diagonal for $\mathcal J_r^N$ is $\{ 0, \lambda_1, \ldots, \lambda_{N-1}
\}$, and $\{ 1, 0, \ldots, 0 \}$ is the diagonal for $P_0^N$. The discrete analogue
of \eqref{inverse_spectralBabenko} is as follows:\\[-3mm]
\begin{equation}
\mathcal L_r^N w^N - \mu \left( I - P_0^N \right) w^N + \mathcal L_r^N \left( w^N
\mathcal J_r^N w^N \right) + \frac 12 \left( I - P_0^N \right) \left( w^N \right)^2
= 0 . \label{discrete_Babenko}
\end{equation}
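Before turning to the continuation in $\mu$, we note that the three diagonal
operators above are easy to realise with a standard DCT library. The following
sketch uses SciPy's orthonormal DCT-II, which matches the collocation grid
$x_n = \pi (2n-1)/(2N)$; the function names are ours, and the fragment is meant only
to illustrate the structure of \eqref{discrete_Babenko}, not to reproduce the
SpecTraVVave implementation.
\begin{verbatim}
import numpy as np
from scipy.fft import dct, idct

def apply_diag(diag, w):
    # DCT -> multiplication by a diagonal matrix -> inverse DCT
    return idct(diag * dct(w, type=2, norm='ortho'), type=2, norm='ortho')

def babenko_residual(w, mu, r):
    # residual of the discrete equation: it vanishes on a solution w^N
    N = w.size
    n = np.arange(N)
    lam = np.zeros(N); lam[1:] = n[1:]*(1 + r**(2*n[1:]))/(1 - r**(2*n[1:]))
    mun = np.ones(N);  mun[1:] = 1.0/lam[1:]
    e0 = np.zeros(N);  e0[0] = 1.0            # diagonal of P_0^N
    ImP0 = lambda f: f - apply_diag(e0, f)    # (I - P_0^N) f
    Jw = apply_diag(lam, w)                   # J_r^N w
    return (apply_diag(mun, w) - mu*ImP0(w)
            + apply_diag(mun, w*Jw) + 0.5*ImP0(w**2))
\end{verbatim}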
Since solutions $(\mu, w^N)$ of this equation form curves in the $(\mu, a)$-plane,
where\\[-3mm]
\[ a = \| w^N \| = \max_n |w^N_n| ,
\]
it is convenient to parametrise these curves for making calculations more effective.
Thus, due to a new parameter, say $\theta$, we have $\mu = \mu(\theta)$ and $a =
a(\theta)$ on each curve of solutions. Therefore, $\mu(\theta)$ must be substituted
into \eqref{discrete_Babenko} instead of $\mu$, and this algebraic system must be
complemented by the equation:\\[-4mm]
\begin{equation}
\max _{n = 1, \ldots, N} |w^N_n| = a(\theta) . \label{waveheight}
\end{equation}
The resulting system \eqref{discrete_Babenko}--\eqref{waveheight} has $N + 1$
equations with the unknowns $\theta, w^N_1, \ldots, w^N_N$. Hence the standard
Newton iteration is applicable; for branches bifurcating from the trivial solution,
the Crandall--Rabinowitz asymptotic formula \eqref{s_a} yields an initial guess.
Further details concerning the proposed parametrisation and the particular
realisation of the algorithm can be found in \cite{KMV}.
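The whole procedure can be prototyped in a few dozen lines. The fragment below is a
schematic illustration only and is not the SpecTraVVave implementation: the
amplitude $a$ itself plays the role of the parameter $\theta$, the height condition
\eqref{waveheight} closes the system, and the initial guess comes from \eqref{s_a};
the function name and the default grid size are our own choices.
\begin{verbatim}
import numpy as np
from scipy.fft import dct, idct
from scipy.optimize import fsolve

def babenko_point(n_mode, r, a, N=256):
    # one point on the branch bifurcating from mu_{n_mode}, amplitude fixed to a
    x = np.pi*(2*np.arange(1, N + 1) - 1)/(2*N)      # collocation points
    n = np.arange(N)
    lam = np.zeros(N); lam[1:] = n[1:]*(1 + r**(2*n[1:]))/(1 - r**(2*n[1:]))
    mun = np.ones(N);  mun[1:] = 1.0/lam[1:]
    mult = lambda d, f: idct(d*dct(f, type=2, norm='ortho'), type=2, norm='ortho')
    ImP0 = lambda f: f - mult((n == 0).astype(float), f)

    def system(z):
        mu, w = z[0], z[1:]
        F = (mult(mun, w) - mu*ImP0(w)
             + mult(mun, w*mult(lam, w)) + 0.5*ImP0(w**2))
        return np.concatenate(([np.max(np.abs(w)) - a], F))

    mu0 = (1 - r**(2*n_mode))/(n_mode*(1 + r**(2*n_mode)))
    z = fsolve(system, np.concatenate(([mu0], a*np.cos(n_mode*x))))
    return z[0], z[1:]                                # (mu, w^N)

# e.g. a small-amplitude wave on C_1 for r = 4/5:
# mu, w = babenko_point(1, 0.8, 0.01)
\end{verbatim}
In practice the Jacobian is reused along the branch and the parametrisation of
\cite{KMV} is preferable near turning points; the sketch above merely shows how the
pieces fit together.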
\begin{figure}
\vspace{-4mm}
\begin{center}
\SetLabels
\L (0.5*0.01) $\mu$\\
\L (0.764*0.41) $C_2$\\
\L (0.81*0.8) \tiny $C_{21}$\\
\L (0.428*0.31) $C_3$\\
\L (0.43*0.56) \tiny $C_{31}$\\
\L (0.256*0.26) $C_4$\\
\L (0.232*0.43) \tiny $C_{41}$\\
\L (-0.08*0.44) $\| v \|_\infty$\\
\endSetLabels
\leavevmode
\strut\AffixLabels{\includegraphics[width=68mm]{B2_B3_B4_Diagrams_N=2048_r=0.eps}}
\end{center}
\vspace{-4mm}
\caption{The solution branches $C_2$, $C_3$ and $C_4$ for equation \eqref{37} with
$r=0$, bifurcating from the zero solution at $\mu_2 (0) = 1/2$, $\mu_3 (0) = 1/3$
and $\mu_4 (0) = 1/4$ respectively. The secondary solution branches are denoted
$C_{21}$, $C_{31}$ and $C_{41}$ respectively. The upper bound mentioned prior to
Definition~1 is also included.} \vspace{-3mm}
\label{fig:7}
\end{figure}
\begin{figure}
\begin{center}
\SetLabels
\L (0.5*0.01) $\mu$\\
\L (0.428*0.31) $C_1$\\
\L (-0.08*0.44) $\| v \|_\infty$\\
\endSetLabels
\leavevmode
\strut\AffixLabels{\includegraphics[width=68mm]{B1_Diagram_N=2048_r=0,8_turning_lim.eps}}
\end{center}
\vspace{-4mm} \caption{The solution branch $C_1$ for equation \eqref{37} with
$r=4/5$ in a vicinity of the turning point, whose characteristics are as follows:
$\mu \approx 0.32671$, and the solution's $L^\infty$-norm is approximately equal to
$0.15862$. The bold dot marks the solution plotted in Fig.~3. The upper bound
mentioned prior to Definition~1 is also included.} \vspace{-3mm}
\label{fig:8}
\end{figure}
\subsection{Bifurcation curves for equation \eqref{37}}
We begin with the results of a test calculation in which the algorithm described in
Section~4.2 is applied to equation \eqref{inverse_spectralBabenko} with $r = 0$,
thus giving bifurcation curves for equation \eqref{bid}. The curves plotted in
Fig.~7 show the bifurcations from a trivial solution and the first three secondary
bifurcations for this case; the curve $C_1$ is omitted because its behaviour is
similar to that presented in Fig.~1, including the presence of a turning point. The
secondary bifurcation branches $C_{21}$, $C_{31}$ and $C_{41}$ bifurcate from $C_2$,
$C_3$ and $C_4$ at the points where $\mu$ is approximately equal to
$0.58768$, $0.39172$ and $0.29389$, respectively. These values are in good agreement
with those presented by Aston \cite{A}; see Table~1 in his paper.
\begin{figure}
\vspace{-2mm}
\begin{center}
\SetLabels
\L (0.5*0.01) $\mu$\\
\L (0.5*0.41) $C_3$\\
\L (0.74*0.764) \tiny $C_{31}$\\
\L (-0.08*0.5) $\| v \|_\infty$\\
\endSetLabels
\leavevmode \strut\AffixLabels{\includegraphics[width=68mm]{B3_Diagrams_N=2048_r=0,8.eps}}
\end{center}
\vspace{-4mm}
\caption{The branch $C_3$ of solutions of equation \eqref{37} with $r=4/5$,
bifurcating from the zero solution at $\mu_3 (4/5) = 0.194868414381$. The secondary
solution branch $C_{31}$ bifurcates from $C_3$ at $\mu \approx 0.25298$. The dots
mark those solutions on $C_3$ and $C_{31}$, whose wave profiles are plotted in
Fig.~10 and Fig.~11 respectively. The upper bound mentioned prior to Definition~1 is
also included.} \vspace{-4mm}
\label{fig:9}
\end{figure}
Now we turn to numerical results obtained for equation \eqref{37} with $r=4/5$. The
so\-lution branch $C_1$ is presented in Fig.~1, and some of its characteristics are
described after Theorem~2. In particular, it is pointed out that it has a turning
point, and so we give a zoomed plot of the curve $C_1$ in a vicinity of this point;
see Fig.~8, where the bold dot marks one of the two solutions corresponding to $\mu \approx
0.32671$. The wave profile corresponding to this solution is plotted in Fig.~3,
where some of its characteristics are given; moreover, its $L^\infty$-norm is
approximately equal to $0.15862$.
The last example concerns the solution branch $C_3$ for equation \eqref{37} with
$r=4/5$. It is presented in Fig.~9, where one observes the presence of a turning
point as well as the secondary bifurcation. Indeed, the branch $C_{31}$ bifurcates
from $C_3$ at the point, where $\mu$ is approximately equal to $0.25298$, and
shortly after that $C_3$ has its turning point. The algorithm proposed in
Section~4.2 allows us to solve \eqref{inverse_spectralBabenko} up to both critical
values on $C_3$ and $C_{31}$; see Fig.~10 and Fig.~11, respectively, for the plots
of wave profiles corresponding to these solutions.
In Fig.~10, the wave profile corresponds to the end-point solution on the branch
$C_3$; $\mu \approx 0.25175$ for this solution of equation \eqref{37} with $r=4/5$.
Like a small-amplitude wave characterised by the second formula \eqref{s_a}, this
profile has the wavelength $2 \pi / 3$, and so three wave periods are plotted.
Moreover, this Stokes wave has the extreme form; that is, the tangents to two smooth
arcs form the angle $2 \pi / 3$ at every crest. The tangency is demonstrated with
sufficient accuracy in the figure, where the angle inscribed into the wave profile
has the sides $y = y_c \pm x / \sqrt 3$ with $y_c = y (0)$; see \eqref{_y} for $y
(t)$ and the first formula \eqref{x_y} for $x (t)$ that describe the profile
parametrically. Of course, the same tangency with similar angles takes place at
every crest.
\begin{figure}
\vspace{-4mm}
\begin{center}
\SetLabels
\L (0.5*0.01) $x$\\
\L (0.04*0.42) $y$\\
\endSetLabels
\leavevmode
\strut\AffixLabels{\includegraphics[width=68mm]{B3_solution_N=2048_r=0,8_angle.eps}}
\end{center}
\vspace{-4mm}
\caption{The wave profile of the extreme form corresponding to the
end-point solution on the branch $C_3$ for equation \eqref{37} with $r=4/5$. The
characteristics of this wave are as follows: $\mu \approx 0.25175$; the profile's
crests (troughs) are at $y = y_c \approx 0.12777$ ($y = y_t \approx -0.03312$
respectively).} \vspace{-6mm}
\label{fig:10}
\end{figure}
\begin{figure}
\begin{center}
\SetLabels
\L (0.5*0.01) $x$\\
\L (0.0*0.46) $y$\\
\endSetLabels
\leavevmode
\strut\AffixLabels{\includegraphics[width=68mm]{B31_solution_N=2048_r=0,8_extended.eps}}
\end{center}
\vspace{-4mm}
\caption{The wave profile of the extreme form corresponds to the end-point solution
on the branch $C_{31}$ for equation \eqref{37} with $r=4/5$. Its characteristics are
as follows: $\mu \approx 0.24827$; the profile's smooth crests (troughs) are at $y =
\tilde y_c \approx 0.10406$ ($y = y_t \approx -0.03310$ respectively), whereas the
peaks are at $y = \hat y_c \approx 0.12608$.} \vspace{-6mm}
\label{fig:11}
\end{figure}
In Fig.~11, the wave profile corresponds to the end-point solution on the branch
$C_{31}$; $\mu \approx 0.24827$ for this solution of equation \eqref{37} with
$r=4/5$. The profile has the wavelength $2 \pi$, and so two wave periods are
plotted. Thus, the period-tripling occurs as $C_{31}$ bifurcates from the branch
$C_3$; an analogous effect is described in \cite{Zuf} for waves on infinitely deep
water (see, in particular, Fig.~3 on p.~25 of his paper). Moreover, the wave is
symmetric with respect to the vertical through the highest, mid-period crest. The
latter has the extreme form like every crest in Fig.~10, whereas the wave profile is
smooth at two other crests on the period.
\section{Concluding remarks}
We have considered the nonlinear problem describing steady, gravity waves on water
of finite depth. This problem is reduced to a single pseudo-differential operator
equation~\eqref{37} (Babenko's equation), which generalises the well-known equation
\eqref{bid} describing waves on infinitely deep water. The local bifurcation for
\eqref{37} is investigated analytically with the help of the Crandall--Rabinowitz
theorem, whereas a combination of analytical and numerical methods is applied for
demonstrating that the initial, free-boundary problem and Babenko's equation are
equivalent in the following sense. For every solution of the initial problem one of
its components, namely, the free-surface elevation, is a solution of Babenko's
equation for some value of the parameter on which the equation's operator depends;
this value is determined by the solution of the free-boundary problem. On the
contrary, every solution of Babenko's equation defines a solution of some
free-boundary problem through a certain procedure.
Besides, we outline an algorithm which allows us to solve Babenko's equation
numerically using a modification of the free software SpecTraVVave; see \cite{MVK}.
It should be emphasised that the developed numerical procedure is not only very
fast, but also remarkable for its high accuracy. The latter is essential when computing
solutions to which wave profiles of the extreme form correspond, thus allowing us to
plot global bifurcation branches presented in Section~4.3.
This paper is just an initial step in studies of Babenko's equation both
analytically and numerically. First, it is desirable to prove rigorously that every
solution of Babenko's equation defines a solution of the free-boundary problem that
describes steady waves on a flow of finite depth with certain characteristics.
Second, it is natural to show that the profiles of all waves below the highest one
(the latter has the extreme form and is non-smooth at its crest) are real-analytic curves.
Third, one has to demonstrate the absence of sub-harmonic bifurcations in a
neighbourhood of every point where the bifurcation from the zero solution occurs.
Finally, a global Stokes-wave theory should be developed and used for proving that
there exist sub-harmonic bifurcations on branches of smooth waves close to the
highest wave. All these results have been established for waves on infinitely deep
water on the basis of equation \eqref{bid}; see \cite{BDT1,BDT2}.
An interesting direction for further numerical investigations is to find higher
bifurcations that might exist for waves on water of finite depth, as happens in
the case of deep water; this was shown in \cite{A}, where several isolated
points of higher bifurcations are listed in Table~1. Since the algorithm based on
equation \eqref{37} and realised with the software SpecTraVVave is a rather
robust tool, one could apply it to the calculation of branching bifurcation curves that
have more than one bifurcation point.
In conclusion, we outline what equation \eqref{37} has in common with Babenko's
equation for finite depth obtained by Constantin, Strauss and V\u{a}rv\u{a}ruc\u{a}
\cite{CSV}; see Remark 4 in their paper. Namely,\\[-3mm]
\begin{equation}
\tilde \mu {\mathcal C}_d (\tilde v') = \tilde v + \tilde v {\mathcal C}_d (\tilde
v') + {\mathcal C}_d (\tilde v' \tilde v) \label{CSV}
\end{equation}
literally coincides with (2.50) in \cite{CSV} with one exception. We use $d$ as the
operator's parameter instead of $h$. There are two reasons for this: (1) $d$ and $h$
are equal to each other in Remark 4, since $k$ is taken equal to unity there; (2) a
different quantity is denoted by $h$ in Section~2.2 of our paper and $h$ will be
used in that meaning below.
It is obvious that the form of the last equation is exactly the same as that of
\eqref{37}, but what about the meaning of symbols involved? First, the parameter $d
> 0$ is equal to the so-called conformal mean depth and the latter is defined
uniquely by the water domain; see \cite{CV}, Appendix~A. However, this depth,
generally speaking, is not equal to the non-dimensional mean depth of the water
domain $D$ introduced in Section~2.2; see, in particular, formulae \eqref{dlv}. By
analogy with the conformal mean depth, it would be natural to call the parameter $r
\in (0, 1)$, on which the operator ${\mathcal B}_r$ depends in \eqref{37}, the
conformal mean radius of the water domain $D$. Furthermore, the conjugation operator
${\mathcal C}_d$ is defined for $2 \pi$-periodic functions on $\RR$ as follows. If
$f$ has zero mean value over a $2 \pi$ interval, that is, its Fourier series has the
form\\[-3mm]
\[ f (x) = \sum_{n=1}^\infty (a_n \cos n x + b_n \sin n x) , \quad x \in \RR ,
\]
then
\[ ({\mathcal C}_d f) (x) = \sum_{n=1}^\infty \coth n d \, (a_n \sin n x - b_n
\cos n x) , \quad x \in \RR .
\]
This definition is similar to that of ${\mathcal B}_r$ in \eqref{HTB_r}, but with
the multiplier $\coth n d$ instead of $(1 + r^{2 n}) / (1 - r^{2 n})$. Moreover,
${\mathcal C}_d$ has the representation analogous to $\mathcal{B}_r = \mathcal{C} +
\mathcal{K}_r$ with $\mathcal{K}_r$ given by \eqref{K_r}; see formulae (A.9) and
(A.12) in \cite{CSV}. Thus, there is a significant similarity between ${\mathcal
C}_d$ and ${\mathcal B}_r$. The essential point that distinguishes ${\mathcal C}_d$
and $\mathcal{B}_r$ is that the latter operator is defined for all $2 \pi$-periodic
functions, whereas the domain of ${\mathcal C}_d$ is orthogonal to constants.
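Note also that, at least formally, the two multipliers coincide when the parameters
are related by $r = \E^{-d}$, since $\coth n d = (1 + \E^{-2 n d}) / (1 - \E^{-2 n d})$;
a quick numerical check of this elementary identity (with an arbitrarily chosen $d$) is
\begin{verbatim}
import numpy as np

d = 0.5; r = np.exp(-d); n = np.arange(1, 6)
print(1.0/np.tanh(n*d))                  # coth(n d)
print((1 + r**(2*n))/(1 - r**(2*n)))     # the multiplier of B_r: the same values
\end{verbatim}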
Finally, let us demonstrate that if $k = 1$ (this is the case in \cite{CSV}, Remark
4), the free surface profile satisfies the assumptions made in this paper (see
Section~2.1) and $\tilde v$ is an even and $2 \pi$-periodic solution of \eqref{CSV},
then $\tilde \mu = \mu$. Thus, the bifurcation parameter is the same in both
\eqref{CSV} and \eqref{37}.
According to Remark 4 in \cite{CSV}, we have\\[-3mm]
\begin{equation}
\tilde{\mu} = \frac{2R}{g} - 2 d - 2 \beta , \label{14j}
\end{equation}
where $R$ is the Bernoulli constant in \eqref{4}, whereas the exact value of $\beta$
is unimportant for what follows. Since $k = 1$, formulae used in Section~2.2,
dealing with the derivation of the non-dimensional problem, imply that $l = \pi$ and
$H = h$, and so\\[-3mm]
\[ \mu = \frac{2R}{g} - 2 h .
\]
Combining this formula and \eqref{14j}, we see that $\tilde \mu = \mu$ holds, if we
show that $h = d + \beta$.
In order to prove the last equality, we first notice that the definition of
${\mathcal C}_d$ implies that ${\mathcal C}_d (\tilde v')$ and ${\mathcal C}_d
(\tilde v' \tilde v)$ are orthogonal to constants. Then equation \eqref{CSV} yields
that
\[ \int_{-\pi}^{\pi} \tilde v (x) [ 1 + {\mathcal C}_d (\tilde v') (x) ] \, \D x
= 0 \quad \Longleftrightarrow \quad \int_0^{\pi} \tilde v (x) [ 1 + {\mathcal C}_d
(\tilde v') (x) ] \, \D x = 0 \, ,
\]
where the second relation is a consequence of the assumption that $\tilde v$ is
even. To transform this relation we consider the parametric representation of the
free surface profile used in~\cite{CSV}:
\begin{equation}
X (x) = U (x, 0) = x + \mathcal{C}_d (\tilde{v} + \beta) \, , \quad Y (x) = V (x, 0)
= \tilde{v} + d + \beta \, , \quad x \in \RR , \label{XY}
\end{equation}
see (2.7), (2.8), (2.10) and Remark~4. Hence we have
\[ \int_0^{\pi} \tilde v (x) \frac{\D X}{\D x} (x) \, \D x = \int_0^{\pi} \tilde v
(x (X)) \, \D X = 0 \, ,
\]
where it is taken into account that $X (x)$ is invertible on $(0, \pi)$ since $k =
1$. Averaging the second formula \eqref{XY} over $(0, \pi)$, we obtain
\[ h = \frac{1}{\pi} \int_0^\pi Y(x (X)) \, \D X = \frac{1}{\pi} \int_0^\pi \tilde{v}
(x (X)) \, \D X + d + \beta = d + \beta \, ,
\]
which is the required equality $h = d + \beta$; hence $\tilde{\mu} = \mu$ holds.
\vspace{1mm}
\noindent {\bf Acknowledgements.}
\noindent The authors are grateful to Henrik Kalisch without whose support the paper
would not have appeared. E.\,D. acknowledges support from the Norwegian Research
Council.
\vspace{-2mm}
\section{Value Alignment and Turing's Test}
A substantial portion of contemporary research into ethics and artificial intelligence is devoted to the problem of ``value alignment'' (hereafter \textbf{VA}) \cite{allen2005artificial,yudkowsky2008artificial,yampolskiy2013artificial,soares2014aligning,russell2015research,arnold2017value}. Rather than deriving ethically appropriate action from first principles or from a direct recognition of the good, VA takes as its goal the (presumably simpler) task of designing AI that conforms to human values. AI that reliably conforms to human values is said to be ``aligned''. A primary concern in this literature is to establish methods that guarantee alignment, potentially within tight parameters, since it is argued that even small and seemingly innocuous cases of misalignment can quickly develop into a serious threat to general human safety \cite{yudkowsky2008artificial,bostrom2012superintelligent,babcock2016agi}.\par
There are reasons to be optimistic about VA as an approach to AI ethics, perhaps most significantly that the framework of ``alignment'' seems to lend itself to contemporary machine learning techniques like supervised learning \cite{mohri2012foundations}, where machines systematically improve their performance relative to a specified training set. There are also reasons to be skeptical that today's machine learning techniques are adequate for generating the complex forms of alignment required for participating in human moral communities \cite{arnold2017value}. However, rather than critiquing the VA literature directly, my goal in this paper is to reflect on connections between the discourse on value alignment and the historical discussion of Turing's notorious ``imitation game'', with the hopes that lessons from the latter might better inform our developing discussions of the former.\par
Turing's test, originally offered as an alternative to the question ``can machines think?'', has since become a standard benchmark for evaluating the intelligence of machines \cite{turing1950computing,saygin2000turing,copeland2004essential,copeland2017turing}. The test revolves around a comparison to human performance: if the machine cannot be correctly identified by a human interrogator after a few minutes of conversation, it is said to ``pass'' the test and can be called intelligent. The central criterion for passing the test is \textit{indistinguishability from human behavior} \cite{dretske1997naturalizing,saygin2000turing}. We might describe the demand for indistinguishability in terms of ``behavioral alignment'': a machine is behaviorally aligned just in case it behaves indistinguishably from a human. \cite{allen2000prolegomena} already recognized that since the set of moral behaviors is a proper subset of the total behaviors, what today is called ``value alignment'' can be interpreted as a special case of behavioral alignment. From this insight they propose a Moral Turing Test (\textbf{MTT}) \cite{allen2006machine,wallach2008moral,arnold2016against}. The MTT is passed by a machine that behaves indistinguishably from a human in a conversation about moral actions.\footnote{\cite{arnold2016against} argue that even a Total MTT variation, where evaluation of behaviors is not restricted to conversation but encompasses the full range of moral behaviors, is not sufficient for saving the MTT as a viable ethical criterion. For this reason, I will not dwell on the restriction to conversation behaviors in this paper. See \cite{harnad1989minds}.} Just as passing the original Turing Test is supposed to suggest a degree of intelligence on the basis of behavioral alignment, so too does passing the MTT suggest a degree of moral agency on the basis of moral alignment.\par
Although the Turing Test is widely known and discussed, it is generally not accepted as a reliable test for intelligence. Criticisms of Turing's test abound in the literature, perhaps best summarized by Dretske: ``despite indistinguishability, all is dark...'' \cite{dretske1997naturalizing}. Critics worry that the mere imitation of human behavior is not sufficient for either intelligent or moral agency, and so Turing's test doesn't tell us what we want to know \cite{searle1980minds,dennett1981brainstorms,dreyfus1992computers}. Here the theoretical goals of the MTT come apart from those of VA. Researchers concerned about value alignment don't care whether the machine is a genuine moral agent; a pure automaton (like the Paperclip Maximizer \cite{bostrom2003ethical}) might still pose a threat to humanity if it is sufficiently misaligned. And conversely, a sufficiently aligned machine is no guarantee of moral agency, just as convincing automation is no guarantee of intelligent agency. For this reason, strong rejections of the MTT sit awkwardly in the literature alongside expansive research programs into the constraints on alignment, even while the MTT is itself a clear example of the latter. For instance, \cite{arnold2016against} criticize the MTT as a standard for building moral machines, and yet they go on in \cite{arnold2017value} to develop some theoretical constraints on applying machine learning to value alignment, while drawing no strong connections between these discussions. The effect is to make it appear as if the MTT is either irrelevant or unhelpful to the discussion of value alignment. \par
Arnold et al. reject the MTT on several grounds, including that imitation cannot serve as the basis for intelligent moral agency. Echoing the traditional criticisms, they write, ``What becomes ever clearer through explicating the conditions of an MTT is that its imitative premise sets up an unbridgeable gulf between its method and its goal'' \cite{arnold2016against}. The framework of an ``unbridgeable gap'' is familiar from the philosophy of mind \cite{dennett1991real}, and seems to render Turing's proposal inadequate for the task of developing genuine moral agents. However, if our task is not to develop intelligent moral agents \textit{per se} but merely to align our machines with our values, then the MTT may continue to prove useful. In the next section, I argue that Turing's principle of ``fair play for machines'' (\textbf{FP}) \cite{turing1947automatic} provides a non-imitative ground for evaluating the alignment of machines. I argue that the FP avoids many of the classic criticisms of Turing's test, and provides a satisfying method for applying Turing's insights to the problem of value alignment distinct from the MTT.
\section{Fair Play for Machines}
\cite{proudfoot} points to a variety of sources in developing a rich, comprehensive account of the Turing test (``from every angle''). Special emphasis is given to Turing's discussion of a ``little experiment'' \cite{turing1948intelligent} involving chess-playing computers in an experimental design that is clearly the germ for his landmark 1950 paper. However, curiously missing from Proudfoot's analysis (and mentioned only in passing in \cite{leavitt2017}) is a short passage from the end of Turing's 1947 Lecture on the Automatic Computing Engine to the London Mathematical Society \cite{turing1947automatic,copeland2004essential,hodges2012alan}. Here Turing is also concerned with evaluating the performance and ``I.Q.'' of chess-playing computers, which suggests this passage should be read alongside his 1948 and 1950 papers for a full appreciation of the developing proposal. Since it is so regularly overlooked, I quote Turing's argument below in full, with paragraph breaks and emphasis added:
\begin{quotation}
``It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. It is certainly true that ‘acting like a machine’ has become synonymous with lack of adaptability. But the reason for this is obvious. Machines in the past have had very little storage, and there has been no question of the machine having any discretion. The argument might however be put into a more aggressive form. It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions with certainty into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around a[nd] find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument.\par
\textbf{Against it I would say that fair play must be given to the machine.} Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretense at infallibility. \par
To continue my plea for ‘fair play for the machines’ when testing their I.Q. A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, \textbf{the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards}. The game of chess may perhaps be rather suitable for this purpose, as the moves of the machine’s opponent will automatically provide this contact.'' \cite{turing1947automatic,copeland2004essential}
\end{quotation}
There are many striking things to note about this passage. First, Turing is responding to a critic of the very idea of machine intelligence, whose argument points to some necessary (and therefore unbridgeable) gap between the performance of humans and machines. In this case, the critic appeals to G\"odel's incompleteness theorem \cite{godel1931formal,smullyan2001godel} as evidence of such a gap, an objection he returns to under the heading of ``The Mathematical Objection'' in \cite{turing1950computing}. Recall that Turing's major mathematical contribution \cite{turing1937computable} is the formal description of a ``universal computer'', which can in theory perform the work of any other computer. On my interpretation \cite{estrada2014rethinking}, the universality of his machines is what ultimately convinces Turing that computers can be made to think. Without any assumption of behaviorism or appeal to a principle of imitation, the syllogism runs as follows: if the brain is a machine that thinks, and a digital computer can perform the work of any other machine, then a digital computer can think. This syllogism is both valid and sound. However, Turing recognizes that G\"odel's theorem shows ``that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable''. This straightforwardly implies that there are some things that even Turing's universal machines cannot do. This result does not invalidate the syllogism above. Still, Turing's critics draw an inference from (1) there are some things machines cannot do, to (2) humans can do things that (mere) machines cannot do. Although this inference is clearly invalid,\footnote{Turing's original response to the Mathematical Objection remains satisfying: ``The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.'' \cite{turing1950computing}} arguments of this form persist even among respected scholars today \cite{penrose1999emperor,floridi2016should}. The passage from his 1947 Lecture shows Turing contending with this perennial challenge several years before his formal presentation of the imitation game. In other words, Turing was clearly aware of an ``unbridgeable gap'' objection, and both his ``little experiment'' and the principle of fair play serve as ingredients in his response. A full appreciation of Turing's position in this debate ought to take this evidence into account. \par
Second, the core of Turing's response is to offer a ``plea'' for what he calls ``fair play for machines''. This suggestion is proposed in the context of ``testing their I.Q.'', making explicit the connection between FP and the developing framework of Turing's test. Essentially, Turing is worried about a pernicious double standard: that we use one standard for evaluating human performance at some task, and a more rigorous, less forgiving standard for evaluating the machine's performance \textit{at the same task}. Other things equal, a double standard is patently unfair, and thus warrants a plea for ``fair play''. Of course, one might worry that the mere fact that the performance comes from a machine implies that other things \textit{aren't} equal. Since machines are different from humans, they ought to be held to different standards. But on my interpretation, Turing is primarily motivated by a conviction that universal computers can perform the work of any other machine, and so humans and computers are not essentially different. Turing's test isn't designed to prove that machines can behave like humans, since in principle this follows from the universality of the machines. Instead, the test is designed to strip the human evaluator of his prejudices against the machines, hence the call for fair play. \par
Notice that calling a standard of judgment unfair does not imply that the machines treated unfairly can ``think''. Therefore, FP alone cannot serve as a basis for evaluating the intelligence of machines in the style of the Turing Test. And indeed, Turing's argument makes clear that his appeal to FP is concerned not with the intelligence of the machine, but instead with the standards used to evaluate the machine's performance. After all, Turing's plea is made in defense of machines that are expected to be infallible, and whose performance might be compromised (by ``occasionally providing wrong answers'') in order to more closely approximate the performance of a human. Turing's point is that we'd never demand that a human mathematician occasionally make mistakes in order to demonstrate their intelligence, so it's strange to demand such performance from the machine. If Turing's test is motivated by a call for ``fair play for machines'', this should inform our interpretation of the test itself. Since the principle of fair play does not depend on an imitative premise, the rejection of Turing's test on this basis seems too hasty.\par
Finally, the quoted passage closes by highlighting the phrase ``fair play for machines'' again\footnote{It may be interesting to consider why the phrase ``fair play for machines'' doesn't appear in the 1950 paper. Many of the arguments from the passage appear in the final section of his 1950 paper, under the subsection ``Learning Machines'', where he proposes building ``a mind like a child's'' in response to Lovelace's Objection \cite{estrada2014rethinking}. In this section he proposes a number of games to play with machines, including chess and twenty questions. He also expresses a worry that ``The idea of a learning machine may appear paradoxical to some readers.'' Turing's 1950 paper is self-consciously written for a popular audience; perhaps Turing worried that a plea for ``fair play for machines'', including those that aren't even intelligent, might also confuse his readers too much, and undermine the constructive argument he's given. Hopefully, readers 70 years later are not so easily confused.}, and by arguing that ``the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.'' Clearly, Turing is approaching the challenge of evaluating the performance of machines, even in purely intellectual domains like chess, as a problem of behavioral alignment. Moreover, Turing argues that for machines to achieve that alignment, \textit{they must be allowed} certain privileges, in the interest of ``fair play''. Specifically, Turing argues that if we expect the machine to learn our standards, we must afford it access to our behavior. In other words, he's arguing that constraints on \textit{human behavior} are necessary to achieve alignment: in how we evaluate and interact with our machines. This perspective is rare even in the alignment literature today, where concerns are overwhelmingly focused on how to constrain the machine to stay within bounds of acceptable human behavior.
More importantly, Turing suggests that we must be willing to interact with machines, even those that aren't intelligent, if we expect these machines to align to our standards. And this is precisely the kind of interaction Turing's proposed imitation game encourages. These reflections open a new route to defending the importance of Turing's test in today's alignment literature. Turing's test is usually understood as a benchmark for intelligence, and the MTT as a benchmark for moral agency. Commentary traditionally recognizes Turing's worries about standards of evaluation \cite{saygin2000turing,arnold2016against}, but interprets Turing's imitation game as attempting to settle on some specific standard of evaluation: namely, indistinguishability from human performance, or perfect imitation, as judged by another human. If the machine meets this standard, the machine is considered intelligent. We might call this a ``benchmark'' interpretation of Turing's test, or \textbf{BTT}. The MTT is an instance of BTT that sets the benchmark to imitate human moral behavior, for example. Many machine learning applications today will present themselves as meeting or exceeding human performance (at discrimination tasks, image recognition, translation, etc.), a legacy of Turing's influence on the field. Criticisms of Turing's test focus on whether this benchmark is appropriate for evaluating the machine's performance, with most concluding it is not an adequate measure of general intelligence. But the principle of fair play suggests Turing is less interested in setting a particular benchmark for intelligence, and more concerned with establishing that the standards for evaluation are fair. Call this interpretation the Fair Play Turing Test (\textbf{FPTT}). A machine passes the FPTT when it meets the same standards of evaluation used to judge human performance at the same task. On this interpretation, Turing's imitation game is meant to describe a scenario of ``fair play'' where the human biases against machines can be filtered out, and the machine can be judged in its capacity to carry on a conversation by the same standards as any other human. We typically think of someone who can hold a conversation as being intelligent, so if a machine can also hold a conversation without being detected as non-human, we should judge it intelligent too. Not because conversation is some definitive marker of intelligence, as the BTT interpretation suggests, but rather because conversation is a standard that is often used to evaluate the intelligence of humans, and the principle of fair play demands holding machines to the same standards. On this interpretation, the sort of hostile interrogation typically seen in demonstrations of Turing's test \cite{aaronson2014} seems straightforwardly unfair, since we wouldn't expect an intelligent human to hold up well under hostile interrogation either.
Since the principle of FP does not depend on imitation, the FPTT works in a subtly different way than the BTT. Passing the FPTT doesn't merely imply a machine performs at human levels; passing FPTT implies more strongly that the machine performs at these levels \textit{when evaluated by the same standards} used to judge human performance. For instance, we usually aren't skeptical of mere imitation when talking to a human, so raising this concern in the context of evaluating a machine could signal a change in the standards of evaluation, and thus a violation of FP. Cases where machine performance is expected to diverge significantly from humans might warrant a multiplicity of standards. We might, for instance, expect driverless vehicles to adhere to more rigorous safety standards than those to which we typically hold human drivers. Recognizing these misaligned standards as a violation of fair play doesn't necessarily imply the situation is unethical or requires correction. Instead, identifying a failure of fair play draws attention to the multiplicity of standards for evaluating a task, and the lack of a unifying, consistent framework for evaluating all agents at that task. The framework of FPTT easily extends to evaluating performance at tasks other than ``general intelligence'' where we are interested in consistent, unifying standards, including the task of moral alignment in particular contexts. \cite{arnold2016against} reject the MTT as a standard for evaluating moral agency on the basis of its imitative premise. But FPTT doesn't depend on an imitative premise, and only checks for alignment with the standards used to judge humans at a task. In the next section, I argue that this framework of fair play has direct application for evaluating the alignment of robots operating in our world.
\section{The Rights of Service Robots}
Historically, the question of robot rights has turned on questions of personhood \cite{gunkel2012machine,bryson2017and}. Conditions on personhood typically involve both cognitive and moral attributes, such as ``recognizing the difference between right and wrong'' \cite{christman2008autonomy}. The consensus among scholars is that robots do not yet meet the conditions on minimal personhood, and will not in the near future. However, this consensus does not settle the question of rights, and it has been used to argue that robot rights might be necessary to protect machines that operate below the level of human performance \cite{darlingkate2012,darling2015s}. For instance, in 2017 San Francisco lawmakers implemented restrictions on ``autonomous delivery services on sidewalks and public right-of-ways,'' citing safety and pedestrian priority of use as motivating concerns \cite{joefitzgeraldrodriguez2017}. The proposal raises a natural question of whether these robots have the right to use public spaces, and to what extent a ban on robots might infringe on those rights. These questions seem independent of more general concerns about moral agency and personhood that typically frame the rights debate. Furthermore, it is well known that service robots operating in public spaces are typically subject to bullying and abusive behavior from the crowd \cite{salvini2010design,salvini2010safe,brscic2015escaping}. Protecting robots from such treatment seems necessary independent of whether they meet strict conditions for personhood. \par
Like many cases of moral alignment, the case of the rights of service robots to operate on public sidewalks seems to demand a standard for evaluating the performance of machines that does not turn on any imitative comparison with human agents. Delivery robots neither have nor require the intellectual and moral capacities typical of humans; to compare their operation with human performance seems at best mismatched, at worst insulting. Interpreted as a benchmark of performance, these machines operate well below the threshold where Turing's test is relevant and the vocabulary of rights and personhood applies. In contrast to the benchmark interpretation, however, the principle of fair play suggests we look for standards of evaluation that are consistent across humans and machines. In the case of service robots, the focus of concern is on the nature of the task these robots are performing, and the standards already in use for evaluating such performances. There's an obvious comparison between service robots and service animals that is tempting, but I think it is ultimately unhelpful. Importantly, animals feel pain and can suffer, and service animals are used to support persons with disabilities who can't otherwise access public resources. Service robots, in contrast, are used by tech companies to better serve their clients, and it seems implausible that they can 'suffer' in a morally salient way. Given the distinct nature of these roles, holding service robots to the standards of service animals seems inappropriate.
A closer analogy to the work of service robots can be found in \cite{chopra2011legal}, who propose an alternative approach to robot law centered not on personhood but instead on a framework of legal agency. A legal agent is empowered to act on behalf of a principal, to whom the agent holds a fiduciary duty that contractually binds the agent to act in the principal's interest. For instance, a lawyer or accountant operates as a legal agent in the service of their clients. In the context of agency law, an agent's right to operate turns both on the capacities of the agent to faithfully represent the principal, and also on the nature and scope of the role being performed. The framework of agency law offers a systematic defense of robot rights which focuses legal and policy attention on the roles we want robots to play in our social spaces, and the constraints which govern the operation of any agent performing these roles \cite{chopraestrada}. A social role analysis of robots \textit{as legal agents} has clear application to the protection of service robots operating in public spaces, including delivery robots and self-driving cars. But it also has natural extensions for the regulation of robots in a wide variety of other social roles, including robots that provide services in the context of law and justice, finances, transportation, education, socio-emotional support, sex work, public relations, and security. For instance, agency law provides a straightforward path to the regulation of bots on social media that are used to influence voters and elections \cite{ferrara2016rise}. From this perspective, social media bots are operating on behalf of their operators in the service of specific roles (campaign promotion, electioneering, etc.), and therefore fall under the same legal frameworks that already exist to evaluate the ethics and legality of these activities. \par
The proposal to adopt robot rights grounded in a framework of legal agency deserves an explicit elaboration outside the scope of this paper. I raise the suggestion in this context to demonstrate how the principle of fair play might be used to guide developing standards for evaluating machine performances. Recall that the principle of fair play asks that we evaluate the machine according to the same standards used to judge the performance of a human at the same task. Thus, FPTT focuses our discussion on the task-specific standards for evaluation, rather than on the details of the performance of any particular machine, or the contrasts in performance across different agential kinds. In this way, FP also suggests an expansive research agenda for classifying and detailing the types of roles we might want robots to serve in, and the constraints on evaluating the performance of any agent filling that role. For instance, what should social media bots acting as campaign representatives be allowed to say or do? This is not an engineering question about the capabilities of any machine. It is a social policy question about what machines can and cannot do in the service of their role. If we want machines to align to our standards of performance, then Turing argues that fair play must be given to the machines. \par
Of course, we don't want every machine to align to our standards. If standards cannot be made consistent across humans and machines, this entails a stratification of the social order that divides humans from machines. One might worry that a social role defense of robot rights does not eliminate the stratification, but in fact imposes new social divides wherever there is a distinction in social roles, and so threatens a conception of rights grounded on the universal and inalienable rights of humanity. In this way, a social role defense might appear to be a kind of ``3/5th compromise for robots''. This worry is reinforced by a review of the history of agency law, which itself develops out of a logic of slavery and indentured servitude \cite{johnson2016status}. However, the social role analysis of agency law provides a way around this worry by laying out a direct path to full legal agency for robots. To say that robots serve as agents for principals does not preclude the robot from being a full legal agent, since obviously lawyers and accountants retain their personhood and agency even while acting as an agent for their principal. And of course, serving as a principal is yet another social role to perform, one with its own standards of evaluation. Making the standards for principals explicit allows for a robot to serve as principal, first for other robots, perhaps as a manager or overseer of other robots acting on its behalf, and eventually to serve as a principal for itself, thus bridging the gap to full legal agency.
\section{Conclusions}
In this article we have reviewed some primary concerns of the value alignment literature, and shown that these interests were present in the development of Turing's test as early as \cite{turing1947automatic}. We have argued that a widespread rejection of Turing's test as a standard of intelligence has led scholars to overlook Turing's call for fair play as a source of inspiration in developing machines that are value-aligned. We have proposed an alternate interpretation of Turing's test which is inspired by Turing's call for ``fair play for machines'', and carefully distinguished this interpretation from benchmark interpretations like the Moral Turing Test. Finally, we have briefly discussed how FPTT might be used to justify a defense of robot rights and to sketch out a path to full agency on the basis of a social role analysis of agency law.
\section{Acknowledgements}
Thanks to conversations with Samir Chopra, Jon Lawhead, Kyle Broom, Rebecca Spizzirri, Sophia Korb, David Guthrie, Eleizer Yudkowsky, Eric Schwitzgebel, Anna Gollub, Priti Ugghley, @eripsabot, richROT, the participants in my AI and Autonomy seminars and the Humanities department at NJIT, all my HTEC students at CTY:Princeton, and everyone in the \#botally and Robot Rights communities across social media, especially David Gunkel, Julie Carpenter, Roman V. Yampolskiy, Damien Patrick Williams, and Joanna Bryson. Thanks also to the organizers, participants, and tweeps at \#AIES.
|
{
"timestamp": "2018-03-09T02:00:54",
"yymm": "1803",
"arxiv_id": "1803.02852",
"language": "en",
"url": "https://arxiv.org/abs/1803.02852"
}
|
\section{Introduction}
Ultra-high-energy ($>$ 100 PeV) neutrinos are expected to be produced from interactions of high-energy cosmic rays with cosmic microwave background photons~\cite{BZ}.
The low expected flux~\cite{kotera} and small cross section
require monitoring
an immense volume of a dense material
for successful detection. Coherent Cherenkov emission in the radio regime ({\it i.e.}
Askaryan emission~\cite{askaryan}) from neutrino-induced
showers in radio-transparent dense dielectric media such as ice provides a viable mechanism
for achieving a large enough detector volume for detection of the highest
energy neutrinos. The expected signal is broadband up to
a cutoff frequency of $\sim$~GHz and the emitted power scales quadratically
with shower energy.
The Antarctic Impulsive Transient Antenna (ANITA), a NASA long-duration balloon
payload~\cite{instrument}, is an array of high-gain antennas that monitors the
Antarctic ice sheet for impulsive, broadband neutrino and cosmic-ray-induced radio emission.
ANITA is not only sensitive to Askaryan emission from neutrino-induced showers
in ice, but can also observe geomagnetic emission from
extensive air showers (EAS) induced by cosmic rays or decaying $\tau$ leptons created by $\tau$ neutrino interactions~\cite{anita1CR,mysteryEvent}.
The analyses described here are optimized to look for neutrino-induced Askaryan
emission, but are also sensitive to the EAS channel. The EAS channel is a useful sideband region for these analyses, which is a region of phase space adjacent to the neutrino signal region, and useful for determining cut values and estimating efficiencies and backgrounds. Due to the direction of Earth's magnetic field in Antarctica, EAS emission is mainly horizontally polarized. Askaryan emission visible to ANITA is mostly vertically polarized for Standard Model cross sections, due to preferential Fresnel effects as the radio pulse propagates through the ice surface.
\section{Experimental technique}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{system_diagram}
\caption{A schematic diagram of the ANITA-III instrument. Signals from
48~dual-polarized, quad-ridge horn antennas are fed into a bandpass
filter, through a low-noise amplifier, and then through a second-stage
amplifier. Then, the signals are each split into two parts. The
trigger path signal passes through a tunnel diode square-law detector and
amplifier before being compared to a threshold that forms a first-stage
(channel-level) trigger. If a global trigger (a coincidence between multiple channels) is issued, the signal from the
digitizer path is digitized using a LAB3 switched capacitor array
sampling at 2.6~GSa/sec, recorded on the flight computer, and stored to
disk. The first levels of the trigger and digitization are performed by a
custom board called a Sampling Unit for Radio Frequencies (SURF). The Trigger Unit for Radio Frequencies (TURF) collects lower-level trigger information from the entire payload
to form global triggers. The NASA Support Instrument Package (SIP) is responsible for telemetry. }
\label{fig:anita3}
\end{figure*}
The third flight of the ANITA experiment, ANITA-III, launched
on 18 December 2014. The instrument is similar to
the previous two ANITA payloads~\cite{anita1,anita2}. The primary upgrades from ANITA-II are
the addition of 8 more antennas and a low-frequency antenna (ALFA) aimed at enhancing detection of EAS signals, the implementation of a new impulsive full-band-only trigger in both horizontal and vertical polarizations, and the use
of new lower-noise radio-frequency amplifiers. Here we briefly describe the instrument, flight,
and calibration procedures.
\subsection{The ANITA-III instrument}
\begin{figure}
\includegraphics[width=\columnwidth]{trigger_snr}
\caption{Trigger efficiency vs. voltage signal-to-noise ratio (SNR), derived from lab measurements of injected signals in three adjacent azimuthal sectors. Efficiencies are shown for sets of delays between antennas corresponding to two different elevation angles. }
\label{fig:trigger_snr}
\end{figure}
A schematic of the ANITA-III instrument and data acquisition system is depicted in
Fig.~\ref{fig:anita3}. Forty-eight dual-polarization quad-ridge horn antennas from Antenna Research Associates, Inc. are arranged in a three-ring vertical cylindrical pattern to form 96 wideband
(180~MHz-1200~MHz) channels. Each ring has 16 antennas, and each grouping of three
antennas (top, middle, bottom) is azimuthally aligned, forming 16 azimuthal
sectors. The signal from each channel is bandpass-filtered and then amplified by a
custom-built low-noise amplifier, which is
adjacent to the antenna, and then split into trigger and
digitization paths after a second stage of amplification. Antenna temperatures are typically $\sim$ 130~K and the noise temperatures for the front-end filters and amplifiers are $\sim$100~K.
The trigger path uses a custom tunnel diode as a fast square-law
detector. The tunnel diode output is compared to a dynamically-adjusted
threshold to determine if a channel-level (first-level) trigger should be issued. Unlike
previous ANITA payloads, first-level triggers are based solely on the total power within approximately 10-ns
coincident windows
in each channel, not the frequency content of the signal~\cite{instrument}. The
trigger thresholds are adjusted in real time
to keep the first-level
trigger rate approximately at its target rate, which for ANITA-III was 450 kHz.
A second-level trigger condition is imposed at the level of each azimuthal sector and is satisfied by a coincidence of
two or more channels in a single polarization within the sector. If a first-level trigger is issued for a given channel, a
coincidence window opens during which another channel in the
same azimuthal sector satisfying the first-level condition would
generate a second-level trigger. The size of the coincidence window
depends on the rings involved in the trigger, set by the expectation for up-going signals: 16~ns for the bottom ring,
12~ns for the middle ring,
and 4~ns for the top ring.
The third-level (global) trigger is generated by the coincidence of second-level
triggers occurring in the same polarization in adjacent azimuthal sectors. A global
trigger will cause the digitized signals to be read out,
assuming the four-deep digitizer buffer is not full. Over the course of the flight, the average deadtime incurred from full digitizer buffers was 13\%. Because the triggers for horizontal and vertical polarizations
operate independently, it is possible to have a simultaneous trigger for both. The global
trigger rate over the course of the flight for ANITA-III was approximately 50~Hz. The trigger efficiency as a function of voltage signal-to-noise ratio (SNR) in the trigger chain, derived from lab measurements, is shown in Fig.~\ref{fig:trigger_snr}. The trigger efficiency reaches 50\% at a voltage SNR of $4.0\sigma$.
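For illustration, the following minimal sketch reproduces the coincidence logic just described; the data structures, the rule for combining the per-ring windows across a pair of channels, and the variable names are simplifying assumptions of this sketch, not the flight firmware.
\begin{verbatim}
# Illustrative sketch of the ANITA-III trigger hierarchy (not flight firmware).
RING_WINDOW_NS = {"bottom": 16.0, "middle": 12.0, "top": 4.0}

def level1_hits(power_samples, threshold):
    """First level: times where the tunnel-diode (square-law) output
    exceeds the dynamically adjusted threshold."""
    return [t for (t, p) in power_samples if p > threshold]

def level2_sector(hits_by_ring):
    """Second level: two or more channels of one polarization fire within
    the ring-dependent coincidence window inside one azimuthal sector.
    (Taking the larger of the two ring windows for a pair is an
    assumption of this sketch.)"""
    hits = sorted((t, r) for r, times in hits_by_ring.items() for t in times)
    for i, (ti, ri) in enumerate(hits):
        for tj, rj in hits[i + 1:]:
            if rj != ri and tj - ti <= max(RING_WINDOW_NS[ri], RING_WINDOW_NS[rj]):
                return True
    return False

def level3_global(sector_flags):
    """Third level (global): second-level triggers in adjacent azimuthal
    sectors of the same polarization; sector_flags is a list of 16 booleans."""
    n = len(sector_flags)
    return any(sector_flags[k] and sector_flags[(k + 1) % n] for k in range(n))
\end{verbatim}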
The digitizer path uses LAB3~\cite{lab4} switched capacitor array digitizers
with a mean sample rate of 2.6~GSa/s. Each channel has four 260-sample analog
buffers to minimize deadtime.
In addition to the science triggers generated by the trigger logic described
above, there are triggers generated either by the payload computer or a pulse
per second signal from the onboard GPS devices. These provide a set of
minimum-bias triggers to help assess the noise environment during flight.
To prevent a portion of the payload from triggering too often and
monopolizing all available digitizer buffers, a trigger mask is automatically
enabled by the flight computer if an azimuthal sector's global trigger
rate exceeds a configurable threshold. This allows ANITA to dynamically
mask channels from the trigger that are subject at any given time
to significant anthropogenic (man-made) noise from locations in Antarctica. Because of satellite
interference in ANITA-III, the channels facing north at a given time were masked throughout most of the flight.
The ANITA payload rotates freely. Two independent ADU5 differential GPS
units are used to measure the payload attitude and position.
Power is supplied and controlled with photo-voltaic panels, a bank of batteries,
and a charge controller.
Telemetry is available during the flight through Iridium, TDRSS (when
available), and a line-of-sight system when near McMurdo Station, the largest base of operations in Antarctica.
\subsection{The ANITA-III flight}
\begin{figure}
\includegraphics[width=\linewidth]{a3path}
\caption{The ANITA-III flight path is shown on top of a map of ice depth~\cite{bedmap2}.
The location of a high-voltage radio impulse generator used as a calibration source (WAIS Divide) and the launch site (LDB Facility) are also shown.
}\label{fig:flightpath}
\end{figure}
ANITA-III launched from the NASA long-duration balloon facility on the Ross Ice
Shelf near McMurdo station on December $18^{\textrm{th}}$, 2014. ANITA-III flew for
22 days before termination on January $9^{\textrm{th}}$, 2015. The flight path is shown in Fig.~\ref{fig:flightpath}.
The hard disks and flight hardware were recovered with the
aid of the Australian Antarctic Division from nearby Davis station.
High-voltage impulsive calibration signals were
sent to ANITA-III from the
launch site and from an autonomous high-voltage calibration pulser deployed at
WAIS Divide. These field pulsers are referenced to GPS time
to facilitate identification. The data from the WAIS pulser proved particularly useful since
ANITA-III passed close enough to be triggered over 100,000 times by the
pulser.
ANITA requires extensive calibration of each digitizer in order to make full
use of the precision timing information. In addition, a temperature
correction must be applied to account for changes in clock frequency as a function
of temperature. A
detailed summary of these calibration procedures is provided in~\cite{BStruttThesis,BRotterThesis}.
\section{Analysis Methods}
Of the over seventy million science triggers captured during the ANITA-III
flight, at most a few events of neutrino origin are expected. The
threshold-riding trigger on the instrument is set so that the vast majority of
those events are thermal noise, the level of which in turn dictates ANITA's
threshold. The majority of the remaining events are anthropogenic transient
and continuous-wave (CW) emission and occasional impulsive emission from the
on-board electronics, which we call {\it payload blasts}.
After reviewing the backgrounds to the search and the simulation tools, we will
briefly summarize the three neutrino searches performed. Additional
detail for each analysis is provided in appendices.
\subsection{Classes of backgrounds}
The vast majority of recorded ANITA-III events are random fluctuations of thermal noise due to ANITA's threshold-riding trigger.
The typical antenna temperature for ANITA-III is $\sim130$~K, from a
combination of the sky and the ice in the antennas' field of view.
Anthropogenic CW from terrestrial
transmitters or satellites will also trigger ANITA-III.
In particular, the 260 and 380~MHz communication bands
used by various satellites are a dominant
cause of science triggers for ANITA-III.
Even events that triggered on an impulsive neutrino-like signal can have
a significant contribution from CW sources, which complicates analysis.
Self-triggered payload blasts are impulsive radio-frequency emission
generated by electronics on the ANITA payload.
Although ANITA electronics are heavily shielded to prevent leakage of electromagnetic interference (EMI) from the payload, some unknown source of self-interference still appears sporadically in the data. Payload blasts are characterized by non-planar wavefront geometry (since they originate
from very close to the antennas), a distinct frequency spectrum, and are typically much
stronger in the bottom and middle rings of antennas than the top ring (also due to their
being local to the payload).
Isolated, broadband impulsive anthropogenic emission from the ground and thermal noise fluctuations
that by chance reconstruct as coming from the continent are both sources of background that remain
after analysis cuts are developed. In all cases, the contribution to the
expected background in the signal region is estimated before unblinding the
search.
\subsection{Simulation}
The primary ANITA simulation tool is \texttt{icemc}, described in detail
in~\cite{icemc}. The \texttt{icemc} program
includes a full treatment of the ANITA trigger
and digitizer signal chain and uses the flight paths and recorded channel
thresholds in order to model the acceptance of ANITA. It is a
weighted Monte Carlo (MC), where each generated neutrino carries a weight
corresponding to its survival probability and a phase-space factor.
We generate a set of simulated neutrinos to characterize the efficiency of the
analyses. The simulated neutrinos follow the maximum mixed-composition Kotera {\it et al.}~\cite{kotera} flux
model, hereafter referred to as ``Kotera", with Standard Model cross sections~\cite{crosssection}. To simulate the flight noise
environment, the trigger path was modeled with synthetic noise with levels and
spectra derived from the flight, and real minimum-bias trigger data were added
in the digitizer path.
The choice of flux model has little effect on predicted neutrino observables. However, changes in neutrino interaction lengths, even within Standard Model bounds on cross section, affect what emission cones are visible, resulting in different observable angular and polarization distributions.
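As an illustration of this weighted-Monte-Carlo bookkeeping, the sketch below shows how per-event weights are accumulated into an expected event count and an analysis efficiency; the variable names and the single normalization factor that hides the flux-model and generation-volume details are assumptions of the sketch, not the \texttt{icemc} interfaces.
\begin{verbatim}
import numpy as np

def mc_expected_events(weights, passed, n_thrown, norm_per_thrown):
    """Expected number of detected events from a weighted Monte Carlo.
    weights         : per-event weight (Earth survival probability x phase-space factor)
    passed          : boolean mask, event triggered and survived all analysis cuts
    n_thrown        : total number of generated neutrinos
    norm_per_thrown : physical events per thrown event for the assumed flux,
                      exposure and generation volume (hides the detailed bookkeeping)
    """
    w = np.asarray(weights, dtype=float)
    return norm_per_thrown * w[np.asarray(passed, bool)].sum() / n_thrown

def analysis_efficiency(weights, triggered, passed_cuts):
    """Weighted analysis efficiency relative to the trigger, as quoted in the text."""
    w = np.asarray(weights, dtype=float)
    return w[np.asarray(passed_cuts, bool)].sum() / w[np.asarray(triggered, bool)].sum()
\end{verbatim}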
\subsection{Summary of blind searches}
Three independent blind neutrino analyses were performed, which we denote,
in order of completion, \textbf{A}, \textbf{B} and \textbf{C}. Analyses \textbf{A} and \textbf{B} are similar to each other and to previous ANITA analyses in using common criteria across the continent and searching for isolated events~\cite{anita1,anita2}. Analysis \textbf{C} applies a new methodology in developing geographically-dependent
search criteria with the aim to maintain sensitivity even in regions
of ice with higher levels of anthropogenic noise. Each neutrino search was done with at least one method of blinding: keeping hidden the region of parameter space where the signal resides, using a decimated data sample to set cuts, and/or ``salting'' the data by inserting simulated events.
Further details
are available in Appendices~\ref{sec:clustering},~\ref{sec:clustering2},
and~\ref{sec:binned}, respectively.
\begin{figure}
\includegraphics[width=\columnwidth]{maps.pdf}\\
\vspace{0.25cm}
\includegraphics[width=\columnwidth]{wfs}
\caption{An example of an interferometric map (top), a coherently-averaged waveform (bottom left), and a dedispersed coherently-averaged waveform (bottom right) for a calibration pulser event. The color scale in the top panel corresponds to the normalized cross-correlation value. Although we expect Askaryan neutrino signals to be mostly vertically polarized, the calibration pulser is horizontally polarized.}
\label{fig:interferometric_map}
\end{figure}
Each analysis begins by filtering waveforms to mitigate undesired CW
contamination that would otherwise interfere with the analysis. Analyses \textbf{A} and
\textbf{B} use an adaptive time-domain phasor removal technique while
\textbf{C} uses a method that
removes CW phasors in the frequency domain~\cite{brianDaileyThesis}.
The filtered waveforms from antennas with at least a partial common field of view are correlated against each other to produce an
interferometric map~\cite{interferometric}, which indicates the apparent amount of correlated power as
a function of incoming direction. Peaks of the map are considered hypotheses of
coherent sources, for which a coherently-averaged waveform is produced. The
group delay of the instrument response can be
removed from each waveform prior
to coherent averaging, to form a dedispersed
coherently-averaged waveform.
Fig.~\ref{fig:interferometric_map} shows an example map, a coherently-averaged
waveform, and a dedispersed coherently-averaged waveform for a calibration pulser event.
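In essence, the map is a sum of pairwise cross-correlations evaluated at the geometric delays expected for each trial arrival direction. The following sketch illustrates the idea under a plane-wave assumption; the antenna positions, sampling rate, and sign conventions are placeholders rather than the actual payload geometry or analysis code.
\begin{verbatim}
import numpy as np

def normalized_xcorr(a, b):
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full")   # length 2N-1, zero lag at index N-1

def interferometric_map(waveforms, positions, directions, fs):
    """Sum pairwise cross-correlations at the plane-wave delay for each
    trial arrival direction; the peak gives the reconstructed direction.
    positions is an (n_ant, 3) array in meters, directions (n_dir, 3) unit vectors."""
    c, n = 299792458.0, len(waveforms[0])
    pairs = {(i, j): normalized_xcorr(waveforms[i], waveforms[j])
             for i in range(len(waveforms)) for j in range(i + 1, len(waveforms))}
    power = np.zeros(len(directions))
    for d, khat in enumerate(directions):
        total = 0.0
        for (i, j), cij in pairs.items():
            dt = np.dot(positions[j] - positions[i], khat) / c   # expected delay (s)
            lag = (n - 1) + int(round(dt * fs))
            if 0 <= lag < len(cij):
                total += cij[lag]
        power[d] = total
    return power
\end{verbatim}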
From the raw waveforms, interferometric map, and coherent waveforms, each search
computes a number of observables for each event that may be used to reject
backgrounds. Examples of observables include the peak correlation value of the
interferometric map, the peak of a coherent waveform's analytic envelope,
measures of coherent and dedispersed waveform impulsivity, and polarimetric
quantities. Each search has a set of ``quality cuts'' used to remove
digitizer glitches, payload blasts, and other poor-quality
events prior to attempting to separate thermal and anthropogenic backgrounds.
Analyses \textbf{A} and \textbf{B} use similar approaches to reject thermal and
anthropogenic noise. A multivariate linear discriminant on various observables
(different between the analyses, but much of the power in both is from measures
of impulsivity) is used to discriminate signal-like events from background. This discriminant is trained with simulated events as a signal
sample and events reconstructing above the horizontal as a non-impulsive
sideband. Events passing the signal selection are then spatially clustered in
order to identify isolated signal-like events. Analysis~\textbf{A} projects a bivariate Gaussian distribution corresponding to the pointing resolution for each passing
event onto a map of Antarctica, creating a localization distribution on the continent, and considers the
overlap of each event's localization with the sum of the localizations of all other
events, using no \textit{a priori} information about human activity. Analysis~\textbf{B} considers how close each event is to known locations of human activity (bases) or to the
nearest other event that passes signal-like cuts, where a fit along the continent's surface is used to find
the best mutual location for each event pair. Both searches treat horizontally and vertically-polarized
events in the same way, but only passing vertically-polarized events are in the Askaryan neutrino signal region. Passing
horizontally-polarized events contain a sample of EAS.
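A standard way to construct such a multivariate linear discriminant is Fisher's linear discriminant; the sketch below is a generic implementation and does not reproduce the particular observables or training samples of Analyses \textbf{A} and \textbf{B}.
\begin{verbatim}
import numpy as np

def fisher_discriminant(signal, background):
    """Weight vector of a Fisher linear discriminant.
    signal, background : (n_events, n_observables) arrays, e.g. simulated
    neutrinos and above-horizontal (non-impulsive) sideband events."""
    mu_s, mu_b = signal.mean(axis=0), background.mean(axis=0)
    within = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
    w = np.linalg.solve(within, mu_s - mu_b)
    return w / np.linalg.norm(w)

def discriminant_score(events, w):
    """Project events onto the discriminant axis; a threshold on this
    score separates signal-like from background-like events."""
    return events @ w
\end{verbatim}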
Analysis \textbf{C} is complementary in that it uses geographically-dependent selection criteria to identify events that stand out from
other events in the local noise environment. The power of this technique is in its ability to retain additional portions of the continent in the neutrino search in the presence of anthropogenic noise.
The search discretizes the continent, utilizing the HEALPix package~\cite{Gorski:2004by}, from the
start, with each bin (about 400~km on a side) treated as an independent analysis. Cuts are optimized for the best expected limit after combining results
across bins, reflecting bin-dependent
neutrino sensitivities, noise environments, and systematic uncertainties on the background estimates.
Analysis \textbf{C} uses a 10\%
subset of the data to model the total background environment and assess the
associated systematic uncertainties on
the background estimates.
Based on a common appearance of
the background distributions
across bins, we
assert that the backgrounds follow an exponential behavior in a final
cut variable.
If an exponential fit in a bin gives a p-value below 0.05,
or insufficient data is available,
the bin is rejected from the
analysis. In addition to the systematic uncertainties that come from the fits,
the optimization also accounts for
a systematic uncertainty due to
spillover of events between bins~\cite{jacobGordonThesis}.
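To illustrate the per-bin procedure, the sketch below fits an exponential to the tail of a final cut variable, applies a simple goodness-of-fit check, and extrapolates the expected background beyond the final cut; the minimum event count, the use of a Kolmogorov--Smirnov test as the goodness-of-fit proxy, and the variable names are assumptions of the sketch rather than the Analysis \textbf{C} implementation.
\begin{verbatim}
import numpy as np
from scipy import stats

def fit_exponential_tail(values, cut_min, min_events=10):
    """Maximum-likelihood exponential fit to values above cut_min,
    with a KS-test p-value as a goodness-of-fit proxy.  Returns None
    (bin rejected) if too few events are available."""
    tail = np.asarray(values, float)
    tail = tail[tail >= cut_min] - cut_min
    if len(tail) < min_events:
        return None
    rate = 1.0 / tail.mean()
    _, p_value = stats.kstest(tail, "expon", args=(0.0, 1.0 / rate))
    return rate, len(tail), p_value           # reject the bin if p_value < 0.05

def expected_background(rate, n_above, cut_final, cut_min):
    """Extrapolate the fitted exponential past the final cut to estimate
    the background expectation in this bin's signal region."""
    return n_above * np.exp(-rate * (cut_final - cut_min))
\end{verbatim}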
Analysis \textbf{C} utilizes
cross-correlation values
derived from
both linearly and circularly polarized waveforms to reject thermal noise and
events influenced by satellite interference~\cite{samStaffordThesis}.
Treating horizontal and vertical polarizations as
separate search channels,
Analysis \textbf{C} imposes a
cut on a linear
combination of the strength of the coherent waveform and the peak
cross correlation that is bin-dependent to distinguish thermal events from signal-like events.
Analyses \textbf{A} and \textbf{B} also set final thermal and clustering cuts by
optimizing sensitivity. Analysis~\textbf{A} estimates backgrounds with sidebands as in the on-off
problem~\cite{Rolke}, avoiding the need to assert a model for the background distributions. Analysis~\textbf{B} uses an on-off treatment for the anthropogenic
background, but an empirical model for the non-signal-like background. In both cases,
events that reconstruct above the horizontal are used to estimate the leakage from the multivariate
discriminant. To estimate the anthropogenic background, Analysis~\textbf{A} uses a
sideband that is sub-threshold in the multivariate discriminant while Analysis~\textbf{B} uses a
sideband of signal-like events near known bases. Analysis \textbf{A} has a
total estimated background per polarization of $0.8^{+0.6}_{-0.4}$~and
Analysis \textbf{B} expects $0.7^{+0.5}_{-0.3}$~per polarization.
Analysis \textbf{C} estimates backgrounds and uncertainties
bin-by-bin that are
about 0.1 event per bin
with $\sim$10\% systematic uncertainties.
The overall analysis efficiency, estimated using simulation, is 72$\pm$5\% for
Analysis \textbf{A}, 84$\pm$3\% for Analysis~\textbf{B} and $7^{+6}_{-3}$\% for Analysis
\textbf{C}, while Analysis~\textbf{C} has
more than twice its mean efficiency in some bins.
Statistically, Analysis~\textbf{B} is the most sensitive analysis.
Analyses {\bf A} and {\bf B} choose different clustering techniques to remove anthropogenic noise: Analysis {\bf A} solely relies on event self-clustering and includes a larger event sample for clustering, while Analysis {\bf B} relies on a list of known locations of human activity as well as event self-clustering.
Analysis {\bf C} aims
to complement the other two searches by
peering into noisy as well as quiet environments using
geographically-specific cuts and, with this aim in mind, cuts more aggressively on backgrounds.
Of the simulated neutrino events found by Analysis~\textbf{C}, 25\% would have been rejected by the other two analyses.
\section{Results}\label{sec:results}
\begin{figure}
\includegraphics[width=\columnwidth]{map_events.pdf}
\caption{Events consistent with EASs from Analyses \textbf{A}, \textbf{B}, and \textbf{C} and the event in the vertically-polarized Askaryan neutrino signal region from Analysis \textbf{B} (the most efficient of the three analyses). Only horizontally-polarized events with a good EAS template-correlation match and consistent polarization with the local geomagnetic field are shown on the map.}
\label{fig:map_events}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Identified by Analysis \textbf{A} }\\
\multicolumn{4}{|c|}{Background estimate: $0.8^{+0.6}_{-0.4}$ per polarization }\\
\multicolumn{4}{|c|}{Overall efficiency: 72$\pm5$\%}\\
\hline
Event & \textbf{A} & \textbf{B} & \textbf{C} \\
\hline
-- & -- & -- & -- \\
\hline
\hline
\multicolumn{4}{|c|}{Identified by Analysis \textbf{B}}\\
\multicolumn{4}{|c|}{Background estimate:~$0.7^{+0.5}_{-0.3}$ ~per polarization }\\
\multicolumn{4}{|c|}{Overall efficiency: 84\%}\\
\hline
Event & \textbf{A} & \textbf{B} & \textbf{C} \\
\hline
~~~83139414~~~ & ~~~~~S~~~~ & ~~~~\ding{51} ~~~~ & ~~~~P~~~~ \\
\hline
\hline
\multicolumn{4}{|c|}{Identified by Analysis \textbf{C}}\\
\multicolumn{4}{|c|}{Expect $\sim0.1$ event in each of 37 bins}\\
\multicolumn{4}{|c|}{Effs. per bin: from a few \% to $18$\% }\\
\hline
Event & \textbf{A} & \textbf{B} & \textbf{C} \\
\hline
21702154 & S &S & \ding{51} \\
73750661 & S & S & \ding{51} \\
\hline
\end{tabular}
\caption{Summary of events identified by each search in the (vertically-polarized) Askaryan neutrino search region. The analysis efficiencies on MC neutrinos and background estimates per polarization for each analysis are included.
The one vertically-polarized event remaining in the signal region in Analysis \textbf{B} was found to be sub-threshold but isolated in Analysis \textbf{A}, and was cut by a directional cut in Analysis \textbf{C}, discussed in Appendix~\ref{sec:binned}.
All analyses find a number of events in the signal region consistent with their background estimates.
A \ding{51}~indicates that the event was found by a search. For events not identified, ``Q'' means the event was
rejected by ``quality'' pre-selection cuts (e.g. requirements on trigger
polarization, time and \textit{a priori} elevation angle cuts), an ``S'' means
the event was sub-threshold in a signal-like selection criterion, and a ``P''
indicates the event was rejected due to its position (clustering, or, for Analysis
\textbf{C}, HEALPix bin or angular proximity to regions with geosynchronous satellites).
}
\label{tbl:events}
\end{table}
Askaryan neutrino signals are
expected to be predominantly vertically polarized for Standard Model cross sections, but all searches consider both
horizontally and vertically-polarized events. Horizontally-polarized events are not in the Askaryan neutrino signal region, but they provide a useful
cross-check on the analyses. Within the horizontally-polarized sideband region are any events from EAS from cosmic
rays and from $\tau$ leptons originating from $\nu_{\tau}$ interactions in the Earth or ice.
\subsection{Summary of events found}
The Askaryan neutrino
search region is exclusively
in the vertical polarization channel.
However, we also report events identified by each analysis that pass all cuts except for the angle of linear polarization, which constitute a sample of horizontally-polarized events that we use as a validation of the relative
signal efficiencies reported by each analysis.
We report on EAS candidate events in a separate paper~\cite{a3tau}.
Analysis \textbf{A} finds no events in the
Askaryan signal region and 22 events in the horizontally-polarized sideband. Of the 22, 21 are in agreement with the expected signal shape of an EAS template and have polarization consistent with the local geomagnetic field. The remaining event is inconsistent with an EAS hypothesis (it has both poor correlation with an EAS signal shape template and nearly equal power in horizontal and vertical polarizations, which is not allowed by the Antarctic geomagnetic
field), but is consistent with the background estimate of $0.8^{+0.6}_{-0.4}$ in this horizontally-polarized region. Eighteen of these events that are consistent with an EAS signature were identified in a separate, dedicated EAS search~\cite{BRotterThesis}.
Analysis \textbf{B} identifies one event in the Askaryan neutrino signal region (event 83139414) and 25 events in horizontally-polarized sideband region. The event in the Askaryan neutrino signal region passed clustering cuts but was sub-threshold in Analysis~\textbf{A}. This is consistent with the slightly better analysis efficiency achieved by Analysis~\textbf{B} compared to Analysis~\textbf{A}. The 25 horizontally-polarized events include 20 of the 21 events from Analysis~\textbf{A} that are consistent with an EAS signature
and five additional events, including one separately identified by the dedicated EAS search~\cite{a3tau,BRotterThesis}. All horizontally-polarized events that pass cuts in Analysis~\textbf{B} are consistent with emission from
EAS in both signal shape and polarization.
Analysis \textbf{C}
identifies two vertically-polarized events in the Askaryan neutrino signal region and seven horizontally-polarized events that pass all cuts.
Two of the horizontally-polarized events (events 33484995 and 58592863) are also
found in Analyses~\textbf{A} and \textbf{B}, and are consistent with an EAS signature in signal shape and polarization angle.
A third (event 48837708) is also consistent with an EAS. The remaining four horizontally-polarized events are consistent with the background estimate.
The two events in the neutrino signal region are also consistent with the
background estimate. We note that observing one event in each of two bins out of 37 has a negligible effect on
the flux constraints, which is one advantage of using a binned approach.
Table~\ref{tbl:events} lists all vertically-polarized events that pass all cuts in at least one analysis.
The locations of all horizontally-polarized events consistent with EASs and the (vertically-polarized) event in the Askaryan neutrino signal region
identified by Analysis~\textbf{B} are shown in Fig.~\ref{fig:map_events}.
The total number of horizontally-polarized events consistent with EASs observed (27) is consistent with the EAS results from ANITA-I, scaled for the relative exposures of the two flights~\cite{anita1CR}.
\subsection{Limit on the diffuse neutrino flux and model constraints}
\begin{figure}
\includegraphics[width=\columnwidth]{limit} \\[0.2cm]
\begin{tabular}
{l|c|c|c|c|c|c|c}
$\log_{10}$(E(eV)) & 18 & 18.5 & 19 & 19.5 & 20 & 20.5 & 21 \\
\hline
A (km$^2 \cdot$sr) & 0.00038 & 0.016 & 0.31 & 2.5 & 14 & 46 & 109\\
\end{tabular}
\caption{ANITA-III limit on the all-flavor-sum diffuse
ultra-high-energy neutrino flux and a combined limit from ANITA
I-III, using the ANITA-III limit shown here and the published
ANITA-II and ANITA-I limits~\cite{anita1,anita2}. The latest
ultra-high-energy neutrino limits from the Auger~\cite{auger2015}
and IceCube~\cite{icecube2017erratum} experiments, and two
cosmogenic neutrino models~\cite{kotera,ahlers} are also
shown. See Appendix~\ref{sec:limit} for details about the calculation of
the limit. The table lists the ANITA-III effective area as a function of neutrino energy used to make the limit, not including analysis efficiency. }
\label{fig:limit}
\end{figure}
The limit (Fig.~\ref{fig:limit}) on the expected neutrino flux is calculated
using a livetime of 17.4 days and a geometric mean of \texttt{icemc}-computed acceptance with an
acceptance estimate from an independent MC simulation developed for ANITA, the
analysis efficiency as a function of neutrino energy, and the appropriate 90\%
Feldman-Cousins factor for the number of events detected and expected
backgrounds. Further details are available in Appendix~\ref{sec:limit}. While Analysis~\textbf{A} would provide the best limit (as it finds no
events), Analysis~\textbf{B} has the best expected sensitivity, so we use its result to
set the limit. The expected number of events for the Kotera maximum mixed-composition and maximum all-proton models are $0.029\pm0.002$ and $0.17 \pm 0.01$, respectively. ANITA-III sets a 90\% CL integral flux limit on a pure $E_{\nu}^{-2}$ spectrum for $E_{\nu} \in [10^{18} \mathrm{eV},10^{21} \mathrm{eV}]$ of $E^2_{\nu} \Phi_{\nu} \leq 4.6 \times 10^{-7}~\mathrm{GeV}~\mathrm{cm}^{-2}~\mathrm{s}^{-1}~\mathrm{sr}^{-1}$.
We also show a combined limit from ANITA I-III where we have used the total
number of events seen, total expected background, and the
analysis-efficiency-weighted sum of previously-published effective
volumes~\cite{anita2}.
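For orientation, the expected event count for a given flux model follows from a simple quadrature of the flux against the tabulated effective area. The sketch below uses the livetime and effective areas quoted above; the energy-independent efficiency, the coarse trapezoidal quadrature, and the example spectrum are illustrative assumptions only, not the limit calculation detailed in Appendix~\ref{sec:limit}.
\begin{verbatim}
import numpy as np

LOG10_E    = np.array([18.0, 18.5, 19.0, 19.5, 20.0, 20.5, 21.0])      # log10(E/eV)
AREA_KM2SR = np.array([0.00038, 0.016, 0.31, 2.5, 14.0, 46.0, 109.0])  # km^2 sr
LIVETIME_S = 17.4 * 86400.0

def expected_events(flux, efficiency=0.84):
    """Expected detected neutrinos for an all-flavor flux model.
    flux(E_eV) must return dN/(dE dA dt dOmega) in 1/(eV cm^2 s sr);
    a single, energy-independent efficiency is an assumption here."""
    E = 10.0 ** LOG10_E
    area_cm2sr = AREA_KM2SR * 1e10            # 1 km^2 = 1e10 cm^2
    integrand = flux(E) * area_cm2sr * efficiency
    return LIVETIME_S * np.trapz(integrand, E)

# Example: an E^-2 test spectrum with E^2 Phi = 4.6e-7 GeV cm^-2 s^-1 sr^-1
e2phi_eV = 4.6e-7 * 1e9
n_exp = expected_events(lambda E: e2phi_eV / E**2)
\end{verbatim}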
\section{Discussion}
\newcommand{\evRA}{11.43}
\newcommand{\evDEC}{16.3}
\newcommand{\evANG}{73.7}
\begin{figure}
\includegraphics[width=\columnwidth]{83139414}
\begin{tabular}{|l|l|}
\hline
Estimated Event Location & 96.21$^\circ$ E, 68.57$^\circ$ S, 2319 m \\
Payload Location & 90.07$^\circ$ E, 70.26$^\circ$ S, 33643 m \\
Minimum Energy & $10^{19}$ eV\\
UTC Time & 2015-01-08 19:04:24.23740235\\
Estimated Sky Position & \evRA~h, \evDEC$^\circ$ \\
\hline
\end{tabular}
\caption{Event localization (left) and dedispersed coherently-averaged waveform (right) for vertically-polarized event 83139414. This event is in the Askaryan neutrino signal region in Analysis \textbf{B} and a sub-threshold, isolated event in Analysis \textbf{A}. The table below provides additional information about the event: the longitude, latitude, and ice depth at the estimated event location; the longitude, latitude, and altitude of the payload at the time of detection; and the minimum neutrino energy and mean sky position that could have produced the event according to the MC simulation. The black contours on the map represent 1--5~$\sigma$ regions for the event location.
}
\label{fig:vpol_event}
\end{figure}
The isolated vertically-polarized event 83139414 from Analysis \textbf{B} (and just outside the signal region
in \textbf{A}) is particularly intriguing. While consistent with
the pre-unblinding background estimate, our post-unblinding
interpretation is that the event is both unusually isolated and has a signal shape (Fig.~\ref{fig:vpol_event}) consistent with impulsive broadband emission.
There is no known human activity within 260~km.
The polarization of the surviving event is
consistent with expectations from neutrino
simulations, and the signal has no features
that make it easily identifiable as an anthropogenic signal ({\it e.g.} slow rise time, narrow bandwidth, double-pulse structure).
Including additional metrics such as these to distinguish neutrino-like signals from anthropogenic noise would
reduce a post-unblinding
background estimate for this particular event by an order of magnitude.
The emission comes from a location on the continent consistent in ice depth and
elevation angle with the simulated distribution of neutrinos. The source location
of the emission is fully consistent with MC neutrino simulations using the
ANITA-III flight path and recorded thresholds.
Simulations of neutrino interactions near the reconstructed event location, with
the payload at the detection position, suggest a minimum possible neutrino energy of
$10^{19}$~eV. Simulation may also be used to estimate the direction of a
neutrino producing radio emission compatible with the observed location and
polarization. The localization corresponding to the surviving event in equatorial
coordinates is well-approximated by an elliptical Gaussian centered at (RA,dec)
= (\evRA~h, \evDEC$^\circ$), with major and minor axis standard deviations of
5.0$^\circ$ and 1.0$^\circ$, respectively, and a position angle of
\evANG$^\circ$.
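Written out explicitly, and assuming the standard astronomical convention that the position angle $\psi$ is measured from north through east, this localization density takes the form
\[
f(\Delta\alpha,\Delta\delta) \propto \exp\left[-\frac{1}{2}\left(\frac{x'^2}{\sigma_{\rm maj}^2}+\frac{y'^2}{\sigma_{\rm min}^2}\right)\right],
\quad
x' = \Delta\alpha\cos\delta\,\sin\psi + \Delta\delta\,\cos\psi,
\quad
y' = -\Delta\alpha\cos\delta\,\cos\psi + \Delta\delta\,\sin\psi,
\]
with $\sigma_{\rm maj} = 5.0^\circ$, $\sigma_{\rm min} = 1.0^\circ$, $\psi = \evANG^\circ$, and $(\Delta\alpha,\Delta\delta)$ the offsets from the central position quoted above.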
In summary, despite challenging EMI conditions ANITA-III has yielded a robust sample of radio-detected ultra-high-energy cosmic rays~\cite{a3tau}. While no compelling neutrino signal above background has been detected, the one remaining passing event in Analysis {\bf B} is rather unlike the parent anthropogenic population that comprises the typical background, but shares several important characteristics expected from neutrino events.
\vspace{10pt}
\section{Acknowledgments}
We would like to thank the National Aeronautics and Space Administration and the National Science Foundation. We would especially like to thank the staff of the Columbia Scientific Balloon Facility and the logistical support staff enabling us to perform our work in Antarctica. We are deeply indebted to those who dedicate their
careers to help make our science possible in such remote environments.
This work was supported by the Kavli Institute for Cosmological Physics at the University of Chicago. Computing resources were provided by the University of Chicago Research Computing Center and the Ohio Supercomputing Center at The Ohio State University.
A. Connolly would like to thank the National Science Foundation
for their support through CAREER award 1255557.
O. Banerjee and L. Cremonesi's work was also
supported by collaborative visits funded by the Cosmology and Astroparticle
Student and Postdoc Exchange Network (CASPEN).
The University College London group was also supported by the Leverhulme Trust. The Taiwan team is supported by Taiwan's Ministry of Science and Technology (MOST) under its Vanguard Program 106-2119-M-002-011.
|
{
"timestamp": "2018-06-19T02:20:48",
"yymm": "1803",
"arxiv_id": "1803.02719",
"language": "en",
"url": "https://arxiv.org/abs/1803.02719"
}
|
\section{Introduction}
In our previous work \cite{rafetseder_zulehner_2017} a new mixed variational formulation for the Kirchhoff plate bending problem with the bending moment tensor as additional unknown is derived. The new mixed formulation satisfies Brezzi's conditions and is equivalent to the original problem without additional convexity assumption on the domain. Furthermore, we obtain for polygonal domains an equivalent formulation of the Kirchhoff plate bending problem in terms of three (consecutively to solve) second-order elliptic problems. In \cite{rafetseder_zulehner_2017a} we extended the approach to domains whose boundaries are curvilinear polygons.
The aim of this paper is to adapt the ideas of \cite{rafetseder_zulehner_2017} to the more complex situation of Kirchhoff-Love shells.
Especially for shells, performing the analysis directly on the geometry representation provided by the Computer Aided Design (CAD) model, thereby avoiding unnecessary and costly geometry approximations, is of essential importance. This is enabled by isogeometric analysis, proposed by Hughes and co-workers in \cite{hughes_2005, hughes_2009}, where B-Splines or Non-Uniform Rational B-Splines (NURBS) are used for the geometry description as well as for the representation of the unknown fields.
Another feature of isogeometric discretizations is that they easily allow for inter-element continuity beyond the classical $C^0$-continuity within so-called patches.
This permits the straightforward construction of conforming isogeometric Kirchhoff-Love type shell elements, which are harder to obtain by means of standard Lagrange basis functions, since $C^1$-continuity is necessary as second derivatives appear. The contribution \cite{kiendl_2009} was one of the first to exploit this.
For these approaches, application on a single patch is straightforward. However, practical computations involving complex geometries usually require geometry representations by means of several patches (multi-patches). The continuity between patches then becomes an issue, for which several techniques have been developed.
In \cite{kiendl_2010} the so-called bending strip method is introduced, in which patches (called bending strips) of fictitious material with bending stiffness only in the direction transverse to the interface and zero membrane stiffness are added at patch interfaces. The crucial point is the choice of a reliable penalty parameter, the bending strip stiffness. Starting from this technique, alternative formulations removing the penalty parameter dependence have been proposed in \cite{goyal_2017}.
Alternatively, dG (discontinuous Galerkin) techniques can be used patch-wise, see, e.g., \cite{guo_ruess_2015}, where a variationally consistent Nitsche formulation, which weakly enforces coupling and continuity constraints among patches, is derived. Another approach is analysis-suitable $C^1$ multi-patch isogeometric spaces, see, e.g., \cite{kapl_sangalli_takacs_2018}.
Note that all these techniques require special treatment of the patch interfaces. The essential contribution of this paper is a new mixed formulation of Kirchhoff-Love shells, with the bending moment tensor as new unknown, based solely on standard $H^1$ spaces. Therefore, classical $C^0$-coupling across patch interfaces is sufficient. In comparison with \cite{rafetseder_zulehner_2017}, the obtained mixed formulation differs in several aspects. In the case of plates, the membrane and bending parts decouple, which is not the case for shells. Therefore, the membrane part, which involves only first derivatives, has to be taken into account as well. Furthermore, in contrast to plates, the bending strain includes, besides the Hessian of the transverse displacement, additional terms involving geometry quantities and first derivatives of the displacement. In this paper it is shown how to overcome these difficulties and extend the techniques in \cite{rafetseder_zulehner_2017} to obtain again a formulation based solely on $H^1$ spaces.
A built-in feature of the new mixed formulation is the explicit computation of the bending moment tensor as a separate unknown, which, however, comes at the cost of additional unknowns. Besides the gained flexibility in the choice of discretization method, our mixed formulation also leads to new solution strategies: for iterative solvers, efficient methods for standard second-order problems, such as multigrid methods, can be used as building blocks of a preconditioner.
It is well known that isogeometric Kirchhoff-Love shell elements also exhibit significant membrane locking effects, see, e.g., \cite{echter_oesterle_bischoff_2013, echter_2013}. Therefore, we consider in this paper a combination of our new mixed formulation of the bending part with a popular method to avoid membrane locking via a mixed formulation of the membrane part, see \cite{echter_oesterle_bischoff_2013}. The combined method performs well in the numerical experiments.
The paper is organized as follows: In \sectref{sec:primal_formulation} the Kirchhoff-Love shell formulation is introduced. \sectref{sec:mixed_formulation} contains a new mixed formulation and an extension of this approach to circumvent membrane locking. The new mixed formulation leads in a natural way to the construction of a new discretization method, which is introduced in \sectref{sec:discretization}. The paper closes with numerical experiments in \sectref{sec:numerical_experiments}.
\section{Kirchhoff-Love shell formulation}
\label{sec:primal_formulation}
The description of the Kirchhoff-Love shell model is based on the presentations in \cite{ciarlet_2005, chapelle_bathe_2011, bischoff_wall_bletzinger_ramm_2004}.
\subsection{Differential geometry and shell kinematics}
Throughout this paper we use both index notation and absolute notation for the expression of vectors and tensors. Note that scalars are printed in italic and vectors and tensors in boldface italic. Quantities in the undeformed and deformed configuration are distinguished by capital and small letters, respectively.
Latin indices take values $\{1,2,3\}$ and Greek indices $\{1,2\}$. Superscripts indicate contravariant components and subscripts mark covariant components of vectors and tensors. Furthermore, Einstein's summation convention is applied to indices appearing twice within a product.
The 3D shell in the undeformed configuration is described by a mapping $\bm{X}$ from a parameter domain into the three-dimensional physical space, $\bm{X}: \Omega \times [-\frac{t}{2}, \frac{t}{2}] \rightarrow \mathbb{R}^3$, where $\Omega \subset \mathbb{R}^2$. The map $\bm{X}$ is defined in terms of curvilinear coordinates $\xi^i$ and is of the form
\begin{equation}
\bm{X}(\xi^1, \xi^2, \xi^3) = \bm{R}(\xi^1, \xi^2) + \xi^3 \ \bm{A}_3(\xi^1, \xi^2),
\end{equation}
where the mapping $\bm{R}: \Omega \rightarrow \mathbb{R}^3$ defines the midsurface of the shell, $\bm{A}_3$ is the unit director, which is normal to the midsurface, and $t$ denotes the constant thickness. The map $\bm{R}$ will be assumed to be sufficiently smooth, e.g., $\mathcal{C}^3(\overline\Omega)$, cf. \cite{ciarlet_2005}.
The covariant base vectors are given by
\begin{equation*}
\bm{A}_\alpha = \pp{\bm{R}}{\alpha}, \qquad \bm{A}_3 = \frac{\bm{A}_1 \times \bm{A}_2}{| \bm{A}_1 \times \bm{A}_2 |},
\end{equation*}
where here and in the following $\pp{}{\alpha} = \pp{}{\xi^\alpha}$. Contravariant base vectors are defined via the orthogonality condition
\begin{equation*}
\bm{A}_i \dotprod \bm{A}^j = \delta^j_i.
\end{equation*}
Note that $\bm{A}^3 = \bm{A}_3$. The covariant and contravariant components $A_{\alpha \beta} $ and $A^{\alpha \beta}$ of the first fundamental form (metric tensor) of the shell surface, the Christoffel symbols $\Gamma^\sigma_{\alpha \beta}$, and the covariant and mixed components $B_{\alpha \beta}$ and $B^\beta_\alpha$ of the second fundamental form of the surface are then defined as follows:
\begin{equation}
\begin{alignedat}{4}
A_{\alpha \beta} &= \bm{A}_\alpha \dotprod \bm{A}_\beta, \quad
A^{\alpha \beta} = \bm{A}^\alpha \dotprod \bm{A}^\beta, \quad
\Gamma^\sigma_{\alpha \beta} = \bm{A}^\sigma \dotprod \pp{\bm{A}_\alpha}{\beta} \\
B_{\alpha \beta} &= \bm{A}_3 \dotprod \pp{\bm{A}_\alpha}{\beta}, \quad
B^\beta_\alpha = A^{\beta \sigma} B_{\sigma \alpha}.
\end{alignedat}
\end{equation}
For later use let us define $\JacDetMap = \sqrt{\mathrm{det}(A_{\alpha \beta})}$.
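To make these definitions concrete, the following short Python/SymPy sketch (an illustration only, not part of the method; the cylindrical midsurface and its radius are hypothetical choices, and the Python indices $0,1$ correspond to $\alpha=1,2$) evaluates the base vectors, the two fundamental forms, the Christoffel symbols and $\JacDetMap$ symbolically from a given parametrization $\bm{R}$.
\begin{verbatim}
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)
Rad = sp.Rational(25)   # hypothetical radius, for illustration only
# Hypothetical cylindrical midsurface R(xi1, xi2)
R = sp.Matrix([Rad*sp.cos(xi2), xi1, Rad*sp.sin(xi2)])

A1, A2 = R.diff(xi1), R.diff(xi2)             # covariant base vectors
A3 = A1.cross(A2) / A1.cross(A2).norm()       # unit director (normal)

xi, Aal = [xi1, xi2], [A1, A2]
Acov = sp.Matrix(2, 2, lambda a, b: Aal[a].dot(Aal[b]))   # A_{ab}
Acon = Acov.inv()                                         # A^{ab}
J = sp.sqrt(Acov.det())                                   # sqrt(det A_{ab})

# Second fundamental form B_{ab} and mixed components B^b_a
Bcov = sp.Matrix(2, 2, lambda a, b: A3.dot(Aal[a].diff(xi[b])))
Bmix = Acon * Bcov

# Contravariant base vectors and Christoffel symbols Gamma^s_{ab}
Acon_vec = [Acon[s, 0]*A1 + Acon[s, 1]*A2 for s in range(2)]
Gamma = [[[sp.simplify(Acon_vec[s].dot(Aal[a].diff(xi[b])))
           for b in range(2)] for a in range(2)] for s in range(2)]

print(sp.simplify(Bcov), sp.simplify(J))
\end{verbatim}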
We define the quantities introduced above analogously for the shell in the deformed configuration and denote them by the corresponding small letters.
The unknown we search for is the displacement $\overline{\bm{u}} = \bm{r} - \bm{R}$ at each point of the midsurface, which can be expressed in terms of covariant components $u_i$ as follows
\begin{equation}
\overline{\bm{u}} = u_i \bm{A}^i.
\end{equation}
We choose to solve for the vector of covariant components $\bm{u} = (u_i)$, since this simplifies the derivation of our new mixed formulation in \sectref{sec:Mmixed_formulation}. In the following we refer to $(u_1, u_2)$ and $u_3$ as tangential and transverse part, respectively.
After linearization, we obtain expressions for the covariant components of the membrane strain tensor
\begin{equation}
\epsilon_{\alpha \beta}(\bm{u}) = \frac{1}{2}(u_{\alpha|\beta} + u_{\beta|\alpha} ) - B_{\alpha \beta} \ \displaceN
\end{equation}
and the bending strain tensor
\begin{equation}
\kappa_{\alpha \beta}(\bm{u}) = \displaceN[|\alpha \beta] - B^\sigma_\alpha B_{\sigma \beta} \displaceN + B^\sigma_\alpha u_{\sigma|\beta} + B^\tau_\beta u_{\tau|\alpha} + B^\tau_{\beta}|_\alpha u_\tau.
\end{equation}
Here, $u_{\alpha|\beta}$ denotes the covariant derivative
$u_{\alpha|\beta} = \pp{u_\alpha}{\beta} - \Gamma^\sigma_{\alpha \beta} u_\sigma$,
$\displaceN[|\alpha \beta]$ the second-order covariant derivative $\displaceN[|\alpha \beta] = \pp{\displaceN}{\alpha \beta} - \Gamma^\sigma_{\alpha\beta} \pp{\displaceN}{\sigma}$,
and $B^\tau_\beta|_\alpha$ the covariant derivative of the second fundamental form
\begin{equation*}
B^\tau_\beta|_\alpha = \pp{B^\tau_\beta}{\alpha} + \Gamma^\tau_{\alpha \sigma} B^\sigma_\beta - \Gamma^\sigma_{\alpha \beta} B^\tau_\sigma.
\end{equation*}
For a thorough derivation of these expressions see, e.g., \cite{ciarlet_2005}.
\subsection{Variational formulation}
The problem is posed in the parameter domain $\Omega$ with boundary $\Gamma$. In what follows let $(n_\alpha)$ and $\tau=(-n_2, n_1)$ represent the unit outer normal vector and the unit counterclockwise tangent vector to $\Gamma$, respectively. Furthermore, $\pp{}{n}$ denotes the normal derivative and $\pp{}{\tau}$ the tangential derivative along $\Gamma$.
The shell is considered to be clamped on a part $\bm{R}(\Gamma_c)$, simply supported on $\bm{R}(\Gamma_s)$, and free on $\bm{R}(\Gamma_f)$, with $\Gamma = \Gamma_c \cup \Gamma_s \cup \Gamma_f$.
We define the displacement function space ${\bm{W}}$ by
\begin{equation}
\begin{aligned}
{\bm{W}} = \{ \bm{v} = (v_i) \in H^1(\Omega) \times H^1(\Omega) \times H^2(\Omega): \ &v_i = 0, \ \pp{\displaceNt}{n} = 0 \ \text{on} \ \Gamma_c,\\
& v_1 = v_3 = 0 \ \text{on} \ \Gamma_s \}.
\end{aligned}
\end{equation}
The (primal) variational formulation is given as follows (cf. \cite{ciarlet_2005, chapelle_bathe_2011}): find $\bm{u} \in {\bm{W}}$ such that
\begin{equation}\label{eq:primal_formulation}
\int_{\Omega} \left( t \ \bm{\membraneStrainComp}(\bm{u}) : \bm{C} : \bm{\membraneStrainComp}(\bm{v}) + \frac{t^3}{12} \ \bm{\bendingStrainComp}(\bm{u}) : \bm{C} : \bm{\bendingStrainComp}(\bm{v}) \right) \ \JacDetMap \ d\xi = \langle F, \bm{v} \rangle \quad \text{for all} \ \bm{v} \in {\bm{W}},
\end{equation}
where $\bm{A}:\bm{B}$ denotes the double contraction of two tensors $\bm{A}$ and $\bm{B}$ and $d\xi = d\xi^1 d \xi^2$.
Here, the right-hand side is given by $\langle F, \bm{v} \rangle = \int_\Omega \bm{f} \dotprod \bm{v} \ \JacDetMap \ d\xi$ and the contravariant components of the fourth-order material tensor read
\begin{equation*}
C^{\alpha \beta \sigma \tau} = \frac{E}{2(1+\nu)} \left(A^{\alpha \sigma} A^{\beta \tau} + A^{\alpha \tau} A^{\beta \sigma} + \frac{2 \nu}{1-\nu} A^{\alpha \beta} A^{\sigma \tau}\right),
\end{equation*}
where $E$ and $\nu$ are Young's modulus and Poisson's ratio of the material, respectively.
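As a small illustration (a sketch only; the flat metric and the material data below are hypothetical values, and the Python code is not part of the later implementation), the contravariant components $C^{\alpha \beta \sigma \tau}$ can be assembled directly from the contravariant metric:
\begin{verbatim}
import numpy as np

def material_tensor(A_con, E, nu):
    """Contravariant components C^{alpha beta sigma tau} of the shell material
    tensor from the contravariant metric A_con (2x2), Young's modulus E and
    Poisson's ratio nu, following the formula above."""
    C = np.empty((2, 2, 2, 2))
    for a in range(2):
        for b in range(2):
            for s in range(2):
                for t in range(2):
                    C[a, b, s, t] = E / (2*(1 + nu)) * (
                        A_con[a, s]*A_con[b, t] + A_con[a, t]*A_con[b, s]
                        + 2*nu/(1 - nu) * A_con[a, b]*A_con[s, t])
    return C

# Hypothetical usage with the identity metric (flat geometry) and steel-like data;
# for this case C^{1111} reduces to the plane-stress value E/(1 - nu^2).
print(material_tensor(np.eye(2), E=2.1e11, nu=0.3)[0, 0, 0, 0])
\end{verbatim}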
Following, e.g., \cite{adams_fournier_2003}, here and throughout the paper $L^2(\Omega)$ and $H^m(\Omega)$ denote the standard Lebesgue and Sobolev spaces of functions on $\Omega$ with corresponding norms $\|.\|_{0}$ and $\|.\|_{m}$ for positive integers $m$. Moreover, $\bm{L}^2(\Omega)_{\mathrm{sym}}$ denotes the space of symmetric second-order tensors given by
\begin{equation*}
\bm{L}^2(\Omega)_{\mathrm{sym}} = \{ \bm{K} : K^{\alpha\beta} = K^{\beta\alpha} \in L^2(\Omega) \}.
\end{equation*}
For scalars $v$, vectors $\bm{\psi}$, and second-order tensors $\bm{K}$ the first order differential operators with respect to the tangential coordinates $\xi^\alpha$ are defined as follows:
\begin{align*}
\nabla v &=
\begin{pmatrix}
\pp{v}{1} \\
\pp{v}{2}
\end{pmatrix},\quad
&\operatorname{curl} v &=
\begin{pmatrix}
\pp{v}{2} \\
-\pp{v}{1}
\end{pmatrix},\\
\nabla \bm{\psi} &=
\begin{pmatrix}
\pp{\psi_{1}}{1} & \pp{\psi_{1}}{2} \\
\pp{\psi_{2}}{1} & \pp{\psi_{2}}{2} \\
\end{pmatrix},\quad
&\operatorname{Curl} \bm{\psi}&=
\begin{pmatrix}
\pp{\psi_{1}}{2} & -\pp{\psi_{1}}{1} \\
\pp{\psi_{2}}{2} & -\pp{\psi_{2}}{1} \\
\end{pmatrix},\\
\div \bm{\psi} &= \pp{\psi_1}{1} + \pp{\psi_2}{2}, \quad
&\operatorname{Div} \bm{K} &=
\begin{pmatrix}
\pp{K_{11}}{1} + \pp{K_{12}}{2} \\
\pp{K_{21}}{1} + \pp{K_{22}}{2} \\
\end{pmatrix}
\end{align*}
Moreover, the symmetric $\operatorname{Curl}$ is introduced by
\begin{equation*}
\operatorname{symCurl} \bm{\psi} = \frac{1}{2}( \operatorname{Curl} \bm{\psi} + (\operatorname{Curl} \bm{\psi})^T).
\end{equation*}
\section{Two mixed variational formulations}
\label{sec:mixed_formulation}
In this section we derive two mixed formulations by introducing stress resultants as new unknowns.
\subsection{\texorpdfstring{$\bm{M}$}{M}-mixed formulation}
\label{sec:Mmixed_formulation}
We introduce as new unknown the bending moment tensor $\bm{M}$, which is related to the bending strain through the constitutive equation
\begin{equation*}
\bm{M} = \JacDetMap \frac{t^3}{12} \ \bm{C} : \bm{\bendingStrainComp}(\bm{u}) = \hat{\bm{C}}_{\bendingMoment} : \bm{\bendingStrainComp}(\bm{u}), \ \text{with} \ \hat{\bm{C}}_{\bendingMoment} = \JacDetMap \frac{t^3}{12} \ \bm{C}.
\end{equation*}
Note that, in contrast to standard notation, we additionally include the geometry measure $\JacDetMap$ in $\bm{M}$. This leads to the preliminary $\bm{M}$-mixed formulation: find $\bm{M} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{u} \in {\bm{W}}$ such that
\begin{equation}\label{eq:first_Mmixed_formulation}
\begin{alignedat}{4}
& \int_\Omega (\hat{\bm{C}}_{\bendingMoment}^{-1} : \bm{M}) : \bm{L} \ d\xi & & - \int_\Omega \bm{\bendingStrainComp}(\bm{u}) : \bm{L} \ d\xi & & = 0\\
& -\int_{\Omega} \bm{M} : \bm{\bendingStrainComp}(\bm{v}) \ d\xi & & - c(\bm{u}, \bm{v}) & & = -\langle F, \bm{v} \rangle
\end{alignedat}
\end{equation}
for all $\bm{L} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{v} \in {\bm{W}}$, with $c(\bm{u}, \bm{v}) $ given by the membrane part
\begin{equation*}
c(\bm{u}, \bm{v}) = \int_{\Omega} t \ \bm{\membraneStrainComp}(\bm{u}) : \bm{C} : \bm{\membraneStrainComp}(\bm{v}) \ \JacDetMap \ d\xi.
\end{equation*}
In order to make the equations more compact, in the following we occasionally use the short notation $(.,.)$ for the $L^2$-inner product on $\Omega$ instead of an integral and put the material tensor as a subscript, i.e., $(\hat{\bm{C}}_{\bendingMoment}^{-1} : \bm{M}, \bm{L}) = (\bm{M}, \bm{L})_{\hat{\bm{C}}_{\bendingMoment}^{-1}}$.
The goal of the remainder of this section is to derive a reformulation of \eqref{eq:first_Mmixed_formulation} allowing us to replace the displacement space ${\bm{W}}$ by a space that uses $H^1(\Omega)$ for all three components of the displacement. In our previous work \cite{rafetseder_zulehner_2017} a new mixed formulation for Kirchhoff plates using $H^1(\Omega)$ for the vertical deflection is introduced. An extension of this approach to shells is possible, since the only term involving second-order derivatives is the Hessian of the transverse displacement $\displaceN$.
We divide the bending strain $\bm{\bendingStrainComp}(\bm{v})$ into the Hessian of the transverse displacement $\displaceNt$ and the remaining terms that only involve first-order derivatives of $\bm{v}$ denoted by $\bm{\kappa}^{1}(\bm{v})$, i.e.,
\begin{equation*}
\bm{\bendingStrainComp}(\bm{v}) = \grad^2 \displaceNt + \bm{\kappa}^{1}(\bm{v}).
\end{equation*}
With this notation we can rewrite the second line in \eqref{eq:first_Mmixed_formulation}, separating the integral involving $\grad^2 \displaceNt$ and putting all remaining terms into the right-hand side
$\langle G(\bm{M},c,F), \bm{v} \rangle = (\bm{M}, \bm{\kappa}^{1}(\bm{v})) + c(\bm{u}, \bm{v}) -\langle F, \bm{v} \rangle$ leading to
\begin{equation*}
-\int_\Omega \bm{M} : \grad^2 \displaceNt \ d\xi = \langle G(\bm{M},c,F) , \bm{v} \rangle,
\end{equation*}
or in strong form
\begin{equation}\label{eq:secLine_first_Mmixed_formulation}
-\div\operatorname{Div} \bm{M} = G(\bm{M},c,F).
\end{equation}
The main idea is the following ansatz for $\bm{M}$:
\begin{equation*}
\bm{M} = p\bm{I} + \bm{M}_0,
\end{equation*}
where $\div\operatorname{Div} \bm{M}_0 = 0$ and $\bm{I}$ is the identity matrix. Plugging this representation into \eqref{eq:secLine_first_Mmixed_formulation} provides
\begin{equation}\label{eq:thirdLine_new_Mmixed_formulation}
-\div\operatorname{Div} (p\bm{I}) = G(\bm{M},c,F), \ \text{or equivalently} \ - \Delta p = G(\bm{M},c,F).
\end{equation}
Therefore, it is sufficient to consider $p \in H^1(\Omega)$ for the corresponding weak form.
The second essential ingredient is a characterization of the elements in the kernel of $\div\operatorname{Div}$. According to \cite{beirao_niiranen_stenberg_2007, huang_huang_xu_2011, krendl_rafetseder_zulehner_2016} there is a potential function $\bm{\phi} \in (H^1(\Omega))^2$ such that
\begin{equation*}
\bm{M}_0 = \operatorname{symCurl} \bm{\phi}.
\end{equation*}
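For the reader's convenience, both ingredients can be checked directly from the operator definitions of the previous section. First,
\begin{equation*}
\operatorname{Div}(p\bm{I}) = \nabla p, \quad \text{and hence} \quad \div\operatorname{Div}(p\bm{I}) = \div \nabla p = \Delta p.
\end{equation*}
Second, by the symmetry of second derivatives, $\operatorname{Div}(\operatorname{Curl} \bm{\psi}) = 0$ and $\operatorname{Div}((\operatorname{Curl} \bm{\psi})^T) = \operatorname{curl}(\div \bm{\psi})$, so that
\begin{equation*}
\div\operatorname{Div}(\operatorname{symCurl} \bm{\psi}) = \tfrac{1}{2}\, \div \operatorname{curl} (\div \bm{\psi}) = 0,
\end{equation*}
i.e., $\operatorname{symCurl} \bm{\psi}$ indeed lies in the kernel of $\div\operatorname{Div}$; the cited references provide the converse inclusion.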
\begin{remark}
The first step can be viewed as homogenization. In the literature, another form of homogenization has been considered. The classical Helmholtz decomposition $\bm{M} = \nabla^2 p + \operatorname{symCurl}\bm{\phi}$, see, e.g., \cite{beirao_niiranen_stenberg_2007, huang_huang_xu_2011}, has the same second component. However, the first component is different and requires the solution of a fourth-order problem, which brings no benefit. In contrast, the decomposition introduced here only requires the solution of a second-order Poisson problem for the first component.
Note the analogy to the well-known characterization of the stress tensor $\bm{\sigma}$ with $\operatorname{Div} \bm{\sigma} = 0$ by means of the Airy stress function $\varphi$ in $2$D
\begin{equation*}
\bm{\sigma} = \operatorname{Curl}\operatorname{curl} \varphi,
\end{equation*}
and a similar result in $3$D with the Beltrami stress functions.
\end{remark}
Summing up, we have the following representation of $\bm{M}$:
\begin{equation*}
\bm{M} = p \bm{I} + \operatorname{symCurl} \bm{\phi}.
\end{equation*}
With this representation the preliminary $(\bm{M}, \bm{u})$ problem in \eqref{eq:first_Mmixed_formulation} becomes a formulation in $(p, \bm{\phi}, \bm{u})$, later referred to as $\bm{M}$-mixed formulation, given as follows: find $(p, \bm{\phi})\in \bm{V}$ and $\bm{u} \in {\bm{Q}}$ such that
\begin{small}
\begin{equation}\label{eq:new_Mmixed_formulation}
\begin{alignedat}{5}
&(p \bm{I}, q \bm{I})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && +(\operatorname{symCurl} \bm{\phi}, q \bm{I})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && +(\nabla \displaceN, \nabla q) - (\bm{\kappa}^{1}(\bm{u}), q \bm{I}) &&= 0\\
&(p \bm{I},\operatorname{symCurl} \bm{\psi})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && +(\operatorname{symCurl} \bm{\phi}, \operatorname{symCurl} \bm{\psi})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && - (\bm{\kappa}^{1}(\bm{u}), \operatorname{symCurl} \bm{\psi}) &&= 0\\
&(\nabla p, \nabla \displaceNt) - ( p \bm{I}, \bm{\kappa}^{1}(\bm{v})) && - (\operatorname{symCurl} \bm{\phi}, \bm{\kappa}^{1}(\bm{v})) && -c(\bm{u}, \bm{v}) && = -\langle F, \bm{v} \rangle.
\end{alignedat}
\end{equation}
\end{small}
for all $(q, \bm{\psi})\in \bm{V}$ and $\bm{v} \in {\bm{Q}}$.
Here, the third line is given by the weak form of \eqref{eq:thirdLine_new_Mmixed_formulation}, where we plug in the right-hand side $G(\bm{M},c,F)$ the representation of $\bm{M}$. The first two lines follow from the first line in \eqref{eq:first_Mmixed_formulation} using an analogous representation of the test functions $\bm{L} = q \bm{I} + \operatorname{symCurl} \bm{\psi}$ and splitting of the bending strain $\bm{\bendingStrainComp}(\bm{u}) = \grad^2 \displaceN + \bm{\kappa}^{1}(\bm{u})$.
The original displacement space ${\bm{W}}$ is replaced by the space ${\bm{Q}}$ defined by
\begin{equation}
\begin{aligned}
{\bm{Q}} &= \{ \bm{v} = (v_i) \in H^1(\Omega) \times H^1(\Omega) \times H^1(\Omega): \ v_i = 0 \ \text{on} \ \Gamma_c, v_1 = v_3 = 0 \ \text{on} \ \Gamma_s \}.
\end{aligned}
\end{equation}
The definition of the appropriate boundary conditions for $p$ and $\bm{\phi}$ is a subtle issue. It turns out that the space $\bm{V}$ is given by the subset of $(q, \bm{\psi}) \in Q_3 \times (H^1(\Omega))^2$, with
\begin{equation*}
Q_3 = \{ \displaceNt \in H^1(\Omega): v_3 = 0 \ \text{on} \ \Gamma_c \cup \Gamma_s \},
\end{equation*}
satisfying the boundary condition
\begin{equation}\label{eq:coupling_condition}
\langle \pp{\bm{\psi}} {\tau} ,\nabla \displaceNt \rangle_\Gamma + \int_\Gamma q \ \pp{\displaceNt}{n} \ ds = 0 \quad \text{for all} \ \displaceNt\in W_3,
\end{equation}
where $\langle ., . \rangle_\Gamma$ denotes the duality product on $\Gamma$ and
\begin{equation*}
W_3 = \{ \displaceNt \in H^2(\Omega): \ v_3 = 0, \ \pp{v_3}{n} = 0 \ \text{on} \ \Gamma_c, \quad v_3 = 0 \ \text{on} \ \Gamma_s \}.
\end{equation*}
By \eqref{eq:coupling_condition} the functions $q$ and $\bm{\psi}$ are coupled. Therefore, we refer to \eqref{eq:coupling_condition} in the following as coupling condition.
In case $\bm{\psi}$ is sufficiently smooth, e.g., $\pp{\bm{\psi}}{\tau} \in L^2(\Gamma)$, the coupling condition can be rewritten as
\begin{equation}\label{eq:coupling_condition_smooth}
\int_{\Gamma_s \cup \Gamma_f} (\pp{\bm{\psi}}{\tau}\dotprod n + q) \pp{\displaceNt}{n} \ ds + \int_{\Gamma_f} \pp{\bm{\psi}}{\tau}\dotprod \tau \ \pp{\displaceNt}{\tau} \ ds = 0 \quad \text{for all} \ \displaceNt \in W_3,
\end{equation}
where we use the representation $\nabla \displaceNt = (\pp{\displaceNt}{n}) \, n + (\pp{\displaceNt}{\tau}) \, \tau$ and incorporate the boundary conditions for $\displaceNt \in W_3$. This condition reads in explicit form
\begin{equation}\label{eq:boundaryCond_p_phi}
\begin{aligned}
&\pp{\bm{\psi}} {\tau} \dotprod n = -q \quad && \text{on} \ \Gamma_s\\
&\pp{^2\bm{\psi}} {\tau} \dotprod \tau = 0, \quad \pp{\bm{\psi}} {\tau} \dotprod n = -q \quad && \text{on} \ \Gamma_f.
\end{aligned}
\end{equation}
\begin{remark}
Note that in ${\bm{Q}}$, compared with ${\bm{W}}$, only those boundary conditions for the transverse displacement $\displaceN$ that are also available in $H^1(\Omega)$ are prescribed. In the formulation \eqref{eq:new_Mmixed_formulation} the originally essential boundary condition $\pp{\displaceN}{n} = 0$ becomes a natural condition. In the primal formulation \eqref{eq:primal_formulation} only natural boundary conditions are imposed for the bending moment tensor $\bm{M}$. Those natural boundary conditions corresponding to the normal-normal component of $\bm{M}$ and the corner conditions for the normal-tangential component of $\bm{M}$ become the essential conditions \eqref{eq:boundaryCond_p_phi} for $(p,\bm{\phi})$, while the remaining condition stays natural. For further information we refer the reader to \cite{rafetseder_zulehner_2017}.
\end{remark}
\begin{remark}
The mixed formulation \eqref{eq:new_Mmixed_formulation} is different from the formulation obtained in \cite{rafetseder_zulehner_2017}. In the case of plates the membrane and bending parts decouple, and for the bending part a decomposition into three second-order problems that can be solved consecutively is obtained. This is no longer possible for shells.
\end{remark}
Problem \eqref{eq:new_Mmixed_formulation} has the typical structure of a saddle point problem: find $\bm{x} = (p, \bm{\phi}) \in \bm{V}$ and $\bm{u} \in {\bm{Q}}$ such that
\begin{equation*}
\begin{alignedat}{4}
& a(\bm{x}, \bm{y}) & & + b(\bm{y},\bm{u}) & & = 0 & \quad & \text{for all} \ \bm{y} = (q, \bm{\psi}) \in \bm{V} , \\
& b(\bm{x}, \bm{v}) & & - c(\bm{u}, \bm{v}) & & = - \langle F , \bm{v} \rangle & \quad & \text{for all} \ \bm{v} \in {\bm{Q}}.
\end{alignedat}
\end{equation*}
As in \cite{rafetseder_zulehner_2017}, for the new mixed formulation \eqref{eq:new_Mmixed_formulation} the following result providing well-posedness and equivalence to the primal formulation holds:
\begin{theorem}
Let $\Omega$ be simply connected. The mixed formulation \eqref{eq:new_Mmixed_formulation} is well-posed, i.e., existence and uniqueness of a solution $((p,\bm{\phi}),\bm{u}) \in \bm{V} \times {\bm{Q}}$ is guaranteed. Moreover, $\bm{u}$ is the solution of the primal problem \eqref{eq:primal_formulation}, with the bending moment tensor $\bm{M} = \hat{\bm{C}}_{\bendingMoment} : \bm{\bendingStrainComp}(\bm{u})$ given by $\bm{M} = p\bm{I} + \operatorname{symCurl} \bm{\phi}$.
\end{theorem}
The proof follows along the lines of \cite[Theorem 3.6]{rafetseder_zulehner_2017} and is omitted here. For more details on the mathematical foundation of the new mixed formulation we refer to the thorough derivation of an analogous mixed formulation for Kirchhoff plates in \cite{rafetseder_zulehner_2017}.
\subsection{\texorpdfstring{$\bm{M}$-$\bm{N}$}{MN}-mixed formulation}
\label{sec:MNmixed_formulation}
In order to alleviate membrane locking one popular concept (among many others) is to consider a mixed formulation with the membrane force tensor $\bm{N}$ as new unknown, see, e.g., \cite{echter_oesterle_bischoff_2013,chapelle_stenberg_1999}. Adopting this approach allows for a well-matching extension of the formulation just introduced.
We additionally introduce as new unknown the membrane force tensor $\bm{N}$, which is connected to the membrane strain through the constitutive equation
\begin{equation*}
\bm{N} = \JacDetMap \ t \ \bm{C} : \bm{\membraneStrainComp}(\bm{u}) = \hat{\bm{C}}_{\membraneForce} : \bm{\membraneStrainComp}(\bm{u}), \ \text{with} \ \hat{\bm{C}}_{\membraneForce} = \JacDetMap \ t \ \bm{C}.
\end{equation*}
Note that, again, we include the geometry measure $\JacDetMap$ in $\bm{N}$. This leads to the preliminary $\bm{M}$-$\bm{N}$-mixed formulation: find $\bm{M} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$, $\bm{N} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{u} \in {\bm{W}}$ such that
\begin{equation}\label{eq:first_NMmixed_formulation}
\begin{alignedat}{5}
& \int_\Omega (\hat{\bm{C}}_{\bendingMoment}^{-1} : \bm{M}) : \bm{L} \ d\xi & & & & - \int_\Omega \bm{\bendingStrainComp}(\bm{u}) : \bm{L} \ d\xi & & = 0\\
& && \int_\Omega (\hat{\bm{C}}_{\membraneForce}^{-1} : \bm{N}) : \bm{K} \ d\xi & & - \int_\Omega \bm{\membraneStrainComp}(\bm{u}) : \bm{K} \ d\xi & & = 0\\
& -\int_{\Omega} \bm{M} : \bm{\bendingStrainComp}(\bm{v}) \ d\xi & & -\int_{\Omega} \bm{N} : \bm{\membraneStrainComp}(\bm{v}) \ d\xi & & & & = -\langle F, \bm{v} \rangle
\end{alignedat}
\end{equation}
for all $\bm{L} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$, $\bm{K} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{v} \in {\bm{W}}$. Analogously to the above, we can reformulate the preliminary ($\bm{M}$, $\bm{N}$, $\bm{u}$) problem \eqref{eq:first_NMmixed_formulation} in terms of ($p$, $\bm{\phi}$, $\bm{N}$, $\bm{u}$), later referred to as the $\bm{M}$-$\bm{N}$-mixed formulation: find $(p, \bm{\phi})\in \bm{V}$, $\bm{N}\in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{u} \in {\bm{Q}}$ such that
\begin{small}
\begin{alignat*}{5}
&(p \bm{I}, q \bm{I})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && +(\operatorname{symCurl} \bm{\phi}, q \bm{I})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && && +(\nabla \displaceN, \nabla q) - (\bm{\kappa}^{1}(\bm{u}), q \bm{I}) &&= 0\\
&(p \bm{I},\operatorname{symCurl} \bm{\psi})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && +(\operatorname{symCurl} \bm{\phi}, \operatorname{symCurl} \bm{\psi})_{\hat{\bm{C}}_{\bendingMoment}^{-1}} && && - (\bm{\kappa}^{1}(\bm{u}), \operatorname{symCurl} \bm{\psi}) &&= 0\\
& && && \ (\bm{N}, \bm{K})_{\hat{\bm{C}}_{\membraneForce}^{-1}} && -(\bm{\membraneStrainComp}(\bm{u}), \bm{K}) && = 0\\
&(\nabla p, \nabla \displaceNt) - ( p \bm{I}, \bm{\kappa}^{1}(\bm{v})) && - (\operatorname{symCurl} \bm{\phi}, \bm{\kappa}^{1}(\bm{v})) && -(\bm{N}, \bm{\membraneStrainComp}(\bm{v})) && && = -\langle F, \bm{v} \rangle.
\end{alignat*}
\end{small}
for all $(q, \bm{\psi})\in \bm{V}$, $\bm{K} \in \bm{L}^2(\Omega)_{\mathrm{sym}}$ and $\bm{v} \in {\bm{Q}}$.
\section{The discretization method}
\label{sec:discretization}
In this section we first construct a conforming discretization space for the displacement $\bm{u}$, i.e.,
\begin{equation*}
{\bm{Q}}_h \subset {\bm{Q}} \subset (H^1(\Omega))^3.\\
\end{equation*}
Note that only $C^0$ basis functions are required, so the continuity requirements are easily satisfied with standard basis functions. In the following we consider isogeometric B-spline discretization spaces of degree $p\geq1$. For $p=1$, the discretization space coincides with the standard isoparametric finite element space of continuous and piecewise bilinear elements. We define the discretization spaces on patch level; for our formulation, continuity between patches does not require extra attention, since standard $C^0$-coupling is sufficient.
We denote by $\mathcal{S}^{p_1,p_2}_{\alpha_1,\alpha_2}$ the tensor product B-spline space defined as
\begin{equation}\label{eq:BsplineSpace}
\mathcal{S}^{p_1,p_2}_{\alpha_1,\alpha_2} = \mathcal{S}^{p_1}_{\alpha_1} \otimes \mathcal{S}^{p_2}_{\alpha_2},
\end{equation}
where $\mathcal{S}^{p}_{\alpha}$ is the one-dimensional B-spline space of degree $p$ with $\alpha$ continuous derivatives across interior knots; see, e.g., \cite{cotrell_hughes_brazilevs_2009,daVeiga_buffa_sangalli_vazquez_2014} for further information.
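As a small illustration of the one-dimensional building block $\mathcal{S}^{p}_{\alpha}$ with maximal smoothness $\alpha = p-1$, the following sketch sets up an open uniform knot vector and the corresponding basis (the degree, the number of knot spans and the use of Python/SciPy are hypothetical choices for illustration and are not tied to the implementation used later):
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

p = 3          # hypothetical polynomial degree
n_elem = 8     # hypothetical number of knot spans on [0, 1]
# Open knot vector: boundary knots repeated p+1 times, interior knots single,
# which yields p-1 continuous derivatives across interior knots (alpha = p-1).
knots = np.concatenate([np.zeros(p), np.linspace(0.0, 1.0, n_elem + 1), np.ones(p)])
n_basis = len(knots) - p - 1          # dim S^p_{p-1} = n_elem + p

x = np.linspace(0.0, 1.0, 200, endpoint=False)
basis = np.zeros((n_basis, x.size))
for i in range(n_basis):
    coeffs = np.zeros(n_basis)
    coeffs[i] = 1.0
    basis[i] = BSpline(knots, coeffs, p)(x)

# The B-spline basis functions form a partition of unity on the parameter domain.
print(n_basis, np.allclose(basis.sum(axis=0), 1.0))
\end{verbatim}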
We use equal order discretization spaces for the three components of the displacement $\bm{u}$ and incorporate the essential boundary conditions, which brings us to the definition:
\begin{equation*}
{\bm{Q}}_h = (\mathcal{S}^{p,p}_{\alpha,\alpha})^3 \cap {\bm{Q}},
\end{equation*}
where $\alpha = p-1$, i.e., maximum smoothness at interior knots.
For $\bm{V}$, the space of the auxiliary variables $(p, \bm{\phi})$, the construction of a conforming discretization space is more involved, since the coupling condition \eqref{eq:coupling_condition} has to be taken into account. Therefore, we first disregard the coupling condition and construct a conforming discretization space of $\hat\bm{V} = Q_3 \times (H^1(\Omega))^2$. Using equal order discretization spaces for $p$ and $\bm{\phi}$ we receive
\begin{equation*}
\hat\bm{V}_h = (\mathcal{S}^{p,p}_{\alpha,\alpha} \times (\mathcal{S}^{p,p}_{\alpha,\alpha})^2) \cap \hat\bm{V}.
\end{equation*}
The space $\bm{V}_h$ is defined as the subset of $\bm{y}_h = (q_h, \bm{\psi}_h) \in \hat\bm{V}_h$ satisfying a discrete version of the coupling condition \eqref{eq:coupling_condition_smooth}
\begin{equation*}
\bm{V}_h = \{ \bm{y}_h = (q_h, \bm{\psi}_h)\in\hat\bm{V}_h : d(\bm{y}_h, \bm{\mu}_h) = 0 \quad \text{for all} \ \bm{\mu}_h=(\mu_{\tau,h},\mu_{n,h}) \in \bm{\Lambda}_h \},
\end{equation*}
with
\begin{equation}\label{eq:coupling_condition_discrete}
d(\bm{y}_h, \bm{\mu}_h) = \int_{\Gamma_s \cup \Gamma_f} (\pp{\bm{\psi}_h}{\tau}\dotprod n + q_h) \ \mu_{n,h} \ ds + \int_{\Gamma_f} \pp{\bm{\psi}_h}{\tau}\dotprod \tau \ \mu_{\tau,h} \ ds.
\end{equation}
The test functions $(\mu_{\tau,h},\mu_{n,h})$ are discrete representations of $(\pp{\displaceNt}{\tau}, \pp{\displaceNt}{n})$ for $\displaceNt \in W_3$ on $\Gamma$. Therefore, $\bm{\Lambda}_h$ is chosen as the space of restrictions of functions from $(\mathcal{S}^{p-1,p-1}_{\alpha,\alpha})^2$ to $\Gamma$, where at corner points of the boundary $\mu_{\tau,h}$ and $\mu_{n,h}$ have to be coupled appropriately, see \cite{rafetseder_zulehner_2017a} for details. Then the discrete version of \eqref{eq:new_Mmixed_formulation} reads:
find $\bm{x}_h = (p_h, \bm{\phi}_h) \in \bm{V}_h$ and $\bm{u}_h \in {\bm{Q}}_h$ such that
\begin{equation*}
\begin{alignedat}{4}
& a(\bm{x}_h, \bm{y}_h) & & + b(\bm{y}_h,\bm{u}_h) & & = 0 & \quad & \text{for all} \ \bm{y}_h = (q_h, \bm{\psi}_h) \in \bm{V}_h, \\
& b(\bm{x}_h, \bm{v}_h) & & - c(\bm{u}_h, \bm{v}_h) & & = - \langle F , \bm{v}_h \rangle & \quad & \text{for all} \ \bm{v}_h \in {\bm{Q}}_h.
\end{alignedat}
\end{equation*}
We do not explicitly build in the coupling condition by constructing a basis of the space $\bm{V}_h$, but incorporate it implicitly, by replacing ${\bm{V}}_h$ by $\hat{\bm{V}}_h$ and adding \eqref{eq:coupling_condition_discrete} as additional constraint. Since $\hat{\bm{V}}_h$, ${\bm{Q}}_h$ and $\bm{\Lambda}_h$ are finite dimensional, the bilinear forms $a$, $b$, $c$ and $d$ can be represented as matrices $\bm{A}_h$, $\bm{B}_h$, $\bm{C}_h$ and $\bm{D}_h$ acting on vectors of real numbers $\underline{\bm{x}}_h$, $\underline{\bm{u}}_h$ and $\underline{\bm{\lambda}}_h$ representing the elements in $\hat{\bm{V}}_h$, ${\bm{Q}}_h$ and $\bm{\Lambda}_h$, respectively, with respect to the chosen basis. In this matrix-vector notation the resulting system reads
\begin{equation*}
\begin{alignedat}{4}
&\bm{A}_h \underline{\bm{x}}_h &&+ \bm{B}_h^T \underline{\bm{u}}_h &&+ \bm{D}_h^T \underline{\bm{\lambda}}_h &&= 0,\\
&\bm{B}_h \underline{\bm{x}}_h &&- \bm{C}_h \underline{\bm{u}}_h && &&= \bm{f}_h,\\
&\bm{D}_h \underline{\bm{x}}_h && && &&=0,
\end{alignedat}
\end{equation*}
where $\bm{B}_h^T$ and $\bm{D}_h^T$ denote the transposed matrices.
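A minimal sketch of how this block system can be assembled and solved with a sparse direct solver is given below (Python/SciPy is used here only for illustration; the matrices are assumed to be given as SciPy sparse matrices and the helper function is hypothetical, not the implementation used in the experiments):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_saddle_point(A, B, C, D, f):
    """Assemble and solve the block system above. A, B, C, D are the (sparse)
    matrices of the bilinear forms a, b, c, d; f is the load vector of the
    second equation."""
    nx, nu, nl = A.shape[0], C.shape[0], D.shape[0]
    K = sp.bmat([[A, B.T,  D.T],
                 [B, -C,   None],
                 [D, None, None]], format='csc')
    rhs = np.concatenate([np.zeros(nx), f, np.zeros(nl)])
    sol = spsolve(K, rhs)
    return sol[:nx], sol[nx:nx+nu], sol[nx+nu:]   # x_h, u_h, lambda_h
\end{verbatim}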
For the second, the $\bm{M}$-$\bm{N}$-mixed formulation, we use for $\bm{u}$ and $(p, \bm{\phi})$ the discretizations introduced above.
For the additional unknown, the membrane force $\bm{N}$, we use the discretization space proposed in the Hybrid Stress (HS) method presented in \cite{echter_oesterle_bischoff_2013}. The basis for the contravariant components of $\bm{N}$ is given by
\begin{equation*}
{N}^{11}_h \in \mathcal{S}^{p-1,p}_{\alpha-1,\alpha}, \quad
{N}^{22}_h \in \mathcal{S}^{p,p-1}_{\alpha,\alpha-1}, \quad
{N}^{12}_h \in \mathcal{S}^{p-1,p-1}_{\alpha-1,\alpha-1}.
\end{equation*}
\section{Numerical experiments}
\label{sec:numerical_experiments}
In the first part of this section we demonstrate that the $\bm{M}$-mixed formulation (with the $H^1(\Omega)$ conforming discretizations proposed in the previous section) works by testing it with the three benchmark problems of the well-known shell obstacle course \cite{belytschko_1985}, consisting of two cylindrical shells and one spherical shell.
In the second part we show that the $\bm{M}$-$\bm{N}$-mixed formulation introduced in \sectref{sec:MNmixed_formulation} works well.
For all problems in this paper the undeformed midsurface is modeled exactly by non-uniform rational B-splines (NURBS). For the discretization of the unknowns standard B-spline spaces are used, see \sectref{sec:discretization}. Mesh density is characterized by the number of control points per edge. In case several congruent patches are used to describe the surface the corresponding number for one patch is used.
The implementation is done in the framework of G+Smo (``Geometry + Simulation Modules''), an object-oriented C++ library, see \url{https://ricamsvn.ricam.oeaw.ac.at/trac/gismo/wiki/WikiStart}. In all experiments a sparse direct solver is used.
\subsection{\texorpdfstring{$\bm{M}$}{M}-mixed formulation}
\subsubsection{Scordelis-Lo roof}
\label{sec:roof}
The problem setup of the Scordelis-Lo roof benchmark is shown in \figref{fig:roof_geometry}. The structure is supported with rigid diaphragms at both ends and the side edges are free. This setup is realized by imposing homogeneous boundary conditions for the displacement of the form $\tilde u_x = \tilde u_z = 0$, leading to the conditions $u_1 = u_3 = 0$ for the covariant components of the displacement. The roof has moderate slenderness $\frac{R}{t} = 100$, with radius $R$ and thickness $t$ as defined in \figref{fig:roof_geometry}, and is subject to a uniform vertical load of $g = 90$ per unit area. This configuration yields a membrane dominated load-carrying behavior.
Due to the rectangular topology of the roof domain a single-patch representation is quite natural. The surveyed quantity is the vertical deflection $\tilde u_z$ at the midpoint of the free edges. In \cite{belytschko_1985} the value of the reference solution is reported as $0.3024$; this value was calculated using a very fine mesh.
In \figref{fig:roof_conv_Mmixed} the displacement convergence of the $\bm{M}$-mixed formulation for $p=1,2,3,4$ is shown. As expected, it turns out that the convergence becomes considerably faster with higher polynomial degree. For $p=3$ and $p=4$ already the fourth refinement step with $11$ control points per side yields a relative error of less than $1\%$, whereas the discretizations with $p=2$ and especially $p=1$ require a much finer mesh ($19$ and $400$ control points per side, respectively) to provide the same accuracy.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{pictures/roofGeometry}
\caption{Problem setup.}
\label{fig:roof_geometry}
\end{subfigure}
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{pictures/roofConv_Mmixed}
\caption{Displacement convergence of $\bm{M}$-mixed formulation.}
\label{fig:roof_conv_Mmixed}
\end{subfigure}
\caption{Scordelis-Lo roof.}
\end{figure}
According to \tabref{tab:roof}, for $p=2$ the results obtained in \cite{echter_2013} for the standard purely displacement-based 3-parameter Kirchhoff-Love shell formulation (3p) conform well with the $\bm{M}$-mixed shell elements developed in this paper for fine discretizations but provide worse results for coarse meshes.
\begin{table}
\centering
\begin{tabular}{p{4cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\hline
Control points per edge & $5$ & $9$ & $13$ & $20$ & $25$ & $30$ \\
\hline
$p=1$\\
$\bm{M}$-mixed & $0.0252$ & $0.0471$ & $0.0665$ & $0.1027$ & $0.1291$ & $0.1528$ \\
\hline
$p=2$ \\
$\bm{M}$-mixed & $0.1028$ & $0.2636$ & $0.2940$ & $0.2997$ & $0.3003$ & $0.3004$ \\
3p (Echter \cite{echter_2013}) & $0.0440$ & $0.2077$ & $0.2801$ & $0.2975$ & $0.2994$ & $0.3004$ \\
\hline
\end{tabular}
\caption{Scordelis-Lo roof, displacements ($\bm{M}$-mixed, 3p).}
\label{tab:roof}
\end{table}
For comparison we consider a second representation of the midsurface using four patches, as illustrated in \figref{fig:roof_geometry}. In \figref{fig:roof_patches} vertical displacements and bending moments for the one-patch and four-patch geometry representations are compared. Visually, no difference such as a discontinuity across patch interfaces can be seen. As in \cite{echter_oesterle_bischoff_2013}, in order to obtain $\overline{M}^{xx}$ the components of the bending moment tensor defined in the curvilinear coordinate system are first transformed into a local Cartesian basis with the $\overline x$- and $\overline z$-axes aligned with the $\xi^1$- and $\xi^3$-directions; for details see \cite{bischoff_wall_bletzinger_ramm_2004}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{pictures/roofResult_u}
\includegraphics[width=\textwidth]{pictures/roof4patchesResult_u}
\caption{Displacement $\tilde u_z$.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{pictures/roofResult_M}
\includegraphics[width=\textwidth]{pictures/roof4patchesResult_M}
\caption{Bending moment component $\overline{M}^{xx}$.}
\end{subfigure}
\caption{Scordelis-Lo roof, analysis results for one patch (upper) and four patch (lower) geometry representations.}
\label{fig:roof_patches}
\end{figure}
The convergence results are quite similar to the ones obtained for the one-patch geometry representation in \figref{fig:roof_conv_Mmixed}, and are therefore not shown.
\subsubsection{Pinched hemisphere}
\label{sec:hemisphere}
The problem setup of the pinched hemisphere benchmark is shown in \figref{fig:hemisphere_geometry}. The structure is fixed at the top and free along the bottom circumferential edge. The shell has a slenderness of $\frac{R}{t} = 250$, with radius $R$ and thickness $t$ as defined in \figref{fig:hemisphere_geometry}, and is subject to four radial point loads $F= \pm 2$ at its bottom. This configuration yields a bending dominated behavior with almost no membrane strains.
The undeformed midsurface is modeled using four patches, as illustrated in \figref{fig:hemisphere_geometry}. The investigated quantity is the radial displacement at the points where the loads are applied, with the value of the reference solution reported as $0.0924$ in \cite{belytschko_1985}.
In \figref{fig:hemisphere_conv_Mmixed} the displacement convergence of the $\bm{M}$-mixed formulation for $p=1,2,3,4$ is shown. Most observations carry over from the Scordelis-Lo roof benchmark to the pinched hemisphere. The slow convergence of the low-order discretizations with $p=1$ and $p=2$ is even more pronounced. The discretization with $p=2$ requires $35$ control points per edge (for each of the four patches) to reach a relative error of less than $1\%$. The discretization with $p=1$ shows a considerably slower convergence; with a reasonable number of control points no acceptable accuracy could be achieved. For $p=3$ and $p=4$ only $13$ control points per edge are needed to obtain the same accuracy.
The reason for the slow convergence in case of $p=1$ and $p=2$ for the Scordelis-Lo roof and (even more severe) for the pinched hemisphere is membrane locking, which is mechanically the inability to represent pure bending without unwanted, parasitic membrane strains, see, e.g., \cite{bischoff_wall_bletzinger_ramm_2004} for more details on membrane locking. Similar results showing evidence of membrane locking have already been observed in \cite{kiendl_2009, echter_2013}.
We will come back to the important issue of membrane locking in \sectref{sec:membraneLocking}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{pictures/hemisphereGeometry}
\caption{Problem setup.}
\label{fig:hemisphere_geometry}
\end{subfigure}
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{pictures/hemisphereConv_Mmixed}
\caption{Displacement convergence of $\bm{M}$-mixed formulation.}
\label{fig:hemisphere_conv_Mmixed}
\end{subfigure}
\caption{Pinched hemisphere.}
\end{figure}
\subsubsection{Pinched cylinder}
\label{sec:cylinder}
The last benchmark of the shell obstacle course is the pinched cylinder. The problem setup is shown in \figref{fig:cylinder_geometry}. The structure is supported with rigid diaphragms at both ends. The cylinder has moderate slenderness $\frac{R}{t} = 100$, with radius $R$ and thickness $t$ as defined in \figref{fig:cylinder_geometry}, and is subject to two opposite point loads $F= \pm 1$ in the middle. This configuration yields a severe test for both inextensional bending and complex membrane states.
The undeformed midsurface is modeled using four patches, as illustrated in \figref{fig:cylinder_geometry}. The analyzed quantity is the radial displacement at the points where the loads are applied, with the value of the reference solution reported as $1.8248 \cdot 10^{-5}$ in \cite{belytschko_1985}.
In \figref{fig:cylinder_conv_Mmixed} the displacement convergence of the $\bm{M}$-mixed formulation for $p=1,2,3,4$ is shown. The convergence behavior differs from the one observed for the first two benchmarks. The differences in the results due to the use of different polynomial orders are significantly smaller; in particular, the discretizations with $p=2,3,4$ provide comparable results. Note that already the first-order discretization shows an acceptable performance.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{pictures/cylinderGeometry}
\caption{Problem setup.}
\label{fig:cylinder_geometry}
\end{subfigure}
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{pictures/cylinderConv_Mmixed}
\caption{Displacement convergence of $\bm{M}$-mixed formulation.}
\label{fig:cylinder_conv_Mmixed}
\end{subfigure}
\caption{Pinched cylinder.}
\end{figure}
\subsection{Membrane locking and \texorpdfstring{$\bm{M}$-$\bm{N}$}{MN}-mixed formulation}
\label{sec:membraneLocking}
In order to investigate membrane locking we first consider a simple model problem consisting of a cylindrical shell strip, see, e.g., \cite{bischoff_wall_bletzinger_ramm_2004, echter_oesterle_bischoff_2013}. The problem setup of this example is shown in \figref{fig:cylinderStrip_geometry}. The structure is clamped along the edge $x=0$ and subject to a constant line load in radial direction with magnitude $q_x = 0.1 \cdot t^3$ at the opposite free edge. This configuration yields a bending dominated behavior. Therefore, membrane locking has to be expected in case the applied discrete formulation is not free from membrane locking.
The quantity of interest is the radial displacement at the midpoint of the free edge, with exact value $0.942$ independent of the slenderness $\frac{R}{t}$, according to an analytical solution based on Bernoulli beam theory (cf. \cite{echter_oesterle_bischoff_2013}). In order to ensure comparability with \cite{echter_oesterle_bischoff_2013} the domain is discretized with a mesh of 10 elements in longitudinal direction and one element in the other direction.
In \figref{fig:cylinderStrip_locking} the influence of varying slenderness $\frac{R}{t}$ on the obtained displacement is investigated for the original $\bm{M}$-mixed formulation and the extended $\bm{M}$-$\bm{N}$-mixed formulation for $p=1$ and $p=2$. Both results obtained with the $\bm{M}$-mixed formulation show severe membrane locking: the radial displacement tends to zero as the slenderness $\frac{R}{t}$ is increased. In the case of $p=2$, already for a moderate slenderness of $\frac{R}{t} = 100$ unphysical membrane strains lead to a considerable underestimation of the tip displacement of approximately $25\%$, and for $p=1$ the behavior is even worse. However, for both polynomial orders the results obtained with the $\bm{M}$-$\bm{N}$-mixed formulation indicate that this formulation completely removes the undesired membrane locking effects.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.44\textwidth}
\includegraphics[width=\textwidth]{pictures/cylinderStripGeometry}
\caption{Problem setup.}
\label{fig:cylinderStrip_geometry}
\end{subfigure}
\begin{subfigure}[b]{0.55\textwidth}
\includegraphics[width=\textwidth]{pictures/cylinderStripLocking}
\caption{Displacement convergence.}
\label{fig:cylinderStrip_locking}
\end{subfigure}
\caption{Cylindrical shell strip.}
\end{figure}
According to \tabref{tab:cylinderStrip}, for $p=2$ the results in \cite{echter_oesterle_bischoff_2013} for the standard purely displacement-based 3-parameter formulation (3p) and the corresponding formulation with a mixed Hybrid Stress modification of the membrane part (3p-HS) conform well with the $\bm{M}$-mixed and $\bm{M}$-$\bm{N}$-mixed shell elements proposed in this paper.
\begin{table}
\centering
\begin{tabular}{p{4.5cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1.5cm}}
\hline
Slenderness $\frac{R}{t}$ & $10$ & $100$ & $1000$ & $10 000$ \\
\hline
$p=2$\\
$\bm{M}$-mixed & $0.9420$ & $0.7075$ & $0.1156$ & $0.0112$ \\
$\bm{M}$-$\bm{N}$-mixed & $0.9454$ & $0.9423$ & $0.9422$ & $0.9422$ \\
$3$p (Echter et al. \cite{echter_oesterle_bischoff_2013}) & $0.9326$ & $0.6635$ & $0.0225$ & $0.0002$ \\
$3$p-HS (Echter et al. \cite{echter_oesterle_bischoff_2013}) & $0.9386$ & $0.9425$ & $0.9425$ & $0.9425$ \\
\hline
\end{tabular}
\caption{Cylindrical shell strip, displacements ($\bm{M}$-mixed, $\bm{M}$-$\bm{N}$-mixed, 3p, 3p-HS). }
\label{tab:cylinderStrip}
\end{table}
The improved convergence behavior of the $\bm{M}$-$\bm{N}$-mixed formulation observed in the basic cylindrical shell strip problem carries over to the Scordelis-Lo roof and pinched hemisphere benchmarks (cf. \sectref{sec:roof} and \sectref{sec:hemisphere}). In \figref{fig:cylinder_conv_MNmixed} and \figref{fig:hemisphere_conv_MNmixed} the displacement convergence of the $\bm{M}$-$\bm{N}$-mixed formulation for $p=1,2,3,4$ is shown for the Scordelis-Lo roof and the pinched hemisphere benchmark, respectively. For both problems the locking-free $\bm{M}$-$\bm{N}$-mixed shell element converges considerably faster to the reference value for the same number of control points than the $\bm{M}$-mixed element in \figref{fig:roof_conv_Mmixed} and \figref{fig:hemisphere_conv_Mmixed}. Furthermore, the differences in the results for different polynomial orders are significantly reduced.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{pictures/roofConv_MNmixed}
\caption{Scordelis-Lo roof}
\label{fig:cylinder_conv_MNmixed}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{pictures/hemisphereConv_MNmixed}
\caption{Pinched hemisphere}
\label{fig:hemisphere_conv_MNmixed}
\end{subfigure}
\caption{Displacement convergence of $\bm{M}$-$\bm{N}$-mixed formulation.}
\end{figure}
In \tabref{tab:Nmixed} the results of the $\bm{M}$-$\bm{N}$-mixed formulation are compared to the results for the 3p-HS formulation of Echter \cite{echter_2013}. Recall that our formulation does not require an $H^2$-conforming discretization, in contrast to the one in \cite{echter_2013}. A quantification of the improvements in numbers is obtained by comparing \tabref{tab:Nmixed} with \tabref{tab:roof}.
\begin{table}
\centering
\begin{tabular}{p{4cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\hline
Control points per edge & $5$ & $9$ & $13$ & $20$ & $25$ & $30$ \\
\hline
$p=1$\\
$\bm{M}$-$\bm{N}$-mixed & $0.2020$ & $0.2711$ & $0.2869$ & $0.2941$ & $0.2970$ & $0.2978$ \\
\hline
$p=2$ \\
$\bm{M}$-$\bm{N}$-mixed & $0.2737$ & $0.2999$ & $0.3005$ & $0.3006$ & $0.3006$ & $0.3006$ \\
3p-HS (Echter \cite{echter_2013}) & $0.2517$ & $0.3000$ & $0.3005$ & $0.3006$ & $0.3006$ & $0.3006$ \\
\hline
\end{tabular}
\caption{Scordelis-Lo roof, displacements ($\bm{M}$-$\bm{N}$-mixed, 3p-HS).}
\label{tab:Nmixed}
\end{table}
\section{Concluding remarks and future work}
The numerical experiments in \sectref{sec:numerical_experiments} demonstrate that the proposed $\bm{M}$-mixed shell element works in well-known benchmark problems. The results for the $\bm{M}$-$\bm{N}$-mixed formulation indicate that this formulation is free from membrane locking.
In contrast to Kirchhoff-Love type thin shells, for conforming Reissner-Mindlin shell elements standard $C^0$-continuous shape functions are commonly used. In order to avoid transverse shear locking, in \cite{long_bornemann_cirak_2012,echter_oesterle_bischoff_2013} an $H^2$-based Reissner-Mindlin formulation obtained by a change of variables from rotations to transverse shear strains is proposed. This approach, which was later mathematically analyzed in the context of plates in \cite{lovadina_2015}, has the advantage that transverse shear locking is eliminated on the level of the continuous formulation, independent of a particular discretization. Applying our technique to this $H^2$-based Reissner-Mindlin formulation in order to obtain an $H^1$-based mixed formulation seems possible and would be worth investigating.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro}
Statistical matching (SM) is widely used to reduce the effect of confounding~\cite{Rubin1973,Anderson1980,Kupper1981} when evaluating the relative effects of two different paths of action in an observational study. For instance, medical studies use SM to compare mortality rates between two patient populations that have received two different treatments or procedures~\cite{Ray2012,Zhang2015,Gozalo2015,Zhang2016,Cho2016,Bruno2017,Nichay2017,Burden2017,Mcevoy2016,Schermerhorn2008,Lee2017,Capucci2017,Tranchart2016,Zangbar2016,Dou2017,Fukami2017,McDonald2017,Lai2016,Abidov2005,Adams2017,Kishimoto2017,Kong2017,Chen2016,Seung2008,Shaw2008,Liu2016,Svanstrom2013,Salati2017}. With more than $16{,}000$ citations in research papers within the last 12 months \cite{King2016}, Propensity Score Matching (PSM)~\cite{Rosenbaum1983} is the de-facto standard for SM in such applications.
While some limitations of PSM have been studied~\cite{King2016, Austin2011, Pearl2009} and the quality of PSM results has been discussed through empirical evaluations, PSM results have not yet been sufficiently investigated in a mathematical sense. This is particularly important since PSM results are often used for making critical decisions such as choosing the best medical procedure. On the basis of a general PSM algorithm we show that PSM can lead to arbitrary decision making and that PSM-based results are susceptible to manipulation by cherry-picking outcomes supporting certain hypotheses. We illustrate our findings using the example of a real-world medical study and present Deterministic Balancing Score Exact Matching (DBSeM), a new approach for exact SM that delivers the average result over all valid sets of exact matchings for the investigated dataset and is therefore reproducible and reliable.
Specifically, we make the following contributions:
$C1$\label{C1}: We investigate potential pitfalls based on an analysis of general PSM implementations taken from guidelines for implementing PSM algorithms, and illustrate our findings using the database for isolated aortic valve procedures $2013$, containing information on $17{,}427$ patients, their treatment and various other relevant parameters.
$C2$\label{C2}: We formally derive four properties that an optimal SM algorithm has to meet: reproducibility of results, order-independence, data-completeness, and conservation. PSM does not have these properties.
$C3$\label{C3}: We introduce DBSeM as a clustering-based SM approach and prove that DBSeM satisfies the four properties of an optimal SM algorithm.
$C4$\label{C4}: We show that bootstrapped PSM results converge towards the results gained by DBSeM, which is the average result of all sets of exact matched pairs.
The motivation behind our contributions is to develop an algorithm that is usable for statistical matching in general and is deterministic. The deterministic property is important because results obtained through a deterministic algorithm can be reproduced by fellow researchers, leading to verifiability of results as well as to further common ground for scientific discussion in the field of observational studies.
Note that this is a mathematical article. Hence the proven results are generally applicable to all datasets used in exact SM.
While this paper uses medical terminology such as ``patients'' or ``treatment'' to illustrate its content, our results are applicable to other fields of research with observational studies as well.
\section{Related work and definitions}\label{sec:related}
In the context of medical observational studies the propensity score (PS) is the probability that a patient is assigned to a particular treatment given a vector of observed covariates~\cite{Rosenbaum1983}. PSM matches patients with similar/equal PS to allow a comparison between treatment results. Thus PS and PSM are defined by Rosenbaum and Rubin in~\cite{Rosenbaum1983} as follows:
\noindent Given a set $G := \{x_1,\,\ldots,\,x_a,\,z_1,\,\ldots,\,z_b\}$ of patients. Let $A := \{x_1,\,\ldots,\,x_a\}$ and $B := \{z_1,\,\ldots,\,z_b\}$ be the patient partition for the respective treatments. The $s$ statistically relevant properties -- covariates -- of each patient $p \in G$ are specified by an $s$-dimensional covariate vector $cv(p)\in \mathbb{R}_{\geq 0}^s$ and the observed result is identified by $obs(p) \in \mathbb{R}$.
In randomized studies the PS is known by design, whereas in non-randomized studies -- the case of the illustrative example -- it needs to be estimated from the dataset. The PS is typically~\cite{Stuart2014} estimated using logistic regression, but can also be calculated through other regression methods such as probit, tobit or Cox regression, with treatment as the dependent variable and the covariates as baseline. Given the regression coefficients $\beta_j,\, 0\leq j\leq s$, from the logistic regression, the estimated PS of a patient $p$ is defined as
\begin{equation}
\label{eq:prop_score}
ps(p) := \frac{e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_j(p)}}{1+e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_{j}(p)}}.
\end{equation}
To compare patients with each other one can now compute the estimated propensity~score~differences~(PSD) from the PS of all patients for the dataset as
\begin{equation}
\label{eq:psd}
psd_{i,\,j} := \vert ps(x_i) - ps(z_j)\vert,\, \forall 1\leq i \leq a,\,1\leq j \leq b.
\end{equation}
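For illustration, equations \eqref{eq:prop_score} and \eqref{eq:psd} can be evaluated with a few lines of Python (a minimal sketch on synthetic data only; the group sizes, the covariate distributions and the use of statsmodels are hypothetical assumptions, unrelated to the SPSS implementation used later for the study data):
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
a, b, s = 40, 30, 3                       # hypothetical group sizes and covariates
cv_A = rng.normal(0.0, 1.0, (a, s))       # covariate vectors of treatment group A
cv_B = rng.normal(0.3, 1.0, (b, s))       # covariate vectors of treatment group B

X = sm.add_constant(np.vstack([cv_A, cv_B]))       # column of ones for beta_0
treat = np.concatenate([np.ones(a), np.zeros(b)])  # treatment indicator

# Logistic regression of treatment on the covariates; predict gives the
# estimated propensity scores ps(p).
fit = sm.Logit(treat, X).fit(disp=False)
ps = fit.predict(X)
ps_A, ps_B = ps[:a], ps[a:]

# Propensity score differences psd_{i,j} = |ps(x_i) - ps(z_j)|
psd = np.abs(ps_A[:, None] - ps_B[None, :])
print(psd.shape, psd.min().round(4))
\end{verbatim}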
Finally, one has to match the patients. In general there are two classes of SM that are used to match members of different sets, i.e., patients:
\begin{itemize}
\item Exact matching \cite{Stuart2010,Iacus2011}: Only members of different sets with equal covariate vectors are matched, i.e., for PSM $psd_{i,\,j} = 0$.
\item $\delta$-matching \cite{Stuart2010}: Members of different sets can be matched if they are similar enough according to a chosen similarity measure, e.g., Mahalanobis distance \cite{Iacus2011} or for $\delta$-PSM $psd_{i,\,j} \leq \delta$.
\end{itemize}
Different algorithmic realizations of $\delta$-matching are for example caliper matching, nearest neighbor matching or optimal matching~\cite{Stuart2010,Caliendo2005,Rosenbaum1989}.
\newline
\noindent The foundation for both exact and $\delta$-PSM was laid by Rosenbaum and Rubin~\cite{Rosenbaum1983} by introducing the notion of balancing scores. A balancing score $b(cv(p))$ of a patient is a value assignment such that the conditional distribution of $cv(p)$ is the same for patients $p$ from both treatment groups, $A$ and $B$. Rosenbaum and Rubin showed that the PS is the coarsest balancing score, while $cv(p)$ is the finest~(\cite{Rosenbaum1983}, section $2$), and that if treatment assignment is strongly ignorable, then the difference between the two respective treatments is an unbiased estimate of the average treatment effect at that balancing score value~(\cite{Rosenbaum1983}, theorem $3$).
We will, if not stated otherwise, only consider exact PSM in this paper. Besides ease of presentation our reasons are manifold:
\begin{enumerate}
\item An ideal experimental design would be to compare the outcome of two therapies for pairs of patients with exactly the same covariate vector. For this reason we focus on exact matching in this paper. Additionally, exact matching is the best possible type of $\delta$-PSM~\cite{Rubin1974}.
\item \label{item:ex_1} Exact matching is a special case of the more general $\delta$-matching. Thus every $\delta$-matching contains an exact matching or at least the attempt of an exact matching on a subset of patients and pitfalls emerging in exact matching are present in $\delta$-matching as well.
\item \label{item:ex_2} If pitfalls are present in exact matching, then letting $\delta >0$, either amplifies the effects of these pitfalls or does not affect them in any way. Most importantly the pitfalls do not vanish.
\item \label{item:ex_3} Pitfalls emerging in exact matching are significant for the whole theory of PSM, as the best case for SM is a dataset, which is fully matchable by exact matching.
\item If no exact matches between two therapy groups exist, then the question of comparability of the two groups on the basis of the given dataset arises as they have no common support.
\end{enumerate}
Note that, because of reasons~\ref{item:ex_1}--\ref{item:ex_3}, considering only exact matching does not impair the scope of our deductions regarding the implications for $\delta$-matching.
Additionally, we limit the presentation to $1$:$1$ exact matchings, as $1$:$1$ matching procedures allow the largest number of possible matchings for fixed match sizes and all possible $k$:$l$ matchings are included in the set of possible $1$:$1$ matchings; see subsection~\ref{subsec:many-one} for further explanations regarding $k$:$l$ and one-to-many PSM.
Algorithm~\ref{alg:PSM} describes the general structure of an $1$:$1$ PSM-based matching procedure (cf.~\cite{Caliendo2005}): \FloatBarrier
\setlength{\intextsep}{7.5pt}
\begin{algorithm}
\caption{General $1$:$1$ PSM-based/statistical matching procedure}
\label{alg:PSM}
\begin{algorithmic}[1]
\State \label{state:psm} Compute $psd_{i,\,j}\,\forall 1\leq i \leq a,\,1\leq j \leq b$ (e.g., using logistic or tobit regression).
\State \label{state:balance} Check balancing of propensity score (e.g., known covariates of high influence should have high influence on the regression value).
\For{each patient $x_i \in A\, (1\leq i \leq a)$} \label{state:PSM_match}
\Statex Create Matching Set $M_i = \emptyset$.
\Statex Search for an unmatched patient $z_j \in B\, (1\leq j \leq b)$ with $psd_{i,\,j} = 0$.
\Statex \textbf{If} $z_j \in B$ was found in previous step: Set $M_i := \{x_i,\,z_j\}$
\Statex Continue with next patient from $A$.
\EndFor
\State \label{state:covariate} Check covariate balancing in matches and matching quality (e.g., homogenization) and output matching sets $M_i$.
\end{algorithmic}
\end{algorithm}
\FloatBarrier
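A direct transcription of the matching loop (step $3$) of Algorithm~\ref{alg:PSM} into Python might look as follows (a sketch only; the greedy pairing, the function name and the tolerance parameter are hypothetical, with exact matching corresponding to tol $= 0$):
\begin{verbatim}
def exact_psm_1to1(ps_A, ps_B, tol=0.0):
    """Step 3 of Algorithm 1: greedily match each x_i in A to the first still
    unmatched z_j in B with |ps(x_i) - ps(z_j)| <= tol (tol = 0: exact PSM)."""
    matched_B = set()
    matches = []                        # matching sets M_i, stored as index pairs
    for i, p_i in enumerate(ps_A):
        for j, p_j in enumerate(ps_B):
            if j not in matched_B and abs(p_i - p_j) <= tol:
                matched_B.add(j)
                matches.append((i, j))
                break                   # continue with the next patient from A
    return matches
\end{verbatim}
Note that the outcome of this greedy loop depends on the order in which the patients of $A$ and $B$ are traversed; this order dependence is one of the pitfalls discussed below.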
Steps \ref{state:balance} and \ref{state:covariate} do not have to be considered in this paper because exact matching -- if viable -- completely balances covariates and achieves complete harmonization.
Furthermore, the various matching strategies applicable in step~\ref{state:PSM_match}, such as nearest neighbor~\cite{Stuart2010}, stratification~\cite{Iacus2011} or optimal~\cite{Rosenbaum1989} matching, are irrelevant for this paper. This is because the strategies' individual strengths and weaknesses do not come into play in exact matching, as $psd_{i,\,j}=0$ (and $cv(x_i) \equiv cv(y_j)$) is either true for all PSM strategies or for none.
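To make the matching loop of step~\ref{state:PSM_match} concrete, the following is a minimal Python sketch of a greedy $1$:$1$ exact matching on propensity scores. It is purely illustrative; the function and variable names (\texttt{greedy\_exact\_match}, \texttt{ps\_a}, \texttt{ps\_b}) are our own and do not refer to any particular PSM implementation.
\begin{verbatim}
def greedy_exact_match(ps_a, ps_b):
    """Greedy 1:1 exact matching on propensity scores (illustrative sketch).

    ps_a, ps_b: lists of propensity scores for therapy groups A and B.
    Returns index pairs (i, j) with ps_a[i] == ps_b[j]; every patient is
    used at most once.
    """
    used_b, matches = set(), []
    for i, p in enumerate(ps_a):              # iterate in the given sort order
        for j, q in enumerate(ps_b):
            if j not in used_b and p == q:    # psd(i, j) == 0
                matches.append((i, j))
                used_b.add(j)
                break                         # first hit wins
    return matches
\end{verbatim}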
\subsection{$1$:$2$ and one-to-many PSM}\label{subsec:many-one}
$1$:$2$~PSM is a variant of PSM where one patient from one therapy group gets matched to two patients from the other therapy group, if there are two patients meeting the matching criteria. This leads to a loss of information as possible matchings could be ignored. For instance, let $x_i$ be an arbitrary patient of $A$ and assume there exists no other patient in $A$ with the same PS. Assuming that there are ten patients in $B$ with the same PS as $x_i$, there are
$\binom{10}{2} = 45$ possible $1$:$2$ matchings, out of which only one gets chosen, while the information in the remaining eight unmatched patients is lost. Note that this can happen in $\delta$-PSM as well.
Obviously this behavior persists in the general case of $k$:$l$ PSM, where $k,\,l\in \mathbb{N}$ and $k$ patients from one therapy group get matched to $l$ patients of the other group, if all patients meet the matching criteria. Consequently we will not consider one-to-many PSM or its more general case of $k$:$l$ PSM in this article; see also subsection \ref{subsec:incomplete} on incomplete usage of data.
\subsection{Bootstrapping}\label{subsec:bootstrap}
Bootstrapping techniques \cite{Austin2014} are applied in PSM to avoid negative effects occurring due to randomness or statistical outliers. Considering the example from the previous subsection \ref{subsec:many-one} again:
Let $x_i$ be an arbitrary element of $A$ and there exist ten patients from $B$ with equal PS. Assume that only a single patient $z_j$ out of the ten has $obs(z_j) = 1$. Matching only $x_i$ and $z_j$ and thus leaving the remaining nine possible matches in $B$ unmatched distorts the result. This persists, even if the matching choice was made randomly, as the error lies within the choice of matching only one pair. Note that variants of one-to-many PSM are susceptible to the same error. Bootstrapping avoids this by taking multiple samples, meaning that the matching part of the algorithm is run multiple times.
As each sample can be perceived as a different permutation of the input, one has to take a high number of samples, which adds an overhead to bootstrapping. Because of this added overhead, the bootstrapping approach seems to be used rarely: in comparison to the widespread use of PSM, only few studies, e.g., \cite{Knight2016,Chiu2016,Ounpraseuth2012}, make use of bootstrapping with PSM. We prove that the result of executing PSM with bootstrapping converges to the result delivered by DBSeM, which avoids the overhead of bootstrapping and does not suffer from the remaining pitfalls of PSM.
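As a rough illustration of this idea, the following Python sketch repeats a randomized exact matching step and averages the resulting mortality rates over many samples; the helper \texttt{random\_exact\_match}, the outcome encoding ($1$ = in-hospital death) and the number of samples are hypothetical choices and not taken from the cited studies.
\begin{verbatim}
import random

def random_exact_match(ps_a, ps_b, rng):
    """One randomized 1:1 exact matching sample (illustrative)."""
    matches, used_b = [], set()
    for i in rng.sample(range(len(ps_a)), len(ps_a)):   # random order in A
        candidates = [j for j, q in enumerate(ps_b)
                      if q == ps_a[i] and j not in used_b]
        if candidates:
            j = rng.choice(candidates)                   # random partner in B
            matches.append((i, j))
            used_b.add(j)
    return matches

def bootstrap_mortality(ps_a, ps_b, obs_a, obs_b, samples=10000, seed=0):
    """Average the mortality rates of both matched groups over many samples."""
    rng = random.Random(seed)
    rate_a = rate_b = 0.0
    valid = 0
    for _ in range(samples):
        m = random_exact_match(ps_a, ps_b, rng)
        if not m:
            continue
        valid += 1
        rate_a += sum(obs_a[i] for i, _ in m) / len(m)
        rate_b += sum(obs_b[j] for _, j in m) / len(m)
    return rate_a / valid, rate_b / valid
\end{verbatim}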
\section{PSM's Pitfalls}\label{sec:pitfalls}
With regard to the goal of SM it is desirable to establish a matching procedure that delivers identical results for the same input set. We show in this section that the results of multiple PSM runs differ significantly even if PSM is applied to the same dataset, and we identify some of PSM's pitfalls.
For illustration we use the quality assurance dataset of isolated aortic valve procedures in $2013$, which is an official mandatory dataset including all isolated aortic valve surgery cases in German hospitals and contains patient information (covariates) and mortality information (observed result) for $17{,}427$ patients. For each patient, the corresponding record contains $19$ variables, i.e., $s=19$. This external quality assurance database for isolated aortic valve procedures 2013 of the German Federal Joint Committee contains $9{,}848$ SAVR (replacement surgery of aortic valves) cases and $7{,}579$ TF-AVI cases (transcatheter/transfemoral implantation of aortic valves)\footnote{The cases were documented in accordance with \S 137 Social Security Code V (SGB V) by hospitals registered under \S 108 SGB V. The data collection is compulsory for all in-patient isolated aortic valve procedures in German hospitals.}, held by the Federal Joint Committee (Germany). Given the dataset it can safely be assumed that the data is independent in a statistical sense as patients were only recorded once. The illustrative results, i.e., mortality rates, were calculated using the internationally validated Euroscore~II\footnote{\url{http://www.euroscore.org}} variables and the PSM functions provided by IBM SPSS Statistics for Windows, Version $24.0$.
\subsection{Randomness of Choice and sort order dependence of PSM}\label{subsec:randomness}
For clarity of exposition the following definitions are essential:
\begin{definition}[Sort order]
\label{def:sort_order}
The \emph{sort order} for SM is the order in which patients are ordered in the matrix representing the dataset.
\end{definition}
The following example illustrates the meaning of sort order for SM:
\begin{example}
\label{ex:sort_order}
Let $x_1$ and $x_2$ be patients with covariate vectors $cv(x_1) = (1,\,0,\,1)$ and $cv(x_2) = (0,\,1,\,0)$. The order in which $x_1$ and $x_2$ appear in the matrix representing the dataset is the sort order for SM. Thus
\begin{equation*}
\begin{matrix}
1 & 0 & 1 & (cv(x_1))\\
0 & 1 & 0 & (cv(x_2)) \\
\end{matrix}
\end{equation*}
and
\begin{equation*}
\begin{matrix}
0 & 1 & 0 & (cv(x_2))\\
1 & 0 & 1 & (cv(x_1)) \\
\end{matrix}
\end{equation*}
represent different sort orders.
\end{example}
Note that a sort order is valid for the dataset as a whole, thus the whole data matrix is ordered such that a column represents the value of a specific covariate.
Obviously the information contained in a dataset is independent of the sort order of the given dataset. This motivates the following definition:
\begin{definition}
\label{def:sort_order_dep}
An SM-algorithm is sort order dependent if given a dataset with therapy groups $A$ and $B$ the algorithm calculates different results for different sort orders.
\end{definition}
Looking at step~\ref{state:PSM_match} of the general PSM procedure (Algorithm~\ref{alg:PSM}), one can infer that, unless sort order dependence was explicitly kept in mind and taken care of, PSM implementations are generally sort order dependent: the first potential match encountered (or one selected according to a random number) gets picked, regardless of the precise matching criterion, i.e., nearest-neighbor, optimal or caliper matching. Additionally, matching with fixed match sizes, independent of the exact values of $k$ and $l$, is sort order dependent for the same reason.
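A tiny, purely hypothetical demonstration of this dependence can be obtained with the greedy sketch given earlier: feeding the same toy data in two different sort orders already changes which patients are matched and hence the observed outcome of the matched sample (the numbers below are invented for illustration only).
\begin{verbatim}
# Two A-patients share their PS with a single B-patient (toy data).
ps_a, obs_a = [0.4, 0.4], [1, 0]      # 1 = in-hospital death, 0 = survived
ps_b, obs_b = [0.4], [0]

m1 = greedy_exact_match(ps_a, ps_b)                      # original sort order
m2 = greedy_exact_match(list(reversed(ps_a)), ps_b)      # reversed sort order

rate1 = sum(obs_a[i] for i, _ in m1) / len(m1)                  # 1.0
rate2 = sum(list(reversed(obs_a))[i] for i, _ in m2) / len(m2)  # 0.0
print(rate1, rate2)   # same dataset, different sort order, different result
\end{verbatim}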
The sort order dependency can also be observed by looking at the results from Table~\hyperref[tab:random_runs]{$1$}, which presents PSM calculations on the aforementioned dataset.
\begin{table}[h!]
\centering
\begin{tabular}{l|ll|ll|l}\hline
$1{,}502$ exact matchings with & \multicolumn{2}{|c|}{SAVR} & \multicolumn{2}{|c|}{TF-AVI} &$\chi^2$ Test \\
regards to all $19$ Euroscore II& \multicolumn{2}{|c|}{in-hospital death} & \multicolumn{2}{|c|}{ in-hospital death} & (2-tailed)\\
variables and without replacement& count & \% & count & \% & p-value \\\hline
Run $1$ & $73$ & $4.9\%$ & $33$ & $2.2\%$ & $<0.0001$\\
Run $2$ & $73$ & $4.9\%$ & $34$ & $2.3\%$ & $<0.0001$\\
Run $3$ (different sort order) & $42$ & $2.8\%$ & $32$ & $2.1\%$ & $0.2398$\\
\hline
\end{tabular}
\caption{Results of exact 1:1 PSM runs for two heart-surgery methods without bootstrapping}
\label{tab:random_runs}
\end{table}
\FloatBarrier
The rows labeled Run~$1$ and Run~$3$ (different sort order) differ only in the sort order of the input: for Run~$3$ one covariate was ordered in descending instead of ascending order, so that the patients with $1$ as entry for this covariate come first. If PSM were sort order independent, the results should at least be similar, as the dataset and every other given input were exactly the same. Since the results differ largely, the conclusions drawn from Run~$3$ are contrary to the conclusions one would draw from Run~$1$.
Besides the sort order dependence of PSM there is also a random element involved, as Run~$1$ and Run~$2$ used the same sort order but obtained slightly different results. The randomness occurs for patients $x\in A$ with more than one patient $z \in B$ such that $ps(x) \equiv ps(z)$. For a method used in a scientific context this should not happen, as verification of results through reproduction by fellow researchers with the same dataset and software is severely impeded when results are difficult to reproduce.
To clarify the importance of sort order dependence and randomness of choice we calculated the worst and best possible results for exact $1$:$1$ PSM on the given dataset; for the results see Table~\ref{tab:best_worst}. The exemplary dataset had mortality as observed value, thus a patient is either dead or alive at the end of the study. Consequently the best case for a therapy group means that living patients from that therapy group were matched to living patients, while matching living patients to dead patients was avoided as long as possible. This can, for example, be achieved in the best case for every patient of one therapy group, e.g., $A$, by taking the patient's PS and, if there is a living patient in $B$ with the same PS, matching both living patients. If there is no living patient in $B$, but dead patients with the same PS exist, then they get matched. Naturally, patients with different PS do not get matched as we only considered exact matching. One should note that the observed result is not included in the regression model and does not need to be included for simulating a PSM in this manner, as patients were only potentially matched if the PS of both patients coincided.
\begin{table}[h!]
\centering
\begin{tabular}{l|ll|ll|l}\hline
$1{,}502$ exact matchings with & \multicolumn{2}{|c|}{SAVR} & \multicolumn{2}{|c|}{TF-AVI} &$\chi^2$ Test\footnote{t-test p-values for the first four rows are $<0.0001$ and for PSM with replacement $0.0005$.} \\
regards to all $19$ Euroscore II& \multicolumn{2}{|c|}{in-hospital death} & \multicolumn{2}{|c|}{ in-hospital death} & (2-tailed)\\
variables and without replacement& count & \% & count & \% & p-value \\\hline
Best Case & $24$ & $1.6\%$ & $15$ & $1.0\%$ & $0.1470$\\
Worst Case & $73$ & $4.9\%$ & $50$ & $3.3\%$ & $0.0342$\\
Best SAVR/Worst TF-AVI & $24$ & $1.6\%$ & $50$ & $3.3\%$ & $0.0021$\\
Worst SAVR/Best TF-AVI & $73$ & $4.9\%$ & $15$ & $1.0\%$ & $<0.0001$\\
Uniform Bootstrapping ($10{,}000$ samples)& $52.47$ & $3.49\%$ & $32.10$ & $2.14\%$ & $0.0210$ (t-test)\\\hline\hline
PSM with replacement ($3{,}288$ matches) & $73$ & $2.2\%$ & $85$ & $2.5\%$ & $0.3339$ \\
\end{tabular}
\caption{Results for exact 1:1 PSM with the same dataset as in Table $1$}
\label{tab:best_worst}
\end{table}
As the matching procedure was exact PSM, the results are balanced regarding the covariates; thus, even if constructed, each of the presented cases is a valid outcome of applying PSM to the dataset. Furthermore, the true effect is generally unknown in practice and a random element is in place. Thus the identification of a result as an outlier can be difficult, especially since the balance of these matches is perfect. As the results regarding the observed value are completely different, the conclusions drawn from these results can differ as well. For example, under the decision criterion used in most of the medical studies cited in the introduction, e.g., \cite{Burden2017,Ray2012,Nichay2017,Gozalo2015,Mcevoy2016,Capucci2017,Bruno2017,Tranchart2016, Zangbar2016, Dou2017,Fukami2017,McDonald2017,Kishimoto2017,Kong2017,Chen2016,Seung2008,Shaw2008,Liu2016,Svanstrom2013}, of a $\chi^2$-value above $3.841$, respectively a p-value below $0.05$, the null hypothesis ($H_0$: \emph{The mortality rate does not depend on therapy}) would be rejected for Best SAVR/Worst TF-AVI and for Worst SAVR/Best TF-AVI from Table~\ref{tab:best_worst}, even though the directions of the results are opposite and the matchings are completely balanced and computed using the same dataset.
Bootstrapping \cite{Austin2014} can solve some of the aforementioned issues if the selection of a matching partner among many is uniform. Thus, we define:
\begin{definition}\label{def:ubPSM}
A bootstrapped PSM is called uniformly bootstrapped PSM (ubPSM) iff the selection choice of patients in $A$ to be matched with a single patient from $B$ of equal PS has the same probability for all patients from $A$ and vice versa.
\end{definition}
Note that if the uniformity assumption made in definition~\ref{def:ubPSM} does not hold, then a bootstrapped result can be skewed; the same holds if too few bootstrapping iterations were done. Note also that this assumption does not hold if one simply applies randomness to the matching procedure.
Table~\hyperref[tab:best_worst]{$2$} shows the result of applying ubPSM to our dataset. It is evident that the result differs significantly from some of the other results that were not bootstrapped. In view of the pitfall introduced in this section and subsection~\ref{subsec:bootstrap}, it is obvious that PSM with bootstrapping improves the reliability of the result in exchange for computational effort, as the variance of the result becomes smaller. An additional drawback of bootstrapping is that one cannot be certain that the number of samples drawn during the bootstrapping process is large enough. The method presented in section~\ref{sec:DBSeM} of this paper delivers an alternative to this approach and does not suffer from the drawbacks introduced through bootstrapping.
\subsection{Incomplete usage of Data} \label{subsec:incomplete}
For this paragraph suppose that patients $\tilde{x}_1,\,\ldots,\,\tilde{x}_n$ and $\tilde{z}_1,\,\ldots,\,\tilde{z}_m$ with identical PS, $psd(i,\,j) = 0\,\,\forall i \in \{1,\,\ldots,\,n\},\,j\in \{1,\,\ldots,\,m\}$, exist and that $n < m$.
An exact $1$:$1$ PSM algorithm will create $n$ matching pairs during the matching step, step~\ref{state:PSM_match} in algorithm \ref{alg:PSM}. Therefore $m-n$ many potential matches are ignored and the information provided by the dataset is only incompletely used. As this can, and in practice usually will, happen many times during a single PSM iteration a potentially large amount of information is ignored.
Taking a look at the exemplary calculations, exact $1$:$1$ PSM generates $1{,}502$ matching pairs and thus uses only $15.3\%$ of the available SAVR and $19.8\%$ of the available TF-AVI patient data. In section~\ref{sec:DBSeM} we will present an algorithm that uses all of the available data and show that potentially $34.1\%$ of the SAVR and $29.7\%$ of the TF-AVI patients are exactly $1$:$1$ matchable.
For the remainder of this paragraph~(\ref{subsec:incomplete}), we will consider $\delta$-matching and assume that the patients $\tilde{x}_1,\,\ldots,\,\tilde{x}_n$ and $\tilde{z}_1,\,\ldots,\,\tilde{z}_m$ have $psd(i,\,j) \leq \delta\,\forall i \in \{1,\,\ldots,\,n\},\,j\in \{1,\,\ldots,\,m\}$ for given $\delta > 0$ and $n < m$. A $1$:$1$ PSM algorithm will again create at most $n$ matching pairs. Furthermore, the larger therapy group usually provides even more potential matching patients for $\delta >0$, thus $n \ll m$ and the rate of information used is even lower than in the exact matching case.
Note that the briefly discussed $k$:$l$ matching variants, presented in subsection~\ref{subsec:many-one}, will also construct at most $n$ matching pairs. Consequently they present no valid solution to this pitfall.
PSM with replacement is supposed to solve the problem of incomplete data usage, but it has the drawback that some patients disproportionately impact the PSM result. This leads to results differing significantly from the outcomes obtained through PSM without replacement, which can also be observed in the last row of Table~\hyperref[tab:best_worst]{$2$}. While weighting matches according to their frequency~\cite{Stuart2010} alleviates the problem, the distorting nature of PSM with replacement along with the other presented pitfalls persists.
\subsection{Calculation of Propensity Scores}\label{subsec:PSM_calc}
The PS for PSM are typically computed using a type of regression. This results in issues related to floating point comparison, machine precision and the non-uniqueness of solutions of a nonlinear optimization problem. Alongside these issues one has to consider the property stated by proposition \ref{prop:PSM_calc}:
\begin{prop}\label{prop:PSM_calc}
If no two index sets $I,\,J \subseteq \{1,\,\ldots,\,s\}$ with $I \neq J$ and the property
\begin{equation}
\label{eq:PSM_combination}
\sum_{i \in I} \beta_i = \sum_{j \in J} \beta_j,
\end{equation}
exist, then: Two patients $x,\,z$ have the same covariate vectors, $cv(x) \equiv cv(z)$, if and only if they have the same logistic regression propensity scores, $ps(x) \equiv ps(z)$.
\end{prop}
\begin{proof}
Assume there exist no two index sets $I,\,J$ satisfying equation \eqref{eq:PSM_combination}, but that $x,\,z$ are two patients with different covariate vectors, $cv(x) \neq cv(z)$, and equal propensity scores, $ps(x) \equiv ps(z)$. Then the following equations lead to a contradiction.
\begin{eqnarray*}
ps(x) = ps(z) &\Leftrightarrow& \frac{e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_{j}(x)}}{1+e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_{j}(x)}} = \frac{e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_{j}(z)}}{1+e^{\beta_0 + \sum_{j = 1}^{s}\beta_{j}cv_{j}(z)}}\\
&\Leftrightarrow& e^{\sum_{j = 1}^{s}\beta_{j}cv_{j}(x)} = e^{\sum_{j = 1}^{s}\beta_{j}cv_{j}(z)}\\
&\Leftrightarrow& \sum_{j = 1}^{s}\beta_{j}cv_{j}(x) = \sum_{j = 1}^{s}\beta_{j}cv_{j}(z)\\
&\Leftrightarrow& cv_j(x) = cv_j(z) \textup{ for all } 1 \leq j\leq s.
\end{eqnarray*}
The last identity holds because by assumption there exist no index sets $I,\,J$ such that equation~\eqref{eq:PSM_combination} holds, thus the regression coefficients are unique in the sense of linear combinations. As $cv_j(x),\,cv_j(z) \in \mathbb{R}_{\geq 0}$ this results in a contradiction to the initial assumption that the covariate vectors are different. The converse direction holds as all relations above are equivalences.
\end{proof}
\bigskip
According to Proposition~\ref{prop:PSM_calc}, PS are not unique if equation~\eqref{eq:PSM_combination} holds for some combination of logistic regression coefficients: patients with different covariate vectors can then obtain identical PS and thus match despite using exact PSM. This property extends to $\delta$-PSM, as one cannot be sure that patients with similar PSs have similar CVs.
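The following Python sketch illustrates this failure mode with invented coefficients satisfying $\beta_1 = \beta_2 + \beta_3$, i.e., violating the assumption of Proposition~\ref{prop:PSM_calc}: two patients with different binary covariate vectors obtain identical logistic PS and would therefore be matched by exact PSM.
\begin{verbatim}
import math

def logistic_ps(beta0, beta, cv):
    """Propensity score of a logistic model (illustrative)."""
    z = beta0 + sum(b * c for b, c in zip(beta, cv))
    return 1.0 / (1.0 + math.exp(-z))

beta0 = -1.0
beta  = [0.9, 0.5, 0.4]     # beta[0] == beta[1] + beta[2]: assumption violated
cv_x  = [1, 0, 0]           # patient x
cv_z  = [0, 1, 1]           # patient z, different covariate vector

print(logistic_ps(beta0, beta, cv_x))   # identical scores, hence x and z
print(logistic_ps(beta0, beta, cv_z))   # would be matched by exact PSM
\end{verbatim}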
This concludes our discussion regarding contribution~\hyperref[C1]{$C1$}. Based on the presented pitfalls, we derive a set of properties which an optimal SM algorithm should have in the next section.
\section{Properties for SM algorithms}\label{sec:properties}
As shown in the previous section, PSM does not compute verifiable and reliable results. Properties~\ref{prop:repro} and~\ref{prop:sort-order} formalize corresponding properties for SM algorithms:
\begin{property}\label{prop:repro}
An SM algorithm has the reproducibility property iff the results given the same input remain exactly the same for any number of computations.
\end{property}
\begin{property}\label{prop:sort-order}
An SM algorithm has the property of sort-order independence iff the result remains the same even if the sort order of covariates of the dataset is changed.
\end{property}
SM algorithms possessing properties~\ref{prop:repro} and~\ref{prop:sort-order} can still produce non-reliable results, as they do not necessarily match in a well-defined manner. This is addressed by the following two properties:
\begin{property}\label{prop:complete}
An exact SM algorithm has the data completeness property iff for all sets of patients $\tilde{x}_1,\,\ldots,\,\tilde{x}_n \in A$ and $\tilde{z}_1,\,\ldots,\,\tilde{z}_m \in B$ with identical PS and $m\neq n$, the observed information of all $n+m$ patients has influence on the algorithm's result.
\end{property}
For completeness of exposition we give an extension of the data completeness property from exact matching to $\delta$-matching here. The extension can be done by introducing a cost function for the matching and the notion of existing possible matches:
\begin{definition}
\label{def:cost_match}
Let $M = \{M_1,\,\ldots,\,M_{|M|}\}$ be a matching and denote the matched patient from therapy group $A$ within the matching set $M_i$ of $M$ by $M_i(A)$, and analogously the matched patient from $B$ by $M_i(B)$. Then the weight of the matching $M$ is defined by
\begin{equation}
\label{eq:cost_match}
w(M) \coloneqq \sum_{i=1}^{|M|} psd(M_i(A),\,M_i(B)).
\end{equation}
\end{definition}
\begin{definition}
\label{def:poss_match}
A patient $\tilde{x}_i \in A$ is matchable in a $\delta$-matching if there exists a patient $\tilde{y}_j \in B$ such that $psd(i,\,j) \leq \delta$.
\end{definition}
\begin{definition}
\label{prop:delta_completeness}
A $\delta$-SM algorithm has the data completeness property iff, for its matching $M$, all patients $\tilde{x}_1,\,\ldots,\,\tilde{x}_n \in A$ and $\tilde{z}_1,\,\ldots,\,\tilde{z}_m \in B$ with an existing possible match are matched and $w(M)$ is minimal.
\end{definition}
SM algorithms fulfilling the data completeness property use all information contained in the input as no possible match is ignored. Even PSM with replacement does not have the data completeness property as randomness and sort order dependency still inhibit choosing some possible matches. The last property necessary for an optimal SM matching algorithm guarantees that the determined matching has no additional errors besides the errors stemming from the underlying data.
\begin{property}\label{prop:conserving}
An SM algorithm is called conserving if it is only possible for patients to be matched
\begin{itemize}
\item in exact matching, if their covariate vectors are the same.
\item in $\delta$-matching, if their covariates are similar enough according to the chosen similarity measure.
\end{itemize}
\end{property}
While PSM is often assumed to have the conserving property, it is computed using estimated regression scores, and this can introduce additional errors as the assumption of Proposition~\ref{prop:PSM_calc} does not always hold. This concludes our discussion regarding contribution \hyperref[C2]{$C2$}, and we next present our SM algorithm -- Deterministic Balancing Score exact Matching (DBSeM) -- meeting all four properties for exact SM.
\section{Deterministic Balancing Score Matching}\label{sec:DBSeM}
The general idea of DBSeM is to cluster patients from a therapy group with same covariate vectors and generate a matching between both therapy groups over the constructed clusters.
Clustering of patients $p$ and $q$ requires a distance metric. In exact matching any metric would be applicable, but for ease of presentation we will use the Manhattan metric $d(p,\,q) := \sum_{i = 1}^{s} \vert cv_i(p) - cv_i(q) \vert$ from now on. Note that patients $p$ and $q$ have equal covariate vectors iff $d(p,\,q) \equiv 0$.
\begin{definition}\label{def:cluster}
A cluster of patients from one therapy group $H$ is a non-empty set $C_{H}$ of patients with properties
\begin{enumerate}
\item $d(p,\,q) = 0 \,\,\forall p,\,q \in C_{H}$.
\item $\nexists q \in H$ such that $q\notin C_H$ and $d(p,\,q) = 0$ for $p\in C_H$.
\item If $p\in C_H$, then the assigned covariate vector of $C_H$ is $cv(p)$.
\end{enumerate}
\end{definition}
Because of definition~\ref{def:cluster} clusters have the following characteristics:
\begin{prop}\label{prop:cluster}
Let $H$ be a therapy group in an SM context, then the following holds for clusters in this therapy group:
\begin{enumerate}
\item Every patient in $H$ belongs to exactly one cluster.
\item Every cluster can have exactly one covariate vector assigned to it.
\item Any two clusters in $H$ have different assigned covariate vectors.
\end{enumerate}
\end{prop}
\begin{proof}
We prove every characteristic individually:
\begin{enumerate}
\item The assumption that there exists a patient $p\in H$ not belonging to any cluster is by definition \ref{def:cluster} not possible, thus it remains to show that there exists no patient $p\in H$ belonging to two different clusters $C_1$ and $C_2$. Assume that $p\in C_1\cap C_2$ and let $q_1 \in C_1$ and $q_2 \in C_2$ be two patients in $C_1$ and $C_2$ respectively. As $p\in C_1 \cap C_2$ it holds by definition $\hyperref[def:cluster]{\ref{def:cluster}.1}$ that $d(p,\,q_1) = 0 = d(p,\,q_2)$ and therefore $d(q_1,\,q_2)=0$. This is a contradiction to definition $\hyperref[def:cluster]{\ref{def:cluster}.2}$ and therefore every patient belongs to exactly one cluster.
\item As clusters are non-empty sets of patients, every cluster has at least one covariate vector assigned to it. Therefore assume that cluster $C$ has two assigned covariate vectors $v_1$ and $v_2$ differing in at least one entry. Then by definition $\hyperref[def:cluster]{\ref{def:cluster}.3}$ there exist patients $p,\,q \in C$ such that $v_1 = cv(p)$ and $v_2 = cv(q)$. As $v_1 \neq v_2$ holds by assumption it follows that $d(p,\,q) \neq 0$, contradicting definition $\ref{def:cluster}.1$ as $p,\,q \in C$.
\item Assume that different clusters $C_1$ and $C_2$ have the same assigned covariate vector. This implies that $d(p,\,q) = 0,\,\forall p\in C_1,\,q\in C_2$ and is a contradiction to definition $\hyperref[def:cluster]{\ref{def:cluster}.2}$.
\end{enumerate}
\end{proof}
\bigskip
Because of proposition~\ref{prop:cluster}, clusters can be assigned unique covariate vectors. We denote the distance between two clusters $C_A$ and $C_B$ -- for therapy groups $A$ and $B$ respectively -- computed on their assigned covariate vectors, by $d(C_A,\,C_B)$. Similarly, the distance between a patient $p$ and a cluster $C$ is $d(p,\,C)$.
\begin{prop}\label{prop:cluster_equivalence}
Let $C_A$ and $C_B$ be clusters from different therapy groups, then $d(C_A,\,C_B) \equiv 0$ holds iff the two clusters have the same assigned covariate vector.
\end{prop}
\begin{proof}
Let $C_A$ and $C_B$ be clusters from different therapy groups and $d(C_A,\,C_B) \equiv 0$. As every cluster has exactly one assigned covariate vector it remains to show that $cv(C_A) \equiv cv(C_B)$ and the following holds:
\begin{equation}
\label{eq:cluster_eq}
d(C_A,\,C_B) \equiv 0 \Leftrightarrow \sum_{i=1}^{s} \vert cv_i(C_A)-cv_i(C_B)\vert \equiv 0 \Leftrightarrow cv_i(C_A) \equiv cv_i(C_B),\,\forall 1\leq i \leq s.
\end{equation}
Thus both clusters have the same assigned covariate vector. The reverse direction follows as all implications in equation~\eqref{eq:cluster_eq} are given through equivalence.
\end{proof}
\paragraph*{The DBSeM algorithm}
Propositions~\ref{prop:cluster} and~\ref{prop:cluster_equivalence} allow us to match clusters in an explicit way and to formulate the following algorithm:
\FloatBarrier
\begin{algorithm}
\caption{DBSeM}
\label{alg:DBSeM}
\begin{algorithmic}[1]
\State Set $c = 0$ and $is\_clustered(x_i) =0$ for all patients in $A$. \label{state:FB_1}
\For{each patient $x_i,\, 1\leq i \leq a$} \label{state:FB_2}
\If{$is\_clustered(x_i) \equiv 0$} \label{state:FB_3}
\State Set $c = c+1$, $C_{A,\,c} := \{x_i\}$ and $is\_clustered(x_i) = 1$. \label{state:FB_4}
\EndIf \label{state:FB_5}
\For{each patient $x_j$ with $i < j\leq a$ and $is\_clustered(x_j) \equiv 0$} \label{state:FB_6}
\If{$d(x_j,\,C_{A,\,c})\equiv 0$} \label{state:FB_7}
\State set $C_{A,\,c} = C_{A,\,c} \cup x_j$ and $is\_clustered(x_j) = 1$ \label{state:FB_8}
\EndIf \label{state:FB_9}
\EndFor \label{state:FB_10}
\EndFor \label{state:FB_11}
\State Repeat steps $1$ and $2$ for $B$ and store the number of clusters from $A$ and $B$ in variables $k$ and $l$ respectively. \label{state:FB_12}
\For{every cluster $C_{A,\,i},\,1\leq i \leq k$} \label{state:FB_13}
\State Create Matching Set $M_i = \emptyset$. \label{state:FB_14}
\State Search for cluster $C_{B,\,c}$ with $d(C_{A,\,i},\,C_{B,\,c}) \equiv 0$. \label{state:FB_15}
\If{A cluster $C_{B,\,c}$ was found in the previous step} \label{state:FB_16}
\State Set $M_i = \{C_{A,\,i},\,C_{B,\,c}\}$. \label{state:FB_17}
\EndIf \label{state:FB_18}
\EndFor \label{state:FB_19}
\State Weight clusters according to a weighting scheme. \label{state:FB_20}
\State Output matching sets $M_i$ and the weighted result. \label{state:FB_21}
\end{algorithmic}
\end{algorithm}
\FloatBarrier
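A minimal Python sketch of steps~\ref{state:FB_1} to~\ref{state:FB_19} is given below, assuming covariate vectors are available as tuples of numbers. Instead of the explicit double loop it groups patients by their covariate vector, which yields exactly the clusters of definition~\ref{def:cluster}; the names \texttt{cluster} and \texttt{dbsem\_match} are our own.
\begin{verbatim}
from collections import defaultdict

def cluster(patients):
    """Group patients with identical covariate vectors (Manhattan distance 0).

    patients: list of (covariate_tuple, observed_outcome) pairs.
    Returns {covariate_tuple: [outcomes of the cluster's patients]}.
    """
    clusters = defaultdict(list)
    for cv, obs in patients:
        clusters[tuple(cv)].append(obs)
    return clusters

def dbsem_match(group_a, group_b):
    """Match clusters of A and B that share the same covariate vector."""
    ca, cb = cluster(group_a), cluster(group_b)
    return [(cv, ca[cv], cb[cv]) for cv in ca if cv in cb]
\end{verbatim}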
\noindent The weighting in step~\ref{state:FB_20} is required to normalize the results and we will discuss it extensively in the next section. Next, we prove that DBSeM meets the four properties of an optimal SM algorithm.
\begin{theorem}\label{thm:DBSeM}
The \hyperref[alg:DBSeM]{DBSeM algorithm} satisfies properties~\ref{prop:repro} to~\ref{prop:conserving}.
\end{theorem}
\begin{proof}
We prove reproducibility by contradiction. Assume that two runs of DBSeM generated different matching set results $R_1$ and $R_2$, i.e., different clusters were matched. W.l.o.g. assume that $C \subseteq A$ is matched with $C_1 \subseteq B$ in $R_1$ and with $C_2 \subseteq B$ in $R_2$. As $C$ was matched with $C_1$ and $C_2$ we know from Proposition~\ref{prop:cluster_equivalence} that $d(C,\,C_1) \equiv 0 \equiv d(C,\,C_2)$. This implies $d(C_1,\,C_2) \equiv 0$ and $C_1 \equiv C_2$ by Proposition~\ref{prop:cluster}. Therefore $R_1 \equiv R_2$ as $C,\,C_1$ and $C_2$ were arbitrary, a contradiction to the assumption that $R_1$ and $R_2$ were different. The proof for sort-order independence is analogous.
Let $\tilde{x}_1,\,\ldots,\,\tilde{x}_n \in A$ and $\tilde{y}_1,\,\ldots,\,\tilde{y}_m \in B$ be two sets of patients with identical covariate vectors. Because of steps~\ref{state:FB_1} to~\ref{state:FB_12} both patient sets belong to a cluster, $C_A$ and $C_B$ respectively. This means $cv(\tilde{x}_i) \equiv cv(\tilde{y}_j),\,\forall 1\leq i \leq n,\,1\leq j\leq m$ and $cv(C_A) \equiv cv(C_B)$. Thus all patients represented by the clusters were matched and impact the matching result, and the data completeness property is fulfilled.
The conservation property holds because clusters are only matched if their covariate vectors are the same and every cluster has a unique covariate vector. This concludes the proof as long as step~\ref{state:FB_20} does not disturb the four properties, which will be proven in proposition~\ref{prop:min_weight}.
\end{proof}
\bigskip
As theorem~\ref{thm:DBSeM} shows, DBSeM satisfies the four properties needed for an optimal SM algorithm. According to~\cite{Rosenbaum1983}, the covariate vector is the finest balancing score that expresses differences between patients. Thus, for exact matching one achieves an expression of differences between patients by applying our algorithm. By clustering the patients and comparing matched cluster cardinalities, one can estimate assignment biases in both therapies.
Observe that the result given by the DBSeM algorithm is the same as the expected result given by coarsened exact matching (CEM), introduced by \cite{Iacus2011, Iacus2012}, if the strata used in CEM are generated in such a way that a stratum contains all patients with equal covariate vectors from both therapy groups. We stress that the value given by CEM is still an expected value, thus it can change if the algorithm is applied multiple times to the same dataset, while the value given by the DBSeM algorithm is a deterministic one, which is fixed by the data itself and does not change when applying the algorithm multiple times to the same dataset (property \ref{prop:repro}).
Finally note that the result given by algorithm \ref{alg:DBSeM} is imbalance bounded (IB), as defined in \cite{Iacus2012}. It is also equal percent bias reducing (EPBR) \cite{Rubin1976}, and we intend to extend our method to $\delta$-matching, with $\delta > 0$, such that these properties (IB) and (EPBR) are kept, while conforming to the four properties introduced in section \ref{sec:properties}.
This concludes our discussion regarding contribution \hyperref[C3]{$C3$}. Based on the presented algorithm we proceed to present a simple weighting mechanism and prove that bootstrapped ubPSM converges to DBSeM.
\section{Bootstrapped PSM convergence}\label{sec:convergence}
Step~\ref{state:FB_20} of DBSeM (cf. Algorithm~\ref{alg:min_weight}) uses a weighting approach to avoid that different cluster cardinalities lead to distorted matching results. In the following we use a min-weighting scheme as it allows us to show convergence of bootstrapped PSM to the DBSeM results.
The idea is to weight matched clusters $C_{A,\,i}$ and $C_{B,\,j}$ accordingly to their size such that the influence of both clusters is $\min\{\vert C_{A,\,i}\vert,\,\vert C_{B,\,j}\vert\}$ respectively. Algorithm~\ref{alg:min_weight} outlines a min-weighting procedure that needs to be applied to all matched clusters $C_{A,\,i}$ and $C_{B,\,j}$ in step~\ref{state:FB_20} of Algorithm~\ref{alg:DBSeM} (recall that $k$ and $l$ are the number of clusters from $A$ and $B$ respectively).
\FloatBarrier
\begin{algorithm}
\caption{Min-Weighting Procedure}
\label{alg:min_weight}
\begin{algorithmic}[1]
\State Set $w(C_{A,\,i}) = 0\,\,\forall 1\leq i \leq k$ and $w(C_{B,\,j}) = 0\,\,\forall 1\leq j \leq l$
\For{all $C_{A,\,i},\,1\leq i \leq k$ with $M_i \neq \emptyset$} \label{MW:state_2}
\Statex Determine the matching cluster $C_{B,\,j},\,1\leq j \leq l$.
\Statex Calculate $S_{A,\,i} := S_{B,\,j} := \min\{\vert C_{A,\,i}\vert,\,\vert C_{B,\,j}\vert \}$.
\Statex Compute $w(C_{A,\,i}) := S_{A,\,i}/\vert C_{A,\,i} \vert$ and $w(C_{B,\,j}) := S_{B,\,j}/\vert C_{B,\,j} \vert$.
\EndFor
\State Compute min-weighted results: \label{MW:state_4}
\begin{eqnarray}
R_A &:=& \sum_{i=1}^{k} [ w(C_{A,\,i}) \sum_{h=1}^{\vert C_{A,\,i}\vert} obs(x_{i,\,h})],\label{eqn:2}\\
R_B &:=& \sum_{j=1}^{l} [w(C_{B,\,j}) \sum_{h=1}^{\vert C_{B,\,j}\vert} obs(y_{j,\,h})]\label{eqn:3},
\end{eqnarray} where $x_{i,\,h} \in C_{A,\,i}$ and $y_{j,\,h} \in C_{B,\,j}$.
\end{algorithmic}
\end{algorithm}
\FloatBarrier
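Building on the hypothetical \texttt{dbsem\_match} sketch above, the min-weighting of Algorithm~\ref{alg:min_weight} can be written in a few lines of Python; the function returns the weighted results $R_A$ and $R_B$ of equations~\eqref{eqn:2} and~\eqref{eqn:3}.
\begin{verbatim}
def min_weighted_results(matched_clusters):
    """Compute R_A and R_B from matched clusters.

    matched_clusters: list of (cv, obs_list_a, obs_list_b) triples, where the
    obs lists contain the observed outcomes of the two matched clusters.
    """
    r_a = r_b = 0.0
    for _, obs_a, obs_b in matched_clusters:
        s = min(len(obs_a), len(obs_b))        # S_{A,i} = S_{B,j}
        r_a += (s / len(obs_a)) * sum(obs_a)   # w(C_{A,i}) * sum of outcomes
        r_b += (s / len(obs_b)) * sum(obs_b)   # w(C_{B,j}) * sum of outcomes
    return r_a, r_b
\end{verbatim}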
\begin{prop}\label{prop:min_weight}
The usage of algorithm~\ref{alg:min_weight} in step~\ref{state:FB_20} of algorithm~\ref{alg:DBSeM} does not disturb the properties of reproducibility, sort-order independence, data completeness and conservation of algorithm~\ref{alg:DBSeM}.
\end{prop}
\begin{proof}
From the proof of theorem~\ref{thm:DBSeM} we know that steps~\ref{state:FB_1} to~\ref{state:FB_19} of algorithm~\ref{alg:DBSeM} fulfill the properties of reproducibility, sort-order independence, data completeness and conservation. Assume now that algorithm~\ref{alg:min_weight} outputs two different min-weighted results $R_{A,\,1}$ and $R_{A,\,2}$ for therapy group $A$. Then there has to exist at least one pair of matched clusters $C_{A,\,i}$ and $C_{B,\,j}$ with different weights in $R_{A,\,1}$ and $R_{A,\,2}$ as the sum over the observed variables inside a cluster $\sum_{h=1}^{\vert C_{A,\,i}\vert}obs(x_{i,\,h})$ always has the same value and the matched clusters are uniquely matched because of proposition~\ref{prop:cluster_equivalence} and steps~\ref{state:FB_1} to~\ref{state:FB_19} of algorithm~\ref{alg:DBSeM} being reproducible and sort-order independent. As the matched clusters are unique so are their sizes and therefore $S_{A,\,i}$ is unique. Thus $w(C_{A,\,i})$ is the same for both assumed results $R_{A,\,1}$ and $R_{A,\,2}$ and as $C_{A,\,i}$ and $C_{B,\,j}$ were chosen arbitrarily this holds for all clusters. Thus $R_{A,\,1} \equiv R_{A,\,2}$ and the proof is analogous for different results regarding $B$. This proves that the property of reproducibility is not disturbed by using algorithm~\ref{alg:min_weight} in step~\ref{state:FB_20} of algorithm~\ref{alg:DBSeM}. The proof for sort-order independence is analogous.
If a patient was inside a matched cluster, then it influences the weight computed in step~\ref{MW:state_2} and the result generated in step~\ref{MW:state_4}. Therefore usage of algorithm~\ref{alg:min_weight} does not disturb algorithm \ref{alg:DBSeM}'s data completeness property.
As algorithm~\ref{alg:min_weight} does not delete matches, does not match itself and every matched patient is considered, it does not disturb algorithm~\ref{alg:DBSeM}'s conservation property.
\end{proof}
\bigskip
A DBSeM algorithm with the min-weighting procedure in step~\ref{state:FB_20} is called min-weighted DBSeM, and as $k\leq \vert A\vert$ and $l\leq \vert B\vert$ the following theorem holds:
\begin{theorem}\label{thm:DBSeM_runtime}
The min-weighted DBSeM algorithm has a runtime of $\mathcal{O}(\vert A \vert \cdot \vert B \vert\cdot s + \vert A\vert^2 + \vert B \vert^2)$.
\end{theorem}
\begin{proof}
In DBSeM step~\ref{state:FB_1} every patient of $A$ is looked at exactly once, while DBSeM steps~\ref{state:FB_2} to~\ref{state:FB_12} contain two nested for-loops and therefore have a runtime of $\vert A \vert^2$ and $\vert B \vert^2$ respectively. In DBSeM steps~\ref{state:FB_13} to~\ref{state:FB_19} every cluster in $B$ is investigated at most $\vert A \vert$ times and every comparison between clusters needs $s$ (the size of the covariate vector) operations to determine the Manhattan metric. This leads to a total runtime of $\mathcal{O}(\vert A \vert \cdot \vert B \vert\cdot s + \vert A\vert^2 + \vert B \vert^2)$ for steps~\ref{state:FB_1} to~\ref{state:FB_19}. Algorithm~\ref{alg:min_weight}'s runtime in step~\ref{state:FB_20} only depends on the numbers of clusters $k$ and $l$ in an additive way. As $k\leq \vert A \vert$ and $l\leq \vert B \vert$ it follows that Algorithm~\ref{alg:min_weight} has a runtime of $\mathcal{O}(\max\{\vert A\vert,\,\vert B\vert\})$. Thus the min-weighted DBSeM algorithm has a total runtime of $\mathcal{O}(\vert A \vert \cdot \vert B \vert\cdot s + \vert A\vert^2 + \vert B \vert^2)$.
\end{proof}
\bigskip
Note that the form of the bound stated in theorem \ref{thm:DBSeM_runtime} is due to the fact that we did not assume anything about the sizes of $A$, $B$ or $s$, nor about their relative sizes with regard to each other.
Theorem~\ref{thm:DBSeM_convergence} establishes that min-weighted DBSeM has the desirable property of bootstrapped PSM convergence.
As shown in proposition~\ref{prop:PSM_calc}, PSM requires that no two index sets of logistic regression coefficients have equal sums, i.e., that equation~\eqref{eq:PSM_combination} does not hold for any $I \neq J$, in order to obtain meaningful results; hence we assume this in the following.
\begin{theorem}\label{thm:DBSeM_convergence}
Uniformly bootstrapped $1$:$1$ exact PSM converges towards the outcome of min-weighted-DBSeM.
\end{theorem}
\begin{proof}
We have to show that the expected values of bootstrapped $1$:$1$ exact PSM results are the same values as in Equations~\eqref{eqn:2} and~\eqref{eqn:3}. Proving convergence towards equality~\eqref{eqn:2} is sufficient, as the proof of~\eqref{eqn:3} follows analogously.
By the law of large numbers it holds that, for a known distribution, the bootstrapped result converges after sufficiently many iterations towards the expected value of the underlying distribution. As expectation is additive, $\mathbf{E}(X+Y) = \mathbf{E}(X) + \mathbf{E}(Y)$ for random variables $X$ and $Y$, it suffices to identify the distribution and the probability for patients in clusters matched by min-weighted DBSeM to be matched by exact PSM.
By assumption, equation~\eqref{eq:PSM_combination} does not hold for any two index sets, and we know from Proposition \ref{prop:PSM_calc} that patients with the same propensity score have the same covariate vectors. As we perform an exact $1$:$1$ matching in the PSM part of every bootstrap iteration, the number of patients matched by PSM for a cluster $C_{A,\,i}$ matched with cluster $C_{B,\,j}$ is $S_{A,\,i}$, as their propensity scores are equal. The probability for one patient in $C_{A,\,i}$ to be chosen for matching with a patient from $C_{B,\,j}$ during one bootstrapping iteration is identical for all patients in $C_{A,\,i}$, as we assumed that the selection choice of patients to be matched has the same probability for all patients. Thus we have a discrete uniform distribution over $C_{A,\,i}$ for the matching partner choice in PSM.
\noindent It follows that the expected value for cluster $C_{A,\,i}$ matched with $C_{B,\,j}$ calculates as
\begin{equation}
\mathbf{E}(C_{A,\,i}) = S_{A,\,i} \cdot \Big(\sum_{h=1}^{\vert C_{A,\,i}\vert}obs(x_{i,\,h})\Big)/\vert C_{A,\,i} \vert.
\end{equation}
Addition of expected values now proves the theorem's statement:
\begin{eqnarray}
\mathbf{E}(A) &=& \sum_{i = 1}^{k}\mathbf{E}(C_{A,\,i}) = \sum_{i = 1}^{k} S_{A,\,i} \cdot \Big(\sum_{h=1}^{\vert C_{A,\,i}\vert}obs(x_{i,\,h})\Big)/\vert C_{A,\,i} \vert\\
&=& \sum_{i = 1}^{k} S_{A,\,i}/\vert C_{A,\,i}\vert \sum_{h=1}^{\vert C_{A,\,i}\vert}obs(x_{i,\,h}) = \sum_{i = 1}^{k} w(C_{A,\,i})\sum_{h=1}^{\vert C_{A,\,i}\vert}obs(x_{i,\,h})\\
&=& R_A.
\end{eqnarray}
\end{proof}
\bigskip
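For a single pair of matched clusters, the convergence statement can also be checked numerically with a few lines of Python on invented toy outcomes: uniformly sampling $S_{A,\,i}$ of the $\vert C_{A,\,i}\vert$ outcomes many times approaches the min-weighted value $S_{A,\,i}\cdot\big(\sum_{h}obs(x_{i,\,h})\big)/\vert C_{A,\,i}\vert$.
\begin{verbatim}
import random

rng = random.Random(1)
obs_cluster_a = [1, 0, 0, 1, 0]   # hypothetical outcomes in C_{A,i}
s = 2                             # S_{A,i} = min(|C_{A,i}|, |C_{B,j}|)

samples = 100000
avg = sum(sum(rng.sample(obs_cluster_a, s)) for _ in range(samples)) / samples

print(avg)                                           # approx. 0.8
print(s * sum(obs_cluster_a) / len(obs_cluster_a))   # min-weighted value: 0.8
\end{verbatim}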
Table~\hyperref[tab:DBSeM]{$3$} shows the result for min-weighted DBSeM with our dataset from Tables~\hyperref[tab:random_runs]{$1$} and~\hyperref[tab:best_worst]{$2$}. The DBSeM result is close but not equal to the result obtained by uniformly bootstrapped PSM in Table~\hyperref[tab:best_worst]{$2$}. This is because even with bootstrapping
\begin{enumerate}
\item some information is lost during the matching (not all possible matches are used) and
\item some matchings are overrepresented, i.e., sampled more than once.
\end{enumerate}
Uniformly bootstrapped PSM will only achieve the exact same result as DBSeM if all permutations of the possible different matching samples are used exactly the same number of times (cf. Theorem~\ref{thm:DBSeM_convergence}). Since DBSeM has the data completeness property and PSM does not, the result in Table~\hyperref[tab:DBSeM]{$3$} represents the ground truth that PSM can only achieve by bootstrapping through all matching permutations. In general, there are $(\max\{a,\,b\})!$ such permutations, which makes computing PSM for all of them infeasible. Hence, DBSeM performs better in SM compared to exact PSM, as PSM would need a very large number of iterations to generate the same result with a bootstrapping approach.
\begin{table}[h!]
\centering
\begin{tabular}{l|ll|ll|l}\hline
$1{,}502$ matched clusters with & \multicolumn{2}{|c|}{SAVR} & \multicolumn{2}{|c|}{TF-AVI} &t-test \\
regards to all $19$ Euroscore II& \multicolumn{2}{|c|}{in-hospital death} & \multicolumn{2}{|c|}{ in-hospital death} & (2-tailed)\\
variables and without replacement & count & \% & count & \% & p-value \\\hline
Min-weighted DBSeM & $53.01$ & $ 3.5\%$& $32.32$ & $2.1\%$ & $0.02271$\\
\end{tabular}
\caption{Results for min-weighted DBSeM with the same dataset as in Tables $1$ and $2$}
\label{tab:DBSeM}
\end{table}
We conclude with some remarks for practitioners and comment on the scope of our contribution.
We have shown that PSM delivers non-reliable and non-reproducible results~\hyperref[C1]{($C1$)} and formally deduced four properties for optimal SM algorithms~\hyperref[C2]{($C2$)}. The proposed DBSeM procedure meets the four derived formal properties for optimal SM algorithms~\hyperref[C3]{($C3$)} and delivers as its result the average of all valid sets of matched pairs for the investigated dataset, while being computationally very efficient~\hyperref[C4]{($C4$)}.
The presented DBSeM algorithm can be used to support results generated through other methods, e.g., PSM or CEM. As the result given by DBSeM is deterministic for a given dataset, and therefore definite (see Theorem \ref{thm:DBSeM}), it is possible to use it for verification, as the exact matching should be part of every $\delta$-matching with $\delta>0$. If the observational results of the DBSeM matching and the chosen $\delta$-matching method coincide, then the quality of the calculated $\delta$-matching is more likely to be good in the sense of statistical matching criteria such as (EPBR) and (IB). If, on the other hand, the results contradict each other, the practitioner should consider collecting additional data.
Further work regarding the presented method is the extension of DBSeM such that $\delta$-matchings for $\delta>0$ can be constructed through a deterministic method as well.
\section{Introduction}
An affine system on a connected Lie group $G$ is a family
\begin{flalign*}
&&\dot{x}(t)=F^0(x(t))+\sum_{j=1}^m\omega_j(t)F^j(x(t)),&&
\end{flalign*}
of ordinary differential equations, where $\omega:=(\omega_{1},\ldots,\omega_{m})\in\UC$ is a piecewise constant function and $F^0, F^1, \ldots, F^m$ are affine vector fields.
The class of affine systems is in fact quite large, since it contains the classical linear and bilinear systems on the Euclidean space $\mathbb{R}^{d}$ and, more generally, the invariant, linear and bilinear systems on $G$ (see \cite{AyTi}, \cite{Elliot}, \cite{Sachkov} and \cite{Wonham}). Therefore, the dynamics involved here are much more complicated than those of the aforementioned systems.
In the present paper we exploit the intrinsic connection between affine and bilinear systems in order to obtain controllability results for affine systems. One example showing how strong this connection is was given for $G=\R^n$ by Jurdjevic and Sallet in \cite{Jurd}. There the authors showed that an affine system is controllable as soon as it has no singularities and its associated bilinear system is controllable in $\R^n\setminus\{0\}$. However, any other class of Lie groups contains nontrivial proper subsets that are naturally invariant by automorphisms, implying that controllability of a bilinear system on $G\setminus\{e\}$ can only be expected when $G$ is isomorphic to $\R^n$ (see Theorem \ref{contbilinear} ahead). Therefore, generalizations of the result of Jurdjevic and Sallet to more general Lie groups are not possible.
The above forces us to look at affine systems in a more geometric way by using the above invariant subsets, as done in \cite{ADS} and \cite{DS} for linear systems. In order to do that, we first prove that there is an intrinsic connection between the solutions of any affine system and those of its associated bilinear system. More precisely, the solutions of an affine system are given by left translations of the solutions of the associated bilinear system. Using this formula we are able to generalize some results from \cite{DS}, allowing us to prove controllability results for affine systems on compact and solvable Lie groups under the assumption of local controllability around the identity.
This paper is structured as follows. In Section 2 we introduce the basic concepts about control systems and affine vector fields. In Section 3 we analyze bilinear systems on Lie groups: we give an explicit formula for the solutions of such systems and show that controllability of bilinear systems is only to be expected on Euclidean spaces. Section 4 is devoted to the understanding of affine systems. We show that the solution of an affine system is given by left translation of the solution of its associated bilinear system. This expression allows us to prove some results concerning the controllability of affine systems on compact and solvable Lie groups.
\section{Preliminaries}
In this section, we introduce basic concepts that will be needed through the paper.
\subsection{Notations}
By a smooth manifold we understand a finite-dimensional, connected, second-countable, Hausdorff manifold endowed with a $\CC^{\infty}$-differentiable structure. If $f:M\rightarrow N$ is a differentiable map between smooth manifolds, we write $(df)_x:T_xM\rightarrow T_{f(x)}N$ for its derivative at $x\in M$, where $T_xM$ is the tangent space at $x\in M$ and $T_{f(x)}N$ the tangent space at $f(x)\in N$. When we do not need to specify the point $x\in M$ we simply write $f_*$ for the derivative of $f$.
A Lie group $G$ will be a group endowed with the structure of a smooth manifold. If $G$ is a Lie group, we write $\mathrm{Aut}(G)$ for the group of automorphisms of $G$ and $\mathfrak{X}(G)$ for the set of $\CC^{\infty}$ vector fields on $G$. By $e$ we denote the identity element of $G$ and by $i$ the inversion of $G$, which maps $g\in G$ to its inverse $g^{-1}\in G$. For any given $g\in G$ we denote by $L_g, R_g$ and $C_g$ the left translation, right translation and the conjugation by $g$, respectively. The image of $X\in\fg$ under the exponential map $\exp:\fg\rightarrow G$ is denoted by $\exp(X)$ or by $\rme^{X}$. The Lie algebra $\fg$ of $G$ will always be identified with the set of right invariant vector fields on $G$.
\subsection{Control systems}
A control system on a smooth manifold $M$ is given by the family
\begin{flalign*}
&&\dot{x}(t)=f^0(x(t))+\sum_{j=1}^m\omega_j(t)f^j(x(t)), \;\;\omega=(\omega_1, \ldots, \omega_m)\in\UC, &&\hspace{-1cm}\left(\Sigma \right)
\end{flalign*}
of ordinary differential equations. Here $f^0, f^1,\ldots, f^m$ are smooth vector fields on $M$. $f^0$ is called the
{\it drift vector field} and $f^1, \ldots, f^m$ the {\it control vector fields}. The set of {\it admissible control functions} $\UC$ is given by the set of piecewise constant functions $\omega:\R\rightarrow\R^m$.
For each $\omega\in\UC$, the corresponding differential equation $\Sigma$ has a unique solution $\varphi(t, x, \omega)$ with initial value $x = \varphi(0, x, \omega)$. The systems considered in this paper all have globally defined solutions, which give rise to a map
$$\varphi : \R\times M \times\UC\rightarrow M, \;\;(t, x, \omega)\mapsto \varphi(t, x, \omega),$$
called the {\it transition map} of the system. We also use the notation $\varphi_{t, \omega}$ for the map $\varphi_{t, \omega}: M \rightarrow M$ given by $x\mapsto \varphi_{t, \omega}(x):=\varphi(t, x, \omega)$. Since $f^0, f^1, \ldots, f^m$ are smooth, the map $\varphi_{t, \omega}$ is also smooth. The transition map $\varphi$ is a cocycle over the shift flow
$$\theta: \R \times \UC \rightarrow \UC, \;\;(t, \omega)\mapsto \theta_t\omega = \omega(\cdot + t),$$
i.e., it satisfies $\varphi(t+s, x, \omega) = \varphi(s, \varphi(t, x, \omega), \theta_t \omega)$ for all $t, s\in \R, x \in M$ and $\omega\in \UC$. Moreover, it holds that $\varphi_{t, \omega}^{-1}=\varphi_{-t, \theta_{t}\omega}$ and, for all $t_1, t_2>0$ and $\omega_1, \omega_2\in\UC$
$$\varphi(t_1, \varphi(t_2, x, \omega_2), \omega_1) =\varphi(t_1+t_2, x, \omega), \;\;\;\mbox{ where }\;\;\;\omega(\tau) =\left\{\begin{array}{ll}
\omega_2(\tau) & \mbox{ for }\tau \in[0, t_2],\\
\omega_1(\tau - t_2) & \mbox{ for }\tau\in [t_2, t_1 + t_2].
\end{array}\right.$$
For $x\in M$ and $\tau >0$ we write
$$
\mathcal{R}_{\leq\tau}(x) :=\left \{ \varphi(t ,x ,\omega);\; t\in[0, \tau] \mbox{ and } \omega
\in \mathcal{U}\right \} \;\;\;\;\mbox{ and }\;\;\;\;\mathcal{R}(x):=\bigcup_{\tau
>0}\mathcal{R}_{\leq\tau}(x).
$$
for the {\it set of points reachable from $x\in M$ up to time $\tau$} and the {\it reachable set from $x$}, respectively. Analogously, we define the {\it set of points controllable to $x$ within time $\tau$} and the {\it controllable set of $x$} respectively by
$$
\mathcal{R}^*_{\leq\tau}(x):=\left\{y\in M; \exists t\in[0, \tau], \omega\in \mathcal{U} \mbox{ with } \varphi(t, y ,\omega)=x\right\} \;\;\;\mbox{ and }\;\;\;\mathcal{R}^*(x):=\bigcup_{\tau >0}\mathcal{R}^*_{\leq\tau}(x).
$$
The system $\Sigma$ is said to be {\it locally controllable at } $x$ if $x\in\inner\RC(x)$. In the analytic case, it follows from Theorem 3.1 of \cite{Suss0} that $\Sigma$ is locally controllable at $x$ if $x\in\inner\RC(x)\cap\inner\RC^*(x)$. In particular, that is the case for the systems on Lie groups under consideration in this paper. The system $\Sigma$ is said to be {\it controllable in $X\subset M$} if for all $x, y\in X$ there exists $\tau>0$ and $\omega\in\UC$ such that $y=\varphi(\tau, x, \omega). $ Equivalently, the system is controllable in $X\subset M$ if $X\subset\RC(x)\cap\RC^*(x)$ for some (and hence for all) $x\in X$.
\begin{remark}
It is worth mentioning that the problem of characterizing local controllability was studied by many authors (see for instance Hermes \cite{HH1}, \cite{HH2}, Sussmann \cite{Suss1}, \cite{Suss2}, \cite{Suss3}, Bianchini and Stefani \cite{BiGS}). Necessary and sufficient conditions for local controllability are expressible in terms of $X\in \LC$, where $\LC=\LC(f^0, f^1, \ldots, f^m)$ denotes the smallest Lie algebra of vector fields on $M$ containing $f^0, f^1,\ldots, f^m$. Indeed, all the papers above give sufficient conditions for local reachability.
\end{remark}
\begin{remark}
The choice of the set of admissible control functions being piecewise constant is not restrictive. In fact, most of the usual choices of admissible functions are such that the solutions of $\Sigma$ can be approximated by using piecewise constant ones.
\end{remark}
\subsection{Affine and linear vector fields}
In this section we define affine and linear vector fields and state their main properties. For the proof of the assertions in this section the reader can consult \cite{AyTi}, \cite{Jouan1} and \cite{Jouan2}.
Let $G$ be a connected Lie group with Lie algebra $\mathfrak{g}$. Following \cite{AyTi}, the {\it normalizer} of $\mathfrak{g}$ is the set
$$\eta:=\{F\in \ \mathfrak{X}(G);\, \mbox{ for all }Y\in \mathfrak{g},\; \;[F,Y]\in \mathfrak{g}\}.$$
A vector field $F$ on $G$ is said to be {\it affine} if it belongs to $\eta$. If $F\in\eta$ and $F(e)=0$ the vector field $F$ is said to be {\it linear}. Any affine vector field $F$ is uniquely decomposed as $F=\mathcal{X}+Y$ where $\mathcal{X}$ is linear and $Y$ is right invariant. Moreover, any $F\in\eta$ is complete, any linear vector field $\XC$ is an infinitesimal automorphism, that is, its flow is a 1-parameter subgroup of $\mathrm{Aut}(G)$, and if $\{\alpha_t\}_{t\in\R}$ and $\{\psi_t\}_{t\in\R}$ stand, respectively, for the flows of $F$ and $\XC$, where $F=\mathcal{X}+Y$, we have that
\begin{equation}
\label{expressionslinear}
\alpha_{t}(g)=L_{\alpha_{t}(e)}(\psi_{t}(g)), \;\;\mbox{ for all }\;\;g\in G.
\end{equation}
\bigskip
The next technical lemma shows that expression (\ref{expressionslinear}) generalizes to finite compositions of flows of affine vector fields. This result will be needed later.
\begin{lemma}
\label{compositionaffine} Let $\{F_{i}\}_{i\in \mathbb{N}}$ be a family of affine vector fields with decomposition $F_{i}=\mathcal{X}_{i}+Y_{i}$, where $\mathcal{X}_{i}$ is linear and $Y_{i}$ is right-invariant, for any $i\in \mathbb{N}$. Denote by $\{ \alpha_{t}^{i}\}_{t\in \mathbb{R}}$ and $\{\psi_{t}^{i}\}_{t\in \mathbb{R}}$ the flows of $F_{i}$ and $\mathcal{X}_{i}$ respectively. For any $i_{1},\ldots,i_{n}\in \mathbb{N}$ and any real numbers $\tau_{1},\ldots,\tau_{n}$, it holds that
\begin{equation}
\label{composition}
\alpha_{\tau_{n}}^{i_{n}}\circ \cdots \circ \alpha_{\tau_{1}}^{i_{1}}
=L_{\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) }\circ \psi_{\tau_{n}}^{i_{n}}\circ \cdots \circ
\psi_{\tau_{1}}^{i_{1}}.
\end{equation}
\end{lemma}
\begin{proof}
Our proof is by induction. For $n=1$ the equation coincides with
(\ref{expressionslinear}) and the result holds. Let us consider $i_{1},\ldots,i_{n+1}\in \mathbb{N}$, $\tau_{1},\ldots,\tau_{n+1}$ and assume by the induction hypothesis that
\[
\alpha_{\tau_{n}}^{i_{n}}\circ \cdots \circ \alpha_{\tau_{1}}^{i_{1}}
=L_{\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) }\circ \psi_{\tau_{n}}^{i_{n}}\circ \cdots \circ
\psi_{\tau_{1}}^{i_{1}}
\]
holds. Hence,
\[
\alpha_{\tau_{n+1}}^{i_{n+1}}\circ \alpha_{\tau_{n}}^{i_{n}}\circ \cdots
\circ \alpha_{\tau_{1}}^{i_{1}}=\alpha_{\tau_{n+1}}^{i_{n+1}}\circ
L_{\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) }\circ \psi_{\tau_{n}}^{i_{n}}\circ \cdots \circ
\psi_{\tau_{1}}^{i_{1}}
\]
\[
=L_{\alpha_{\tau_{n+1}}^{i_{n+1}}(e)}\circ \psi_{\tau_{n+1}}^{i_{n+1}}\circ
L_{\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) }\circ \psi_{\tau_{n}}^{i_{n}}\circ \cdots \circ
\psi_{\tau_{1}}^{i_{1}}.
\]
However, for any $f\in \mathrm{Aut}(G)$ and $g\in G$ it follows that $f\circ
L_{g}=L_{f(g)}\circ f$. So, we get
\[
L_{\alpha_{\tau_{n+1}}^{i_{n+1}}(e)}\circ \psi_{\tau_{n+1}}^{i_{n+1}}\circ
L_{\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) }=L_{\alpha_{\tau_{n+1}}^{i_{n+1}}(e)}\circ
L_{\psi_{\tau_{n+1}}^{i_{n+1}}\left( \alpha_{\tau_{n}}^{i_{n}}\left(
\cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) \right)
}\circ \psi_{\tau_{n+1}}^{i_{n+1}}
\]
\[
=L_{\alpha_{\tau_{n+1}}^{i_{n+1}}(e)\cdot \psi_{\tau_{n+1}}^{i_{n+1}}\left(
\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) \right) }\circ \psi_{\tau_{n+1}}^{i_{n+1}}
=L_{\alpha_{\tau_{n+1}}^{i_{n+1}}\left( \alpha_{\tau_{n}}^{i_{n}}\left(
\cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) \right)
}\circ \psi_{\tau_{n+1}}^{i_{n+1}}
\]
which implies that
\[
\alpha_{\tau_{n+1}}^{i_{n+1}}\circ \alpha_{\tau_{n}}^{i_{n}}\circ \cdots
\circ \alpha_{\tau_{1}}^{i_{1}}=L_{\alpha_{\tau_{n+1}}^{i_{n+1}}\left(
\alpha_{\tau_{n}}^{i_{n}}\left( \cdots \left( \alpha_{\tau_{1}}^{i_{1}}(e)\right) \cdots \right) \right) }\circ \psi_{\tau_{n+1}}^{i_{n+1}}
\circ \psi_{\tau_{n}}^{i_{n}}\circ \cdots \circ \psi_{\tau_{1}}^{i_{1}},
\]
ending the proof.
\end{proof}
We finish this section by commenting on the special connection between $\fg$-derivations and linear vector fields. Let $\XC$ be a linear vector field on $G$. Associated to $\XC$ there is a $\mathfrak{g}$-derivation $\mathcal{D}:\fg\rightarrow\fg$ given by
$$\mathcal{D}Y=-[\mathcal{X},Y],\mbox{ for all }Y\in
\mathfrak{g}.$$
The flow of $\XC$ is related to $\DC$ by
\begin{equation}
\label{flow}
(d\psi_{t})_{e}=\mathrm{e}^{t\mathcal{D}}\; \; \; \mbox{ and consequently }\; \; \;
\psi_{t}(\exp Y)=\exp(\mathrm{e}^{t\mathcal{D}}Y), \;\;\;\;\;\mbox{ for any } t\in\R, Y\in\fg.
\end{equation}
A special case is when the derivation $\DC$ is inner, that is, there is $X\in\fg$ such that $\DC=\ad(X)$. Following \cite{Jouan2}, when this happens the linear vector field decomposes as $\mathcal{X}=X+i_*X$ and its flow satisfies $\psi_t=C_{\rme^{tX}}$. In particular, when $G$ is a semisimple Lie group any linear vector field is of this form, since any $\fg$-derivation is inner.
For compact Lie groups, Theorem 4.29 of \cite{Knapp} implies that $G=G_{\mathrm{ss}}Z(G)_0$, where $Z(G)_0$ is the connected component of the center of $G$ and $G_{\mathrm{ss}}$ is a semisimple connected subgroup of $G$ with Lie algebra $[\fg, \fg]$. Since these subgroups are invariant by automorphisms, the flow $\{\psi_{t}\}_{t\in\R}$ of any linear vector field $\XC$ restricts to automorphisms of both $G_{\mathrm{ss}}$ and $Z(G)_0$. Moreover, since $Z(G)_0$ is a torus, $\mathrm{Aut}(Z(G)_0)$ is discrete, which by continuity implies that $\psi_t|_{Z(G)_0}=\operatorname{id}_{Z(G)_0}$. On the other hand, since $G_{\mathrm{ss}}$ is semisimple, we have that $\psi_t|_{G_{\mathrm{ss}}}=C_{\rme^{tX}}$ for some $X\in [\fg, \fg]$. Therefore, if $d$ is a bi-invariant metric on $G$, then $\psi_t$ is an isometry of $G$ for any $t\in\R$.
We close this section with some examples of affine and linear vector fields.
\begin{example}
Let $G$ be the connected component of the identity of $\mathrm{Gl}(n, \R)$, the group of invertible $n\times n$ matrices. Its Lie algebra $\fg$ is given by $\mathfrak{gl}(n, \R)$, the set of all $n\times n$ matrices.
For any $A\in\fg$, the vector field $\XC_A(g):=Ag-gA, \;\;g\in G$, is a linear vector field. Its associated flow is given by $\varphi_t(g)=C_{\rme^{tA}}(g)$, showing that the associated derivation is inner and given by $\DC=-\ad(A)$. If $B$ is another element in $\fg$ and we consider the left invariant vector field $B(g)=gB$, we have that $F=\XC_A+B$ is an affine vector field. Moreover, it holds that
$$F(g)=\XC_A(g)+B(g)=Ag-gA+gB=Ag-gC, \;\;\;\mbox{ where }\;\;C=A-B.$$
Conversely, any affine vector field $F$ whose associated linear vector field has inner derivation is of the form $F(g)=Ag-gB$ for matrices $A, B\in\fg$.
\end{example}
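The computation in the previous example can also be checked numerically. The following sketch (in Python, assuming the NumPy and SciPy libraries are available; the function names are ours and purely illustrative) verifies, for randomly chosen $A$ and $g$, that the derivative at $t=0$ of $t\mapsto C_{\rme^{tA}}(g)$ equals $Ag-gA$, and that each map $C_{\rme^{tA}}$ is a homomorphism of $G$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # an element of gl(3, R)
g = expm(rng.standard_normal((3, 3)))  # an element of Gl(3, R)^0
h = expm(rng.standard_normal((3, 3)))

def phi(t, x):
    # Flow of X_A: conjugation of x by e^{tA}.
    return expm(t * A) @ x @ expm(-t * A)

# d/dt phi_t(g) at t = 0 should equal X_A(g) = A g - g A.
eps = 1e-6
finite_diff = (phi(eps, g) - phi(-eps, g)) / (2 * eps)
print(np.allclose(finite_diff, A @ g - g @ A, atol=1e-4))       # True

# Each phi_t is an automorphism: phi_t(g h) = phi_t(g) phi_t(h).
print(np.allclose(phi(0.7, g @ h), phi(0.7, g) @ phi(0.7, h)))  # True
\end{verbatim}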
Following Theorem 2.2 of \cite{AyTi}, for simply connected Lie groups any linear vector field is determined by its derivation. Therefore, one cannot expect all affine vector fields in the previous example to be of the form $F(g)=Ag-gB$. The next example exhibits a linear vector field whose associated derivation is not inner.
\begin{example}
Let
$$G=\left\{\left(\begin{array}{ccc}
1 & a & b\\ 0 & 1 & c\\ 0& 0& 1
\end{array}\right), \;(a, b, c)\in\R^3\right\}$$
be the Heisenberg group. Its Lie algebra $\fg$ is generated by
$$X=\left(\begin{array}{ccc}
0 & 1 & 0\\ 0 & 0 & 0\\ 0& 0& 0
\end{array}\right), \;\;\;\;Y=\left(\begin{array}{ccc}
0 & 0 & 0\\ 0 & 0 & 1\\ 0& 0& 0
\end{array}\right)\;\;\mbox{ and }\;\;Z=\left(\begin{array}{ccc}
0 & 0 & 1\\ 0 & 0 & 0\\ 0& 0& 0
\end{array}\right),$$
where $[X, Y]=Z$ and $[X, Z]=[Y, Z]=0$. By denoting the elements of $G$ and $\fg$ only by their coordinates in the basis $\{X, Y, Z\}$, we have that the vector field $\XC(a, b, c)=(a, b, 2c)$ is linear. In fact, a simple calculation shows that its flow is given by $\varphi_t(a, b, c)=(a\rme^t, b\rme^t, c\rme^{2t})$ and also that
$$\varphi_t((a_1, b_1, c_1)(a_2, b_2, c_2))=\varphi_t(a_1+a_2, b_1+b_2, c_1+c_2+a_1b_2)$$
$$=((a_1+a_2)\rme^t, (b_1+b_2)\rme^t, (c_1+c_2+a_1b_2)\rme^{2t})=(a_1\rme^t, b_1\rme^t, c_1\rme^{2t})(a_2\rme^t, b_2\rme^t, c_2\rme^{2t})$$
$$=\varphi_t(a_1, b_1, c_1)\varphi_t(a_2, b_2, c_2)$$
showing that $\{\varphi_t\}_{t\in\R}$ is a one-parameter group of automorphisms and hence that $\XC$ is linear.
The derivation associated with $\XC$ on the above basis is given by $\DC(a, b, c)=(a, b, 2c)$ and is therefore not inner, since $\ad(W)Z=0$ for any $W\in\fg$ while $\DC Z=\DC(0, 0, 1)=(0, 0, 2)$.
\end{example}
\section{Bilinear systems on Lie groups}
Bilinear systems on Euclidean spaces are well studied in the literature (see for instance \cite{CK} and
\cite{Elliot}). In this section we extend the definition of such systems to connected Lie groups and establish their main properties. In particular, we show that controllability of bilinear systems on Lie groups is quite rare and can only be expected on Euclidean spaces.
A \textbf{bilinear} system on a Lie group $G$ is given by
\begin{flalign*}
&&\dot{g}(t)=\XC^0(g(t))+\sum_{j=1}^m\omega_j(t)\XC^j(g(t)), &&\hspace{-1cm}\left(\Sigma_{B}\right)
\end{flalign*}
where $\mathcal{X}^{0},\mathcal{X}^{1},\ldots,\mathcal{X}^{m} $ are linear vector fields on $G$. The transition map of $\Sigma_B$ will be denoted by $\varphi^B$ and the diffeomorphism $g\in G\mapsto \varphi^{B}(t, g, \omega)$ by $\varphi^B_{t, \omega}$, where $t\in \R$ and $\omega\in\UC$. Moreover, we denote by $\mathcal{D}^{j}$ the $\mathfrak{g}$-derivation associated with the linear vector field $\mathcal{X}^{j}$, for $j=0, \ldots, m$.
Our intention in what follows is to obtain an expression for the solutions of $\Sigma_B$. In order to do that we consider, for any
$u=(u_{1},\ldots,u_{m})\in \mathbb{R}^{m}$, the linear vector field
$$\mathcal{X}_{u}=\mathcal{X}^{0}+\sum_{j=1}^{m}u_{j}\mathcal{X}^{j} \;\;\;\mbox{ with associated flow }\;\;\;\{ \psi_{t}^{u}\}_{t\in \mathbb{R}}\subset\mathrm{Aut}(G).$$
It is straightforward to see that the associated derivation $\DC_u$ is given by $\mathcal{D}_{u}=\mathcal{D}^{0}+\sum_{j=1}^{m}u_{j}\mathcal{D}^{j}$.
The next result gives an expression for the solutions of $\Sigma_{B}$ in terms of concatenation of linear flows.
\begin{theorem}
\label{bilinear}
Let $\Sigma_{B}$ be a bilinear control system on $G$ and consider $\omega\in\UC$. For a given $T>0$ write
$$\omega(t)=\omega_i \;\;\mbox{ for }\;\;t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right],$$
where $t_{1},\ldots,t_{n}>t_{0}=0$, $T=\sum_{i=1}^nt_i$ and $\omega_{1},\ldots,\omega_{n}\in \mathbb{R}^{m}$. Then,
\begin{equation}
\label{solutionbilinear}
\varphi^{B}(t, g, \omega)=\psi_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\psi_{t_{i-1}}^{\omega_{i-1}}(\cdots(\psi_{t_{1}}^{\omega_{1}}(g))\cdots)),\; \; \;t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right].
\end{equation}
Moreover, the solutions of $\Sigma_{B}$ are complete and $\varphi_{t,\omega}^{B}\in \mathrm{Aut}(G)$ for any $t\in \mathbb{R}$ and $\omega \in \mathcal{U}$.
\end{theorem}
\begin{proof}
Let us consider $\alpha(t)$ as the curve defined by the right hand side of
equation (\ref{solutionbilinear}), that is,
\[
\alpha(t):=\psi_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\psi_{t_{i-1}}^{\omega_{i-1}}(\cdots(\psi_{t_{1}}^{\omega_{1}}(g))\cdots)),\; \;
\;t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right] .
\]
We know that $\alpha(0)=g$ and that $\alpha$ is continuous, since it is given by
concatenations of linear flows. By the very definition of flow,
\[
\frac{d}{ds}\psi_{s}^{\omega_{i}}(h)=\mathcal{X}_{\omega_{i}}(\psi_{s}^{\omega_{i}}(h)),\; \; \mbox{for any }h\in G,\text{ }s\in \mathbb{R}.
\]
By considering
\[
h=\psi_{-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\psi_{t_{i-1}}^{\omega_{i-1}}(\cdots(\psi_{t_{1}}^{\omega_{1}}(g))\cdots))
\]
we get
\[
\alpha^{\prime}(t)=\frac{d}{dt}\psi_{t}^{\omega_{i}}(h)=\mathcal{X}_{\omega_{i}}\left( \psi_{t}^{\omega_{i}}(h)\right) =\mathcal{X}_{\omega(t)}\left( \alpha(t)\right), \;\;t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right],
\]
which shows that $\alpha(t)$ is in fact the solution of $\Sigma_{B}$
associated with the control $\omega$ and starting at $g\in G$. From the
uniqueness of the solution we get $\alpha(t)=\varphi^{B}(t, g, \omega)$, proving
the equality in equation (\ref{solutionbilinear}).
The assertion about the completeness of the $\Sigma_{B}$-solutions follows
directly from the relation $\varphi_{-t,\omega}^{B}=\left( \varphi_{t,\Theta_{-t}\omega}^{B}\right) ^{-1}.$ Finally, $\varphi_{t, \omega}^{B}\in \mathrm{Aut}(G)$ for any $t\in\R$ and $\omega\in\UC$, since it is the concatenation of $G$-automorphisms.
\end{proof}
\begin{remark}
It is not hard to show that a similar expression is also possible for the negative time solutions.
\end{remark}
Using the relation between a linear flow and its associated derivation, we are able to give an expression for the differential of the
solutions of $\Sigma_{B}$ in terms of exponentials of matrices, as follows:
\begin{corollary}
\label{solutionbilinearexp} In the conditions of Theorem \ref{bilinear}, for any
$X\in \mathfrak{g}$ and $t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right]$ it holds that
\[
\varphi_{t,\omega}^{B}(\exp(X))=\exp \left( \mathrm{e}^{\left(t-\sum_{j=1}^{i-1}t_{j}\right)\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X\right),
\]
where $\mathcal{D}_{\omega_{i}}$ is the $\mathfrak{g}$-derivation induced by
the linear vector field $\mathcal{X}_{\omega_{i}}$, for $i=1, \ldots, n$. Moreover,
$$(d\varphi_{t,\omega}^{B})_{e}X=\mathrm{e}^{\left(t-\sum_{j=1}^{i-1}t_{j}\right)\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X.$$
\end{corollary}
\begin{proof}
The first equation follows directly from equation (\ref{solutionbilinear}) in Theorem \ref{bilinear} and from the commutation relation given in (\ref{flow}) applied to $\psi_{s}^{\omega_{i}}$ and $\DC_{\omega_i}$.
Therefore, for $t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right]
$, we obtain
\[
(d\varphi_{t,\omega}^{B})_{e}X=\frac{d}{ds}\Bigl|_{s=0}\exp \left(
\mathrm{e}^{(t-\sum_{j=1}^{i-1}t_{j})\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}sX\right)
\]
\[
=\frac{d}{ds}\Bigl|_{s=0}\exp \,s\left( \mathrm{e}^{(t-\sum_{j=1}^{i-1}t_{j})\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X\right)
\]
\[
=\mathrm{e}^{(t-\sum_{j=1}^{i-1}t_{j})\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X
\]
proving the second equation and concluding the proof.
\end{proof}
\bigskip
Before stating and proving the main result of this section let us consider the special case of bilinear systems whose associated derivations are inner. Let $\Sigma_B$ be a bilinear system on $G$ and assume that, for any $j=0,1,...,m$, there is $Y^{j}\in \mathfrak{g}$ such that the $\fg$-derivation $\DC^j$ associated with $\XC^j$ is given by $\mathcal{D}^{j}=\ad(Y^j)$. As discussed at the end of Section 2, when this is the case we get that $\mathcal{X}^{j}=Y^{j}+i_{\ast}Y^{j}$ and consequently the bilinear system $\Sigma_{B}$ can be decomposed as
\[
\mathcal{X}_{\omega(t)}(g(t))=Y_{\omega(t)}(g(t))+i_{\ast}\left(
Y_{\omega(t)}(g(t))\right) .
\]
where $Y_{\omega(t)}$ determines the right-invariant control system
on $G$ defined by
\begin{flalign*}
&&\dot{g}(t)=Y_{\omega(t)}(g(t))=Y^{0}(g(t))+\sum_{j=1}^{m}\omega_{j}(t)Y^{j}(g(t)), &&\hspace{-1cm}\left(\Sigma_{I}\right).
\end{flalign*}
Since for any $u=(u_{1},\ldots,u_{m})$ the flow $\left\{\psi_{t}^{u}\right\}_{t\in \mathbb{R}}$ of $\mathcal{X}_{u}$ is given by $\psi
^{u}_{t}(g)=C_{\rme^{tY_u}}(g)$, for $Y_{u}=Y^{0}+\sum_{j=1}^{m}u_{j}Y^{j}$, we get that
$$\varphi_{t,\omega}^{B}=C_{\varphi_{t,\omega}^{I}(e)} \;\;\mbox{ for any }t\in\R, \omega\in\UC$$
where $\varphi_{t,\omega}^{I}(e)=\varphi^I(t, e, \omega)$ is the solution of $\Sigma_I$ starting at $e\in G$.
\bigskip
Now we are able to state and prove the main result concerning the controllability of bilinear systems.
\begin{theorem}
\label{contbilinear}
Let $\Sigma_{B}$ be a bilinear system on $G$. If $G$ is not a simply connected Abelian Lie group, then $\Sigma_{B}$ cannot be controllable on $G\setminus\{e\}$.
\end{theorem}
\begin{proof}
Let us divide the proof in cases:
\begin{itemize}
\item[$1.$] {\it $G$ is an Abelian compact Lie group:} As commented at the end of Section 2, the flow of any linear vector field on such a group is trivial. Since the solutions of $\Sigma_B$ are given by concatenations of flows of linear vector fields, we must have that $\varphi_{t,\omega}^{B}=\operatorname{id}_G$ for any $t\in \mathbb{R}$ and $\omega \in \mathcal{U}$. Therefore $\Sigma_{B}$ cannot be controllable in $G\setminus\{e\}$.
\item[$2.$] {\it $G$ is a solvable Lie group:} For this case, the
derivative subgroup $G^{\prime}\subset G$ is a nontrivial proper subgroup of $G$.
Since $G^{\prime}$ is invariant by automorphisms and $\varphi^B_{t, \omega}\in\mathrm{Aut}(G)$ for any $t\in\R$ and $\omega\in\UC$ we must have that
$\varphi^B_{t, \omega}(G^{\prime})=G^{\prime}$ and therefore $\Sigma_{B}$ cannot be controllable in $G\setminus\{e\}$.
\item[$3.$] {\it $G$ is a semisimple Lie group:} Since derivations of semisimple Lie algebras are always inner, we have by the previous discussion that
$$\varphi_{t,\omega}^{B}=C_{\varphi_{t,\omega}^{I}(e)},\; \; \mbox{ for
all }t\in \mathbb{R},\omega \in \mathcal{U}.$$
Therefore, if we prove that conjugation does not act transitively on $G\setminus\{e\}$,
then the bilinear system $\Sigma_{B}$ cannot be controllable in $G\setminus\{e\}$. We have then two
possibilities:
\subitem3.1 {\it $G$ is a compact semisimple Lie group:} In this
case, $G$ admits a bi-invariant metric. In particular, any sphere centered at
$e\in G$ is invariant by conjugation, showing that the conjugation cannot be transitive.
\subitem3.2. {\it $G$ is a noncompact semisimple Lie group:} In this
situation, there exist $g, h\in G$ such that $\Ad(g)$ is
orthogonal and $\Ad(h)$ is symmetric for some inner product on
$\mathfrak{g}$ (see Chapter VI of \cite{Knapp}). Therefore, $\Ad(g)$ and
$\Ad(h)$ cannot be conjugate, which implies that
conjugation cannot be transitive on $G\setminus\{e\}$.
\item[$4.$] {\it $G$ is an arbitrary Lie group:} If the solvable radical $R$ of $G$ is nontrivial, the system cannot be controllable in $G\setminus\{e\}$ since $R$ is invariant by automorphisms. If $R=\{e\}$ the group is semisimple and such case was considered above.
\end{itemize}
\end{proof}
The previous theorem shows that controllability of bilinear systems on connected Lie groups
can only be expected for the classical bilinear systems on $\mathbb{R}^{n}$. Actually, since in this particular case the group and the
algebra can be identified, and the normalizer coincides with the product between $\mathbb{R}^{n}$
and the Lie algebra $\mathfrak{gl}(n,\mathbb{R})$, any linear vector field $\XC=\XC^{\DC}$ on $\mathbb{R}^{n}$ can be directly associated with its linear map $\DC$. Thus, we obtain the classical bilinear system
\[
\dot{x}(t)=\mathcal{D}^{0}(x(t))+\sum_{j=1}^{m}\omega_{j}(t)\mathcal{D}^{j}(x(t)),\text{ }\omega \in \mathcal{U}.
\]
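As a concrete illustration of this classical case, the following sketch (in Python, assuming NumPy and SciPy; the matrices and control values are arbitrary choices of ours) computes the endpoint of a trajectory under a piecewise constant control by concatenating matrix exponentials, in the spirit of Corollary~\ref{solutionbilinearexp}, and compares it with the output of a general-purpose ODE solver.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Classical bilinear system dx/dt = (D0 + w(t) D1) x on R^2 (m = 1).
D0 = np.array([[0.0, -1.0], [1.0, 0.0]])   # drift matrix D^0
D1 = np.array([[0.1,  0.0], [0.0, -0.2]])  # controlled matrix D^1
x0 = np.array([1.0, 0.0])

# Piecewise constant control: value w_i held during t_i units of time.
durations = [0.5, 1.0, 0.7]
values = [1.0, -2.0, 0.5]

# Endpoint via concatenated matrix exponentials.
x_exp = x0.copy()
for ti, wi in zip(durations, values):
    x_exp = expm(ti * (D0 + wi * D1)) @ x_exp

# Endpoint via a generic ODE solver, one constant-control piece at a time.
x_num = x0.copy()
for ti, wi in zip(durations, values):
    Ai = D0 + wi * D1
    sol = solve_ivp(lambda t, y, Ai=Ai: Ai @ y, (0.0, ti), x_num,
                    rtol=1e-10, atol=1e-12)
    x_num = sol.y[:, -1]

print(np.allclose(x_exp, x_num, atol=1e-6))   # True
\end{verbatim}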
On the other hand, the class of bilinear systems plays a relevant role in the controllability property of affine systems as we will see in the forthcoming section.
\section{Affine systems on Lie groups}
The present section is devoted to the analysis of the general class of affine systems on Lie groups. As a matter of fact, we show
that there exists an intrinsic relation between the solutions of an affine system and those of its associated bilinear system. This relationship allows us to obtain
some preliminary controllability properties for the class of affine systems.
An {\it affine} system on a Lie group $G$ is determined by the family of ordinary differential equations
\begin{flalign*}
&&\dot{g}(t)=F^0(g(t))+\sum_{j=1}^m\omega_j(t)F^j(g(t)), &&\hspace{-1cm}\left(\Sigma_A\right).
\end{flalign*}
Here, $F^{0},F^{1},\ldots,F^{m}\in \eta$ are affine vector fields on $G$. We denote the transition map of $\Sigma_A$ by $\varphi^A$ and, for any $\omega \in \mathcal{U}$ and $t\in \R$, we denote by $\varphi_{t, \omega}^A$ the diffeomorphism $g\in G\mapsto \varphi^A(t, g, \omega)\in G$.
Associated with any affine system $\Sigma_A$ there is a bilinear system defined as follows: for any $j=0, 1, \ldots, m$ let us consider the decomposition $F^j=\XC^j+Y^j$ with $\XC^j$ linear and $Y^j$ right-invariant. We say that the bilinear system $\Sigma_B$ defined by the linear vector fields $\XC^0, \XC^1, \ldots, \XC^m$ is the {\it bilinear system induced by} $\Sigma_A$.
The next result gives an expression for the solutions of an affine system $\Sigma_A$ on $G$ and shows that they are intrinsically connected with the solutions of the bilinear system $\Sigma_B$ induced by $\Sigma_A$.
\begin{theorem}
\label{affine} Let $\omega \in \mathcal{U}$ be a piecewise
constant control function and consider $t_{1},\ldots,t_{n}>t_{0}=0$ and
$\omega_{1},\ldots,\omega_{n}\in \mathbb{R}^{m}$ such that $\omega
(t)=\omega_{i}$ for $t\in\left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right]
$. If, for $u\in\mathbb{R}^{m}$, $\{ \alpha_{t}^{u}\}_{t\in \mathbb{R}}$ stands for the flow of the
affine vector field $F_{u}:=F^{0}+\sum_{j=1}^{m}u_{j}F^{j}$, then
\begin{equation}
\varphi^{A}(t,g,\omega)=\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\alpha_{t_{i-1}}^{\omega_{i-1}}(\cdots(\alpha_{t_{1}}^{\omega_{1}}(g))\cdots)),\; \; \;t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right] .\label{solutionaffine}
\end{equation}
Moreover, the solutions of $\Sigma_{A}$ are complete and it holds that
\begin{equation}
\varphi_{t,\omega}^{A}=L_{\varphi_{t,\omega}^{A}(e)}\circ \varphi_{t,\omega}^{B}.\label{affineandbilinear}
\end{equation}
\end{theorem}
\begin{proof}
The proof of formula (\ref{solutionaffine}) and of the assertion on the
completeness of the solutions of $\Sigma_{A}$ are similar to the ones in the proof
of Theorem \ref{bilinear}, and hence we omit them. Let us prove equation
(\ref{affineandbilinear}).
From (\ref{solutionaffine}), for any $t\in \left( \sum_{j=0}^{i-1}t_{j},\sum_{j=0}^{i}t_{j}\right] $ we obtain
\[
\varphi_{t,\omega}^{A}=\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}\circ \alpha_{t_{i-1}}^{\omega_{i-1}}\circ \cdots \circ \alpha_{t_{1}}^{\omega_{1}}.
\]
However, Lemma \ref{compositionaffine} implies that
\[
\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}\circ
\alpha_{t_{i-1}}^{\omega_{i-1}}\circ \cdots \circ \alpha_{t_{1}}^{\omega_{1}}=
L_{\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\alpha_{t_{i-1}}^{\omega_{i-1}}(\cdots(\alpha_{t_{1}}^{\omega_{1}}(e))\cdots))}\circ
\psi_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}\circ \psi_{t_{i-1}}^{\omega_{i-1}}\circ \cdots \circ \psi_{t_{1}}^{\omega_{1}}.
\]
On the other hand, by Theorem \ref{bilinear} we get
\[
\varphi_{t,\omega}^{B}=\psi_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}\circ \psi_{t_{i-1}}^{\omega_{i-1}}\circ \cdots \circ \psi_{t_{1}}^{\omega_{1}}.
\]
Therefore,
\[
\varphi_{t,\omega}^{A}=L_{\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\alpha_{t_{i-1}}^{\omega_{i-1}}(\cdots(\alpha_{t_{1}}^{\omega_{1}}(e))\cdots))}\circ \varphi_{t,\omega}^{B},
\]
and, since by (\ref{solutionaffine}) the element $\alpha_{t-\sum_{j=1}^{i-1}t_{j}}^{\omega_{i}}(\alpha_{t_{i-1}}^{\omega_{i-1}}(\cdots(\alpha_{t_{1}}^{\omega_{1}}(e))\cdots))$ coincides with $\varphi_{t,\omega}^{A}(e)$, we conclude that
\[
\varphi_{t,\omega}^{A}=L_{\varphi_{t,\omega}^{A}(e)}\circ \varphi_{t,\omega}^{B},
\]
as we wanted to prove.
\end{proof}
\subsection*{Controllability of affine systems}
Here we show that affine systems that are locally controllable at the identity are controllable when $G$ is a compact Lie group, or when $G$ is a solvable Lie group and the derivations associated with the induced bilinear system are inner and nilpotent.
For a given affine system $\Sigma_{A}$ on a Lie group $G$ let us denote by $\mathcal{R}$ and $\mathcal{R}^*$ its reachable set from the identity and its controllable set to the identity, respectively. Consider the bilinear system $\Sigma_B$ associated with $\Sigma_A$. We will say that a subset $W\subset G$ is {\it $\varphi^B$-invariant} if $\varphi_{t, \omega}^B(W)= W$ for any $t\in \R$ and $\omega\in\UC$, where $\varphi^B$ is the transition map of $\Sigma_B$.
\subsubsection*{Controllability of affine systems on compact Lie groups}
For compact Lie groups, the next result shows that affine systems are controllable as soon as they are locally controllable at the identity.
\begin{theorem}
An affine system $\Sigma_A$ on a compact Lie group $G$ is controllable if and only if it is locally controllable at the identity.
\end{theorem}
\begin{proof}
Let us fix a bi-invariant metric $d$ on $G$. By assuming that the system is locally controllable at the identity, there exists $\varepsilon>0$ such that $W:=B(e, \varepsilon)\subset\inner\RC\cap\inner\RC^*$. Moreover, since for any $t\in\R$ and $\omega\in\UC$ the map $\varphi_{t, \omega}^B$ is an isometry of $d$ fixing $e$, it holds that $W$ is a $\varphi^B$-invariant subset. Let us denote by $\SC_W$ the semigroup generated by $W$.
{\bf Claim:} It holds that $\SC_W\subset\inner\RC\cap\inner\RC^*$.
Since any element in $\SC_W$ is a finite product of elements in $W$, it is enough to show that $W^n\subset\inner\RC\cap\inner\RC^*$ for any $n\in\N$, which we will do by induction. Since the case $n=1$ holds true, let us assume that $W^n\subset\inner\RC\cap\inner\RC^*$. For any $g\in W^n$ there exist $\tau_1, \tau_2\geq 0$ and $\omega_1, \omega_2\in\UC$ such that $g=\varphi^A_{\tau_1, \omega_1}(e)=\varphi^A_{-\tau_2, \omega_2}(e)$ and therefore
$$gW=\varphi^A_{\tau_1, \omega_1}(e)W=\varphi^A_{\tau_1, \omega_1}(e)\varphi_{\tau_1, \omega_1}^B(W)=\varphi^A_{\tau_1, \omega_1}(W)\subset\varphi^A_{\tau_1, \omega_1}(\inner\RC)\subset\inner\RC$$
and
$$gW=\varphi^A_{-\tau_2, \omega_2}(e)W=\varphi^A_{-\tau_2, \omega_2}(e)\varphi_{-\tau_2, \omega_2}^B(W)=\varphi^A_{-\tau_2, \omega_2}(W)\subset\varphi^A_{-\tau_2, \omega_2}(\inner\RC^*)\subset\inner\RC^*.$$
Since $g\in W^n$ was arbitrary we have that $W^{n+1}\subset\inner\RC\cap\inner\RC^*$ and consequently $\SC_W\subset\inner\RC\cap\inner\RC^*$ as stated.
Since $G$ is compact and $\inner \SC_W\neq\emptyset$ we must have that $\SC_W=G$ and therefore $G=\RC\cap\RC^*$ showing that $\Sigma_A$ is controllable.
\end{proof}
\begin{remark}
Using the same idea as in the above proof, one can actually show that controllability on compact Lie groups holds under the slightly weaker assumption that $\inner\RC$ admits a compact $\varphi^B$-invariant subset.
\end{remark}
\subsubsection*{Controllability of affine systems on solvable Lie groups}
In this section, we analyze the controllability of affine systems on solvable Lie groups. In order to do that we generalize some of the results
from \cite{DS} (see also \cite{ADS}).
\begin{lemma}
\label{ginvariance} Let $g\in \mathcal{R}$ and assume that $\varphi_{t,\omega}^{B}(g)\in \mathcal{R}$ for all $t\in \mathbb{R}$ and $\omega \in \mathcal{U}$. Then
\[
\mathcal{R}\cdot g\subset \mathcal{R}.
\]
\end{lemma}
\begin{proof}
Let $h=\varphi_{\tau,\omega}^{A}(e)\in \mathcal{R}$. By hypothesis we have that $\varphi_{-\tau,\Theta_{\tau}\omega}^{B}(g)\in \mathcal{R}$. Hence, by Theorem \ref{affine} we get
\[
hg=L_{\varphi_{\tau,\omega}^{A}(e)}(g)=\left(L_{\varphi_{\tau,\omega}^{A}(e)}\circ
\varphi_{\tau,\omega}^{B}\right)\left(\varphi_{-\tau,\Theta_{\tau}\omega}^{B}(g)\right)=\varphi_{\tau,\omega}^{A}\left(\varphi_{-\tau,\Theta_{\tau}\omega}^{B}(g)\right)\in \varphi_{\tau,\omega}^{A}(\mathcal{R})\subset \mathcal{R}
\]
as stated.
\end{proof}
The next result assures that a connected $\varphi^B$-invariant subgroup is contained in $\RC$ if the exponentials of the elements of its Lie algebra are in $\RC$.
\begin{proposition}
\label{Hinvariance} Let $H$ be a connected $\varphi^B$-invariant Lie subgroup
with Lie algebra $\mathfrak{h}$. It holds that
\[
\exp(X)\in \mathcal{R}\text{ for any \ }X\in \mathfrak{h}\implies H\subset \mathcal{R}.
\]
\end{proposition}
\begin{proof}
From Corollary \ref{solutionbilinearexp}, for any $X\in \mathfrak{h}$ and
$\omega \in \mathcal{U}$ we know that
\[
\varphi_{t,\omega}^{B}(\exp(X))=\exp \left( \mathrm{e}^{\left(t-\sum_{j=1}^{i-1}t_{j}\right)\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X\right),\text{ for every }t\in \mathbb{R}.
\]
However, since $H$ is $\varphi^{B}$-invariant we have that
\[
\varphi_{t,\omega}^{B}(\exp(X))\in H,\text{ for every }t\in \mathbb{R},
\]
and therefore
\[
\mathrm{e}^{\left(t-\sum_{j=1}^{i-1}t_{j}\right)\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X\in \mathfrak{h}.
\]
By the assumption we obtain
\[
\varphi_{t,\omega}^{B}(\exp(X))=\exp \left( \mathrm{e}^{\left(t-\sum_{j=1}^{i-1}t_{j}\right)\mathcal{D}_{\omega_{i}}}\mathrm{e}^{t_{i-1}\mathcal{D}_{\omega_{i-1}}}\cdots \mathrm{e}^{t_{1}\mathcal{D}_{\omega_{1}}}X\right)\in \mathcal{R}\text{ for any }t\in
\mathbb{R},\ \omega \in \mathcal{U}\text{ and }X\in \mathfrak{h}.
\]
Moreover, the connectedness of $H$ implies that any $x\in H$ can be written as
\[
x=\exp(X_{1})\cdots \exp(X_{n}), \;\;\mbox{ for some }\;\;X_{1},\ldots
,X_{n}\in \mathfrak{h},
\]
which by Lemma \ref{ginvariance} implies that
\[
x\in \mathcal{R}\cdot \exp(X_{1})\cdot \exp(X_{2})\cdots \exp(X_{n})\subset
\cdots \subset \mathcal{R}\cdot \exp(X_{n})\subset \mathcal{R},
\]
concluding the proof.
\end{proof}
\begin{proposition}
\label{ideal} Let $N\subset H\subset G$ be two connected Lie subgroups with Lie
subalgebras $\mathfrak{n}\subset \mathfrak{h}\subset \mathfrak{g}$, respectively. Assume that $\mathfrak{n}$ is an ideal of $\mathfrak{h}$ and that $\mathcal{D}^{j}(\mathfrak{h})\subset \mathfrak{n}$, for any $j=0, 1,\ldots, m$. If the system is locally controllable at the identity, then
$$N\subset\RC\implies H\subset\RC.$$
\end{proposition}
\begin{proof}
For any $X\in \mathfrak{h}$, $t\in \mathbb{R}$ and $u=(u_{1},\ldots,u_{m})\in \mathbb{R}^{m}$ it holds that
\[
\mathrm{e}^{t\mathcal{D}_{u}}X=X+\sum_{n\in \mathbb{N}}\frac{t^{n}}{n!}\mathcal{D}_{u}^{n}(X),\, \mbox{ where }\mathcal{D}_{u}=\mathcal{D}^{0}+\sum_{j=1}^{m}u_{j}\mathcal{D}^{j}.
\]
By the hypothesis on every $\mathcal{D}^{j}$, $j=0,1,\ldots,m$, we get that
$$\sum_{n\in \mathbb{N}}\frac{t^{n}}{n!}\mathcal{D}_{u}^{n}(X)\in \mathfrak{n}\;\;\mbox{ and hence }\;\;\mathrm{e}^{t\mathcal{D}_{u}}X\in X+\mathfrak{n}\;\;\mbox{ for any }t\in\R, u\in\R^m.$$
Inductively, for any $\tau_{1},\ldots,\tau_{n}\in \mathbb{R}$, $u^{1},\ldots,u^{n}\in \mathbb{R}^{m}$ and $X\in \mathfrak{h}$
we obtain
\[
\mathrm{e}^{\tau_{n}\mathcal{D}_{u^{n}}}\mathrm{e}^{\tau_{n-1}\mathcal{D}_{u^{n-1}}}\cdots \mathrm{e}^{\tau_{1}\mathcal{D}_{u^{1}}}X\in X+\mathfrak{n},
\]
which by Corollary \ref{solutionbilinearexp} implies that
$$\varphi_{t,\omega}^{B}(\exp X)\in \exp(X+\mathfrak{n}), \;\;\mbox{ for any }X\in \mathfrak{h},\; t\in\R \;\mbox{ and }\;\omega\in\UC.$$
However, since $N$ is a normal subgroup of $H$ we have by Lemma 3.1 of \cite{Wunster} that $\exp(X+\fn)\subset\exp(X)N$ for any $X\in\fh$ implying that
\begin{equation}
\label{salvador}
\varphi_{t,\omega}^{B}(\exp X)\in \exp(X)N, \;\;\;\mbox{ for any }\;X\in\fh, \;t\in\R \;\mbox{ and }\;\omega\in\UC.
\end{equation}
Let $W=\exp(U)$ be a connected neighborhood of $e$ in $H$. By hypothesis,
$\mathcal{R}$ is a neighborhood of the identity, so $W$ can be chosen
such that $W\subset \mathcal{R}\cap H$. Since $H$ is a connected subgroup, to
finish the proof it is enough to show that $W^{n}\subset \mathcal{R}$ for any
$n\in \mathbb{N} $. We prove it by induction. For $n=1$ the neighborhood $W$ is
a subset of $\mathcal{R}$ by construction. Let then $g=\exp(X)\in W$ and $h\in
W^{n-1}$. By the induction hypothesis we have that $h=\varphi_{\tau,\omega
}^{A}(e)$ for some $\tau>0$ and $\omega \in \mathcal{U}$.
Moreover, from equation (\ref{salvador}) $\varphi_{\tau,\omega}^{B}(g)=gl$ with $l\in N$. Therefore,
\[
hg=L_{\varphi_{\tau,\omega}^{A}(e)}\left( \varphi_{\tau,\omega}^{B}(g)\right)\, l^{-1}=\varphi_{\tau,\omega}^{A}(g)\,l^{-1}.
\]
Since by construction $g=\varphi_{\tau^{\prime},\omega^{\prime}}^{A}(e)$ for
some $\tau^{\prime}>0$ and $\omega^{\prime}\in \mathcal{U}$, we
get that $\varphi_{\tau,\omega}^{A}(g)=\varphi_{\tau+\tau^{\prime},\omega^{\prime \prime}}^{A}(e)$, where $\omega^{\prime \prime}\in
\mathcal{U}$ is the concatenation of the controls
$\omega^{\prime}$ and $\omega$. By the $\varphi^B$-invariance of $N$ and the fact that
$N\subset \mathcal{R}$ we obtain
\[
\varphi_{-\tau-\tau^{\prime},\Theta_{\tau+\tau^{\prime}}\omega^{\prime \prime}}^{B}(l^{-1})\in \mathcal{R},
\]
which gives us
\[
h\cdot g=L_{\varphi_{\tau+\tau^{\prime},\omega^{\prime \prime}}^{A}(e)}(l^{-1})=L_{\varphi_{\tau+\tau^{\prime},\omega^{\prime \prime}}^{A}(e)}\circ \varphi_{\tau+\tau^{\prime},\omega^{\prime \prime}}^{B}\left(
\varphi_{-\tau-\tau^{\prime},\Theta_{\tau+\tau^{\prime}}\omega^{\prime \prime}}^{B}(l^{-1})\right)
\]
\[
=\varphi_{\tau+\tau^{\prime},\omega^{\prime \prime}}^{A}\left( \varphi_{-\tau-\tau^{\prime},\Theta_{\tau+\tau^{\prime}}\omega^{\prime \prime}}^{B}(l^{-1})\right) \in \varphi_{\tau+\tau^{\prime},\omega^{\prime\prime}}^{A}(\mathcal{R})\subset \mathcal{R},
\]
completing the proof.
\end{proof}
The above result applies directly to solvable Lie groups as follows:
\begin{corollary}
Let $G$ be a solvable Lie group and assume the system is locally controllable at the identity. If
$N\subset G$ is the nilradical of $G$ then $N\subset \mathcal{R}$ implies $G=\mathcal{R}.$
\end{corollary}
\begin{proof}
In fact, if $\mathfrak{g}$ is a solvable Lie algebra and $\mathfrak{n}$ its
nilradical then $\mathcal{D}(\mathfrak{g})\subset \mathfrak{n}$ for any
derivation $\mathcal{D}$ of $\mathfrak{g}$. The result follows from
Proposition \ref{ideal} above.
\end{proof}
\bigskip
Now we are able to prove our main result concerning the controllability of
affine systems on solvable Lie groups.
\begin{theorem}
Let $\Sigma_A$ be an affine system on a solvable Lie group $G$. For $j=0,1,\ldots, m$ let us assume that the $\fg$-derivations $\mathcal{D}^{j}$ induced by the associated bilinear system $\Sigma_B$ are inner and nilpotent. Then $\Sigma_A$ is controllable if and only if it is locally controllable at the identity.
\end{theorem}
\begin{proof}
By the above corollary, it is enough for us to show that $N\subset\RC$, where $N$ is the nilradical of $G$. Let
\[
\mathfrak{n}=\mathfrak{n}_{1}\supset \mathfrak{n}_{2}\supset \ldots
\supset \mathfrak{n}_{k}\supset \mathfrak{n}_{k+1}=\{0\},
\]
be the lower central series of $\mathfrak{n}$, where for $i=2,\ldots,k$, we
have that $\mathfrak{n}_{i}=[\mathfrak{n},\mathfrak{n}_{i-1}]$ are ideals of
$\mathfrak{n}$. Since $\mathcal{D}^j$ is inner and nilpotent we have that $\mathcal{D}^j=\ad(X^j)$ for some $X^j\in\fn$, $j=0, 1, \ldots, m$, implying that
$\mathcal{D}^{j}(\mathfrak{n}_{i})\subset \mathfrak{n}_{i+1}$ for
$i=1,\ldots,k$. Therefore, if $N_{i}$ is the connected Lie group with Lie
algebra $\mathfrak{n}_{i}$, $i=1,\ldots,k$, it turns out that
\[
N=N_{1}\supset N_{2}\supset \ldots \supset N_{k}\supset N_{k+1}=\{e\}
\]
is the lower central series on the group level. But $N_{k+1}=\{e\}
\subset \mathcal{R}$, which by Proposition \ref{ideal} gives $N_{k}\subset \mathcal{R}$. Again we can apply Proposition \ref{ideal} to get that
$N_{k-1}\subset \mathcal{R}$. By repeating the same argument $k$ times, we get that
$N=N_{1}\subset \mathcal{R}$ and hence, by the above corollary, $G=\mathcal{R}$. Since $\Sigma_A$ is analytic we also have that $e\in\inner\RC^*$ and we can analogously show that $G=\RC^*$, implying that $\Sigma_A$ is controllable.
\end{proof}
In particular, for nilpotent Lie groups we have the following:
\begin{corollary}
Let $\Sigma_A$ be an affine system on a nilpotent Lie group $G$. For $j=0,1,\ldots, m$ let us assume that the $\fg$-derivations $\mathcal{D}^{j}$ induced by the associated bilinear system $\Sigma_B$ are inner. Then $\Sigma_A$ is controllable if and only if it is locally controllable at the identity.
\end{corollary}
\section{Acknowledgements}
The first author was supported by Proyecto Fondecyt $n^{o}$ 1150292, Conicyt,
Chile. The second author was supported by Fapesp grant $n^{o}$ 2016/11135-2 and
the third one by CNPq grant $n^{o}$ 246762/2012-8.
We would like to thank the Centro de Estudios Cient\'{\i}ficos (CECs) in
Valdivia, Chile, through Prof. Dr. Jorge Zanelli, for providing the first and
second authors with an excellent environment in which to work on this article.
\section{Introduction}
A finite word $q$ is a \emph{quasiperiod} of a word $w$ if and only if
each position of $w$ is covered by an occurrence of $q$. A word $w$
with a quasiperiod $q \neq w$ is called \emph{quasiperiodic}. For
instance, $abaababaabaaba$ is quasiperiodic and has two quasiperiods:
$aba$ and $abaaba$. Likewise, an infinite word may have several, or
even infinitely many quasiperiods; in the latter case, we call it
\emph{multi-scale quasiperiodic}. The study of quasiperiodicity began
on finite words in the context of text
algorithms~\cite{ApostolicoEhrenfeucht1993Tcs,IliopoulosMouchard1999Jalc},
and was subsequently generalized to right infinite
words~\cite{GlenLeveRichomme2008Tcs,LeveRichomme2007Tcs,Marcus2004Beatcs},
to symbolic dynamical systems~\cite{MarcusMonteil2006Arxiv}, and to
two-dimensional words~\cite{GamardRichomme2015Lata} where it is a
special case of the tiling problem. Finally, a previous
article~\cite{GamardRichomme2016Mfcs} provided a method to determine
the set of quasiperiods of an arbitrary right infinite word. It also
characterized periodic words and standard Sturmian words in terms of
quasiperiods. This is interesting, because periodic words are the
simplest possible infinite words, and Sturmian words are a widely
studied
class~\cite{LeveRichomme2007Tcs,LothaireAlgebraic,MorseHedlundII}
which could be defined as the \emph{least complex non-periodic words}.
These results suggest that quasiperiodicity has some expressive power,
and that the set of quasiperiods is an interesting object to study in
order to get information about infinite words.
The current paper extends to the biinfinite case ($\mathbb{Z}$-words) some
results from~\cite{GamardRichomme2016Mfcs}. The motivations for this
are threefold.
In the two-dimensional case, quasiperiodic $\mathbb{N}^2$-words and
$\mathbb{Z}^2$-words behave quite
differently~\cite{GamardRichomme2015Lata}. This difference is not
specific to the dimension $2$, so it seems natural to start by
understanding the differences in quasiperiodicity between $\mathbb{N}$-words
and $\mathbb{Z}$-words.
Quasiperiodicity has been considered not only on infinite words, but
also on subshifts~\cite{MarcusMonteil2006Arxiv}. However the shift
map does not preserve quasiperiodicity in the right infinite case and
this leads to annoying technicalities. The biinfinite case is
sometimes considered more natural for subshifts because it turns the
shift map into a bijection. Moreover, it also turns the shift map into
a quasiperiodicity-preserving map, which makes the study of
quasiperiodic subshifts much more convenient.
Finally, a previous article~\cite{GamardRichomme2016Mfcs} gave a
characterization of standard Sturmian words in terms of
quasiperiods. Intuitively, the condition ``standard'' was only needed
because of problems at the origin. By moving to the biinfinite case,
we remove the origin so we can hope for a characterization of all
Sturmian words. (We have not achieved this yet, but it is a possible
continuation of our work.)
\smallskip
The current article makes a first step toward the resolution of these
questions: it generalizes the method to study the set of quasiperiods
of an arbitrary word from~\cite{GamardRichomme2016Mfcs} to the
biinfinite case. This is not a trivial task because, by contrast with
the right infinite case, we might have several quasiperiods with the
same length. (In the right infinite case, all quasiperiods are
prefixes, thus there may be only one quasiperiod of a given length.)
Therefore we need to determine not only the lengths of the
quasiperiods, but also for each length which factors are quasiperiods
and which are not.
Many natural results about quasiperiodicity on $\mathbb{N}$-words turned out
to be surprisingly difficult to generalize to $\mathbb{Z}$-words because of
this problem. In addition to showing how to determine the set of
quasiperiods of an arbitrary $\mathbb{Z}$-word, we investigate the relations
existing between two quasiperiods of the same length inside a given
biinfinite word. More precisely, we show that the following conditions
are decidable, given two words $q,r$ of the same length:
\begin{enumerate}[label={(\alph*)}] \itemsep=0pt
\item \label{item:exist}
there exists a biinfinite word both $q$ and $r$-quasiperiodic;
\item \label{item:infinite}
each $q$-quasiperiodic biinfinite word contains infinitely many
occurrences of $r$;
\item \label{item:all}
each $q$-quasiperiodic biinfinite word is also $r$-quasiperiodic;
\item \label{item:deriv} in any word with quasiperiods $q$ and $r$,
the derivated sequences of $q$ and $r$ are equal.
\end{enumerate}
Derivated sequences are a tool previously used to build examples and
counter-examples of quasiperiodic words and to show independence
results~\cite{MarcusMonteil2006Arxiv}. A derivated sequence can be
thought of as a normal form for quasiperiodic words. Intuitively, when
two derivated sequences are equal, the considered quasiperiods contain
the same information about $\mathbf{w}$.
Finally, we give a complete description of the set of quasiperiods of
each biinfinite Sturmian word. In particular, we show that each
biinfinite Sturmian word has infinitely many quasiperiods. This
contrasts with the right infinite case, where two Sturmian words of
each slope have no quasiperiods.
\smallskip
\noindent
The paper is structured as follows.
In Section~\ref{sec:det}, we provide a method to study the
quasiperiods of an arbitrary biinfinite word, i.e., a description of
the set of quasiperiods of an arbitrary word.
In Section~\ref{sec:compat}, we define three relations over couples of
words: \emph{compatible}, \emph{definite}, and \emph{positive}. Those
relations are decidable by an algorithm. We show that the couple
$(q,r)$ is compatible if and only if there exists a biinfinite word
$\mathbf{w}$ having both $q$ and $r$ as quasiperiods (Item~\ref{item:exist}
above). Moreover, the couple $(q,r)$ is definite and positive if and
only if all $q$-quasiperiodic words are also $r$-quasiperiodic
(Item~\ref{item:all}).
In Section~\ref{sec:deriv}, we show that the couple $(q,r)$ is
positive if and only if in any word $\mathbf{w}$ which is both $q$ and
$r$-quasiperiodic, the derivated sequences along $q$ and $r$ are
equal (Item~\ref{item:deriv}). We also prove that $(q,r)$ is definite
if and only if each $q$-quasiperiodic word contains infinitely many
copies of $r$ (Item~\ref{item:infinite}).
In Section~\ref{sec:sturm}, we determine the set of quasiperiods of
each biinfinite Sturmian word. In the process we show that all
biinfinite Sturmian words have infinitely many quasiperiods.
Finally in Section~\ref{sec:conclu}, we conclude with a few related
open questions and state our acknowledgements.
\noindent
Figure~\ref{fig:graph} below shows the implications proven in
Sections~\ref{sec:compat} and~\ref{sec:deriv}.
\begin{figure}[hbtp]
\centering
\begin{tikzpicture}[scale=0.8] \small
\node (compat) at(0.25,+2) {Compatible};
\node (defini) at(+2,-0.25) {Definite};
\node (positi) at(-2,+0.25) {Positive};
\node (defpos) at(-0.25,-2) {Definite+Positive};
\draw[-implies,double equal sign distance] (defini) -- (compat);
\draw[-implies,double equal sign distance] (positi) -- (compat);
\draw[-implies,double equal sign distance] (defpos) -- (positi);
\draw[-implies,double equal sign distance] (defpos) -- (defini);
%
\draw (3,+2) node[right] {$\iff$ $\exists$ $\mathbf{w}$ having both $q$ and $r$ as quasiperiods};
\draw (3,+0.25) node[right] {$\iff$ same derivated sequences};
\draw (3,-0.25) node[right] {$\iff$ ($\forall \mathbf{w}$, $q$-quasiperiodic $\implies$ infinitely many $r$'s)};
\draw (3,-2) node[right] {$\iff$ ($\forall \mathbf{w}$, $q$-quasiperiodic $\implies$ $r$-quasiperiodic)};
\draw[dotted] (compat) -- (3,+2);
\draw[dotted] (positi) -- (3,+0.25);
\draw[dotted] (defini) -- (3,-0.25);
\draw[dotted] (defpos) -- (3,-2);
\draw (0,-2.8) node{Notions defined in this paper (Sec.~\ref{sec:compat})};
\draw (7,-2.8) node{Preexisting notions};
\draw (14,+2) node{Sec.~\ref{sec:compat}};
\draw (14,+0.25) node{Sec.~\ref{sec:deriv}};
\draw (14,-0.25) node{Sec.~\ref{sec:deriv}};
\draw (14,-2) node{Sec.~\ref{sec:compat}};
\end{tikzpicture}
\caption{Implications proved in Sections~\ref{sec:compat} and~\ref{sec:deriv}}
\label{fig:graph}
\end{figure}
\section{Determining the quasiperiods of biinfinite words}
\label{sec:det}
We quickly review classical definitions and notation. Let $u,v$
denote two finite words and $\mathbf{w}$ a finite or infinite word. As
usual, $|u|$ denotes the length of $u$ and $uv$ the concatenation of $u$
and $v$. We denote by $\mathbf{w}(i)$ the $i^\text{th}$ letter of $\mathbf{w}$; letters are
often considered as words of length $1$. We write $\varepsilon$ for
the empty word. If $u$ is of length $n$ and satisfies
$u = \mathbf{w}(i) \mathbf{w}(i+1) \dots \mathbf{w}(i+n-1)$, then we say that $u$ is a
\emph{factor} of $\mathbf{w}$ which \emph{occurs} at position $i$ and which
\emph{covers} positions $i$ to $i+n-1$ (included). The word $u$ is a
\emph{quasiperiod} of $\mathbf{w}$ if each position of $\mathbf{w}$ is covered by an
occurrence of $u$. In particular, if $\mathbf{w}$ is finite or right
infinite, then $u$ is a prefix of $\mathbf{w}$. If $u$ is a word and
$\alpha, \beta$ two different letters such that $u\alpha$ and $u\beta$
are both factors of $\mathbf{w}$, we say that $u$ is \emph{right special} in
$\mathbf{w}$. Symmetrically, if $\alpha{}u$ and $\beta{}u$ are factors of
$\mathbf{w}$, then $u$ is \emph{left special} in $\mathbf{w}$. If $\alpha u \beta$
is a factor of $\mathbf{w}$, then we say that $u \beta$ is a \emph{successor}
of $\alpha u$, and conversely that $\alpha u$ is a \emph{predecessor}
of $u \beta$ in $\mathbf{w}$. A word has a unique successor (resp.
predecessor) if and only if it is not right (resp. left)
special. Finally, $|u|_\alpha$ denotes the number of occurrences of
$\alpha$ in $u$. Unless stated otherwise, all infinite words are
biinfinite, i.e. indexed by $\mathbb{Z}$.
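As a sanity check of the definition of quasiperiod, the following short sketch (in Python; it is only an illustration of ours, not part of the cited works) tests whether a finite word $q$ covers a finite word $w$, and recovers the two quasiperiods of $abaababaabaaba$ mentioned in the introduction.
\begin{verbatim}
def is_quasiperiod(q, w):
    # True iff every position of the finite word w is covered by
    # at least one occurrence of q.
    n, m = len(w), len(q)
    covered = [False] * n
    for i in range(n - m + 1):
        if w[i:i + m] == q:
            for j in range(i, i + m):
                covered[j] = True
    return all(covered)

w = "abaababaabaaba"
factors = {w[i:j] for i in range(len(w)) for j in range(i + 1, len(w) + 1)}
print(sorted(q for q in factors if q != w and is_quasiperiod(q, w)))
# ['aba', 'abaaba']
\end{verbatim}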
We now have enough vocabulary to state the main theorem
of~\cite{GamardRichomme2016Mfcs}, adapted to the biinfinite case.
\begin{thm}
\label{thm:old}
Let $\mathbf{w}$ denote an infinite word, $q$ a factor of $\mathbf{w}$ and
$\alpha$ a letter.
\begin{enumerate}
\item Suppose $q$ is a quasiperiod and $q\alpha$ a factor of
$\mathbf{w}$. The word $q\alpha$ is a quasiperiod if and only if $q$ is
\emph{not} right special.
\item Suppose $q$ is a quasiperiod and $\alpha{}q$ a factor of
$\mathbf{w}$. The word $\alpha{}q$ is a quasiperiod if and only if $q$ is
\emph{not} left special.
\item Suppose $q\alpha$ is a quasiperiod of $\mathbf{w}$. The word $q$ is a
quasiperiod if and only if either $u= q\alpha{}q\alpha$ is not a
factor of $\mathbf{w}$, or if $q$ occurs at least $3$ times in $u$.
\item Suppose $\alpha{}q$ is a quasiperiod of $\mathbf{w}$. The word $q$ is
  a quasiperiod if and only if either $u= \alpha{}q\alpha{}q$ is not a
  factor of $\mathbf{w}$, or if $q$ occurs at least $3$ times in $u$.
\end{enumerate}
\end{thm}
\noindent
A proof of Theorem~\ref{thm:old} can be found
in~\cite{GamardRichomme2016Mfcs} in the right infinite case; the
adaptation to the biinfinite case is immediate. That theorem basically
states that it is enough to study the set of right special factors and
square factors which are also prefixes to get the set of quasiperiods
of a given right infinite word. As special and square factors are
well-understood in combinatorics on words, it generally little
additional word to get the set of quasiperiods of a given
right infinite word. We will comment on the biinfinite version of the
theorem, which we just stated, in a few paragraphs.
We can extend this theorem a bit further, but to do so we need the
notion of \emph{overlap}.
\begin{dfn}
Let $q$ denote a finite word. An \emph{overlap of $q$} is a word $w$
having $q$ as a prefix and as a suffix, such that
$|q| < |w| \leq 2|q|$. More generally, a \emph{$k$-overlap of $q$}
is a word of the form $uv$, where $u$ is a $(k-1)$-overlap and $v$
is such that $qv$ is an overlap of $q$.
The quantity $2|q| - |w|$ is called the \emph{span} of the overlap.
If $q$ is fixed, then an overlap is uniquely determined by its span,
thus we note $\mathcal{V}_q(m)$ the overlap of $q$ having span $m$ (if it
exists). We write $\mathcal{V}_q(n_1, n_2, \dots, n_{k-1})$ the $k$-overlap
built from overlaps $\mathcal{V}_q(n_1)$, $\mathcal{V}_q(n_2)$, etc.\ and we call
$n_i$ the \emph{$i^{\mbox{\small th}}$ span} of this overlap.
\end{dfn}
An \emph{overlap} (without any explicit $k$) is thus a $2$-overlap. An
infinite word $\mathbf{w}$ is $q$-quasiperiodic if and only if two
consecutive occurrences of $q$ in $\mathbf{w}$ always form an overlap.
In general, we might have more than two occurrences of $q$ in an
overlap of $q$. For instance, $\mathcal{V}_{aaa}(1) = aaaaa$ contains $3$
occurrences of $aaa$. We say that $w$ is a \emph{proper $k$-overlap of
$q$} if $w$ is a $q$-quasiperiodic word containing exactly $k$
occurrences of $q$. We write $\mathcal{V}^*_q(n_1, \dots, n_{k-1})$ when we
mean that $\mathcal{V}_q(n_1, \dots, n_{k-1})$ is a \emph{proper} $k$-overlap
of $q$. A \emph{proper overlap} is implicitly a \emph{proper
$2$-overlap}.
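Computationally, the overlap $\mathcal{V}_q(m)$ exists exactly when the prefix and the suffix of length $m$ of $q$ coincide, in which case $\mathcal{V}_q(m)$ is $q$ followed by the last $|q|-m$ letters of $q$. The following sketch (in Python; ours and purely illustrative) builds $\mathcal{V}_q(m)$ and $\mathcal{V}^*_q(m)$ and lists the spans for which they exist; on $q=aaa$ it confirms that $\mathcal{V}_{aaa}(1)$ is not a proper overlap.
\begin{verbatim}
def occurrences(p, w):
    # Starting positions of (possibly overlapping) occurrences of p in w.
    return [i for i in range(len(w) - len(p) + 1) if w[i:i + len(p)] == p]

def overlap(q, m):
    # The overlap V_q(m) of span m, or None if it does not exist.
    if not (0 <= m < len(q)) or q[:m] != q[len(q) - m:]:
        return None
    return q + q[m:]

def proper_overlap(q, m):
    # V*_q(m): the overlap of span m when it contains exactly two
    # occurrences of q, and None otherwise.
    w = overlap(q, m)
    return w if w is not None and len(occurrences(q, w)) == 2 else None

q = "abaababa"   # q = aba ab aba
print([m for m in range(len(q)) if overlap(q, m) is not None])  # [0, 1, 3]
print(proper_overlap(q, 1))        # abaabababaababa
print(proper_overlap("aaa", 1))    # None: aaaaa has three occurrences of aaa
\end{verbatim}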
\begin{lem}
\label{lem:patrice}
Let $u$ denote a word and $\alpha,\beta$ letters. If $u\beta$ is a
factor of an overlap of $u\alpha$, then $\alpha=\beta$.
\end{lem}
\begin{proof}
Let $w$ denote an overlap of $u\alpha$; by definition of an overlap,
there exist words $p$, $s$ (possibly empty) such that $u=ps$ and
$w = u\alpha{}s\alpha = ps\alpha s\alpha$. If $u\beta$ is a factor
of $w$, then $s\beta$ is a factor of $s\alpha{}s\alpha$. Let $x,y$
denote the words such that $s\alpha{}s\alpha=xs\beta{}y$. Observe
that $|xy|=|s\alpha|$, that $x$ is a prefix and $y$ a suffix of
$s\alpha$ to conclude that $xy=s\alpha$. Thus we can simplify
$|s\alpha{}s\alpha|_\alpha=|xs\beta{}y|_\alpha$ into
$|s\alpha|_\alpha = |s\beta|_\alpha$, which implies
$|\alpha|_\alpha=|\beta|_\alpha$ and $\alpha=\beta$. \relax
\end{proof}
\begin{pro}
\label{pro:predsuc}
Let $\mathbf{w}$ denote an infinite word, and $q$ a quasiperiod of length
$n$ of $\mathbf{w}$. A successor of $q$ is a quasiperiod of $\mathbf{w}$ if and
only if $q$ is not right special. A predecessor of $q$ is a
quasiperiod of $\mathbf{w}$ if and only if $q$ is not left special.
\end{pro}
\begin{proof}
Let $\alpha$, $\beta$ denote letters and $u$ denote a word such that
$\alpha u$ is a quasiperiod and $\alpha u \beta$ a factor of $\mathbf{w}$.
If $u\beta$ is a quasiperiod of $\mathbf{w}$ and $u \gamma$ is also a factor
of $\mathbf{w}$ for a letter $\gamma \neq \beta$, then $u\gamma$ is a
factor of an overlap of $u \beta$. Lemma~\ref{lem:patrice} shows
that $\beta=\gamma$: a contradiction. Conversely if $\alpha u$ is
not right special, then every occurrence of $\alpha u$ continues
into an occurrence of $u \beta$; since $\alpha u$ covers $\mathbf{w}$, so
does $u \beta$. The left special case is symmetric. \relax
\end{proof}
Theorem~\ref{thm:old} and Proposition~\ref{pro:predsuc} together imply
that, in order to understand the set of quasiperiods of a biinfinite
word, it is enough to know its set of special factors and its set of
square factors. These two types of factors are already well-studied
and well-understood in combinatorics on words, therefore we can reuse
this knowledge when we need to get the set of quasiperiods of an
infinite word.
Proposition~\ref{pro:predsuc} has another interesting consequence: if
an infinite, aperiodic word $\mathbf{w}$ has a quasiperiod of some length
$n$, then it also has a left-special quasiperiod $\ell$ and a right
special quasiperiod $r$ of length $n$. More precisely, the set of
quasiperiods of some length $n$ is given by a union of chains of the
form $\{u_1, \dots, u_k\}$, where $u_1$ is left special, $u_k$ is
right special, no other $u_i$ is special, and $u_{i+1}$ is the
(unique) successor of $u_i$ for each $1 \leq i < k$. If $q$ belongs to
such a chain, we call $u_1$ its left-special predecessor and $u_k$ its
right special successor.
After working out several examples, one may conjecture that there is
at most one right special (and thus one left special) quasiperiod of a
given length in any biinfinite word. In this case, there would be at
most one chain of quasiperiods of a given length, so it would be easy
to determine the set of quasiperiods of an arbitrary biinfinite
word. Unfortunately the following example disproves this
conjecture. Let $q = aba\,ab\,aba$, $r = aba\,ba\,aba$ and $\mathbf{w}$ be
defined by:
\begin{equation}
\label{eq:badex} %
\mathbf{w}
= {}^\omega (a^{-1}q) \cdot (q)^\omega
=
\dots \,
\overunderbraces%
{&&\br{2}{r}&&\br{3}{r}&\br{1}{r}&\br{1}{r}&&&&&}%
{&
ba ab &aba\,
ba ab &a&ba\,
ba &ab &a&ba\,\cdot\,
aba& ab aba\,
aba& ab aba\,
aba& ab aba
&}%
{&&&\br{4}{r}&&&&&&&&}
\, \dots
\end{equation}
where the end of each occurrence of $q$ in $\mathbf{w}$ is shown by a space.
The definition of $\mathbf{w}$ makes it clear that $q$ is a quasiperiod of
$\mathbf{w}$. As the excerpt of $\mathbf{w}$ suggests, $r$ is also a quasiperiod of
$\mathbf{w}$: since the word is ultimately periodic, the same behaviour
repeats to the left and to the right. It can be directly observed in
the excerpt that both $q$ and $r$ are right special. This example is
the simplest ``pathological case'' which we mentioned in the
introduction.
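The two quasiperiods of this example can also be checked on a long finite excerpt of $\mathbf{w}$: in the following sketch (in Python, ours), every position of the excerpt lying at distance at least $|q|$ from both ends is covered by an occurrence of $q$ and by an occurrence of $r$.
\begin{verbatim}
def covered_positions(p, w):
    # Positions of w covered by at least one occurrence of p.
    cov = set()
    for i in range(len(w) - len(p) + 1):
        if w[i:i + len(p)] == p:
            cov.update(range(i, i + len(p)))
    return cov

q, r = "abaababa", "ababaaba"          # q = aba ab aba,  r = aba ba aba
# Finite excerpt of w = ...(a^{-1}q)(a^{-1}q).(q)(q)(q)...
excerpt = "baababa" * 4 + "abaababa" * 4
interior = range(len(q), len(excerpt) - len(q))
print(all(i in covered_positions(q, excerpt) for i in interior))   # True
print(all(i in covered_positions(r, excerpt) for i in interior))   # True
\end{verbatim}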
\section{Checking implications between two quasiperiods}
\label{sec:compat}
In this section we show that it is decidable to check, given two
finite words $q$ and $r$ of the same length, which of the following is
true:
\begin{enumerate}
\item Any $q$-quasiperiodic biinfinite word is also $r$-quasiperiodic;
\item there exists an infinite word which is $q$- and
$r$-quasiperiodic, and another one which is just $q$-quasiperiodic;
\item no infinite word may have both quasiperiods $q$ and $r$ at the
same time.
\end{enumerate}
First we develop a bit of vocabulary to state the conditions in a
convenient way.
\begin{lem}
\label{lem:no-qrrq}
Let $q,r$ denote two different words of the same length and $w$ a
proper overlap of $q$. The word $w$ has at most one occurrence of
$r$.
\end{lem}
\begin{proof}
By a classical lemma~\cite[Prop.~1.3.4]{Lothaire1983}, there exist
finite words $x,y$ and an integer $k$ satisfying $q = (xy)^kx$ and
$w = (xy)^{k+1}x$. Moreover, $xy$ is a primitive word. If it were
not, call $z$ its primitive root and observe that an occurrence of
$q$ would start at position $|z|$ in $w$, yielding three occurrences
of $q$ in $w$, a contradiction. Additionally, we have either
$k \geq 1$ or $y = \varepsilon$. Indeed, if $k = 0$ and $|y|\geq 1$,
we would have $q=x$ and $w=xyx$, implying $|w|>2|q|$, a
contradiction with the definition of an overlap. We treat the cases
$k \geq 1$ and $y = \varepsilon$ separately.
First, suppose $k \geq 1$. As $|q|=|r|$, all occurrences of $r$ in
$w$ must start at positions between $1$ and $|xy|$ (included). Call
$u$ the prefix of length $|xy|$ of $r$. The word $u$ is a factor of
$xyxy$. Because $xy$ is primitive, each factor of length $|xy|$
occurs only once in $xyxy$, except $xy$
itself~\cite[Prop.~1.3.2]{Lothaire1983}. This means that there can
only be one occurrence of $u$, and therefore of $r$, starting in the
first $|xy|$ letters of $w$.
Now suppose $k = 0$. By the previous remarks, this implies
$y = \varepsilon$, thus $q=x$ and $q$ is primitive. As a
consequence, each factor of length $|q|$ in $qq = w$ occurs only
once, except $q$ itself (otherwise, $q = q_1q_2 = q_2q_1$ for some
finite words $q_1, q_2$, and~\cite[Proposition~1.3.2]{Lothaire1983}
contradicts primitivity). In particular $r$, if it occurs at all,
occurs only once.
\relax
\end{proof}
\begin{dfn}
Let $q,r$ denote finite nonempty words of the same length and $m,n$
natural integers. If the proper overlap $\mathcal{V}^*_q(m)$ exists and
contains $r$ as a factor, then we write $\occ(q,r,m)$ for the
position of $r$ in $\mathcal{V}^*_q(m)$; otherwise $\occ(q,r,m)$ is not
defined. (Lemma~\ref{lem:no-qrrq} ensures that if $\occ(q,r,m)$
exists, then it is unique.) If both $\occ(q,r,m)$ and $\occ(q,r,n)$
exist, then we define the quantity
\begin{equation}
\label{eq:def-f}
f_{q,r}(m,n) = m + \occ(q,r,m) - \occ(q,r,n)
\end{equation}
otherwise, $f_{q,r}(m,n)$ is undefined.
\end{dfn}
We insist on the fact that $\occ(q,r,m)$ is defined only where
$\mathcal{V}^*_q(m)$ is defined and contains an occurrence of $r$. If
$\mathcal{V}_q(m)$ is not a \emph{proper} overlap (i.e. it contains more than
two occurrences of $q$, like $\mathcal{V}_{aaa}(1)$), then $\occ(q,r,m)$ is
not defined. The quantity $f_{q,r}(m,n)$ is defined if and only if
both $\occ(q,r,m)$ and $\occ(q,r,n)$ are. Moreover, the roles of $q$ and $r$ are
not symmetric: in general, $f_{q,r} \neq f_{r,q}$.
Here is the intuitive interpretation of $f_{q,r}$. Let $m,n$ denote
natural integers such that $w=\mathcal{V}^*_{q}(m,n)$ is a proper $3$-overlap
of $q$. By Lemma~\ref{lem:no-qrrq}, the word $w$ has at most $2$
occurrences of $r$. Suppose it has exactly two. If these two
occurrences form an overlap of $r$, then $f_{q,r}(m,n)$ is the span of
this overlap. If these two occurrences do not overlap, then there
exists a nonempty word $s$ such that $rsr$ is a factor of $w$; in this
case, $f_{q,r}(m,n)=-|s|$. If $w$ has fewer than two occurrences of
$r$, then $f_{q,r}(m,n)$ is not defined.
\begin{exa*}
In Equation~\eqref{eq:badex} we had $q = aba\,ab\,aba$ and
$r = aba\,ba\,aba$; in this case the function $f_{q,r}$ is given by:
\begin{center}
\begin{tabular}{l | l l l }
$f$ & 0 & 1 & 3 \\ \hline
0 & 0 & -2 & 0 \\
1 & 3 & 1 & 3 \\
3 & 3 & 1 & 3
\end{tabular}
\end{center}
\end{exa*}
Computing $f_{q,r}$ given two finite words $q$ and $r$ of the same
length can be done in $O(|q|^3)$ time. For each $m$ and for each $n$
between $0$ and $|q|-1$ (included), compute $\mathcal{V}^*_q(m)$ and
$\mathcal{V}^*_q(n)$; in each of them, test whether $r$ appears as a factor;
if so, use Equation~\eqref{eq:def-f} to compute the value of
$f(m,n)$. Otherwise, $f(m,n)$ is not defined. The computation of
$\mathcal{V}^*_q(m)$ and $\mathcal{V}^*_q(n)$, and the search for $r$, can be done in
$O(|q|)$ time using an optimal string-searching algorithm.
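The following sketch (in Python; it is ours and uses naive string searching instead of an optimal string-matching algorithm) implements this procedure; on the words $q$ and $r$ of the previous example it reproduces the table above.
\begin{verbatim}
def occurrences(p, w):
    return [i for i in range(len(w) - len(p) + 1) if w[i:i + len(p)] == p]

def proper_overlap(q, m):
    # V*_q(m), or None if the overlap of span m does not exist or is
    # not proper (does not contain exactly two occurrences of q).
    if not (0 <= m < len(q)) or q[:m] != q[len(q) - m:]:
        return None
    w = q + q[m:]
    return w if len(occurrences(q, w)) == 2 else None

def occ(q, r, m):
    # Position of the unique occurrence of r in V*_q(m), if it exists.
    w = proper_overlap(q, m)
    if w is None:
        return None
    pos = occurrences(r, w)
    return pos[0] if len(pos) == 1 else None

def f(q, r, m, n):
    om, on = occ(q, r, m), occ(q, r, n)
    return None if om is None or on is None else m + om - on

q, r = "abaababa", "ababaaba"
spans = [m for m in range(len(q)) if occ(q, r, m) is not None]
print(spans)                                      # [0, 1, 3]
print([[f(q, r, m, n) for n in spans] for m in spans])
# [[0, -2, 0], [3, 1, 3], [3, 1, 3]]
\end{verbatim}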
\begin{lem}
\label{lem:lines}
\label{lem:gal-nfo}
Let $q$, $r$ be finite words and $\{a_1, \dots, a_k\}$ the set of
integers such that the proper overlap $\mathcal{V}^*_q(a_i)$ exists and
contains one occurrence of $r$. Then, for all $s_1, \dots, s_n$ in
$\{a_1, \dots, a_k\}$, the following equation holds:
\begin{equation}
\label{eq:sum}
s_1 + \dots + s_n = f(s_1, s_2) + f(s_2, s_3) + \dots + f(s_{n-1}, s_n) + f(s_n, s_1).
\end{equation}
In particular, for all integers $l,m,n,p$ in $\{a_1, \dots, a_k\}$ we have:
$f(m,m)=m$;
the relation $f(m,n)=m$ implies that $f(n,m)=n$; and
the relation $f(m,p)=f(m,l)$ implies that $f(n,p)=f(n,l)$.
\end{lem}
\begin{proof}
Since $\mathcal{V}^*_q(a_i)$ contains exactly one occurrence of $r$ for all
$i$, Lemma~\ref{lem:no-qrrq} implies that $\mathcal{V}^*_q(a_i, a_j)$
contains exactly two occurrences of $r$ for all $i,j$. As a
consequence, $f_{q,r}(a_i, a_j)$ is always defined. Conversely,
$f_{q,r}(m,n)$ is not defined if
$\{m,n\}\not\subseteq\{a_1, \dots, a_k\}$, since $\mathcal{V}^*_q(m,n)$ does
not contain two occurrences of $r$.
For Equation~\eqref{eq:sum}, first we compute:
\begin{equation*}
f(a_1, a_2) + f(a_2,a_1) = a_1 + \occ(a_1) - \occ(a_2) + a_2 + \occ(a_2) - \occ(a_1)
= a_1 + a_2
\end{equation*}
and then the result is easily proved by induction. The three other
facts of this lemma are immediate consequences of
Equation~\eqref{eq:sum} and of the definition of $f$. \relax
\end{proof}
Now we have enough machinery to state conditions on $(q,r)$ which
characterize situations where $q$-quasiperiodicity implies
$r$-quasiperiodicity, or implies non-$r$-quasiperiodicity.
\begin{dfn}
Let $q,r$ denote finite nonempty words of the same length. The
couple $(q,r)$ is:
\begin{itemize}
\item \emph{compatible} if there exist integers $m,n$ such that
$f_{q,r}(m,n)$ is defined;
\item \emph{definite} if $f_{q,r}(m,n)$ is defined wherever
$\mathcal{V}^*_q(m,n)$ is;
\item \emph{positive} if $f_{q,r}(m,n)$ is defined at least on one
couple and is nonnegative wherever it is defined.
\end{itemize}
\end{dfn}
\noindent
Since $f_{q,r}$ is computable in time $O(|q|^3)$, those relations are
testable with the same time complexity.
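Assuming the helpers \verb|proper_overlap| and \verb|f_table| from the
sketch above, the three relations can be tested directly from the
definition; the definiteness test below simply asks that every existing
proper overlap of $q$ contain an occurrence of $r$ (this matches the
characterization of definite couples given by the last proposition of
Section~\ref{sec:deriv}).
\begin{verbatim}
def classify(q, r):
    spans = [m for m in range(len(q)) if proper_overlap(q, m) is not None]
    f = f_table(q, r)
    compatible = bool(f)
    # definite: f(m, n) defined on every pair of existing proper overlaps,
    # i.e. every existing V*_q(m) contains an occurrence of r
    definite = all((m, n) in f for m in spans for n in spans)
    positive = compatible and all(value >= 0 for value in f.values())
    return compatible, definite, positive
\end{verbatim}
For the couple $(q,r)$ of the running example this returns compatible
and definite, but not positive, consistently with the table above,
which is everywhere defined and contains the negative entry $-2$.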
\begin{thm}
\label{thm:rel-qp}
Let $q,r$ denote two finite, nonempty words of the same length.
\begin{enumerate}
\item The couple $(q,r)$ is non-compatible if and only if
$q$-quasiperiodicity implies non-$r$-quasiperiodicity.
\item The couple $(q,r)$ is definite and positive if and only if
$q$-quasiperiodicity implies $r$-quasiperiodicity.
\item The couple $(q,r)$ is compatible, but not both definite and
positive, if and only if there exists a biinfinite word with quasiperiods $q$ and
$r$, and another biinfinite word that has $q$ but not $r$ as a quasiperiod.
\end{enumerate}
\end{thm}
\begin{proof} We prove the three statements separately.
\emph{Statement 1.} Let $\mathbf{w}$ denote a $q$-quasiperiodic word. It
contains a factor of the form $u=\mathcal{V}^*_q(m, n)$. By hypothesis
$f_{q,r}(m, n)$ is not defined, which means that $u$ contains at most
one occurrence of $r$. By Lemma~\ref{lem:no-qrrq}, at least
one position in $\mathbf{w}$ is not covered by $r$, so $\mathbf{w}$ is not
$r$-quasiperiodic.
Conversely, suppose that each $q$-quasiperiodic word is
non-$r$-quasiperiodic. Let $m,n$ denote a pair of integers such that
the proper $3$-overlap $\mathcal{V}^*_q(m,n)$ exists and consider the
infinite periodic word given by
$\mathbf{w} = \mathcal{V}_q(\dots m, n, m, n, m, n \dots)$. Since $\mathbf{w}$ is not
$r$-quasiperiodic, either $\mathcal{V}^*_q(m,n)$ or $\mathcal{V}^*_q(n,m)$ (or both)
contains fewer than two occurrences of $r$. In other terms, either
$f_{q,r}(m,n)$ or $f_{q,r}(n,m)$ is not defined, and by
Lemma~\ref{lem:lines} the other one is not defined either. Since
this reasoning holds for any $m,n$ where $\mathcal{V}^*_q(m,n)$ exists, the
function $f_{q,r}$ is nowhere defined.
\smallskip
\emph{Statement 2.} Suppose $(q,r)$ is definite and positive and
consider $\mathbf{w}$ a $q$-quasiperiodic biinfinite word. Any position in
$\mathbf{w}$ is covered by an occurrence of $q$; let $m,n$ denote the
integers such that this occurrence is the middle one in the proper
$3$-overlap $\mathcal{V}^*_q(m,n)$. By hypothesis, $f_{q,r}(m,n)$ is defined
and nonnegative, so $\mathcal{V}^*_q(m,n)$ contains a proper overlap of
$r$. Lemma~\ref{lem:no-qrrq} implies that this proper overlap of $r$
covers the middle occurrence of $q$. Consequently, any position in
$\mathbf{w}$ is covered by an occurrence of $r$.
Conversely, suppose that $q$-quasiperiodicity implies
$r$-qua\-si\-pe\-rio\-di\-ci\-ty. Let $m,n$ denote an arbitrary pair
of integers such that the proper $3$-overlap $\mathcal{V}^*_q(m,n)$
exists and let $\mathbf{w}$ denote the periodic biinfinite word given by
$\mathbf{w}=\mathcal{V}^*_q(\dots m,n,m,n,m,n \dots)$. By hypothesis this word is
$r$-quasiperiodic, so by Lemma~\ref{lem:no-qrrq} the word
$\mathcal{V}^*_q(m,n)$ contains a proper overlap of $r$. Consequently,
$f_{q,r}(m,n)$ is defined and nonnegative.
\smallskip
\emph{Statement 3.} The proof is immediate as this statement
exhausts all possibilities not covered by Statements~1 and 2.
\relax
\end{proof}
\section{On compatible and positive couples of quasiperiods}
\label{sec:deriv}
In this section, we investigate what the property ``compatible and
positive'' implies for a couple of words (not necessarily
definite). We get a characterization in terms of derivated sequences,
and another one in terms of chains of quasiperiods.
\smallskip
The concept of derivated sequence originates from Mouchard's work on
quasiperiodic finite words~\cite{IliopoulosMouchard1999Jalc}, and was
later used by Marcus and Monteil to establish independence results
between quasiperiodicity and other properties on right infinite
words~\cite{MarcusMonteil2006Arxiv}. We start by recalling the
definition.
\begin{dfn}
Let $\mathbf{w}$ denote a biinfinite word and $q$ one of its
quasiperiods. The \emph{sequence of positions} of $q$ in $\mathbf{w}$ is the
sequence $(q_n)_{n \in \mathbb{Z}}$ of positions of occurrences of $q$ in
$\mathbf{w}$, in increasing order, such that $q_0$ is the position of the
leftmost occurrence covering the position $0$. If $(q_n)_{n \in \mathbb{Z}}$
is the sequence of positions of $q$ in $\mathbf{w}$, then
$(q_{n+1} - q_n)_{n \in \mathbb{Z}}$ is called the \emph{derivated sequence}
of $\mathbf{w}$ along $q$.
\end{dfn}
\noindent
For example, in Equation~\eqref{eq:badex}, the derivated sequence of
$\mathbf{w}$ along $q$ is ${}^{\omega}(7)(8)^\omega$ and the derivated
sequence along $r$ is ${}^{\omega}(7) \, 5 \, (8)^\omega$. Observe
that a word is $q$-quasiperiodic if and only if its derivated sequence
along $q$ is bounded by $|q|$. In this case, the derivated sequence
contains enough information to reconstruct the initial word.
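As a small computational aid, the sequence of positions and the
derivated sequence along $q$ can be extracted from any finite window
of the word (a Python sketch; for a biinfinite word one would apply it
to longer and longer central factors):
\begin{verbatim}
def derivated_sequence(window, q):
    # positions of the occurrences of q inside the finite window
    positions = [i for i in range(len(window) - len(q) + 1)
                 if window[i:i + len(q)] == q]
    # differences between consecutive positions
    return [b - a for a, b in zip(positions, positions[1:])]

# away from its ends, the window is q-quasiperiodic exactly when
# all(d <= len(q) for d in derivated_sequence(window, q))
\end{verbatim}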
\medskip
Chains of quasiperiods were already mentioned in
Section~\ref{sec:det}. Recall the following consequence of
Theorem~\ref{thm:old} and Proposition~\ref{pro:predsuc}: in an
infinite word $\mathbf{w}$, the set of quasiperiods of some length $n$ is
given by a union of chains of the form $\{u_1, \dots, u_k\}$, where
$u_1$ is left special, $u_k$ is right special, no other $u_i$ is
special, and $u_{i+1}$ is the (unique) successor of $u_i$ for each
$1 \leq i < k$. Equation~\eqref{eq:badex} shows an example of a word
having two such chains for length $8$.
\begin{thm}
\label{thm:deriv}
Let $q$, $r$ denote two finite nonempty words of the same length.
The following statements are equivalent:
\begin{enumerate}
\item the couple $(q,r)$ is compatible and positive;
\item for every biinfinite word $\mathbf{w}$ having quasiperiods $q$ and $r$, the
derivated sequences along $q$ and $r$ are equal;
\item for every biinfinite word $\mathbf{w}$ having quasiperiods $q$ and $r$, those
quasiperiods belong to the same chain.
\end{enumerate}
\end{thm}
\noindent
We actually prove something slightly more precise: the next
proposition implies Theorem~\ref{thm:deriv}.
\begin{pro}
\label{pro:spec}
Let $\mathbf{w}$ denote a biinfinite word and $q$, $r$ denote two
quasiperiods of $\mathbf{w}$ of the same length. The following statements
are equivalent:
\begin{enumerate}
\item for each pair of integers $m,n$ such that the proper $3$-overlap
  $\mathcal{V}^*_q(m,n)$ exists and is a factor of $\mathbf{w}$, we have
  $f_{q,r}(m,n) \geq 0$;
\item the derivated sequences of $\mathbf{w}$ along $q$ and along $r$ are
equal up to a shift of one position;
\item the quasiperiods $q$ and $r$ belong to the same chain in
$\mathbf{w}$.
\end{enumerate}
\end{pro}
The next lemma gives $2 \implies 1$ in Proposition~\ref{pro:spec},
because $x$ and $y$ are nonnegative integers. However it is actually
more general and we will also reuse it later in the proof.
\begin{lem}
\label{lem:equiv-f} %
Let $\mathbf{w}$ denote an infinite word and $q$, $r$ two quasiperiods of
$\mathbf{w}$ of the same length. The derivated sequences of $\mathbf{w}$ along $q$ and
along $r$ are the same if and only if either: for each pair of natural
integers $(x,y)$ such that $f(x,y)$ is defined, we have
$f(x, y) = x$; or for each such pair $(x,y)$, we have $f(x,y) = y$.
\end{lem}
\begin{proof}
Call $(q_n)_{n \in \mathbb{Z}}$ the sequence of positions of $q$ in $\mathbf{w}$,
and similarly $(r_n)_{n \in \mathbb{Z}}$ the sequence of positions of $r$
in $\mathbf{w}$; observe that
$r_{n+1} - r_n = f(q_n - q_{n-1}, q_{n+1} - q_n)$.
The conditions $f(x,y) = x$ and $f(x,y) = y$ translate respectively to
\begin{equation*}
f(q_{n+1}-q_n, q_n-q_{n-1}) = q_{n+1} - q_n, \quad \text{ and } \quad
f(q_{n+1}-q_n, q_n-q_{n-1}) = q_n - q_{n-1}.
\end{equation*}
By replacing in the previous equation, we get the two possibilities
\begin{equation*}
r_{n+1} - r_n = q_{n+1} - q_n, \quad \text{ and } \quad
r_{n+1} - r_n = q_n - q_{n-1},
\end{equation*}
which both imply that the derivated sequences are equal up to a
shift. The converse argument works symmetrically.
\relax
\end{proof}
The next lemma proves $1 \implies 2$ in Proposition~\ref{pro:spec},
which is the most technical part.
\begin{lem}
\label{lem:1imp2} %
Let $\mathbf{w}$ denote an infinite word and $q$, $r$ two quasiperiods of
$\mathbf{w}$ of the same length. Suppose that for each pair of integers
$(m,n)$ such that the proper $3$-overlap $\mathcal{V}^*_q(m,n)$ exists and
is a factor of $\mathbf{w}$, we have $f_{q,r}(m,n) \geq 0$. Then the
derivated sequences of $\mathbf{w}$ along $q$ and along $r$ are identical,
up to a shift of one position.
\end{lem}
\begin{proof}
Let $\tau_1, \tau_2, \dots, \tau_m$ denote all the integers such
that the proper overlap $\mathcal{V}^*_q(\tau_i)$ exists and is a factor of
$\mathbf{w}$; sort the $\tau_i$ by increasing length. If $x,y$ are integers
such that the proper $3$-overlap $\mathcal{V}^*_q(x,y)$ exists and is a
factor of $\mathbf{w}$, then we call $(x,y)$ an \emph{occurring couple}.
We only need to prove that either for each $x$ such that
$(\tau_1, x)$ is an occurring couple, we have
$f(\tau_1,x) = \tau_1$; or that for each such $x$, we have
$f(x,\tau_1)=\tau_1$. Indeed, if $f(\tau_1,x)=\tau_1$ for all $x$
(or, symmetrically, $f(x,\tau_1)=\tau_1$ for all $x$), then for each occurring couple $(x,y)$ we
have $f(\tau_1,x)=f(\tau_1,y)$; by Lemma~\ref{lem:lines} we deduce
that $f(x,y)=f(x,x)=x$; subsequently Lemma~\ref{lem:equiv-f} shows
that this is sufficient to finish our proof. Therefore now we argue
that $f(\tau_1, x) = \tau_1$ for each $x$ such that $(\tau_1, x)$ is
an occurring couple.
Let $\sigma_1 = \tau_1$ and $\sigma_2, \dots, \sigma_n$ all the
integers such that $(\sigma_1, \sigma_i)$ is an occurring couple;
sort the $\sigma_i$ by increasing length (in particular
$(\sigma_i)_{1 \leq i \leq n}$ is a subsequence of
$(\tau_i)_{1 \leq i \leq m}$). The couple $(\sigma_1, \sigma_1)$ is
not necessarily an occurring couple, but Lemma~\ref{lem:lines}
guarantees that $f(\sigma_1, \sigma_1)$ is well-defined and that
$f(\sigma_1, \sigma_1) = \sigma_1$. We can assume that
$f(\sigma_1, \sigma_2) = \sigma_1$; if it is not the case, then
$f(\sigma_2, \sigma_1) = \sigma_1$ by Lemma~\ref{lem:lines} and
without loss of generality we consider the function
$f'(x,y) = f(y,x)$ instead of $f$. Now reason by contradiction and
consider the smallest integer $j$ such that
$f(\sigma_1, \sigma_j) \neq \sigma_1$. By hypothesis we can rule out
$f(\sigma_1, \sigma_j) < 0$, so three cases remain to be analysed.
\textbf{Case 1.} If $f(\sigma_1, \sigma_j) > \sigma_j$, then
use Lemma~\ref{lem:lines} to write
$\sigma_1 + \sigma_j = f(\sigma_1, \sigma_j) + f(\sigma_j, \sigma_1)$, which
is equivalent to $f(\sigma_j,\sigma_1)=\sigma_1+\sigma_j-f(\sigma_1,\sigma_j)$.
The quantity $\sigma_j-f(\sigma_1,\sigma_j)$ is negative so we have
$f(\sigma_j, \sigma_1) < \sigma_1$, which is a contradiction since
$\sigma_1$ is the smallest possible span.
\textbf{Case 2.} If $\sigma_1 < f(\sigma_1, \sigma_j) = x < \sigma_j$,
then consider the $q$-quasiperiodic word whose derivated sequence is
${}^\omega(x\, \sigma_1\, \sigma_j)^\omega$; it would have
$f(\sigma_1,x) = \sigma_1$ and $f(x,\sigma_1) = x$. Thus
$f_{q,r}(x,x)=\sigma_1$, but Lemma~\ref{lem:gal-nfo} implies $f_{q,r}(x,x)=x$:
we have a contradiction.
\textbf{Case 3.} Finally, suppose we have
$f(\sigma_1, \sigma_j) = \sigma_j$. Recall that $j$ is minimal and that $j > 2$.
By Lemma~\ref{lem:gal-nfo} we have
$f(\sigma_j, \sigma_1) = \sigma_1$, and by Lemma~\ref{lem:lines}
the relations $f(\sigma_1,\sigma_1) = f(\sigma_1,\sigma_2)$ and
$f(\sigma_j, \sigma_1) = \sigma_1$ imply that
$f(\sigma_j, \sigma_2) = \sigma_1$ as well. Therefore we have four
$3$-overlaps of $q$ with spans $(\sigma_1, \sigma_1)$;
$(\sigma_1, \sigma_2)$; $(\sigma_j, \sigma_1)$; and
$(\sigma_j, \sigma_2)$; all with the same induced $r$-overlap, which
has span $\sigma_1$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.4]
\draw (-8, 8 ) rectangle node{$q$} ++(9,1);
\draw ( 0, 6.5) rectangle node{$q$} ++(9,1);
\draw ( 8, 5 ) rectangle node{$q$} ++(9,1);
\draw (-8, 3 ) rectangle node{$q$} ++(9,1);
\draw ( 0, 1.5) rectangle node{$q$} ++(9,1);
\draw ( 7, 0 ) rectangle node{$q$} ++(9,1);
\draw (-4, -2 ) rectangle node{$r$} ++(9,1);
\draw ( 4, -3.5) rectangle node{$r$} ++(9,1);
\draw[dashed] (-4,-4) -- (-4,10);
\draw[dashed] (13,-4) -- (13,10);
\draw[<->] (+4,-0.5) -- node[above]{$\sigma_1$} ++(1,0);
\draw[<->] ( 0, 4.5) -- node[above]{$\sigma_1$} ++(1,0);
\draw[<->] ( 0, 9.5) -- node[above]{$\sigma_1$} ++(1,0);
\draw[<->] ( 7, 3 ) -- node[above]{$\sigma_2$} ++(2,0);
\draw[<->] ( 8, 8 ) -- node[above]{$\sigma_1$} ++(1,0);
\draw[decorate,decoration={brace,mirror},yshift=-1mm] (8,5) -- node[below]{$p_1$} ++(5,0);
\draw[decorate,decoration={brace,mirror},yshift=-1mm] (13,5) -- node[below]{$s_1$} ++(4,0);
\draw[decorate,decoration={brace,mirror},yshift=-1mm] (7,0) -- node[below]{$p_2$} ++(6,0);
\draw[decorate,decoration={brace,mirror},yshift=-1mm] (13,0) -- node[below]{$s_2$} ++(3,0);
\end{tikzpicture}
\caption[Illustration of the proof of
Lemma~\ref{lem:1imp2}]{Illustration of Case~3 in the proof of
Lemma~\ref{lem:1imp2}}
\label{fig:main-fat}
\end{figure}
From $f(\sigma_1,\sigma_1) = f(\sigma_1, \sigma_2) = \sigma_1$ we
deduce that $\mathcal{V}_q(\sigma_1,\sigma_1)$ and $\mathcal{V}_q(\sigma_1,\sigma_2)$ have a
common factor $\mathcal{V}_r(\sigma_1)$. In either case, the middle occurrence
of $q$ is contained in this factor and the last occurrence of $q$ starts
in this factor. This situation is displayed in
Figure~\ref{fig:main-fat}. There exist words $s_1, s_2$ such that
$\mathcal{V}_q(\sigma_1)$ is a suffix of $\mathcal{V}_r(\sigma_1)s_1$, and $\mathcal{V}_q(\sigma_2)$ is
a suffix of $\mathcal{V}_r(\sigma_1)s_2$. Call $p_1, p_2$ the words satisfying
$p_1s_1 = p_2s_2 = q$, and without loss of generality suppose that
$|p_2|>|p_1|$. Observe that both $p_1$ and $p_2$ are suffixes of the
same word $\mathcal{V}_r(\sigma_1)$, and $|p_2|=|p_1|+\sigma_2-\sigma_1$. As
$p_1,p_2$ are also both prefixes of $q$, we deduce that $p_2$ has a
period $\sigma_2 - \sigma_1$. From
$f(\sigma_1, \sigma_2) = f(\sigma_j, \sigma_2) = \sigma_1$ the same
argument proves that there exist words $p'_1, p'_2$ such that
$\mathcal{V}_q(\sigma_1)$ is a prefix of $p'_1\mathcal{V}_r(\sigma_1)$, and $\mathcal{V}_q(\sigma_j)$
is
a prefix of $p'_2\mathcal{V}_r(\sigma_1)$. Call $s'_1, s'_2$ the words
satisfying $p'_1s'_1 = p'_2s'_2 = q$, and without loss of generality
suppose that $|s'_2| > |s'_1|$. Then remark that $s'_2$ has a period
$\sigma_j - \sigma_1$. In order to simplify notation in the rest of
the proof, let $\alpha = p_2$ and $\beta = s'_2$.
We have
$|\mathcal{V}_r(\sigma_1)|=2|q|-\sigma_1=|q|+|\alpha|+|\beta|-\sigma_2-\sigma_j$.
Therefore $|q|+\sigma_2+\sigma_j - \sigma_1 = |\alpha|+|\beta|$.
Since $\alpha$ is a prefix and $\beta$ a suffix of $q$, by a length
argument $\alpha$ has a non-empty suffix which is a prefix of
$\beta$; call it $\theta$. We have
$|\theta| = |\alpha|+|\beta|-|q| = \sigma_j+\sigma_2-\sigma_1 \geq
(\sigma_j - \sigma_1) + (\sigma_2 - \sigma_1)$. By the Fine-Wilf
Theorem (see~\cite[Prop.~1.3.5 and its proof]{Lothaire1983}),
$\theta$ has a period $\delta$ of length
$|\delta| = \gcd(\sigma_j-\sigma_1, \sigma_2-\sigma_1)$. A period of
$\alpha$ is a suffix of $\theta$ and a period of $\beta$ is a prefix
of $\theta$; by a divisibility argument, each of these periods is
itself $|\delta|$-periodic, therefore $q$ is $|\delta|$-periodic.
Besides, observe that $\beta$ is a prefix and $\alpha$ a suffix of
$r$, therefore $r$ is $|\delta|$-periodic as well.
Let us show that, for all $n$, we have $\mathbf{w}(n) = \mathbf{w}(n+|\delta|)$.
The only case where this could fail is if $n$ and $n+|\delta|$ do
not belong to the same occurrence of $q$ nor to the same occurrence
of $r$ in $\mathbf{w}$. Since
$|\delta|=\gcd(\sigma_j-\sigma_1, \sigma_2-\sigma_1) \leq \sigma_2$,
the only possible span for the $q$-overlap (and the $r$-overlap)
covering positions $n$ and $n+|\delta|$ is $\sigma_1$. But then,
Figure~\ref{fig:main-fat} shows that the prefix of length
$|\mathcal{V}_q(\sigma_1)|-|s_1|$ of $\mathcal{V}_q(\sigma_1)$ and the prefix of length
$|\mathcal{V}_q(\sigma_2)|-|s_2|$ of $\mathcal{V}_q(\sigma_2)$ are equal; since the latter
one is $|\delta|$-periodic, the former one is as well. By a length
argument, $\mathbf{w}(n) = \mathbf{w}(n+|\delta|)$ in any case and $\mathbf{w}$ is
periodic: a contradiction.
\relax
\end{proof}
\noindent
Finally, the next lemma gives $2 \iff 3$ in
Proposition~\ref{pro:spec}.
\begin{lem}
\label{lem:2iff3}
Let $\mathbf{w}$ denote a biinfinite word and $q$, $r$ denote two
quasiperiods of $\mathbf{w}$ of the same length. The derivated sequences of
$\mathbf{w}$ along $q$ and along $r$ are equal up to a shift of one position if and only if, up to
swapping $q$ and $r$, there exists a chain of quasiperiods
$u_1, \dots, u_k$ with $u_1=q$ and $u_k=r$, such that $u_{i+1}$ is
the successor of $u_i$.
\end{lem}
\begin{proof}
Let $(q_n)_{n \in \mathbb{Z}}$ and $(r_n)_{n \in \mathbb{Z}}$ denote the sequences
of \emph{positions} of $q$ and $r$ in $\mathbf{w}$. If the derivated
sequences are equal up to a shift of one position, then there exists
an integer $k$ in $\{1, \dots, |q|-1\}$ such that for all $n$ we
have $r_n = q_n + k$. In particular $r$ always starts at the same
position inside $q$. Differently put, this means that there exists a
word $s$ of length $k$ such that $r$ is a suffix of $qs$ and each
occurrence of $q$ in $\mathbf{w}$ is the prefix of an occurrence of $qs$.
Set $u_i = (q s)(i, \dots, i+|r|-1)$ and the implication is proved.
Conversely suppose that there is a family of quasiperiods
$u_0, u_1, \dots, u_{k-1}$ such that $u_0 = q$ and $u_{k-1} = r$ and
$u_{i+1}$ is the unique successor of $u_i$ for each
$0 \leq i < k-1$. By Proposition~\ref{pro:predsuc}, none of the
$u_i$ is right special except maybe $u_{k-1}$. Therefore, there is
an occurrence of $r$ exactly $k-1$ positions after each occurrence
of $q$ in $\mathbf{w}$. Lemma~\ref{lem:no-qrrq} ensures that no other
occurrences of $r$ appear in $\mathbf{w}$, therefore we can conclude that
$r_n = q_n+k-1$ for each integer $n$. \relax
\end{proof}
Finally, the next proposition characterizes definite couples of
quasiperiods.
\begin{pro}
Let $q,r$ denote two words of the same length. The couple $(q,r)$ is
definite if and only if each $q$-quasiperiodic biinfinite word
contains infinitely many occurrences of $r$.
\end{pro}
\begin{proof}
If $(q,r)$ is definite, then $f_{q,r}(m,n)$ is defined whenever the
proper $3$-overlap $w=\mathcal{V}^*_q(m,n)$ exists; therefore each proper
overlap of $q$ contains one occurrence of $r$. If a biinfinite word
is $q$-quasiperiodic, then it contains infinitely many occurrences
of proper overlaps of $q$, and therefore infinitely many occurrences
of $r$.
Conversely, suppose that each $q$-quasiperiodic infinite word
contains infinitely many occurrences of $r$. Suppose that $m$ is an
integer such that the proper overlap $\mathcal{V}^*_q(m)$ exists, but does
not contain an occurrence of $r$. Then the periodic biinfinite word
given by $\mathbf{w} = \mathcal{V}^*_q(\dots m, m, m \dots)$ does not contain any
occurrence of $r$; but it should also contain infinitely many
occurrences of $r$ by hypothesis. Therefore we have a contradiction
and each proper overlap $\mathcal{V}^*_q(m)$ contains an occurrence of $r$,
which shows that $(q,r)$ is definite.
\end{proof}
\section{Quasiperiods of biinfinite Sturmian words}
\label{sec:sturm}
If $\mathbf{w}$ is an infinite word (either indexed by $\mathbb{N}$ or by $\mathbb{Z}$),
then $P_\mathbf{w}(n)$ denotes the number of distinct factors of length $n$
in $\mathbf{w}$ and $P_\mathbf{w}$ is the \emph{complexity function} of $\mathbf{w}$. An
infinite word is \emph{Sturmian} if and only if it is not ultimately
periodic and satisfies $P_\mathbf{w}(n)=n+1$ for each integer
$n$. Equivalently, a word is Sturmian if and only if it is not
eventually periodic and has exactly one right special factor and one
left special factor of each length. Sturmian words are an important
and well-studied class of infinite words (see~\cite[Chapter
2]{LothaireAlgebraic} and~\cite[Chapter 6]{PytheasFogg2002}). Now we
determine the set of quasiperiods of any biinfinite Sturmian word. To
this end, if $\mathbf{w}$ is a biinfinite word, let $Q_\mathbf{w}(n)$ denote the
number of quasiperiods of length $n$ in $\mathbf{w}$.
\begin{thm}
\label{thm:main}
Let $\mathbf{w}$ denote a biinfinite Sturmian word and $n$ a nonnegative
integer.
\begin{enumerate}
\item We have $Q_\mathbf{w}(n) = 0$ if and only if $\mathbf{w}$ has a nonempty
bispecial factor of length $n-1$.
\item If $Q_\mathbf{w}(n)>0$ and $s$ denotes the shortest bispecial factor
of $\mathbf{w}$ with $|s|\geq n$, then quasiperiods of length $n$ in
$\mathbf{w}$ are exactly the factors of length $n$ in $s$.
\end{enumerate}
\end{thm}
\begin{proof}
We prove the two statements separately.
\emph{Statement~1}. If $\mathbf{w}$ has a nonempty bispecial factor
of length $n-1$, say $u$, then there exists a letter $\alpha$ such
that the (unique) right special factor of length $n$ in $\mathbf{w}$ is
$\alpha{}u$, because any suffix of a right special factor is also
right special. By Theorem~\ref{thm:old} the word $\alpha{}u$ is not
a quasiperiod of $\mathbf{w}$; by Proposition~\ref{pro:predsuc} if $\mathbf{w}$
had a quasiperiod of length $n$, then it would have a right special
quasiperiod of this length. Consequently $\mathbf{w}$ has no quasiperiod of
length $n$.
Conversely suppose that $\mathbf{w}$ has no bispecial factor of length
$n-1$. Since $\mathbf{w}$ is Sturmian, it has exactly one right-special and
one left-special factor of length $n$, so its set of factors of
length $n$ may be written
$\{c_0, \dots, c_{k-1}\} \cup \{d_0, \dots, d_{\ell-1}\} \cup \{e_0,
\dots, e_{m-1}\}$,
where $c_{k-1}$ is right special and has successors $d_0$ and $e_0$;
both $d_{\ell-1}$ and $e_{m-1}$ have successor $c_0$, which is left
special; each other $c_i$, $d_i$ and $e_i$ has respectively
$c_{i+1}$, $d_{i+1}$ and $e_{i+1}$ as an (unique) successor. We
have $k \geq 1$, but we might have $m=0$ or $\ell=0$.
Figure~\ref{fig:graphsturm} shows a graph of the ``successor''
relation. Observe that $k+\ell+m = n+1$ and that $\ell \geq 1$ and $m \geq 1$ (otherwise, we
would have a bispecial factor of length $n-1$). As a consequence,
the maximal distance between two consecutive occurrences of $c_0$ is
$\max(k+\ell, k+m)$, which is bounded by $n$. In other terms, $c_0$
is a quasiperiod of $\mathbf{w}$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[xscale=3,yscale=2,minimum size=1.2cm,>=stealth]
\node[draw,circle] (q1) at (0,0) {$c_0$};
\node[draw,circle] (q2) at (0.8,0) {$c_1$};
\node (qint1) at (1.3,0) {};
\node (qdots) at (1.5,0) {{\tiny $\dots$}};
\node (qint2) at (1.7,0) {};
\node[draw,circle] (q3) at (2.2,0) {$c_{k-2}$};
\node[draw,circle] (q4) at (3,0) {$c_{k-1}$};
\path[draw,->] (q1) to (q2);
\path[draw,->] (q2) to (qint1);
\path[draw,->] (qint2) to (q3);
\path[draw,->] (q3) to (q4);
\node[draw,circle] (t1) at (3,-1) {$d_0$};
\node[draw,circle] (t2) at (2.2,-1) {$d_1$};
\node (tint1) at (1.7,-1) {};
\node (tdots) at (1.5,-1) {{\tiny $\dots$}};
\node (tint2) at (1.3,-1) {};
\node[draw,circle] (t3) at (0.8,-1) {$d_{\ell-2}$};
\node[draw,circle] (t4) at (0,-1) {$d_{\ell-1}$};
\path[draw,->] (q4) edge[bend left=45] (t1);
\path[draw,->] (t1) to (t2);
\path[draw,->] (t2) to (tint1);
\path[draw,->] (tint2) to (t3);
\path[draw,->] (t3) to (t4);
\path[draw,->] (t4) edge[bend left=45] (q1);
\node[draw,circle] (b1) at (3,+1) {$e_0$};
\node[draw,circle] (b2) at (2.2,+1) {$e_1$};
\node (bint1) at (1.7,+1) {};
\node (bdots) at (1.5,+1) {{\tiny $\dots$}};
\node (bint2) at (1.3,+1) {};
\node[draw,circle] (b3) at (0.8,+1) {$e_{m-2}$};
\node[draw,circle] (b4) at (0,+1) {$e_{m-1}$};
\path[draw,->] (q4) edge[bend right=45] (b1);
\path[draw,->] (b1) to (b2);
\path[draw,->] (b2) to (bint1);
\path[draw,->] (bint2) to (b3);
\path[draw,->] (b3) to (b4);
\path[draw,->] (b4) edge[bend right=45] (q1);
\end{tikzpicture}
\caption[Rauzy graph of a Sturmian word]{Successor graph (usually
called \emph{Rauzy graph}) of factors of length $n$ of a
Sturmian word}
\label{fig:graphsturm}
\end{figure}
\emph{Statement~2}. Let $\mathbf{w}$ denote a biinfinite Sturmian
word and suppose that $Q_\mathbf{w}(n)>0$ for some integer $n$. The left
special and the right special factors of length $n$ of $\mathbf{w}$, call
them $\ell$ and $r$, are both quasiperiods by
Proposition~\ref{pro:predsuc}. Call $s$ the shortest factor of $\mathbf{w}$
having $\ell$ as a prefix and $r$ as a suffix. The set of factors of
length $n$ of $s$ is given by a sequence $u_1, u_2, \dots u_k$,
where $u_1 = \ell$ and $u_k = r$, such that $u_{i+1}$ is the
successor of $u_i$ for each $1 \leq i < k$. By
Proposition~\ref{pro:predsuc} again, the set $\{u_1, \dots, u_k\}$
is the set of quasiperiods of length $n$ of $\mathbf{w}$. Observe that $s$
is, by definition, exactly the shortest bispecial factor of $\mathbf{w}$
not shorter than $n$.
\end{proof}
Since any Sturmian word has infinitely many bispecial factors, and the
differences between consecutive bispecial lengths are unbounded, we have:
\begin{cor}
Each biinfinite Sturmian word $\mathbf{w}$ has infinitely many quasiperiods.
Moreover, $Q_\mathbf{w}$ is unbounded.
\end{cor}
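The theorem can also be explored experimentally on long central
factors of a concrete Sturmian word, such as the Fibonacci word. The
following Python sketch counts, as a rough finite-window proxy for
$Q_\mathbf{w}(n)$, the factors of length $n$ whose occurrences cover every
position of the window except possibly the first and last $n-1$ ones;
edge effects may produce spurious candidates, so this is only a
heuristic check, not a substitute for the proof.
\begin{verbatim}
def window_quasiperiods(window, n):
    # candidate quasiperiods of length n: factors whose occurrences
    # cover all "interior" positions of the finite window
    interior = set(range(n - 1, len(window) - n + 1))
    result = set()
    for i in range(len(window) - n + 1):
        q = window[i:i + n]
        covered = set()
        for j in range(len(window) - n + 1):
            if window[j:j + n] == q:
                covered.update(range(j, j + n))
        if interior <= covered:
            result.add(q)
    return sorted(result)
\end{verbatim}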
\section{Conclusion}
\label{sec:conclu}
As explained in the introduction, the biinfinite case may give nicer results about
quasiperiodicity of subshifts and of Sturmian words. This paper provided a toolbox to
study the quasiperiods of biinfinite words, but many questions are still to be answered.
\begin{enumerate}
\item An $\mathbb{N}$-word $\mathbf{w}$ is periodic if and only if $Q_\mathbf{w}(n)>0$ for each large enough
$n$. Is it possible to characterize ultimately periodic $\mathbb{Z}$-words in terms of
quasiperiods?
\item An $\mathbb{N}$-word $\mathbf{w}$ is standard Sturmian if and only if it satisfies $Q_\mathbf{w}(n)=0$
exactly when there is a bispecial factor of length $n-1$ in $\mathbf{w}$. Is there a
characterization of biinfinite Sturmian words in terms of quasiperiods?
\item What about other families of low-complexity sequences, such as episturmian or
Arnoux-Rauzy sequences?
\item If an $\mathbb{N}$-word is multi-scale quasiperiodic, then it is
uniformly recurrent~\cite{MarcusMonteil2006Arxiv}. It is easy to
construct a $\mathbb{Z}$-word which is multi-scale quasiperiodic but not
uniformly recurrent: the word ${}^\omega(ba)\cdot(ab)^\omega$ has
quasiperiod $a(ba)^n$ for each positive $n$, but the factor $aa$
occurs only once. Are all such words ultimately periodic? If not,
how can they be characterized?
\item If a biinfinite word has infinitely many quasiperiods, does it necessarily have
quasiperiodic derivated sequences? If not, can we build a counter-example which is
uniformly recurrent?
\end{enumerate}
\paragraph{Acknowledgements.}
The authors would like to thank Gwena{\"e}l Richomme for the proof of
Lemma~\ref{lem:no-qrrq} and for proofreading many versions of this paper,
which notably helped to clarify the statement of Theorem~\ref{thm:main}.
The authors are also grateful to Patrice S{\'e}{\'e}bold, who provided
Lemma~\ref{lem:patrice} and its proof.
\bibliographystyle{plain}
\section{Introduction}
The probability weighting (also called distortion) function (see \cite{TF95,P98}) plays a key role in many theories of choice under uncertainty, such as Kahneman and Tversky's \cite{KT79,TK92} cumulative prospect theory (CPT), Yaari's \cite{Y87} dual model, Lopes' SP/A model, and Quiggin's \cite{Q82} rank-dependent utility theory (RDUT). These theories (often called behavioral finance theories) provide satisfactory explanations of many paradoxes that the classical expected utility theory (EUT) fails to explain (see, e.g. \cite{FS48,A53,E61,MP85}).
\par
In recent years, much attention has been paid to the theoretical study of behavioral finance investment models (including portfolio choice and optimal stopping) involving probability weighting functions; see, e.g., \cite{JZ08,HZ11,JZZ11,XZ13,XZ16,RV17}.
A typical approach to solving such problems is as follows. Instead of looking for the optimal strategies directly, one first reduces the investment problems to their corresponding so-called quantile optimization problems, in which the decision variables become quantile functions (rather than portfolio choices or stopping times as in the original problems). By this change, dynamic stochastic control problems become static deterministic optimization problems.
In the second step, one tries to solve the resulting quantile optimization problems via optimization techniques such as convex analysis. The last step is to recover the optimal strategies by appealing to suitable hedging theories, such as backward stochastic differential equation theory (for portfolio choice problems) or Skorokhod embedding theory (for optimal stopping problems).
The main difficulty of this approach typically lies in the second step, that is, in solving the quantile optimization problems.
An obvious hurdle comes from the simple fact that quantile functions (or simply quantiles), being the inverse functions of probability distribution functions, are always increasing. One must therefore take this monotonicity constraint (at a minimum) into consideration when solving quantile optimization problems.
\par
Due to the lack of a systematic approach, quantile optimization problems were tackled in isolation (usually under strong assumptions) in the literature (\cite{JZ08, HZ11}). Xia and Zhou \cite{XZ16}, for the first time, provided a systematic approach, namely the calculus-of-variations method, to solving a type of quantile optimization problems. They demonstrated their approach by solving a quantile optimization problem within the RDUT framework.
Shortly after, the author \cite{X16} provided another simple way, based on a change of variable and a relaxation method, to solve the same type of problems as in \cite{XZ16}. In this type of problems, the constraints on the decision variables (namely, quantiles) are almost minimal: in addition to the monotonicity constraint (which, as mentioned earlier, is a must for quantile optimization problems), one only requires the so-called budget constraint, which is, mathematically speaking, a one-dimensional integral constraint that can be removed by employing a Lagrange multiplier.
\par
On the other hand, the probability weighting function also appears in the risk-sharing literature. In the context of insurance, the primary risk-sharing problem is how to design an insurance contract for an insured and an insurer that achieves Pareto optimality for them.
Designing insurance contracts within the RDUT framework has also been studied in the literature (\cite{CDT00,DS07,CD08,BHYZ15, XZZ15}).
All these papers assume that the probability weighting function has a special shape, such as being convex or reverse $S$-shaped.
Similar to the aforementioned investment problems, one can deal with these contract design problems by the same approach: one first turns them into quantile optimization problems, then solves the latter, and lastly recovers the optimal contracts.
\par
The main difficulty still lies in the second step, but there is a key difference between the formulations of the quantile optimization problems for investment models (called the first type) and those for insurance contract design models (called the second type). It comes from the fact that, when designing an insurance contract, one has to take both the insured and the insurer into account simultaneously so as to achieve Pareto optimality for them, which mathematically means that both the indemnity and the retention functions must be increasing. Both Huberman, Mayers, and Smith Jr \cite{HMS83} and Picard \cite{P00} call the requirement that the indemnity and the retention be increasing the ``incentive compatibility'' constraint for an optimal insurance contract. Mathematically speaking, it leads to the new, second type of quantile optimization problems, in which the derivatives of the decision quantiles are bounded. Because this constraint is \emph{infinite}-dimensional, it makes the second type of problems harder than the first. If one simply ignored the constraint, the quantile optimization problems would reduce to the first type, and the resulting optimal contract might cause a potentially severe problem of moral hazard (see \cite{BHYZ15,XZZ15}).
To the best of our knowledge, there is no general systematic approach to solving the second type of quantile optimization problems. The calculus-of-variations method has been applied in the insurance literature, but without taking the constraint of bounded derivatives into account. For example, Spence and Zeckhauser \cite{SZ71} used this method to solve an insurance contract design problem in the EUT setting without considering the constraint; the optimal contract turns out to be the classical deductible one, which happens to satisfy the constraint of bounded derivatives. As mentioned earlier, if the problem were considered within a behavioral finance framework, the optimal contract might cause a potentially severe moral hazard problem. In fact, Xu et al. \cite{XZZ15} is the only existing work we know of that tackles this type of problems with the constraint kept in mind, and only partial results are obtained there. It seems that the calculus-of-variations method alone cannot provide a satisfactory solution for this type of problems.
\par
In this paper we further develop the author's \cite{X16} relaxation method and provide a systematic approach to solving this new type of problems, which are subject to the constraint of bounded derivatives. The most novel part of this paper is that we link the problem to a free boundary problem for a second-order nonlinear ordinary differential equation (ODE), which is similar to the Black-Scholes ODE for perpetual American options and has been well studied in the literature, both theoretically and numerically. The optimal quantile is expressed in terms of the solution of the ODE. To the best of our knowledge, this link has not been made before. It also allows us to give a similar ODE interpretation for the optimal quantiles obtained in \cite{XZ16} and \cite{X16}.
\par
The rest of this paper is organized as follows. In Section 2, we introduce the insurance background of the problem. In Section 3 we propose our new type of quantile optimization problems. Section 4 is devoted to a change of variable that simplifies the formulation of the problem, and to studying its feasibility. In Section 5, we further develop the relaxation method so as to solve the new type of problems completely. Some concluding remarks are given in Section 6, where we point out another possible approach (namely, using the dynamic programming principle) and its limitations in tackling the new type of problems.
\subsection*{Notation}
\noindent
Generally speaking, quantiles are always increasing and may not be continuous. Depending on the definition, they may be left- or right-continuous.
\par
Let $F$ be the probability distribution function of a random variable. In this paper, we define the quantile function (or simply called quantile) of the random variable (or the left-continuous inverse function of $F$) as
\[F^{-1}(p):=\inf\{z\in\ensuremath{\operatorname{\mathbb{R}}}\mid F(z)\geq p\},\quad \forall\; p\in(0,1];\]
with the convention that $F^{-1}(0):=F^{-1}(0+)$ so that it is continuous at $0$. By this definition, a quantile is always increasing and left-continuous.\footnote{The quantile of the random variable is right-continuous if defined as $F^{-1}(p)=\inf\{z\in\ensuremath{\operatorname{\mathbb{R}}}\mid F(z)> p\}$ for $p\in[0,1)$. } We denote by $\mathbb{Q}$ the set of quantiles for random variables.
\par
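For a finite sample, the left-continuous quantile just defined can be computed as follows (a minimal Python sketch based on the empirical distribution of the sample; the function names are ours):
\begin{verbatim}
import numpy as np

def empirical_quantile(samples):
    # left-continuous inverse of the empirical distribution function:
    # F^{-1}(p) = inf{ z : F(z) >= p }, with F(xs[k]) = (k + 1)/n
    xs = np.sort(np.asarray(samples, dtype=float))
    n = len(xs)
    def F_inv(p):                      # p in (0, 1]; F_inv(0) = F_inv(0+)
        k = int(np.ceil(p * n)) - 1
        return xs[max(k, 0)]
    return F_inv
\end{verbatim}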
On the other hand, if the incentive compatibility constraint for optimal insurance contracts is taken into account, it turns out that only absolutely continuous quantiles will be of interest.
\section{Insurance background}
\noindent
In an insurance contract design problem, one seeks the best way to share a potential loss between an insured and an insurer so as to achieve Pareto optimality for them.
\par
Let $I(X)$ and $R(X)$ be the losses borne by the insurer and by the insured, respectively, when a potential loss $X\geq 0$ occurs in the future. They are called compensation (or indemnity) and retention functions, respectively.
\par
Economically speaking, one always has
\begin{align} \label{totalloss}
I(X)+R(X)=X,\quad I(0)=R(0)=0.
\end{align}
Furthermore, both the insurer and the insured should bear more if a bigger loss occurs.
If one of them bore less, it could cause the severe moral hazard problem pointed out earlier. Therefore, mathematically speaking, it is necessary to require that both $I(X)$ and $R(X)$ be increasing with respect to $X$, that is,
\begin{align} \label{monotonloss}
I(x)\geq I(y),\quad R(x)\geq R(y),\quad \forall\; x\geq y\geq 0.
\end{align}
This is the incentive compatibility constraint for optimal insurance contract.
\par
It is easily seen that we can express the joint constraints of \eqref{totalloss} and \eqref{monotonloss} via the
following single one
\begin{align} \label{monotonloss1}
R(0)=0,\quad 0\leq R(x)-R(y)\leq x-y, \quad \forall\; x\geq y\geq 0.
\end{align}
This requirement can also be stated as
\begin{multline} \label{monotonloss2}
\qquad \textrm{$R$ is increasing, absolutely continuous,} \\ \textrm{$R(0)=0$ and $0\leq R'\leq 1$ almost everywhere (a.e.).\qquad}
\end{multline}
\par
On the other hand, different risk measures lead to different insurance contract design models. In this paper we consider the problem within the RDUT framework. In this framework, the insured's risk measure for a (random) final wealth $Y\geq 0$ is given by
\begin{align} \label{vdef}
V(Y):=\int_{0}^{\infty}u(z)w'(1-F_{Y}(z))\ensuremath{\operatorname{d}\! } F_{Y}(z).
\end{align}
Here $u$ is a twice differentiable utility function mapping $\ensuremath{\operatorname{\mathbb{R}}}^{+}$ onto itself with $u'>0$ and $u''<0$; $w$ is a probability weighting function in the set of probability weighting functions
\[\mathcal{D}=\{w:[0,1]\mapsto [0,1]\mid \textrm{bijection, continuously differentiable}\};\]
and $F_{Y}$ is the probability distribution function of $Y$. One can easily show that
\begin{align}\label{v(y)}
V(Y)=\int_{0}^{1}u
\left(F_{Y}^{-1}(p)\right)w'(1-p)\ensuremath{\operatorname{d}\! p}.
\end{align}
\par
We can now formulate the insurance contract design problem for the insured as
\begin{align} \label{ins1}
\max_{I}&\;V(\beta-X+I(X)) \\
\mathrm{s.t.}&\; \BE{I(X)}\leq \pi, \nonumber
\end{align}
where $\beta$ represents the insured's (constant) final wealth if the loss $X$ does not occur, and $\pi$ denotes an upper bound for the value of the contract.
We see that $\beta-X+I(X)$ is the net wealth that the insured will have after the loss $X$ has occurred and the claim amount $I(X)$ has been received from the insurer. In this model, we also assume that the insurer is risk neutral (see, e.g. \cite{A63,R79,GS96}), so the value of the contract is simply given by $\BE{I(X)}$. The constraint $\BE{I(X)}\leq \pi$ is called the budget constraint in this paper. \par
For simplicity of the presentation we impose the following technical assumption on $X$. One may relax this assumption by employing the ideas of \cite{X14,XZZ15}.
\begin{assmp}
We have $\beta>X$ almost surely (a.s.), the distribution $F_{X}$ of $X$ is strictly increasing up to $1$ and the quantile $F_{X}^{-1}$ of $X$ is absolutely continuous on $[0,1)$.
\end{assmp}
The above assumption is satisfied if, for example, $X$ is uniformly distributed on $(0,\beta/2)$. It also allows $X$ to have a mass at $0$, which is the most common case in insurance practice. More discussions on this assumption can be found in \cite{XZZ15}.
\par
Now rewrite the problem \eqref{ins1} in terms of $R$ as
\begin{align} \label{ins2}
\max_{R}&\;V(\beta-R(X)) \\
\mathrm{s.t.}&\; \BE{R(X)}\geq \BE{X}-\pi. \nonumber
\end{align}
We notice that there exists a random variable $U$, uniformly distributed on $(0,1)$, such that $X=F_{X}^{-1}(U)$ a.s. Letting $g(p)=R(F_{X}^{-1}(p))$ for $p\in [0,1]$, the problem \eqref{ins1} can be expressed as
\begin{align} \label{ins3}
\max_{g}&\;V(\beta-g(U)) \\
\mathrm{s.t.}&\; \BE{g(U)}\geq \BE{X}-\pi, \nonumber
\end{align}
Using \eqref{v(y)} we have
\begin{align*}
V(\beta-g(U))&=\int_{0}^{1}u\left(F_{\beta-g(U)}^{-1}(p)\right)w'(1-p)\ensuremath{\operatorname{d}\! p}\\
&=\int_{0}^{1}u\left(\beta-g(1-p)\right)w'(1-p)\ensuremath{\operatorname{d}\! p}\\
&=\int_{0}^{1}u\left(G(p)\right)w'(1-p)\ensuremath{\operatorname{d}\! p},
\end{align*}
where \[\textrm{$G(p):=\beta-g(1-p)$ for $p\in[0,1]$.}\] Furthermore,
\[\BE{g(U)}=\int_{0}^{1}g(p)\ensuremath{\operatorname{d}\! p}=\int_{0}^{1}g(1-p)\ensuremath{\operatorname{d}\! p}=\beta-\int_{0}^{1}G(p)\ensuremath{\operatorname{d}\! p}.\]
Hence the problem \eqref{ins3} can be expressed as
\begin{align} \label{ins4}
\max_{G}&\;\int_{0}^{1}u\left(G(p)\right)w'(1-p)\ensuremath{\operatorname{d}\! p}\\
\mathrm{s.t.}&\; \int_{0}^{1}G(p)\ensuremath{\operatorname{d}\! p}\leq \varpi, \nonumber
\end{align}
with $\varpi=\beta+\pi-\BE{X}$ being a given constant.
\par
In the above argument, we have not considered the constraint \eqref{monotonloss2} yet. Notice
\[G(p)=\beta-g(1-p)=\beta-R(F_{X}^{-1}(1-p)),\]
so the constraint \eqref{monotonloss2} in terms of $G$ can be stated as\footnote{For more details we refer to \cite{XZZ15}.}
\begin{align} \label{monotonloss3}
\textrm{$G$ is absolutely continuous, $G(1)=\beta$ and $0\leq G'\leq h$ a.e. on $[0,1]$,}
\end{align}
with $h(p)=\left(F_{X}^{-1}\right)'(1-p)$. We denote by $\mathcal{G}$ the set of quantiles satisfying this constraint.
\par
We remark that the above argument is invertible, so solving the insurance contract design problem \eqref{ins1} reduces to solving the quantile optimization problem \eqref{ins4} subject to the constraint \eqref{monotonloss3}.
\section{A new type of quantile optimization problems}
\noindent
Xia and Zhou \cite{XZ16} and the author \cite{X16} studied the following type of quantile optimization problems.
\begin{align}
\max_{G}&\;\int_{0}^{1}u(G(p))w'(1-p)\ensuremath{\operatorname{d}\! p}\label{budget1}\\
\mathrm{s.t.}&\;\int_{0}^{1}G(p)\phi(p)\ensuremath{\operatorname{d}\! p}\leq \varpi, \nonumber
\end{align}
where $\phi$ is a given nonnegative and integrable function.
In this problem, apart from the budget constraint, only the minimal monotonicity requirement is imposed on the decision quantiles $G$.
\par
The problem \eqref{ins4} is very similar to the problem \eqref{budget1} above, but there are two notable differences. First, no $\phi$ is involved in the insurance problem; if the insurer in the insurance problem \eqref{ins1} were not risk neutral, then a nontrivial $\phi$ would appear in the problem \eqref{ins4}. Second, the insurance problem requires the constraint \eqref{monotonloss3}, which is much stronger than the simple monotonicity requirement.
\par
In this paper, we investigate the following new type of quantile optimization problems:
\begin{align}\label{p1}
\max_{G\in\mathcal{G}}&\;\int_{0}^{1}u(G(p))w'(1-p)\ensuremath{\operatorname{d}\! p}\\
\mathrm{s.t.}&\;\int_{0}^{1}G(p)\phi(p)\ensuremath{\operatorname{d}\! p}\leq \varpi .\nonumber
\end{align}
The problem \eqref{budget1} can be regarded as the special case where $h\to+\infty$, while the problem \eqref{ins4} can be regarded as the special case where $\phi\equiv 1$.
\par
In the following sections, we further develop the author's \cite{X16} relaxation method and link this problem to an ordinary differential equation that has been well studied in the literature. The optimal quantile will be expressed via the solution of the ODE.
\section{Change-of-variable and feasibility}
\noindent
We first simplify the problem \eqref{p1}.
The following change-of-variable argument is similar to that of \cite{X16}; we include it here for completeness.
\par
We first make a change of variable to remove $w$ from the objective function.
Let $\nu: [0,1]\mapsto [0,1]$ be the inverse map of $p\mapsto 1-w(1-p)$, given by
\[\nu(p):=1-w^{-1}(1-p), \quad \forall\;p\in[0,1].\]
Then $\nu\in\mathcal{D}$ is also a probability weighting function. It follows that
\begin{align*}
\int_{0}^{1}u(G(p))w'(1-p) \ensuremath{\operatorname{d}\! p}&=\int_{0}^{1}u(G(p))\ensuremath{\operatorname{d}\! }\; (1-w(1-p))\\
&=\int_{0}^{1}u(G(p))\ensuremath{\operatorname{d}\! }\; (\nu^{-1}(p))=\int_{0}^{1}u(G(\nu(p)))\ensuremath{\operatorname{d}\! p}=\int_{0}^{1}u(Q(p))\ensuremath{\operatorname{d}\! p},
\end{align*}
where \[ Q(p):=G(\nu(p)),\quad \forall\;p\in[0,1].\]
Note that $G$ is a quantile if and only if so is $Q$. Moreover, $G' \leq h$ a.e. on $[0,1]$ if and only if $Q'(p)=G'(\nu(p))\nu'(p)\leq h(\nu(p))\nu'(p)$ a.e. on $[0,1]$. Therefore, $G\in\mathcal{G}$ if and only if $Q$ belongs to
\[\mathcal{Q}:=\{Q\mid \textrm{$Q$ is absolutely continuous, $Q(1)=\beta$ and $0\leq Q'\leq \hbar$ a.e. on $[0,1]$}\},\]
where \[\hbar(p):=h(\nu(p)) \nu'(p)\geq 0,\quad \forall\;p\in[0,1].\]
Notice
\begin{align*}
\int_0^1 G(p)\phi(p)\ensuremath{\operatorname{d}\! p}&=\int_0^1 G(\nu(p))\phi(\nu(p))\nu'(p)\ensuremath{\operatorname{d}\! p}=\int_0^1 Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p},
\end{align*}
where
\begin{align*}\label{defi:variphi}
\varphi(p)&:=\int_{0}^{p}\phi(\nu(t))\nu'(t)\ensuremath{\operatorname{d}\! t}=\int_{0}^{p}\phi(\nu(t))\ensuremath{\operatorname{d}\! } \nu(t)\\
&=\int_{\nu^{-1}(0)}^{\nu^{-1}(p)}\phi(t)\ensuremath{\operatorname{d}\! t}=\int_{0}^{1-w(1-p)}\phi(t)\ensuremath{\operatorname{d}\! t}, \quad\forall\; p\in[0,1],
\end{align*}
is increasing as $\phi$ and $w'$ are both nonnegative.
Moreover, $\varphi$ is uniformly bounded
\[0=\varphi(0)\leq \varphi\leq \varphi(1)=\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}.\]
\par
By making the above change-of-variable, solving the problem \eqref{p1} has now reduced to solving the problem
\begin{align}\label{p2}
\max_{Q\in\mathcal{Q}}&\;\int_{0}^{1}u(Q(p))\ensuremath{\operatorname{d}\! p}\\
\mathrm{s.t.}&\;\int_{0}^{1}Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p}\leq \varpi ,\nonumber
\end{align}
in which the probability weighting function does not appear in the objective.
\par
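The change of variable above is easy to sanity-check numerically. The following sketch uses purely illustrative data, none of which come from the paper: the weighting function $w(p)=p^{2}$, the utility $u(x)=\sqrt{x}$ and the quantile $G(p)=1+p$; it compares the two integrals by a midpoint rule.
\begin{verbatim}
import numpy as np

w_inv = lambda p: np.sqrt(p)            # inverse of w(p) = p**2
nu    = lambda p: 1.0 - w_inv(1.0 - p)  # nu(p) = 1 - w^{-1}(1 - p)
u     = lambda x: np.sqrt(x)
G     = lambda p: 1.0 + p
Q     = lambda p: G(nu(p))              # Q = G o nu

N = 100000
p = (np.arange(N) + 0.5) / N            # midpoint rule on [0, 1]
lhs = np.mean(u(G(p)) * 2.0 * (1.0 - p))   # integral of u(G(p)) w'(1-p)
rhs = np.mean(u(Q(p)))                     # integral of u(Q(p))
assert abs(lhs - rhs) < 1e-4               # the two integrals agree
\end{verbatim}
\par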
Before solving the problem \eqref{p2}, we would like to study its feasibility issue, that is, whether it has a feasible solution.\footnote{For general discussions of feasibility and other issues, we refer to \cite{JXZ08} for EUT framework, and \cite{X16} for RDUT framework.} By a feasible solution, we mean a quantile that satisfies all the constraints. We first notice that for any $Q\in\mathcal{Q}$,
\[Q(p)=Q(1)-\int_{p}^{1}Q'(t)\ensuremath{\operatorname{d}\! t}\geq \beta-\int_{p}^{1}\hbar(t)\ensuremath{\operatorname{d}\! t},\]
so
\[\int_{0}^{1}Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p}\geq \int_{0}^{1}\left(\beta-\int_{p}^{1}\hbar(t)\ensuremath{\operatorname{d}\! t}\right)\varphi'(p)\ensuremath{\operatorname{d}\! p}=\beta\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}-\int_{0}^{1} \varphi(p) \hbar(p)\ensuremath{\operatorname{d}\! p}, \]
where we used the fact that $\varphi$ is increasing and integration by parts. Therefore, the problem \eqref{p2} has
\[\begin{cases}
\textrm{no solution}, &\quad \textrm{if $\varpi<\beta\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}-\int_{0}^{1} \varphi(p) \hbar(p)\ensuremath{\operatorname{d}\! p}$;}\\[4pt]
\textrm{a unique feasible (thus optimal) solution}, &\quad \textrm{if $\varpi=\beta\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}-\int_{0}^{1} \varphi(p) \hbar(p)\ensuremath{\operatorname{d}\! p}$;}\\[4pt]
\textrm{infinitely many feasible solutions}, &\quad \textrm{if $\varpi>\beta\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}-\int_{0}^{1} \varphi(p) \hbar(p)\ensuremath{\operatorname{d}\! p}$.}
\end{cases}
\]
The first two cases are trivial, so from now on we focus on the last one
\begin{align}\label{condition1}
\varpi>\beta\int_{0}^{1}\phi(p)\ensuremath{\operatorname{d}\! p}-\int_{0}^{1} \varphi(p) \hbar(p)\ensuremath{\operatorname{d}\! p}.
\end{align}
\section{Relaxation method}
\noindent
The most novel part of this paper is this section. We will introduce an ODE through which we can express the optimal quantile for the problem \eqref{p2}, provided \eqref{condition1} holds.
\par
Recall that we have assumed \eqref{condition1}, so that the problem \eqref{p2} has infinitely many feasible solutions.
Under this condition, because $u$ is strictly concave, the problem \eqref{p2} is equivalent to
\begin{align}\label{p3}
\max_{Q\in\mathcal{Q}}&\;\int_{0}^{1}u(Q(p))-\lambda Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p}
\end{align}
for some Lagrange multiplier $\lambda>0$. Recall that
\[\mathcal{Q}=\{Q\mid \textrm{$Q$ is absolutely continuous, $Q(1)=\beta$ and $0\leq Q'\leq \hbar$ a.e. on $[0,1]$}\}.\]
\par
We now modify the relaxation method \cite{X16} so as to incorporate the constraint of bounded derivatives.
\par
For any $Q\in\mathcal{Q}$, an application of integration by parts leads to
\begin{align} \label{ineq1}
&\quad\;\int_{0}^{1}u(Q(p))-\lambda Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p}\nonumber\\
&=\int_{0}^{1}u(Q(p))+\lambda Q'(p)\varphi(p)\ensuremath{\operatorname{d}\! p}-\beta\lambda\varphi(1) \nonumber\\
&=\int_{0}^{1}u(Q(p))+\lambda (Q'(p)-\hbar(p))\varphi(p)\ensuremath{\operatorname{d}\! p}+\int_{0}^{1} \lambda \hbar(p)\varphi(p)\ensuremath{\operatorname{d}\! p}-\beta\lambda\varphi(1).
\end{align}
Let $\delta$ be an absolutely continuous function (to be determined) such that
\[\textrm{$\delta(0)=0$ and $\delta\leq\lambda\varphi$ on $[0, 1]$.}\]
The RHS of \eqref{ineq1} is, noticing $ Q'\leq\hbar$,
\begin{align} \label{ineq2}
&\leq\int_{0}^{1}u(Q(p))+(Q'(p)-\hbar(p))\delta(p)\ensuremath{\operatorname{d}\! p}+\int_{0}^{1} \lambda \hbar(p)\varphi(p)\ensuremath{\operatorname{d}\! p}-\beta\lambda\varphi(1)\\
&=\int_{0}^{1}u(Q(p))+Q'(p)\delta(p)\ensuremath{\operatorname{d}\! p}+\int_{0}^{1} \hbar(p)(\lambda\varphi(p)-\delta(p))\ensuremath{\operatorname{d}\! p}-\beta\lambda\varphi(1), \nonumber
\end{align}
which is, by applying integration by parts again,
\begin{align} \label{ineq3}
&=\; \int_{0}^{1}u(Q(p))-Q(p)\delta'(p)\ensuremath{\operatorname{d}\! p}+\int_{0}^{1} \hbar(p)(\lambda\varphi(p)-\delta(p))\ensuremath{\operatorname{d}\! p}+\beta(\delta(1)-\lambda\varphi(1))\nonumber\\
&\leq\int_{0}^{1}u(\overline{Q}(p))-\overline{Q}(p)\delta'(p)\ensuremath{\operatorname{d}\! p}+\int_{0}^{1} \hbar(p)(\lambda\varphi(p)-\delta(p))\ensuremath{\operatorname{d}\! p}+\beta(\delta(1)-\lambda\varphi(1)),
\end{align}
where
\[\overline{Q}(p)=(u')^{-1}( \delta'(p)),\quad \forall\;p\in[0,1],\]
maximizes the integrand pointwise.
\par
We hope $\overline{Q}$ is the optimal solution of the problem \eqref{p3}. It must, in particular, be a feasible solution, which requires $0\leq {\overline{Q}}' \leqslant \hbar$, that is,
\begin{align*}
0\leq \frac{ \delta'' }{u''((u')^{-1}( \delta'))} \leqslant \hbar,
\end{align*}
which can also be expressed as
\begin{align*}
\textrm{$\delta''\leq 0\;$ and $\;\delta''-\hbar u''((u')^{-1}( \delta'))\geqslant 0$.}
\end{align*}
The last requirement is $\overline{Q}(1)=(u')^{-1}( \delta'(1))=\beta$, that is
\[\delta'(1)=u'(\beta).\]
\par
Summarizing the results obtained thus far, we conclude that
\begin{thm}
If $\delta\in C^{2}[0,1]$ is concave and satisfies the free boundary problem
\begin{align}\label{vi}
\min\left\{\delta''(p)-\hbar(p) u''((u')^{-1}(\delta'(p))), \; \lambda\varphi(p)-\delta(p)\right\}=0,\quad \textrm{a.e.}\;p\in[0,1],
\end{align}
with boundary values $\delta(0)=0$ and $\delta'(1)=u'(\beta)$, then
\[\overline{Q}(p):=(u')^{-1}( \delta'(p)),\quad \forall\;p\in[0,1],\]
is the optimal solution of the problem \eqref{p3}.
\end{thm}
\begin{proof}
Because $u''<0$ and $\delta''\leq 0$, we see that ${\overline{Q}}'\geq 0$. Moreover, we can rewrite \eqref{vi} as
\begin{align*}
\min\left\{-\frac{ \delta''(p) }{u''((u')^{-1}( \delta'(p)))}+\hbar(p),\;\lambda\varphi(p)-\delta(p)\right\}=0,\quad \textrm{a.e.}\;p\in[0,1].
\end{align*}
that is
\begin{align}\label{vi1}
\min\left\{-{\overline{Q}}'(p)+\hbar(p), \; \lambda\varphi(p)-\delta(p)\right\}=0,\quad \textrm{a.e.}\;p\in[0,1].
\end{align}
This implies ${\overline{Q}}'\leqslant \hbar$ a.e.;
together with $\overline{Q}(1)=(u')^{-1}( \delta'(1))=\beta$, this proves that $\overline{Q}\in\mathcal{Q}$, and hence that $\overline{Q}$ is a feasible solution of the problem \eqref{p3}.
\par
We see from \eqref{vi1} that
\[({\overline{Q}}'(p)-\hbar(p))(\lambda\varphi(p)-\delta(p))=0, \quad \textrm{a.e.}\;p\in[0,1],\]
giving
\begin{align*}
\int_{0}^{1}\lambda(\overline{Q}'(p)-\hbar(p))\varphi(p)\ensuremath{\operatorname{d}\! p} &=\int_{0}^{1}(\overline{Q}'(p)-\hbar(p))\delta(p)\ensuremath{\operatorname{d}\! p}.
\end{align*}
From this, we conclude that the inequalities \eqref{ineq2} and \eqref{ineq3} become equalities when $Q$ is replaced by $\overline{Q}$.
In other words, the upper bound \eqref{ineq3} is attained at $\overline{Q}$, that is,
\begin{align*}
\int_{0}^{1}u(Q(p))-\lambda Q(p)\varphi'(p)\ensuremath{\operatorname{d}\! p}
&\leq\int_{0}^{1}u(\overline{Q}(p))-\lambda \overline{Q}(p)\varphi'(p)\ensuremath{\operatorname{d}\! p},
\end{align*}
proving the claim.
\end{proof}
\par
If $\delta''>0$ in some region, then $\delta''>0\geq \hbar u''((u')^{-1}( \delta'))$ and hence $\lambda\varphi=\delta$ by \eqref{vi}, consequently $\varphi''>0$ in the same region. Therefore, we conclude that
\begin{coro}
If $\varphi$ is concave on $[0,1]$, then the solution of the free boundary problem \eqref{vi} is also concave.
\end{coro}
Since the insurer is risk neutral in the insurance contract design problem \eqref{ins4}, we have $\phi\equiv 1$, which is concave.
\begin{remark}
In order for the problem \eqref{vi1} to have a classical solution, we require some growth condition on $u''((u')^{-1}(\cdot))$; this is easily satisfied under mild conditions, at least for the widely used power, logarithmic and exponential utilities.
\end{remark}
\begin{remark}
To solve the original quantile optimization problem \eqref{p1}, it remains to determine the Lagrange multiplier $\lambda$.
This can be done numerically, noting that $\lambda$ is monotone with respect to $\varpi$.
\end{remark}
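As a purely illustrative numerical companion to the two remarks above, one can also attack the relaxed problem \eqref{p3} directly: discretize $[0,1]$, treat the slopes of $Q$ on the grid cells as bounded variables, and maximize the resulting concave objective with an off-the-shelf solver; the multiplier $\lambda$ is then tuned, for instance by bisection, using the monotonicity just mentioned. All data in the sketch below ($u$, $\hbar$, $\varphi'$, $\beta$, $\lambda$) are hypothetical choices, not taken from the paper.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

N, lam, beta = 200, 0.8, 2.0
dp = 1.0 / N
hbar = np.full(N, 1.5)                  # bound on Q' on each grid cell
phi_prime = np.ones(N)                  # varphi'(p)
u = lambda x: np.log(1.0 + x)           # a concave, increasing utility

def Q_from_slopes(c):
    # Q(1) = beta and Q' = c (piecewise constant), at cell midpoints
    tail = np.cumsum((c * dp)[::-1])[::-1]
    return beta - (tail - 0.5 * c * dp)

def neg_objective(c):
    Q = Q_from_slopes(c)
    return -np.sum((u(Q) - lam * Q * phi_prime) * dp)

res = minimize(neg_objective, x0=0.5 * hbar, method="L-BFGS-B",
               bounds=[(0.0, h) for h in hbar])
Q_opt = Q_from_slopes(res.x)            # approximation of the optimal quantile
\end{verbatim}
Where the bound on the slope is inactive in such a discrete solution, the first-order conditions mirror the complementarity structure of the free boundary problem \eqref{vi}.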
\begin{remark}
In \cite{XZ16} and \cite{X16}, the optimal solution is expressed via $\delta$, the concave envelope of some known function $\varphi$. In fact $\delta$ can be interpreted as the solution of the following free boundary problem
\begin{align*}
\min\left\{-\delta''(p),\;\delta(p)-\varphi(p)\right\}=0,\quad \textrm{a.e.}\;p\in[0,1],
\end{align*}
with some proper boundary conditions.
\end{remark}
\section{Concluding remarks}
\noindent
We do not know how to modify Xia and Zhou's \cite{XZ16} calculus-of-variations method to solve the present problem.
One can, however, interpret the problem \eqref{p3} as a deterministic control problem, where $c_{t}=Q'(t)$ is regarded as the control variable in the constraint set
\[\mathcal{C}=\{c: 0\leq c_{t}\leq \hbar(t), \textrm{a.e.}\;t\in[0,1]\},\]
and $Q$ as the state process. Then the dynamic programming principle leads to the following Hamilton-Jacobi-Bellman equation
\[\begin{cases}
v_{t}+\sup\limits_{c\in\mathcal{C}_{t}}H(t,x,c,v_{x})=0, \\[4pt]
v|_{t=1}=0.
\end{cases}\]
Here $\mathcal{C}_{t}=[0,\hbar(t)]$ and
\[H(t,x,c,p)=pc+u(x)-\lambda x\varphi'(t).\]
There seems, however, to be no easy way to determine the value function and solve the problem from this equation. In fact, whether it has a classical solution is still an open problem.
\par
In this paper, we demonstrated the systematic approach within the RDUT framework.
As in \cite{XZ16,X16}, however, the method also works for problems within other frameworks. For example, one can first apply our method to the loss and gain parts separately in the CPT model, and then combine the results. We encourage readers to work out the details.
\newpage
|
{
"timestamp": "2018-03-08T02:04:55",
"yymm": "1803",
"arxiv_id": "1803.02546",
"language": "en",
"url": "https://arxiv.org/abs/1803.02546"
}
|
\section{Introduction}
The problem of quantifying the uncertainty of the solution of systems of partial or ordinary differential equations has become in recent years a rather active area of research. The realization that more often than not, for problems of practical interest, one is not able to determine the parameters, initial conditions, boundary conditions etc. to within high enough accuracy, has led to a flourishing literature of methods for quantifying the impact that this uncertainty imposes on the solution of the problems under investigation (see e.g. \cite{ghanem,leonenko,ma,nouy,venturi,wan,barajas}). However, despite the increase in computational power and the development of various techniques for uncertainty quantification there is still a wealth of problems where reliable uncertainty quantification is beyond reach. One way to address this problem is to look for reduced models for a subset of the variables needed for a complete description of the uncertainty. The effect of all types of uncertainty is intimately connected with the inherent instabilities that may be present in the underlying system which we subject to the uncertainty. These considerations remain equally, if not more, important when we attempt to construct reduced models for uncertainty quantification.
In the current work, we are concerned with the construction of reduced models for systems of differential equations that arise from polynomial chaos expansions of solutions of a PDE or ODE system. In particular, we focus on the case that the given PDE or ODE system contains uncertain parameters or initial conditions and we want to construct a reduced model for the evolution of a subset of the polynomial chaos expansions that are needed for a complete description of the uncertainty caused by the uncertain parameters. There are different methods to construct reduced models for PDE or ODE systems (see e.g. \cite{givon,CS05} and references therein). We choose to use the Mori-Zwanzig formalism in order to construct the reduced model \cite{CHK00,CHK3}.
The main issue with all model reduction approaches is the computation of the memory caused by the process of eliminating variables from the given system (referred to as the full system from this point on) \cite{CS05}. The memory terms are, in general, integral terms which account for the history of the variables that are not resolved. In principle the integrands appearing in the memory terms can be computed through the solution of the orthogonal dynamics equation \cite{CHK3}. We present some examples where this procedure can be implemented and the resulting reduced model can be estimated. Those examples highlight the definite improvement in accuracy of a reduced model when it includes a memory term. However, it is also easy to come up with examples where the solution of the orthogonal dynamics equation becomes prohibitively expensive.
For such cases we present a Markovian reformulation of the MZ formalism which allows the calculation of the memory terms through the solution of ordinary differential equations instead of the computation of convolution integrals as they appear in the original formulation. We present an algorithm which allows the estimation of the necessary parameters on the fly. This means that one starts evolving the full system and uses it to estimate the reduced model parameters. Once this is achieved, the simulation continues by evolving {\it only} the reduced model with the necessary parameters set equal to their estimated values from the first part of the algorithm. Of course, such an approximation of the memory term cannot work under all circumstances. We present results for a nontrivial problem where it does yield a reduced model with improved behavior compared to a model that ignores the memory terms altogether.
We should note that this alternative approach to computing the memory term fits in the renormalization framework advocated recently by one of the authors \cite{s11} in order to construct reduced models for singular PDEs. In particular, the idea is that one embeds the MZ reduced model in a larger family of reduced models which share the same functional form but may have additional parameters for enhanced flexibility. These extra parameters are determined so that the reduced model reproduces some dynamic features of the full system. After this is done, one can switch to the reduced model for the rest of the simulation. In the current work, the extra parameters are the lengths of the memory appearing in the MZ reduced model.
Section \ref{mz_formalism} presents a brief introduction to the MZ formalism for the construction of reduced models of systems of ODEs. In Section \ref{reformulation} we develop the Markovian reformulation of the MZ formalism and show how one can estimate adaptively the parameters appearing in the reduced model. Section \ref{examples} presents numerical results both for the original MZ formalism (Sections \ref{example1}-\ref{example2}) and its Markovian reformulation (Section \ref{example3}). Finally, in Section \ref{discussion} we discuss directions for future work.
\section{Mori-Zwanzig formalism}\label{mz_formalism}
We begin with a brief presentation of the Mori-Zwanzig formalism \cite{CHK00,CHK3}. Suppose we are given the system
\begin{equation}\label{odes}
\frac{du(t)}{dt} = R (t,u(t)),
\end{equation}
where $u = \{u_k\}, \; k \in H \cup G,$
with initial condition $u(0)=u_0.$ The unknown variables (modes) are divided into two groups, one indexed in $H$ and the other in $G.$ Our goal is to construct a reduced model for the modes in the set $H.$ The system of ordinary differential equations
we are given can be transformed into a system of linear
partial differential equations
\begin{equation}
\label{pde}
\pd{\varphi_k}{t}=L \varphi_k, \qquad \varphi_k (u_0,0)=u_{0k}, \, k \in H \cup G
\end{equation}
where $L=\sum_{i \in H \cup G } R_i(u_0) \frac{\partial}{\partial u_{0i}}.$ The solution of \eqref{pde} is
given by $u_k (u_0,t)=\varphi_k(u_0,t)$. Using semigroup notation we can rewrite (\ref{pde}) as
$$\pd{}{t} e^{tL} u_{0k}=L e^{tL} u_{0k}$$
Suppose that the vector of initial conditions can be divided as $u_0=(\hat{u}_0,\tilde{u}_0),$ where
$\hat{u}_0$ is the vector of the resolved variables (those in $H$) and $\tilde{u}_0$ is the vector of the unresolved variables (those in $G$). Let $P$ be an orthogonal projection on the space of functions of $\hat{u}_0$ and $Q=I-P.$
Equation \eqref{pde}
can be rewritten as
\begin{equation}
\label{mz}
\frac{\partial}{\partial{t}} e^{tL}u_{0k}=
e^{tL}PLu_{0k}+e^{tQL}QLu_{0k}+
\int_0^t e^{(t-s)L}PLe^{sQL}QLu_{0k}ds, \, k \in H,
\end{equation}
where we have used Dyson's formula
\begin{equation}
\label{dyson1}
e^{tL}=e^{tQL}+\int_0^t e^{(t-s)L}PLe^{sQL}ds.
\end{equation}
Equation (\ref{mz}) is the Mori-Zwanzig identity.
Note that
this relation is exact and is an alternative way
of writing the original PDE. It is the starting
point of our approximations. Of course, we
have one such equation for each of the resolved
variables $u_k, k \in H$. The first term in (\ref{mz}) is
usually called Markovian since it depends only on the values of the variables
at the current instant, the second is called ``noise'' and the third ``memory''.
If we write
$$e^{tQL}QLu_{0k}=w_k,$$
$w_k(u_0,t)$ satisfies the equation
\begin{equation}
\label{ortho}
\begin{cases}
&\frac{\partial}{\partial{t}}w_k(u_0,t)=QLw_k(u_0,t) \\
& w_k(u_0,0) = QLu_{0k}=R_k(u_0)-(PR_k)(\hat{u}_0).
\end{cases}
\end{equation}
If we project (\ref{ortho}) we get
$$P\frac{\partial}{\partial{t}}w_k(u_0,t)=
PQLw_k(u_0,t)=0,$$
since $PQ=0$. Also for the initial condition
$$Pw_k(u_0,0)=PQLu_{0k}=0$$
by the same argument. Thus, the solution
of (\ref{ortho}) is at all times orthogonal
to the range of $P.$ We call
(\ref{ortho}) the orthogonal dynamics equation. Since the solutions of
the orthogonal dynamics equation remain orthogonal to the range of $P$,
we can project the Mori-Zwanzig equation (\ref{mz}) and find
\begin{equation}
\label{mzp}
\frac{\partial}{\partial{t}} Pe^{tL}u_{0k}=
Pe^{tL}PLu_{0k}+
P\int_0^t e^{(t-s)L}PLe^{sQL}QLu_{0k} ds.
\end{equation}
We will not present here more details about how to start from Eq. \eqref{mzp} and construct reduced models of different orders for a general system of ODEs. Such constructions have been documented thoroughly elsewhere (see e.g. \cite{CHK3}). However, we will provide such details for the specific numerical examples in Sections \ref{example1}-\ref{example2}.
\section{Markovian reformulation of the MZ formalism}\label{reformulation}
While the MZ model given by Eq. \eqref{mzp} is exact, its construction can be involved and most importantly, very costly. The main source of computational expense is the memory term. Technically, the cost associated with the memory term comes from two sources: i) the presence of the orthogonal dynamics equation solution operator $e^{sQL}$ and ii) the need to find an expression in terms of the resolved variables and time for $PLe^{sQL}QLu_{0k}$ which appears in the memory integrand.
The presence of $e^{sQL}$ is problematic because the orthogonal dynamics equation is, for the general case, a PDE in as many dimensions as the original system of ODEs. Also, finding an expression for $PLe^{sQL}QLu_{0k}$ is problematic because, in general, it is not possible to separate the dependence of the expression on time and on the resolved variables. Both are formidable tasks and we will show with several examples how they can increase the cost of constructing the reduced model. For some cases (see e.g. Section \ref{example1} and \ref{example2}) both tasks can be tackled through the use of a finite-rank projection for the operator $P.$ However, we will show with a simple example (see Section \ref{example3}) that the use of a finite-rank projection may be too costly itself. For such cases, we need an alternative approach to the construction of the memory term. In this section we describe a reformulation of the problem of computing the memory term which
can alleviate some of these issues. Also, we present numerical results from the application of this approach in Section \ref{example3}.
\subsection{Finite memory}\label{memory_comp}
We focus on the case when the memory has a finite extent only. The case of infinite memory is simpler and is a special case of the formulation presented below. Also, the current reformulation allows us to comment on what happens in the case when the memory is very short.
Let $w_{0k}(t)=P\int_0^t e^{(t-s)L}PLe^{sQL}QLu_{0k} ds=P\int_0^t e^{sL}PLe^{(t-s)QL}QLu_{0k} ds,$ by the change of variables $t'=t-s.$ Note, that $w_{0k}$ depends both on $t$ and the resolved part of the initial conditions $\hat{u}_0.$ We have suppressed the $\hat{u}_0$ dependence for simplicity of notation. If the memory extends only for $t_0$ units in the past (with $t_0 \leq t,$) then $$w_{0k}(t)=P\int_{t-t_0}^t e^{sL}PLe^{(t-s)QL}QLu_{0k} ds.$$ The evolution of
$w_{0k}$ is given by
\begin{equation}\label{memory_1}
\frac{dw_{0k}}{dt}=Pe^{tL}PLQLu_{0k}-Pe^{(t-t_0)L}PLe^{t_0 QL}QLu_{0k}+w_{1k}(t),
\end{equation}
where $$w_{1k}(t)=P\int_{t-t_0}^t e^{sL}PLe^{(t-s)QL}QLQLu_{0k} ds.$$ To allow for more flexibility, let us assume that the integrand in the formula for $w_{1k}(t)$ contributes only for $t_1$ units with $t_1 \leq t_0.$ Then $$w_{1k}(t)=P\int_{t-t_1}^t e^{sL}PLe^{(t-s)QL}QLQLu_{0k} ds.$$
We can proceed and write an equation for the evolution of $w_{1k}(t)$ which reads
\begin{equation}\label{memory_2}
\frac{dw_{1k}}{dt}=Pe^{tL}PLQLQLu_{0k}-Pe^{(t-t_1)L}PLe^{t_1 QL}QLQLu_{0k}+w_{2k}(t),
\end{equation}
where $$w_{2k}(t)=P\int_{t-t_1}^t e^{sL}PLe^{(t-s)QL}QLQLQLu_{0k} ds.$$ Similarly, if this integral extends only for $t_2$ units in the past with $t_2 \leq t_1,$ then
$$w_{2k}(t)=P\int_{t-t_2}^t e^{sL}PLe^{(t-s)QL}QLQLQLu_{0k} ds.$$
This hierarchy of equations continues indefinitely. Also, we can assume for more flexibility that at every level of the hierarchy we allow the interval of integration for the integral term to extend over at most as many units of time as the integral at the previous level. If we keep, say, $n$ terms in this hierarchy, the equation for $w_{(n-1)k}(t)$ reads
\begin{gather}\label{memory_n}
\frac{dw_{(n-1)k}}{dt}=Pe^{tL}PL(QL)^{n-1}QLu_{0k}- \\
Pe^{(t-t_{n-1})L}PLe^{t_{n-1} QL}(QL)^{n-1}QLu_{0k}+w_{nk}(t) \notag
\end{gather}
where $$w_{nk}(t)=P\int_{t-t_n}^t e^{sL}PLe^{(t-s)QL}(QL)^{n}QLu_{0k} ds$$
Note that the last term in \eqref{memory_n} involves the unknown evolution operator for the orthogonal dynamics equation. This situation is the well-known closure problem. We can stop the hierarchy at the $n$th term by assuming that $w_{nk}(t)=0.$
In addition to the closure problem, the unknown evolution operator for the orthogonal dynamics equation appears in the equations for the evolution of the quantities $w_{0k}(t),$ $\ldots,$ $w_{(n-1)k}(t)$ through the terms $Pe^{(t-t_0)L}PLe^{t_0 QL}QLu_{0k},$ $\ldots,$ $Pe^{(t-t_{n-1})L}PLe^{t_{n-1} QL}(QL)^{n-1}QLu_{0k}$ respectively.
We describe now a way to express these terms involving the unknown orthogonal dynamics operator through known quantities so that we obtain a closed system for the evolution of $w_{0k}(t),\ldots,w_{(n-1)k}(t).$
Since we want to treat the case where $t_0$ is not necessarily small, we divide the interval $[t-t_0,t]$ in $n_0$ subintervals. Define
\begin{align*}
w_{0k}^{(1)}(t) & =P\int_{t-\Delta t_0}^t e^{sL}PLe^{(t-s)QL}QLu_{0k} ds \\
w_{0k}^{(2)}(t) & =P\int_{t-2 \Delta t_0}^{t- \Delta t_0} e^{sL}PLe^{(t-s)QL}QLu_{0k} ds \\
\ldots & \\
w_{0k}^{(n_0)}(t) & =P\int_{t-t_0}^{t- (n_0-1)\Delta t_0} e^{sL}PLe^{(t-s)QL}QLu_{0k} ds,
\end{align*}
where $n_0 \Delta t_0 = t_0$ and $w_{0k}(t)=\sum_{i=1}^{n_0} w_{0k}^{(i)}(t).$ Similarly, we can define the quantities $w_{1k}^{(1)}(t),\ldots,w_{1k}^{(n_1)}(t)$
\begin{align*}
w_{1k}^{(1)}(t) & =P\int_{t-\Delta t_1}^t e^{sL}PLe^{(t-s)QL}QLQLu_{0k} ds \\
w_{1k}^{(2)}(t) & =P\int_{t-2 \Delta t_1}^{t- \Delta t_1} e^{sL}PLe^{(t-s)QL}QLQLu_{0k} ds \\
\ldots & \\
w_{1k}^{(n_1)}(t) & =P\int_{t-t_1}^{t- (n_1-1)\Delta t_1} e^{sL}PLe^{(t-s)QL}QLQLu_{0k} ds,
\end{align*}
where $n_1 \Delta t_1 = t_1$ and $w_{1k}(t)=\sum_{i=1}^{n_1} w_{1k}^{(i)}(t).$ In a similar fashion we can define corresponding quantities for all the memory terms up to $w_{(n-1)k}(t)=\sum_{i=1}^{n_{n-1}} w_{(n-1)k}^{(i)}(t).$
In order to proceed we need to make an approximation for the integrals over the subintervals.
\subsection{Trapezoidal rule approximation}\label{trapezoidal}
We have
\begin{multline*}
w_{0k}^{(1)}(t) =P\int_{t-\Delta t_0}^t e^{sL}PLe^{(t-s)QL}QLu_{0k} ds \\
=\biggl[ Pe^{tL}PLQLu_{0k}+Pe^{(t-\Delta t_0)L}PLe^{\Delta t_0 QL}QLu_{0k} \biggr] \frac{\Delta t_0}{2}+ O((\Delta t_0)^3)
\end{multline*}
from which we find
$$Pe^{(t-\Delta t_0)L}PLe^{\Delta t_0 QL}QLu_{0k}=\biggl ( \frac{2}{\Delta t_0} \biggr ) w_{0k}^{(1)}(t) - Pe^{tL}PLQLu_{0k} + O((\Delta t_0)^2)$$
and from \eqref{memory_1}
\begin{equation*}
\frac{dw_{0k}^{(1)}}{dt}=-\biggl ( \frac{2}{\Delta t_0} \biggr ) w_{0k}^{(1)}(t)+ 2Pe^{tL}PLQLu_{0k}+w_{1k}^{(1)}(t)+ O((\Delta t_0)^2).
\end{equation*}
Similarly, for $w_{0k}^{(2)}(t)$ we find
\begin{multline*}
\frac{dw_{0k}^{(2)}}{dt}=\biggl ( \frac{4}{\Delta t_0} \biggr ) w_{0k}^{(1)}(t) \\
-\biggl ( \frac{2}{\Delta t_0} \biggr ) w_{0k}^{(2)}(t) - 2Pe^{tL}PLQLu_{0k}
+w_{1k}^{(2)}(t)+ O((\Delta t_0)^2)
\end{multline*}
In general,
\begin{multline}\label{memory_1a}
\frac{dw_{0k}^{(i)}}{dt}= -\biggl ( \frac{2}{\Delta t_0} \biggr ) w_{0k}^{(i)}(t) + (-1)^{i+1} 2Pe^{tL}PLQLu_{0k} \\
+\biggl [ \sum_{j=1}^{i-1} \biggl ( \frac{4}{\Delta t_0} \biggr ) (-1)^{i+j+1} w_{0k}^{(j)}(t) \biggr ] +w_{1k}^{(i)}(t)+ O((\Delta t_0)^2) \; \; \text{for} \; \; i=1,\ldots,n_0.
\end{multline}
Similarly,
\begin{multline*}
\frac{dw_{1k}^{(i)}}{dt}= -\biggl ( \frac{2}{\Delta t_1} \biggr ) w_{1k}^{(i)}(t) + (-1)^{i+1} 2Pe^{tL}PLQLQLu_{0k} \\
+\biggl [ \sum_{j=1}^{i-1} \biggl ( \frac{4}{\Delta t_1} \biggr ) (-1)^{i+j+1} w_{1k}^{(j)}(t) \biggr ] +w_{2k}^{(i)}(t)+ O((\Delta t_1)^2) \; \; \text{for} \; \; i=1,\ldots,n_1
\end{multline*}
$\ldots$
\begin{multline}
\frac{dw_{(n-1)k}^{(i)}}{dt}= -\biggl ( \frac{2}{\Delta t_{n-1}} \biggr ) w_{(n-1)k}^{(i)}(t) + (-1)^{i+1} 2Pe^{tL}PL(QL)^{n-1}QLu_{0k} \\
+\biggl [ \sum_{j=1}^{i-1} \biggl ( \frac{4}{\Delta t_{n-1}} \biggr ) (-1)^{i+j+1} w_{(n-1)k}^{(j)}(t) \biggr ] + O((\Delta t_{n-1})^2) \; \; \text{for} \; \; i=1,\ldots,n_{n-1}.
\end{multline}
By dropping the $O((\Delta t_0)^2),\ldots, O((\Delta t_{n-1})^2)$ terms we obtain a system of $n_0+n_1+\ldots+n_{n-1}$ differential equations for the evolution of the quantities $w_{0k}^{(1)}(t),\ldots,w_{(n-1)k}^{(n_{n-1})}(t).$ This system allows us to determine the memory term $w_{0k}(t).$ Since the approximation we have used for the integral leads to an error of $O((\Delta t)^2),$ the ODE solver should also be second-order accurate. We have used the modified Euler method to solve numerically the equations for the reduced model.
Note that the implementation of the above scheme requires the knowledge of the expressions for $Pe^{tL}PLQLu_{0k},\ldots,Pe^{tL}PL(QL)^{n-1}QLu_{0k}.$ Since the computation of these expressions for large $n$ can be rather involved for nonlinear systems (see Section \ref{example3}), we expect that the above scheme will be used with a small to moderate value of $n.$ Finally, we mention that the above construction can be carried out for integration rules of higher order e.g. Simpson's rule.
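As an illustration, the following Python fragment sketches the right-hand side of \eqref{memory_1a} for the lowest level of the hierarchy (with the $w_{1k}^{(i)}$ contribution dropped), together with one modified-Euler step; the callable \texttt{m\_of\_t}, standing for the term $Pe^{tL}PLQLu_{0k}$, is a hypothetical interface to be supplied for the problem at hand.
\begin{verbatim}
import numpy as np

def memory_rhs(w0, m_t, dt0):
    """RHS of eq. (memory_1a) for the lowest level (w_{1k}^{(i)} dropped).

    w0  : array (n0, K), sub-interval contributions w_{0k}^{(i)}, i = 1..n0
    m_t : array (K,), the term P e^{tL} P L Q L u_{0k} at time t
    dt0 : sub-interval length Delta t_0 = t_0 / n0
    """
    n0 = w0.shape[0]
    dw = np.empty_like(w0)
    for i in range(1, n0 + 1):
        coupling = np.zeros_like(m_t)
        for j in range(1, i):
            coupling += (4.0 / dt0) * (-1) ** (i + j + 1) * w0[j - 1]
        dw[i - 1] = -(2.0 / dt0) * w0[i - 1] + (-1) ** (i + 1) * 2.0 * m_t + coupling
    return dw

def modified_euler_step(w0, t, dt, m_of_t, dt0):
    """One second-order (modified Euler) step for the sub-interval variables."""
    k1 = memory_rhs(w0, m_of_t(t), dt0)
    k2 = memory_rhs(w0 + dt * k1, m_of_t(t + dt), dt0)
    return w0 + 0.5 * dt * (k1 + k2)
\end{verbatim}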
\subsection{Estimation of the memory length}\label{mz_length}
The construction presented above relies on an accurate determination of the memory lengths $ t_0, t_1,\ldots, t_{n-1}.$ We present in this section a way to estimate these quantities on the fly. This means that we start evolving the {\it full} system, use it to estimate $ t_0, t_1,\ldots, t_{n-1}$ and then switch to the reduced model with the estimated values for $ t_0, t_1,\ldots, t_{n-1}.$
For simplicity of presentation we assume that we evolve only $w_{0k}(t).$ If we use the trapezoidal rule to discretize $w_{0k}(t)$ and eliminate the term $Pe^{(t-t_0)L}PLe^{t_0 QL}QLu_{0k}$ from \eqref{memory_1}, the reduced model reads
\begin{gather}
\frac{d Pu_{k}}{dt}= Pe^{tL}PLu_{0k} + w_{0k}(t) \label{reduced1} \\
\frac{d w_{0k}}{dt}=2Pe^{tL}PLQLu_{0k} - \frac{2}{t_0} w_{0k}(t) \label{reduced2}
\end{gather}
for $k \in H. $
We can solve \eqref{reduced2} formally and substitute in \eqref{reduced1} to get
\begin{equation}\label{reduced_integral}
\frac{d Pu_{k}}{dt}= Pe^{tL}PLu_{0k} + \int_0^t e^{-\lambda_0(t-s)}2Pe^{sL}PLQLu_{0k} ds
\end{equation}
where $\lambda_0=2/t_0.$ Recall that, for the resolved variables, we have from the full system
\begin{equation}\label{full_split}
\frac{d Pu_{k}}{dt}= Pe^{tL}PLu_{0k} + Pe^{tL}QLu_{0k}.
\end{equation}
We would like to estimate the memory decay parameter $t_0$ so that the reduced equation \eqref{reduced_integral} for $u_{k}$ reproduces the behavior of $u_{k}$ as predicted by the full system \eqref{full_split}. We can do that by requiring that the evolution of some integral quantity of the solution is the same when predicted by the reduced and full systems.
We begin by discretizing the integral term in \eqref{reduced_integral}. Suppose that we are evolving the full system with a step size $\delta t,$ where $t=n_t \delta t$ (note that $n_t$ increases as $t$ increases). If we discretize the integral with the trapezoidal rule we find
\begin{gather}\label{reduced_integral2}
\frac{d Pu_{k}}{dt}= Pe^{tL}PLu_{0k} \\
+ [f_{k}(t,\hat{u}_{0})+2 \sum_{j=1}^{n_t-1}e^{-\lambda_0(t-j\delta t)}f_{k}(j\delta t, \hat{u}_{0}) +e^{-\lambda_0t}f_{k}(0,\hat{u}_{0})] \frac{\delta t}{2} \notag
\end{gather}
where $f_{k}(j\delta t, \hat{u}_{0})=2Pe^{j\delta t L}PLQLu_{0k}$ for $j=0,\ldots,n_t.$ The quantities $f_{k}(j\delta t, \hat{u}_{0})$ can be computed from the full system.
There is freedom in the choice of the integral quantity whose evolution the reduced model should be able to reproduce. For example, we can use $ \sum_{k \in H} |Pu_{k}(t)|^2$ the squared $l_2$ norm of the resolved variables. If we use this integral quantity, then from \eqref{reduced_integral2} and \eqref{full_split} we find that the unknown parameter $t_0$ must satisfy
\begin{equation}\label{newton1}
\sum_{k \in H} 2 Re \{ I_{k}(t,t_0) (Pu_{k})^*(t) \} = \sum_{k \in H} 2 Re \{ Pe^{tL}QLu_{0k} (Pu_{k})^*(t) \} ,
\end{equation}
where $$I_{k}(t,t_0)= [f_{k}(t,\hat{u}_{0})+2 \sum_{j=1}^{n_t-1}e^{-\lambda_0(t-j\delta t)}f_{k}(j\delta t, \hat{u}_{0}) +e^{-\lambda_0t}f_{k}(0,\hat{u}_{0})] \frac{\delta t}{2}$$ and $Re\{\cdot\}$ denotes the real part.
Let $y=\exp[-\lambda_0\delta t].$ Then,
\begin{equation}\label{newton2}
I_{k}(t,t_0)= [f_{k}(t,\hat{u}_{0})+2 \sum_{j=1}^{n_t-1} y^{n_t-j} f_{k}(j\delta t, \hat{u}_{0}) +y^{n_t}f_{k}(0,\hat{u}_{0})] \frac{\delta t}{2}.
\end{equation}
With this identification, equation \eqref{newton1} becomes a polynomial equation for $y$ with $y \in [0,1].$ It is not difficult to solve equation \eqref{newton1} with an iterative method, for example Newton's method. For the numerical results we present in Section \ref{example3}, Newton's method converged to double precision accuracy within 4-5 iterations. After an estimate $\hat{y}$ has been obtained, we can find the estimate $\hat{t}_0$ of $t_0$ (recall $\lambda_0=2/t_0$) from
\begin{equation}\label{newton3}
\hat{t}_0=-\frac{2 \delta t}{\ln \hat{y}} .
\end{equation}
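For concreteness, the Newton iteration for $y$ and the conversion \eqref{newton3} can be sketched in Python as follows; the arrays \texttt{fk}, \texttt{qk} and \texttt{uk} (sampled values of $f_k(j\delta t,\hat{u}_0)$, $Pe^{tL}QLu_{0k}$ and $Pu_k(t)$, all obtained from the full system) are assumed to be provided by the caller, and no safeguards are included.
\begin{verbatim}
import numpy as np

def estimate_t0(fk, qk, uk, dt, y0=0.5, tol=1e-14, maxit=50):
    """Solve eq. (newton1) for y = exp(-2*dt/t0) by Newton's method, then invert (newton3).

    fk : complex array (n_t + 1, K), fk[j] = f_k(j*dt, u0_hat)
    qk : complex array (K,), the values Pe^{tL} QL u_{0k} at the current time
    uk : complex array (K,), the resolved solution Pu_k(t)
    All interfaces are hypothetical; no safeguard against a vanishing derivative.
    """
    nt = fk.shape[0] - 1
    target = np.sum(2.0 * np.real(qk * np.conj(uk)))
    j = np.arange(1, nt)

    def F_and_dF(y):
        Ik = (fk[nt] + 2.0 * ((y ** (nt - j))[:, None] * fk[j]).sum(0)
              + y ** nt * fk[0]) * dt / 2.0
        dIk = (2.0 * (((nt - j) * y ** (nt - j - 1))[:, None] * fk[j]).sum(0)
               + nt * y ** (nt - 1) * fk[0]) * dt / 2.0
        F = np.sum(2.0 * np.real(Ik * np.conj(uk))) - target
        dF = np.sum(2.0 * np.real(dIk * np.conj(uk)))
        return F, dF

    y = y0
    for _ in range(maxit):
        F, dF = F_and_dF(y)
        step = F / dF
        y = min(max(y - step, 1e-12), 1.0 - 1e-12)   # keep y inside (0, 1)
        if abs(step) < tol:
            break
    return -2.0 * dt / np.log(y), y                  # (t0_hat, y_hat)
\end{verbatim}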
\subsubsection{Determination of optimal estimate $\hat{t}_0$}\label{mz_optimal}
For each time instant $t$ we can obtain through equations \eqref{newton1} and \eqref{newton3}, an estimate $\hat{t}_0(t)$ for $t_0.$ Thus, the most important issue that we have to address is that of deciding which is the best estimate of $t_0.$ In other words, at what time $t_f$ should we stop estimating the value of $t_0$ so that we can use the estimated value $\hat{t}_0(t_f)$ to evolve the reduced model from then on.
We define $\epsilon(t)=\underset{l\in [1,n_t]}{\max} |\hat{y}^l (t+\delta t)-\hat{y}^l (t)|.$ The quantity $\epsilon(t)$ monitors the convergence not only of the value of the estimate $\hat{y}$ as a function of the time $t$, but of the whole function $e^{-\lambda_0(t-s)}.$ Ideally, $\epsilon(t)$ converges to zero with increasing $t.$ That will be the case if approximating the memory term only through $Pe^{tL}PLQLu_{0k}$ is sufficient (see \eqref{reduced1}-\eqref{reduced2}). However, this will not always be the case. If keeping $Pe^{tL}PLQLu_{0k}$ is not sufficient, then $\epsilon(t)$ will decrease with increasing $t$ up to some time $t_{min}$, where it reaches a nonzero minimum. After that time, it starts increasing. This signals that keeping only $Pe^{tL}PLQLu_{0k}$ is {\it not sufficient} to describe the memory accurately.
In order to proceed we have two options: (i) construct a higher-order model or (ii) identify $t_f=t_{min}$ and thus $\hat{t}_0(t_f)=\hat{t}_0(t_{min}).$ Results for higher-order models will be presented elsewhere (see also the discussion in Section \ref{discussion}). In the numerical experiments we present in the next section we have chosen $\hat{t}_0(t_f)=\hat{t}_0(t_{min}).$ Note that the procedure just outlined allows the automation of the algorithm. This means that there is no adjustable reduced-model parameter that needs to be specified at the outset of the algorithm.
We are now in a position to state the adaptive Mori-Zwanzig algorithm which constructs a reduced model with the necessary memory term parameter $t_0$ estimated on the fly.
\vskip14pt
{\bf Adaptive Mori-Zwanzig Algorithm}
\begin{enumerate}
\item
Evolve the full system and compute, at every step, the estimate $\hat{t}_0(t).$ Use estimates of $t_0$ from successive steps to calculate $\epsilon(t)=\underset{l\in [1,n_t]}{\max} |\hat{y}^l (t+\delta t)-\hat{y}^l (t)|.$
\item
When $\epsilon(t)$ reaches a minimum (possibly non zero) value at some instant $t_{min}$, pick $\hat{t}_0(t_{min})$ as the final estimate of $t_0.$
\item
For the remaining simulation time ($t > t_{min}$), switch from the full system to the reduced model. The reduced model is evolved with the necessary parameter $t_0$ set to its estimated value $\hat{t}_0(t_{min}).$
\end{enumerate}
This procedure can be extended to the computation of optimal estimates for $t_1,t_2,\ldots,$ i.e. when we evolve, in addition to $w_{0k}(t),$ the quantities $w_{1k}(t),w_{2k}(t),\ldots.$ Results for such higher order models will be presented elsewhere.
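A minimal Python sketch of the monitoring and stopping rule in steps 1--2 is given below; the callable \texttt{estimate\_y}, returning the estimate $\hat{y}$ at time $t=n\,\delta t$ from the full system (e.g. by the Newton iteration of Section \ref{mz_length}), is a hypothetical interface.
\begin{verbatim}
import numpy as np

def adaptive_memory_length(estimate_y, dt, n_max):
    """Steps 1-2 of the algorithm: monitor eps(t) and return t0_hat at its minimum.

    estimate_y : hypothetical callable; estimate_y(n) returns y_hat at t = n*dt,
                 obtained from the full system (e.g. by solving eq. (newton1)).
    """
    y_prev, eps_prev = None, np.inf
    for n in range(1, n_max + 1):
        y = estimate_y(n)
        if y_prev is not None:
            l = np.arange(1, n + 1)                   # powers l = 1, ..., n_t
            eps = np.max(np.abs(y ** l - y_prev ** l))
            if eps > eps_prev:                        # minimum of eps(t) was at t - dt
                return -2.0 * dt / np.log(y_prev)     # t0_hat, cf. eq. (newton3)
            eps_prev = eps
        y_prev = y
    return -2.0 * dt / np.log(y_prev)                 # no interior minimum detected
\end{verbatim}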
\section{Numerical Examples}\label{examples}
\subsection{A linear ODE with uncertain coefficient}\label{example1}
Consider the following {\it linear} ordinary differential equation with an uncertain coefficient
\begin{equation}\label{ex:ODE}
\begin{split}
\frac{du}{dt} &= -\kappa u,\\
u(0,\cdot )& = u^\circ,
\end{split}
\end{equation}
where $\kappa \sim U[0,1]$. This equation has the solution $u = u^\circ \exp(-\kappa t)$. To represent the dependence of the solution of \eqref{ex:ODE} on $\kappa,$ we can expand it in a general polynomial chaos (gPC) expansion \cite{xiu2006}, say using Legendre polynomials. Let $u(t,\cdot)\approx \sum_{i=0}^M u_i(t) \phi_i(\xi)$, where $\xi \sim U[-1,1]$ and $\{\phi_i\}$ are normalized Legendre polynomials which are orthonormal with respect to the uniform distribution of $\xi$, i.e., $$\int_{-1}^1 \phi_i(\xi)\phi_j(\xi)\frac{1}{2}d\xi = \delta_{ij}.$$ We can write $\kappa$ as $\kappa =\frac{1}{2}\xi+\frac{1}{2}= \sum_{i=0}^1{k_i}\phi_i(\xi)$. We substitute this expansion in \eqref{ex:ODE} and obtain (through Galerkin projection) the (truncated) system up to order $M$
\begin{equation}\label{ex:ODE_g}
\begin{split}
\frac{du_r}{dt}& = -\sum_{i=0}^1\sum_{j=0}^M k_iu_je_{ijr}, \\
u_r(0) &= u_{0r}, \qquad r = 0,\dots, M,
\end{split}
\end{equation}
where $e_{ijk} = \int_{-1}^1 \phi_i(\xi)\phi_j(\xi)\phi_k(\xi)\frac{1}{2}d\xi$ and $u_{00} = u^\circ$, $u_{0r} = 0$ for $r = 1,\dots,M$ (for details, see e.g \cite{xiu2002}).
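For concreteness, the assembly of the triple products $e_{ijr}$ by Gauss--Legendre quadrature and a simple explicit integration of the full system \eqref{ex:ODE_g} can be sketched in Python as follows; the value $u^\circ=1$, the final time and the step size are illustrative assumptions.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

M = 6
xi, w = leggauss(20)                        # Gauss-Legendre nodes/weights on [-1, 1]

def phi(i, x):
    """Legendre polynomial of degree i, orthonormal w.r.t. the uniform density."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legval(x, c)

# triple products e_{ijr}; only i = 0, 1 enter the expansion of kappa
e = np.zeros((2, M + 1, M + 1))
for i in range(2):
    for jj in range(M + 1):
        for r in range(M + 1):
            e[i, jj, r] = 0.5 * np.sum(w * phi(i, xi) * phi(jj, xi) * phi(r, xi))

kcoef = np.array([0.5, 0.5 / np.sqrt(3.0)])  # kappa = 1/2 + (1/2) xi in the phi basis
u = np.zeros(M + 1); u[0] = 1.0              # u^circ = 1 (illustrative choice)

def rhs(uvec):
    return -np.einsum('i,j,ijr->r', kcoef, uvec, e)

dt, T, t = 1e-3, 5.0, 0.0                    # illustrative step size and final time
while t < T:                                 # modified Euler (second order)
    k1 = rhs(u); k2 = rhs(u + dt * k1)
    u, t = u + 0.5 * dt * (k1 + k2), t + dt
\end{verbatim}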
Because of the spectral decay in the gPC coefficients, it is natural to choose the coefficients $u_i$, $i = 0,\dots,\Lambda$, of the lower-degree Legendre polynomials to be the resolved variables $\hat{u},$ and the coefficients $u_i$, $i = \Lambda+1,\dots, M$, to be the unresolved variables $\tilde{u}.$
To conform with the notation in Section \ref{mz_formalism}, we have $H = \{0,\dots,\Lambda\}$ and $G = \{\Lambda+1,\dots,M\}$.
We have chosen $M = 6$ for the full system and $\Lambda = 1$ for the reduced system. The solution of the full system is converged for $M=6$ and thus, we do not need to keep further terms in the expansion.
The projection $P$ we have chosen is defined as $(Pf)(\hat{u}_0) = f(\hat{u}_0,\tilde{0}).$ Also, we define $Q=I-P.$ To be consistent with the notation in Section \ref{mz_formalism}, we have
$$ R_r({u}_0) = -\sum_{i=0}^1\sum_{j=0}^M k_iu_{0j}e_{ijr}, $$
so that
\[
PLu_{0r} = -\sum_{i=0}^1\sum_{j=0}^{\Lambda}k_iu_{0j}e_{ijr},
\]
\[
QLu_{0r} = -\sum_{i=0}^1\sum_{j=\Lambda+1}^Mk_iu_{0j}e_{ijr}.
\]
In order to be able to compute the expressions for the memory terms we use a finite-rank projection $\mathbb{P}$ to approximate the projection $P$. To define the finite-rank projection we need to introduce a measure for the distribution of the coefficients. We consider the coefficients $u_{0r}$ to be i.i.d.\ Gaussian random variables with means equal to the values given initially (see \eqref{ex:ODE_g}) and a prescribed variance, for $r = 0,\dots, M.$ In the case of the linear ODE, the variance was set to 0.01 for all the variables in the full system. Also, $\omega$ denotes the joint probability measure of these random variables. Then for a function $\varphi_j(u_0,t)$ of the initial conditions and time, the finite-rank projection reads
\begin{equation}\label{def:frank_proj}
(\mathbb{P}\varphi_j)(\hat{u}_0,t) = \sum_{\nu\in I}(\varphi_j(u_0,t),h^{\nu}(\hat{u}_0))h^{\nu}(\hat{u}_0),
\end{equation}
where $h^{\nu}(\hat{u}_0)$ are tensor product Hermite polynomials up to some order $p$, $\nu$ is the multi-index $\nu = (\nu_0,\dots,\nu_{\Lambda})$ with $|\nu| = \sum_{i=0}^\Lambda \nu_i$ and $I$ is the index set up to order $p$, i.e., $I = \{ \mu \big| |\mu|\leq p \}$. The order $p$ for the basis functions was set to 5 for a total of 21 basis functions. In formula \eqref{def:frank_proj} the inner product is defined as
\begin{equation}
(f,g) = \int fg d\omega .
\end{equation}
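As an aside, the multi-index set $I$ and the tensor-product Hermite basis can be enumerated as in the Python sketch below; for two resolved variables ($\Lambda=1$) and order $p=5$ this reproduces the $21$ basis functions mentioned above. The probabilists' Hermite convention and the normalization by $\sqrt{\nu_i!}$ are assumptions of this sketch.
\begin{verbatim}
import numpy as np
from itertools import product
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def multi_indices(nvars, p):
    """All multi-indices nu with |nu| <= p."""
    return [nu for nu in product(range(p + 1), repeat=nvars) if sum(nu) <= p]

def h_basis(nu, x, mean, std):
    """Tensor-product probabilists' Hermite polynomial h^nu, orthonormal w.r.t.
    the product Gaussian measure with the given means/std deviations (assumed)."""
    z = (np.asarray(x, dtype=float) - mean) / std
    val = 1.0
    for zi, ni in zip(z, nu):
        coef = np.zeros(ni + 1); coef[ni] = 1.0
        val *= hermeval(zi, coef) / np.sqrt(factorial(ni))
    return val

I_set = multi_indices(2, 5)      # Lambda = 1: two resolved variables, order p = 5
assert len(I_set) == 21          # the 21 basis functions quoted in the text
\end{verbatim}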
For each $j\leq \Lambda$, the component $F_j(u_0,t)$ denotes the solution of the orthogonal dynamics
\begin{equation}\label{Orth_j}
\begin{split}
&\frac{\partial}{\partial t}F_j(u_0,t) ={Q}LF_j(u_0,t) = {L}F_j(u_0,t)-{P}LF_j(u_0,t),\\
& F_j(u_0,0) = {Q}Lu_{0j} = R_j(u_0)-{P}Lu_{0j}.
\end{split}
\end{equation}
Equation \eqref{Orth_j} is equivalent to the Dyson formula:
\begin{equation}\label{Orth_j_Dyson}
F_j(u_0,t) = e^{tL}F_j(u_0 ,0)-\int_{0}^t e^{(t-s)L}{P}LF_j(u_0,s)ds.
\end{equation}
Eq. \eqref{Orth_j_Dyson} is a Volterra integral equation for $F_j(u_0,t).$ To proceed, we replace the projection operator ${P}$ with the finite-rank projection operator $\mathbb{P}$ and find
\begin{equation}
K_j(\hat{u}_0,s) = PLF_j(u_0,s) \approx \mathbb{P}LF_j(u_0,s) = \sum_{\nu \in I}a_j^{\nu}h^{\nu}(\hat{u}_0),
\end{equation}
where
\[
a^{\nu}_j(s) = (LF_j(u_0,s),h^{\nu}(\hat{u}_0)).
\]
Consequently,
\[
e^{(t-s)L}\mathbb{P}LF_j(u_0,s) = \sum_{\nu\in I}a^{\nu}_j(s)h^{\nu}(\varphi(u_0,t-s)).
\]
We substitute $e^{(t-s)L}\mathbb{P}LF_j(u_0,s)$ for $e^{(t-s)L}PLF_j(u_0,s)$ in Eq. \eqref{Orth_j_Dyson}, multiply both sides by $L$ and take the inner product with $h^{\mu}(\hat{u}_0)$; the result is (dropping the approximation sign)
\begin{equation}
\begin{split}
&(LF_j(u_0,t),h^{\mu}(\hat{u}_0)) \\
=& (Le^{tL}F_j(u_0,0),h^{\mu}(\hat{u}_0))-\int_{0}^t \sum_{\nu\in I} a^{\nu}_j(s)(Le^{(t-s)L}h^{\nu}(\hat{u}_0),h^{\mu}(\hat{u}_0))ds. \label{Volterra_a1}
\end{split}
\end{equation}
Eq. \eqref{Volterra_a1} is a Volterra integral equation for the function $a^{\nu}_j(t)$, which can be rewritten as follows:
\begin{equation}\label{Volterra_a}
a^{\mu}_j(t) = f^{\mu}_j(t)-\int_{0}^t\sum_{\nu\in I}a^{\nu}_j(s)g^{\nu\mu}(t-s)ds,
\end{equation}
where
\[
f^{\mu}_j(t) = (Le^{tL}F_j(u_0,0),h^{\mu}(\hat{u}_0)), \qquad g^{\nu\mu}(t)=(Le^{tL}h^{\nu}(\hat{u}_0),h^{\mu}(\hat{u}_0)).
\]
The functions $f^{\mu}_j(t)$, $g^{\nu\mu}(t)$ can be found by averaging over a collection of experiments or simulations, with initial conditions drawn from the initial distribution. In this example, we use a sparse grid quadrature rule for the multi-dimensional integrals \cite{xiu2006}.
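A generic trapezoidal-rule solver for the (vector) Volterra equation \eqref{Volterra_a} might be organized as in the following Python sketch; the ordering of the kernel indices (rows $\nu$, columns $\mu$) and the fixed-step sampling of $f^{\mu}_j$ and $g^{\nu\mu}$ are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def solve_volterra(f, g, dt):
    """Trapezoidal-rule solution of a(t) = f(t) - int_0^t g(t-s)^T a(s) ds.

    f : array (nsteps + 1, K), values f^mu_j(t_n) for fixed j (columns mu)
    g : array (nsteps + 1, K, K), kernel values g^{nu mu}(t_n) (rows nu, cols mu)
    """
    nsteps, K = f.shape[0] - 1, f.shape[1]
    a = np.zeros_like(f)
    a[0] = f[0]                                     # the integral vanishes at t = 0
    I = np.eye(K)
    for n in range(1, nsteps + 1):
        conv = 0.5 * g[n].T @ a[0]
        for m in range(1, n):
            conv += g[n - m].T @ a[m]
        a[n] = np.linalg.solve(I + 0.5 * dt * g[0].T, f[n] - dt * conv)
    return a
\end{verbatim}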
Finally, we perform one more projection to eliminate the noise term (see Section \ref{mz_formalism}) and the memory term becomes
\[
\int_{0}^t Pe^{(t-s)L}K_j(\hat{u}_0,s)ds.
\]
This can be approximated by
\[
\int_{0}^t \sum_{\nu\mu\in I}a^{\nu}_j(s)\gamma^{\nu\mu}(t-s)h^{\mu}(\hat{u}_0)ds,
\]
where
\[
\gamma^{\nu\mu}(t) = (e^{tL}h^{\nu}(\hat{u}_0),h^{\mu}(\hat{u}_0)).
\]
After calculating $a_j^{\mu}$ and $\gamma^{\nu\mu}$ we obtain the following reduced system,
\begin{equation}\label{eq:redu_b}
\frac{d}{dt}\hat{u}(t) = \textrm{R}(\hat{u}(t))+\int_{0}^t A(s)\Gamma(t-s)h(\hat{u}_0)ds, \qquad \hat{u}(0) = \hat{u}_0,
\end{equation}
where $A$ and $\Gamma$ are the matrix forms of $a^{\mu}_j$ and $\gamma^{\nu\mu}$, and $\hat{u}_0$ is the initial condition of the resolved variables.
Fig. \ref{fig:ODE_1_me} shows the evolution of the memory kernel $(Le^{tQL}QLu_1,h^{01})$ which is indicative of the behavior of the memory kernels. The basis function $h^{01}$ is the product of the zero order Hermite polynomial in the variable $u_0$ and the first order Hermite polynomial in the variable $u_1.$ We see that the memory kernel is rather slowly decaying which means that the resulting reduced order model will have a long memory. Fig. \ref{fig:ODE_1_sol} shows the solution for the resolved variables as predicted by the full system and two different reduced order models, the Markovian model which results from dropping the memory term in \eqref{eq:redu_b} and the non-Markovian reduced model given by \eqref{eq:redu_b}. It is obvious from Fig. \ref{fig:ODE_1_sol} that the Markovian model loses accuracy quickly. On the other hand, the non-Markovian model retains its accuracy for the length of the simulation interval. This difference in behavior is quantified in Fig. \ref{fig:ODE_1_er} where we see that for both resolved variables the relative error of the Markovian model becomes greater than $50\%$ by the end of the simulation interval. On the other hand, the error of the non-Markovian model remains less than $1\%$ for the whole simulation interval.
\begin{figure}[htbp]
\centering
\psfig{file = ODE_1_a_mu.eps, width = 11cm}
\caption{Evolution of the memory kernel $(Le^{tQL}QLu_1,h^{01})$ (see text for details).}
\label{fig:ODE_1_me}
\end{figure}
\begin{figure}[htbp]
\centerline{
\psfig{file = ODE_Case1_u0.eps, width = 7cm}
\psfig{file = ODE_Case1_u1.eps, width = 7cm}
}
\caption{Evolution of the resolved variables $u_0,u_1$ predicted by the full model (black line), the (Markovian) reduced model without memory (blue line) and the (non-Markovian) reduced model with memory (red line).}
\label{fig:ODE_1_sol}
\end{figure}
\begin{figure}[htbp]
\centerline{
\psfig{file = ODE_Case1_u0_er.eps, width = 7cm}
\psfig{file = ODE_Case1_u1_er.eps, width = 7cm}
}
\caption{Relative error with respect to the true solution for the (Markovian) reduced model without memory (blue line) and the (non-Markovian) reduced model with memory (red line).}
\label{fig:ODE_1_er}
\end{figure}
\subsection{Nonlinearly damped and randomly forced particle}\label{example2}
Consider the following equation describing a particle moving in a double well potential and driven by a force term (see \cite{xiu2006})
\begin{equation}\label{eq:Damped_system}
\begin{split}
\frac{du}{dt} & = u-u^3+f(t,\xi),\\
u(0) & = u^\circ,\\
\end{split}
\end{equation}
where $f = \sin(t+t_0)\xi$ and $\xi \sim U[-1,1]$. We use order $M=6$ polynomials in $\xi$ to approximate the full system solution up to time $10$.
We want to construct a reduced model for the first 2 coefficients of the polynomial expansion ($\Lambda=1$). As before, we let $u(t,\xi)\approx \sum_{i=0}^{M} u_i(t)\phi_i(\xi)$ and we obtain through Galerkin projection the system
\begin{equation}
\begin{split}
\frac{du_i}{dt} &= u_i-\sum_{j,k,m = 0}^{M}u_j u_k u_m e_{jkmi}+f_i,\\
u_i(0) &= u_{0i}, \quad \textrm{ for } i = 0,\dots,M.
\end{split}
\end{equation}
where $e_{jkmi} = \int_{-1}^1 \phi_j(\xi) \phi_k(\xi) \phi_m(\xi) \phi_i(\xi) \frac{1}{2}d\xi$ and $u_{0i} = \begin{cases} u^\circ, & i=0,\\ 0,& \textrm{otherwise}. \end{cases}$
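For reference, the quadruple products $e_{jkmi}$ and the Galerkin right-hand side above can be assembled as in the following Python sketch; the projection of the forcing uses the normalization $\phi_1(\xi)=\sqrt{3}\,\xi$, and the phase \texttt{t0} of the forcing is left as an unspecified parameter.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

M = 6
xi, w = leggauss(30)

def phi(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * legval(x, c)     # orthonormal w.r.t. uniform density

P = np.array([phi(i, xi) for i in range(M + 1)])           # values on quadrature grid
e4 = 0.5 * np.einsum('jq,kq,mq,iq,q->jkmi', P, P, P, P, w)  # e_{jkmi}

def rhs(t, u, t0=0.0):                           # t0: phase of the forcing (assumed)
    f = np.zeros(M + 1)
    f[1] = np.sin(t + t0) / np.sqrt(3.0)         # projection of sin(t+t0)*xi on phi_1
    return u - np.einsum('j,k,m,jkmi->i', u, u, u, e4) + f
\end{verbatim}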
Let $R$ be the vector with $R_i = u_i-\sum_{j,k,m = 0}^{M}u_j u_k u_m e_{jkmi}+f_i $. In order to apply the MZ formalism we need an autonomous system of equations to begin with. For this purpose, we introduce an auxiliary time-variable $\tau,$ such that $\tau = t$ and $\frac{d\tau}{dt} = 1.$ The projection operator $\mathbb{P}$ projects onto the function space of the first two coefficients and $\tau$. Again, we use the finite-rank projection onto the function space expanded by Hermite polynomials up to order $3$ to represent the orthogonal dynamics (total of 10 functions) and solve the Volterra equation for the memory kernels as we did for the linear ODE example. The variance for the Gaussian variables used to define the inner product for the finite-rank projection was set to $10^{-2i-2}$ for the coefficient $u_i$ with $i=0,1,\ldots,6.$ The reason we used a decreasing sequence of variances as we go up in the order of Legendre polynomials is to stabilize the behavior of the reduced model.
As can be seen from Fig. \ref{fig:damped_T10_log_er}, the difference between the (memoryless) Markovian and non-Markovian reduced models is even more pronounced than in the case of the linear ODE. The inclusion of the memory term is indeed crucial for maintaining the accuracy of the reduced model for long times. For the case of the resolved variable $u_1,$ the relative error spikes at a couple of points even for the otherwise very accurate non-Markovian reduced model. As can be seen from Fig. \ref{fig:damped_T10}, this is because the exact value of $u_1$ becomes zero at these points so that the relative error becomes very large even for an accurate approximation. However, the significant improvement in accuracy with the inclusion of the memory term is evident in Fig. \ref{fig:damped_T10_log_er} which plots the error in a logarithmic scale.
\begin{figure}[htbp]
\centerline{
\psfig{file = damped_u0.eps, width = 7cm}
\psfig{file = damped_u1.eps, width = 7cm}
}
\caption{Evolution of the resolved variables $u_0,u_1$ predicted by the full model (black line), the (Markovian) reduced model without memory (blue line) and the (non-Markovian) reduced model with memory (red line).}
\label{fig:damped_T10}
\end{figure}
\begin{figure}[htbp]
\centerline{
\psfig{file = damped_u0_log_er.eps, width = 7cm}
\psfig{file = damped_u1_log_er.eps, width = 7cm}
}
\caption{Logarithmic-scale relative error for $u_0,u_1$ with respect to the true solution for the (Markovian) reduced model without memory (blue line) and the (non-Markovian) reduced model with memory (red line).}
\label{fig:damped_T10_log_er}
\end{figure}
\subsection{Viscous 1D Burgers with uncertain initial conditions}\label{example3}
In this section we show how the above MZ formulation can be used for uncertainty quantification for the one-dimensional Burgers equation with uncertain initial condition. As is explained at the end of this section, the calculation of the MZ memory term cannot proceed as for the last two examples. The reason is that it is prohibitively expensive due to the number of basis functions needed. Thus, we will apply the alternative construction that was presented in Section \ref{reformulation}.
The equation is given by
\begin{equation}\label{burgersequation}
u_t+u u_x = \nu u_{xx},
\end{equation}
where $\nu > 0.$ Equation (\ref{burgersequation}) should be supplemented with an initial condition $u(x,0)=u_0(x)$ and boundary conditions. We solve (\ref{burgersequation}) in the interval $[0,2\pi]$ with periodic boundary conditions. This allows us to expand the solution in Fourier series
$$u_{N}(x,t )=\underset{k \in F}{\sum} u_k(t) e^{ikx},$$
where $F=[-\frac{N}{2},\frac{N}{2}-1].$ The equation of motion for the Fourier mode $u_k$ becomes
\begin{equation}
\label{burgersode}
\frac{d u_k}{dt}=- \frac{ik}{2} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{p} u_{q} -\nu k^2 u_k.
\end{equation}
We assume that the initial condition $u_0(x)$ is uncertain (random) and can be expanded as $u_0(x,\xi)= (\alpha_0+ \alpha_1 \xi) v_0(x)$ where $\xi$ is uniformly distributed in $[-1,1]$ and $v_0(x)$ is a given function. In the numerical experiments we have taken $\alpha_0 =\alpha_1=1$ and $v_0(x)=\sin x.$ Thus, the initial condition varies ``uniformly'' between the functions 0 and $2 \sin x.$
To proceed we expand the solution $u_k(t,\xi)$ for $k \in F$ in a polynomial chaos expansion using the standard Legendre polynomials which are orthogonal in the interval $[-1,1].$ In particular, we have that $$\int_{-1}^1\phi_i (\xi) \phi_j(\xi) \frac{1}{2}d \xi=\frac{1}{2i+1} \delta_{ij},$$ where $\phi_i(\xi)$ is the standard Legendre polynomial of order $i.$ For each wavenumber $k$ we expand the solution $u_k(t,\xi)$ of \eqref{burgersode} in Legendre polynomials and keep the first $M$ polynomials
\begin{equation}\label{ode_expansion}
u_k(t,\xi)\approx \sum_{i=0}^{M-1} u_{ki}(t) \phi_i(\xi), \; \; \text{where} \; \; \xi \sim U[-1,1].
\end{equation}
Similarly, the initial condition can be written as $u_0(x,\xi) = \sin x \sum_{i=0}^1\alpha_i \phi_i(\xi)$ since $\phi_0 (\xi)=1$ and $\phi_1 (\xi)=\xi.$
Substitution of \eqref{ode_expansion} in \eqref{burgersode} and use of the orthogonality property of the Legendre polynomials gives
\begin{equation}\label{burgersodemz_system}
\frac{du_{kr}(t)}{dt}=- \frac{ik}{2} \sum_{l=0}^{M-1} \sum_{m=0}^{M-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{pl} u_{qm} c_{lmr} - \nu k^2 u_{kr}
\end{equation}
for $k \in F$ and $r=0,\ldots,M-1.$ Also $$c_{lmr}=\frac{E[ \phi_l (\xi) \phi_m(\xi) \phi_r(\xi) ] }{E[\phi^2_r(\xi) ]},$$
where the expectation $E[\cdot]$ is taken with respect to the uniform density on $[-1,1].$ The Legendre polynomial triple product integral defines a tensor which has the following sparsity pattern: $E[ \phi_l (\xi) \phi_m(\xi) \phi_r(\xi) ]=0,$ if $ l+m < r$ or $l+r < m$ or $m+r < l$ or $l+m+r= \text{odd}$ \cite{gupta}. Due to this sparsity pattern, for a given value of $M$ only about $1/4$ of the $M^3$ tensor entries are different from zero.
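For illustration, the tensor $c_{lmr}$ and the sparsity pattern just described can be assembled as in the Python sketch below; standard, unnormalized Legendre polynomials are used, consistent with the orthogonality relation written above, and the quadrature order is an arbitrary choice.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

M = 7
xi, w = leggauss(30)

def P(i, x):                                  # standard (unnormalized) Legendre polynomial
    c = np.zeros(i + 1); c[i] = 1.0
    return legval(x, c)

c = np.zeros((M, M, M))                       # c_{lmr}
for l in range(M):
    for m in range(M):
        for r in range(M):
            if l + m < r or l + r < m or m + r < l or (l + m + r) % 2 == 1:
                continue                      # sparsity pattern of the triple product
            Elmr = 0.5 * np.sum(w * P(l, xi) * P(m, xi) * P(r, xi))  # E[phi_l phi_m phi_r]
            c[l, m, r] = Elmr * (2 * r + 1)   # divide by E[phi_r^2] = 1/(2r+1)
\end{verbatim}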
Before we proceed we have to comment on the cost of applying the MZ formalism to construct a reduced model. We have set the viscosity coefficient to $\nu = 0.03.$ The solution of the full system was computed with $N=196$ Fourier modes ($F=[-98,97]$) and the first 7 Legendre polynomials ($M=7$). The first 7 Legendre polynomials were enough to obtain converged statistics for the full system. We want to construct reduced models for the evolution of the coefficients of the first 2 Legendre polynomials i.e., $u_{k0},u_{k1}$ for $k \in F.$ If we want to apply the MZ formalism in the way we did for the previous two examples (employing a finite-rank projection etc.) we would need to construct a basis in $2 \times 98$ dimensions (exploiting the fact that the solution of the Burgers equation is real-valued). Any attempt to use basis functions up to a high order is infeasible for such a high-dimensional situation. We have attempted to use only low order basis functions but they are not enough to guarantee accuracy of the reduced model. Thus, we turn to the reformulated reduced model that was presented in Section \ref{reformulation}.
\subsubsection{Reformulated MZ reduced model}\label{mz_ode_example}
To conform with
the Mori-Zwanzig formalism we set
$$R_{kr}(u)=- \frac{ik}{2} \sum_{l=0}^{M-1} \sum_{m=0}^{M-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{pl} u_{qm} c_{lmr} - \nu k^2 u_{kr} ,$$
where $u=\{u_{kr}\}$ for $k \in F$ and $r=0,\ldots,M-1.$ Thus, we have
\begin{equation}
\label{burgersodemz}
\frac{d u_{kr}}{dt}=R_{kr}(u)
\end{equation}
for $k \in F$ and $r=0,\ldots,M-1.$
We proceed by dividing the variables in resolved and unresolved. In particular, we consider as resolved the variables $\hat{u}=\{u_{kr}\}$ for $k \in F$ and $r=0,\ldots,\Lambda-1,$ where $\Lambda < M.$ Similarly, the unresolved variables are $\tilde{u}=\{u_{kr}\}$ for $k \in F$ and $r=\Lambda,\ldots,M-1.$ In the notation of Section \ref{mz_formalism} we have $H= F \cup (0,\ldots,\Lambda-1)$ and $G= F\cup (\Lambda,\ldots,M-1).$ In other words, we resolve, for all the Fourier modes, only the first $\Lambda$ of the Legendre expansion coefficients and we shall construct a reduced model for them.
The system (\ref{burgersodemz}) is supplemented by the initial
condition $u_0=(\hat{u}_0,\tilde{u}_0).$ We focus on initial conditions where
the unresolved Fourier modes are set to zero, i.e. $u_0=(\hat{u}_0,0).$ We also define $L$ by
$$L=\sum_{k \in F}\sum_{r=0}^{M-1} R_{kr}(u_0) \frac{\partial}{\partial u_{0kr}}.$$
To construct a MZ reduced model we need to define a projection operator $P.$ For a function $h(u_0)$ of all the
variables, the projection operator we will use is defined by $P(h(u))=P(h(\hat{u}_0,\tilde{u}_0))=h(\hat{u}_0,0),$ i.e.
it replaces the value of the unresolved variables $\tilde{u}_0$ in any function $h(u_0)$ by zero. Note that this choice of projection is consistent with the initial conditions we have chosen. Also, we define the Markovian term
$$ PLu_{0kr}=PR_{kr}(u_0)=- \frac{ik}{2} \sum_{l=0}^{\Lambda-1} \sum_{m=0}^{\Lambda-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{0pl} u_{0qm} c_{lmr} - \nu k^2 u_{0kr}.$$
The Markovian term has the same functional form as the RHS of the full system but is restricted to a sum over only the first $\Lambda$ Legendre expansion coefficients for each Fourier mode.
For the term $PLQLu_{0kr}$ we find
\begin{equation}\label{burgersmemory1}
PLQLu_{0kr}=2\times \biggl [ - \frac{ik}{2} \sum_{l=\Lambda}^{M-1} \sum_{m=0}^{\Lambda-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} PLu_{0pl} u_{0qm} c_{lmr} \biggr ] .
\end{equation}
Finally, to implement any method to solve equation \eqref{newton1} for the estimation of $t_0$, we need to specify its RHS. This requires the evaluation of the expression $Pe^{tL}QLu_{0kr}.$ For the case of the viscous Burgers equation, we find
\begin{gather}\label{newton4}
Pe^{tL}QLu_{0kr}=2(- \frac{ik}{2}) \sum_{l=\Lambda}^{M-1} \sum_{m=0}^{\Lambda-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{pl} u_{qm} c_{lmr} \\
- \frac{ik}{2} \sum_{l=\Lambda}^{M-1} \sum_{m=\Lambda}^{M-1} \underset{p, q \in F}{\underset{p+q=k }{ \sum}} u_{pl} u_{qm} c_{lmr}. \notag
\end{gather}
Note that since we restrict attention to initial conditions for which the unresolved variables are zero and the projection sets the unresolved variables to zero, the quantity $Pe^{tL}QLu_{0kr}$ can be computed through the evolution of the full system \eqref{burgersodemz}.
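As an illustration of how these expressions can be evaluated in practice, the following Python sketch computes the full Galerkin right-hand side $R_{kr}(u)$ by a pseudo-spectral (FFT-based) convolution; restricting the sums over $l,m$ as in the formulas above yields, in the same way, the Markovian term and the terms $PLQLu_{0kr}$ and $Pe^{tL}QLu_{0kr}$. The routine is a sketch only: de-aliasing is omitted and the array layout is an assumption.
\begin{verbatim}
import numpy as np

def burgers_gpc_rhs(u, c, nu):
    """Pseudo-spectral evaluation of R_{kr}(u) in eq. (burgersodemz_system).

    u : complex array (N, M) of Fourier-Legendre coefficients u_{kr}
        (FFT wavenumber ordering assumed)
    c : array (M, M, M) with the triple products c_{lmr}
    nu: viscosity coefficient.  Aliasing of the quadratic term is not removed.
    """
    N, M = u.shape
    k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers, FFT order
    ux = np.fft.ifft(u, axis=0) * N                   # physical-space field for each l
    rhs = np.zeros_like(u)
    for r in range(M):
        prod = np.zeros(N, dtype=complex)
        for l in range(M):
            for m in range(M):
                if c[l, m, r] != 0.0:                 # exploit the sparsity of c_{lmr}
                    prod += c[l, m, r] * ux[:, l] * ux[:, m]
        conv = np.fft.fft(prod) / N                   # sum_{p+q=k} u_{pl} u_{qm} c_{lmr}
        rhs[:, r] = -0.5j * k * conv - nu * k ** 2 * u[:, r]
    return rhs
\end{verbatim}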
The full system was solved with the modified Euler method with $\delta t = 0.001.$ The reduced model uses $N=196$ Fourier modes but only the first two Legendre polynomials, so $\Lambda=2.$ It was solved using the modified Euler method with $\delta t = 0.001.$ The parameter $t_0$ needed for the evolution of the memory term was found to be 0.3783 through the procedure described in Section \ref{mz_optimal}.
\begin{figure}
\centering
\epsfig{file=plot_initial_energy_mean.eps,width=4.in}
\caption{Evolution of the mean of the energy of the solution using only the first two Legendre polynomials.}
\label{plot_initial_energy_mean}
\end{figure}
\begin{figure}
\centering
\epsfig{file=plot_initial_energy_stdev.eps,width=4.in}
\caption{Evolution of the standard deviation of the energy of the solution using only the first two Legendre polynomials.}
\label{plot_initial_energy_stdev}
\end{figure}
Figure \ref{plot_initial_energy_mean} shows the evolution of the mean energy of the solution
$$\mathbb{E}[E(t)]=\frac{1}{2} \sum_{k \in F} \sum_{r=0}^1 2\pi |u_{kr}(t)|^2 \frac{1}{2r+1}$$
as computed from the full system (with $M=7$ Legendre polynomials), the MZ reduced model with $\Lambda=2$ {\it without} memory (keeping only the Markovian term) and the MZ reduced model with $\Lambda=2$ {\it with} memory. Figure \ref{plot_initial_energy_stdev} shows the evolution of the standard deviation of the energy of the solution. The variance of the energy is given by
$$Var[E(t)]=\frac{1}{4} \sum_{k_1, k_2 \in F} \sum_{r_1,\ldots,r_4=0}^1 (2\pi)^2 u_{k_1r_1} u_{k_1r_2}^*u_{k_2r_3} u_{k_2r_4}^*d_{r_1r_2r_3r_4}-\{ \mathbb{E}[E(t)]\}^2,$$
where
$$d_{r_1r_2r_3r_4}=\int_{-1}^1\phi_{r_1}(\xi)\phi_{r_2}(\xi)\phi_{r_3}(\xi)\phi_{r_4}(\xi) \frac{1}{2} d\xi.$$
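For completeness, the mean and variance of the energy can be evaluated from the retained Legendre coefficients as in the Python sketch below; the quadruple-product tensor $d_{r_1r_2r_3r_4}$ is assumed to be precomputed (e.g. by quadrature).
\begin{verbatim}
import numpy as np

def energy_stats(u, d):
    """Mean and variance of the energy from the retained Legendre coefficients.

    u : complex array (N, Lambda) of coefficients u_{kr} at one time instant
    d : array (Lambda, Lambda, Lambda, Lambda) of quadruple products d_{r1 r2 r3 r4}
    """
    Lam = u.shape[1]
    r = np.arange(Lam)
    mean = 0.5 * np.sum(2.0 * np.pi * np.abs(u) ** 2 / (2 * r + 1))
    q = np.einsum('ka,kb->ab', u, u.conj())              # sum_k u_{k r1} u_{k r2}^*
    second = 0.25 * (2.0 * np.pi) ** 2 * np.einsum('ab,cd,abcd->', q, q, d).real
    return mean, second - mean ** 2
\end{verbatim}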
The reduced model performs equally well with or without memory. Of course, the reduced model with memory is slower than the reduced model without memory. However, the reduced model with memory is still about 4 times faster than the full system.
\begin{figure}
\centering
\epsfig{file=plot_initial_gradient_mean.eps,width=4.in}
\caption{Evolution of the mean of the squared $l_2$ norm of the gradient of the solution calculated using only the first two Legendre polynomials.}
\label{plot_initial_gradient_mean}
\end{figure}
\begin{figure}
\centering
\epsfig{file=plot_initial_gradient_stdev.eps,width=4.in}
\caption{Evolution of the standard deviation of the squared $l_2$ norm of the gradient of the solution calculated using only the first two Legendre polynomials.}
\label{plot_initial_gradient_stdev}
\end{figure}
Figure \ref{plot_initial_gradient_mean} shows the evolution of the mean squared $l_2$ norm of the gradient of the solution
$$\mathbb{E}[G(t)]= \sum_{k \in F} \sum_{r=0}^1 2\pi k^2 |u_{kr}(t)|^2 \frac{1}{2r+1}$$
as computed from the full system (with $M=7$ Legendre polynomials), the MZ reduced model with $\Lambda=2$ {\it without} memory (keeping only the Markovian term) and the MZ reduced model with $\Lambda=2$ {\it with} memory. Figure \ref{plot_initial_gradient_stdev} shows the evolution of the standard deviation. The variance is given by $$Var[G(t)]= \sum_{k_1, k_2 \in F} \sum_{r_1,\ldots,r_4=0}^1 (2\pi)^2 k_1^2 k_2^2u_{k_1r_1} u_{k_1r_2}^*u_{k_2r_3} u_{k_2r_4}^*d_{r_1r_2r_3r_4}-\{ \mathbb{E}[G(t)]\}^2.$$
The large values of the standard deviation of the mean squared $l_2$ norm of the gradient are justified by the uncertainty in the initial condition. Recall that we have chosen an initial condition which can vary ``uniformly'' between the functions 0 and $2\sin x.$ As a result, the standard deviation is large because it has to account for a wide range of possible initial conditions.
It is evident from the figures that the inclusion of the memory term improves the performance of the reduced model. Also, it is evident that there is room for improvement of the reduced model {\it with} memory. In particular, more terms are needed in the reformulated MZ model to approximate better the memory.
Recall that the solution of the Burgers equation is a contraction \cite{lax}. Eventually, the complete description of the uncertainty caused by the uncertainty in the initial condition requires only a few polynomial chaos expansion coefficients. This happens at a time scale that is dictated by the magnitude of the viscosity coefficient. That is why for long times the reduced models with and without memory have behavior comparable to that of the full system. However, for short times, the inclusion of the memory term does make a difference because information from the higher chaos expansion coefficients is needed. The higher chaos expansion coefficients will have a more prolonged contribution for systems that possess unstable modes. In such cases, the inclusion of the memory term becomes imperative for short as well as long times. Results for such cases will be presented elsewhere.
\section{Discussion and future work}\label{discussion}
We have examined the application of the Mori-Zwanzig formalism to the problem of constructing reduced models for uncertainty quantification. In particular, we have constructed reduced models for subsets of the polynomial chaos expansion coefficients needed to describe fully the uncertainty. We have examined cases of parametric or initial condition uncertainty. The main conclusion from the current work is that while the MZ formalism can be applied for the construction of reduced models, the task of constructing an efficient (or even feasible) reduced model can be involved. For cases where the straightforward application of the MZ formalism is not possible, we have offered an alternative construction. The implementation of this alternative construction is reminiscent of renormalization constructions used to describe the evolution of complex solutions of PDEs \cite{s11}.
The current work opens several directions for future work. First, we should investigate whether there is a more economical way of choosing the basis functions for cases when the basis functions have many arguments (as was the case for the Burgers example). This is important because the calculation of the memory kernels through the finite-rank projection is well defined and the solution of the corresponding Volterra equations can be performed with high accuracy. A related question is whether there is sparsity in the coefficients of the basis functions. It is plausible that even though in principle the number of basis functions to reach a specific order may be very large, many of them may not contribute to the representation. A related approach would be the use of machine learning algorithms to obtain a more efficient representation of the memory term. Finally, a related issue to be investigated is how to ensure the stability of the reduced model when the finite-rank projection is employed. For example, for the nonlinearly damped and forced particle case, we had to assign smaller variances for the higher coefficients to stabilize the reduced model. This procedure needs to be investigated and, if possible, automated.
A second interesting research direction has to do with the representation of the memory term when the finite-rank projection is not possible due to a prohibitively large number of basis functions. We have explored here an expansion of the memory term that involves, in essence, a Taylor expansion of the orthogonal dynamics operator. Such an expansion seems more plausible when the timescale of the orthogonal dynamics is {\it slower} than that of the resolved variables. However, there is an alternative way of performing the expansion of the memory term that is more suited to the case when the orthogonal dynamics is {\it faster} than the resolved variables. Such an expansion leads to a Taylor expansion of the whole memory term, not just the orthogonal dynamics operator. If the memory kernel becomes insignificant after a time interval $t_0,$ then one can use the {\it full} system up to time $t_0,$ estimate the Taylor expansion of the whole memory term around time $t_0$ and then switch to the {\it reduced} model with the memory given by the Taylor expansion. We will investigate this alternative memory representation and report the results elsewhere.
\section{Acknowledgements}
The authors would like to thank D. Barajas-Solano, H. Lei and A. Tartakovsky for useful discussions and comments. This research at Pacific Northwest National Laboratory (PNNL) was partially supported by the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4), under Award Number DE-SC0009280 and partially by the U.S. DOE ASCR project ``Uncertainty Quantification For Complex Systems Described by Stochastic Partial Differential Equations". PNNL is operated by Battelle for the DOE under Contract DE-AC05-76RL01830.
|
{
"timestamp": "2018-03-09T02:00:12",
"yymm": "1803",
"arxiv_id": "1803.02826",
"language": "en",
"url": "https://arxiv.org/abs/1803.02826"
}
|
\section{Introduction}
The present paper is meant to complete the program set out in \cite{Mkrtchyan:2017ixk}, concerning the classification of cubic interactions for massless bosons in three space-time dimensions. In perspective, the hope is that this work will lead to a full non-linear action formulation for higher-spin systems coupled with matter in $d=3$.
Higher-Spin (HS) Gravity \cite{Vasiliev:1990en,Prokushkin:1998bq} is one of the promising attempts for the reconciliation of quantum theory and General Relativity. Conjectured dualities with known CFT's (see \cite{Gaberdiel:2010pz,Beccaria:2016tqy,Giombi:2016pvg,Bae:2017fcs} and references therein) put various models of HS gravity in the front line of holographic studies of quantum gravity. A simple and promising example of holographic duality is the $AdS_3/CFT_2$ conjecture of \cite{Gaberdiel:2010pz}.
One of the main drawbacks of these models is, however, the lack of a bulk description suitable for quantisation. This problem in particular is attacked through the so-called Fronsdal program --- a perturbative construction of the classical action for HS gravity models by applying the Noether method to the gauge symmetries of massless HS fields. The starting point is the free Fronsdal action for massless fields of any spin \cite{Fronsdal:1978rb}. The interacting theories can be constructed order-by-order in powers of the fields, starting from the first non-trivial order --- cubic vertices. The latter are the main building blocks of most of the known interacting theories.
Cubic interactions of massless higher-spin fields in arbitrary space-time dimensions $d\geq 4$ were studied extensively starting from the pioneering work \cite{Bengtsson:1983pd}, later extended to the complete light-cone gauge classification of vertices first in four dimensions \cite{Bengtsson:1986kh} and then in arbitrary dimensions $d\geq 4$ \cite{Metsaev:2005ar}. The covariant approach has been developed more slowly compared to the light-cone approach, and after seminal works of the same period \cite{Berends:1984rq, Fradkin:1987ks}, the Fronsdal program \cite{Fronsdal:1978rb} was revived again in the current millennium (see \cite{Bekaert:2006us,fms1, Fotopoulos:2008ka, Zinoviev:2008ck, Boulanger:2008tg, Manvelyan:2009tf, Bekaert:2009ud, Manvelyan:2010wp} and references therein) resulting in the classification of parity-even cubic vertices in Minkowski space of any dimension $d\geq 4$ \cite{Manvelyan:2010jr}, i.e. the covariant extension of \cite{Metsaev:2005ar}. These vertices were packed into surprisingly compact generating functions \cite{Sagnotti:2010at, Fotopoulos:2010ay, Manvelyan:2010je, Mkrtchyan:2010pp}, with intriguing hints on possible relations with String Theory, and the studies of their $(A)dS$ extensions followed \cite{Joung:2011ww,Manvelyan:2012ww,Francia:2016weg} in parallel with Vasiliev's frame-like approach \cite{Zinoviev:2010cr,Vasilev:2011xf,Boulanger:2012dx} to $(A)dS$ vertices.
Even though the light-cone classification has been known for a long time, the full covariant classification in four dimensions was completed only recently in \cite{Conde:2016izb}, where the parity-odd vertices in $d=4$ were derived. A notable difference between covariant and light-cone vertices in four dimensions is the existence of a two-derivative ``minimal'' coupling to gravity in the light-cone, which is absent in the covariant classification. It is tempting to speculate that in four dimensions symmetric tensor fields may not constitute a perfect choice for minimal covariant variables for describing flat space theories and possibly even for the $(A)dS_4$ Vasiliev theory\footnote{The possibility of describing the same spectrum of particles with an alternative choice of ``minimal variables'', i.e. mixed-symmetry tensors, is poorly explored despite the fact that the Vasiliev system contains these tensors on the same footing as the symmetric ones. See, however, \cite{Basile:2015jjd,Joung:2016naf} and references therein.}. Indeed, these extra light-cone vertices are crucial for the consistency of the HS theories in four-dimensional Minkowski space \cite{Metsaev:1991mt} (see also \cite{Conde:2016izb,Devchand:1996gv,Sleight:2016xqq,Ponomarev:2016lrm,Ponomarev:2016cwi}).
The absence of corresponding covariant couplings in $d\geq 4$ is known as Aragone-Deser problem \cite{Aragone:1979hx} which is resolved in constant non-zero curvature $(A)dS$ spacetimes by the Fradkin-Vasiliev mechanism \cite{Fradkin:1987ks} (see \cite{Boulanger:2008tg,Joung:2013nma} for related discussion).
The covariant classification of cubic vertices in \cite{Manvelyan:2010jr} not only completed the light-cone vertices of Metsaev \cite{Metsaev:2005ar} to off-shell ones for Fronsdal fields but also defined a scheme of field redefinitions in a given cubic action to bring it to a form containing no more than $s_1+s_2+s_3$ derivatives. This form does not contain any contraction between derivatives and is uniquely defined. We refer to it as a vertex in the Metsaev basis. This was implemented later in \cite{Boulanger:2015ova,Didenko:2015cwv} for translating the quadratic order of the Vasiliev equations in $(A)dS_4$, corresponding to the cubic action, to the Metsaev basis in the metric formulation, that is, $AdS$ extensions of Minkowski vertices for each number of derivatives $\Delta = s_1+s_2+s_3-2n$ for $n=0,1,\dots,\min\{s_1,s_2,s_3\}$.
Attempts to go beyond the cubic order
\cite{MM,Sagnotti:2010at,Dempster:2012vw,Bengtsson:2016hss,Taronna:2017wbx,Roiban:2017iqg} have met difficulties in the framework of local field theory. An interesting suggestion of a possible non-local theory with conformal symmetry has been made in \cite{Roiban:2017iqg}, which calls for further studies.
Another interesting recent development is the progress in the holographic reconstruction \cite{Bekaert:2014cea,Sleight:2016dba,Ponomarev:2017qab} of type A HS theory in $AdS_{d+1}$. Together with the aforementioned attempts at constructing a quartic-order action via the Noether procedure, these results brought to the forefront of HS research the puzzle of locality which, to the best of our knowledge, was first posed sharply for three-dimensional systems in \cite{Prokushkin:1998bq}.
One may hope that the key to the solution of this puzzle can be found more easily in the three-dimensional case by applying recently obtained knowledge of the metric-like theory.
Unfortunately, most of the aforementioned advances in higher-dimensional HS gravities are not directly applicable to three-dimensional models. This is due to the heavy use of the Metsaev basis of cubic interactions in higher dimensions, which does not apply to $d=3$. In order to make use of new results in metric-like HS gravity also for the three-dimensional models, one first needs to address the gap in the classification of cubic vertices.
In this paper, we continue the study aimed at filling this gap, initiated in \cite{Mkrtchyan:2017ixk}, where parity-even cubic vertices of massless bosons were classified. We complete the three-dimensional classification of cubic interactions by deriving parity-odd vertices for massless bosonic fields as well as their couplings to Chern-Simons fields. We also elaborate on the analogous classification in two dimensions in the Appendix.
Despite all the successes of the three-dimensional HS gravities (see \cite{Prokushkin:1998bq,Banados:2016nkb,Campoleoni:2017xyl,Joung:2014qya,Iazeolla:2015tca} and references therein), there is no action formulation\footnote{See, however, \cite{Arias:2016ajh} and references therein for non-standard actions.} for the only known example of higher spin theory with propagating degrees of freedom in three dimensions i.e. Prokushkin-Vasiliev theory \cite{Prokushkin:1998bq}. This theory contains scalar degrees of freedom interacting with higher spin gauge fields which do not carry propagating degrees of freedom in the bulk.
The Chern-Simons formulation of HS gravity in three dimensions does not answer the question whether a Lagrangian for the Prokushkin-Vasiliev theory exists. This question may be tackled in the metric-like formulation where scalar and gauge fields can be put into interaction in a straightforward manner. This approach is much less explored in three dimensions though, with the exception of a few works on higher spins in the Fronsdal formulation \cite{Campoleoni:2012hp,Fredenhagen:2014oua,Campoleoni:2014tfa}.
S-matrix methods do not apply to three dimensions where massless particles of spin $s\geq 2$ do not propagate.
For the same reason, Metsaev's light-cone classification \cite{Metsaev:2005ar} does not work in three dimensions. The parts of the cubic vertices that contain no divergences and traces, i.e. the traceless-transverse (TT) vertices, are non-trivial though, as shown in \cite{Mkrtchyan:2017ixk}, and can serve as the basis for the classification of cubic interactions of massless fields in three dimensions.
The main difference between dimensions $d\geq 5$ and $d\leq 4$ for the cubic interactions of massless symmetric HS fields is the existence of dimension-dependent identities (Schouten identities) that are available in the latter case.
Due to these identities, the classification of cubic vertices in three dimensions becomes a completely independent problem which overlaps with the generic dimensional classification only for some vertices involving lower spin fields.
The classification of $d=3$ vertices was initiated in \cite{Mkrtchyan:2017ixk}, where the parity-even vertices for interactions of massless bosons were derived. In this work, we complete the classification by adding to it the parity-odd vertices of massless bosons as well as their interactions with Chern-Simons vector fields.
The paper is organized as follows: In Section \ref{FT} we review the metric-like formulation of free HS fields. In Section \ref{Cubic Review} we review the construction of cubic vertices in higher dimensions, and the parity-even vertices in three dimensions. In Section \ref{PO} we derive the full list of parity-odd cubic vertices of massless bosons in three dimensions and establish an interesting relation between parity-odd and parity-even vertices. In Section \ref{Chern-Simons} we study interactions of massless fields with Chern-Simons vector fields. We conclude with a summary of results and a discussion in Section \ref{Discussion}. The appendices provide curious observations related to the parity-even vertices and the classification of Fronsdal cubic vertices in two dimensions.
\section{Review: Free Theory}\label{FT}
In this paper we study interactions of massless fields of any spin as deformations of the free theory. To this end, we first set the stage by describing the free theory.
In order to streamline the notation, we will contract spacetime indices $\mu,\nu,\dots$ with commuting auxiliary variables $a^\mu$. In this language, the rank $s$ symmetric tensor field is given by:
\begin{align}
\phi^{\sst s}(a) = \frac{1}{s!} \, \phi_{\mu_1 \dots \mu_s} \, a^{\mu_1} \dots a^{\mu_s} \,.
\end{align}
In order to describe a free particle with spin $s$ in a covariant manner, one has to impose on the rank $s$ symmetric Lorentz tensor field the so-called Fierz equations \cite{Fierz}:
\begin{subequations}
\begin{align}
(\Box + m^2)\phi^{\sst s}(a)= \frac{1}{s!} \,(\Box + m^2)\,\phi_{\mu_1 \dots \mu_s} \, a^{\mu_1} \dots a^{\mu_s} =0\,,\label{Fierz1}\\
(\partial_x\cdot \partial_a)\phi^{\sst s}(a)=\frac1{(s-1)!}\,\partial^{\nu} \phi_{\nu \mu_2 \dots \mu_s} \, a^{\mu_2} \dots a^{\mu_s} =0\,,\label{Fierz2}\\
\partial_a^2\phi^{\sst s}(a)=\frac{1}{(s-2)!} \, \phi^{\nu}{}_{\nu\mu_3 \dots \mu_s} \, a^{\mu_3} \dots a^{\mu_s}=0\,.\label{Fierz3}
\end{align}
\end{subequations}
For the massless fields $(m^2=0)$, one has to require also an extra equivalence between fields, differing by a gradient shift with traceless and transverse parameter $\e^{\sst s-1}(x;a)$:
\begin{align}
\d\phi^{\sst s}(a)=(a\cdot \partial_x)\e^{\sst s-1}(a)\,,\quad (\partial_x\cdot \partial_a)\e^{\sst s-1}(a)=0\,,\quad
\partial_a^2\e^{\sst s-1}(a)=0\,.
\end{align}
It has been a challenge to find a Lagrangian even for the free Fierz equations. The natural expectation, based on experience with lower spins, is to have a single equation of motion for the rank $s$ tensor field, which has all three Fierz equations as its consequences, together with gauge symmetry of the action in the massless case.
For the massless case, the most conventional description is due to Fronsdal \cite{Fronsdal:1978rb}.
The equation of motion is given by the Fronsdal tensor:
\begin{align}
\mathcal{F}^{\sst s}(a) \equiv \left[ \Box - (a \cdot \partial_x) D \right] \, \phi^{\sst s}(a)=0 \,,
\end{align}
with the de Donder operator $D(a) = (\partial_x \cdot \partial_a) - \frac12 \, (a \cdot \partial_x) \partial^2_a$.
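For instance, for $s=2$ this gives (a direct evaluation, with our normalisation conventions understood)
\begin{align}
\mathcal{F}^{\sst 2}(a)=\frac12\,a^\mu a^\nu\left[\Box\phi_{\mu\nu}-\partial_\mu\partial^\rho\phi_{\rho\nu}-\partial_\nu\partial^\rho\phi_{\rho\mu}+\partial_\mu\partial_\nu\phi^\lambda{}_{\lambda}\right],
\end{align}
which is proportional to the linearised Ricci tensor of the fluctuation $\phi_{\mu\nu}$, so that for spin two the Fronsdal equation reduces to linearised Einstein gravity.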
The Fronsdal tensor $\mathcal{F}$ is invariant with respect to gauge transformations:
\begin{align}
\delta \phi^{\sst s}(a) = (a \cdot \partial_x) \epsilon^{\sst s-1}(a) && \text{with} && \partial_a^2 \, \epsilon^{\sst s-1}(a) = 0 \,.
\end{align}
The Fronsdal field $\phi^{\sst s}(a)$ is doubly traceless:
\begin{align}
(\partial_a^2)^2 \, \phi^{\sst s}(a) =0 \,.
\end{align}
The action is given by:
\begin{align}
\label{eq:fronsdalAction}
S^{(s)} = \frac12 \int \text{d}^d x \; \phi^{\sst s}(a) (\overset{\leftarrow}{\partial_a} \cdot \overset{\rightarrow}{\partial_a} )^s \mathcal{G}^{\sst s}(a) \,,
\end{align}
with Lagrangian equations of motion:
\begin{align}
\mathcal{G}^{\sst s}(a) = \mathcal{F}^{\sst s}(a) - \frac14 \, a^2 \partial_a^2 \, \mathcal{F}^{\sst s}(a)=0 \,.
\end{align}
Using double-tracelessness of the Fronsdal field, one can easily show that the equations of motion $\mathcal{G}=0$ are equivalent to the Fronsdal equations $\mathcal{F}=0$.
At the linearised level, the Fronsdal equations imply the Fierz equations.
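As a simple check, for $s=1$ the trace terms are absent and, writing $\phi^{\sst 1}(a)=A_\mu a^\mu$, one finds
\begin{align}
\mathcal{G}^{\sst 1}(a)=\mathcal{F}^{\sst 1}(a)=a^\mu\left(\Box A_\mu-\partial_\mu\partial^\nu A_\nu\right)\,,
\end{align}
so that the action \eqref{eq:fronsdalAction} becomes $\frac12\int \text{d}^dx\, A^\mu\left(\Box A_\mu-\partial_\mu\partial^\nu A_\nu\right)$, i.e. the Maxwell action $-\frac14\int \text{d}^dx\, F_{\mu\nu}F^{\mu\nu}$ up to a total derivative.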
An alternative to the Fronsdal action is given by the Maxwell-like formulation \cite{Campoleoni:2012th} of HS dynamics.
The traceless-transverse (TT) parts of vertices in both formulations are equivalent though \cite{Francia:2016weg} and we will therefore not distinguish them in this work since we restrict ourselves to TT vertices only following \cite{Mkrtchyan:2017ixk}. The TT vertices studied here can be completed to off-shell vertices for both Fronsdal and Maxwell-like HS fields\footnote{It is an empirical observation that the TT vertices can be completed to off-shell ones, based on the known examples of both Fronsdal \cite{Manvelyan:2010jr} and Maxwell-like \cite{Francia:2016weg} vertices in $d\geq 4$. We do not have a proof that it will work in 3d straightforwardly. One interesting check would be to see if the ``Grassmann miracle'' of \cite{Manvelyan:2010je} (which allows to immediately derive the off-shell vertices from TT ones) works in this case. In three dimensions the off-shell vertex computations are technically involved, though, and we do not attempt them here.}. One can regard the results of this work as the classification of deformations of the Fierz system of equations \cite{Fierz} for massless HS fields in $d=3$. There is an important difference between Fronsdal and Maxwell-like descriptions relevant to this work which we do not elaborate on here, though. While Fronsdal fields do not carry propagating degrees of freedom in three dimensions, the reducible Maxwell-like fields do carry a propagating massless scalar (vector) degree of freedom for even (odd) spin. As a consequence, non-linear theories of Maxwell-like fields, if any, cannot be given by Chern-Simons actions in striking difference with many known models for Fronsdal fields. The classification that we carry out here can be implemented for building models with both Fronsdal and Maxwell-like field content.
\section{Review: Cubic Vertices}\label{Cubic Review}
We will assume that there exists a gauge invariant non-linear action $S$ that can be expanded in powers of the fields with a small expansion parameter $g$ as follows
\begin{align}
S = S^{(2)} + g \, S^{(3)} + g^2 \, S^{(4)} + \dots \,,\label{FA}
\end{align}
where $S^{(2)} = S^{(s_1)} + S^{(s_2)} + S^{(s_3)}$ with $S^{(s_i)}$ denoting the Fronsdal action for the spin $s_i$ field \eqref{eq:fronsdalAction}.
Gauge invariance of the action implies
\begin{align*}
\delta S = (\delta^{(0)} + g \delta^{(1)} + \dots ) (S^{(2)} + g S^{(3)} + \dots) && \rightarrow && \delta^{(0)} S^{(3)} + \delta^{(1)} S^{(2)} = 0 \,.
\end{align*}
Using the fact that $\delta^{(1)} S^{(2)} = \delta^{(1)} \phi \; \mathcal{G}$, it follows that\footnote{Note that our notation is somewhat schematic. The variation is to be understood as a sum of three terms, i.e. $\delta^{(1)} \phi \;\mathcal{G}= \delta^{(1)} \phi^{(s_1)} \, \mathcal{G}^{(s_1)}+ \delta^{(1)} \phi^{(s_2)} \, \mathcal{G}^{(s_2)}+ \delta^{(1)} \phi^{(s_3)} \, \mathcal{G}^{(s_3)}$.}
\begin{align}
\delta^{(0)} S^{(3)} \approx 0 \,,
\end{align}
where $\approx$ denotes equality upon imposing free equations of motion $\mathcal{G}=0$. Note also that any two actions $S$ and $S'$ related by a field redefinition $\phi \to \phi + g \, f(\phi,\phi)$, obey
\begin{align}
S^{(3)} \approx S'^{(3)} \,.
\end{align}
This ambiguity in field redefinition at the cubic order will be fixed by restricting the possibility of derivative contractions in the cubic vertex (as reviewed for example in \cite{Conde:2016izb,Mkrtchyan:2017ixk}).
One can now make the following ansatz for the cubic action
\begin{align}
S^{(3)} = \int \text{d}^dx \, \mathcal{V} \; \phi^{\sst s_1}(a_1,x_1) \, \phi^{\sst s_2}(a_2,x_2) \, \phi^{\sst s_3}(a_3,x_3) \; \left( \prod^3_{i=1} \delta(x-x_i) \,\text{d}^dx_i \right) \,,
\end{align}
where the differential operator $\mathcal{V}=\mathcal{V}(\partial_{x_1},\partial_{a_1},\partial_{x_2},\partial_{a_2},\partial_{x_3},\partial_{a_3})$ is to be determined. Since $\mathcal{V}$ is a scalar operator, it is built of contractions of the derivatives $\partial_{a_i}$ and $\partial_{x_i}$. It can be shown that, up to total derivatives and upon fixing the freedom in field redefinitions \footnote{The field redefinition freedom is fixed following \cite{Manvelyan:2010wp,Manvelyan:2010jr}, that is, by removing all terms with derivatives contracted with each other. Strictly speaking, one can exclude the derivative contractions by field redefinitions only in the terms that do not contain divergences. That turns out to be already sufficient for fixing the field redefinition freedom (see, e.g., \cite{Manvelyan:2010je}).}, all contractions can be written in terms of
\begin{subequations}
\begin{align}
\partial_{a_i}\cdot\partial_{x_i} &\equiv \text{Div}_i \,,\\
\partial_{a_i}\cdot\partial_{x_{i+1}} &\equiv y_i \,,\\
\partial_{a_i}\cdot\partial_{a_{i+1}} &\equiv z_{i+2} \,,\\
\partial_{a_i}\cdot\partial_{a_{i}} &\equiv T_i \,.
\end{align}
\end{subequations}
Let us furthermore restrict to interaction terms which do not involve traces $T_i$ or divergences Div$_i$. In this case, one obviously has
\begin{align}
\mathcal{V} = \mathcal{V}(y_i,z_i) \,.
\end{align}
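To illustrate this dictionary schematically, consider a triple of fields of spins $(2,1,0)$, denoted $h_{\mu\nu}$, $A_\mu$ and $\varphi$ for clarity. Acting with the single monomial $y_1 z_3$ (which is of course not gauge invariant by itself) and setting all points equal gives
\begin{align}
y_1\,z_3\;\tfrac12\,h_{\mu\nu}(x_1)\,a_1^{\mu}a_1^{\nu}\;A_{\rho}(x_2)\,a_2^{\rho}\;\varphi(x_3)\,\Big|_{x_i=x}
=h_{\mu\nu}\,\partial^{\nu}\!A^{\mu}\,\varphi\,,
\end{align}
i.e. $y_i$ contracts an index of the $i$-th field with a derivative acting on the $(i+1)$-th field, while $z_i$ contracts one index of each of the two remaining fields.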
The gauge variation of the ansatz for the cubic action is then given by
\begin{align*}
\delta^{(0)} S^{(3)} = \int \text{d}^d x \prod_{i=1}^{3} \text{d}x_i \delta(x-x_i) \; \mathcal{V} \; \sum_{j=1}^3 a_j \cdot \partial_{x_j} \epsilon(a_j,x_j) \; \phi(a_{j+1},x_{j+1}) \; \phi(a_{j-1},x_{j-1}) \,,
\end{align*}
where here and in the following we assume indices $i,j,\dots$ to be cyclic in $(1,2,3)$, for example $y_{i} \equiv y_{i+3}$.
Using the commutators
\begin{align}
[z_{i+1}, a_i \cdot \partial_i] \circeq y_{i+2}\,, && [z_{i+2}, a_i \cdot \partial_i] \circeq -y_{i+1} \,,
\end{align}
where $\circeq$ denotes equality up to equations of motion, total derivatives, traces and divergences; all other commutators can similarly be shown to vanish up to such terms. After dropping total derivatives with respect to $\partial_{x_i}$, it then follows that
\begin{align*}
\delta^{(0)} S^{(3)} = \int \text{d}^d x \prod_{i=1}^{3} \text{d}x_i \delta(x-x_i) \; \sum_j (y_{j-1} & \partial_{z_{j+1}}- y_{j+1} \partial_{z_{j-1}}) \; \mathcal{V} \\ & \times \epsilon(a_j,x_j) \; \phi(a_{j+1},x_{j+1}) \; \phi(a_{j-1},x_{j-1})
\end{align*}
It then immediately follows that gauge invariant vertices solve the equations
\be
D_i \mathcal{V}\equiv (y_{i-1}\partial_{z_{i+1}}-y_{i+1}\partial_{z_{i-1}})\mathcal{V}=0\,,\quad i=1,2,3\,,\label{Di}
\ee
and are given by
\begin{align}
\mathcal{V}=\mathcal{V}(y_i,G)\qquad\qquad \text{with} \quad G = \sum_{i=1}^3 y_i\, z_i \,.
\end{align}
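Since the vertex must be of homogeneity degree $s_i$ in each $\partial_{a_i}$, for a given triple of spins the solutions reduce to the monomials
\begin{align}
\mathcal{V}_n=G^{\,n}\,y_1^{\,s_1-n}\,y_2^{\,s_2-n}\,y_3^{\,s_3-n}\,,\qquad n=0,1,\dots,\min\{s_1,s_2,s_3\}\,,
\end{align}
each carrying $s_1+s_2+s_3-2n$ derivatives. For three spin-one fields, for instance, $n=1$ gives the Yang--Mills cubic vertex $G$, while $n=0$ gives the three-derivative $F^3$ coupling $y_1\,y_2\,y_3$.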
In generic spacetime dimension, these solutions span the entire space of possible cubic vertices. However, at fixed dimension $d\le 4$, most of these vertices vanish \cite{Conde:2016izb}, while there may be additional solutions due to Schouten identities, as demonstrated in \cite{Mkrtchyan:2017ixk}. We briefly review the main results of the latter work on parity-even vertices in $d=3$ here.
\subsection{Parity-even Vertices in $d=3$}
The derivation of Lorentz covariant cubic vertices described above has to be supplemented with Schouten identities that are relevant for $d\le 4$. The case of four dimensions can be found in \cite{Conde:2016izb} while in \cite{Mkrtchyan:2017ixk} the $d=3$ parity even vertices were classified. The Schouten identities can be systematically derived by ``over-antisymmetrisation'' of Lorentz indices and there are even Mathematica packages doing so \cite{Nutma:2013zea}.
The elementary three-dimensional Schouten identities for parity-even TT cubic vertices are given as follows (grouped into two-, three- and four-derivative identities; no summation over repeated indices is assumed):
\begin{subequations}\label{SI}
\begin{align}
(G - y_i z_i)^2 = 0 \,, && y_i z_i G - y_{i-1} z_{i-1} y_{i+1} z_{i+1} = 0 \,, \label{eq:twoDerivDDI}\\
y_i y_{i\pm 1}(G - y_i z_i) = 0 \,, \label{eq:threeDerivDDI}\\
y_i^2 y^2_{i+1} = 0 \,, && y_i^2 y_{i+1} y_{i-1}=0 \label{eq:fourDerivDDI}\,.
\end{align}
\end{subequations}
These identities will be supplemented with parity-odd ones in the next section and are needed for the derivation of parity-odd vertices.
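Their origin can be sketched as follows: in three dimensions any antisymmetrisation over four (or more) Lorentz indices vanishes identically,
\begin{align}
\delta^{[\mu_1}_{\ \nu_1}\,\delta^{\mu_2}_{\ \nu_2}\,\delta^{\mu_3}_{\ \nu_3}\,\delta^{\mu_4]}_{\ \nu_4}=0\qquad (d=3)\,,
\end{align}
and contracting such identities with the operators $\partial_{x_i}$ and $\partial_{a_i}$, and evaluating the result on traceless-transverse on-shell fields, produces relations of the type \eqref{SI}.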
Due to the identities \eqref{SI}, the classification of parity-even cubic vertices in 3d is different from that of $d\geq 4$. In particular, these identities allow for the existence of two-derivative and three-derivative TT vertices given by \cite{Mkrtchyan:2017ixk}:
\begin{subequations}\label{PE}
\begin{framed}
\begin{align}
\mathcal{V}_{s_1,s_2,s_3} &=[(s_1-1) y_1 z_1+(s_2-1) y_2 z_2+(s_3-1) y_3 z_3] G z_1^{n_1} z_2^{n_2} z_3^{n_3}\,,\label{2vertex}\\
n_i&=\tfrac12(s_{i-1}+s_{i+1}-s_i)-1\geq 0\,,\nonumber\\
\mathcal{V}_{s_1,s_2,s_3} &= y_1\, y_2\, y_3\, z_1^{n_1}\, z_2^{n_2}\, z_3^{n_3}\,,\qquad
n_i=\tfrac12(s_{i-1}+s_{i+1}-s_i-1)\geq 0\,.\label{3vertex}
\end{align}
\end{framed}
\end{subequations}
The expressions \eqref{2vertex} and \eqref{3vertex} describe the unique cubic vertices for even and odd sums of spins, respectively. Note that \eqref{2vertex} includes the minimal coupling to gravity discussed for particular cases earlier in \cite{Aragone:1983sz,Gwak:2015vfb,Campoleoni:2012hp}. These vertices exist only if the spin values satisfy the triangle inequalities $s_i< s_{i+1}+s_{i-1}$.
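For orientation, reading off \eqref{2vertex} and \eqref{3vertex} for a few low-spin triples gives
\begin{align}
\mathcal{V}_{2,2,2}&=(y_1z_1+y_2z_2+y_3z_3)\,G=G^2\,,\qquad
\mathcal{V}_{1,1,1}=y_1\,y_2\,y_3\,,\nonumber\\
\mathcal{V}_{s,s,2}&=\big[(s-1)(y_1z_1+y_2z_2)+y_3z_3\big]\,G\,z_3^{\,s-2}\,,\qquad s\geq 2\,,
\end{align}
i.e. the two-derivative graviton self-coupling, the three-derivative vertex of three vector fields, and the two-derivative coupling of a spin-$s$ field to gravity (with the graviton in the third slot), respectively.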
\section{Parity-odd Vertices for Massless Bosons}\label{PO}
In order to construct parity-odd vertices of massless fields in three dimensions, one needs to add to the building blocks of the parity-even vertices, i.e. $y_i$ and $z_i$, all elementary scalar contraction operators that involve the invariant tensor $\epsilon_{\mu\nu\lambda}$ of the Lorentz algebra. These are:
\begin{align}
U=\epsilon^{\mu\nu\lambda}\partial^{a_1}_{\mu}\partial^{a_2}_{\nu}\partial^{a_3}_{\lambda}\,,\quad V_{ij}=\epsilon^{\mu\nu\lambda}\partial^{a_{i+1}}_{\mu}\partial^{a_{i-1}}_{\nu}\partial^{x_j}_{\lambda}\,,\quad W_i=\epsilon^{\mu\nu\lambda}\partial^{a_i}_{\mu}\partial^{x_{i+1}}_{\nu}\partial^{x_{i-1}}_{\lambda}\,,
\end{align}
where the $V$'s satisfy (discarding total derivative terms)
\begin{align}
\sum_{j}V_{ij}=0\,,
\end{align}
while the $W$'s provide a choice of basis for the nine different two-derivative structures, which are related to each other up to total derivatives.
Therefore the independent set of parity-odd variables is spanned by the ten scalar operators $U$, $V_{ij}\ (i\neq j)$ and $W_i$.
It is straightforward to check that:
\begin{align}
[U, a_i\cdot\partial_j]=V_{ij}\,,\quad [V_{i i\pm 1}, a_i\cdot \partial_j]=0\,,\quad
[V_{i i\pm 1},a_{j}\cdot\partial_{i\pm 1}]=0\,,\\
[V_{i i\pm 1},a_{i\mp 1}\cdot\partial_{i\mp 1}]=-W_{i\pm 1}\,,\quad
[V_{i i\pm 1}, a_{i\pm 1}\cdot \partial_{i\mp 1}]=\pm W_{i\mp 1} \,,\\
[V_{i i\pm 1}, a_{i\pm 1}\cdot \partial_{i}]=-W_{i\mp 1} \,,\quad
[V_{i i\pm 1}, a_{i\mp 1}\cdot \partial_{i}]=W_{i\pm 1} \,,\\
[W_i, a_j\cdot\partial_k]=0\,,
\end{align}
up to total derivatives.
The operator $D_i$ \eqref{Di} takes the following form for parity-odd vertices:
\begin{align}
D_i = y_{i-1} \partial_{z_{i+1}} - y_{i+1} \partial_{z_{i-1}} - W_{i-1} \partial_{V_{i+1 i-1}} - W_{i+1} \partial_{V_{i-1 i+1}}\nonumber\\ - V_{i i-1} \partial_U - V_{i i +1} \partial_U \,.
\end{align}
The elementary parity-odd Schouten identities are given by (with arbitrary $i,j,k$ and no summation over repeating indices assumed):
\begin{subequations}\label{POSI}
\begin{align}
V_{i-1 i} z_{i+1} + V_{i+1 i} z_{i-1} =0\,,\quad
U y_i + V_{i+1 i-1} z_{i-1} - (V_{i-1 i} + V_{i-1 i+1})z_{i+1} =0\,,\label{1dSI}\\
W_{i+1} z_{i+1} - W_{i} z_{i} = V_{j i-1} y_{j}=V_{k i-1} y_{k}\,,\quad
W_{i} z_{i+1} = V_{i+1 i} y_{i}\,,\quad
W_{i} z_{i-1} = - V_{i-1 i} y_{i}\,,\label{2dSI} \\
W_{i} y_{i\pm1} = 0\,,\label{3dSI}
\end{align}
\end{subequations}
where identities are grouped into one, two and three derivative ones. From these we derive other useful identities:
\begin{align}
V_{i i\pm 1}(y_i z_i+y_{i\mp 1} z_{i\mp 1})= 0\,,\label{Vyz}\\
W_i\, y_i\, z_i=V_{i i+1}\,y_i^2=-V_{i i-1}\,y_i^2\,,\quad V_{ij}y_j y_{j\pm1}= 0\,,\quad W_i\,y_i\,z_i^2=- U\,y_1\,y_2\,y_3\,,\label{Wyzz}\\
V_{i i\pm1}y_i^2 y_{i\mp 1}= 0\,,\quad V_{ij}\,y_1\,y_2\,y_3=0\,,\quad W_i y_i^2 z_i^2= 0\,.\label{Vyyy}
\end{align}
A consequence of the Schouten identities is that all parity-odd terms with more than one derivative can be written in terms of $W_i,\,y_i,\,z_i$ operators only as long as the spins satisfy triangle inequalities $s_i< s_{i+1}+s_{i-1}$. This property will be useful in the following. The terms that cannot be written only in terms of the variables mentioned above are of the following form:
\begin{align}
V_{i i\pm 1}y_{i\mp 1}^n z_i^m z_{i\pm 1}^p\neq \sum_k \mathcal{O}(W_k)\,.\label{NW}
\end{align}
We assume without loss of generality $s_1\geq s_2\geq s_3$. Then, all the terms of the type \eqref{NW} are given as:
\begin{align}
V_{23}\,y_1^{s_1-s_2-s_3}z_2^{s_3-1}z_3^{s_2}\,,
\qquad
V_{32}\,y_1^{s_1-s_2-s_3}z_2^{s_3}z_3^{s_2-1}\,,\label{nW}
\end{align}
and exist only for $s_1 \geq s_2+s_3$.
We also note that any expression written in terms of $W_i$ and non-vanishing up to identities \eqref{3dSI} and \eqref{Vyyy} cannot be converted to $V_{ij}$ expressions that vanish. This is due to the fact that
terms involving $W_k$ with different $k$ give rise to $V_{ij}$ with different $j$ that cannot sum up to zero through the identities \eqref{1dSI} and \eqref{2dSI}. This simple technical observation suggests that working solely with the $W_k$'s wherever possible will not miss any information about terms that may conspire to sum up to zero. Since the $W_i$'s also commute with all gauge variations, they are the preferred choice of variables in the expressions with more than one derivative that we study in the following.
We now proceed with the derivation of parity-odd cubic vertices for massless bosons in three dimensions. We will need to discuss separately different cases and simplify each ansatz maximally, to save virtual trees that get cut in order to supply us with Mathematica notebooks.
\subsection{Vertices with Scalars}
The simplest example is a vertex with two scalar fields involved: $(s,0,0)$.
In this case the only candidate vertex operator is:
\begin{align}
\boxed{\mathcal{V}_{s,0,0}^{PO}=W_1\,y_1^{s-1}\,,}\label{POs00}
\end{align}
and defines a gauge invariant vertex of current interaction type:
\begin{align}
\mathcal{L}_{s,0,0}=h^{\mu_1\dots\mu_s}\tilde{J}_{\mu_1\dots\mu_s}\,,
\end{align}
where the current
\begin{align}
\tilde{J}_{\mu_1\dots\mu_s}=\epsilon_{\nu\rho(\mu_1}\partial^{\nu}J^{\rho}{}_{\mu_2\dots\mu_s)}\,,
\end{align}
is roughly the curl of the parity-even conserved current $J_{\mu_1\dots\mu_s}$ of spin $s$.
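For instance, for $s=1$ the vertex operator \eqref{POs00} reduces to $W_1$ and yields, schematically,
\begin{align}
\mathcal{L}_{1,0,0}=\epsilon^{\mu\nu\lambda}\,A_\mu\,\partial_\nu\phi_1\,\partial_\lambda\phi_2
=A^\mu\,\epsilon_{\mu\nu\lambda}\,\partial^\nu\!\big(\phi_1\,\partial^\lambda\phi_2\big)\,,
\end{align}
i.e. the coupling of the vector field to the curl of the familiar scalar current.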
Next we look at the possible vertices with $s_1\geq s_2\geq 1$ and $s_3=0$.
The general ansatz for $(s_1,s_2,0)$ vertex can be written as:
\begin{align}
\mathcal{V}_{s_1,s_2,0}=(\alpha V_{31} + \beta V_{32})y_1^{s_1-s_2}z_3^{s_2-1}\,.\label{POs1s20}
\end{align}
The variation of \eqref{POs1s20} with respect to the gauge symmetry of the spin $s_1$ field gives:
\begin{align}
D_1 \mathcal{V}_{s_1,s_2,0}=(s_2-1)(\alpha V_{31}+\beta V_{32})\,y_1^{s_1-s_2}\,y_2\,z_3^{s_2-2}-\beta W_2\,y_1^{s_1-s_2}\,z_3^{s_2-1}\,.
\end{align}
The variation of \eqref{POs1s20} with respect to the symmetry of the spin $s_2$ field gives:
\begin{align}
D_2 \mathcal{V}_{s_1,s_2,0}&=-\alpha W_1 y_1^{s_1-s_2}z_3^{s_2-1}+(s_2-1)(\alpha V_{31} + \beta V_{32})y_1^{s_1-s_2+1}z_3^{s_2-2}\nonumber\\
&=-s_2\,\alpha\, W_1 y_1^{s_1-s_2}z_3^{s_2-1}+(s_2-1)\, \beta\, V_{32}y_1^{s_1-s_2+1}z_3^{s_2-2}\,.\label{VarPOs1s20}
\end{align}
Vanishing of this variation is compatible with a non-zero vertex only for $s_2=1$, $\alpha=0$. Therefore, there is a unique vertex:
\begin{align}
\boxed{\mathcal{V}_{s,1,0}^{PO}=V_{32}\,y_1^{s-1}\,,}\label{POs10}
\end{align}
which is invariant with respect to the gauge transformation of the second (Maxwell) field ($D_2 \mathcal{V}_{s,1,0}^{PO}= 0$), while the gauge variation of the spin $s$ field,
\begin{align}
D_1 \mathcal{V}_{s,1,0}^{PO}= - W_2 y_1^{s-1}\,,
\end{align}
vanishes due to \eqref{3dSI} iff $s\geq 2$. Therefore, the vertex of type $(s,1,0)$ exists for any $s\geq 2$.
For $s_2\geq 2$, the variation \eqref{VarPOs1s20} can be rewritten as:
\begin{align}
D_2\mathcal{V}_{s_1,s_2,0}=(s_2\, \alpha\, V_{31} + (s_2-1)\, \beta\, V_{32})y_1^{s_1-s_2}z_3^{s_2-2}\,,
\end{align}
and allows for gauge invariance only for trivial solution $\alpha=\beta=0$.
Similarly to the parity-even case, there are no couplings of the type $(s_1,s_2,0)$ with $s_1\geq s_2\geq 2$.
Thus we have found all the vertices involving scalar fields.
\subsection{Vertices with Maxwell Fields}
From the Schouten identity \eqref{3dSI}, it follows immediately that there is a vertex of the type $(s,s,1)$ with two derivatives:
\begin{align}
\boxed{\mathcal{V}_{s,s,1}^{PO}=W_3\, z_3^s\,.}\label{PO1ss}
\end{align}
It is a parity-odd two-derivative coupling to spin one which requires charged spin-$s$ fields. For $s=1$, \eqref{PO1ss} reproduces the spin one vertex found by Anco in \cite{Anco:1995wt}.
There is another vertex of the type $(s,1,1)$ that may be guessed immediately:
\begin{align}
\boxed{\mathcal{V}_{s,1,1}^{PO}=W_1\,y_1^{s-1}\, z_1\,,}\label{POs11}
\end{align}
which involves $s+1$ derivatives. For $s=1$, \eqref{POs11} coincides with \eqref{PO1ss} up to relabelling of the fields. We will come back to this vertex shortly.
It remains to check other possibilities of interactions $s_1\geq s_2\geq s_3=1$ with Maxwell field.
It is straightforward to see that the number of derivatives cannot be less than $s_1-s_2$, simply because there are no candidate scalar expressions. The upper bound on derivatives is a bit more subtle to determine. An obvious upper bound is $s_1+s_2$, since all vertex monomials with $s_1+s_2+2$ derivatives vanish due to \eqref{3dSI} for any $s_1,\,s_2$, and there are no candidate expressions with more than $s_1+s_2+2$ derivatives.
Nevertheless, it can easily be shown that for $s_1\geq s_2\gg 1$ the upper bound is much lower than $s_1+s_2$ due to \eqref{eq:fourDerivDDI} and \eqref{3dSI}.
In fact, careful examination taking into account all Schouten identities shows that there are no non-trivial vertex candidates with more than $s_1-s_2+2$ derivatives.
Therefore we are left with two candidate values for the number of derivatives in the vertex: $s_1-s_2$ and $s_1-s_2+2\,.$
We will consider these cases separately.
\paragraph*{$(s_1-s_2+2)-$ derivative vertex.}
With the help of some elementary algebra and making use of Schouten identities, a general ansatz for an $s_1-s_2+2$ derivative vertex can be written in the form:
\begin{align}
\mathcal{V}_{s_1,s_2,1}=[\gamma_1 W_1 z_1
+\gamma_2 W_2 z_2+\gamma_3 W_3 z_3]y_1^{s_1-s_2}z_3^{s_2-1}
\,,\label{POs1s21}
\end{align}
where $\gamma_i$ are arbitrary constants.
For simplicity, we discuss separately the cases of $s_2=1$, $s_1=s_2$ and $s_1>s_2\geq 2$.
\begin{itemize}
\item
For $s_2=1$, we have a general ansatz
\begin{align}
\mathcal{V}_{s,1,1}=[\gamma_1 W_1 z_1
+\gamma_2 W_2 z_2+\gamma_3 W_3 z_3]y_1^{s-1}\,.
\end{align}
For $s=1$, we have:
\begin{align}
\boxed{\mathcal{V}_{1,1,1}^{PO}=\gamma_1 W_1z_1+\gamma_2 W_2z_2+\gamma_3 W_3z_3
\,,}\label{PO111}
\end{align}
with arbitrary $\gamma_i$. Each of the three terms in this expression is separately gauge invariant and defines a vertex of the type \eqref{PO1ss}. We have three inequivalent vertices, defined for any triple of Maxwell fields. As opposed to the Yang-Mills vertex, which is fully antisymmetric in all the three fields involved, the term $W_iz_i$ is antisymmetric only in two fields $A^{i\pm 1}_\mu$ and can even define a cubic vertex for only two distinct Maxwell fields (e.g. taking value in the two-generator Lie algebra of infinitesimal affine transformations of a real line).
This vertex has been studied in \cite{Anco:1995wt}. One can write it in explicit form:
\begin{align}
\mathcal{L}_{1,1,1}=f_{abc}\epsilon^{\mu\nu\lambda}A^a_\mu \tilde{F}^b_\nu\tilde{F}^c_\lambda\,,\quad \tilde{F}^a_\mu=\epsilon_{\mu\nu\rho}\partial^\nu A^{a \rho}\,,\quad f_{abc}=-f_{acb}\,.
\end{align}
For $s\geq 2$, we have:
\begin{align}
\mathcal{V}_{s,1,1}=\gamma_1 W_1 z_1
y_1^{s-1}\,.
\end{align}
This is the $(s,1,1)$ vertex \eqref{POs11} where, for odd $s$, non-trivial interaction requires charged Maxwell fields.
\item
For $s_1=s_2=s>1$, the general ansatz with $s_1-s_2+2=2$ derivatives is:
\begin{align}
\mathcal{V}_{s,s,1}=[\gamma_1 W_1 z_1
+\gamma_2 W_2 z_2+\gamma_3 W_3 z_3]z_3^{s-1}\,,
\end{align}
and is gauge invariant iff $\gamma_1=\gamma_2=0$. Therefore, we end up with the unique possibility of the vertex \eqref{PO1ss}.
\item
For $s_1>s_2\geq 2$, the general ansatz \eqref{POs1s21} reduces to:
\begin{align}
\mathcal{V}_{s_1,s_2,1}=\gamma_1 W_1 z_1
y_1^{s_1-s_2}z_3^{s_2-1}
\,,
\end{align}
which is not gauge invariant under the variation of the second field with spin $s_2$.
\end{itemize}
Therefore we find that all the vertices for $s_1\geq s_2\geq s_3=1$ with $s_1-s_2+2$ derivatives are covered by \eqref{PO1ss}, \eqref{POs11} and \eqref{PO111}.
\paragraph*{$(s_1-s_2)-$ derivative vertex.}
A general ansatz with $s_1-s_2$ derivatives can be written for $s_1=s_2=s$ in the form (without derivatives since $s_1-s_2=0$):
\begin{align}
\mathcal{V}_{s,s,1}=U z_3^{s-1}\,.
\end{align}
It is elementary to check that this expression is not gauge invariant. We will assume in the following that $s_1>s_2$, in which case the general ansatz takes the form:
\begin{align}
\mathcal{V}_{s_1,s_2,1}=(\alpha V_{23} z_3+\beta V_{21} z_3+\gamma V_{32}z_2) y_1^{s_1-s_2-1} z_3^{s_2-1}\,,
\end{align}
and it is easy to check that the equation $D_1 \mathcal{V}_{s_1,s_2,1}=0$ has only vanishing solutions for the coefficients $\alpha, \beta, \gamma$, unless $s_2=1$, in which case $\beta=0,\, \gamma=-\alpha$.
The only candidate expression
$\mathcal{V}_{s,1,1}=(V_{23} z_3-V_{32}z_2)y_1^{s-2}\,$
is however not invariant with respect to the gauge transformations of Maxwell fields.
We conclude that all the parity-odd vertices with Maxwell fields are given by \eqref{PO1ss}, \eqref{POs11} and \eqref{PO111}.
\subsection{Gravitational Interactions}
Making use of the Schouten identity \eqref{3dSI}, one can easily show that there is a three-derivative parity-odd $(s,s,2)$ coupling to massless spin two:
\begin{align}
\boxed{\mathcal{V}_{s,s,2}^{PO}=W_3\, y_3\, z_3^s\,.}\label{PO2ss}
\end{align}
This vertex is symmetric with respect to the exchange of spin $s$ fields and therefore does not require charged fields.
For $s=2$, the expression \eqref{PO2ss} reproduces the vertex found by Boulanger and Gualtieri in \cite{Boulanger:2000ni}.
Any $(s,s,2)$ type of parity-odd vertex requires an odd number of derivatives.
We will see in the following that there are no parity-odd vertices with one derivative or with more than four derivatives. Therefore, \eqref{PO2ss} is the unique parity-odd coupling to gravity for given spin $s$.
We conclude that the parity-odd minimal coupling to gravity is given by a three-derivative vertex. Let us recall that the parity-even gravitational coupling has two derivatives. This is in contrast to spin one (Maxwell) minimal couplings, where the parity-odd coupling \eqref{PO1ss} has two derivatives while the parity-even coupling has three derivatives \cite{Mkrtchyan:2017ixk}.
It remains to see what the other options of $s_1> s_2\geq s_3=2$ couplings are.
These vertices will be classified in the following where we will consider the more general case of couplings between fields with arbitrary spin values.
\subsection{General Case}
It is elementary to verify, by making use of the Schouten identity \eqref{3dSI}, that the expression $W_1 y_1^n z_1^m$ is gauge invariant for any $n,m$ and therefore forms a vertex. Vertices of this type are all exhausted by \eqref{POs00}, \eqref{PO1ss}, \eqref{POs11} and \eqref{PO2ss}: there are no vertices of this form with $n,m\geq 2$ due to \eqref{Vyyy}.
After the examples with low spins, we now start studying more general cases of cubic interactions.
\subsubsection{Couplings without Derivatives}
It is straightforward to show that there are no vertices without derivatives. In order to do so, one just needs to take the gauge variation of the most general ansatz,
\begin{align}
\mathcal{V}_{s_1, s_2, s_3}=U\,z_1^{n_1}\,z_2^{n_2}\,z_3^{n_3}\,,\quad s_i=n_{i-1}+n_{i+1}+1\,,
\end{align}
and compare to the linear combination of one-derivative Schouten identities with arbitrary coefficients. One will thus verify that there is no such linear combination and therefore no vertex without derivatives.
\subsubsection{One-derivative Vertices}
It is straightforward to show that any one-derivative parity-odd vertex with three massless fields of spins $s_1\geq s_2\geq s_3\geq 2$ could be written in the following form:
\begin{align}
\mathcal{V}_{s_1,s_2,s_3}=\sum_{i=1}^3 (\alpha_i V_{i i+1}+\beta_i V_{i i-1})z_{i+1} z_{i-1} z_1^{n_1} z_2^{n_2} z_3^{n_3}\,,
\end{align}
which can be further simplified, in case the spins satisfy the triangle inequalities $s_1<s_2+s_3$, to
\begin{align}
\mathcal{V}_{s_1,s_2,s_3}=(\alpha_1 V_{12} z_2 z_3+\alpha_2 V_{23} z_3 z_1+\alpha_3 V_{31} z_1 z_2) z_1^{n_1} z_2^{n_2} z_3^{n_3}\,,
\end{align}
and, if they saturate the triangle inequality $s_1=s_2+s_3$, to
\begin{align}
\mathcal{V}_{s_1,s_2,s_3}= (\alpha V_{23}z_3+\beta V_{32}z_2) z_2^{n_2} z_3^{n_3}\,.
\end{align}
In both cases, even though there are solutions for each of the equations $D_i \mathcal{V}_{s_1,s_2,s_3}=0$, there is no non-zero intersection between these solutions\footnote{This observation may be useful in classification of couplings with massless {\it and massive} fields (since massive fields are not constrained by gauge invariance), which is out of the scope of this work.}.
It is also easy to verify that there are no candidate expressions for one-derivative vertices if $s_1>s_2+s_3$.
We conclude that there are no parity-odd vertices with one derivative for any spins $s_1\geq s_2\geq s_3\geq 0$.
\subsubsection{Two-derivative Vertices}
Now we turn to studying two derivative parity-odd interactions. This corresponds to odd values of the sum $s_1+s_2+s_3$ and therefore the triangle inequality cannot be saturated: $s_1\neq s_2+s_3$.
We discuss separately the case when the spins satisfy triangle inequalities and when they do not.
\subparagraph{Triangle inequalities are satisfied.}
For $s_1\geq s_2\geq s_3\geq 1$ with $s_1<s_2+s_3$, any vertex monomial with two derivatives can be brought to a form where the only parity-odd operators are the $W_i$. We thus end up with a simple ansatz:
\begin{align}
\mathcal{V}_{s_1,s_2,s_3}=[\alpha W_1z_1+\beta W_2z_2+\gamma W_3z_3]\,z_1^{n_1}z_2^{n_2}z_3^{n_3}\,, \quad n_1\leq n_2\leq n_3\,.
\end{align}
Making use of \eqref{3dSI} and \eqref{Wyzz}, one can show that
the gauge invariance conditions,
\begin{subequations}
\begin{align}
D_1\mathcal{V}_{s_1,s_2,s_3}=[-\b\, n_3\,W_2\,y_2\,z_2^2+\g\, n_2\, W_3\,y_3\,z_3^2]\,z_1^{n_1}z_2^{n_2-1}z_3^{n_3-1}\nonumber\\
=(\b\,n_3-\g\,n_2)\,U\,y_1\,y_2\,y_3\,z_1^{n_1}z_2^{n_2-1}z_3^{n_3-1}=0\,,\\
D_2\mathcal{V}_{s_1,s_2,s_3}=[-\g\,n_1\,W_3\,y_3\,z_3^2+\a\, n_3\, W_1\,y_1\,z_1^2]\,z_1^{n_1-1}z_2^{n_2}z_3^{n_3-1}\nonumber\\
=(\g\,n_1-\a\,n_3)\,U\,y_1\,y_2\,y_3\,z_1^{n_1-1}z_2^{n_2}z_3^{n_3-1}=0\,,\\
D_3\mathcal{V}_{s_1,s_2,s_3}=[-\a\,n_2\,W_1\,y_1\,z_1^2+\b\, n_1\, W_2\,y_2\,z_2^2]\,z_1^{n_1-1}z_2^{n_2-1}z_3^{n_3}\nonumber\\
=(\a\,n_2-\b\,n_1)\,U\,y_1\,y_2\,y_3\,z_1^{n_1-1}z_2^{n_2-1}z_3^{n_3}=0\,,
\end{align}
\end{subequations}
imply:
\begin{align}
\b\,n_3-\g\,n_2=\g\,n_1-\a\,n_3=\a\,n_2-\b\,n_1=0\,.\label{n1n2n3abg}
\end{align}
The solution to these equations fixes the vertex uniquely up to an overall constant\footnote{The vertex is not unique only when $n_1=n_2=n_3=0$. In this case, the equations \eqref{n1n2n3abg} are trivialised and there are no restrictions on $\a,\b,\g$. This case corresponds to the vertex given by the equation \eqref{PO111}.}:
\begin{align}
\boxed{\mathcal{V}_{s_1,s_2,s_3}^{PO}=[n_1\, W_1z_1+n_2\, W_2z_2+n_3\, W_3z_3]\,z_1^{n_1}z_2^{n_2}z_3^{n_3}\,, \quad s_i=n_{i+1}+n_{i-1}+1\,.}\label{2dPO}
\end{align}
This vertex exists for any triple of spins with odd sum satisfying strict triangle inequalities. For $s_1=s_2=s_3=3$, the expression \eqref{2dPO} reproduces the vertex found by Boulanger, Leclercq and Cnockaert in \cite{Boulanger:2005br}. To the best of our knowledge, the latter is the only example of a parity-odd cubic vertex of HS fields in three dimensions known in the literature.
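Explicitly, for $s_1=s_2=s_3=3$ one has $n_1=n_2=n_3=1$, and \eqref{2dPO} reads
\begin{align}
\mathcal{V}_{3,3,3}^{PO}=\left(W_1\,z_1+W_2\,z_2+W_3\,z_3\right)z_1\,z_2\,z_3\,.
\end{align}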
This result is similar to the parity-even case, where the two-derivative vertex \eqref{2vertex} exists for every triple of spins with even sum satisfying strict triangle inequalities. One important difference is that if the vertex contains two fields with the same spin, the parity-even two-derivative vertex \eqref{2vertex} is symmetric with respect to the exchange of these fields while the parity-odd one, \eqref{2dPO}, is antisymmetric (we assume at least one of the spins is greater than one). We will come back to the relation between the parity-even and parity-odd vertices in the following.
\subparagraph*{Triangle inequalities are violated.}
It is elementary to show that for $s_1> s_2+s_3+1$ there are no vertex monomials with two derivatives.
The only allowed case is $s_1=s_2+s_3+1$, with an ansatz involving expressions of the type \eqref{nW}:
\be
\mathcal{V}_{s_1,s_2,s_3}^{PO}=(\a V_{23} z_3+\b V_{23} z_2)y_1\,z_2^{s_3-1}\,z_3^{s_2-1}+\g W_1\,z_2^{s_3}\,z_3^{s_2}\,.
\ee
This expression is invariant with respect to the spin $s_1$ field's gauge variation, $D_1\mathcal{V}=0$, if $\a\, s_2+\b\, s_3=0$, and with respect to the second field's gauge variation, $D_2\mathcal{V}=0$, if $\a\, s_2=0=\b\, (s_2-1)\,,\,\, \g s_2=0$, while for the invariance with respect to the third field we get: $\a\,(s_3-1)=0=\b\,s_3\,,\,\, \g\, s_3=0$.
The only non-trivial solutions for this class of vertices are given by \eqref{POs00} with $s=1$ and \eqref{POs10} with $s=2$.
We conclude that, similarly to parity-even vertices with two derivatives, there is only one parity-odd vertex \eqref{2dPO} with two derivatives for each triple of spins $s_1\geq s_2\geq s_3\geq 2$ satisfying triangle inequalities $s_1<s_2+s_3$ and with odd sum $s_1+s_2+s_3$.
\subsubsection{Three-derivative Vertices}
For vertices with three derivatives, we consider separately three cases depending on the values of spins. This corresponds to even sum of the spins.
\subparagraph*{Triangle inequalities are satisfied.}
In this case, the general ansatz for the vertex is given by (the overall arbitrary coefficient is dropped):
\ba
\boxed{\mathcal{V}_{s_1,s_2,s_3}^{PO}= -W_i\,y_i\,z_i^2\, z_1^{n_1}\,z_2^{n_2}\,z_3^{n_3}=U\,y_1\,y_2\,y_3\,z_1^{n_1}\,z_2^{n_2}\,z_3^{n_3}\,,\quad s_i=n_{i-1}+n_{i+1}+2\,.\label{PO3}}
\ea
Remarkably, this vertex is gauge invariant with respect to all three variations due to \eqref{eq:fourDerivDDI} and \eqref{Vyyy}.
This result is similar to the parity-even case \eqref{PE}, where every triple of spins with odd sum defined a unique vertex \eqref{3vertex} proportional to $y_1\,y_2\,y_3$\footnote{Except for $(1,1,1)$ Yang-Mills fields, for which there were two vertices --- with one derivative and three derivatives.}. One notable difference is that if there are two fields with identical spin, then, due to the factor $U$, the vertex is symmetric with respect to permutations of these fields, as opposed to the vertex \eqref{3vertex}, which would be antisymmetric. For $s_1=s_2=s_3=2$ the vertex \eqref{PO3} reproduces the symmetric $d=3$ vertex of \cite{Boulanger:2000ni}.
\subparagraph*{Triangle inequalities are saturated.}
In this case, there are no non-trivial vertex monomials with three derivatives.
\subparagraph*{Triangle inequalities are violated.}
This case allows for non-trivial vertex ansatz iff $s_1=s_2+s_3+2$. The most general ansatz is given by:
\ba
\mathcal{V}_{s_1,s_2,s_3}^{PO}=(\a\, V_{23} z_3+\b\, V_{32}\, z_2)y_1^2\,z_2^{s_3-1}\,z_3^{s_2-1}+\g W_1\, y_1\,z_2^{s_3}\,z_3^{s_2}\,.
\ea
The analysis of this case is similar to the two-derivative one performed above. The only non-trivial solutions have been covered by \eqref{POs00} with $s=2$ and \eqref{POs10} with $s=3$.
It is a straightforward algebraic exercise to show that there are no non-trivial vertices with more than three derivatives for $s_1\geq s_2\geq s_3\geq 2$. This completes the classification of parity-odd cubic vertices of massless bosonic fields in three space-time dimensions.
\subsection{Relations between Parity-Odd and Parity-Even Vertices}
There is a remarkable universality in the formulas of the vertices \eqref{2vertex}, \eqref{2dPO}, \eqref{3vertex} and \eqref{PO3}. In order to show it, we first notice the following relation (as always in this work, we neglect trace terms):
\be
U^2=-2\, z_1\,z_2\,z_3\,.
\ee
Now, we can formally define the following operator
\be
z_1^{1/2}\,z_2^{1/2}\,z_3^{1/2}=\frac{i}{\sqrt{2}}U\,.
\ee
Let us now shift the integers $n_i$ in \eqref{2dPO} by half: $n_i\to n_i+\frac12$.
Then the sum of the spins becomes even and the equation \eqref{2dPO} can be formally rewritten as:
\begin{align}
\mathcal{V}_{s_1,s_2,s_3}
=\frac{i}{\sqrt{2}}\Big[(n_1+\frac12)\, W_1z_1+(n_2+\frac12)\, W_2z_2+(n_3+\frac12)\, W_3z_3\Big]\,U\,z_1^{n_1}z_2^{n_2}z_3^{n_3}\nonumber\\
=\frac{i}{\sqrt{2}}[(s_1-1) y_1 z_1+(s_2-1) y_2 z_2+(s_3-1) y_3 z_3]\, G\, z_1^{n_1} z_2^{n_2} z_3^{n_3}\,,\label{2dPE}
\end{align}
where $s_i=n_{i+1}+n_{i-1}+2\,$. Here we used the identities:
\be
W_i\,U=y_i(G-y_i\,z_i)\,,
\ee
and \eqref{eq:twoDerivDDI}.
The equation \eqref{2dPE} exactly reproduces the parity-even vertex \eqref{2vertex} up to an overall constant.
It is elementary to show that the same relation holds between the three-derivative parity-odd \eqref{PO3} and parity-even \eqref{3vertex} vertices.
Another curiosity related to parity-even vertices is discussed in Appendix \ref{A}.
This universality in formulas may have a deeper meaning in terms of certain dualities between fields that are yet to be uncovered. For example, it may be related to the Chern-Simons formulation where each HS field has two connections analogous to the dreibein and spin connection of gravity. When switching to the Fronsdal formulation, the ``spin connection'' is solved in terms of the ``frame field'' and the solution contains one derivative and a Levi-Civita tensor. Replacing one ``frame field'' with a ``spin connection'' partner may result in switching the interactions between parity-odd and parity-even ones (it changes the parity and the number of derivatives by one). This is a speculation but can be checked by explicit computations. It is also tempting to speculate about the existence of a more fundamental formulation of any HS theory in terms of spinors, analogous to \cite{Prokushkin:1998bq,Vasiliev:1990en}, that treats the parity-even and parity-odd vertices on the same footing.
We leave more thorough investigations of this aspect to a future work.
\section{Vertices with Chern-Simons Vector Fields}\label{Chern-Simons}
So far we have been studying TT vertices of Fierz-type fields, including the Maxwell field for $s=1$, which is given by the free Lagrangian:
\be
\mathcal{L}^0_{s=1}=-\frac14 F_{\m\n}F^{\m\n}\,,\quad F_{\m\n}=\partial_{\m}A_{\n}-\partial_{\n}A_\m\,,
\ee
with field equation $\partial^\m F_{\m\n}=0$, i.e. both the Fronsdal equation \cite{Fronsdal:1978rb} and the Maxwell-like HS equation \cite{Campoleoni:2012th} for $s=1$.
In three dimensions one can also consider Chern-Simons (CS) vector fields with free Lagrangian:
\be
\mathcal{L}^0_{s=1}=\frac12\epsilon^{\m\n\l}A_{\m}\partial_{\n}A_\l\,,
\ee
and free field equation $F_{\m\n}=0$.
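Indeed, a one-line check shows that, up to a total derivative,
\begin{align}
\delta S=\int \text{d}^3x\;\tfrac12\,\epsilon^{\m\n\l}F_{\n\l}\,\delta A_{\m}\,,
\end{align}
so that the equation of motion sets the full field strength to zero, in contrast to the Maxwell case where only $\partial^\m F_{\m\n}=0$ is imposed.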
It is common to call the CS field a ``spin-one'' or ``vector'' field, but one has to be careful not to confuse it with the Maxwell field. We will mostly use the terms CS or Maxwell for the corresponding fields in the following. Since this field appears naturally in the context of HS gravity theories \cite{Prokushkin:1998bq,Gwak:2015vfb}, we study its interactions with other massless fields for completeness of our analysis.
Note that due to the difference in the free-field equations, the equivalence class that is defined for field redefinitions and gauge variations of vertices is different for CS fields. Any term that is proportional to free Maxwell field equations is obviously also on-shell trivial for CS fields. The opposite, however, is not true. The on-shell trivial cubic terms for a CS field ($i-$th field in the vertex) that are not trivial for Fierz (i.e. Fronsdal and Maxwell-like) fields are given by the expressions:
\begin{align}
G-y_i\,z_i=0\,,\quad y_i\,y_{i\pm 1}=0\,,\quad W_i=0\,,\quad V_{i\pm 1\,i}=0\,,
\quad W_{i+1}z_{i+1}=W_{i-1}z_{i-1}\,.\label{CSid}
\end{align}
Together with Fierz equations and Schouten identities, these terms define an equivalence class for cubic vertex monomials. Cubic vertices with CS fields should have trivial gauge variations in this class while not being trivial themselves.
There are three possible cases depending on the number of CS fields in the cubic vertex.
\paragraph*{Vertices with one CS field.}
A general ansatz for parity-even vertices with one CS field and two Fierz fields with spins $s_1,\,s_2$ can be written in the form:
\be
\mathcal{V}=y_1^{s_1-s_2}z_3^{s_2-1}(\a\, y_1\,z_1+\g\, y_3\,z_3)\,.
\ee
Note that the $y_2\,z_2$ term is absent, as it can be replaced by $y_1\,z_1$ due to the identity \eqref{CSid}: $y_1\,z_1+y_2\,z_2=0$.
This vertex is gauge invariant with respect to all three gauge transformations for $\a=0$ and is not trivial only for $s_1=s_2$. It therefore defines a unique cubic vertex of two massless fields of spin $s$ and a CS field:
\be
\boxed{\mathcal{V_{CS}}=y_3\,z_3^s\,.\label{CS}}
\ee
It is straightforward to see that this interaction corresponds to the minimal coupling to the vector gauge field, obtained by replacing $\partial \to \nabla=\partial+A$ in the free action of a spin-$s$ field, i.e. making use of covariant derivatives. This coupling has appeared for example in \cite{Gwak:2015vfb} and is obviously not applicable to Maxwell fields. The free action of a spin $s$ field, supplemented with covariant derivatives, is gauge invariant up to commutators of covariant derivatives, which are proportional to the curvature of the vector field. These curvature terms are the equations of motion for CS fields and can therefore be compensated by deformations of the gauge transformations of the CS field. This is not true for Maxwell fields though. The absence of minimal coupling to the electromagnetic field is known as the Velo-Zwanziger problem \cite{Velo:1970ur} and is analogous to the Aragone-Deser problem for minimal coupling to gravity. In three dimensions, the minimal coupling to gravity exists, which is related to the fact that the Riemann curvature is proportional to the Einstein equations in three dimensions, and therefore the problematic terms can be compensated by deformations of the gauge transformations of the metric (analogously to the Rarita-Schwinger coupling that leads to Supergravity). Even though the mechanisms are slightly different for spin one and spin two, in both cases the fact that the curvature tensor is on-shell trivial allows for minimal coupling. The on-shell triviality of the curvature tensor is, on the other hand, related to the absence of dynamical degrees of freedom in the bulk and opens the possibility for a CS formulation.
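Schematically, taking the third field in \eqref{CS} to be the CS field and suppressing overall normalisations, the dictionary of Section \ref{Cubic Review} translates $y_3\,z_3^s$ into
\begin{align}
\mathcal{L}_{CS}\sim A^{\mu}\,\partial_{\mu}\phi_{\nu_1\dots\nu_s}\,\phi'^{\,\nu_1\dots\nu_s}\,,
\end{align}
where $\phi$ and $\phi'$ denote the two spin-$s$ fields; for a charged (complex) spin-$s$ field, the antisymmetric combination of the two orderings is the $\mathcal{O}(A)$ term generated by the replacement $\partial\to\nabla=\partial+A$ in the free kinetic term.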
Parity-odd CS vertices are also severely restricted. Given that the third field in the vertex is a CS field, it can for example be shown that $W_1\,y_1\,z_1=0=W_2\,y_2\,z_2$. After some algebra, it is straightforward to show that, for two Fierz fields with spins $s_1=s_2=s$, there is a two-derivative coupling to CS field with the vertex operator given by:
\be
\boxed{\mathcal{V_{CS}}^{PO}=W_1\,z_1\,z_3^{s-1}=W_2\,z_2\,z_3^{s-1}=\frac12\, U\,y_1\,y_2\,z_3^{s-2}\,.}
\ee
We skip the details of the computations here since they are elementary algebraic manipulations by straightforward application of the Schouten identities, Fierz equations and \eqref{CSid}.
\paragraph*{Vertices with two CS fields.}
In this case, the extra identities include (we assume the second and third fields are CS):
\be
y_i\,y_j=0\, (i\neq j)\,,\quad G-y_2\,z_2=0=G-y_3\,z_3\,,\quad W_2=W_3=0\,,\quad W_1\,z_1=0\,.
\ee
Using these equalities, one can easily show that there are no vertices of interactions between a massless field with spin $s$ and two CS fields if $s\geq 2$. Instead, there is a vertex of interactions of a Maxwell field and two CS fields:
\be
\boxed{\mathcal{V}_{MCS}=y_1\,z_1=-y_2\,z_2=-y_3\,z_3\,.}
\ee
For this vertex to be non-zero, the two CS fields should be charged.
There are no parity-odd vertices with two CS fields and a massless field with spin $s$.
\paragraph*{Vertices with three CS fields.}
In this case we have:
\be
W_i=0\,,\quad V_{ij}=0\,,\quad y_i\,y_j=0\,,\quad y_i\,z_i=0\,.
\ee
The only non-trivial contraction between the three fields is given by a parity-odd expression,
\be
\boxed{\mathcal{V}_{CS}^{PO}=U\,,}
\ee
which is gauge invariant and is the well-known interaction term of CS fields. For this interaction to be non-trivial, CS fields should carry non-abelian charges.
These results fit into the pattern of our findings for HS fields: there are no cubic interactions involving CS fields in which the spins do not satisfy the triangle inequalities.
\section{Discussion}\label{Discussion}
In this work, we completed the program initiated in \cite{Mkrtchyan:2017ixk} providing an exhaustive classification of covariant cubic interactions for massless bosonic fields in three dimensions.
We found that the parity-odd cubic vertices for interactions of massless fields in three dimensions are in one-to-one correspondence with parity-even vertices. For each collection of massless fields satisfying strict triangle inequalities, there is a unique parity-odd vertex on top of the unique parity-even one \footnote{The only exception is the cubic interaction between three Maxwell fields in both parity-even and parity-odd cases. In parity-even case there are two vertices for collection of Maxwell fields --- Yang-Mills vertex and $F^3$ vertex, both requiring fully antisymmetric Chan-Paton factors. For the parity-odd case, there are three free parameters in the two-derivative vertex \eqref{PO111}, which has no definite symmetry.}. For triplet of spins not satisfying triangle inequalities, the only cubic vertices are of ``current-interaction'' type, involving two matter fields of spin $s=0$ or $1$.
For the triplets that contain at least two spins greater than one, all the vertices have either two or three derivatives.
Our results should match the CFT three-point functions, as argued, e.g., in \cite{Costa:2011mg}. The uniqueness of the vertex for a given triplet is in agreement with the two-dimensional CFT. The three-point functions of quasi-primaries in 2d CFT have two free parameters for every triple of spins. In our classification, for each triple we get one parity-even and one parity-odd vertex, and therefore match the number of independent structures. The only intriguing aspect is the missing vertices, which translate into selection rules in the 2d CFT.
Similarly to the parity-even case \cite{Mkrtchyan:2017ixk}, in the parity-odd case, missing vertices are all those containing at least two fields with spin greater than one and violating strict triangle inequalities.
Therefore, for quasi-primaries of spin values $s_1\geq s_2\geq s_3\geq 2$, all the three point functions for values $s_1\geq s_2+s_3$ are expected to be zero. This property is observed in known examples of 2d CFT's (see, e.g. \cite{Campoleoni:2011hg}), but we are not aware of a general proof.
The only massless fields that carry propagating degrees of freedom in three dimensions are scalar and Maxwell fields, which are related by duality. Nevertheless, slight differences between the vertices containing scalars and those containing Maxwell fields are observed in our classification. When comparing these vertices, one can take into account the exact duality relation between a Maxwell field $A_{\mu}$ and a scalar $\phi$:
\be
F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}=\epsilon_{\mu\nu\l}\partial^{\l}\phi\,.\label{DR}
\ee
If a vertex involves the curvature of the Maxwell field, one can simply replace it with the right hand side of the equation \eqref{DR} and get a vertex for a scalar, which has opposite parity. Similarly, if the vertex contains derivative of the scalar, one can replace it with the dual of the curvature of the Maxwell field. Instead, for the vertices where one has a naked vector potential $A_\m$ this dualization is not applicable since the curl operation is not invertible and the field $A_\m$ cannot be expressed through $\phi$ locally. Let us start with parity-even vertices with Maxwell fields. There is an $(s,1,0)$ vertex, which contains $s+1$ derivatives, and in which one can dualize the Maxwell field to get parity-odd $(s,0,0)$ vertex \eqref{POs00} with $s+1$ derivatives. Alternatively, one can dualize the scalar to get the parity-odd $(s,1,1)$ vertex \eqref{POs11} with $s+1$ derivatives. Therefore, we established duality relations:
\be
\mathcal{V}_{(s,1,1)}^{PO} \leftrightarrow \mathcal{V}_{(s,1,0)}\leftrightarrow \mathcal{V}_{(s,0,0)}^{PO}\,.
\ee
This duality works for any $s\geq 1$.
Next, there is a parity-even vertex $\mathcal{V}_{(s,1,1)}=y_1^{s-1}G$ with $s$ derivatives. Dualization of one Maxwell field leads to a vertex $\mathcal{V}_{(s,1,0)}^{PO}$ with $s$ derivatives given by \eqref{POs10}. We can further dualize the second Maxwell field and get a parity-even vertex $\mathcal{V}_{(s,0,0)}=y_1^s$ with $s$ derivatives:
\be
\mathcal{V}_{(s,1,1)}\leftrightarrow\mathcal{V}_{(s,1,0)}^{PO}\leftrightarrow \mathcal{V}_{(s,0,0)}\,.
\ee
These dualities work for $s\geq 2$, since there is no parity-odd vertex for spin configuration $(1,1,0)$, which leaves out the Yang-Mills vertex from dualization procedure. The dualization of the other cubic vertex of three Maxwell fields, $F^3$ vertex, leads to a trivial TT expression and therefore does not have a parity-odd $(1,1,0)$ dual either.
The (A)dS cubic vertices in any dimensions can be understood as deformations of flat space cubic vertices and therefore the first step towards (A)dS vertices lies in the classification of their flat counterparts.
In fact, all known Lagrangian theories with HS spectrum in three dimensions allow for flat space limit. Therefore, one may expect that the Lagrangian formulation for Prokushkin-Vasiliev theory, if existing, may also allow for a flat limit.
Even more, three-dimensional Minkowski vertices can be extended to arbitrary Einstein backgrounds for the same reason as the absence of the Aragone-Deser problem in 3d --- the obstructing terms are given by Weyl tensors and therefore vanish in three dimensions. One can even work with full non-linear gravity while constructing the action perturbatively in powers of HS fields (see \cite{Gwak:2015vfb} for such expansions of full non-linear theories). In that case, one needs to take care of the backreaction to the Einstein equations involving HS fields, which contributes to the construction of quartic and higher-order vertices.
The full classification of cubic vertices is the first step towards the construction of a Lagrangian for the HS theories accommodating propagating degrees of freedom that are not covered by Chern-Simons actions. Our classification is performed for the three-dimensional Minkowski background, while the (A)dS extension can be considered straightforwardly. Since the main technical difficulty of (A)dS extensions is related to the non-trivial commutators of covariant derivatives, it is natural to expect that those vertices that contain many derivatives will be the most challenging. As we have seen, in three dimensions the only vertices that contain more than three derivatives are current interactions containing scalar and Maxwell fields. The AdS extensions of these vertices have already been studied in higher dimensions in \cite{Manvelyan:2009tf,Bekaert:2009ud,Mkrtchyan:2010pp}. The scalar coupling in three dimensions was studied in \cite{Prokushkin:1999xq,Ammon:2011ua,Kessel:2015kna,Lovrekovic:2018hgu}.
The main technical novelty of the three-dimensional classification provided in this work and in \cite{Mkrtchyan:2017ixk} compared to earlier work on cubic vertices in arbitrary dimensions is related to the systematic implementation of Schouten identities in three dimensions. When considering quartic interactions of massless symmetric fields, there are relevant Schouten identities in dimensions $d\leq 7$. Therefore, the analysis of quartic order of interactions becomes more involved. We plan to address that problem both in three-dimensional and higher-dimensional contexts in the future.
We delegate some more technical discussion to appendices.
In Appendix \ref{A}, we elaborate on the possibility of writing parity-even vertices as ratios which are by themselves meaningless expressions, but can be defined and motivated {\it only in three dimensions} due to their equivalence to vertices of \cite{Mkrtchyan:2017ixk} via Schouten identities.
We study two-dimensional vertices in Appendix \ref{B}. There, the restrictions imposed by Schouten identities are much more severe and eventually allow for only vertices of the type $(s,s,1)$ and $(s,s,0)$.
\section*{Acknowledgements}
We would like to thank Andrea Campoleoni, Eduardo Conde, Dario Francia, Stefan Fredenhagen, Euihun Joung, Gabriele Lo Monaco, Stefan Theisen and Arkady Tseytlin for discussions related to the subject of this work. We are also indebted to Nicolas Boulanger for comments on the draft. The work of K.M. is supported by Alexander von Humboldt foundation. K.M. would like to thank Scuola Normale Superiore and INFN Sezione di Pisa for the hospitality extended to him during the final stage of this work. P.K. would like to thank the Albert Einstein Institute for generous support by which his modest contribution to this work became possible.
\section{Introduction}
Nontrivial topological gluon fields can form in quantum chromodynamics (QCD) from vacuum fluctuations~\cite{Lee:1974ma}.
Interactions with those gluon fields can change the chirality of quarks in local domains where the approximate chiral symmetry is restored~\cite{Lee:1974ma,Morley:1983wr,Kharzeev:1998kz,Kharzeev:2007jp}.
Quarks of the same chirality in a local domain immersed in a strong magnetic field will move in opposite directions along the magnetic field if they bear opposite charges. This charge separation phenomenon is called the chiral magnetic effect (CME)~\cite{Kharzeev:2007jp,Fukushima:2008xe}.
Heavy-ion collisions provide a suitable environment for the CME to occur: the relativistic spectator protons can create an intense, transient magnetic field~\cite{Kharzeev:2004ey,Bzdak:2011yy,Deng:2012pc,Bloczynski:2012en} roughly perpendicular to the reaction plane (spanned by the impact parameter and beam directions); high energy density can be created in the collision zone and the approximate chiral symmetry may be restored~\cite{Arsene:2004fa,Back:2004je,Adams:2005dq,Adcox:2004mh,Muller:2012zq}; and topological gluon fields can emerge from the QCD vacuum~\cite{Lee:1974ma}.
Because the observation of the CME will simultaneously support the above pictures,
the detection of such charge separations in heavy-ion collisions is of critical importance.
The common variable that has been used to search for the CME-induced charge separation is the so-called $\Delta\gamma$ variable~\cite{Voloshin:2004vk}. Positive charge-dependent signals have been observed in heavy-ion collisions, qualitatively consistent with the CME~\cite{Abelev:2009ac,Abelev:2009ad,Adamczyk:2014mzf,Adamczyk:2013hsi,Abelev:2012pa}.
However, the $\Delta\gamma$ variable is strongly contaminated by elliptic flow induced correlation backgrounds~\cite{Wang:2009kd,Bzdak:2009fc,Schlichting:2010qia,Zhao:2018ixy, Zhao:2018skm}. In fact, $\Delta\gamma$ measurements in small systems of p+Pb collisions at the CERN Large Hadron Collider (LHC)~\cite{Khachatryan:2016got} and d+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC)~\cite{Zhao:2017wck,Zhao:2017ckp}, where only backgrounds are expected, reveal large signals comparable to those measured in heavy-ion collisions. With suppression of backgrounds by event-by-event and event-shape-engineering techniques, experimental data~\cite{Adamczyk:2013kcb,Sirunyan:2017quh,Acharya:2017fau} show significantly reduced, consistent-with-zero signals for the CME.
Another variable that has been proposed to detect charge separation is
the $R_{\Psi_2}(\Delta S)$ variable~\cite{Ajitanand:2010rc, Magdy:2017yjev2}. We call it the \textit{sine} observable. It is defined as follows.
In each event, let
\begin{equation}\label{Psi2}
\varphi = \phi - \Psi_2
,
\end{equation}
\begin{equation}
\begin{split}
\left\langle S_p \right\rangle = \frac{1}{N_p}\sum_1^{N_p}\sin(\varphi_+),\quad
\left\langle S_n \right\rangle = \frac{1}{N_n}\sum_1^{N_n}\sin(\varphi_-),
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\Delta S_{sep} = \left\langle S_p \right\rangle - \left\langle S_n \right\rangle
,
\end{split}
\end{equation}
where $\phi$ is the particle azimuthal angle in the laboratory frame and $\varphi$ is therefore the azimuthal angle relative to the second-order harmonic plane $\Psi_2$ (as a proxy for the unmeasured reaction plane). Subscripts ($+,-$) indicate the charge sign, and $N_p,N_n$ are the number of particles with positive and negative charge, respectively.
A parallel set of variables is constructed by randomizing the charges of all particles in the event,
respecting the relative multiplicities of positive and negative particles. Then, according to the randomized charges,
\begin{equation}
\begin{split}
\left\langle S_{p}' \right\rangle = \frac{1}{N_p'}\sum_1^{N_p'}\sin(\varphi'_+),\quad
\left\langle S_{n}' \right\rangle = \frac{1}{N_n'}\sum_1^{N_n'}\sin(\varphi'_-),
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\Delta S_{mix} = \left\langle S_{p}' \right\rangle - \left\langle S_{n}' \right\rangle
,
\end{split}
\end{equation}
where the primes denote quantities for this so-called shuffled event.
The ratio is formed from the event probability distributions of real events in $\Delta S_{sep}$ and shuffled events in $\Delta S_{mix}$,
\begin{equation}
C_{\Psi_2}(\Delta S) = \frac{N(\Delta S_{sep})}{N(\Delta S_{mix})}
.
\end{equation}
For events with CME signals, charge separation along the magnetic field gives $|\sin\varphi_\pm| \approx 1$ and a maximal difference $\sin\varphi_+-\sin\varphi_- \approx \pm 2$.
The distribution of $N(\Delta S_{sep})$ would therefore become wider than its reference distribution.
Here, the shuffled event $N(\Delta S_{mix})$ serves as the reference distribution.
The ratio $C_{\Psi_2}$ is therefore concave for CME~\cite{Ajitanand:2010rc, Magdy:2017yjev2}.
There can be background sources that change the shape of $C_{\Psi_2}(\Delta S)$. In order to eliminate reaction-plane (RP) independent backgrounds, an analogous variable $C_{\Psi_2}^{\perp}$ is constructed in a way identical to $C_{\Psi_2}$ except changing each $\varphi$ into $\varphi-\pi/2$.
The $R_{\Psi_2}$ variable is defined to be the ratio of $C_{\Psi_2}$ to $C_{\Psi_2}^{\perp}$,
\begin{equation}
R_{\Psi_2}(\Delta S) = \frac{C_{\Psi_2}(\Delta S)}{C_{\Psi_2}^{\perp}(\Delta S)}
.
\end{equation}
The RP-independent backgrounds would cancel in $R_{\Psi_2}(\Delta S)$. Since the CME signal does not affect $C_{\Psi_2}^{\perp}$ significantly because $\sin(\varphi_\pm-\pi/2) \approx 0$, the CME in $C_{\Psi_2}$ would survive in $R_{\Psi_2}(\Delta S)$, making it concave. The RP-dependent backgrounds, such as resonance decays with finite $v_2$, can still affect $R_{\Psi_2}(\Delta S)$. However, they were shown to make $R_{\Psi_2}(\Delta S)$ convex~\cite{Ajitanand:2010rc,Magdy:2017yjev2}.
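For concreteness, a minimal Python sketch of this construction is given below. It is not the analysis code used here; the per-event inputs (azimuths measured relative to $\Psi_2$ and charge signs) are assumed to come from some event generator, and all function names are illustrative.
\begin{verbatim}
import numpy as np

def delta_s(phi, charge, shift=0.0):
    # <sin> over positive charges minus <sin> over negative charges,
    # with azimuths shifted by `shift` (0 for C_Psi2, pi/2 for C_Psi2^perp)
    pos, neg = phi[charge > 0], phi[charge < 0]
    return np.mean(np.sin(pos - shift)) - np.mean(np.sin(neg - shift))

def r_psi2(events, bins=np.linspace(-2, 2, 81), seed=0):
    # events: iterable of (phi, charge) arrays, phi relative to Psi_2
    rng = np.random.default_rng(seed)
    sep, mix, sep_p, mix_p = [], [], [], []
    for phi, charge in events:
        shuffled = rng.permutation(charge)        # randomized charge labels
        sep.append(delta_s(phi, charge))
        mix.append(delta_s(phi, shuffled))
        sep_p.append(delta_s(phi, charge, np.pi / 2))
        mix_p.append(delta_s(phi, shuffled, np.pi / 2))
    # small offset avoids division by zero in empty bins
    h = lambda x: np.histogram(x, bins=bins)[0].astype(float) + 1e-12
    c_par = h(sep) / h(mix)                       # C_Psi2(Delta S)
    c_perp = h(sep_p) / h(mix_p)                  # C_Psi2^perp(Delta S)
    return c_par / c_perp                         # R_Psi2(Delta S)
\end{verbatim}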
Preliminary STAR data reveal concave $R_{\Psi_2}(\Delta S)$ distributions in 200 GeV Au+Au collisions~\cite{Roy:2017rs}.
Previous studies using a multiphase transport (AMPT) model, where the resonance decay background is present but no CME, suggest that $R_{\Psi_2}(\Delta S)$ is convex~\cite{Magdy:2017yjev2}.
The anomalous-viscous Fluid Dynamics (AVFD) model shows concave $R_{\Psi_2}(\Delta S)$ distributions for CME signals and convex ones for typical resonance backgrounds~\cite{Magdy:2017yjev2}.
A recent hydrodynamic study, however, indicates concave shapes for backgrounds as well~\cite{Bozek:2017hi}.
To better understand these results, we present a systematic study of resonance backgrounds as functions of the resonance elliptic flow ($v_{2}$) and transverse momentum ($p_{T}$) with toy-model simulations and central limit theorem (CLT) calculations.
It is found that the concavity or convexity of $R_{\Psi_2}(\Delta S)$ depends sensitively on the resonance $v_2$ (which yields different numbers of decay $\pi^+\pi^-$ pairs in the in-plane and out-of-plane directions) and $p_T$ (which affects the opening angle of the decay $\pi^+\pi^-$ pair).
Supplemental studies in terms of the triangular flow ($v_3$), where only backgrounds exist but any CME would average to zero, are also presented.
\section{Toy-model simulation of resonance backgrounds}
We use a toy model of $\rho$ meson decays to study the behavior of $R_{\Psi_2}(\Delta S)$ as a function of the $\rho$ kinematic variables. The toy model has been used for CME background studies in Ref.~\cite{Wang:2016iov}. It generates events composed of primordial pions and $\rho$-decay pions. Their input $p_T$ distributions and $v_2(p_T)$ are obtained from data measurements \cite{Adams:2003cc, Adler:2003qi, Adams:2003xp, Abelev:2008ab, Adams:2004bi, Adare:2010sp, Dong:2004ve, Adamczyk:2015lme, Agashe:2014kda, Abelev:2009gu, Wang:2016iov}. For simplicity, we use the input harmonic plane $\Psi_2$ (as well as $\Psi_3$ discussed in Sec.~\ref{v3simulation}) in our analysis.
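A minimal sketch of such a toy event generator is shown below. The multiplicities, $p_T$ spectrum, and flow values are placeholders rather than the data-tuned inputs used in our simulation, and the decay is treated in a simplified two-dimensional (transverse-plane) approximation.
\begin{verbatim}
import numpy as np

M_RHO, M_PI = 0.775, 0.1396                 # GeV; illustrative masses
rng = np.random.default_rng(1)

def sample_phi(n, v2):
    # accept-reject sampling from f(phi) = (1 + 2*v2*cos(2*phi)) / (2*pi)
    out = np.empty(0)
    while out.size < n:
        cand = rng.uniform(0.0, 2 * np.pi, 4 * n)
        keep = rng.uniform(0.0, 1 + 2 * v2, cand.size) < 1 + 2 * v2 * np.cos(2 * cand)
        out = np.concatenate([out, cand[keep]])
    return out[:n]

def rho_decay(pt_rho, phi_rho):
    # two-body rho -> pi+ pi- decay, simplified to the transverse plane
    p_star = np.sqrt(M_RHO**2 / 4 - M_PI**2)  # daughter momentum in the rho rest frame
    e_star = M_RHO / 2
    theta = rng.uniform(0.0, 2 * np.pi, pt_rho.size)
    beta = pt_rho / np.hypot(pt_rho, M_RHO)   # transverse boost of the rho
    gamma = 1.0 / np.sqrt(1 - beta**2)
    # daughter momentum components along / perpendicular to the rho pT direction
    phi_p = phi_rho + np.arctan2(p_star * np.sin(theta),
                                 gamma * (p_star * np.cos(theta) + beta * e_star))
    phi_m = phi_rho + np.arctan2(-p_star * np.sin(theta),
                                 gamma * (-p_star * np.cos(theta) + beta * e_star))
    return phi_p, phi_m

def make_event(n_pi=80, n_rho=8, v2_pi=0.05, v2_rho=0.06, mean_pt_rho=0.8):
    # one toy event: primordial pions plus rho-decay pions, azimuths relative to Psi_2
    phi = [sample_phi(n_pi // 2, v2_pi), sample_phi(n_pi // 2, v2_pi)]
    charge = [np.ones(n_pi // 2), -np.ones(n_pi // 2)]
    pt_rho = rng.exponential(mean_pt_rho, n_rho)   # crude placeholder pT spectrum
    phi_p, phi_m = rho_decay(pt_rho, sample_phi(n_rho, v2_rho))
    return (np.concatenate(phi + [phi_p, phi_m]),
            np.concatenate(charge + [np.ones(n_rho), -np.ones(n_rho)]))
\end{verbatim}
Feeding a list of \texttt{make\_event()} outputs to the \texttt{r\_psi2} sketch above then yields an $R_{\Psi_2}(\Delta S)$ histogram whose shape can be scanned against the placeholder $v_{2,\rho}$ and $p_{T,\rho}$ values.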
In order to study the $v_2$ dependence, we scale $v_{2,\rho}$ ($v_2$ of $\rho$) up or down by a $p_T$-independent factor to investigate how $R_{\Psi_2}(\Delta S)$ responds.
Figure~\ref{v2RhoScan} shows the results; the curve of $C_{\Psi_2}$ becomes more concave when $v_{2,\rho}$ is increased, and $C_{\Psi_2}^{\perp}$ behaves in the opposite way. Subsequently, $R_{\Psi_2}(\Delta S)$ becomes more concave. This behavior can be qualitatively understood as follows.
At the typical resonance $p_T$ in the simulation, the decay daughters are close to each other in azimuthal angle. The numerator of $C_{\Psi_2}$ has the term:
$\sin\varphi_+-\sin\varphi_- \approx \cos\bar{\varphi}\delta\varphi$, where
$\bar{\varphi}=(\varphi_++\varphi_-)/2$, $\delta\varphi =\varphi_+-\varphi_-$ are the average and difference of the $\pi^\pm$ azimuths, respectively.
When $v_{2,\rho}$ is large,
$\bar{\varphi}$ will be relatively close to $0$ or $\pi$, and $|\cos\bar{\varphi}|$ will be relatively big. Hence, the $\Delta S$ in the numerator of $C_{\Psi_2}$ has a wider distribution, and accordingly $C_{\Psi_2}$ becomes more concave (see Fig.~\ref{Cp_Pi10}).
Similarly, the numerator of $C_{\Psi_2}^{\perp}$ has the term:
$\sin(\varphi_+-\pi/2)-\sin(\varphi_--\pi/2) \approx \sin\bar{\varphi}\delta\varphi$. When $v_{2,\rho}$ is large, $|\sin\bar{\varphi}|$ will be relatively small and close to $0$, so the $\Delta S$ in the numerator of $C_{\Psi_2}^{\perp}$ has a narrower distribution, and accordingly $C_{\Psi_2}^{\perp}$ becomes more convex (Fig.~\ref{Cp_perp_Pi10}).
Because of the opposite behaviors of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$, we can easily get the dependence of their ratio $R_{\Psi_2}$ on $v_{2,\rho}$: its concavity increases with increasing $v_{2,\rho}$ (Fig.~\ref{R_Pi10}).
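The small-opening-angle approximation used in this argument can be checked directly (the sample values below are arbitrary):
\begin{verbatim}
import numpy as np

phi_bar, dphi = 0.3, 0.2     # arbitrary average azimuth and opening angle
lhs = np.sin(phi_bar + dphi / 2) - np.sin(phi_bar - dphi / 2)
print(lhs,                                     # exact difference
      2 * np.cos(phi_bar) * np.sin(dphi / 2),  # exact identity
      np.cos(phi_bar) * dphi)                  # small-angle approximation
# the first two agree exactly; the third agrees to ~0.2% for dphi = 0.2
\end{verbatim}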
\begin{figure*}[th]
\subfloat{
\centering
\includegraphics[width=0.33\linewidth]{Cp_Pi10}
\label{Cp_Pi10}
}
\subfloat{
\centering
\includegraphics[width=0.33\linewidth]{Cp_perp_Pi10}
\label{Cp_perp_Pi10}
}
\subfloat{
\centering
\includegraphics[width=0.33\linewidth]{R_Pi10}
\label{R_Pi10}
}
\caption{(color online) Observable distributions for various values of $v_{2,\rho}$ (with $v_{2,\pi}$ fixed to its default distribution). Here, $v_{2,\rho}^{def}$ is the default distribution of $v_{2,\rho}$ obtained from data \cite{Adams:2003cc, Adler:2003qi, Adams:2003xp, Abelev:2008ab, Adams:2004bi, Adare:2010sp, Dong:2004ve, Adamczyk:2015lme, Agashe:2014kda, Abelev:2009gu, Wang:2016iov}. }
\label{v2RhoScan}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.33\textwidth]{R_Rho00}
\caption{(color online) $R_{\Psi_2}(\Delta S)$ for various values of $v_{2,\pi}$ (with $v_{2,\rho}$ fixed to 0). Here, $v_{2,\pi}^{def}$ is the default distribution of $v_{2,\pi}$ obtained from data \cite{Adams:2003cc, Adler:2003qi, Adams:2003xp, Abelev:2008ab, Adams:2004bi, Adare:2010sp, Dong:2004ve, Adamczyk:2015lme, Agashe:2014kda, Abelev:2009gu, Wang:2016iov}.}
\label{R_Rho00}
\end{figure}
Note that the curve in Fig.~\ref{R_Pi10} with zero $v_{2,\rho}$ is counterintuitively nonflat. This is due to the finite $v_{2,\pi}$ (primordial pion $v_2$). The $\rho$ decays alter the pion multiplicities, which affect $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$. The finite $v_{2,\pi}$ breaks the symmetry between $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$, resulting in the slightly nonflat $R_{\Psi_2}(\Delta S)$. Figure~\ref{R_Rho00} shows $R_{\Psi_2}$ curves with zero $v_{2,\rho}$ for various values of $v_{2,\pi}$. Only weak dependences on $v_{2,\pi}$ are observed for $R_{\Psi_2}(\Delta S)$ (and also $C_{\Psi_2}$, $C_{\Psi_2}^\perp$). When both $v_{2,\rho}$ and $v_{2,\pi}$ are set to zero, $R_{\Psi_2}$ is indeed flat.
To scan $p_{T,\rho}$ (the $p_T$ of the $\rho$), we fix $v_{2,\rho}$ to a specific value of 0.06, because otherwise the value of $v_{2,\rho}$ would be affected by the changing $p_{T,\rho}$.
The $v_{2,\pi}$ and $p_T$ of the primordial pions are given by default.
We find the curves of $C_{\Psi_2}$, $C_{\Psi_2}^{\perp}$, and $R_{\Psi_2}$ to become more convex when $p_{T,\rho}$ increases (Fig.~\ref{v2PtScan}).
This is because of the following. When $p_{T,\rho}$ is large,
the decay opening angle $\delta\varphi$ is small.
The $\cos\bar{\varphi}\delta\varphi$ contribution to $\Delta S$ in $C_{\Psi_2}$ and the $\sin\bar{\varphi}\delta\varphi$ contribution to $\Delta S$ in $C_{\Psi_2}^{\perp}$ both become small in magnitude, so the distributions of $\Delta S$ in both $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ become narrower. The reshuffled $\Delta S$ in the denominators of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ are not as sensitive to the $\delta\varphi$ change as the numerators. Thus, the shapes of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ both become more convex. Since the change in $\cos\bar{\varphi}\delta\varphi$ is larger than in $\sin\bar{\varphi}\delta\varphi$ with increasing $p_T$ for $\bar{\varphi}$ close to the reaction plane, the narrowing in $C_{\Psi_2}$ is more significant, so $R_{\Psi_2}$ becomes more convex.
Another way to explain the $R_{\Psi_2}$ change is as follows. When $p_{T,\rho}$ is high, the two decay daughters are close to each other and preferentially close to the reaction plane because of the finite $v_{2,\rho}$. This is characteristic of the CME background. At low $p_{T,\rho}$, the two daughters are preferentially more perpendicular to the RP because of the large decay opening angle. This case resembles the CME signal, so the $R_{\Psi_2}$ curves with lower $p_{T,\rho}$ become more concave, just as a CME signal would behave.
For our typical $p_T$ distribution from data, the high $p_{T,\rho}$ case wins over the case with low $p_{T,\rho}$.
\begin{figure*}[th]
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{Cp_RhoPt}
\label{Cp_RhoPt}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{Cp_perp_RhoPt}
\label{Cp_perp_RhoPt}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{R_RhoPt}
\label{R_RhoPt}
}
\caption{(color online) Observable distributions for various values of $p_{T,\rho}$ (with $v_{2,\rho}$ fixed to 0.06).}
\label{v2PtScan}
\end{figure*}
The behaviors of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ are recapitulated in Fig.~\ref{RMS2} by the RMS (root mean square) of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$.
\begin{figure*}[h]
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{RMS_Pi10}
\label{RMS_Pi10}
}
\quad\quad
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{v2pTRMS}
\label{RMS_Rho10}
}
\iffalse
\subfloat[RMS of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ in different final pion $p_{T}$ bins of the default final pion $p_{T}$ distribution which determines the distributions of $v_{2,\rho}$ and $v_{2,\pi}$.][RMS of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ in different final pion $p_{T}$ bins of the default $p_{T}$ distribution which determines the distributions of $v_{2,\rho}$ and $v_{2,\pi}$ (same amount of final pions in each bin).]{
\centering
\includegraphics[width=0.45\textwidth]{PtBinCpCpperpRMS}
\label{PtBinCpCpperpRMS}
}
\fi
\caption{(color online) RMS of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ depending on $v_{2,\rho}$ and $p_{T,\rho}$. (a) RMS of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ (shown in Fig.~\ref{v2RhoScan}) depending on $v_{2,\rho}$ (with $v_{2,\pi}$ fixed to its default distribution). (b) RMS of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ (shown in Fig.~\ref{v2PtScan}) depending on $p_{T,\rho}$ (with $v_{2,\rho}$ fixed to 0.06 and $v_{2,\pi}$ fixed to its default distribution).}
\label{RMS2}
\end{figure*}
We summarize our main findings as follows:
\begin{itemize}
\item The curve of $C_{\Psi_2}$ becomes more concave when $v_{2,\rho}$ increases, and $C_{\Psi_2}^{\perp}$ more convex, rendering a more concave $R_{\Psi_2}$.
\item The shapes of the observables ($C_{\Psi_2}$, $C_{\Psi_2}^{\perp}$, and $R_{\Psi_2}$) are only weakly dependent on $v_{2,\pi}$.
\item The curves of $C_{\Psi_2}$ and $C_{\Psi_2}^{\perp}$ become more convex when $p_{T,\rho}$ increases. The effect is more significant in $C_{\Psi_2}$, rendering a more convex $R_{\Psi_2}$.
\iffalse
\item In Fig.~\ref{PtBinCpCpperpRMS}, we see that the RMS of $C_{\Psi_2}$ is firstly larger than and then smaller than that of $C_{\Psi_2}^\perp$ with increasing final pion $p_T$, which is consistent with the dependence on resonance $p_{T,\rho}$.
\fi
\end{itemize}
\section{Supplemental studies using $v_3$} \label{v3simulation}
The CME is a charge separation with respect to the RP (or the $v_2$ harmonic plane $\Psi_2$). The CME-induced charge separation must be zero with respect to the third order harmonic plane because of its random orientation relative to $\Psi_2$. Resonance backgrounds, on the other hand, should be still finite with respect to $\Psi_3$. In this section, we verify this with our toy model simulation.
In terms of $v_3$, the reference azimuthal angle is the third harmonic plane:
\begin{equation} \label{Psi3}
\varphi=\phi-\Psi_3
.
\end{equation}
There have been two different ways to define the sine observables for $v_3$, and both are similar to the definition of the observables for $v_2$.
\textbf{A. }
For the first definition~\cite{Magdy:2017yjev2},
one changes $\Psi_2$ into $\Psi_3$ for $\varphi$ (see Eqs.~\ref{Psi2},~\ref{Psi3}) and replaces $-\pi/2$ by $-\pi/3$ for $\Delta S$ (both $\Delta S_{sep}$ and $\Delta S_{mix}$) in $C_{\Psi_3}^{\perp}$,
\begin{equation}
C_{\Psi_3}^{\perp}: \quad \Delta S = \frac{1}{N_p}\sum_1^{N_p} \sin\left(\varphi_+-\frac{\pi}{3}\right)-\frac{1}{N_n}\sum_1^{N_n} \sin\left(\varphi_--\frac{\pi}{3}\right)
.
\end{equation}
\textbf{B. }
For the second definition~\cite{Bozek:2017hi}, one still changes $\Psi_2$ into $\Psi_3$ for $\varphi$. In addition, one adds a factor $3/2$ in front of the azimuths,
\begin{equation}
\begin{split}
C_{\Psi_3}:& \quad \Delta S = \frac{1}{N_p} \sum_1^{N_p} \sin\left(\frac{3}{2}\varphi_+\right)-\frac{1}{N_n} \sum_1^{N_n} \sin\left(\frac{3}{2}\varphi_-\right)
,
\\
C_{\Psi_3}^{\perp}:& \quad \Delta S = \frac{1}{N_p} \sum_1^{N_p} \sin\left(\frac{3}{2}\varphi_+-\frac{\pi}{2}\right)-\frac{1}{N_n} \sum_1^{N_n} \sin\left(\frac{3}{2}\varphi_--\frac{\pi}{2}\right)
.
\end{split}
\end{equation}
We use the toy Monte Carlo simulation to investigate $R_{\Psi_3}(\Delta S)$ of those two definitions. The toy simulation generates primordial $\pi^+, \pi^-$ and $\rho$ with the experimental $p_T$ spectra but with only $v_3$ of the $\rho$. Since $\Psi_2$ and $\Psi_3$ are uncorrelated, including non-zero $v_2$ does not change the results.
Including a finite $v_3$ for the primordial pions does not have significant effect.
The default function of $v_{3,\rho}(p_T)$ is approximated by that of $v_{2,\rho}(p_T)$ but with half magnitude, i.e.
\begin{equation} \label{default v3Rho definition}
v_{3,\rho}(p_T) = \frac{1}{2}v_{2,\rho}(p_T)
.
\end{equation}
We constrain the azimuthal range to be $[0,2\pi)$ in our simulation.
As will be discussed later in Sec.~\ref{CLTv3}, the sine observables of Definition~\textbf{B} unfortunately depend on which periodic range is used, suggesting Definition~\textbf{B} is not a physically correct definition.
The simulation results are shown in Figs.~\ref{v3definition1} and \ref{v3 definition 2}.
\begin{figure*}[th]
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{N3Cp}
\label{N3Cp}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{N3Cp_perp}
\label{N3Cp_perp}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{N3R}
\label{N3R}
}
\caption{(color online) Definition \textbf{A}: observable distributions for various values of $v_{3,\rho}$ and $p_{T,\rho}$ (with $v_{3,\pi}$ fixed to $0$). The $C_{\Psi_3}$ and $C_{\Psi_3}^{\perp}$ curves, with the same $p_{T,\rho}$ but various $v_{3,\rho}$, are very close to each other in panels (a) and (b) (concave dashed lines for low $p_{T,\rho}$, and convex solid lines for high $p_{T,\rho}$). }
\label{v3definition1}
\end{figure*}
\begin{figure*}[th]
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3Cp}
\label{B3Cp}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3Cp_perp}
\label{B3Cp_perp}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3R}
\label{B3R}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3Cp_pT}
\label{B3Cp_pT}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3Cp_perp_pT}
\label{B3Cp_perp_pT}
}
\subfloat{
\centering
\includegraphics[width=0.33\textwidth]{B3R_pT}
\label{B3R_pT}
}
\caption{(color online) Definition \textbf{B}: The upper plots (a)--(c) show observable distributions for various values of $v_{3,\rho}$ (with $v_{3,\pi}$ fixed to $0$). Here, $v_{3,\rho}^{def}$ is the default distribution of $v_{3,\rho}$ approximated by $0.5v_{2,\rho}^{def}$ (Eq.~\ref{default v3Rho definition}).
The lower plots (d)--(f) show observable distributions for various values of $p_{T,\rho}$ (with $v_{3,\rho}$ fixed to 0.03 and $v_{3,\pi}$ fixed to $0$).}
\label{v3 definition 2}
\end{figure*}
We make the following observations:
\begin{itemize}
\item In Definition \textbf{A}, $R_{\Psi_3}$ is always flat.
By Definition \textbf{A} itself, $R_{\Psi_3}$ should always be flat, as follows.
The Probability Density Function (PDF) is $f(\varphi)=(1+2v_3\cos(3\varphi))/(2\pi)$, whose period is $2\pi/3$. In the definition of $C_{\Psi_3}^{\perp}$, $\varphi$ is shifted by $\pi/3$ clockwise, $\Delta S(\varphi) \rightarrow \Delta S(\varphi -\pi/3)$. If we keep shifting $\Delta S$ by another period in the same direction, we would not change the distribution of $\Delta S$ in $C_{\Psi_3}^{\perp}$, which means $\Delta S(\varphi - \pi/3)$ and $\Delta S(\varphi-\pi)$ have the same distribution. From the Definition \textbf{A}, we also know that $\Delta S(\varphi-\pi) = -\Delta S(\varphi)$. Because the distribution of $\Delta S(\varphi)$ is symmetric about $\Delta S = 0$, $\Delta S(\varphi)$ and $-\Delta S(\varphi)$ have the same distribution as well. Thus, $\Delta S(\varphi)$ and $\Delta S(\varphi - \pi/3)$ have the same distribution, which means that $C_{\Psi_3}$ and $C_{\Psi_3}^{\perp}$ have the same shape and $R_{\Psi_3}$ must be flat and have the value $1$.
This flat $R_{\Psi_3}$ can also be explained by the CLT-based analysis in Sec.~\ref{analyze}; a quick numerical check of the relevant moments is sketched after this list.
\item The $C_{\Psi_3}$ and $C_{\Psi_3}^\perp$ curves from Definition \textbf{A} show a similar dependence on resonance $p_{T,\rho}$ as $C_{\Psi_2}$ and $C_{\Psi_2}^\perp$ curves in the $v_2$ case.
\item The $C_{\Psi_3}$, $C_{\Psi_3}^{\perp}$, and $R_{\Psi_3}$ curves from Definition \textbf{B} are obviously dependent on $p_{T,\rho}$ and $v_{3,\rho}$.
Increasing $p_{T,\rho}$ makes the curves more convex. Increasing $v_{3,\rho}$ makes the $C_{\Psi_3}$, $R_{\Psi_3}$ curves more concave, and $C_{\Psi_3}^\perp$ more convex. Those tendencies are consistent with the scans with respect to $v_2$.
\item In Definition \textbf{B}, the $C_{\Psi_3}^\perp$ and $R_{\Psi_3}$ curves are counterintuitively not flat, even if we set $v_3$ to zero.
\end{itemize}
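Consistent with the CLT picture invoked above, the relevant single-particle moments can be checked numerically. The sketch below assumes an ideal $v_3$-modulated azimuthal distribution with an illustrative $v_3$ value; it is a minimal check, not the simulation code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
v3 = 0.1                                    # illustrative value
# accept-reject sample from f(phi) = (1 + 2*v3*cos(3*phi)) / (2*pi)
cand = rng.uniform(0.0, 2 * np.pi, 2_000_000)
phi = cand[rng.uniform(0.0, 1 + 2 * v3, cand.size) < 1 + 2 * v3 * np.cos(3 * cand)]

for shift, label in [(0.0, "C_Psi3"), (np.pi / 3, "C_Psi3_perp")]:
    s = np.sin(phi - shift)
    print(label, s.mean(), (s**2).mean())
# both rows give E[sin] ~ 0 and E[sin^2] ~ 0.5, so in the CLT picture the
# Delta S distributions entering C_Psi3 and C_Psi3^perp coincide and R_Psi3 is flat
\end{verbatim}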
We note that Definition \textbf{A} was used only in the early version (version~2) of Ref.~\cite{Magdy:2017yjev2} where the $R_{\Psi_3}(\Delta S)$ variable was studied with respect to $v_3$. In the later version~3 of Ref.~\cite{Magdy:2017yjev2}, Definition \textbf{B} was used.
\section{Analytical results based on the central limit theorem} \label{analyze}
In this section, we use the central limit theorem (CLT) to analyze the sine observable. This analysis can be applied to all observables discussed in this paper. With a few reasonable approximations, the behavior of the sine observable can be readily understood.
There are many versions of the CLT, and here we use the \emph{Lindeberg--L\'evy} form.
Let $X_1, X_2, \ldots, X_n$ be a sequence of independent and identically distributed (i.i.d.) random variables with expectation value $\text{E}[X_i] = \mu$ and variance $\text{Var}[X_i] = \sigma^2 < \infty$, and
\begin{equation}
S_n = \frac{X_1 + X_2 + \cdots + X_n}{n}
= \frac{1}{n}\sum_{i=1}^{n} X_i
\end{equation}
denotes their mean. As $n$ approaches infinity, the random variable $\sqrt{n}(S_n - \mu)$ converges in distribution to a normal $\mathcal{N}(0, \sigma^2)$.
Generally, if $X_1, X_2, \ldots, X_n$ are independent and normally distributed,
\begin{equation}
X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)
,
\end{equation}
then the weighted sum of them is a normal distribution,
\begin{equation}
\sum_{i=1}^{n} a_iX_i \sim \mathcal{N}\left(\sum_{i=1}^{n} a_i\mu_i, \sum_{i=1}^{n} a_i^2\sigma_i^2\right)
.
\end{equation}
\subsection{Analysis of $C_{\Psi_2}$, $C_{\Psi_2}^{\perp}$, and $R_{\Psi_2}$}
First, we write the
PDF of $\phi$,
\begin{equation} \label{PDF}
f(\phi)=\frac{1}{2\pi}\left(1+2\sum_{m=2}^{\infty} v_{m}\cos\left(m\left(\phi-\Psi_m\right)\right)\right)
,
\end{equation}
where the $\Psi_m$ are in general different and uncorrelated for different $m$.
When we focus on one specific $m$, for example $m=2$ or $3$ as in the preceding discussion, we can simply use $\varphi=\phi-\Psi_m$ as the relative azimuth of particles.
\subsubsection{Numerator of $C_{\Psi_2}$} \label{CpNumerator}
The PDF of $\Delta S_{sep}$ can describe $N(\Delta S_{sep})$, the numerator of $C_{\Psi_2}$.
For simplicity, we assume that the number of positive charges is the same as the number of negative charges in the final state. In each event, before any decay, $n_\rho$ denotes the number of $\rho$ mesons, and $n_\pi$ denotes the number of primordial pions. Thus,
\begin{equation}
N_n = N_p = n_\rho + 0.5n_\pi
.
\end{equation}
We rewrite
\begin{equation}\label{Ssep}
\begin{split}
\Delta S_{sep} =& \frac{1}{n_\rho + 0.5n_\pi}\left(\sum_1^{n_\rho} (\sin\varphi_+ - \sin\varphi_-)\right)\\
&+ \frac{1}{n_\rho + 0.5n_\pi}\left(\sum_1^{0.5n_\pi} (\sin\varphi_+ - \sin\varphi_-) \right)
.
\end{split}
\end{equation}
The first sum is over $\rho$ decay pions, and the second is over primordial pions.
For convenience, we will use the following shorthand notations:
\begin{equation} \label{short}
\begin{split}
c:=\cos\varphi, \quad \bar{c}:=\cos\bar{\varphi}, \quad s:=\sin\varphi, \quad \bar{s}:=\sin\bar{\varphi},\\
\delta:=2\sin(\delta\varphi/2)
\end{split}
\end{equation}
where $\bar{\varphi} = (\varphi_+ + \varphi_-)/2$ is related to the $\rho$ angular position and $\delta\varphi = \varphi_+ - \varphi_-$ represents the decay opening angle.
We use the indices $\rho$ or $\pi$ to indicate whether the variables are for $\rho$ or primordial $\pi^\pm$.
We express the first sum of Eq.~\ref{Ssep} as
\begin{equation}
\sum_1^{n_\rho} (\sin\varphi_+ - \sin\varphi_-) = \sum_1^{n_\rho}2\cos(\bar{\varphi})\sin(\delta\varphi/2)
=\sum_{1}^{n_\rho}\bar{c}_\rho\delta
.
\end{equation}
Because the primordial pions all independently obey the same distribution related to the global harmonic plane, we rewrite the second sum of Eq.~\ref{Ssep} as
\begin{equation} \label{uuupi}
\sum_1^{0.5n_\pi} (\sin\varphi_+ - \sin\varphi_-) =
\sum_1^{0.5n_\pi} \sin\varphi_+ - \sum_1^{0.5n_\pi} \sin\varphi_-
.
\end{equation}
We make two assumptions:
(1) In a resonance decay, $\bar{\varphi}$ could be regarded as an approximation for $\varphi_\rho$, so the PDF of $\bar{\varphi}$ is the same as the PDF of $\varphi_\rho$.
(2) For two tracks from one resonance decay, $\cos\bar{\varphi}$ and $2\sin(\delta\varphi/2)$ are independent.
From symmetry, $\text{E}[\delta] = \text{E} [ 2\sin(\delta\varphi/2) ] = 0$ at any given $\bar{c}_\rho$, so
\begin{equation}
\text{E}[\bar{c}_\rho\delta]=\text{E}[\bar{c}_\rho]\text{E}[\delta]=\text{E}[\bar{c}_\rho]\times0=0
.
\end{equation}
We therefore get
\begin{equation}
\begin{split}
\text{Var}[\bar{c}_\rho\delta]
&=\text{Var}[\bar{c}_\rho]\text{Var}[\delta]+\text{E}[\bar{c}_\rho]^2 \text{Var}[\delta]+\text{Var}[\bar{c}_\rho]\text{E}[\delta]^2\\
&=\text{E}\left[\bar{c}_\rho^2\right] \text{Var}[\delta]
.
\end{split}
\end{equation}
In our simulations, $n_\rho$ follows a Poisson distribution, so obtaining the variance of $\sum_{1}^{n_\rho}\bar{c}_\rho\delta$ is a compound Poisson problem. Thus, we have
\begin{equation}\label{CompoundPoisson}
\begin{split}
\text{Var}\left[\sum_{i}^{n_\rho}\bar{c}_\rho\delta\right] &= \text{E}\left[n_\rho\right]\text{Var}[\bar{c}_\rho\delta]+\text{E}[\bar{c}_\rho\delta]^2\text{Var}[n_\rho]\\
&= \text{E}\left[n_\rho\right]\text{Var}[\bar{c}_\rho\delta]
.
\end{split}
\end{equation}
Equation~\ref{CompoundPoisson} indicates that it makes no difference whether $n_\rho$ is a single value or a Poisson distribution. For simplicity, we can just use $n_\rho$ as if it were fixed to a specific value.
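The compound-Poisson variance relation used here can be verified numerically with a short sketch; the mean multiplicity and the zero-mean single-term distribution below are illustrative stand-ins for $n_\rho$ and $\bar{c}_\rho\delta$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
lam, x_var = 8.0, 0.7        # placeholder E[n_rho] and Var of one zero-mean term
sums = [rng.normal(0.0, np.sqrt(x_var), rng.poisson(lam)).sum()
        for _ in range(200_000)]
print(np.var(sums), lam * x_var)   # the two numbers agree within statistics
\end{verbatim}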
According to CLT,
\begin{equation}
\sum_{1}^{n_\rho}\bar{c}_\rho\delta \sim \mathcal{N} \left( 0, n_\rho\text{Var}[\delta]\text{E}\left[\bar{c}_\rho^2\right] \right)
.
\end{equation}
The PDF of $\varphi$ of the primordial pions has the same form as Eq.~\ref{PDF}, so we can readily obtain the variances ($\text{Var}[c_\pi]$ and $\text{Var}[s_\pi]$).
As for the primordial-pion contribution, the two sums on the right-hand side of Eq.~\ref{uuupi} have the same distribution.
\iffalse
As for the expectation and variance of $s_\pi u$, because $P(u^2=1)=1$, we have the following results:
\begin{equation}
\begin{split}
\text{E}[s_\pi u]=\text{E}[s_\pi]\text{E}[u]=\text{E}[s_\pi]\times0=0,\\
\text{Var}[s_\pi u]=\text{E}[(s_\pi u)^2]-\text{E}[s_\pi u]^2=\text{E}[s_\pi^2]
.
\end{split}
\end{equation}
\fi
The treatment of $n_\pi$ is the same as that of $n_\rho$. According to the CLT, we have
\begin{equation}
\sum_1^{0.5n_\pi} \sin\varphi_+, \sum_1^{0.5n_\pi} \sin\varphi_- \sim \mathcal{N}(0.5n_\pi\text{E}[s_\pi], 0.5n_\pi\text{Var}[s_\pi])
,
\end{equation}
so the difference is
\begin{equation} \label{uuudif}
\sum_1^{0.5n_\pi} \sin\varphi_+ - \sum_1^{0.5n_\pi} \sin\varphi_- \sim \mathcal{N}(0,n_\pi \text{Var}[s_\pi])
.
\end{equation}
\iffalse
\begin{equation}
\sum_{1}^{n_\pi} s_\pi u \sim \mathcal{N} \left( 0, n_\pi\text{E}[s_\pi^2] \right)
.
\end{equation}
\fi
Finally, we write $\Delta S$ in our new notation,
\begin{equation}
\Delta S_{sep} =
\frac{\sum_{1}^{n_\rho}\bar{c}_\rho\delta + \sum_{1}^{n_\pi/2} s_{\pi^+} - \sum_{1}^{n_\pi/2} s_{\pi^-}}{n_\rho + 0.5n_\pi}
,
\end{equation}
where $s_{\pi^+}$ and $s_{\pi^-}$ are the sine values of the primordial $\pi^+$ and $\pi^-$; they independently obey the same distribution, so we denote both by $s_\pi$. According to the CLT,
\begin{equation}
\begin{split}
\Delta S_{sep}
&\sim \mathcal{N} \left(0,\frac{n_\rho\text{Var}[\delta]\text{E}\left[\bar{c}_\rho^2\right]+n_\pi\text{Var}[s_\pi]}{(n_\rho + 0.5n_\pi)^2} \right)\\
&:= \mathcal{N} \left(0,\sigma_\uparrow^2 \right)
.
\end{split}
\end{equation}
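The resulting $\sigma_\uparrow^2$ can be compared against a direct simulation. The sketch below uses illustrative multiplicities and flow values, and a placeholder Gaussian opening-angle spread rather than the actual $\rho$-decay kinematics.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n_rho, n_pi, v2_rho, v2_pi = 8, 80, 0.06, 0.05   # placeholder values

def sample_phi(n, v2):
    # accept-reject sample from (1 + 2*v2*cos(2*phi)) / (2*pi)
    out = np.empty(0)
    while out.size < n:
        cand = rng.uniform(0.0, 2 * np.pi, 4 * n)
        keep = rng.uniform(0.0, 1 + 2 * v2, cand.size) < 1 + 2 * v2 * np.cos(2 * cand)
        out = np.concatenate([out, cand[keep]])
    return out[:n]

ds = []
for _ in range(20_000):
    cbar = np.cos(sample_phi(n_rho, v2_rho))             # c_bar_rho of each resonance
    delta = 2 * np.sin(rng.normal(0.0, 0.7, n_rho) / 2)  # placeholder opening-angle spread
    s_plus = np.sin(sample_phi(n_pi // 2, v2_pi))
    s_minus = np.sin(sample_phi(n_pi // 2, v2_pi))
    ds.append((np.sum(cbar * delta) + s_plus.sum() - s_minus.sum())
              / (n_rho + 0.5 * n_pi))

var_delta = np.var(2 * np.sin(rng.normal(0.0, 0.7, 1_000_000) / 2))
e_cbar2 = (1 + v2_rho) / 2                      # E[c_bar_rho^2] for the v2-modulated PDF
var_s_pi = (1 - v2_pi) / 2                      # Var[s_pi] for the v2-modulated PDF
pred = (n_rho * var_delta * e_cbar2 + n_pi * var_s_pi) / (n_rho + 0.5 * n_pi) ** 2
print(np.var(ds), pred)                         # should agree within statistics
\end{verbatim}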
\subsubsection{Denominator of $C_{\Psi_2}$}
The PDF of $\Delta S_{mix}$ can describe $N(\Delta S_{mix})$, the denominator of $C_{\Psi_2}$. The analysis here is very similar to the analysis of $\Delta S_{sep}$. In shuffling, we keep the number of positive charges still the same as the number of negative charges:
\begin{equation}
N_n' = N_p' = n_\rho + 0.5n_\pi
\end{equation}
Relaxing this requirement to an average level does not change our results.
After shuffling, all the pions are independent, no matter whether they are primordial or from resonance decays. For pions from resonance decays, the pion azimuth can be written as $\varphi=\bar{\varphi}+\delta\varphi/2 \approx \varphi_\rho+\delta\varphi/2$. Because the distribution of $\delta\varphi$ is symmetric about $\delta\varphi = 0$, we just use $+\delta\varphi/2$ here.
The expression of $\Delta S_{mix}$ can therefore be rewritten as:
\begin{equation}
\begin{split}
\Delta S_{mix}
=& \frac{1}{n_\rho + 0.5n_\pi}\left(\sum_1^{n_\rho} (\sin\varphi_+ - \sin\varphi_-)\right) \\
&+ \frac{1}{n_\rho + 0.5n_\pi}\left(\sum_1^{0.5n_\pi} (\sin\varphi_+ - \sin\varphi_-) \right) \\
=& \frac{\sum_1^{n_\rho} \sin\left(\varphi_\rho+\delta\varphi_+/2\right) - \sum_1^{n_\rho}\sin\left(\varphi_\rho+\delta\varphi_-/2\right)}{n_\rho + 0.5n_\pi}\\
&+\frac{\sum_1^{n_\pi/2} s_{\pi^+} - \sum_1^{n_\pi/2} s_{\pi^-}}{n_\rho + 0.5n_\pi}
.
\end{split}
\end{equation}
The second term is already calculated in Eq.~\ref{uuudif}, and we calculate the distribution of the first term as
\begin{equation}
\begin{split}
&\sum_{1}^{n_\rho} \sin\left(\varphi_\rho+\frac{\delta\varphi_+}{2}\right) - \sum_{1}^{n_\rho} \sin\left(\varphi_\rho+\frac{\delta\varphi_-}{2}\right) \\
&\sim \mathcal{N} \left(0, 2n_\rho\text{Var}\left[\sin\left(\varphi_\rho+\frac{\delta\varphi}{2}\right)\right] \right),
\end{split}
\end{equation}
\iffalse
\begin{equation}
\begin{split}
\sum_{1}^{n_\pi/2} s_{\pi^+} - \sum_1^{n_\pi/2} s_{\pi_-}
\sim \mathcal{N} \left(0, n_\pi\text{Var}[s_\pi] \right)
.
\end{split}
\end{equation}
\fi
The first and second moments below are needed in order to complete the calculation of the variance:
\begin{equation} \label{uuu1m}
\begin{split}
\text{E} &\left[\sin\left(\varphi_\rho + \frac{\delta\varphi}{2}\right)\right]\\
=& \text{E} \left[\sin\varphi_\rho\right] \text{E}\left[\cos\frac{\delta\varphi}{2}\right] + \text{E} \left[\cos\varphi_\rho\right] \text{E}\left[\sin\frac{\delta\varphi}{2}\right]\\
=& \text{E} \left[\sin\varphi_\rho\right] \text{E}\left[\cos\frac{\delta\varphi}{2}\right]
,
\end{split}
\end{equation}
\begin{equation} \label{uuu2m}
\begin{split}
\text{E} &\left[ \sin^2 \left(\varphi_\rho + \frac{\delta\varphi}{2}\right)\right]\\
=&\frac{1}{2}-\frac{1}{2}\text{E}\left[\cos(2\varphi_\rho)\cos(\delta\varphi)\right]+\frac{1}{2}\text{E}\left[\sin(2\varphi_\rho)\sin(\delta\varphi)\right]\\
=&\frac{1}{2}-\frac{1}{2}\text{E}\left[\left(1-2\sin^2\varphi_\rho\right)\left(1-\frac{1}{2}\delta^2\right)\right]+0\\
=&\text{E}[s_\rho^2]+\frac{1}{4}\text{Var}[\delta]-\frac{1}{2}\text{E}[s_\rho^2]\text{Var}[\delta]
.
\end{split}
\end{equation}
The last step uses the fact that $\text{E}[\delta]=0$ and therefore $\text{Var}[\delta]=\text{E}[\delta^2]$.
Thus, we can get the distribution of $\Delta S_{mix}$,
\begin{equation}
\begin{split}
\Delta S_{mix} &\sim \mathcal{N}\left(0, \frac{2n_\rho\text{Var}\left[\sin\left(\varphi_\rho+\frac{\delta\varphi}{2}\right)\right] + n_\pi\text{E}[s_\pi^2]}{(n_\rho + 0.5n_\pi)^2} \right) \\
&=: \mathcal{N} \left(0,\sigma_\downarrow^2 \right)
.
\end{split}
\end{equation}
\subsubsection{Shape of $C_{\Psi_2}$}
We use the PDF of a normal (Gaussian) distribution:
\begin{equation}
f(x|\mu,\sigma) := \frac{1}{\sqrt{2\pi\sigma^2}}\exp{ \left(-\frac{(x-\mu)^2}{2\sigma^2} \right)}
.
\end{equation}
The shape of $C_{\Psi_2}$ is described by the ratio of the PDF of $\Delta S_{sep}$ to the PDF of $\Delta S_{mix}$. Using Gaussian functions for those PDFs, the shape of $C_{\Psi_2}$ is
\begin{equation}
C_{\Psi_2}(x)=\frac{f(x|0,\sigma_\uparrow)}{f(x|0,\sigma_\downarrow)} = \frac{\sigma_\downarrow}{\sigma_\uparrow}\exp{ \left[-\frac{x^2}{2} \left(\frac{1}{\sigma_\uparrow^2}-\frac{1}{\sigma_\downarrow^2} \right) \right]}
.
\end{equation}
Here, $x$ denotes $\Delta S$ (representing $\Delta S_{sep}$ or $\Delta S_{mix}$).
\subsubsection{Shape of $C_{\Psi_2}^{\perp}$}
The analysis of $C_{\Psi_2}^{\perp}$ is nearly the same as that of $C_{\Psi_2}$, obtained by shifting the relative azimuth $\varphi$ by a certain angle, $\varphi'=\varphi - \xi$. Accordingly, we use the parallel shorthand notations:
\begin{equation}
\begin{split}
c':=\cos\varphi', \quad \bar{c}':=\cos\bar{\varphi}', \quad s':=\sin\varphi', \\
\bar{s}':=\sin\bar{\varphi}',\quad
\delta:=2\sin(\delta\varphi'/2)=2\sin(\delta\varphi/2)
.
\end{split}
\end{equation}
Then, the variances take the same form as before:
\begin{equation}
\sigma_{\perp\uparrow}^2=\frac{n_\rho\text{Var}[\delta]\text{E}[\bar{c}_\rho'^2]+n_\pi\text{Var}[s_\pi']}{(n_\rho + 0.5n_\pi)^2}
,
\end{equation}
\begin{equation}
\sigma_{\perp\downarrow}^2=\frac{2n_\rho\text{Var}\left[\sin\left(\varphi_\rho'+\frac{\delta\varphi}{2}\right)\right] + n_\pi\text{E}[s_\pi'^2]}{(n_\rho + 0.5n_\pi)^2}
.
\end{equation}
The shape of $C_{\Psi_2}^{\perp}$ is
\begin{equation}
C_{\Psi_2}^{\perp}(x)=
\frac{f(x|0,\sigma_{\perp\uparrow})}{f(x|0,\sigma_{\perp\downarrow})} = \frac{\sigma_{\perp\downarrow}}{\sigma_{\perp\uparrow}}\exp{ \left[-\frac{x^2}{2} \left(\frac{1}{\sigma_{\perp\uparrow}^2}-\frac{1}{\sigma_{\perp\downarrow}^2} \right) \right]}
.
\end{equation}
\subsubsection{Shape of $R_{\Psi_2}$}
According to the definition of $R_{\Psi_2}$, the shape of $R_{\Psi_2}$ is given by
\begin{equation}
\begin{split}
R_{\Psi_2}(x)=&
\frac{f(x|0,\sigma_\uparrow)}{f(x|0,\sigma_\downarrow)} \left/ \frac{f(x|0,\sigma_{\perp\uparrow})}{f(x|0,\sigma_{\perp\downarrow})} \right. \\
=& \frac{\sigma_\downarrow\sigma_{\perp\uparrow}}{\sigma_\uparrow\sigma_{\perp\downarrow}}\exp{ \left[-\frac{x^2}{2} \left(\frac{1}{\sigma_\uparrow^2}-\frac{1}{\sigma_\downarrow^2} - \frac{1}{\sigma_{\perp\uparrow}^2}+\frac{1}{\sigma_{\perp\downarrow}^2} \right) \right]}
.
\end{split}
\end{equation}
Thus, whether $R_{\Psi_2}$ is convex or concave is determined by the following parameter:
\begin{equation}
\zeta := \frac{1}{\sigma_\uparrow^2}-\frac{1}{\sigma_\downarrow^2} - \frac{1}{\sigma_{\perp\uparrow}^2}+\frac{1}{\sigma_{\perp\downarrow}^2}
.
\end{equation}
\begin{itemize}
\item If $\zeta > 0$, then $R_{\Psi_2}$ is convex, and the more positive $\zeta$ is, the more convex $R_{\Psi_2}$ will be.
\item If $\zeta < 0$, then $R_{\Psi_2}$ is concave, and the more negative $\zeta$ is, the more concave $R_{\Psi_2}$ will be.
\item If $\zeta = 0$, then $R_{\Psi_2}$ is flat.
\end{itemize}
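These statements can be illustrated with a minimal sketch that builds the Gaussian-ratio shapes from four chosen widths; the width values below are arbitrary and serve only to realize the two signs of $\zeta$.
\begin{verbatim}
import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def r_shape(x, s_up, s_dn, s_pup, s_pdn):
    # R_Psi2(x) as a ratio of the two Gaussian ratios C_Psi2 and C_Psi2^perp
    return (gauss(x, s_up) / gauss(x, s_dn)) / (gauss(x, s_pup) / gauss(x, s_pdn))

x = np.linspace(-0.3, 0.3, 7)
print(r_shape(x, 0.11, 0.10, 0.10, 0.11))  # zeta < 0: values rise away from 0 (concave)
print(r_shape(x, 0.10, 0.11, 0.11, 0.10))  # zeta > 0: values fall away from 0 (convex)
\end{verbatim}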
\subsection{CLT analysis for $v_2$}
If we only focus on $v_2$, the PDF in Eq.~\ref{PDF} can be simplified as
\begin{equation}
f(\varphi)=\frac{1}{2\pi}\left(1+2v_2\cos(2\varphi)\right)
.
\end{equation}
From the definition of $C_{\Psi_2}^{\perp}$ for $v_2$, the relative azimuth is shifted by $\xi=\pi/2$. Thus,
\begin{equation} \label{uuuCpm2}
\begin{split}
&\text{E}[c_\rho^2]=\text{E}[\bar{c}_\rho^2]=\frac{1+v_{2,\rho}}{2},
\quad \text{E}[s_\rho^2]=\frac{1-v_{2,\rho}}{2},\\
&\text{E}[c_\rho'^2]=\text{E}[\bar{c}_\rho'^2]=\frac{1-v_{2,\rho}}{2},
\quad \text{E}[s_\rho'^2]=\frac{1+v_{2,\rho}}{2},\\
&\text{E}[s_\pi^2]=\frac{1-v_{2,\pi}}{2},
\quad \text{E}[s_\pi'^2]=\frac{1+v_{2,\pi}}{2}
.
\end{split}
\end{equation}
We can easily see that the first moment of $\sin\left(\varphi_\rho + \delta\varphi/2\right)$ in Eq.~\ref{uuu1m} is $0$, so its variance equals its second moment, which can be expressed as in Eq.~\ref{uuu2m} using the terms in Eq.~\ref{uuuCpm2}.
After slightly rearranging the terms in the expression of $\zeta$, we have
\begin{equation}
\begin{split}\label{zeta2}
\zeta
=& \left( \frac{1}{\sigma_\uparrow^2} - \frac{1}{\sigma_{\perp\uparrow}^2} \right) - \left( \frac{1}{\sigma_\downarrow^2} - \frac{1}{\sigma_{\perp\downarrow}^2} \right)\\
=& \bigg( \frac{2(n_\rho + 0.5n_\pi)^2}{n_\rho\text{Var}[\delta](1+v_{2,\rho})+ n_\pi(1-v_{2,\pi})} \\
&- \frac{2(n_\rho + 0.5n_\pi)^2}{n_\rho\text{Var}[\delta](1-v_{2,\rho}) + n_\pi(1+v_{2,\pi})} \bigg)\\
&- \bigg( \frac{2(n_\rho + 0.5n_\pi)^2}{2n_\rho(1-v_{2,\rho}) + n_\rho v_{2,\rho} \text{Var}[\delta] + n_\pi(1-v_{2,\pi})} \\
&- \frac{2(n_\rho + 0.5n_\pi)^2}{2n_\rho(1+v_{2,\rho}) - n_\rho v_{2,\rho} \text{Var}[\delta] + n_\pi(1+v_{2,\pi})} \bigg)
.
\end{split}
\end{equation}
For further insights, we make two more assumptions (in addition to those in Sec.~\ref{CpNumerator}):
(3) The magnitude of $v_2$ (including $v_{2,\rho}$ and $v_{2,\pi}$) is much smaller than 1. In our simulations, they are around 0.1;
(4) In each event, the number of primordial pions is much larger than the number of $\rho$ mesons. In our simulations, $n_\pi \approx 10n_\rho$.
In our simulations, $v_{2,\rho}$, $v_{2,\pi}$, and $n_\rho/n_\pi$ are of the same order of magnitude ($\sim 0.1$). To leading order in these quantities,
\begin{equation}
\zeta = \frac{n_\rho}{n_\pi} (2n_\rho + n_\pi)^2 \left(\frac{ 4v_{2,\pi}-2v_{2,\rho}-2v_{2,\pi}\text{Var}[\delta]}{n_\pi+n_\rho \left(4+2\text{Var}[\delta]\right)}\right)
.
\end{equation}
The first derivatives are
\begin{equation}
\frac{\partial \zeta}{\partial v_{2,\rho}} = \frac{n_\rho}{n_\pi} (2n_\rho + n_\pi)^2 \left(\frac{-2}{n_\pi+n_\rho \left(4+2\text{Var}[\delta]\right)}\right) < 0
,
\end{equation}
\begin{equation}\label{zetavpi}
\frac{\partial \zeta}{\partial v_{2,\pi}} = \frac{n_\rho}{n_\pi} (2n_\rho + n_\pi)^2 \left(\frac{ 4-2\text{Var}[\delta]}{n_\pi+n_\rho \left(4+2\text{Var}[\delta]\right)}\right)
,
\end{equation}
\begin{equation}\label{zetavardelta}
\begin{split}
\frac{\partial \zeta}{\partial \text{Var}[\delta]} = \frac{-2n_\rho}{n_\pi} \left(\frac{2n_\rho + n_\pi}{n_\pi+n_\rho \left(4+2\text{Var}[\delta]\right)}\right)^2\\
\times\left(v_{2,\pi}n_\pi+8v_{2,\pi}n_\rho-2v_{2,\rho} n_\rho \right)
.
\end{split}
\end{equation}
When $0 \le \text{Var}[\delta] < 2$, $\partial \zeta / \partial v_{2,\pi} >0$.
When $2 < \text{Var}[\delta] \le 4$, $\partial \zeta / \partial v_{2,\pi} <0$.
Varying the $p_{T,\rho}$ changes $\text{Var}[\delta]$. As long as $v_{2,\rho}$ is no more than $9v_{2,\pi}$ (which is almost always the case), then $\partial \zeta / \partial \text{Var}[\delta] < 0$. In our $p_T$ scan, $v_{2,\rho}$ is a single value $0.06$, and the average of $v_{2,\pi}$ is also around this value.
Thus, after suitable approximations, we can see the effects of these variables on the shape of $R_{\Psi_2}$:
\begin{itemize}
\item Increasing $v_{2,\rho}$ makes $R_{\Psi_2}$ more concave.
\item Increasing $\text{Var}[\delta]$ makes $R_{\Psi_2}$ more concave.
Increasing $p_{T,\rho}$ makes $R_{\Psi_2}$ more convex, because larger $p_{T,\rho}$ makes the two daughter pions closer to each other in angle, yielding a smaller $\text{Var}[\delta]$ (see Fig.~\ref{Var-pT}).
\item Increasing $v_{2,\pi}$ makes $R_{\Psi_2}$ more convex when $\text{Var}[\delta]<2$, and more concave when $\text{Var}[\delta]>2$. In our default simulation (Fig.~\ref{deltaRMS}), $\text{Var}[\delta] \approx 1.359^2 < 2$, so $R_{\Psi_2}$ becomes more convex as $v_{2,\pi}$ increases.
\end{itemize}
The conclusions of the CLT analysis are consistent with the simulation results.
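As a simple numerical cross-check of these trends, the leading-order expression for $\zeta$ can be evaluated directly; the multiplicities and flow values below are placeholders with the orders of magnitude quoted above.
\begin{verbatim}
def zeta_leading(n_rho, n_pi, v2_rho, v2_pi, var_delta):
    # leading-order zeta; its sign controls concave (<0) versus convex (>0) R_Psi2
    pref = (n_rho / n_pi) * (2 * n_rho + n_pi) ** 2
    return pref * (4 * v2_pi - 2 * v2_rho - 2 * v2_pi * var_delta) / (
        n_pi + n_rho * (4 + 2 * var_delta))

# placeholder values with n_pi ~ 10 n_rho and v2 ~ 0.06
print(zeta_leading(8, 80, 0.06, 0.06, 1.8))  # large Var[delta] (wide opening angles): zeta < 0
print(zeta_leading(8, 80, 0.06, 0.06, 0.2))  # small Var[delta] (high p_T rho): zeta > 0
\end{verbatim}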
\begin{figure}[h]
\subfloat{
\centering
\includegraphics[width=0.9\linewidth]{deltapT}
\label{Var-pT}
}
\centering
\subfloat{
\centering
\includegraphics[width=0.9\linewidth]{deltaRMS}
\label{deltaRMS}
}
\caption{$\text{Var}[\delta]$ in the default simulation and the scan of $p_{T,\rho}$. (a) $\text{Var}[\delta]$ depending on $p_{T,\rho}$. (b) $\delta$ distribution in the default simulation.}
\end{figure}
\subsection{CLT analysis for $v_3$} \label{CLTv3}
If we only focus on $v_3$, the PDF in Eq.~\ref{PDF} can be simplified as follows:
\begin{equation}\label{v2PDF}
f(\varphi)=\frac{1}{2\pi}\left(1+2v_3\cos(3\varphi)\right)
.
\end{equation}
\subsubsection{Analysis for Definition \textbf{A}}
By Definition \textbf{A}, we list the shorthand notations:
\begin{equation}
\begin{split}
c:=\cos\varphi, \quad s:=\sin\varphi, \quad \xi=\pi/3, \\
c':=\cos\left(\varphi-\frac{\pi}{3}\right), \quad s':=\sin\left(\varphi-\frac{\pi}{3}\right), \\
\delta:=2\sin\left(\frac{1}{2}\delta\varphi\right)
.
\end{split}
\end{equation}
By using the simplified PDF (Eq.~\ref{v2PDF}), we can easily get the second moments needed:
\begin{equation}
\begin{split}
&\text{E}[{c}_\rho^2]=\text{E}[\bar{c}_\rho^2]=\frac{1}{2},
\quad \text{E}[s_\rho^2]=\frac{1}{2},
\quad \text{E}[s_\pi^2]=\frac{1}{2},\\
&\text{E}[c_\rho'^2]=\text{E}[\bar{c}_\rho'^2]=\frac{1}{2},
\quad \text{E}[s_\rho'^2]=\frac{1}{2},
\quad \text{E}[s_\pi'^2]=\frac{1}{2}.
\end{split}
\end{equation}
There is no $v_{3,\rho}$ or $v_{3,\pi}$ in any term above, so the shapes of the observables should not change with $v_{3,\rho}$ or $v_{3,\pi}$. We can just utilize the CLT analysis results for $v_2$ by setting all $v_2$ values to $0$, and then from the expression of $\zeta$ in Eq.~\ref{zeta2}, we see the terms in each bracket cancel each other. Thus, the CLT analysis shows $\zeta=0$, and accordingly, $R_{\Psi_3}$ should be always flat, as indeed shown in Fig.~\ref{N3R}.
\subsubsection{Analysis for Definition \textbf{B}}
By Definition \textbf{B}, we list the shorthand notations:
\begin{equation}
\begin{split}
c:=\cos\left(\frac{3}{2}\varphi\right), \quad s:=\sin\left(\frac{3}{2}\varphi\right), \quad \xi=\frac{\pi}{3}, \\
\quad c':=\cos\left(\frac{3}{2}\left(\varphi-\frac{\pi}{3}\right)\right),
\quad s':=\sin\left(\frac{3}{2}\left(\varphi-\frac{\pi}{3}\right)\right),\\
\delta :=2\sin\left(\frac{3}{4}\delta\varphi\right)
.
\end{split}
\end{equation}
From the simplified PDF (Eq.~\ref{v2PDF}), we can get the first and the second moments:
\begin{equation} \label{R3m1}
\begin{split}
&\text{E}[s_\rho] = \text{E}[c_\rho'] = \frac{6-4v_{3,\rho}}{9\pi}, \quad \text{E}[c_\rho] = \text{E}[s_\rho'] = 0,\\
&\text{E}[s_\pi] = \text{E}[c_\pi'] = \frac{6-4v_{3,\pi}}{9\pi}, \quad \text{E}[c_\pi] = \text{E}[s_\pi'] = 0
,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
&\text{E}[c_\rho^2]=\text{E}[\bar{c}_\rho^2]=\frac{1+v_{3,\rho}}{2},
\quad \text{E}[s_\rho^2]=\frac{1-v_{3,\rho}}{2},\\
&\text{E}[c_\rho'^2]=\text{E}[\bar{c}_\rho'^2]=\frac{1-v_{3,\rho}}{2},
\quad \text{E}[s_\rho'^2]=\frac{1+v_{3,\rho}}{2},\\
&\text{E}[s_\pi^2]=\frac{1}{2},
\quad \text{E}[s_\pi'^2]=\frac{1}{2}
,
\end{split}
\end{equation}
where we have a constraint that the azimuthal range must be $0 \le \varphi < 2\pi$.
Because of the non-zero first moments, the $R_{\Psi_3}$ curve is not flat ($\zeta \neq 0$) even if both $v_{3,\rho}$ and $v_{3,\pi}$ are set to $0$. This counterintuitive observation is due to the absence of the periodic symmetry in Definition \textbf{B}. For the same reason, Definition \textbf{B} has some disadvantages, as follows:
\begin{itemize}
\item The $R_{\Psi_3}$ curve is counterintuitively not flat, even if both $v_{3,\rho}$ and $v_{3,\pi}$ are set to $0$ which means all azimuths are isotropically distributed.
\item The azimuthal range must be set. In the preceding discussion, we let $\varphi \in [0,2\pi)$. However, if we let the azimuthal range be $\varphi \in [-\pi,\pi)$, the first moments will change from Eq.~\ref{R3m1} into
\begin{equation}
\begin{split}
&\text{E}[s_\rho] = \text{E}[c_\rho'] = 0, \quad \text{E}[c_\rho] = \text{E}[s_\rho'] = \frac{6-4v_{3,\rho}}{9\pi},\\
&\text{E}[s_\pi] = \text{E}[c_\pi'] = 0, \quad \text{E}[c_\pi] = \text{E}[s_\pi'] = \frac{6-4v_{3,\pi}}{9\pi}
,
\end{split}
\end{equation}
which can make obvious differences to the features of the sine observables.
\item The azimuthal range is a matter of choice; however, it introduces artificial, unphysical differences when using Definition \textbf{B}.
Take Fig.~\ref{ChargeDisorder} as an example. If we take the azimuthal range $[-\pi,\pi)$,
we have $\alpha > 0$ and $\beta < 0$. However, if we take the range $[0,2\pi)$, $\beta$ will become $\beta' = \beta + 2\pi$. The contribution of this resonance decay to $\Delta S_{sep}$ changes from
\[\quad\quad\quad\sin\left(\frac{3}{2}\alpha\right) - \sin\left(\frac{3}{2}\beta\right)\]
into
\[\quad\quad\quad\sin\left(\frac{3}{2}\alpha\right) - \sin\left(\frac{3}{2}\beta'\right) = \sin\left(\frac{3}{2}\alpha\right) + \sin\left(\frac{3}{2}\beta\right).\]
It is as if the negative charge became a positive one; a short numerical illustration of this range dependence is given after this list.
\end{itemize}
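The range dependence can be illustrated in a few lines, with arbitrary angles standing in for the $\alpha$ and $\beta$ of Fig.~\ref{ChargeDisorder}:
\begin{verbatim}
import numpy as np

alpha, beta = 0.4, -0.3       # arbitrary daughter azimuths with beta < 0
in_range_pm_pi = np.sin(1.5 * alpha) - np.sin(1.5 * beta)               # range [-pi, pi)
in_range_0_2pi = np.sin(1.5 * alpha) - np.sin(1.5 * (beta + 2 * np.pi)) # range [0, 2*pi)
print(in_range_pm_pi, in_range_0_2pi)  # ~1.00 vs ~0.13: the same pair contributes differently
\end{verbatim}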
\begin{figure}[h]
\centering
\includegraphics[width=0.33\textwidth]{ChargeDisorder}
\caption{The choice of the azimuthal range affects the physical results using Definition \textbf{B}.}
\label{ChargeDisorder}
\end{figure}
We thus conclude that Definition \textbf{B} is ill-devised, and should not be used. On the other hand, Definition \textbf{A} always yields a flat $R_{\Psi_3}$ distribution and therefore is not sensitive to the CME or background. It therefore appears that the $\Psi_3$ harmonic plane is not suitable for the sine observables.
\iffalse
Again, we can use the CLT analysis results for $v_2$ by replacing $v_{2,\rho}$ with $v_{3,\rho}$ and replacing $v_{2,\pi}$ with $0$. Thus, the $v_3$ observables' dependences on $v_{3,\rho}$ and $p_{T,\rho}$ are just like the $v_2$ observables' dependences on $v_{2,\rho}$ and $p_{T,\rho}$. Namely,
\begin{itemize}
\item Increasing $v_{3,\rho}$ makes $R_{\Psi_3}$ more concave.
\item Increasing $p_{T,\rho}$ makes $R_{\Psi_3}$ more convex.
\end{itemize}
For the first point, in our simulation, the observables are not very sensitive to $v_{3,\rho}$.
For the second point, it is consistent with the simulation results.
\fi
\section*{Summary}
We have presented a systematic study of resonance backgrounds as functions of the resonance $v_2$ and $p_T$ with toy-model simulations and CLT calculations, in order to better understand the behaviors of the sine observable.
It is found that the concavity or convexity of $R_{\Psi_2}(\Delta S)$ depends sensitively on the resonance $v_2$
(which yields different numbers of decay $\pi^+\pi^-$ pairs in the in-plane and out-of-plane directions)
and $p_T$ (which affects the opening angle of the decay $\pi^+\pi^-$ pair).
Qualitatively, low $p_{T}$ resonances decay into large opening-angle pairs and result in more ``back-to-back'' pairs out-of-plane (because of the positive resonance $v_2$), mimicking a CME signal, or a concave $R_{\Psi_2}(\Delta S)$. High $p_T$ resonances, on the other hand, result in more close pairs in-plane, constituting a well-known background, or convex $R_{\Psi_2}(\Delta S)$. In other words, resonance backgrounds can yield both concave and convex $R_{\Psi_2}(\Delta S)$ distributions, depending on the resonance kinematics.
We have also conducted a supplemental study using the triangular flow ($v_3$) and discussed two definitions for the sine variables. For one of the definitions, it is found that $R_{\Psi_3}(\Delta S)$ is
always flat due to the symmetry inherent in the definition.
For the other definition, $R_{\Psi_3}(\Delta S)$ for $v_3$ is found to behave similarly to $R_{\Psi_2}(\Delta S)$ for $v_2$, if the azimuthal angle is kept in the range $[0,2\pi)$;
$R_{\Psi_3}$ can be concave or convex depending on details.
However, $R_{\Psi_3}$ is found to depend on the choice of the azimuthal angle range due to the inconsistency between the periods of $R_{\Psi_3}$ ($4\pi/3$) and azimuthal position ($2\pi$). If $[-\pi,\pi)$ is chosen to be the range, then the $R_{\Psi_3}$ results are completely different.
Therefore, the $\Psi_3$ harmonic plane may not be suitable for sine-observable studies. One has to be careful to keep an identical azimuthal angle range in model-data comparison studies.
We have verified our toy-model simulation results by analytical CLT calculations.
\iffalse
If the CME is the only source for the RP-dependent and charge-dependent correlations, then the $R_{\Psi_2}(\Delta S)$ would be concave and $R_{\Psi_3}(\Delta S)$ would be flat. However, given the existence of backgrounds, a concave $R_{\Psi_2}(\Delta S)$ and a simultaneous flat $R_{\Psi_3}(\Delta S)$ do not lead to the conclusion of CME.
This is because the $R_{\Psi_2}(\Delta S)$ and $R_{\Psi_3}(\Delta S)$ variables do not necessarily have a prior relationship, each individually varying with their respective $v_m(p_T)$ $(m=2,3)$ of resonances, and because the $R_{\Psi_3}$ variable depends on what azimuthal range is used.
Based on our results, it is clear that the qualitative concavity or convexity of the $R_{\Psi_2}(\Delta S)$ or $R_{\Psi_3}(\Delta S)$ variable, or the comparison between them, cannot conclude on the existence, nor the magnitude, of the CME. Since the $R_{\Psi_2}(\Delta S)$ and $R_{\Psi_3}(\Delta S)$ variable depends on the details of the resonance kinematics and anisotropies, as well as the resonance abundances, a precise knowledge of all resonance distributions is required in order to quantify the CME using the $R_{\Psi_m}(\Delta S)$ $(m=2,3)$ observables.
\fi
If the CME is the only source for the RP-dependent and charge-dependent correlations, then the $R_{\Psi_2}(\Delta S)$ would be concave and $R_{\Psi_3}(\Delta S)$ would be convex for the nontrivial definition. However, given the existence of backgrounds, a concave $R_{\Psi_2}(\Delta S)$ and a simultaneous convex $R_{\Psi_3}(\Delta S)$ do not lead to the conclusion of the CME.
This is because the $R_{\Psi_2}(\Delta S)$ and $R_{\Psi_3}(\Delta S)$ variables do not necessarily have a prior relationship, each individually varying with their respective $v_m(p_T)$ $(m=2,3)$ of resonances, and because the $R_{\Psi_3}$ variable depends on what azimuthal range is used.
Based on our results, it is clear that the qualitative concavity or convexity of the $R_{\Psi_2}(\Delta S)$ or $R_{\Psi_3}(\Delta S)$ variable, or the comparison between them, cannot conclude on the existence, nor the magnitude, of the CME. Since the $R_{\Psi_2}(\Delta S)$ and $R_{\Psi_3}(\Delta S)$ variables depend on the details of the resonance kinematics and anisotropies, as well as the resonance abundances, a precise knowledge of all resonance distributions is required in order to quantify the CME using the $R_{\Psi_m}(\Delta S)$ $(m=2,3)$ observables.
\section*{Acknowledgments}
Y.~Feng thanks Dr.~Wendell Lutz and Mrs.~Nancy Lutz~for their generous support of the Rolf Scharenberg Graduate Research Fellowship.
We thank Roy Lacey and Niseem Magdy for useful discussions. This work is supported in part by the U.S.~Department of Energy Grant No.~DE-SC0012910 and the National Natural Science Foundation of China Grants No.~11647306 and No.~11747312.
\section{Historical Preamble}
George Lema\"itre left many legacies, well beyond his technical results in cosmology. Three of them still riverberate on the current research in quantum gravity.
\begin{description}
\item[\it Finiteness] ~~ A finite universe is a recurring theme in Lema\"itre's research. This is clear in his best known idea: a universe finite in time, i.e.\ having a beginning rather than being eternal like the static universe, which was the dominant paradigm at the time. But finiteness inspired many of his other ideas. His ideas about the origin of the universe in the form of a quantum \emph{primeval atom} addressed the plague of curvature singularities in General Relativity. The quantum properties of the primeval atom were supposed to correct for this, even if at the time no available technique could achieve it. Less known is the fact that George Lema\"itre addressed the problem of singularities also in the context of black holes during his doctorate: he analysed a non-singular black-hole solution with non-constant density, which later led to
the Eddington-Lema\"itre metric for cosmology.
\item[\it Phoenix Universe] ~~
The Phoenix Universe is a model where the universe goes through oscillations, collapsing and bouncing back. It reestablishes infiniteness in time, but avoids the problem of the initial curvature singularity. The idea of a universe that rises from its own ashes was welcomed by many as conceptually preferable, and dismissed by others (Eddington notoriously said \emph{``I am no Phoenix worshipper''}). Lema\"itre was attracted by the philosophical aspects of this idea, but renounced it when his calculations yielded cosmological cycles that were too short. Today viable bouncing models are studied, relying on different physical mechanisms \cite{Brandenberger2016}. The cosmological framework based on Loop Quantum Gravity realises this possibility via a quantum singularity resolution, as I briefly illustrate in the next Section.
\item[\it Phenomenology] ~~
George Lema\"itre understood that data were indicating that the universe was expanding. He was the first to write the law that became later known as the Hubble law: not only this was in 1927, two years before 1929 Hubble's paper, but, most importantly, he had already the correct interpretation of the data. His attention for data was stronger than for most of his contemporary pears working in gravitation ---Einstein included. Theory needs data! Today, in our era of celebrated data abundance in cosmology, our attention should perhaps shift towards the theory to find the right interpretation, without which the accumulation of new data remains blind. I also like to mention here the pioneeringh work of Lem\"itre in numerical simulations. Simulations are the theorist's experiment, and in quantum gravity simulations are going to play an increasingly important role guiding the future analytical investigation
\cite{Magliaro:2008dq,Christensen:2009oq,Bahr2015,Bayle2016,Dona2018,Gozzini2018}%
.
\end{description}
\section{Quantum Singularity Resolution}
The singularities of General Relativity can be avoided if matter violates the strong energy condition, which is a hypothesis of the Penrose-Hawking singularity theorem. They can also be avoided quantum-mechanically: this is what is expected in a viable quantum gravity theory, and this is what happens in Loop Quantum Gravity.
Loop Quantum Gravity is based on a gauge-theory perspective on General Relativity, written in the tetrad formalism. On a fixed time surface, tetrads reduce to triads, and the corresponding operators in the quantum theory are generators of rotations. These are the basic ingredients to define geometric operators. The compactness of the rotation group (or its double covering $SU(2)$) yields the discreteness of the spectrum of the area and volume operators, which is the cornerstone of Loop Quantum Gravity. This result alone, however, is not sufficient to account for the resolution of the singularities of classical General Relativity. This depends on the dynamics of the theory.
A covariant formulation of the Loop Quantum Gravity dynamics is given by the \emph{spinfoam} formalism. In this formalism the geometrical $SU(2)$ quantities labelling the states at a given time evolve building up Lorentzian spacetime. The corresponding transition amplitudes are defined by a map from
representations of $SU(2)$ to unitary representations of $SL(2,\mathbb{C})$. This map induces a proportionality $\vec K=\gamma \vec L$ between the generators of rotations $\vec L$ and the generators of boosts $\vec K$ in $SL(2,\mathbb{C})$%
\footnote{This relation is called the \emph{simplicity} constraint. Starting from the topological BF action, this constraint frees the gravitational degrees of freedom and projects onto the physical states, encoding the dynamics of spacetime \cite{Rovelli:2014ssa}.}.
The proportionality constant $\gamma$ is named after Barbero and Immirzi: in the classical tetrad formulation it multiplies the Holst term in the action%
\footnote{The Holst term appearing in General Relativity is analogous to the one we know for a non-abelian theory, the $\theta$-term in QCD: these terms do not affect the classical equations of motion, but they play a role in the quantum theory.}.
The equation $\vec K=\gamma \vec L$ ties the discrete spectrum of $L^2$ to the discreteness of the boost spectrum and this, in turn, implies the existence of a maximal acceleration \cite{Rovelli2013a}. A maximal acceleration has been considered in the study of singularities on a classical spacetime: having a maximal acceleration appears to be a sufficient condition to have a bounded curvature and therefore no curvature singularity%
\footnote{The definition of a curvature singularity in General Relativity is more cumbersome than just checking the finiteness of the curvature. Indeed, one should also check that spacetime is geodesically complete. Given a maximal acceleration, there is one (and only one) counterexample of a non-geodesically-complete spacetime, given by Tipler \cite{Tipler:1977kx}, but it requires a quite artificial construction that does not apply to the kind of spacetimes we may want to consider in physical situations.}%
. Therefore, in the covariant formulation of Loop Quantum Gravity, the conditions are in place for a quantum \emph{no-singularity conjecture}.
Curvature singularities are therefore expected to disappear in any formulation of a quantum cosmology based on the loop quantization. An extensive literature has confirmed this expectation in the context of Loop Quantum Cosmology \cite{Agullo2017}. In Loop Quantum Cosmology a quantum Hamiltonian operator is constructed using holonomies of the Ashtekar connection, which encodes the curvature. Promoting holonomies directly to quantum operators yields boundedness. In this simplified framework one can study the consequences of this boundedness explicitly: the effective equations of motion for the universe show that when the universe approaches a Planckian energy density, quantum effects manifest as an effective repulsive force that prevents further collapse. In a (closed) primordial universe, this energy density is attained when the universe is many orders of magnitude larger than a Planck radius \cite{Ashtekar:2006es}. The result is a bouncing universe: the universe experiences a pre-big-bang contracting phase, and the contraction is stopped by the appearance of a quantum repulsive force, which leads to the subsequent expanding phase.
In the rest of the paper I focus on another main consequence of the quantum singularity resolution: its effect on black holes.
\section{Planck Stars}
It is reasonable to expect the physics of the singularity at the center of a black hole to be similar to that of the cosmological bounce. The application of Loop Quantum Gravity to the central black-hole singularity started a decade ago \cite{Modesto2004,Modesto2006,Bojowald:2005qw,Ashtekar2005,Gambini:2008bh} and is actively continuing today \cite{Gambini2014,Gambini2013,Gambini2015,Gambini2016,Corichi2016,Yonika2017,Olmedo2017}. A detailed picture recently developed is provided by the \emph{Planck Star} scenario: a distribution of matter collapses, forms a black hole and continues the collapse inside its dynamical horizon; the quantum repulsion prevents the formation of the central singularity and triggers a phase of expansion, eventually turning the trapping surface of the horizon into an anti-trapping surface, i.e.\ a white hole \cite{Rovelli2014h}. The name Planck Star comes from the analogy with usual stars, because the quantum repulsion prevents a further collapse like the nuclear reactions do in the core of stars. Notice that the collapse still produces a horizon, but a dynamical horizon rather than an event horizon, therefore most of the usual black-hole physics is unchanged.
The Planck star scenario bears three novelties. First, the black hole bounces back in the same universe and at the same spatial point where it is, and not into a different region like, for instance, in the case of an Einstein-Rosen bridge. Previous attempts to bridge a collapsing solution with an expanding one \cite{Stephens1994,Frolov1981} faced the problem that in Penrose's conformal diagrams a white hole sits in the causal past of a black hole, not in its future. The realisation that it is possible to have a white hole in the future of a black hole, with the same asymptotic region, was clarified by Haggard and Rovelli \cite{Haggard:2014fv}. Such a spacetime is forbidden by the classical Einstein equations, but is possible if these are violated in a small spacetime region, as is typical of quantum tunnelling.
The second feature of the Planck Star framework is that it is possible to ask how long the bounce of a black hole takes. Notice that the bounce can be very fast in the proper time comoving with the collapsing matter ---of the order of the time light would take to cross the horizon from one side to the other--- but very long for an external observer, i.e.\ an observer that sees a front of matter collapsing to form the black hole and then coming back out of a white hole. The difference can be huge, due to the large redshift caused by the gravitational potential. The external bounce time is the time we are interested in studying in view of the phenomenology; this is the black hole lifetime.
In the covariant theory of Loop Quantum Gravity, it is possible in principle to compute the black hole lifetime \cite{Christodoulou:2016ve}, although the explicit calculation has so far proven cumbersome \cite{Christodoulou2018}. The same quantity should be understandable using different quantum-gravity approaches, and a concrete result on that may provide a test for the theory. From higher-dimensional black holes to a brane interior, there is a convergence toward a possible instability of quantum-gravitational origin that can yield an explosive event \cite{Gregory:1993vy,Casadio:2000py,Casadio:2001dc,Emparan:2002jp,Kol:2004pn}.
In the absence of a solid explicit computation, it is possible to estimate the allowed lifetime using heuristic arguments. In the simplified case of a non-rotating black hole with no charge, this time would depend solely on the total mass. Therefore, we can express it in natural units as a function of the black hole mass $M_{BH}$.
The characteristic time scale of the Hawking evaporation is $M_{BH}^3$. The usual computation of Hawking evaporation uses a fixed background, and its validity should be questioned when the dynamical nature of spacetime needs to be taken into account. This is expected to happen when approximately half of the mass of the black hole has evaporated, as signalled by the inconsistency referred to as the `firewall' \cite{Almheiri:2012rt} found in this regime. Therefore a full quantum-gravity regime should begin before firewalls would appear. The time $M_{BH}^3$ can be estimated as the longest possible time the hole can live before turning into a white hole with the consequent explosion.
To bound the black-hole lifetime from below, consider that quantum fluctuations can already appear on a shorter time scale, when Hawking radiation is still negligible. In general a classical system presents a \emph{quantum-break time} that corresponds to its characteristic timescale after which the evolution departs from classicality and the system should be treated as a quantum system. This time can be computed for a black hole by asking how long it would take for quantum fluctuations to be detected outside the horizon. The curvature at the horizon could be small (the bigger the black hole, the smaller this curvature would be) but never zero. As this curvature is proportional to $M_{\rm BH}^{-2}$ in the region outside the horizon, the inverse of this quantity gives a time $M_{\rm BH}^2$ \cite{Haggard:2014fv,Haggard:2016ibp}, a time hugely shorter than $M_{\rm BH}^3$, as one realises by restoring the Planck constants in the equations. $M_{\rm BH}^2$ is the lower bound on the possible lifetime of a hole: in order for a black hole to decay into a white hole, some quantum effects should appear, and $M_{\rm BH}^2$ is the minimal time at which this can happen. In the treatment of quantum systems that undergo a decay, the distribution of events is strongly peaked on the minimal time. This suggests that for black holes the time $M_{\rm BH}^2$ might be the relevant one.
Finally, the third novel aspect of the Planck-Star scenario is a whole new phenomenology of the explosive expanding phase. The current studies, while motivated by Planck Stars, can in fact apply to a wider range of models of exploding black holes. I discuss this in the remaining part of this presentation, considering the full range of possible black-hole lifetimes between $M_{\rm BH}^2$ and $M_{\rm BH}^3$.
It is convenient to parametrise this window as a lifetime
$$
\tau = 4\kappa\left({M_{\rm BH}}/{m_{\rm Pl}}\right)^{2} t_{\rm Pl}
$$ where $\kappa$ is a phenomenological parameter that, for primordial black holes exploding now,
ranges over the wide interval from $0.05$ to $10^{22}$. We use Planck units for time $t_{\rm Pl } $ and mass $ m_{\rm Pl } $.
The lifetime determines the size of the hole at the time of its explosion.
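For reference, and as a rearrangement of my own rather than a statement from the original analyses, the lifetime relation can be inverted to give the mass at the moment of explosion,
$$
M_{\rm BH} = m_{\rm Pl}\,\sqrt{\frac{\tau}{4\kappa\, t_{\rm Pl}}}\,,
$$
so that, at fixed lifetime $\tau$, smaller values of $\kappa$ correspond to more massive, and hence larger, exploding holes.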
Let's estimate the size of a black hole that formed in the early universe and has lived as long as the Hubble time $t_H \approx 14 \cdot 10^9$ years. Then, within the allowed range of $\kappa$, the mass of this black hole would range between $10^{14}\,g$ and $10^{23}\,g$, which yields a Schwarzschild radius approximately between $10^{-14}\,cm$ and $10^{-2}\,cm$, respectively \cite{Rovelli2014h,Barrau2014g}.
Given the mass of the black hole at the time of the explosion, we can study the characteristics of the astrophysical signal, both via gravitational and electromagnetic messengers. I will not consider here the emission in gravitational waves. I focus instead on the electromagnetic one.
\section{Expected Astrophysical Signals}
We do not know the exact mechanism of emission from a white hole. Still, the possible signals arising from such an explosion can be described and searched for. Three possible astrophysical signals can be expected. These are: a low energy signal that depends on the size of the hole exploding and whose wavelength varies widely within the allowed lifetime window; a high-energy signal in the very high energy spectral band that depends on the energy of the matter that formed the black hole; and a signal in the radio that would be produced if there are magnetic fields in the surroundings of the exploding hole. I discuss the three cases below.
\subsection{Low Energy Channel} \label{LEC}
During an explosion, the exploding object has its modes excited. The lowest modes are associated with the size of the object. For an exploding black hole, we can expect a mode of the order of the Schwarzschild radius to be excited and appear as a channel in the emitted signal. This should not be surprising, as a similar reasoning leads to the wavelength of the particles produced by Hawking radiation, where the wavelength is peaked around the Schwarzschild radius with a factor $\sim16$ that accounts for a combination of physical constants.
Black holes formed by stellar collapse are not of interest in this context, because they would explode only in the far future. Only \emph{primordial black holes} (PBH) formed in the early universe may have the size and the lifetime to have already exploded and be within reach for observation. Different mechanisms can lead to the formation of black holes in the very early universe: in fact, there are so many mechanisms, ranging from versions of inflation to cosmic strings, that their existence seems quite plausible.
Let's consider a black hole whose lifetime is equal to the Hubble time and use the relation for the lifetime $\tau = 4\kappa\left({M_{\rm BH}}/{m_{\rm Pl}}\right)^{2} t_{\rm Pl}$.
We have studied in \cite{Barrau:2015uca} the emission for all possible values of $\kappa$; for the highest ones, corresponding to longer lifetimes, the emitted signal is in the $GeV$ range. Studying with the PYTHIA code \cite{Sjostrand:2014zea} the spectrum of photons produced in the event, it has been shown that the highest photon density would be in the $MeV$ range, making a detection more likely in this band \cite{Barrau2014e}. Short Gamma Ray Bursts \cite{Nakar:2007yr}, not associated with known sources or with possible neutron-star mergers, would be candidates for such a signal.
For the lowest values of $\kappa$, favoured by the theory, we can expect a signal reaching approximately half a millimetre. At this wavelength, events within the entire visible universe would be detectable \cite{Barrau:2015uca}. Instruments like the Large Latin American Millimeter Array (LLAMA) could be promising for the detection.
An aspect that makes this channel with this value of $\kappa$ particularly interesting is a possible connection with Fast Radio Bursts (FRB) \cite{Barrau2014g}. The bursts present a number of characteristics that would fit our signal, such as the (mostly) extragalactic origin, the unusually high flux, the rate of events, and so on. However, there is a mismatch between our prediction and the wavelength of FRBs, which is around $20\,cm$. If one wants to further investigate this possibility, there are two possible explanations for this mismatch. One is simply given by the rough simplifications made in the calculation, which can easily lead to missing a couple of orders of magnitude. A more interesting possibility considers the random nature of the black-hole lifetime. FRBs could then be just part of a family of cosmic rays coming from black-hole explosions, of which those in the radio are selected due to atmospheric opacity and telescope availability.
This possibility becomes more viable if the PBH mass spectrum is highly peaked \cite{Barrau2018}.
\subsection{High Energy Channel}\label{HEC}
From the point of view of the matter collapsing and forming the black hole, the bouncing process is extremely rapid and almost elastic: it is conceivable that no dissipative effect appreciably affects the matter energy. Therefore, matter could be expelled with approximately the same energy with which the process started.
We are interested in black holes formed in the early universe.
The first mechanism proposed for their formation \cite{Carr:1974nx}, and still the most palatable, is based on the presence of overdense regions during the reheating: their collapse forms PBHs whose size depends on the Hubble horizon at the precise time of their formation. The universe then would be composed mostly of photons with a temperature of the order of a $TeV$. Correspondingly, this is the energy we expect for the matter emitted by a white hole. Being in the $TeV$, we label this emission channel as the ``high energy channel''.
A direct observation of a cosmic ray of such a high energy has an observational horizon due to the interaction with the CMB. As studied in \cite{Barrau:2015uca}, only events within our galactic neighbourhood are within reach. This estimate is performed taking into account the observational facilities currently available. On the other hand, new telescopes will start to operate in the forthcoming years \cite{Rieger:2013awa}, allowing for the exploration of this observational window.
\subsection{Rees Mechanism}\label{RM}
The particle production in the high energy channel can be studied using codes like PYTHIA \cite{Sjostrand:2014}. It would be important to check in more detail the coherence of the signal and the production of electron-positron pairs. The number of these pairs is expected to be significant, as in the explosive process all the mass of the black hole is emitted. The resulting relativistic expanding shell behaves as a perfect conductor. In the likely presence of interstellar magnetic fields, the shell would reflect and boost the virtual photons of those fields. The resulting emission depends on the Lorentz factor of the expanding shell, which depends on the energy of the initial event. As in this case the energy is in the $TeV$, this allows the primary signal to be efficiently converted into a secondary one in the radio. The spectrum of this emission corresponds to the one derived by Blandford \cite{Blandford1977}. Rees studied this mechanism in the Seventies \cite{Rees1977} in the hope of observing the explosion of PBHs: a signal in the radio can travel without deteriorating interactions much further than one at the $TeV$ energy, at which the explosions were expected. Today we join Rees in that hope when considering the possibility that the detection of Fast Radio Bursts could in fact be the detection of PBH explosions.
The connection between the Rees mechanism and PBHs exploding in a time shorter than the evaporation time has been considered in \cite{Kavic2008,Kavic2008a,Estes2017}.
The connection between the Rees mechanism and FRBs, on the other hand, is considered as well by Thompson \cite{Thompson2017}, even though with a different exploding source.
The Planck Star framework allows these lines of research to be pulled together, with a better defined theoretical framework for the explosion and, as mentioned before, the right characteristics in terms of localisation mostly outside the galaxy, the efficient conversion of the black hole mass into the received flux, and the expected rate of events.
\section{A quantum signature}
The most interesting aspect of the Planck Star scenario is the fact that the signal has a clear signature that can distinguish it from other astrophysical signals. This is a peculiar relation between distance and observed wavelength, and it depends on the relation between the lifetime of the black hole and its mass. Consider two PBHs of different mass, one of which explodes now near us and the other far away. The second one must be smaller because it exploded earlier, hence has a shorter lifetime. Notice that the smaller black hole produces a signal of shorter wavelength \emph{both} in the low and in the high energy channels. The shorter wavelength partially compensates the standard cosmological redshift.
This compensation is peculiar: the wavelength observed from a standard astrophysical object scales linearly with the distance because of the redshift. The phenomenon is not a generic feature of PBHs either, as an explosion due to the Hawking evaporation happens in the standard theory when the black hole reaches the Planck size, independently of its initial mass, and the signal emitted would only know of that scale. In the case of Planck stars, on the other hand, there is a modified relation between the wavelength of the observed signal and the distance of its source \cite{Barrau:2014yka}. The resulting curve in Fig.\ref{fig:flat} presents a characterising flattening. This provides a signature of the quantum-gravitational origin of the bursts.
To fit such a curve, we need to collect data on bursts and be able to associate a distance with the source, a task that is not always possible. A standard method would utilise the dispersion measure of the received signal. In lucky situations, the burst may be associated with a known astronomical object, for instance a host galaxy.
\begin{SCfigure}
\includegraphics[height=55mm]{lambda.pdf}
\caption{
The observed wavelength (unspecified units) from black hole explosions as a function of the redshift $z$. The curve presents a characteristic flattening as the redshift is compensated by the shortening of the emitted wavelength for more distant sources.}
\label{fig:flat}
\end{SCfigure}
This framework provides a further technique to measure distances. In fact we can reverse the previous argument and use the fact that distant explosions of PBHs come from less massive ones. Therefore, the incoming flux will be lower. This is particularly interesting in connection with FRBs: in fact, the Rees mechanism washes out the information about the size of the source in terms of wavelength, even though not completely, and makes it more cumbersome to localise the source. Still, the flux in the radio emission will depend on the flux of the primary $TeV$ burst. Therefore, we expect to see FRBs with a smaller flux the more distant the source is.
\subsection{Intensity mapping}
Detection of cosmic rays with a precise association of the distance of their source is hard. A more affordable strategy is to set up a campaign of observation on the largest sky field in order to collect the emission integrated over all the detectable past PBH explosions. This has been studied exploring the whole range of lifetimes by varying the parameter $\kappa$. The single events have been simulated with the PYTHIA code and then integrated. Luckily, the characteristic wavelength-distance relation discussed before propagates its effects also into the spectrum of the integrated emission \cite{Barrau:2015uca}. In fact, the integration over standard astrophysical sources, with a given emission spectrum, produces a thermal black-body radiation. In this case, instead, the integrated spectrum turns out to be distorted because of the peculiar redshift-wavelength relation of Fig.\ref{fig:flat}. The same effect manifests in the spectra obtained for the high energy and the low energy channels. This result, verified for the whole window of allowed lifetimes, is shown here for the case of the shortest lifetime in Fig.\ref{fig:diffuse}.
Notice that while the result depends on the PBH mass spectrum, a study considering different shapes for the PBH mass spectrum has shown that this has only a small effect on the shape of the emission spectrum and does not change the qualitative result \cite{Barrau:2015uca}.
\begin{figure}[h]
\includegraphics[width=.5\textwidth]{k005kmin.pdf}
\includegraphics[width=.5\textwidth]{k005_high.pdf}
\caption{The diffuse emission of the low energy channel (left) and the diffuse emission of the high energy channel (right) for the shortest BH lifetime ($\kappa=.05$). We have not specified the units in the ordinate axis as the normalisation of the spectrum can possibly depend on the percentage of PBH as dark matter.}
\label{fig:diffuse}
\end{figure}
\subsection{Primordial Black Holes}
We have considered black holes forming in the very early universe, so that the matter forming them is not subject to the constraints on baryons that apply for nucleosynthesis. This allows PBHs to qualify as non-baryonic dark matter (DM). PBHs were indeed one of the first candidates historically considered to explain dark matter \cite{Chapline:1975cr}. They have had alternating fortunes. Today, with the lack of direct detection of DM particles and the recent discovery of black holes of unexpected sizes, they are receiving renewed attention. We now have observational constraints for different mass ranges of PBHs \cite{Carr:2016hva,Chen:2016pud,Green:2016xgy}. Some of the old constraints are today relaxed by allowing different mechanisms of PBH formation: most of the constraints were in fact obtained with a monochromatic mass spectrum, a paradigm now challenged by different authors as unrealistic \cite{Carr:2017jsz}.
The framework presented here requires a further reconsideration of certain of these constraints, especially those based solely on Hawking evaporation, and presents some novel phenomenological features.
The process that yields the final explosion of a black hole can be seen as a decay process. Therefore PBHs would qualify as a decaying DM candidate with a distinctive characteristic: the decay time depends on the mass. This feature distinguishes this model from other decaying DM candidates, whose decay time is fixed. In particular, the decay time $\tau$ is shorter than the evaporation time: this may reduce or even completely suppress the presence of Hawking radiation, undermining PBH constraints based on it.
Another constraint challenged by this scenario concerns small black holes and their lack of detection by microlensing \cite{griest2013new}. In fact, according to Hawking evaporation, PBHs completing their full evaporation today have a mass of $10^{12}$ kg. All the black holes with a mass smaller than that would not be present in the universe any more, having already evaporated. In the new scenario the minimal mass of PBHs still present today rises. For the shorter lifetime $M_{\rm BH}^2$, PBHs decaying and exploding today have a mass as high as $10^{23}$ kg. Therefore, no PBH smaller than that mass is likely to be detected.
A further peculiar property of PBH decay is a lowering of the DM energy density. In fact, the decay effectively converts DM into radiation, changing the total balance in the equation of state of the Universe through the ages. This has a repercussion on the count of galaxies in large-scale galaxy surveys, affecting galaxy clustering, galaxy lensing and Redshift-Space Distortions (RSD) \cite{Raccanelli:2017one}. More data concerning this will become available for instance with the LSS surveys by the SKA telescope, with the detection of individual galaxies in the radio continuum \cite{Jarvis:2015asa}: new techniques \cite{Menard:2013aaa,Kovetz:2016hgp} allow the extraction from these data of information about the evolution with respect to redshift.
If these measurements can be combined with others on PBHs, they would allow the quantum-gravity lifetime-mass scaling to be constrained. The innovative aspect of such a strategy resides in the possibility of measuring a quantum-gravity phenomenon from late-universe data, as opposed to standard strategies involving data from the early universe.
Finally, the effects of the quantum-gravitational BH decay should leave an imprint in the CMB, because of the energy released by the explosion of the smallest PBHs. An ongoing effort is now addressing these effects and the analysis of the new constraints, combining different observables and considering extended PBH mass functions \cite{Bellomo:prep}.
\begin{acknowledgements} FV thanks Aur\'elien Barrau, Michael Kavic, Alvise Raccanelli, and Carlo Rovelli for their collaboration on the topics presented here. ~ The work of FV at UPV/EHU is supported by the grant IT956-16 of the Basque Government.
\end{acknowledgements}
|
\section{Introduction}
An MDS matrix is a matrix whose maximal minors all have full rank.
These matrices arise naturally in coding theory, as they are generator matrices for MDS (Maximum Distance Separable) codes.
A question arising in coding theory, motivated by applications in multiple access networks \cite{halbawi2014distributed,dau2015simple}
and in secure data exchange \cite{yan2013algorithms,yan2014weakly},
is what zero patterns can MDS matrices have. Namely, how sparse can MDS matrices be?
There is a natural combinatorial characterization of the allowed zero patterns, called the MDS condition.
Let $A$ be a $k \times n$ MDS matrix with $k \le n$.
We can describe its zero/nonzero pattern by a set system $S_1,\ldots,S_k \subset [n]$, where $S_i=\{j \in [n]: A_{i,j}=0\}$.
There are several restrictions on the structure of such set systems. Clearly, any row of $A$ can have at most $k-1$ zeros,
so $|S_i| \le k-1$ for all $i$. Similarly, any two rows of $A$ can have at most $k-2$ common zeros, so $|S_i \cap S_j| \le k-2$
for all $i \ne j$. In general, this is known as the \emph{MDS condition} on the set system:
\begin{equation}
\tag{$\star$}
|I|+\left| \bigcap_{i \in I} S_i \right| \le k \qquad \forall I \subseteq [k], I \text{ nonempty}.
\end{equation}
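As a concrete illustration (not part of the original development; the function name and interface below are my own), the MDS condition ($\star$) can be checked mechanically for a given zero pattern:
\begin{verbatim}
from itertools import combinations

def satisfies_mds_condition(sets, k):
    """Check the MDS condition (star): for every nonempty I,
    |I| + |intersection of S_i over i in I| <= k.
    `sets` lists the zero positions S_1,...,S_k as sets of column indices."""
    m = len(sets)
    for r in range(1, m + 1):
        for I in combinations(range(m), r):
            common = set.intersection(*(set(sets[i]) for i in I))
            if len(I) + len(common) > k:
                return False
    return True

# k = 3, n = 5: each row has at most 2 zeros and any two rows share
# at most 1 zero, so the first pattern satisfies the condition.
print(satisfies_mds_condition([{0, 1}, {1, 2}, {3, 4}], k=3))  # True
print(satisfies_mds_condition([{0, 1}, {0, 1}, {3, 4}], k=3))  # False
\end{verbatim}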
It is known that the MDS condition is also sufficient for the existence of MDS matrices with zero pattern given by the
set system, if the underlying field is large enough.
Concretely, let $S_1,\ldots,S_k \subset [n]$ be a set system which satisfies the MDS condition.
Let $\mathbb{F}$ be the underlying field, and assume that $|\mathbb{F}| > {n \choose k}$. Let $A$ be a randomly chosen $k \times n$
matrix over $\mathbb{F}$, where $A_{i,j}=0$ if $j \in S_i$, and otherwise $A_{i,j} \in \mathbb{F}$ is chosen uniformly and independently.
Such a matrix $A$ is an MDS matrix with positive probability. The reason is that the number
of maximal $k \times k$ minors of $A$ is ${n \choose k}$, and the MDS condition implies that the determinants of these minors are not identically zero. So, each minor has a probability of $|\mathbb{F}|^{-1}$ to be singular, and by the union bound, if $|\mathbb{F}| > {n \choose k}$, then with positive
probability all minors are nonsingular. This bound was improved to $|\mathbb{F}| > {n-1 \choose k-1}$ in \cite{dau2013balanced}.
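To make the probabilistic argument above tangible (a minimal sketch under the same assumptions, not the construction pursued in the rest of the paper; the helper names are mine), one can sample a matrix with the prescribed zero pattern over a prime field and check all maximal minors:
\begin{verbatim}
import random
from itertools import combinations

def det_mod_p(rows, p):
    """Determinant over GF(p) via Gaussian elimination (p prime)."""
    a = [[x % p for x in row] for row in rows]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det % p
        det = det * a[col][col] % p
        inv = pow(a[col][col], p - 2, p)  # modular inverse of the pivot
        for r in range(col + 1, n):
            f = a[r][col] * inv % p
            a[r] = [(a[r][c] - f * a[col][c]) % p for c in range(n)]
    return det

def random_with_pattern(sets, k, n, p):
    """k x n matrix over GF(p); entry (i,j) is 0 if j in sets[i], else uniform."""
    return [[0 if j in sets[i] else random.randrange(p) for j in range(n)]
            for i in range(k)]

def is_mds(A, k, p):
    """True if every k x k minor of A is nonsingular over GF(p)."""
    return all(det_mod_p([[row[j] for j in cols] for row in A], p)
               for cols in combinations(range(len(A[0])), k))

# The pattern from the previous sketch; p = 13 > C(5,3) = 10, so a random
# draw succeeds with positive probability and rejection sampling terminates.
sets, k, n, p = [{0, 1}, {1, 2}, {3, 4}], 3, 5, 13
A = random_with_pattern(sets, k, n, p)
while not is_mds(A, k, p):
    A = random_with_pattern(sets, k, n, p)
print(A)
\end{verbatim}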
Dau \etal \cite{dau2014existence} conjectured that the MDS condition is sufficient over small fields as well.
This is known as the GM-MDS conjecture. Concretely,
if a $k \times n$ zero pattern satisfies the MDS condition, then there exists an MDS matrix with this zero pattern
over any field of size $|\mathbb{F}| \ge n+k-1$. Clearly, if this is true then a different argument than the probabilistic
argument above would be needed.
\begin{conjecture}[GM-MDS conjecture \cite{dau2014existence}]
\label{conj:gmmds}
Let $S_1,\ldots,S_k \subset [n]$ be a set system which satisfies the MDS condition.
Then for any field $\mathbb{F}$ with $|\mathbb{F}| \ge n+k-1$, there exists a $k \times n$ MDS matrix $A$ over $\mathbb{F}$ with $A_{i,j}=0$ whenever $j \in S_i$.
\end{conjecture}
We prove \Cref{conj:gmmds} in this work.
\begin{theorem}
\label{thm:main}
\Cref{conj:gmmds} is correct.
\end{theorem}
First, we describe an algebraic framework introduced by Dau \etal \cite{dau2014existence}
towards proving \Cref{conj:gmmds}.
\subsection{The algebraic GM-MDS conjecture}
Dau \etal \cite{dau2014existence} formulated an algebraic conjecture that implies \Cref{conj:gmmds}:
if $S_1,\ldots,S_k$ is a set system that satisfies the MDS condition, then there exists a Generalized Reed-Solomon code
with zeros in locations prescribed by the set system. Otherwise put, the matrix $A$ can be factored as the product of a $k \times k$ invertible matrix
and a $k \times n$ Vandermonde matrix. Before explaining these ideas further, we first set up some notations.
Let $\mathbb{F}$ be a finite field, and let $x,a_1,\ldots,a_n$ be formal variables,
where we shorthand $\bold{a}=(a_1,\ldots,a_n)$. We use the standard notations $\mathbb{F}[\bold{a},x]$ for the ring of polynomials over $\mathbb{F}$ in the variables
$\bold{a},x$; $\mathbb{F}(\bold{a})$ for the field of rational functions over $\mathbb{F}[\bold{a}]$; and $\mathbb{F}(\bold{a})[x]$ for the ring of univariate polynomials in $x$ over $\mathbb{F}(\bold{a})$.
Given a set $S \subset [n]$ define a polynomial $p=p(S) \in \mathbb{F}[\bold{a},x]$ as follows:
$$
p(\bold{a},x):=\prod_{i \in S} (x-a_i).
$$
Given a set system $\mathcal{S}=\{S_1,\ldots,S_k\}$ define $P(\mathcal{S}):=\{p(S_1),\ldots,p(S_k)\}$.
Let $\mathcal{S}=\{S_1,\ldots,S_k\}$ be a set system which satisfies the MDS condition. It is possible to assume without loss of generality
that each $S_i$ is maximal, namely that $|S_i|=k-1$ for all $i \in [k]$.
For example, if we are allowed to increase $n$ then we can replace each $S_i$ with $S_i \cup T_i$
where $|T_i|=k-1-|S_i|$ and $T_1,\ldots,T_k,[n]$ are pairwise disjoint. An improved reduction is given in \cite{dau2014existence}
which does not require increasing $n$.
Either way, under this assumption the polynomials $P(\mathcal{S})$ form a set of $k$ polynomials of degree $k-1$,
which we denote by $p_1,\ldots,p_k$.
Define the $k \times n$ matrix $A$ as
$A_{i,j} = p_i(a_j)$. Note that entries of $A$ are polynomials in $\mathbb{F}[\bold{a}]$. The condition that all $k \times k$ minors
of $A$ are nonsingular is equivalent to the condition that the polynomials $P(\mathcal{S})$ are linearly independent
over $\mathbb{F}(\bold{a})$ (here, we view the polynomials as elements of $\mathbb{F}(\bold{a})[x]$ instead of as elements of $\mathbb{F}[\bold{a},x]$).
If this is the case, then one can use the Schwartz-Zippel
lemma and show that the formal variables $a_1,\ldots,a_n$ can be replaced with distinct field elements from $\mathbb{F}$, while
still maintaining the property that all $k \times k$ minors of $A$ are nonsingular.
The bound on the field size $|\mathbb{F}| \ge n+k-1$ arises from the degrees of the polynomials obtained in the process.
For details we refer to the original paper \cite{dau2014existence}.
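As a small sanity check of this framework (illustrative only; the example and variable names are mine), one can build the matrix $A_{i,j}=p_i(a_j)$ symbolically for $k=2$, $n=3$ and verify that every maximal minor is a nonzero polynomial in $\bold{a}$, i.e.\ that the polynomials $p(S_i)$ are linearly independent over $\mathbb{F}(\bold{a})$:
\begin{verbatim}
from itertools import combinations
import sympy as sp

# k = 2, n = 3, with S_1 = {1} and S_2 = {2} (0-indexed below); this
# satisfies the MDS condition since |S_i| <= 1 and |S_1 cap S_2| = 0.
a = sp.symbols('a1 a2 a3')
x = sp.Symbol('x')
k, n = 2, 3
S = [{0}, {1}]
p = [sp.Mul(*[x - a[j] for j in Si]) for Si in S]

# A_{i,j} = p_i(a_j): a k x n matrix of polynomials in the a's.
A = sp.Matrix(k, n, lambda i, j: p[i].subs(x, a[j]))

# Each k x k minor should be a nonzero polynomial, so that distinct field
# values can later be substituted for the a's (Schwartz-Zippel step).
for cols in combinations(range(n), k):
    print(cols, sp.factor(A.extract(list(range(k)), list(cols)).det()))
\end{verbatim}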
This motivated \cite{dau2014existence} to propose the following algebraic conjecture, which implies \Cref{conj:gmmds}.
\begin{conjecture}[Algebraic GM-MDS conjecture \cite{dau2014existence}]
\label{conj:gmmds-alg}
Let $S_1,\ldots,S_k \subset [n]$ be a set system which satisfies the MDS condition, and where $|S_i|=k-1$ for all $i$.
Then the set of polynomials $P(\mathcal{S})$ are linearly independent over $\mathbb{F}(\bold{a})$.
\end{conjecture}
We remark that given any polynomials $p_1,\ldots,p_k \in \mathbb{F}[\bold{a},x]$ (for example, the polynomials appearing in $P(\mathcal{S})$),
an equivalent condition to the polynomials
being linearly independent over $\mathbb{F}(\bold{a})$ is the following: for any polynomials $w_1,\ldots,w_k \in \mathbb{F}[\bold{a}]$, not all zero,
it holds that
$$
\sum_{i=1}^k w_i(\bold{a}) p_i(\bold{a},x) \ne 0.
$$
Following \cite{dau2014existence}, several works~\cite{halbawi2014distributed,heidarzadeh2017algebraic,yildiz2018further}
attempted to resolve the GM-MDS conjecture. They showed that \Cref{conj:gmmds-alg} holds in several special cases, but the
general case remained open. In this work we prove \Cref{conj:gmmds-alg}, which implies \Cref{conj:gmmds}.
\subsection{A generalized conjecture}
We start by considering a more general condition. Let $v \in \mathbb{N}^n$ be a vector, where $\mathbb{N}=\{0,1,2,\ldots\}$ stands
for non-negative integers. The coordinates of $v$ are $v=(v(1),\ldots,v(n))$. We shorthand $|v|=\sum v(i)$.
Given vectors $v_1,\ldots,v_m \in \mathbb{N}^n$ define $\bigwedge v_i \in \mathbb{N}^n$ to be their coordinate-wise minimum:
$$
\bigwedge_{i \in [m]} v_i := (\min(v_1(1),\ldots,v_m(1)),\ldots,\min(v_1(n),\ldots,v_m(n))).
$$
Note that if $v_1,\ldots,v_m \in \{0,1\}^n$ are indicator vectors of sets $S_1,\ldots,S_m \subset [n]$,
then $\bigwedge v_i$ is the indicator vector of $\cap S_i$.
Given a parameter $k > |v|$ define a set of polynomials in $\mathbb{F}[\bold{a},x]$:
$$
P(k,v) := \left\{\prod_{j \in [n]} (x-a_j)^{v(j)} x^e: e=0,\ldots,k-1-|v|\right\}.
$$
Note that $P(k,v)$ consists of $k-|v|$ polynomials of degree $\le k-1$, which form a basis for the linear space of polynomials
of degree $\le k-1$ which have $v(j)$ roots at each $a_j$. Furthermore, note that
if $v$ is the indicator vector of a set $S \subset [n]$
of size $|S|=k-1$, then $P(k,v) = \{p(S)\}$. Given a set of vectors $\mathcal{V}=\{v_1,\ldots,v_m\} \subset \mathbb{N}^n$ define
$$
P(k,\mathcal{V}) := P(k,v_1) \cup \ldots \cup P(k,v_m).
$$
We use in this paper the convention that set union can result in a multiset. So for example, if the same polynomial appears in multiple $P(k,v_i)$
then it appears multiple times in $P(k,\mathcal{V})$. Under this assumption we always have the identity:
$$
|P(k,\mathcal{V})| = |P(k,v_1)| + \ldots + |P(k,v_m)|.
$$
The following definition is the natural extension of the MDS condition to vectors.
\begin{definition}[Property $V(k)$]
Let $\mathcal{V}=\{v_1,\ldots,v_m\} \subset \mathbb{N}^n$ and $k \ge 1$ be an integer. We say that $\mathcal{V}$ satisfies $V(k)$ if it satisfies:
\begin{enumerate}
\item[(i)] $|v_i| \le k-1$ for all $i \in [m]$.
\item[(ii)] For all $I \subseteq [m]$ nonempty,
$\sum_{i \in I} (k-|v_i|) + \left|\bigwedge_{i \in I} v_i\right| \le k$.
\end{enumerate}
\end{definition}
Note that when $m=k$ and $v_1,\ldots,v_k$ are indicators of sets $S_1,\ldots,S_k \subset [n]$ of size $|S_i|=k-1$, then property $V(k)$ is equivalent
to the MDS condition for $S_1,\ldots,S_k$.
Observe that in general, if $\mathcal{V}$ satisfies $V(k)$ then $P(k,\mathcal{V})$ contains $\sum_{i=1}^m (k-|v_i|) \le k$ polynomials
of degree $\le k-1$. The following conjecture is the natural extension of \Cref{conj:gmmds-alg} to vectors.
\begin{conjecture}
\label{conj:vecmds}
Let $\mathcal{V} \subset \mathbb{N}^n$ and $k \ge 1$. Assume that $\mathcal{V}$ satisfies $V(k)$.
Then the polynomials in $P(k,\mathcal{V})$ are linearly independent over $\mathbb{F}(\bold{a})$.
\end{conjecture}
A clarifying remark: as we view the set $P(k,\mathcal{V})$ as a multiset, \Cref{conj:vecmds} (and \Cref{thm:vecstarmds} below) imply in particular that the polynomials in $P(k,\mathcal{V})$ are all distinct, so $P(k,\mathcal{V})$ is in fact a set.
\subsection{An intermediate case}
We prove \Cref{conj:vecmds} under an additional assumption, which is sufficient to prove \Cref{conj:gmmds}. It is still open
to prove \Cref{conj:vecmds} in full generality.
\begin{definition}[Property $V^*(k)$]
Let $\mathcal{V}=\{v_1,\ldots,v_m\} \subset \mathbb{N}^n$ and $k \ge 1$ be an integer. We say that $\mathcal{V}$ satisfies $V^*(k)$ if
it satisfies $V(k)$, and additionally it satisfies:
\begin{enumerate}[(i)]
\item[(iii)] $v_i \in \{0,1\}^{n-1} \times \mathbb{N}$ for all $i \in [m]$. Namely, all coordinates in $v_i$, except perhaps the last,
are in $\{0,1\}$.
\end{enumerate}
\end{definition}
\begin{theorem}
\label{thm:vecstarmds}
Let $\mathcal{V} \subset \mathbb{N}^n$ and $k \ge 1$. Assume that $\mathcal{V}$ satisfies $V^*(k)$. Then the polynomials $P(k,\mathcal{V})$ are linearly independent over $\mathbb{F}(\bold{a})$.
\end{theorem}
\Cref{conj:gmmds-alg} follows directly from \Cref{thm:vecstarmds}. If $S_1,\ldots,S_k \subset [n]$
are sets which satisfy the assumptions of \Cref{conj:gmmds-alg}, then their indicator
vectors $v_1,\ldots,v_k \in \{0,1\}^n$ satisfy the assumptions of \Cref{thm:vecstarmds},
and hence $P(\{S_1,\ldots,S_k\})=P(k,\{v_1,\ldots,v_k\})$ are linearly independent over $\mathbb{F}(\bold{a})$.
\subsection{General distance}
The rows of a $k \times n$ MDS matrix generate a linear code in $\mathbb{F}^n$ whose minimal distance is $d=n-k+1$. Namely, any nonzero vector
in the subspace spanned by the rows has at most $n-d=k-1$ zeros. One can ask a more general question: given parameters $k \le n$ and $d \le n-k+1$,
what are the necessary and sufficient conditions on the zero pattern of a code with minimal distance $d$?
As it turns out, this more general question reduces to the one about MDS codes.
\begin{corollary}
Let $k \le n$ and $d \le n-k+1$. Let $S_1,\ldots,S_k \subseteq [n]$. A necessary condition for the existence of a $k \times n$ matrix $A$ over any field,
such that the code spanned by the rows of the matrix has minimal distance at least $d$, and such that $A_{i,j}=0$ whenever $j \in S_i$, is
$$
|I| + \left| \bigcap_{i \in I} S_i \right| \le n-d+1 \qquad \forall I \subseteq [k], I \text{ nonempty}.
$$
It is also a sufficient condition over any field $\mathbb{F}$ of size $|\mathbb{F}| \ge 2n-d$.
\end{corollary}
\begin{proof}
We first show that the conditions are necessary. Assume the condition is violated for some $I$. Then there are $|I|$ rows with at least $n-d+2-|I|$ common zeros. Pick any $|I|-1$ other coordinates; since $|I|-1$ homogeneous linear constraints on $|I|$ coefficients always admit a nontrivial solution, there is
some nontrivial linear combination of the rows in $I$ which is zero in these coordinates. So this linear combination has $(n-d+2-|I|)+(|I|-1) = n-d+1$ many zeros, a contradiction
to the minimal distance being at least $d$.
To show that the conditions are sufficient, consider the set system $S_1,\ldots,S_k, S_{k+1}=\ldots=S_{n-d+1} = \emptyset$. It satisfies that
$$
|I| + \left| \bigcap_{i \in I} S_i \right| \le n-d+1 \qquad \forall I \subseteq [n-d+1], I \text{ nonempty}.
$$
The claim follows by applying \Cref{thm:main} to this set system.
\end{proof}
\subsection{Related work}
As we already discussed, the GM-MDS conjecture was suggested by
\cite{dau2014existence}, and partial results were obtained by~\cite{halbawi2014distributed,heidarzadeh2017algebraic,yildiz2018further}.
Shortly after posting this result on arXiv~\cite{lovett2018proof}, we were informed by Yildiz and Hassibi~\cite{yildiz2018optimum}
that they too have found a proof of the GM-MDS conjecture. Inspecting their proof,
it is similar in spirit to our proof, in the sense that both proofs generalize the original GM-MDS conjecture, in order to facilitate an
inductive argument. More specifically, our approach is to allow multiple roots at a distinguished point,
while their approach is to allow general multiplicities of sets.
\subsection{Open problems}
We already discussed \Cref{conj:vecmds}. A more general open problem is the following. Let $S_1,\ldots,S_k \subset [n]$ be a set system.
Let $A$ be a $k \times n$ matrix over some field, such that $A_{i,j}=0$ whenever $j \in S_i$. If we make no assumptions on the set system,
then some $k \times k$ minors of $A$ are forced to be singular (this happens when the set system, restricted to the minor, violates the MDS condition).
The question is: what is the minimal field size for which there exists a matrix where all minors which are not forced to be singular are nonsingular?
This question arises naturally in the study of Maximally Recoverable (MR) codes, where the minors which are forced to be
singular are determined by the underlying topology of the code.
The GM-MDS conjecture which we prove is the special case where no minor is forced to be singular.
In this case, very small fields (of size $n+k-1$) are sufficient.
However, in general there is no reason for nice algebraic constructions to exist.
Two recent works~\cite{kane2017independence,gopi2017maximally} have shown that in specific situations, exponential field size is needed. However,
the proof techniques are highly specialized to these specific cases.
This raises the following natural conjecture: most set systems require exponential field size.
\begin{conjecture}
Let $S_1,\ldots,S_k \subset [n]$ be chosen randomly, by including each $j \in S_i$ independently with probability $1/2$.
Assume that there exists a $k \times n$ matrix $A$ over a field $\mathbb{F}$ that satisfies:
\begin{enumerate}[(i)]
\item $A_{i,j}=0$ whenever $j \in S_i$.
\item Any $k \times k$ minor of $A$, which is not forced to be singular by (i), is nonsingular.
\end{enumerate}
Then with high probability over the choice of the set system, $|\mathbb{F}| \ge c {n \choose k}^c$, where $c>0$ is some absolute constant.
\end{conjecture}
The conjecture basically says that for most set systems, the probabilistic construction which requires field size ${n \choose k}$ cannot be
significantly improved.
\paragraph*{Acknowledgement.} I thank Hoang Dau and Sankeerth Rao for a careful reading of an earlier version of this paper.
\section{Proof of \Cref{thm:vecstarmds}}
Let $n,k \ge 1$. Let $\mathcal{V}=\{v_1,\ldots,v_m\} \subset \mathbb{N}^n$ be a set of vectors which satisfies $V^*(k)$. We will prove that the polynomials $P(k,\mathcal{V})$ are linearly
independent over $\mathbb{F}(\bold{a})$.
To that end, we assume that $\mathcal{V}$ is a minimal counter-example and derive a contradiction.
Concretely, the underlying parameters are $n,k,m$ and $d=|P(k,\mathcal{V})| = \sum k-|v_i|$. We will assume that if $\mathcal{V}'$ is a set of vectors
with corresponding parameters $n' \le n,k' \le k,m' \le m,d' \le d$ with at least one of the inequalities being sharp, then \Cref{thm:vecstarmds}
holds for $\mathcal{V}'$. In particular, we assume that $m \ge 2$, as \Cref{thm:vecstarmds} clearly holds when $m=1$.
To help the reader, we note that the following lemmas construct such $\mathcal{V}'$ with the following parameters:
\begin{itemize}
\item \Cref{lemma:minb_zero}: $n,k-1,m,d$.
\item \Cref{lemma:tight}: $n,k,e,d'$ and $n,k,m-e+1,d''$ with $2 \le e \le m-1$ and $d',d''<d$.
\item \Cref{lemma:onevec}: $n-1,k,m,d$.
\item \Cref{lemma:n_equals_k}: $n,k,m,d-1$.
\end{itemize}
We use the following notation to simplify the presentation:
$$
v_I := \bigwedge_{i \in I} v_i \qquad I \subseteq [m].
$$
We introduce sometimes in the proofs an auxiliary set $\mathcal{V}'=\{v'_1,\ldots,v'_{m'}\}$, in which case $v'_I$ for $I \subseteq [m']$ are defined analogously. Below, when we say that $\mathcal{V}$ or $\mathcal{V}'$ satisfy (i), (ii) or (iii), we mean the relevant items in the definition of $V^*(k)$.
Given two vectors $u,v \in \mathbb{N}^n$ we denote $u \le v$ if $u(i) \le v(i)$ for all $i \in [n]$.
\begin{lemma}
\label{lemma:dominate}
There do not exist distinct $i,j \in [m]$ such that $v_i \le v_j$.
\end{lemma}
\begin{proof}
Assume the contrary. Applying (i) to $j$ gives $|v_j| \le k-1$. Applying (ii) to $I=\{i,j\}$ gives
$$
(k-|v_i|) + (k-|v_j|) + |v_i \wedge v_j| \le k.
$$
As $v_i \le v_j$ we have $v_i \wedge v_j = v_i$, and hence obtain that $k-|v_j| \le 0$, a contradiction.
\end{proof}
\Cref{lemma:dominate} implies in particular that $n \ge 2$. This is since if $n=1$ then necessarily $m=1$,
as otherwise there would be $i,j$ for which $v_i \le v_j$. So we assume $n \ge 2$ from now on.
\begin{lemma}
\label{lemma:minb_zero}
$\bigwedge_{i \in [m]} v_i = 0$.
\end{lemma}
\begin{proof}
Assume not. Then there exists a coordinate $j \in [n]$ with $v_i(j) \ge 1$ for all $i \in [m]$.
Define a new set of vectors $\mathcal{V}'=\{v'_1,\ldots,v'_m\} \subset \mathbb{N}^n$ as follows:
$$
v'_i := (v_i(1),\ldots,v_i(j-1),v_i(j)-1,v_i(j+1),\ldots,v_i(n)).
$$
In words, $v'_i$ is defined from $v_i$ by decreasing coordinate $j$ by $1$.
We first show that $\mathcal{V}'$ satisfies $V^*(k-1)$. Note that $|v'_i|=|v_i|-1$.
It clearly satisfies (i),(iii). To show that it satisfies (ii) let $I \subseteq [m]$.
We have
$$
\sum_{i \in I} (k-1-|v'_i|) + |v'_I| = \sum_{i \in I} (k-|v_i|) + |v_I| - 1 \le k-1.
$$
As we showed that $\mathcal{V}'$ satisfies $V^*(k-1)$, the minimality of $\mathcal{V}$ implies that the polynomials $P(k-1,\mathcal{V}')$ are linearly
independent over $\mathbb{F}(\bold{a})$. The lemma follows as it is simple to verify that
$$
P(k,\mathcal{V}) = \{p(\bold{a},x) (x-a_j): p \in P(k-1,\mathcal{V}')\}.
$$
In particular, the linear independence of $P(k-1,\mathcal{V}')$ implies the linear independence of $P(k,\mathcal{V})$.
\end{proof}
\begin{definition}[Tight constraint]
A set $I \subseteq [m]$ is \emph{tight for $\mathcal{V}$} if property (ii) holds with equality for $I$. Namely if
$$
\sum_{i \in I} (k-|v_i|) + |v_I| = k.
$$
\end{definition}
Note that if $|I|=1$ then $I$ is always a tight constraint.
The following lemma is an extension of Lemma 2(i) in \cite{yildiz2018further}. It shows that in a minimal counter-example
there are no tight sets, except for singletons and perhaps the whole set.
\begin{lemma}
\label{lemma:tight}
If $I \subseteq [m]$ is a tight constraint, then $|I|=1$ or $|I|=m$.
\end{lemma}
\begin{proof}
Assume towards a contradiction that there exist a tight $I$ with $1<|I|<m$.
We will use the minimality of $\mathcal{V}$ to derive a contradiction. Assume for simplicity
of notation that $I=\{e,\ldots,m\}$ for $2 \le e \le m-1$. Define a new set of vectors
$\mathcal{V}'=\{v'_1,\ldots,v'_e\}$ given by
$$
v'_1 := v_1, \ldots, v'_{e-1} := v_{e-1}, v'_e := v_I.
$$
We first show that $\mathcal{V}'$ satisfies $V^*(k)$. It clearly satisfies (i) and (iii). To see that it satisfies (ii) let $I' \subseteq [e]$.
If $e \notin I'$ then $\mathcal{V}'$ satisfies (ii) for $I'$ as it is same condition as for $\mathcal{V}$, so assume $e \in I'$.
Let $I'' = I' \cup \{e+1,\ldots,m\}$. Then
$$
\sum_{i \in I'} (k-|v'_i|) + |v'_{I'}| = \sum_{i \in I''} (k-|v_i|) + |v_{I''}| \le k,
$$
where the equality holds because $k-|v'_e| = \sum_{i \in I} (k-|v_i|)$ (since we assume $I$ is tight), and because by definition of $I''$ we have $v'_{I'}=v_{I''}$.
As we assume that $\mathcal{V}$ is a minimal counter-example for \Cref{thm:vecstarmds}, the theorem holds for $\mathcal{V}'$. So,
the polynomials $P(k,\mathcal{V}')$ are linearly independent. Observe that $|P(k,\mathcal{V}')|=|P(k,\mathcal{V})|$ since
$$
|P(k,\mathcal{V}')| = \sum_{i \in [e]} (k-|v'_i|) = \sum_{i \in [m]} (k-|v_i|) = |P(k,\mathcal{V})|.
$$
Thus, it will suffice to prove that $P(k,\mathcal{V})$ and $P(k,\mathcal{V}')$ span the same space of polynomials over $\mathbb{F}(\bold{a})$. To that end,
it suffices to prove that $F:=P(k,\{v_e,\ldots,v_m\})$ and $F':=P(k,v'_e)$ span the same space of polynomials.
Let us shorthand $v=v'_e$.
Define the polynomial $p(\bold{a},x):=\prod_{j \in [n]} (x-a_j)^{v(j)}$. Observe that $p$ divides all polynomials in $F,F'$.
Moreover, $F'=\{p(\bold{a},x) x^d: d=0,\ldots,k-1-|v|\}$ spans the linear space of all multiples of $p$ of degree $\le k-1$.
As $|F|=|F'|$, it suffices to prove that the polynomials in $F$ are linearly independent over $\mathbb{F}(\bold{a})$, as then they must span the same linear space.
However, this follows from the minimality of $\mathcal{V}$, since $F=P(k,\mathcal{V}'')$ for $\mathcal{V}''=\{v_e,\ldots,v_m\}$.
\end{proof}
The following lemma identifies a concrete vector that must exist in a minimal counter-example. It is in its proof
that we actually use the assumption that $\mathcal{V}$ satisfies (iii), namely $V^*(k)$ and not merely $V(k)$.
\begin{lemma}
\label{lemma:onevec}
There exists $i \in [m]$ such that $v_i=(1,\ldots,1,0)$.
\end{lemma}
\begin{proof}
\Cref{lemma:minb_zero} guarantees that there exists $i^* \in [m]$ for which $v_{i^*}(n)=0$.
We will prove that $v_{i^*}=(1,\ldots,1,0)$. If not, then by (iii) there exists $j^* \in [n-1]$ such that $v_{i^*}(j^*)=0$.
For simplicity of notation assume that $i^*=m,j^*=n-1$.
Define a new set of vectors $\mathcal{V}'=\{v'_1,\ldots,v'_m\} \subset \mathbb{N}^{n-1}$
as follows:
$$
v'_i := \left(v_i(1),\ldots,v_i(n-2), v_i(n-1)+v_i(n)\right).
$$
In words, $v'_i \in \mathbb{N}^{n-1}$ is obtained by adding the last two coordinates of $v_i \in \mathbb{N}^n$.
We first show that $\mathcal{V}'$ satisfies $V^*(k)$. Note that $|v'_i|=|v_i|$.
It clearly satisfies (i),(iii). To show that it satisfies (ii) let $I \subseteq [m]$.
Note that (ii) always holds if $|I|=1$, so we may assume $|I|>1$. We have by definition
\begin{equation}
\label{eq:onevec}
\sum_{i \in I} (k-|v'_i|) + |v'_I| - v'_I(n-1) = \sum_{i \in I} (k-|v_i|) + |v_I|
- v_I(n-1) - v_I(n) .
\end{equation}
Consider first the case where $|I|<m$. \Cref{lemma:tight} gives that $I$ is not tight, and hence
$$
\sum_{i \in I} (k-|v_i|) + |v_I| \le k-1.
$$
As $\mathcal{V}$ satisfies (iii) we have $v_i(n-1) \in \{0,1\}$ for all $i$. This implies $v_I(n-1) \in \{0,1\}$
and $v'_I(n-1) \in \{v_I(n), v_I(n)+1\}$. So \Cref{eq:onevec} gives
$$
\sum_{i \in I} (k-|v'_i|) + |v'_I| \le \sum_{i \in I} (k-|v_i|) + |v_I|+1 \le k.
$$
Next, consider the case of $|I|=m$. As $v_m(n-1)=v_m(n)=0$ we have $v'_m(n-1)=0$ and hence
$v_I(n-1)=v_I(n)=v'_I(n-1)=0$.
\Cref{eq:onevec} then gives
$$
\sum_{i \in I} (k-|v'_i|) + |v'_I| = \sum_{i \in I} (k-|v_i|) + |v_I| \le k.
$$
As we showed that $\mathcal{V}'$ satisfies $V^*(k)$, the minimality of $\mathcal{V}$ implies that the polynomials $P(k,\mathcal{V}')$ are linearly
independent over $\mathbb{F}(\bold{a})$. We next show that this implies that $P(k,\mathcal{V})$ are also linearly independent over $\mathbb{F}(\bold{a})$.
Let $s_i := k-|v_i|$ for $i \in [m]$.
We have $P(k,\mathcal{V})=\{p_{i,e}: i \in [m], e \in [s_i]\}$ and $P(k,\mathcal{V}')=\{p'_{i,e}: i \in [m], e \in [s_i]\}$ where
\begin{align*}
&p_{i,e}(\bold{a},x) := x^{e-1} \prod_{j \in [n-2]} (x-a_j)^{v_i(j)} \; \cdot \; (x-a_{n-1})^{v_i(n-1)} (x-a_n)^{v_i(n)}\;, \\
&p'_{i,e}(\bold{a},x) := x^{e-1} \prod_{j \in [n-2]} (x-a_j)^{v_i(j)} \; \cdot \; (x-a_{n-1})^{v_i(n-1)+v_i(n)}\;.
\end{align*}
Observe that $p'_{i,e}$ can be obtained from $p_{i,e}$ by substituting $a_{n-1}$ for $a_n$. Namely
$$
p'_{i,e}(a_1,\ldots,a_{n-1},x) = p_{i,e}(a_1,\ldots,a_{n-1},a_{n-1},x).
$$
Assume towards a contradiction that $\{p_{i,e}\}$ are linearly dependent over $\mathbb{F}(\bold{a})$. Equivalently, there exist polynomials $w_{i,e}(\bold{a})$, not all zero,
such that
$$
\sum_{i \in [m]} \sum_{e \in [s_i]} w_{i,e}(\bold{a}) p_{i,e}(\bold{a},x) = 0.
$$
We may assume that the polynomials $\{w_{i,e}\}$ do not all have a common factor, as otherwise we can divide them by it. Let $w'_{i,e}(\bold{a})$
be obtained from $w_{i,e}(\bold{a})$ by substituting $a_{n-1}$ for $a_n$. That is, $w'_{i,e}(a_1,\ldots,a_{n-1})=w_{i,e}(a_1,\ldots,a_{n-1},a_{n-1})$.
Then we obtain
$$
\sum_{i \in [m]} \sum_{e \in [s_i]} w'_{i,e}(\bold{a}) p'_{i,e}(\bold{a},x) = 0.
$$
As the polynomials $\{p'_{i,e}\}$ are linearly independent over $\mathbb{F}(\bold{a})$, this implies that $w'_{i,e} \equiv 0$ for all $i,e$.
That is, the polynomials $w_{i,e}$ satisfy
$$
w_{i,e}(a_1,\ldots,a_{n-1},a_{n-1}) \equiv 0.
$$
This implies that $(a_{n-1}-a_n)$ divides $w_{i,e}$ for all $i,e$, which is a contradiction to the assumption that $\{w_{i,e}\}$
do not all have a common factor.
\end{proof}
\Cref{lemma:onevec} implies that the vector $(1,\ldots,1,0)$ belongs to $\mathcal{V}$. Without loss of generality,
we may assume that it is $v_m$. This implies that $v_i(n) \ge 1$ for all $i \in [m-1]$, as otherwise
we would have $v_i \le v_m$, violating \Cref{lemma:dominate}.
\begin{lemma}
\label{lemma:n_equals_k}
$n=k$.
\end{lemma}
\begin{proof}
Let $v_m=(1,\ldots,1,0)$. We know by (i) that $n-1 = |v_m| \le k-1$, so $n \le k$. Assume towards a contradiction that $n<k$.
Define a new set of vectors $\mathcal{V}'=\{v'_1,\ldots,v'_m\} \subset \mathbb{N}^n$ as follows:
$$
v'_1 := v_1, \ldots, v'_{m-1} := v_{m-1}, v'_m := (1,\ldots,1,1).
$$
In words, we increase the last coordinate of $v_m$ by $1$.
We claim that $\mathcal{V}'$ satisfies $V^*(k)$. It satisfies (i) by our assumption that $|v'_m|=n \le k-1$, and it satisfies (iii)
clearly. To show that it satisfies (ii), let $I \subseteq [m]$. If $m \notin I$ then
it clearly satisfies (ii) for $I$, as it is the same constraint as for $\mathcal{V}$, so assume $m \in I$. In this case we have
$$
\sum_{i \in I} (k-|v'_i|) + |v'_I| = \left( \sum_{i \in I} (k-|v_i|) - 1 \right) + \left( |v_I| + 1 \right) \le k.
$$
Note that $|P(k,\mathcal{V}')|=|P(k,\mathcal{V})|-1$. As $\mathcal{V}$ is a minimal counter-example and $\mathcal{V}'$ satisfies $V^*(k)$, the polynomials $P(k,\mathcal{V}')$ are linearly independent over $\mathbb{F}(\bold{a})$. Let $p(\bold{a},x):=\prod_{j \in [n-1]}(x-a_j)$.
The construction of $\mathcal{V}'$ satisfies that $P(k,\mathcal{V})$ and $P(k,\mathcal{V}') \cup \{p\}$ span the same linear space of polynomials over $\mathbb{F}(\bold{a})$. This is since
$v'_i=v_i$ for $i=1,\ldots,m-1$ and since
$$
P(k,\{v_m\}) = \{p x^e: e=0,\ldots,k-n\}
$$
and
$$
P(k,\{v'_m\}) \cup \{p\} = \{p (x-a_n) x^e: e=0,\ldots,k-n-1\} \cup \{p\}
$$
both span the linear space of polynomials which are multiples of $p$ and of degree $\le k-1$.
Denote for simplicity of presentation the polynomials of $P(k,\mathcal{V}')$ by $p_1,\ldots,p_{d-1}$, where $d=|P(k,\mathcal{V})|$.
Assume that the polynomials $P(k,\mathcal{V})$ are linearly dependent. As $P(k,\mathcal{V}')$ are linearly independent, it implies that there exist
polynomials $w,w_1,\ldots,w_{d-1} \in \mathbb{F}[\bold{a}]$, where $w \ne 0$, such that
$$
w(\bold{a})p(\bold{a},x) + \sum_{i=1}^{d-1} w_i(\bold{a}) p_i(\bold{a},x) \equiv 0.
$$
Note that by construction, $v'_i(n) \ge 1$ for all $i \in [m]$. This implies that $p_1,\ldots,p_{d-1}$ are all divisible
by $(x-a_n)$, while $p$ is not. Substituting $x=a_n$ then gives $w \equiv 0$, a contradiction.
\end{proof}
We can now reach a contradiction to $\mathcal{V}$ being a counter-example. We know that $v_m=(1,\ldots,1,0)$
with $|v_m|=n-1=k-1$.
Let $\mathcal{V}'=\{v_1,\ldots,v_{m-1}\}$. As it satisfies $V^*(k)$ we have that the polynomials
$P(k,\mathcal{V}')$ are linearly independent. Moreover, as $|v_m|=k-1$ we have $P(k,v_m) = \{p\}$
where $p(\bold{a},x)=\prod_{j \in [n-1]} (x-a_j)$.
Note that all polynomials in $P(k,\mathcal{V}')$ are divisible by $(x-a_n)$, while $p$ is not.
So by the same argument as in the proof of \Cref{lemma:n_equals_k}, $P(k,v_m)$ cannot be linearly dependent on $P(k,\mathcal{V}')$.
So $P(k,\mathcal{V})$ are linearly independent.
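As a sanity check of the statement (independent of the proof above), the linear independence of a family $P(k,\mathcal{V})$ can be verified symbolically on small instances: full rank of the coefficient matrix over $\mathbb{Q}(\bold{a})$ is exactly linear independence over $\mathbb{F}(\bold{a})$ in characteristic zero. The sketch below uses a small hypothetical instance chosen purely for illustration.
\begin{verbatim}
import sympy as sp

# Hypothetical small instance (for illustration only): n = 3, k = 3,
# V = {(1,1,0), (1,0,1), (0,1,1)}; each vector contributes k - |v| = 1 polynomial.
x = sp.symbols('x')
a = sp.symbols('a1 a2 a3')
V = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
k = 3

polys = []
for v in V:
    base = sp.Mul(*[(x - a[j]) ** v[j] for j in range(len(a))])
    for e in range(k - sum(v)):          # exponents e = 0, ..., k - |v| - 1
        polys.append(sp.expand(base * x ** e))

# Coefficient matrix in the monomial basis 1, x, ..., x^{k-1}
M = sp.Matrix([[p.coeff(x, d) for d in range(k)] for p in polys])
print(M.rank())                          # prints 3: this family is linearly independent
\end{verbatim}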
\bibliographystyle{alpha}
\section{Introduction}
Many physical situations like string-payloads \cite{he2014adaptive} or drilling systems \cite{bresch2014output} are modeled by infinite dimensional systems. They are, in their fundamentals, related to a Partial Differential Equation (PDE) and consequently, their stability analysis and control are not straightforward and have been under active research during the last decade.
A drilling mechanism is within this class of systems. It is used in industry to extract oil from deep underground. This physical system is subject to torsion and radial deformation due to the torque applied on one boundary of the pipe. This system is usually modeled by a coupled Ordinary Differential Equation (ODE) / string equation. These heterogeneous equations appear naturally when the torsional motion of the bit is coupled with the axial deformation of the pipe \cite{challamel2000rock}. Moreover, as there is friction all along the pipe, it leads to a complex system made up of two non-linear equations. The commonly used methodology to control this system is backstepping.
The aim is to use a control input to transform the problem into a target system with the desired properties. Then, using a Lyapunov approach for example, the stability can be proven. This has been widely used in \cite{bresch2014output,krstic2009delay,krstic2008boundary,Wu20142787}. This approach has many advantages: it provides a Lyapunov functional useful, for example, for a robustness analysis, and it also provides a very accurate control since the closed loop mostly depends on the target system. But the calculations are tedious and lead to an infinite-dimensional control law which may be subject to implementation issues.
Coming from the stability analysis of time-delay systems, a new method based on Linear Matrix Inequalities (LMIs) seems to be promising. As time-delay systems are a particular case of infinite dimensional systems \cite{fridman2014}, it is possible to extend the methodology described in \cite{seuret:hal-01065142} to other systems. It relies on a Lyapunov functional and a state extension using projections of the infinite dimensional state on a basis of orthogonal polynomials. The key result is based on an extensive use of Bessel's inequality. It has been successfully applied to transport equations in \cite{SAFI20171}, to the heat equation \cite{baudouinHeat} and to the wave equation \cite{besselString}.
In this paper, we focus on the exponential stability analysis of a linearized drilling mechanism as described in \cite{marquez2015analysis} with the previous methodology. First, we explain the problem and discuss the existence of a solution. Then, an exponential stability result is provided. The theorem ensures the exponential stability with a guaranteed decay-rate. Some necessary conditions are drawn from the LMI condition and then, an example using physical values is provided. A control law is also derived to show the effectiveness of the method.
\textbf{Notations:} In this paper, $\mathbb{R}^+ = [0, +\infty)$ and $(x,t) \mapsto u(x,t)$ is a multi-variable function from $[0,1] \times \mathbb{R}^+$ to $\mathbb{R}$.
The notation $u_t$ stands for $\frac{\partial u}{\partial t}$. We also use the notations $L^2 = L^2((0, 1); \mathbb{R})$ and for the Sobolev spaces: $H^n = \{ z \in L^2; \forall m \leqslant n, \frac{\partial^m z}{\partial x^m} \in L^2 \}$.
The norm in $L^2$ is $\|z\|^2 = \int_0^1 |z(x)|^2 dx = \left<z,z\right>$.
For any square matrices $A$ and $B$, the operations '$\text{He}$' and '$\text{diag}$' are defined as follows: $\text{He}(A) = A + A^{\top}$ and $\text{diag}(A,B) = \left[ \begin{smallmatrix}A & 0\\ 0 & B \end{smallmatrix} \right]$.
A positive definite matrix $P \in \mathbb{R}^{n \times n}$ belongs to the set $\mathbb{S}^n_+$ and we write $P \succ 0$.
\section{Problem Statement}
\subsection{Modeling of the drilling process}
A drilling mechanism was first modeled in \cite{fridman2010bounds} using the work of \cite{challamel2000rock}. This system described in Figure~\ref{fig:drilling} is the result of a coupling between a radial deformation and an axial movement. This coupling was later modeled in \cite{marquez2015analysis,saldivar2016control} by the following nonlinear model for $x \in (0, 1)$ and $t > 0$:
\begin{equation} \label{eq:problem}
\hspace{-0.05cm}
\left\{
\begin{array}{ll}
z_{tt}(x,t) = c^2 z_{xx}(x,t) - d z_t(x,t),\\
z_x(0,t) = g \left( z_t(0,t) - \tilde{u}_1(t) \right), \\
z_x(1,t) = -h z_{tt}(1,t) - k z_t (1,t) - q T_{nl}( z_t(1,t) ), \\
\dot{Y}(t) = A Y(t) + B \tilde{u}_2(t) + E_1 z_t(1,t) + E_2 T_{nl} (z_t(1,t)), \\
\end{array}
\right.
\hspace{-0.9cm}
\end{equation}
with initial condition $z(\cdot,0) = z^0$, $z_t(\cdot,0) = z_t^0$ on $(0, 1)$ and $Y(0) = Y^0$. In this model, $z$ is the twist angle and it propagates along the pipe following a damped wave equation of speed $c$ and internal damping $d$. Since the internal damping stabilizes the system, in this study, we consider the worst case scenario with $d = 0$ like in \cite{fridman2010bounds}. A similar work can be done with $d > 0$ but leads to more tedious calculations and is therefore omitted.
There are two boundary conditions at $x= 0$ and $x = 1$. At $x = 0$, a rotary table whose speed is controlled by the input $\tilde{u}_1$ makes it possible to twist the pipe. Furthermore, the boundary damping with a coefficient $g$ at $x = 0$ represents a viscous friction torque.\\
\begin{figure}
\centering
\includegraphics[width=8cm]{drilling.eps}
\caption{Schematic of a drilling mechanism originally taken from \cite{saldivar2016control}. Data corresponding to physical values are given in Table~\ref{tab:values}.}
\label{fig:drilling}
\end{figure}
\!\!\!The drill bit is located at $x = 1$. When drilling, an external torque is applied at this boundary and the momentum equation leads to a second order in time boundary condition. The term $T_{nl}$ is a non-linear function related to the change of torque and given below. To simplify the system as done in \cite{fridman2010bounds}, we consider the equation at the bottom of the pipe to be only a first order boundary damping, then $h = 0$. \\
The axial deformation is modeled by a finite dimensional equation as noted in \cite{challamel2000rock}. This equation is related to the axial deformation of the pipe. In \cite{saldivar2016control}, a second order damped harmonic oscillator is used because it models a mass subject to a force for small vibrations. The control at $x = 0$ for the axial position is $t \mapsto \tilde{u}_2(t)$ and corresponds to the force needed in the system to drill. Denoting by $y$ the axial bit position and by $\Gamma_0$ the rate of penetration, $Y(t) = \left[ y(t) - \Gamma_0 t \ \ \dot{y}(t) - \Gamma_0 \right]^{\top} \in \mathbb{R}^2$ represents the axial position error and axial velocity error, leading to the last equation in \eqref{eq:problem}.
\begin{remark} Note that this model does not take into account a coupling between torsion and axial deformation but rather a cascaded effect between them.\end{remark}
The parameters $c,g,k,q,A_{21}, A_{22}, b, e_1$ and $e_2$ are physical parameters given in \cite{saldivar2016control} and reported in Table~\ref{tab:values}. The matrices have the following structure:
\[
\begin{array}{cccc}
A = \left[ \begin{smallmatrix} 0 & 1 \\ A_{21} & A_{22} \end{smallmatrix} \right], & B = \left[ \begin{smallmatrix} 0 \\ b \end{smallmatrix} \right], &
E_1 = \left[ \begin{smallmatrix} 0 \\ e_1 \end{smallmatrix} \right], & E_2 = \left[ \begin{smallmatrix} 0 \\ e_2 \end{smallmatrix} \right].
\end{array}
\]
The aim is to design control laws $\tilde{u}_1$ and $\tilde{u}_2$ such that the angular speed $z_t(1,t)$ in system \eqref{eq:problem} converges to the desired angular velocity $\Omega_e$ and $Y$ to $0$. Without loss of generality, we assume $\Omega_e > 0$.
In \cite{challamel2000rock,saldivar2016control}, the nonlinear part of the torque is described by the following equations for $\theta \in \mathbb{R}$:
\begin{equation} \label{eq:Tnl}
\left\{
\begin{array}{l}
T_{nl}(\theta) = W _{ob} R_{b} \mu_b(\theta) \sign(\theta), \\
\mu_b(\theta) = \mu_{cb} + \left(\mu_{sb} - \mu_{cb}\right) e^{-\gamma_b |\theta|}.
\end{array}
\right.
\end{equation}
For $\Omega_e \gg 0$, $e^{- \gamma_b \Omega_e}$ is small and $T_{nl}$ is linearized around $\Omega_e$ as follows:
\begin{equation} \label{eq:T}
T_{nl}(z_t(1,t)) \simeq W _{ob} R_{b} \mu_{cb} = T^e.
\end{equation}
\begin{remark} This approximation rules out the stick-slip effect, which is the main problem that occurs when dealing with drilling pipes for small $\Omega_e$. This work can be seen as a preliminary version of an extended one considering the non-linearity. \end{remark}
That leads to an approximate linear system defined for $t \geqslant 0$ with the same initial conditions and $x \in (0, 1)$:
\begin{equation} \label{eq:problemLin}
\hspace{-0.05cm}
\left\{
\begin{array}{l}
w_{tt}(x,t) = c^2 w_{xx}(x,t), \\
w_x(0,t) = g \left( w_t(0,t) - \tilde{u}_1(t) \right), \\
w_x(1,t) = - k w_t (1,t) - q T^e, \\
\dot{Y}(t) = A Y(t) + B \tilde{u}_2(t) + w_t(1,t) E_1 - T^e E_2.
\end{array}
\right.
\hspace{-1cm}
\end{equation}
It is possible to use the Riemann coordinates to simplify the writing of this system using the following variable: $\tilde{\chi}(x,t) = \left[ \begin{smallmatrix} w_t(x,t) + c w_x(x,t) \\ w_t(1-x,t) - c w_x(1-x,t) \end{smallmatrix} \right]$.
The system becomes for $t \geqslant 0$:
\begin{equation} \label{eq:linProblem}
\left\{
\begin{array}{l}
\tilde{\chi}_{t}(x,t) = c \tilde{\chi}_{x}(x,t), \quad \quad x \in (0, 1), \\
\left[ \begin{smallmatrix} 1-cg & 0 \\ 0 & 1-ck \end{smallmatrix} \right] \tilde{\chi}(0,t) = \left[ \begin{smallmatrix} 0 & 1+cg \\ 1+ck & 0 \end{smallmatrix} \right] \tilde{\chi}(1,t) + \left[ \begin{smallmatrix} - 2 cg \tilde{u}_1(t) \\ 2 c q T^e \end{smallmatrix} \right], \\
\dot{Y}(t) = A Y(t) + B \tilde{u}_2(t) + \tilde{E}_1 \left[ \begin{smallmatrix} \tilde{\chi}(0,t) \\ \tilde{\chi}(1,t) \end{smallmatrix} \right] - T^e E_2,
\end{array}
\right.
\end{equation}
with $\tilde{E}_1 = \frac{1}{2} E_1 \left[ \begin{smallmatrix} 0 & 1 & 1 & 0 \end{smallmatrix}\right]$. The stability of system~\eqref{eq:linProblem} implies the stability of \eqref{eq:problem} and then the study focuses on system \eqref{eq:linProblem}.
Assuming $(\tilde{\chi}^e, Y^e)$ is an equilibrium point of system \eqref{eq:linProblem}, it satisfies $\tilde{\chi}^e_{t} = 0$, $w^e_t = \Omega_e$ and $\dot{Y}^e = 0$. Therefore, a feedforward open-loop control is introduced as:
\begin{equation} \label{eq:feedforward}
\tilde{u}_1^e = \Omega_e \left( 1 + \frac{k}{g} \right) + \frac{q}{g} T^e, \quad \tilde{u}_2^e = \frac{T^e e_2 - \Omega_e e_1}{b}.
\end{equation}
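For concreteness, the feedforward terms in \eqref{eq:feedforward} can be evaluated directly from the physical values of Table~\ref{tab:values}; the short Python sketch below (illustrative only) simply carries out this arithmetic.
\begin{verbatim}
# Feedforward control (eq:feedforward) evaluated with the Table 1 values
k, g, q, b = 0.1106, 2.48, 0.0012, -0.43
e1, e2 = -8.35, -0.069
Omega_e, T_e = 10.0, 7572.4

u1_e = Omega_e * (1.0 + k / g) + (q / g) * T_e
u2_e = (T_e * e2 - Omega_e * e1) / b
print(u1_e, u2_e)
\end{verbatim}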
Introducing the error variables $\chi(x,t) = \tilde{\chi}(x,t) - \tilde{\chi}^e(x)$, $u_1(t) = \tilde{u}_1(t) - \tilde{u}_1^e$ and $u_2 (t)= \tilde{u}_2(t) - \tilde{u}_2^e$, the aim is to show the exponential stability of $\chi$ to $0$ in order to get $w_t \to \Omega_e$ and $\| Y \| \to 0$. The inputs $u_1$ and $u_2$ are assumed to be the outputs of a strictly proper dynamic controller whose inputs are $w_t(0,t), w_t(1,t)$ and $Y$. In other words, these three variables are measured, but it is not possible to feed back $w_t(1)$ or $w_t(0)$ exactly, which corresponds to the situation where the actuator is bandwidth limited for instance. This assumption is important as the wave can be seen as a neutral system \cite{barreauInputOutput} and using $w_t$ directly would mean acting directly on the neutral part. This phenomenon is known to be absolutely non-robust \cite{hale2001effects} to small delays for example. Assuming the controller is of order $n$, it is written for $t \geqslant 0$:
\[
\left\{
\begin{array}{cl}
\dot{X}_c(t) \!\!\!\!& = A_c X_c(t) + B_{c1} Y(t) + B_{c2} \left[ \begin{smallmatrix} w_t(0,t) \\ w_t(1,t) \end{smallmatrix} \right], \\
u_1(t) \!\!\!\!& = C_1 \left[ \begin{smallmatrix} X_c(t) \\ Y(t) \end{smallmatrix} \right], \\
u_2(t) \!\!\!\!& = C_2 X_c(t) + K Y(t).
\end{array}
\right.
\]
with $C_1 \in \mathbb{R}^{1 \times (n+2)}, C_2 \in \mathbb{R}^{1 \times n}, A_c \in \mathbb{R}^{n \times n}, B_{c1}, B_{c2} \in \mathbb{R}^{n \times 2}$ and $K \in \mathbb{R}^{1 \times 2}$. The closed-loop system in Riemann coordinates can be rewritten as:
\begin{equation} \label{eq:linWaveProblem}
\hspace{-0.15cm}
\left\{
\begin{array}{l}
\chi_{t}(x,t) = c \chi_{x}(x,t), \\
\left[ \begin{smallmatrix} 1-cg & 0 \\ 0 & 1-c k \end{smallmatrix} \right] {\chi}(0,t) = \left[ \begin{smallmatrix} 0 & 1+cg \\ 1+c k & 0 \end{smallmatrix} \right] {\chi}(1,t) - \left[ \begin{smallmatrix} 2 c g C_1 X(t) \\ 0 \end{smallmatrix} \right], \\
\dot{X}(t) = \tilde{A} X(t) + \tilde{B} \left[ \begin{smallmatrix} \chi(0,t) \\ \chi(1,t) \end{smallmatrix} \right],
\end{array}
\right.
\end{equation}
with initial conditions $\chi(x, 0) = \chi_{0}(x), X(0) = X^0$, $X \!=\! \left[ \begin{smallmatrix} X_c^{\top} & Y^{\top} \end{smallmatrix} \right]^{\top}$ and
\[
\tilde{A} \!=\! \left[ \begin{matrix} A_c & B_{c1} \\ B C_2 & A \!+\! BK \end{matrix} \right]\!\!, \quad \quad \tilde{B} \!=\! \frac{1}{2}\left[ \begin{matrix} B_{c2} \\ \begin{array}{cc} E_1 & 0_{2, 1} \end{array} \end{matrix} \right] \left[ \begin{smallmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ 1 & 0 \end{smallmatrix} \right]^{\top}\!\!.
\]
\begin{remark} A similar control law is proposed in \cite{ATJECC} but the stability is dealt using another Lyapunov functional. \end{remark}
\begin{remark} From now on, to ease the reading, the parameter $t$ may be omitted and ${\chi}$ refers to a solution of \eqref{eq:linWaveProblem}. \end{remark}
\subsection{Existence and uniqueness}
The existence and uniqueness analysis follows the same lines as in \cite{besselString}. Define the following set: $\mathcal{H}^m = \mathbb{R}^{n+2} \times H^m \times H^m$ with $m \in \mathbb{N}$. The space $\mathcal{H} = \mathcal{H}^0$ can be equipped with the following norm:
\[
\begin{array}{ll}
\forall (X, \chi) \in \mathcal{H}, \quad \|(X, \chi)\|^2_{\mathcal{H}} \!\!\!\!& = |X|^2 + \frac{1}{2} \|\chi\|^2 \\
& = |X|^2 + c^2 \| w_x \|^2 + \| w_t \|^2.
\end{array}
\]
Using the operator notation \cite{tucsnak2009observation}, system \eqref{eq:linWaveProblem} is formulated as follows:
\[
T \left( \begin{smallmatrix} X \\ \chi \end{smallmatrix} \right) = \left( \begin{matrix} \tilde{A} X + \tilde{B} \left[ \begin{smallmatrix} \chi(0) \\ \chi(1) \end{smallmatrix} \right] \\ c \chi_x \end{matrix} \right),
\text{ and } T: \mathcal{D}(T) \to \mathcal{H},
\] with
\begin{multline*}
\mathcal{D}(T) = \left\{ (X, \chi) \in \mathcal{H}^1, \left[ \begin{smallmatrix} 1-cg & 0 \\ 0 & 1-c k \end{smallmatrix} \right] \chi(0) = \right. \\
\left. \left[ \begin{smallmatrix} 0 & 1+cg \\ 1+c k & 0 \end{smallmatrix} \right] \chi(1) - \left[ \begin{smallmatrix} 2 c g C_1 X \\ 0 \end{smallmatrix} \right] \right\}.
\end{multline*}
The existence of a continuous solution for $(X^0, \chi_0) \in \mathcal{D}(T)$ is ensured by applying the Lumer-Phillips theorem (for example in \cite[p.103]{tucsnak2009observation}) whose conditions are recalled below:
\begin{enumerate}
\item there exists a function $V:\mathcal{H} \to \mathbb{R}^+$ such that its derivative along the trajectories of \eqref{eq:linWaveProblem} is negative;
\item there exists $\lambda$ sufficiently small such that $\mathcal{D}(T) \subseteq \mathcal{R}(\lambda I - T)$ where $\mathcal{R}$ is the range operator.
\end{enumerate}
The first condition relies on the existence of a Lyapunov functional and is therefore the subject of the following part. The second statement needs some calculations very similar to those conducted in \cite{besselString} or \cite{morgul1994dynamic}. For a given $\lambda > 0$, let $(r,f) \in \mathcal{D}(T)$; the aim is to prove the existence of $(X, \chi) \in \mathcal{D}(T)$ satisfying the following for $x \in (0, 1)$:
\[
\left\{
\begin{array}{l}
\lambda X - \tilde{A} X - \tilde{B} \left[ \begin{smallmatrix} \chi(0) \\ \chi(1) \end{smallmatrix} \right] = r, \\
\lambda \chi(x) - c \chi_x(x) = f(x). \\
\end{array}
\right.
\]
That leads to $\chi(x) = k_1 e^{\lambda\frac{x}{c}} + F(x)$ with $F(x) = c^{-1} \int_0^x e^{\lambda \frac{x-s}{c}} f(s) ds \in H^1$ and $k_1 = \mathrm{diag}(k_{11}, k_{12})$, $k_{11}, k_{12} \in \mathbb{R}$. Using the boundary conditions, we get a system of two equations:
\[
\begin{array}{l}
(1-cg) k_1 = k_2 e^{\frac{\lambda}{c}}(1+cg)(A+F(1)) - \frac{2cg}{\lambda} C_1 X, \\
(1-c k) k_2 = k_1 e^{\frac{\lambda}{c}}(1+c k)(A+F(1))
\end{array}
\]
Since there exists a $\lambda$ such that $\tilde{A} + \tilde{B} \left[ \begin{smallmatrix} \chi(0) \\ \chi(1) \end{smallmatrix} \right]$ is not the null matrix, this system has a unique solution for a given $X$, which ends the proof of existence.
\section{Exponential Stability of the Drilling Pipe}
\subsection{Main result}
The main result of this paper is the $\alpha$-stability criterion for system \eqref{eq:linWaveProblem} expressed in terms of LMIs, therefore easily tractable. Let us first define the $\alpha$-stability.
\begin{definition} System \eqref{eq:linWaveProblem} is $\alpha$-stable (or exponentially stable with a decay-rate of at least $\alpha$) with respect to the norm $\| \cdot \|_{\mathcal H}$ if there exists $\gamma \geqslant 1$ such that the following holds for any initial condition $(X^0, \chi_0)$:
\[
\|(X(t), \chi(\cdot,t))\|_{\mathcal H} \leqslant \gamma \|(X^0, \chi_0 )\|_{\mathcal H} e^{- \alpha t}.
\]
\end{definition}
Considering this definition, we propose a stability theorem for system~\eqref{eq:linWaveProblem}.
\begin{theo} \label{sec:theoLin}
Let $N \in \mathbb{N}$. Assume there exist $P_N \in \mathbb{S}^{n+2+2(N+1)}_+$, $R, S \in \mathbb{S}^2_+$ such that the following LMI holds:
\begin{equation} \label{eq:LMI}
\Psi_{N,\alpha} - c R_N \prec 0,
\end{equation}
with
\begin{equation} \label{eq:defTheo}
\begin{array}{cl}
\!\!\!\!\!\! \Psi_{N,\alpha} \!\!\!\!& = \He((Z_N + \alpha F_N)^{\top} P_N F_N) - c G_N^{\top} S G_N \\
& \hfill \!\!\!\!\!\! + c H_N^{\top} \left( S + R \right) H_N e^{\frac{2 \alpha}{c}},\\
\!\!\!\!\!\! F_N \!\!\!\! &= \left[ \begin{matrix} I_{n+2+2(N+1)} & 0_{n+2+2(N+1), 2} \end{matrix} \right], \\
\!\!\!\!\!\! Z_N \!\!\!\! &= \left[ \begin{matrix} \mathcal{N}_N^{\top} & \mathcal Z_{N}^{\top} \end{matrix} \right]^{\!\top}\!\!\!,
\quad \mathcal{N}_N = \left[ \begin{matrix} \tilde{A} & 0_{n+2,2(N+1)} & \tilde{B} \end{matrix} \right], \\
\!\!\!\!\!\! \mathcal Z_{N} \!\!\!\! &= c \mathbb{1}_N H_{N}\! -\! c\bar{\mathbb{1}}_N G_{N}\! -\! \left[\begin{matrix} 0_{2(N + 1),n+2} &\!\!\!\! L_{N} &\!\!\!\! 0_{2(N + 1), 2}\end{matrix} \right], \\
\end{array}
\end{equation}
\begin{equation*}
\begin{array}{cl}
\!\!\!\!\!\! G_{N} \!\!\!\! &= \left[ \begin{matrix} \begin{smallmatrix} - c g C_1 \\ 0_{1,n+2} \end{smallmatrix} & 0_{2, 2(N+1)} & G \end{matrix} \right], \quad G = \left[ \begin{smallmatrix} 0 & 1 + cg \\ 1 + c k & 0 \end{smallmatrix} \right], \\
\!\!\!\!\!\! H_{N} \!\!\!\! &= \left[ \begin{matrix} \begin{smallmatrix} 0_{1,n+2} \\ c g C_1 \end{smallmatrix} & 0_{2, 2(N+1)} & H \end{matrix} \right], \quad H = \left[ \begin{smallmatrix} 1 - c k & 0 \\ 0 & 1 - c g \end{smallmatrix} \right], \\
\!\!\!\!\!\! R_N \!\!\!\! &= \mathrm{diag}(0_n, R, 3R, \cdots, (2N+1)R, 0_{2}), \\
\end{array}
\end{equation*}
\begin{equation*}
\begin{array}{rcllcl}
\!\!\! L_N\!=\!\left[\!\begin{smallmatrix}
\ell_{0,0}I_2 & \cdots& 0_2 \\
\vdots & \ddots &\vdots\\
\ell_{N,0}I_2 & \cdots & \ell_{N,N}I_2 \\
\end{smallmatrix}\!\right]\!\!, \
\mathbb{1}_N\!=\!\left[\!\begin{smallmatrix} I_2 \\ \vdots \\ I_2\end{smallmatrix}\!\right]\!\!, \
\bar{\mathbb 1}_N\!=\!\left[\!\begin{smallmatrix} I_2 \\ \vdots \\(-1)^{N} I_2\end{smallmatrix}\!\right],
\end{array}
\end{equation*}
and $\ell_{k,j} = (2j+1)(1 - (-1)^{j+k})$ if $ j \leqslant k$ and $0$ otherwise.\\
Then system \eqref{eq:linWaveProblem} is $\alpha$-exponentially stable.
\end{theo}
The proof of this theorem relies on the construction of a Lyapunov functional described in the following subsections.
\begin{remark} A necessary condition for \eqref{eq:LMI} to be fulfilled is that the last $2 \times 2$ diagonal block of \eqref{eq:LMI} be negative definite, corresponding to the following inequality:
\[
H^{\top} (S+R) H e^{2 \frac{\alpha}{c}} - G^{\top} S G \prec 0.
\]
This condition implies:
\begin{equation} \label{eq:alphaMax}
\alpha \leqslant \alpha_{max} = \max \left( \frac{c}{2} \log \left| \frac{(c k + 1)(c g + 1)}{(c k - 1)(c g - 1)} \right|, 0 \right).
\end{equation}
Setting $g = 0$ or $k=0$ leads to the same maximal decay-rate as in \cite{barreauInputOutput,bastin2016stability,datko1991two}. This condition is also related to the $\tau$-stabilization which is a common phenomenon when considering a wave equation \cite{olgac2004practical}.
One can notice that for $g > 0$ and $k > 0$, the PDE system itself is asymptotically stable, because the two boundary conditions add damping. Notice that if one of them is negative, there also exist values of the other coefficient making the system asymptotically stable. Note also that $g = c^{-1}$ or $k=c^{-1}$ leads to $\alpha_{max} = + \infty$, meaning there is no neutral part and the system reduces to a time-delay system. For $d > 0$, the neutral part is not modified and the same limit can be observed. \end{remark}
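As a quick numerical illustration of \eqref{eq:alphaMax} (and not of the LMI itself), the bound can be evaluated with the physical values of Table~\ref{tab:values}; the value obtained matches the last column of Table~\ref{tab:alphaMax} up to rounding.
\begin{verbatim}
import numpy as np

# Physical values from Table 1
c, k, g = 2.6892, 0.1106, 2.48

# Upper bound on the achievable decay-rate, equation (eq:alphaMax)
alpha_max = max(0.5 * c * np.log(abs((c*k + 1)*(c*g + 1) / ((c*k - 1)*(c*g - 1)))), 0.0)
print(alpha_max)   # approximately 1.23
\end{verbatim}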
\begin{remark}[Hierarchy] \label{sec:hierarchy} Define the following set:
\[
\mathcal{C}_N = \left\{ \alpha \geqslant 0 \ | \ \Psi_{N,\alpha} - c R_N \prec 0, P_N \succ 0, R \succ 0, S \succ 0 \right\},
\]
and assume this set is not empty. Then, denote $\alpha_{N} = \sup \mathcal{C}_N$. The hierarchy property states that $\alpha_{N+1} \geqslant \alpha_{N}$. This can be proved using the same strategy as in \cite{besselString,SAFI20171}.
\end{remark}
\subsection{Proof of Theorem~\ref{sec:theoLin}}
\subsubsection{Preliminaries}
The main contribution of this paper relies on the extensive use of Bessel's inequality to encompass traditional results. Before stating this inequality, we need to introduce an orthogonal family of polynomials. The definition is as follows:
\begin{definition}[Legendre polynomials] Let $N \in \mathbb{N}$, the family of Legendre polynomials of degree less than or equal to $N$ is denoted by $\{\mathcal{L}_\ell\}_{\ell \in [0, N]}$ with
\[
\mathcal{L}_\ell(x) = (-1)^\ell \sum_{l = 0}^\ell (-1)^l \left( \begin{smallmatrix} \ell \\ l \end{smallmatrix} \right) \left( \begin{smallmatrix} \ell+l \\ l \end{smallmatrix} \right) x^l
\]
with $\left( \begin{smallmatrix} \ell \\ l \end{smallmatrix} \right) = \frac{\ell!}{l! (\ell-l)!}$.
\end{definition}
The sequence $\{\mathcal{L}_k\}$ is made up of ``shifted''-Legendre polynomials on $[0, 1]$. As seen in \cite{baudouin:hal-01310306,courant1966courant,seuret:hal-01065142}, this family is orthogonal in $L^2$ with the canonical inner product. That leads to the following definition.
\begin{definition} Let $\chi \in L^2$. The projection of $\chi$ on the $\ell^{th}$ Legendre polynomials is defined as follows:
\[
\mathfrak{X}_\ell := \int_0^1 \chi(x) \mathcal{L}_\ell(x) dx.
\]
\end{definition}
The Bessel inequality is obtained considering the previous definitions and the orthogonal property of the shifted-Legendre family.
\begin{lemma}[Bessel Inequality] \label{Bess}
For any function $\chi \in L^2$ and symmetric positive matrix $R \in \mathbb S^2_+$, the following Bessel-like integral inequality holds for all $N\in \mathbb N$:
\begin{equation}\label{eq:Bessel}
\int_{0}^1 \chi^{\top}(x) R \chi(x) dx \geqslant \sum_{\ell=0}^{N} (2\ell+1) \mathfrak{X}_\ell^{\top} R \mathfrak{X}_\ell.
\end{equation}
\end{lemma}
This lemma and its short proof can be seen in \cite{besselString}.
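The inequality can also be checked numerically for a given $\chi$ and $R$; the sketch below is purely illustrative (the test function and the matrix $R$ are arbitrary) and shows that the gap between the two sides of \eqref{eq:Bessel} is nonnegative and shrinks as $N$ grows.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

chi = lambda x: np.array([np.sin(3.0*x) + 1.0, np.exp(x)])   # arbitrary chi: [0,1] -> R^2
R   = np.array([[2.0, 0.5], [0.5, 1.0]])                     # arbitrary R > 0
L   = lambda l, x: eval_legendre(l, 2.0*x - 1.0)             # shifted Legendre polynomial on [0,1]

lhs = quad(lambda x: chi(x) @ R @ chi(x), 0.0, 1.0)[0]       # int_0^1 chi^T R chi dx
for N in range(5):
    proj = [np.array([quad(lambda x, l=l, i=i: chi(x)[i]*L(l, x), 0.0, 1.0)[0]
                      for i in range(2)]) for l in range(N + 1)]
    rhs = sum((2*l + 1) * Xl @ R @ Xl for l, Xl in enumerate(proj))
    print(N, lhs - rhs)   # nonnegative gap, decreasing in N
\end{verbatim}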
The time derivative of $\mathfrak{X}_\ell$ is needed in the sequel. Lemma~3 from \cite{besselString} deals with this issue.
\begin{lemma}\label{lem:Chi_k}
For any function $\chi \in L^2$, the following expression holds for any $N$ in $\mathbb N$ using notations \eqref{eq:defTheo}:
\begin{equation*}
\left[ \begin{smallmatrix} \dot{\mathfrak{X}}_0 \\ \vdots \\ \dot{\mathfrak{X}}_{N} \end{smallmatrix} \right] = c\mathbb 1_N\chi(1)-c\bar {\mathbb 1}_N\chi(0) -cL_N \left[ \begin{smallmatrix} \mathfrak{X}_0 \\ \vdots \\ \mathfrak{X}_{N} \end{smallmatrix} \right].
\end{equation*}
\end{lemma}
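The matrices $L_N$, $\mathbb 1_N$ and $\bar{\mathbb 1}_N$ encode the boundary values and the derivative expansion of the shifted Legendre polynomials. The scalar identity behind Lemma~\ref{lem:Chi_k} (stated below for a test function $u$; the factor $c$ and the vector structure are immediate) can be verified numerically, as in the illustrative sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

# Identity behind Lemma 3, for a scalar test function u:
#   int_0^1 u'(x) L_l(x) dx = u(1) - (-1)^l u(0) - sum_{j<=l} ell_{l,j} int_0^1 u(x) L_j(x) dx
u   = lambda x: np.sin(2.0*x) + x**2
du  = lambda x: 2.0*np.cos(2.0*x) + 2.0*x
L   = lambda l, x: eval_legendre(l, 2.0*x - 1.0)              # shifted Legendre on [0,1]
ell = lambda l, j: (2*j + 1) * (1 - (-1)**(j + l)) if j <= l else 0

for l in range(4):
    lhs = quad(lambda x: du(x) * L(l, x), 0.0, 1.0)[0]
    rhs = u(1.0) - (-1)**l * u(0.0) \
          - sum(ell(l, j) * quad(lambda x, j=j: u(x)*L(j, x), 0.0, 1.0)[0] for j in range(l + 1))
    print(l, np.isclose(lhs, rhs))   # True for every l
\end{verbatim}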
The link between $\alpha$-exponential stability and a Lyapunov functional is made by the following lemma.
\begin{lemma} \label{sec:lemmaEpsilon} Let $V$ be a Lyapunov functional for system \eqref{eq:linWaveProblem} and $\alpha \geq 0$. Assume there exist $\varepsilon_1, \varepsilon_2, \varepsilon_3 > 0$ such that the following holds for all $t \geqslant 0$:
\begin{equation} \label{eq:epsilon}
\left\{
\begin{array}{l}
\varepsilon_1 \| (X, \chi) \|^2_{\mathcal H} \leqslant V(X, \chi) \leqslant \varepsilon_2 \| (X, \chi) \|^2_{\mathcal H}, \\
\dot{V}(X, \chi) + 2 \alpha V(X, \chi) \leqslant - \varepsilon_3 \| (X, \chi) \|^2_{\mathcal H},
\end{array}
\right.
\end{equation}
then system \eqref{eq:linWaveProblem} is $\alpha$-exponentially stable.
\end{lemma}
\begin{proof}
Inequalities \eqref{eq:epsilon} bring the following: $\dot{V}(X,\chi) + \left (2\alpha + \frac{\varepsilon_3}{\varepsilon_2} \right) V(X, \chi) \leqslant 0$. Then integrating this inequality between $0$ and $t$ leads to:
\begin{equation*}
\| (X(t), \chi(t) ) \|^2_{\mathcal H} \leqslant \frac{\varepsilon_2}{\varepsilon_1} \| (X^0, \chi_0) \|^2_{\mathcal H} e^{- 2 \alpha t}.
\end{equation*}
\end{proof}
With these useful lemmas recalled, a Lyapunov functional can be defined.
\subsubsection{Lyapunov functional candidate}
The aim of this subpart is to build a Lyapunov functional candidate for system \eqref{eq:linWaveProblem}. Following the same methodology as introduced in \cite{besselString}, a first Lyapunov functional $\mathcal{V}_{\alpha}$ for the PDE part is defined with $S, R \in \mathbb{S}^2_+$:
\[
\mathcal{V}_{\alpha}(\chi) = \int_0^1 e^{2 \frac{\alpha x}{c}} \chi^{\top}(x) (S + xR) \chi(x) dx.
\]
The Lyapunov functional candidate is then the sum of a quadratic term and $\mathcal{V}_{\alpha}$. This quadratic term accounts for the stability of the state $X$ but also contains cross terms merging the ODE and the PDE. This is done to enlarge the stability analysis, enabling the study of the stability of the whole interconnected system and not of each subsystem independently. This technique, as shown in \cite{besselString}, is well-suited for the study of an unstable ODE coupled with a PDE for instance. The total Lyapunov functional of order $N \in \mathbb{N}$ is then:
\begin{equation} \label{eq:VN}
V_{N, \alpha}(X, \chi) = X_N^{\top} P_N X_N + \mathcal{V}_{\alpha}(\chi)
\end{equation}
with $P_N \in \mathbb{S}^{n+2+2(N+1)}_+$ and $X_N = \left[ \begin{matrix} X^{\top} \!\!& \mathfrak{X}_0^{\top} \!\!& \dots \!\!& \mathfrak{X}_N^{\top} \end{matrix} \right]^{\top}\!\!$.
The aim now is to prove the existence of $\varepsilon_1, \varepsilon_2$ and $\varepsilon_3 > 0$ to apply Lemma~\ref{sec:lemmaEpsilon} on the functional $V_{N,\alpha}$ and then conclude the proof.
\subsubsection{Existence of $\varepsilon_1$}
Conditions $P_N \succ 0$ and $S, R \in \mathbb{S}^2_+$ mean that there exists $\varepsilon_1 > 0$, such that for all $x \in [0, 1]$:
\vspace{-0.3cm}
\[
\begin{array}{rcl}
P_N \!\!&\succeq& \!\!\!\varepsilon_1 \text{diag} \left( I_{n+2}, 0_{2(N+1)} \right) , \\
S+xR \ \succeq \ S &\succeq& \!\!\!\frac{\varepsilon_1}{2} I_2.
\end{array}
\]
These inequalities imply:
\[
\begin{array}{lcl}
V_{N,\alpha}(X, \chi) \!\!\! & \geqslant & \!\!\!\varepsilon_1 \left( |X|^2 + \frac{1}{2} \|\chi\|^2 \right) \\
&& + \int_0^1\chi^\top(x)\left(S + xR - \frac{\varepsilon_1}{2} I_2\right)\chi(x)dx\\
&\geqslant& \!\!\!\varepsilon_1 \left( |X|^2 + \frac{1}{2} \|\chi\|^2 \right) \geqslant \varepsilon_1 \|(X, \chi)\|_{\mathcal{H}}^2.
\end{array}
\]
\subsubsection{Existence of $\varepsilon_2$}
Since $P_N, S$ and $R$ are definite positive matrices, there exists $\varepsilon_2 > 0$ such that:
\vspace{-0.1cm}
\[
\begin{array}{rcl}
P_N \!\!\!& \preceq \!\!\!& \text{diag} \left( \varepsilon_2 I_{n+2}, \frac{\varepsilon_2}{4} \text{diag} \left\{ (2\ell+1) I_2 \right\}_{\ell \in \{0,\dots, N\}} \right), \\
(S+xR) \!\!\!& \preceq \!\!\!& S + R \ \preceq \ \frac{\varepsilon_2}{4} e^{-2 \frac{\alpha}{c}}I_2,\quad \forall x\in (0,1).
\end{array}
\]
Then, we get:
\vspace{-0.1cm}
\begin{equation*}
\begin{array}{lll}
V_{N,\alpha}(X, \chi)
&\!\!\!\! \leqslant &\!\!\!\! \displaystyle\varepsilon_2 |X|^2 \vphantom{\sum_{\ell=0}^{N}} \! +\! \frac{\varepsilon_2}{4}\! \left( \sum_{\ell=0}^{N} (2\ell\!+\!1) \mathfrak{X}_\ell^{\top} \mathfrak{X}_\ell + \| \chi \|^2 \right) \\
&\!\!\!\! \leqslant &\!\!\!\! \varepsilon_2 \left( |X|^2\! +\! \frac{1}{2} \|\chi\|^2 \right) = \varepsilon_2 \|(X,\chi)\|_{\mathcal{H}}^2.
\end{array}
\end{equation*}
The inequality comes from Bessel inequality \eqref{eq:Bessel}.
\subsubsection{Existence of $\varepsilon_3$}
This part is the most important and shows that system \eqref{eq:linWaveProblem} is dissipative \cite{besselString,tucsnak2009observation}. Differentiating with respect to time \eqref{eq:VN} along the trajectories of system \eqref{eq:linWaveProblem} leads to:
\begin{equation*}
\dot{V}_{N,\alpha}(X, \chi) = \text{He} \left( \left[ \begin{smallmatrix} \dot{X} \\ \dot{\mathfrak{X}}_0 \\ \vdots \\ \dot{\mathfrak{X}}_N \end{smallmatrix} \right]^{\top} P_N \left[ \begin{smallmatrix} {X} \\ {\mathfrak{X}}_0 \\ \vdots \\ {\mathfrak{X}}_N\end{smallmatrix} \right] \right) + \dot{\mathcal{V}}_{\alpha}(\chi).
\end{equation*}
The goal here is to find an upper bound of $\dot{V}_{N,\alpha}$ using the extended state: $\xi_N = \left[ X_N^{\top} \ \ w_t(1) \ \ w_t(0) \right]^{\top}$. The first step is to derive an expression of $\dot{\mathcal{V}}_{\alpha}$. Similarly to \cite{besselString}, we get:
\[
\hspace*{-0.12cm}
\begin{array}{ll}
\dot{\mathcal{V}}_{\alpha}(\chi) \!\!\!\!\!&= 2 c \int_0^1 \chi^{\top}_x(x) (S + xR) \chi(x) e^{2 \frac{\alpha x}{c}} dx \\
&= 2c \left( \chi^{\top}(1) (S+R) \chi(1) e^{2 \frac{\alpha}{c}} - \chi^{\top}(0) S \chi(0) \right. \quad \quad \quad \\
& \hfill \left.- \int_0^1 \chi^{\top}(x) R \chi(x) e^{2 \frac{\alpha x}{c}} dx \right) - 4 \alpha \mathcal{V}_{\alpha}(\chi) - \dot{\mathcal{V}}_{\alpha}(\chi) \\
&= c \left( \chi^{\top}(1) (S+R) \chi(1) e^{2 \frac{\alpha}{c}} - \chi^{\top}(0) S \chi(0) \right. \quad \quad \quad \\
& \hfill \left.- \int_0^1 \chi^{\top}(x) R \chi(x) e^{2 \frac{\alpha x}{c}} dx \right) - 2 \alpha \mathcal{V}_{\alpha}(\chi).
\end{array}
\]
Using the previous equation, Lemma~\ref{lem:Chi_k} and equation \eqref{eq:linWaveProblem}, we note that $X_N = F_N\xi_N, ~ \dot X_N = Z_N\xi_N, ~ \chi(0) = G_{N}\xi_N, ~ \chi(1) = H_{N}\xi_N$ where matrices $F_N, Z_N, H_{N}, G_{N}$ are given in \eqref{eq:defTheo}. Then we can write:
\vspace{-0.22cm}
\begin{multline*}
\dot{V}_{N,\alpha}(X, \chi) = \xi_N^{\top} \Psi_{N,\alpha} \xi_N + c \sum_{\ell=0}^{N} \mathfrak{X}_\ell^{\top} (2\ell+1) R \mathfrak{X}_\ell \\
-c \int_0^1 \chi^{\top}(x) R \chi(x) e^{2 \frac{\alpha x}{c}} dx - 2 \alpha V_{N,\alpha}(X, \chi).
\end{multline*}
Denoting by $W_{N,\alpha}(X, \chi) = \dot{V}_{N,\alpha}(X, \chi) + 2 \alpha V_{N, \alpha}(X, \chi)$, the previous equality implies the following upper bound:
\begin{multline} \label{eq:WN}
W_{N,\alpha}(X, \chi) \leqslant \xi_N^{\top} \Psi_{N,\alpha} \xi_N + c \sum_{\ell=0}^{N} (2\ell+1) \mathfrak{X}_\ell^{\top} R \mathfrak{X}_\ell \\
-c \int_0^1 \chi^{\top}(x) R \chi(x) dx.
\end{multline}
Since $R \succ 0$ and $\Psi_{N,\alpha} \prec 0$, there exists $\varepsilon_3 > 0$ such that:
\begin{equation} \label{eq:Rpos2}
\begin{array}{ccl}
R \!\!\!&\succeq & \ \frac{\varepsilon_3}{2} I_2,\\
\Psi_{N,\alpha} \!\!\!&\preceq &\!\!\! -\varepsilon_3 \text{diag} \left( I_{n+2}, \frac{1}{2} I_2, \frac{3}{2}I_2,\dots, \frac{2N\!+\!1}{2} I_2, 0_2 \right)\!.
\end{array}
\end{equation}
Using \eqref{eq:Rpos2} and Bessel's inequality, equation \eqref{eq:WN} becomes:
\begin{equation*}
W_{N,\alpha}(X,\chi) \leqslant \!\!- \varepsilon_3 \left( \!|X|^2 + \frac{1}{2} \| \chi \|^2 \!\right) \leqslant \!\!- \varepsilon_3 \ \| (X, \chi) \|^2_{\mathcal H},
\end{equation*}
and that concludes the proof.
\section{Examples and discussion}
In this section, we illustrate the proposed theorem by using values taken from \cite{marquez2015analysis, saldivar2016control} and shown in Table \ref{tab:values}. The simulation is based on a finite-difference method of order $2$. The two cases under study here are summarized below:
\begin{enumerate}
\item the feedforward control with $n = 0$ (using only $u_1^e$ and $u_2^e$ in \eqref{eq:feedforward}) and
\begin{equation} \label{eq:uncontrolled}
\begin{array}{lll}
C_1 = \left[ \begin{smallmatrix} 0 & 0 \end{smallmatrix} \right], & C_2 = 0, & K = \left[ \begin{smallmatrix} 0 & 0 \end{smallmatrix} \right].
\end{array}
\end{equation}
\item a dynamic control with the following parameters:
\begin{equation} \label{eq:controlled}
\hspace{-1cm}
\begin{array}{lll}
\tilde{A} = \left[ \begin{smallmatrix} -800 & 0 \\ 0 & -150 \end{smallmatrix} \right], & B_{c1} = 0_{2,2}, & \!\! B_{c2} = I_2, \\
C_1 = \left[ \begin{smallmatrix} 800 & 0.015 & 0.01 & -0.1 \end{smallmatrix} \right], & C_2 = \left[ \begin{smallmatrix} 0 & - 0.0718 \end{smallmatrix} \right], \\
K = \left[ \begin{smallmatrix} -82.2 & \ 10.4 \end{smallmatrix} \right].
\end{array}
\hspace{-1cm}
\end{equation}
\end{enumerate}
The dynamic controller is obtained considering two low-pass filters. Denoting by $s \in \mathbb{C}$ the Laplace variable, the two transfer functions for the low-pass filters are $\frac{u_1}{w_t(0)} = \frac{1}{1 + s/\omega_{c1}}$ and $\frac{u_1}{w_t(1)} = \frac{1}{1 + s/\omega_{c2}}$ with the cut-off frequencies $\omega_{c1} = 800$ and $\omega_{c2} = 150$. Gain $K$ has been chosen such that the eigenvalues of $A+BK$ are $-2.4603 \pm 0.1230 i$. $C_2$ has been chosen to cancel the dependence on $w_t(1,\cdot)$ in the ODE.
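For reference, the closed-loop matrices $\tilde{A}$ and $\tilde{B}$ of \eqref{eq:linWaveProblem} can be assembled directly from Table~\ref{tab:values} and \eqref{eq:controlled}. The sketch below is illustrative only: it builds the matrices and checks that $\tilde{A}$ is Hurwitz, but it does not solve the LMI of Theorem~\ref{sec:theoLin}.
\begin{verbatim}
import numpy as np

# ODE data (Table 1) and dynamic controller (eq:controlled)
A21, A22, b, e1 = -41.58, -0.43, -0.43, -8.35
A   = np.array([[0.0, 1.0], [A21, A22]])
B   = np.array([[0.0], [b]])
E1  = np.array([[0.0], [e1]])
Ac  = np.diag([-800.0, -150.0]);  Bc1 = np.zeros((2, 2));  Bc2 = np.eye(2)
C2  = np.array([[0.0, -0.0718]]); K   = np.array([[-82.2, 10.4]])

A_tilde = np.block([[Ac, Bc1],
                    [B @ C2, A + B @ K]])

Sel = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])      # selection matrix in B_tilde
B_tilde = 0.5 * np.block([[Bc2], [np.hstack([E1, np.zeros((2, 1))])]]) @ Sel.T

print(np.linalg.eigvals(A_tilde).real.max() < 0)   # True: A_tilde is Hurwitz
\end{verbatim}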
With the feedforward controller only, it is possible to estimate the decay-rate of the solution. Indeed, there is no real coupling between the ODE and the PDE and the decay-rate of the interconnected system will be the smaller of their respective ones. Here, the PDE has a decay-rate given by equation \eqref{eq:alphaMax} of $1.2302$ while that of the ODE is $0.2159$. The results of Theorem~\ref{sec:theoLin} are given in Table~\ref{tab:alphaMax}. The maximum decay-rate for the feedforward case is obtained for $N \geqslant 1$ and is, as expected, the decay-rate of the ODE.
Figure~\ref{fig:simu} shows the time response of system \eqref{eq:linWaveProblem} in the two cases. The initial state for this computation is $X^0 = 0, w(x,0) = 2 - \Omega_e x$ and $w_{t}(x,0) = \frac{\Omega_e - q T^e}{k} x - \frac{u_1^e - \Omega_e}{g} (1-x)$ for $x \in (0, 1)$. Of course, states $X_1$ and $X_2$ are much faster, which results from the direct influence of the static feedback gain $K$, but so is the speed $w_t(1)$, which is more regular and converges faster to $0$. Indeed, as shown in Table \ref{tab:alphaMax}, the convergence is much faster in the situation with the dynamic control. The hierarchy of Remark~\ref{sec:hierarchy} is clearly visible and reaches its maximum value (up to a 3-digit precision) at $N = 2$. If $d > 0$, one can notice a slightly higher decay-rate but the limit remains the same. One of the drawbacks of such a controller is the angular speed $w_t(x,\cdot)$ for $x \in (0,1)$, which increases significantly compared to the first case as can be seen in Figure~\ref{fig:simu3D}.
\begin{remark} A backstepping control law could have been considered with a target system of arbitrarily large decay-rate. Compared to this method, the price to pay for a finite-dimensional controller is seen in equation~\eqref{eq:alphaMax}. Indeed, it is not possible to accelerate the system to an arbitrarily large decay-rate. Other differences are that there is no design methodology using LMIs yet, and that the control is a finite-dimensional feedback using the knowledge of only $Y$, $w_t(0)$ and $w_t(1)$ through strictly proper controllers.\end{remark}
\begin{table}
\centering
\begin{tabular}{c|c||c|c}
Symbol & Value & Symbol & Value \\
\hline
$c$ & $2.6892$ m.s${}^{-1}$ & $\Omega_e$ & $10$ rad.s${}^{-1}$ \\
$k$ & $0.1106$ s.m${}^{-1}$ & $g$ & $2.48$ s.m${}^{-1}$ \\
$A_{21}$ & $-41.58$ s${}^{-2}$ & $A_{22}$ & $-0.43$ s${}^{-1}$\\
$e_1$ & $-8.35$ m.s${}^{-1}$.rad${}^{-1}$ &$e_2$ & $-0.069$ m${}^{-1}$.kg${}^{-1}$ \\
$b$ & $-0.43$ s${}^{-1}$ & $T^e$ & $7572.4$ N.m \\
$q$ & $0.0012$ N${}^{-1}$.m${}^{-1}$
\end{tabular}
\vspace{0.2cm}
\caption{Coefficient values taken for the simulations.}
\label{tab:values}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|ccccc}
Type of control& $N = 0$ & $N = 1$ & $N = 2$ & $N =3$ & $\alpha_{max}$ \\
\hline
Feedforward & $0.2157$ & $0.2159$ & $0.2159$ & $0.2159$ & $1.2302$ \\
Dynamic & $0.4972$ & $0.4972$ & $1.000$ & $1.000$ & $1.2302$
\end{tabular}
\vspace{0.1cm}
\caption{Maximum decay-rate $\alpha$ using Theorem~\ref{sec:theoLin} at an order $N$. The feedforward controller refers to \eqref{eq:uncontrolled} while the dynamic controller is with \eqref{eq:controlled}. $\alpha_{max}$ is calculated using \eqref{eq:alphaMax}.}
\label{tab:alphaMax}
\end{table}
\begin{figure*}
\centering
\includegraphics[trim = 2.75cm 0cm 2.75cm 0cm, width=18cm]{simuLarge.eps}
\caption{Simulation on the feedforward and dynamic controlled system. }
\label{fig:simu}
\end{figure*}
\begin{figure}
\centering
\includegraphics[trim = 0.5cm 0cm 0.5cm 0cm, clip, width=8.5cm]{simu3D.eps}
\caption{Angle velocity $w_t$ in the situation with dynamic control.}
\label{fig:simu3D}
\end{figure}
\section{Conclusion}
We have studied the stability of a drilling mechanism whose dynamics can be modeled as a coupled ODE/PDE. Approximating this model around a desired equilibrium point leads to an interconnected ODE / damped wave equation. The stability of this coupled system is then studied using a Lyapunov approach and the stability condition is expressed in terms of LMIs. Using Bessel's inequality, we provided a hierarchy of LMI conditions for this kind of interconnected system with linear feedback controllers. Using only strictly proper hand-designed controllers, a control law has been derived, improving the decay-rate of the system. Further studies would investigate how to automatically design such controllers.
\section{ACKNOWLEDGMENTS}
The authors gratefully acknowledge anonymous reviewers' comments. This work is supported by the ANR project SCIDiS contract number 15-CE23-0014.
\bibliographystyle{plain}
\section{Introduction}
Linear regression is one of the oldest tools for data analysis \citep{galton1886regression} and it remains one of the most commonly-used as of today \citep{draper2014applied}, especially in social sciences \citep{agresti1997}, econometrics \citep{greene2003econometric} and medical research \citep{armitage2008statistical}. Moreover, many nonlinear models are either intrinsically linear in certain function spaces, e.g., kernel methods, dynamical systems, or can be reduced to solving a sequence of linear regressions, e.g., iteratively reweighted least squares for generalized linear models, gradient boosting for additive models and so on \citep[see][for a detailed review]{friedman2001elements}.
In order to apply linear regression to sensitive data such as those in social sciences and medical studies, it is often needed to do so such that the privacy of individuals in the data set is protected. Differential privacy \citep{dwork2006calibrating} is a commonly-accepted criterion that provides provable protection against identification and is resilient to arbitrary auxiliary information that might be available to attackers. In this paper, we focus on linear regression with $(\epsilon,\delta)$-differential privacy \citep{dwork2006our}.
\paragraph{Isn't it a solved problem?}
It might be a bit surprising why this is still a problem, since several general frameworks of differential privacy have been proposed that cover linear regression. Specifically, in the agnostic setting (without a data model), linear regression is a special case of differentially private empirical risk minimization (ERM), and its theoretical properties have been quite well-understood in the sense that the minimax lower bounds are known \citep{bassily2014private} and a number of algorithms \citep{chaudhuri2011differentially,kifer2012private} have been shown to match the lower bounds under various assumptions. In the statistical estimation setting where we assume the data is generated from a linear Gaussian model, linear regression is covered by the sufficient statistics perturbation approach for exponential family models \citep{dwork2010differential,foulds2016theory}, the propose-test-release framework \citep{dwork2009differential} as well as the subsample-and-aggregate framework \citep{smith2008efficient}, with all three approaches achieving the asymptotic efficiency in the fixed dimension ($d=O(1)$), large sample ($n\rightarrow \infty$) regime.
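To make the sufficient statistics perturbation idea concrete, a minimal non-adaptive sketch is given below. It privately releases $X^TX$ and $X^T\boldsymbol y$ with the Gaussian mechanism (splitting the budget evenly, which is one standard but not the only calibration) and then solves the noisy normal equations. It is meant only as an illustration of the approach mentioned above, not as the exact algorithms analyzed later in this paper.
\begin{verbatim}
import numpy as np

def ssp_linear_regression(X, y, eps, delta, x_bound, y_bound,
                          rng=np.random.default_rng(0)):
    """Illustrative (non-adaptive) sufficient statistics perturbation.

    Assumes ||x_i|| <= x_bound and |y_i| <= y_bound for every row, and that
    eps/2 < 1 so the classical Gaussian mechanism calibration applies.  Each
    of the two releases below is (eps/2, delta/2)-DP; basic composition then
    gives (eps, delta)-DP for the returned estimate.
    """
    d = X.shape[1]
    noise = np.sqrt(2.0 * np.log(2.5 / delta)) / (eps / 2.0)
    sigma_xx = (x_bound ** 2) * noise            # sensitivity of X^T X is ||X||^2
    sigma_xy = (x_bound * y_bound) * noise       # sensitivity of X^T y is ||X|| ||Y||

    Z = rng.normal(scale=sigma_xx, size=(d, d))
    noisy_xx = X.T @ X + (Z + Z.T) / np.sqrt(2.0)            # symmetric noise
    noisy_xy = X.T @ y + rng.normal(scale=sigma_xy, size=d)

    # A small ridge term keeps the perturbed normal equations solvable
    lam = 1e-6 + max(0.0, -np.linalg.eigvalsh(noisy_xx).min())
    return np.linalg.solve(noisy_xx + lam * np.eye(d), noisy_xy)
\end{verbatim}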
Despite these theoretical advances, very few empirical evaluations of these algorithms were conducted and we are not aware of a commonly-accepted best practice. Practitioners are often left puzzled about which algorithm to use for the specific data set they have.
The nature of differential privacy often requires them to set parameters of the algorithm (e.g., how much noise to add) according to the diameter of the parameter domain, as well as properties of a hypothetical worst-case data set, which often leads to an inefficient use of their valuable data.
The main contribution of this paper is threefold:
\begin{enumerate}
\item We consolidated many bits and pieces from the literature and clarified the price of differential privacy in statistical estimation and statistical learning.
\item We carefully analyzed One Posterior Sample (\textsc{OPS}{}) and Sufficient Statistics Perturbation (\textsc{SSP}{}) for linear regression and proposed simple modifications of them into adaptive versions: \textsc{AdaOPS}{} and \textsc{AdaSSP}{}. Both work near optimally for every problem instance without any hyperparameter tuning.
\item We conducted extensive real data experiments to benchmark existing techniques and concluded that the proposed techniques give rise to a more favorable privacy-utility tradeoff relative to existing methods.
\end{enumerate}
\paragraph{Outline of this paper.}
In Section~\ref{sec:setup} we will describe the problem setup and explain differential privacy. In Section~\ref{sec:priorwork}, we will survey the literature and discuss existing algorithms.
Then we will propose and analyze our new method \textsc{AdaSSP}{} and \textsc{AdaOPS}{} in Section~\ref{sec:main} and conclude the paper with experiments in Section~\ref{sec:exp}.
\section{Notations and setup}\label{sec:setup}
Throughout the paper we will use $X\in\mathbb{R}^{n\times d}$ and $\boldsymbol y\in \mathbb{R}^n$ to denote the design matrix and response vector. These are collections of data points $(x_1,y_1),...,(x_n,y_n)\in \mathcal{X}\times \mathcal{Y}$.
We use $\|\cdot\|$ to denote Euclidean norm for vector inputs, $\ell_2$-operator norm for matrix inputs. In addition, for set inputs, $\|\cdot\|$ denotes the radius of the smallest Euclidean ball that contains the set. For example, $\|\mathcal{Y}\| = \sup_{y\in \mathcal{Y}} |y|$ and $\|\mathcal{X}\| = \sup_{x\in\mathcal{X}}\|x\|$. Let $\Theta$ be the domain of coefficients. Our results do not require $\Theta$ to be compact but existing approaches often depend on $\|\Theta\|$.
$\lesssim$ and $\gtrsim$ denote greater than or smaller to up to a universal multiplicative constant, which is the same as the big $O(\cdot)$ and the big $\Omega(\cdot)$. $\tilde{O}(\cdot)$ hides at most a logarithmic term. $\prec$ and $\succ$ denote the standard semidefinite ordering of positive semi-definite (psd) matrices. $\cdot\vee\cdot$ and $\cdot \wedge \cdot$ denote the bigger or smaller of the two inputs.
We now define a few data dependent quantities. We use $\lambda_{\min }(X^TX)$ (abbv. $\lambda_{\min }$) to denote the smallest eigenvalue of $X^TX$, and to make the implicit dependence of this quantity on $d$ and $n$ clear, we define
$\alpha := \lambda_{\min }\frac{d}{n\|\mathcal{X}\|^2}.$
One can think of $\alpha$ as a normalized smallest eigenvalue of $X^TX$ such that $0 \leq \alpha\leq 1$. Also, $1/\alpha$ is closely related to the condition number of $X^TX$.
Define the least square solution
$\theta^* = (X^TX)^{\dagger}X^T\boldsymbol y$. It is the optimal solution to
$
\min_{\theta} \frac{1}{2}\|\boldsymbol y-X \theta\|^2 =: F(\theta).
$
Similarly, we use $\theta^*_\lambda = (X^TX + \lambda I)^{-1}X^T\boldsymbol y$ to denote the optimal solution to the ridge regression objective $F_\lambda(\theta)= F(\theta) + \lambda \|\theta\|^2$.
In addition, we denote the global Lipschitz constant of $F$ as $L^*: = \|\mathcal{X}\|^2\|\Theta\|+\|\mathcal{X}\|\|\mathcal{Y}\|$ and data-dependent local Lipschitz constant at $\theta^*$ as $L := \|\mathcal{X}\|^2\|\theta^*\|+\|\mathcal{X}\|\|\mathcal{Y}\|$. Note that when $\Theta =\mathbb{R}^d$, $L^* = \infty$, but $L$ will remain finite for every given data set.
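All of the quantities above are directly computable from a given data set; the short sketch below (on synthetic data, purely for illustration) spells out these definitions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.5 * rng.normal(size=n)

X_bound = np.linalg.norm(X, axis=1).max()        # ||X||: radius of the ball containing the rows
Y_bound = np.abs(y).max()                        # ||Y||
lam_min = np.linalg.eigvalsh(X.T @ X).min()      # smallest eigenvalue of X^T X
alpha   = lam_min * d / (n * X_bound**2)         # normalized smallest eigenvalue, 0 <= alpha <= 1

theta_star  = np.linalg.lstsq(X, y, rcond=None)[0]                       # least squares solution
theta_ridge = np.linalg.solve(X.T @ X + 1.0*np.eye(d), X.T @ y)          # ridge solution, lambda = 1
L_local     = X_bound**2 * np.linalg.norm(theta_star) + X_bound*Y_bound  # local Lipschitz constant at theta*
print(alpha, L_local)
\end{verbatim}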
\paragraph{Metric of success.}
We measure the performance of an estimator $\hat{\theta}$ in two ways.
First, we consider the optimization error $F(\hat{\theta}) - F(\theta^*)$ in expectation or with probability $1-\varrho$. This is related to the prediction accuracy in the distribution-free statistical learning setting.
Second, we consider how well the coefficients can be estimated under the linear Gaussian model:
$$
\boldsymbol y = X\theta_0 + \mathcal{N}(0,\sigma^2 I_n)
$$
in terms of $\mathbb{E}[\|\hat{\theta}-\theta_0\|^2]$ or in some cases $\mathbb{E}[ \|\hat{\theta}-\theta_0\|^2 | E ] $ where $E$ is a high probability event.
The optimal error in either case will depend on the specific design matrix $X$, optimal solution $\theta^*$, the data domain $\mathcal{X},\mathcal{Y}$, the parameter domain $\Theta$ as well as $\theta_0,\sigma^2$ in the statistical estimation setting.
\paragraph{Differential privacy.}
We will focus on estimators that are differential private, as defined below.
\begin{definition}[Differential privacy \citep{dwork2006calibrating}]
We say a randomized algorithm $\mathcal{A}$ satisfies $(\epsilon,\delta)$-DP if for \emph{all} fixed data sets $(X,\boldsymbol y)$ and $(X',\boldsymbol y')$ that can be constructed by adding or removing one row $(x,y)$ from $(X,\boldsymbol y)$, and for any measurable set $\mathcal{S}$ of outputs of the algorithm,
$$
\mathbb{P}(\mathcal{A}((X,\boldsymbol y))\in \mathcal{S}) \leq e^{\epsilon} \mathbb{P}(\mathcal{A}((X',\boldsymbol y'))\in \mathcal{S}) + \delta.
$$
\end{definition}
Parameter $\epsilon$ represents the amount of privacy loss from running the algorithm and $\delta$ denotes a small probability of failure. These are user-specified targets to achieve and the differential privacy guarantee is considered meaningful if $\epsilon \leq 1$ and $\delta \ll 1/n$ \citep[see, e.g., Section 2.3.3 of ][for a comprehensive review]{dwork2014algorithmic}.
\paragraph{The pursuit for adaptive estimators.}
Another important design feature that we will mention repeatedly in this paper is \emph{adaptivity}. We call an estimator $\hat{\theta}$ \emph{adaptive} if it behaves optimally simultaneously for a wide range of parameter choices. Being adaptive is of great practical relevance because we do not need to specify the class of problems or worry about whether our specification is wrong \citep[see examples of adaptive estimators in e.g.,][]{donoho1995noising,birge2001gaussian}. \emph{Adaptivity} is particularly important for differentially private data analysis because often we need to decide the amount of noise to add by the size of the domain. For example, an adaptive algorithm will not rely on conservative upper bounds of $\theta_0$, or a worst case $\lambda_{\min }$ (which would be $0$ on any $\mathcal{X}$), and it can take advantage of favorable properties when they exist in the data set. We want to design an estimator that does not take these parameters as inputs and behaves nearly optimally for every fixed data set $X\in \mathcal{X}^n,\boldsymbol y\in \mathcal{Y}^n$ under a variety of configurations of $\|\mathcal{X}\|,\|\mathcal{Y}\|,\|\Theta\|$.
\section{A survey of prior work}\label{sec:priorwork}
In this section, we summarize existing theoretical results in linear regression with and without differential privacy constraints. We will start with lower bounds.
\subsection{Information-theoretic lower bounds}
\paragraph{Lower bounds under linear Gaussian model.}
Under the statistical assumption of linear Gaussian model $\boldsymbol y = X\theta_0 + \mathcal{N}(0,\sigma^2)$, the minimax risk for both estimation and prediction are crisply characterized for each fixed design matrix $X$:
\begin{equation}\label{eq:prediction_lowerbound}
\inf_{\hat{\theta}}\sup_{\theta_0\in\mathbb{R}^d} \mathbb{E}[ F(\hat{\theta}) - F(\theta_0)| X]
= \frac{d\sigma^2}{2},
\end{equation}
and if we further assume that $n\geq d$ and $X^TX$ is invertible (for identifiability), then
\begin{align}\label{eq: estimation_lowerbound}
\inf_{\hat{\theta}}\sup_{\theta_0\in\mathbb{R}^d} \mathbb{E}[\|\hat{\theta}-\theta_0\|^2_2 | X] = \sigma^2\mathrm{tr}[(X^TX)^{-1}] .
\end{align}
In the above setup, $\hat{\theta}$ is any measurable function of $\hat{y}$ (note that $X$ is fixed). These are classic results that can be found in standard statistical decision theory textbooks \citep[See, e.g., ][Chapter 13]{wasserman2013all}.
Under the same assumptions, the Cramer-Rao lower bound mandates that the covariance matrix of any unbiased estimator $\hat{\theta}$ of $\theta_0$ obeys
\begin{equation}\label{eq:cramer-rao}
\mathrm{Cov}(\hat{\theta}) \succ \sigma^2 (X^TX)^{-1}.
\end{equation}
This bound applies to every problem instance separately and also implies a sharp lower bound on the prediction variance on every data point $x$. More precisely, $\mathrm{Var}(\hat{\theta}^Tx) \geq \sigma^2 x^T(X^TX)^{-1}x$ for any $x$.
Minimax risk \eqref{eq:prediction_lowerbound}, \eqref{eq: estimation_lowerbound} and the Cramer-Rao lower bound \eqref{eq:cramer-rao} are simultaneously attained by $\theta^*$.
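A quick Monte Carlo check of \eqref{eq: estimation_lowerbound} for the OLS estimator can be carried out on an arbitrary synthetic fixed design (illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 200, 4, 1.5
X = rng.normal(size=(n, d))                       # fixed design
theta0 = rng.normal(size=d)
target = sigma**2 * np.trace(np.linalg.inv(X.T @ X))

errs = []
for _ in range(5000):
    y = X @ theta0 + sigma * rng.normal(size=n)
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    errs.append(np.sum((theta_hat - theta0)**2))
print(np.mean(errs), target)                      # the two numbers agree up to Monte Carlo error
\end{verbatim}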
\paragraph{Statistical learning lower bounds.}
Perhaps much less well-known, linear regression is also thoroughly studied in the distribution-free statistical learning setting, where the only assumption is that the data are drawn iid from some unknown distribution $\mathcal{P}$ defined on some compact domain $\mathcal{X}\times \mathcal{Y}$. Specifically, let the risk ($\mathbb{E}[\text{loss}]$) be
$$
R(\theta) = \mathbb{E}_{( x,y)\sim \mathcal{P}}[ {\textstyle \frac{1}{2}}(x^T\theta-y)^2] ={\textstyle \frac{1}{n}}\mathbb{E}_{(X,\boldsymbol y)\sim \mathcal{P}^n}[ F(\theta)].
$$
\citet{shamir2015sample} showed that when $\Theta$, $\mathcal{X}$ and $\mathcal{Y}$ are Euclidean balls,
\begin{equation}\label{eq:shamir_bound}
\begin{aligned}
&\inf_{\hat{\theta}}\sup_{\mathcal{P}} \left[\mathbb{E} [n\cdot R(\hat{\theta})] - \inf_{\theta\in\Theta}[n\cdot R(\theta)] \right]\\
\gtrsim& \min\{ n\|\mathcal{Y}\|^2, \|\Theta\|^2\|\mathcal{X}\|^2 + d \|\mathcal{Y}\|^2, \sqrt{n} \|\Theta\|\|\mathcal{X}\|\|\mathcal{Y}\| \}.
\end{aligned}
\end{equation}
where $\hat{\theta}$ is any measurable function of the data set $X,\boldsymbol y$ to $\Theta$ and the expectation is taken over the data generating distribution $X,\boldsymbol y\sim \mathcal{P}^n$. Note that to be compatible with other bounds that appear in this paper, we multiplied $R(\cdot)$ by a factor of $n$. Informally, one can think of $\|\mathcal{Y}\|$ as $\sigma$ in \eqref{eq:prediction_lowerbound} so both terms depend on $d\sigma^2$ (or $d\|\mathcal{Y}\|^2$), but the dependence on $\|\Theta\|\|\mathcal{X}\|$ is new for the distribution-free setting.
\citet{koren2015fast} later showed that this lower bound is matched up to a constant by Ridge Regression with $\lambda = 1$,
and both \citet{koren2015fast} and \citet{shamir2015sample} conjecture that ERM without additional regularization should attain the lower bound \eqref{eq:shamir_bound}. If the conjecture is true, then the unconstrained OLS is simultaneously optimal for all distributions supported on the smallest ball that contains all data points in $X,\boldsymbol y$ for any $\Theta$ being an $\ell_2$ ball with radius larger than $\|\theta^*\|$.
\paragraph{Lower bounds with $(\epsilon,\delta)$-privacy constraints.}
Suppose that we further require $\hat{\theta}$ to be $(\epsilon,\delta)$-differentially private; then there is an additional price to pay in terms of how accurately we can approximate the ERM solution. Specifically, the lower bounds on the \emph{empirical} excess risk for the differentially private ERM problem in \citep{bassily2014private} imply that for $\delta<1/n$ and sufficiently large $n$:
\begin{itemize}
\item[1.] There exists a triplet of $(\mathcal{X},\mathcal{Y}, \Theta)\subset \mathbb{R}^d\times \mathbb{R}\times \mathbb{R}^d$,
such that
\begin{equation}\label{eq:dp_lowerbound_lipschitz}
\begin{aligned}
\inf_{\hat{\theta}\text{ is }(\epsilon,\delta)\text{-DP}}\sup_{X\in \mathcal{X}^n,\boldsymbol y\in \mathcal{Y}^n} \left[F(\hat{\theta}) - \inf_{\theta\in\Theta}F(\theta)
\right]
\gtrsim \min\left\{n\|\mathcal{Y}\|^2, \frac{\sqrt{d} (\|\mathcal{X}\|^2\|\Theta\|^2 + \|\mathcal{X}\|\|\Theta\|\|\mathcal{Y}\|)}{\epsilon}\right\}.
\end{aligned}
\end{equation}
\item[2.] Consider the class of data sets $\mathcal{S}$ in which every data set $X\in \mathcal{S} \subset \mathcal{X}^n$ has inverse condition number $\alpha \geq \alpha^* \geq \frac{d^{1.5}(\|\mathcal{X}\|\|\Theta\| + \|\mathcal{Y}\|)}{n\|\mathcal{X}\|\|\Theta\|\epsilon}$\footnote{This requires $\lambda_{\min } \geq \sqrt{d}L/\epsilon$ for all data sets $X$.}.
There exists a triplet of $(\mathcal{X},\mathcal{Y}, \Theta)\subset \mathbb{R}^d\times \mathbb{R}\times \mathbb{R}^d$ such that
\begin{equation}\label{eq:dp_lowerbound_strongcvx}
\begin{aligned}
\inf_{\hat{\theta}\text{ is }(\epsilon,\delta)\text{-DP}}\sup_{X\in \mathcal{S},\boldsymbol y\in \mathcal{Y}^n} \left[F(\hat{\theta}) - \inf_{\theta\in\Theta}F(\theta)
\right]
\gtrsim \min\left\{n\|\mathcal{Y}\|^2, \frac{d^2 (\|\mathcal{X}\|\|\Theta\| + \|\mathcal{Y}\|)^2 }{n\alpha^* \epsilon^2}\right\}.
\end{aligned}
\end{equation}
\end{itemize}
These bounds are attained by a number of algorithms, which we will go over in Section~\ref{sec:exist_algs}.
Compared to the non-private minimax rates on prediction accuracy, the bounds differ in several aspects. First, neither prediction error rate in \eqref{eq:prediction_lowerbound} nor in \eqref{eq:shamir_bound} depends on whether the design matrix $X$ is well-conditioned, while $\alpha^*$ appears explicitly in \eqref{eq:dp_lowerbound_strongcvx}.
Second, the dependence on $\|\Theta\|\|\mathcal{X}\|,\|\mathcal{Y}\|,d,n$ is different, which makes it hard to tell whether the additional optimization error mandated by the privacy requirement is the limiting factor.
To clarify the relationships, we plot Shamir's lower bound \eqref{eq:shamir_bound} and the smaller of Bassily et al.'s differential privacy lower bounds \eqref{eq:dp_lowerbound_lipschitz} and \eqref{eq:dp_lowerbound_strongcvx} for all configurations of $d,n$ graphically in Figure~\ref{fig:price_of_DP}. We also use multiple lines to illustrate the shifts in these lower bounds when parameters such as $\epsilon$ and $\alpha^*$ change. In all figures $\delta$ is assumed to be $o(1/n)$ and logarithmic terms are dropped. The price of differential privacy is highlighted as a shaded area in the figures. Interestingly, in the first case when $\|\Theta\|$ is small (when $\|\mathcal{X}\|\|\Theta\|\asymp \|\mathcal{Y}\|$), a substantial price only occurs in the non-standard regime where $n<d$. Arguably this is acceptable because in that regime one should use Ridge regression or the Lasso anyway rather than OLS. In the case when $\|\Theta\|$ is large (when $\|\mathcal{X}\|\|\Theta\|\asymp d\|\mathcal{Y}\|$), the price is more substantial and it applies to all $n>d$ unless we can exploit the strong convexity in the data set. When we do, the cost only occurs for an interval of $n$ and eventually the cost of differential privacy becomes negligible relative to the minimax rate.
To the best of our knowledge, this is the first time the ``price of differential privacy'' for linear regression is discussed with a clear explanation of the dependence on all parameters of the problem.
\begin{figure} [p]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_of_dp_case1a}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_of_dp_case1b}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_of_dp_case2a}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_of_dp_case2b}
\end{subfigure}
\caption{Illustration of the lower bounds for non-private and private linear regression.}\label{fig:price_of_DP}
\bigskip
\centering
\includegraphics[width=0.44\textwidth]{figures/freedp_case1}
\includegraphics[width=0.44\textwidth]{figures/freedp_case2}
\caption{Illustration of the region of $\epsilon$ where DP can be obtained without losing the statistical learning minimax rate.}\label{fig:regions_PfF}
\end{figure}
The above discussion also allows us to address the following question.
\begin{center}
When is privacy \emph{for free} in statistical learning?
\end{center}
Specifically, what is the smallest $\epsilon$ such that an $(\epsilon,\delta)$-DP algorithm matches the minimax rate in \eqref{eq:shamir_bound}?
The answer really depends on the relative scale of $\|\mathcal{X}\|\|\Theta\|$ and $\|\mathcal{Y}\|$ and that of $n,d$. When $\|\mathcal{X}\|\|\Theta\|\asymp \|\mathcal{Y}\|$, \eqref{eq:dp_lowerbound_lipschitz} says that $(\epsilon,\delta)$-DP algorithms can achieve the non-private minimax rate provided that
$\epsilon \gtrsim \min\left\{ \frac{1}{\sqrt{d}}\vee\sqrt{\frac{d}{n}}, \sqrt{\frac{d^2}{n^{1.5}\alpha^*}} \vee \sqrt{\frac{d}{n\alpha^*}}\right\}. $
On the other hand, if $\|\mathcal{X}\|\|\Theta\|\asymp \sqrt{d}\|\mathcal{Y}\|$
\footnote{This is arguably the more relevant setting. Note that if $x\sim \mathcal{N}(0,I_d)$ and $\theta$ is fixed, then $x^T\theta = O_P(d^{-1/2}\|x\|\|\theta\|)$.}
and $n>d$, then we need
$\epsilon \gtrsim \min\left\{\sqrt{d}\vee\frac{d^{3/2}}{n}, \frac{d}{\sqrt{n\alpha^*}}\vee \frac{d^{3/2}}{n\sqrt{\alpha^*}} \right\}.$
The regions are illustrated graphically in Figure~\ref{fig:regions_PfF}.
In the first case, once $n\gtrsim d$ there is a large region where meaningful differential privacy (with $\epsilon\leq 1$ and $\delta= o(1/n)$) can be achieved without incurring a significant toll relative to \eqref{eq:shamir_bound}. In the second case, we need at least $n\gtrsim d^2$ to achieve ``privacy for free'' even in the most favorable case where $\alpha^*=1$. When $X$ can be rank-deficient, it is infeasible to achieve ``privacy for free'' no matter how large $n$ is.
Based on the results in Figure~\ref{fig:price_of_DP} and ~\ref{fig:regions_PfF}, it might be tempting to conclude that one should always prefer Case 1 over Case 2. This is unfortunately not true because the artificial restriction of the model class via a bounded $\|\Theta\|$ also weakens our non-private baseline. In other words, the best solution within a small $\Theta$ might be significantly worse than the best solution in $\mathbb{R}^d$.
In practice, it is hard to find a $\Theta$ with a small radius that fits all purposes\footnote{If $\|\Theta\| \ll \|\theta^*\|$ then the constraint becomes limiting. If $\|\theta^*\| \ll \|\Theta\|$ instead, then calibrating the noise according to $\|\Theta\|$ will inject more noise than necessary.} and it is unreasonable to assume $\alpha^*>0$. This motivates us to go beyond the worst case and come up with \emph{adaptive} algorithms that work without knowing $\|\theta^*\|$ and $\alpha$ while achieving the minimax rate for the class with $\|\Theta\| = \|\theta^*\|$ and $\alpha^*=\alpha$ (in hindsight).
\subsection{Existing algorithms and our contribution }\label{sec:exist_algs}
We now survey the following list of five popular algorithms in differentially private learning and highlight the novelty in our proposals \footnote{While we try to be as comprehensive as possible, the literature has grown massively and the choice of this list is limited by our knowledge and opinions.}.
\begin{enumerate}
\item Sufficient statistics perturbation (\textsc{SSP}{}) \citep{vu2009differential,foulds2016theory}: Release $X^TX$ and $X^T\boldsymbol y$ differentially privately and then output $\hat{\theta} = (\widehat{X^TX})^{-1}\widehat{X^T\boldsymbol y}$ (a minimal sketch is given right after this list).
\item Objective perturbation (\textsc{ObjPert}{}) \citep{kifer2012private}: $\hat{\theta} = \mathop{\mathrm{argmin}} F(\theta) + 0.5\lambda \|\theta\|^2 + Z^T\theta$, where $\lambda$ is chosen appropriately and $Z$ is an appropriately scaled iid Gaussian random vector.
\item Subsample and Aggregate (Sub-Agg) \citep{smith2008efficient,dwork2010differential}: Subsample many times, apply debiased MLE to each subset and then randomize the way we aggregate the results.
\item Posterior sampling (\textsc{OPS}{}) \citep{mir13,dimitrakakis2014robust,wang2015privacy,minami2016differential}: Output $\hat{\theta} \sim P(\theta) \propto e^{ - \gamma (F(\theta) + 0.5\lambda \|\theta\|^2)}$ with parameters $ \gamma,\lambda$.
\item \textsc{NoisySGD}{} \citep{bassily2014private}: Run SGD for a fixed number of iterations with additional Gaussian noise added to the stochastic gradient evaluated on one randomly-chosen data point.
\end{enumerate}
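To make the first item concrete, the following is a minimal NumPy sketch of \textsc{SSP}{}; the even split of the privacy budget and the Gaussian-mechanism noise multiplier are illustrative choices of ours and do not reproduce the exact calibration of the cited papers, and \texttt{x\_bound}, \texttt{y\_bound} stand for $\|\mathcal{X}\|$ and $\|\mathcal{Y}\|$.
\begin{verbatim}
import numpy as np

def ssp(X, y, x_bound, y_bound, eps, delta, rng=None):
    """Sufficient statistics perturbation: privatize X^T X and X^T y with the
    Gaussian mechanism (budget split in half), then solve the noisy normal
    equations."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    sigma = np.sqrt(2 * np.log(1.25 / (delta / 2))) / (eps / 2)  # noise multiplier
    Z = rng.standard_normal((d, d))
    Z = np.triu(Z) + np.triu(Z, 1).T              # symmetric noise for X^T X
    xtx_hat = X.T @ X + sigma * x_bound ** 2 * Z
    xty_hat = X.T @ y + sigma * x_bound * y_bound * rng.standard_normal(d)
    return np.linalg.solve(xtx_hat, xty_hat)
\end{verbatim}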
We omit detailed operational aspects of these algorithms and focus our discussion on their theoretical guarantees. Interested readers are encouraged to check out each paper separately.
These algorithms were analyzed under different scalings and assumptions. To ensure a fair comparison, we convert all results to our setting under a subset of the following assumptions.
\begin{table}[t]
\bigskip
\bigskip
\caption{Summary of optimization error bounds. This table compares the (expected or high-probability) additive suboptimality of different differentially private linear regression procedures relative to the (non-private) empirical risk minimizer $\theta^*$. In particular, the results for NoisySGD hold in expectation and everything else holds with probability $1-\varrho$ (hiding at most a logarithmic factor in $\sqrt{1/\varrho}$). Constant factors are dropped for readability.
}\label{tab:comparison}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cccp{5cm}}
\midrule
& $F(\hat{\theta}) - F(\theta^*)$ & Assumptions & Remarks\smallskip\\
\midrule
\multirow{2}{*}{NoisySGD} & $\frac{\sqrt{d\log(\frac{n}{\delta})}\|\mathcal{X}\|^2\|\Theta\|^2}{\epsilon}$ & A.1, A.2 & Theorem 2.4 (Part 1) of \citep{bassily2014private}.\smallskip\\
&$\frac{d^2 \log(\frac{n}{\delta})\|\Theta\|^2}{\alpha^* n\epsilon^2}$ & A.1, A.2, A.3 & Theorem 2.4 (Part 2) of \citep{bassily2014private} \smallskip\\
\midrule
\multirow{2}{*}{\textsc{ObjPert}{}} & $\frac{\sqrt{d\log(\frac{1}{\delta})}\|\mathcal{X}\|^2\|\Theta\|\|\theta^*\|}{\epsilon}$ & A.1, A.2 & Theorem 4 (Part 2) of \citep{kifer2012private}. \smallskip\\
& $\frac{d^2 \log(\frac{1}{\delta})\|\Theta\|^2}{\alpha^* n\epsilon^2}$ & A.1, A.2, A.3 & Theorem 5 \& Appendix E.2 of \citep{kifer2012private}. \\
\midrule
\textsc{OPS}{} & $\frac{d \|\mathcal{X}\|^2\|\Theta\|^2}{\epsilon}$ & A.1, A.2& Results for $\epsilon$-DP \citep{wang2015privacy}\smallskip\\
\midrule
\textsc{SSP}{} & $\frac{d^2\log(\frac{1}{\delta})\|\mathcal{X}\|^2\|\theta^*\|^2}{\alpha n\epsilon^2}$ & A.1 & Adaptive to $\|\theta^*\|,X,\alpha$, but requires $n =\Omega(\frac{d^{1.5}\log(4/\delta)}{\alpha \epsilon})$ \footnotemark.\smallskip\\
\midrule
\textsc{AdaOPS}{} \& \textsc{AdaSSP}{}& $\frac{\sqrt{d\log(\frac{1}{\delta})}\|\mathcal{X}\|^2\|\theta^*\|^2}{\epsilon}\wedge \frac{d^2 \log(\frac{1}{\delta})\|\theta^*\|^2}{\alpha n\epsilon^2}$ & A.1 & Adaptive in $\|\theta^*\|,X,\alpha$.\smallskip\\
\midrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\caption[Summary of estimation errors under the linear Gaussian model]{Summary of estimation error bounds under the linear Gaussian model. In the second column we compare the approximation of the MLE $\theta^*$ in mean squared error up to a universal constant.
In the third column, we compare the relative efficiency. The relative efficiency bounds are simplified with the assumption of $\alpha = \Omega(1)$, which implies that $\mathrm{tr}[(X^TX)^{-1}] = O(d^2n^{-1}\|\mathcal{X}\|^{-2})$ and $\mathrm{tr}[(X^TX)^{-2}] = O(dn^{-1}\|\mathcal{X}\|^{-2} \mathrm{tr}[(X^TX)^{-1}])$. $\tilde{O}(\cdot)$ hides
$\mathrm{polylog}(1/\delta)$ terms. }\label{tab:compare-efficiency}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccp{3.5cm}}
\midrule
&Approx. MLE: $\mathbb{E}\|\hat{\theta}-\theta^*\|^2$& Rel. efficiency: $\frac{\mathbb{E} \|\hat{\theta}-\theta_0\|^2}{\mathbb{E}\|\theta^* - \theta_0\|^2}$ &Remarks \smallskip\\
\midrule
Sub-Agg & $ O\left(\frac{\text{poly}(d,\|\Theta\|,\|\mathcal{X}\|,\alpha^{-1})}{\epsilon^{6/5}n^{6/5}}\right)$ & $ 1 + \tilde{O}(\frac{\text{poly}(d,\|\Theta\|,\|\mathcal{X}\|)}{n^{1/5}\epsilon^{6/5}})$ &$\epsilon$-DP, suboptimal in $n$, possibly also in $d$\citep{dwork2010differential}. \smallskip\\
\textsc{OPS}{} & $O(\frac{\|\mathcal{X}\|^2\|\Theta\|^2}{\epsilon})\mathrm{tr}[(X^TX)^{-1}]$ & $\tilde{O}(\frac{\|\mathcal{X}\|^2\|\Theta\|^2}{\epsilon \sigma^2})$ & $\epsilon$-DP, adaptive in $X$, but not asymptotically efficient \citep{wang2015privacy}. \smallskip\\
\textsc{SSP}{} &
$O\left( \frac{\log(\frac{1}{\delta}) \|\mathcal{X}\|^4\|\theta^*\|^2}{\epsilon^2} \mathrm{tr}[(X^TX)^{-2}] \right)$
&
$1+ \tilde{O}(\frac{d\|\mathcal{X}\|^2\|\theta_0\|^2}{n\epsilon^2\sigma^2} + \frac{d^3}{n^2\epsilon^2})$
& Adaptive in $\|\theta^*\|,X$, no explicit dependence on $\alpha$, but requires large $n$. \citep[Theorem 5.1]{sheffet2017differentially} \smallskip\\
\textsc{AdaOPS}{} \& \textsc{AdaSSP}{}& $O\left(\frac{d\log(\frac{1}{\delta}) \|\mathcal{X}\|^2\|\theta^*\|^2 }{\alpha n \epsilon^2}\mathrm{tr}[(X^TX)^{-1}] \right)$&
$1+\tilde{O}(\frac{d\|\mathcal{X}\|^2\|\theta_0\|^2}{n\epsilon^2\sigma^2}+\frac{d^3}{n^2\epsilon^2})$ & Adaptive in $\|\theta^*\|,X,\alpha$.\\
\midrule
\end{tabular}
}
\end{table}
\begin{enumerate}
\item[A.1] $\|\mathcal{X}\|$ is bounded, $\|\mathcal{Y}\|$ is bounded.
\item[A.2] $\|\Theta\|$ is bounded.
\item[A.3] All possible data sets $X$ obey that the smallest eigenvalue $\lambda_{\min }(X^TX)$ is greater than $\frac{n\|\mathcal{X}\|^2}{d}\alpha^*$.
\end{enumerate}
Note that A.3 is a restriction on the domain of the data set, rather than on the domain of individual data points in the data set of size $n$. While it is a little unconventional, it is valid to define differential privacy within such a restricted space of data sets. It is the same assumption that we need for the lower bound in \eqref{eq:dp_lowerbound_strongcvx} to be meaningful. As in \citet{koren2015fast}, we simplify the expressions of the bounds by assuming $\|\mathcal{Y}\|\leq \|\mathcal{X}\|\|\Theta\|$, and in addition, we assume that $\|\mathcal{Y}\| \lesssim \|\mathcal{X}\|\|\theta^*\|$.
Table~\ref{tab:comparison} summarizes the upper bounds on the optimization error of the aforementioned algorithms in comparison to our two proposals: \textsc{AdaOPS}{} and \textsc{AdaSSP}{}.
Comparing the rates to the lower bounds in the previous section, it is clear that NoisySGD and \textsc{ObjPert}{} both achieve the minimax rate in optimization error,
but their hyperparameter choices depend on the unknown $\|\Theta\|$ and $\alpha^*$. \textsc{SSP}{} is adaptive to $\alpha$ and $\|\theta^*\|$ but has a completely different type of issue --- it can fail arbitrarily badly in the regime covered by \eqref{eq:dp_lowerbound_lipschitz}, and even for well-conditioned problems, its theoretical guarantees only kick in when $n$ gets very large. Our proposed algorithms \textsc{AdaOPS}{} and \textsc{AdaSSP}{} are able to automatically switch between the two regimes and get the best of both worlds.
Table~\ref{tab:compare-efficiency} summarizes the upper bounds for estimation. The second column compares the approximation of $\theta^*$ in MSE and the third column summarizes the statistical efficiency of the DP estimators relative to the MLE $\theta^*$ under the linear Gaussian model.
All algorithms except \textsc{OPS}{} are asymptotically efficient. Among $(\epsilon,\delta)$-DP algorithms, \textsc{SSP}{} has the fastest convergence rate and does not explicitly depend on the smallest eigenvalue, but again its guarantees require $n$ to be large, while \textsc{AdaOPS}{} and \textsc{AdaSSP}{} work optimally (up to a constant) for all $n$.
\subsection{Other related work}
The problem of adaptive estimation is closely related to model selection \citep[see, e.g.,][]{birge2001gaussian}, and an approach using the Bayesian Information Criterion was carefully studied in the differentially private setting for the problem of $\ell_1$-constrained ridge regression by \citet{lei2016differentially}. Their focus is different from ours in that they care about inferring the correct model, while we take the distribution-free view.
Linear regression is also studied in many more specialized setups, e.g., high dimensional linear regression \citep{kifer2012private,talwar2014private,talwar2015nearly}, statistical inference \citep{sheffet2017differentially} and so on. For the interest of this paper, we focus on the standard regime of linear regression where $d<n$ and do not use sparsity or an $\ell_1$ constraint set to achieve a $\log(d)$ dependence. That said, we acknowledge that \citet{sheffet2017differentially} analyzed \textsc{SSP}{} under the linear Gaussian model (the third row in Table~\ref{tab:compare-efficiency}) and their techniques of adaptively adding regularization have inspired \textsc{AdaSSP}{}.
\section{Main results: adaptive private linear regression} \label{sec:main}
In this section, we present and analyze \textsc{AdaOPS}{} and \textsc{AdaSSP}{}, which achieve the aforementioned adaptive rate. The pseudo-code of these two algorithms is given in Algorithms~\ref{alg:self-tuning-AdaOPS} and~\ref{alg:adaSuffP}.
The idea of both algorithms is to release key data-dependent quantities differentially privately, and then use high-probability confidence intervals for these quantities to calibrate the noise to the privacy budget as well as to choose the ridge-regression hyperparameter $\lambda$ that achieves the smallest prediction error. Specifically, \textsc{AdaOPS}{} requires us to release both the smallest eigenvalue $\lambda_{\min }$ of $X^TX$ and the local Lipschitz constant $L := \|\mathcal{X}\|(\|\mathcal{X}\|\|\theta^*_\lambda\| + \|\mathcal{Y}\|)$, while \textsc{AdaSSP}{} only needs the smallest eigenvalue $\lambda_{\min }$.
\begin{algorithm}[t]
\caption{\textsc{AdaOPS}{}: One-Posterior Sample estimator with adaptive regularization}
\label{alg:self-tuning-AdaOPS}
\begin{algorithmic}
\INPUT{ Data $X$, $\boldsymbol y$. Privacy budget: $\epsilon$, $\delta$, Bounds: $\|\mathcal{X}\|, \|\mathcal{Y}\|$.
}
\STATE{1. Calculate the minimum eigenvalue $\lambda_{\text{min}}(X^TX)$.}
\STATE{2. Sample $Z\sim \mathcal{N}(0,1)$ and privately release \\
$\tilde{\lambda}_{\text{min}} = \max\left\{\lambda_{\text{min}} + \frac{\sqrt{\log(6/\delta)}}{\epsilon/4}Z - \frac{\log(6/\delta)}{\epsilon/4}, 0\right\}.$}
\STATE{ 3. Set $\bar{\epsilon}$ as the positive solution of the quadratic equation
$$
\bar{\epsilon}^2/ (2\log(6/\delta))+ \bar{\epsilon} - \epsilon/4 = 0.
$$}
\STATE{4. Set $\varrho=0.05$,
$
C_1 = \big(d/2 + \sqrt{d\log(1/\varrho)} + \log(1/\varrho)\big)\log(6/\delta)/\bar{\epsilon}^2,
$
$
C_2 = \log(6/\delta)/(\epsilon/4),
$
$
t_{\min } = \max\{\frac{\|\mathcal{X}\|^2(1+\log(6/\delta))}{2\epsilon}-\tilde{\lambda}_{\min },0\}
$
and solve
\begin{equation}\label{eq:adachoice_of_lambda}
\lambda = \mathop{\mathrm{argmin}}_{t \geq t_{\min }}\frac{\|\mathcal{X}\|^4C_1[1+\|\mathcal{X}\|^2/( t + \tilde{\lambda}_{\min })]^{2 C_2}}{ t + \tilde{\lambda}_{\min } } + t.
\end{equation}
which has a unique solution.
}
\STATE{5. Calculate
$
\hat{\theta} = (X^TX + \lambda I)^{-1}X^T\boldsymbol y.
$}
\STATE{6. Sample $Z\sim \mathcal{N}(0,1)$ and privately release} \\
$\Delta = \log(\|\mathcal{Y}\|+\|\mathcal{X}\|\|\hat{\theta} \|) + \frac{\log(1+\|\mathcal{X}\|^2/(\lambda+\tilde{\lambda}_{\min }))}{\epsilon/ (4\sqrt{\log(6/\delta)})} Z + \frac{\log(1+\|\mathcal{X}\|^2/(\lambda+\tilde{\lambda}_{\min }))}{\epsilon/(4\log(6/\delta))}$. Set $\tilde{L} := \|\mathcal{X}\|e^{\Delta}$.
\STATE{7. Calibrate noise by choosing
$\tilde{\epsilon}$ as the positive solution of the quadratic equation
\begin{equation}\label{eq:adachoice_of_eps}
\frac{\tilde{\epsilon}^2}{2} \left[\frac{1}{\log(6/\delta)}\frac{1+\log(6/\delta)}{\log(6/\delta)}\right] + \tilde{\epsilon} - \epsilon/2 = 0.
\end{equation}
and then set
$\gamma = \frac{(\tilde{\lambda}_{\min } +\lambda)\tilde{\epsilon}^2}{\log(6/\delta) \tilde{L}^2}.$ }
\OUTPUT{ $\tilde{\theta}\sim p(\theta|X,\boldsymbol y) \propto e^{-\frac{\gamma}{2} \left(\|\boldsymbol y - X\theta\|^2 + \lambda\|\theta\|^2\right)}.$}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{\textsc{AdaSSP}{}: Sufficient statistics perturbation with adaptive damping}
\label{alg:adaSuffP}
\begin{algorithmic}
\INPUT{ Data $X$, $\boldsymbol y$. Privacy budget: $\epsilon$, $\delta$, Bounds: $\|\mathcal{X}\|, \|\mathcal{Y}\|$.
}
\STATE{1. Calculate the minimum eigenvalue $\lambda_{\text{min}}(X^TX)$.}
\STATE{2.} Privately release $\tilde{\lambda}_{\text{min}} = \max\left\{\lambda_{\text{min}} + \frac{\sqrt{\log(6/\delta)}}{\epsilon/3}\|\mathcal{X}\|^2 Z - \frac{\log(6/\delta)}{\epsilon/3}\|\mathcal{X}\|^2, 0\right\}$, where $Z\sim \mathcal{N}(0,1)$.
\STATE{3. Set $\lambda = \max\{0, \frac{\sqrt{d \log(6/\delta)\log(2d^2/\varrho) }\|\mathcal{X}\|^2}{\epsilon/3} - \tilde{\lambda}_{\min }\}$}
\STATE{4. Privately release $\widehat{X^TX} = X^TX + \frac{\sqrt{\log(6/\delta)}\|\mathcal{X}\|^2}{\epsilon/3}Z$, where $Z\in\mathbb{R}^{d\times d}$ is a symmetric matrix with every element of its upper triangle sampled i.i.d.\ from $\mathcal{N}(0,1)$. }
\STATE{5. Privately release $\widehat{X^T\boldsymbol y} = X^T\boldsymbol y + \frac{\sqrt{\log(6/\delta)}\|\mathcal{X}\|\|\mathcal{Y}\|}{\epsilon/3}Z$, where $Z\sim \mathcal{N}(0, I_d)$. }
\OUTPUT{ $\tilde{\theta}= (\widehat{X^TX}+ \lambda I)^{-1} \widehat{X^T\boldsymbol y}$}
\end{algorithmic}
\end{algorithm}
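For readers who prefer code, the following is a direct NumPy transcription of Algorithm~\ref{alg:adaSuffP}. The variable names are ours, \texttt{x\_bound} and \texttt{y\_bound} stand for $\|\mathcal{X}\|$ and $\|\mathcal{Y}\|$, the failure probability $\varrho$ is set to $0.05$ by analogy with Step 4 of Algorithm~\ref{alg:self-tuning-AdaOPS} (an assumption, since the pseudo-code leaves it as a parameter), and no attempt is made at numerical robustness.
\begin{verbatim}
import numpy as np

def adassp(X, y, x_bound, y_bound, eps, delta, rho=0.05, rng=None):
    """Transcription of AdaSSP; rho is the failure probability (varrho)."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    logterm = np.log(6 / delta)
    # Steps 1-2: privately release the smallest eigenvalue of X^T X.
    lam_min = np.linalg.eigvalsh(X.T @ X).min()
    lam_min_priv = max(
        lam_min
        + np.sqrt(logterm) / (eps / 3) * x_bound ** 2 * rng.standard_normal()
        - logterm / (eps / 3) * x_bound ** 2, 0.0)
    # Step 3: adaptive ridge parameter.
    lam = max(0.0, np.sqrt(d * logterm * np.log(2 * d ** 2 / rho)) * x_bound ** 2
              / (eps / 3) - lam_min_priv)
    # Step 4: release X^T X with a symmetric Gaussian perturbation (Analyze Gauss).
    Z = rng.standard_normal((d, d))
    Z = np.triu(Z) + np.triu(Z, 1).T
    xtx_hat = X.T @ X + np.sqrt(logterm) * x_bound ** 2 / (eps / 3) * Z
    # Step 5: release X^T y.
    xty_hat = X.T @ y + (np.sqrt(logterm) * x_bound * y_bound / (eps / 3)
                         * rng.standard_normal(d))
    # Output: damped and perturbed normal equations.
    return np.linalg.solve(xtx_hat + lam * np.eye(d), xty_hat)
\end{verbatim}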
In both \textsc{AdaSSP}{} and \textsc{AdaOPS}{}, we choose $\lambda$ by minimizing an upper bound of
$
F(\tilde{\theta}) - F(\theta^*)
$
in the form of ``variance'' and ``bias''
$$
\tilde{O}(\frac{d\|\mathcal{X}\|^4\|\theta^*\|^2}{\lambda + \lambda_{\min }}) + \lambda \|\theta^*\|^2.
$$
Note that while $\|\theta^*\|^2$ cannot be privately released in general due to its unbounded sensitivity, it appears in both terms and therefore does not enter the decision process of finding the optimal $\lambda$ that minimizes the bound. This convenient feature follows from our assumption that $\|\mathcal{Y}\|\lesssim \|\mathcal{X}\|\|\theta^*\|$. Dealing with the general case involving an arbitrary $\|\mathcal{Y}\|$ is an intriguing open problem.
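For concreteness, write the hidden factor in the ``variance'' term as a constant $c$ (absorbing the logarithmic and privacy-dependent factors). Minimizing
$$
g(\lambda) = \frac{c\, d\, \|\mathcal{X}\|^4\|\theta^*\|^2}{\lambda + \lambda_{\min }} + \lambda \|\theta^*\|^2
$$
over $\lambda \geq 0$ amounts to solving $g'(\lambda)=0$, i.e.\ $(\lambda+\lambda_{\min })^2 = c\,d\,\|\mathcal{X}\|^4$, so the minimizer is $\lambda = \max\{0,\ \sqrt{c\,d}\,\|\mathcal{X}\|^2 - \lambda_{\min }\}$. The factor $\|\theta^*\|^2$ has cancelled, and the resulting expression has exactly the form of Step 3 of Algorithm~\ref{alg:adaSuffP} with $\lambda_{\min }$ replaced by its private release.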
A tricky situation for \textsc{AdaOPS}{} is that the choice of $\gamma$ depends on $\lambda$ through $\tilde{L}$, which is the local Lipschitz constant at the ridge regression solution $\theta^*_\lambda$. But the choice of $\lambda$ also depends on $\gamma$ since the ``variance'' term above is inversely proportional to $\gamma$. Our solution is to express $\tilde{L}$ (hence $\gamma$) as a function of $\lambda$ and solve the nonlinear univariate optimization problem \eqref{eq:adachoice_of_lambda}.
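We also note that the output step of Algorithm~\ref{alg:self-tuning-AdaOPS} can be carried out in closed form: since the exponent is quadratic in $\theta$, the output distribution is Gaussian, and completing the square gives
$$
\tilde{\theta} \sim \mathcal{N}\Big( (X^TX+\lambda I)^{-1}X^T\boldsymbol y,\ \tfrac{1}{\gamma}\,(X^TX+\lambda I)^{-1}\Big),
$$
that is, the ridge solution $\hat{\theta}$ from Step 5 plus correlated Gaussian noise whose covariance shrinks as $\gamma$ grows.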
We are now ready to state the main results.
\begin{theorem}\label{thm:adaops}
Algorithm~\ref{alg:self-tuning-AdaOPS} outputs $\tilde{\theta}$ which obeys that
\begin{enumerate}
\item[(i)] It satisfies $(\epsilon,\delta)$-DP.
\item[(ii)] Assume $\|\mathcal{Y}\|\lesssim \|\mathcal{X}\|\|\theta^*\|$. With probability $1-\varrho$,
$$F(\tilde{\theta}) - F(\theta^*) \leq
O\left(\frac{\sqrt{d+\log(\frac{1}{\varrho})}\|\mathcal{X}\|^2\|\theta^*\|^2}{\epsilon/\sqrt{\log(\frac{1}{\delta})}}\wedge \frac{d[d+\log(\frac{1}{\varrho})] \|\theta^*\|^2}{\alpha n\epsilon^2 / \log(\frac{1}{\delta})}\right).
$$
\item[(iii)] Assume that $\mathbf{y}|X$ obeys a linear Gaussian model and $X$ is full-rank. Then there is an event $E$ satisfying $\mathbb{P}(E)\geq 1-\delta/3$ and $E\independent \mathbf{y} | X$, such that
\begin{align*}
\mathbb{E}[\tilde{\theta} | X,E] = \theta_0 &&\text{ and }&&\mathrm{Cov}[\tilde{\theta} |X,E] \prec \left(1 + O\left(\frac{ \tilde{C} d \log(6/\delta)}{\sigma^2 \alpha n \epsilon^2}\right) \right)\sigma^2(X^TX)^{-1}
\end{align*}
where constant
$$\tilde{C} :=\|\mathcal{Y}\|^2+\|\mathcal{X}\|^2(\|\theta_0\|^2
+\sigma^2\mathrm{tr}[(X^TX)^{-1}]).$$
\end{enumerate}
\end{theorem}
The proof, deferred to Appendix~\ref{sec:proof_OPS}, makes use of a fine-grained DP analysis through the recent per-instance DP techniques \citep{wang2017per} and then converts the results to DP by releasing data-dependent bounds on $\alpha$ and on the magnitude of a ridge-regression output $\theta^*_\lambda$ with an adaptively chosen $\lambda$. Note that $\|\theta^*_\lambda\|$ does not have a bounded global sensitivity. The method to release it differentially privately (described in Lemma~\ref{lem:lipschitz_GS}) is part of our technical contribution.
The \textsc{AdaSSP}{} algorithm is simpler and enjoys slightly stronger theoretical guarantees.
\begin{theorem}\label{thm:adaSuffP}
Algorithm~\ref{alg:adaSuffP} outputs $\tilde{\theta}$ which obeys that
\begin{enumerate}
\item[(i)] It satisfies $(\epsilon,\delta)$-DP.
\item[(ii)] Assume $\|\mathcal{Y}\|\lesssim \|\mathcal{X}\|\|\theta^*\|$. With probability $1-\varrho$,
$$
F(\tilde{\theta}) - F(\theta^*) \leq
O\left(\frac{ \sqrt{d\log(\frac{d^2}{\varrho})}\|\mathcal{X}\|^2\|\theta^*\|^2 }{\epsilon/\sqrt{\log(\frac{6}{\delta})}} \wedge \frac{ \|\mathcal{X}\|^4\|\theta^*\|^2 \mathrm{tr}[(X^TX)^{-1}]}{\epsilon^2/ [\log(\frac{6}{\delta})\log(\frac{d^2}{\varrho}) ]}\right)
$$
\item[(iii)]
Assume that $\mathbf{y}|X$ obeys a linear Gaussian model and $X$ has a sufficiently large $\alpha$. Then there is an event $E$ satisfying $\mathbb{P}(E)\geq 1-\delta/3$ and $E\independent \mathbf{y} | X$, such that
$
\mathbb{E}[\tilde{\theta} | X, E] = \theta_0
$
and
\begin{align*}
\mathbb{E}[\|\tilde{\theta}-\theta_0\|^2 | X, E]
=\sigma^2\mathrm{tr}[(X^TX)^{-1}] + O\left(\frac{\tilde{C} \|\mathcal{X}\|^2\mathrm{tr}[(X^TX)^{-2}] }{\epsilon^2/\log(\frac{6}{\delta})} \right),
\end{align*}
with the same constant $\tilde{C}$ in Theorem~\ref{thm:adaops} (iii).
\end{enumerate}
\end{theorem}
The proof of Statement (i) is straightforward. Note that we release the eigenvalue $\lambda_{\text{min}}(X^TX)$, $X^T\boldsymbol y$ and $X^TX$ differentially privately, each with parameter $(\epsilon/3,\delta/3)$. For the first two, we use the Gaussian mechanism, and for $X^TX$, we use the Analyze-Gauss algorithm \citep{dwork2014analyze} with a symmetric Gaussian random matrix. The result then follows from the composition theorem of differential privacy. The proof of the second and third statements is provided in Appendix~\ref{app:suffpert}. The main technical challenge is to prove the concentration of the spectrum and the Johnson--Lindenstrauss-like distance-preserving properties of symmetric Gaussian random matrices (Lemma~\ref{lem:JL-ellipsoid}). We note that while \textsc{SSP}{} is an old algorithm, the analysis of its theoretical properties is new to this paper.
\paragraph{Remarks.} Both \textsc{AdaOPS}{} and \textsc{AdaSSP}{} match the smaller of the two lower bounds \eqref{eq:dp_lowerbound_lipschitz} and \eqref{eq:dp_lowerbound_strongcvx} for each problem instance. They are slightly different in that \textsc{AdaOPS}{} preserves the shape of the intrinsic geometry while \textsc{AdaSSP}{}'s bounds are slightly stronger as they do not explicitly depend on the smallest eigenvalue.
\section{Experiments}\label{sec:exp}
In this section, we conduct synthetic and real data experiments to benchmark the performance of \textsc{AdaOPS}{} and \textsc{AdaSSP}{} relative to existing algorithms we discussed in Section~\ref{sec:priorwork}. \textsc{NoisySGD}{} and Sub-Agg are excluded because they are dominated by \textsc{ObjPert}{} and an $(\epsilon,\delta)$-DP version of \textsc{OPS}{}, which we describe in Appendix~\ref{app:epsdelta_OPS}.
The code to reproduce all experimental results in this paper is available at \url{https://github.com/yuxiangw/optimal_dp_linear_regression}.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/bike_balanced}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/buzz_balanced}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/elevators_balanced}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/housing_balanced}
\end{subfigure}
\caption{\small Example of results of differentially private linear regression algorithms on UCI data sets for a sequence of $\epsilon$. Reported on the y-axis are the cross-validation prediction errors (MSE) and their confidence intervals. }\label{fig:UCI_dataset}
\end{figure}
\begin{figure} [t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Gaussian_MSE_eps_0_1}
\caption{Estimation MSE at $\epsilon=0.1$}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Gaussian_MSE_eps_1}
\caption{Estimation MSE at $\epsilon=1$}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Gaussian_RelativeEfficiency_eps_0_1}
\caption{Rel. efficiency at $\epsilon=0.1$}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/Gaussian_RelativeEfficiency_eps_1}
\caption{Rel. efficiency at $\epsilon=1$}
\end{subfigure}
\caption{\small Example of differentially private linear regression under the linear Gaussian model with an increasing data size $n$. We simulate the data with $d=10$ and $\theta_0$ drawn from the uniform distribution on $[0,1]^d$. We generate $X\in \mathbb{R}^{n\times d}$ as a Gaussian random matrix and then generate $\boldsymbol y \sim \mathcal{N}(X\theta_0, I_n)$. We used $\epsilon = 1$ and $\epsilon=0.1$, both with $\delta = 1/n^2$. The results clearly illustrate the asymptotic efficiency of the proposed approaches.}\label{fig:gaussian}
\end{figure}
\paragraph{Prediction accuracy in UCI data sets experiments. }
The first set of experiments is on training linear regression on a number of UCI regression data sets.
Standard $z$-scoring is performed and all data points are normalized to have a Euclidean norm of $1$ as a preprocessing step.
Results on four of the data sets are presented in Figure~\ref{fig:UCI_dataset}. As we can see, \textsc{SSP}{} is unstable for small data. \textsc{ObjPert}{} suffers from a pre-defined bound $\|\Theta\|$ and does not converge to the non-private solution even with a large $\epsilon$. \textsc{OPS}{} performs well but still does not take advantage of the strong convexity that is intrinsic to the data set. \textsc{AdaOPS}{} and \textsc{AdaSSP}{}, on the other hand, are able to nicely interpolate between the trivial solution and the non-private baseline, and perform as well as or better than the baselines for all $\epsilon$. More detailed quantitative results on all the 36 UCI data sets are presented in Table~\ref{tab:uci_eps=0.1} and Table~\ref{tab:uci_eps=1} for $\epsilon=0.1,\delta=\min\{1e-6,1/n^2\}$ and $\epsilon = 1,\delta=\min\{1e-6,1/n^2\}$ respectively in Appendix~\ref{sec:realdata_table}.
\paragraph{Parameter estimation under linear Gaussian model. }
To illustrate the performance of the algorithms under standard statistical assumptions, we also benchmarked the algorithms on synthetic data generated by a linear Gaussian model. The results, shown in Figure~\ref{fig:gaussian}, illustrate that as $n$ gets large, \textsc{AdaOPS}{} and \textsc{AdaSSP}{} with $\epsilon=0.1$ and $\epsilon = 1$ converge to the maximum likelihood estimator at a rate faster than the statistical rate at which the MLE itself estimates $\theta_0$; therefore, at least for large $n$, differential privacy comes for free. Note that there is a gap between \textsc{SSP}{} and \textsc{AdaSSP}{} for large $n$. This can be thought of as a cost of adaptivity, since \textsc{AdaSSP}{} needs to spend a portion of its privacy budget to release $\lambda_{\min }$, which \textsc{SSP}{} does not; this could be mitigated by a more careful splitting of the privacy budget.
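For reference, the synthetic setup described in the caption of Figure~\ref{fig:gaussian} can be reproduced along the following lines; the grid of sample sizes and the seed are our choices, and a differentially private estimator (e.g., the \textsc{AdaSSP}{} sketch in Section~\ref{sec:main}) would be evaluated in place of the comment.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 10
theta0 = rng.uniform(0.0, 1.0, size=d)          # theta_0 ~ Uniform[0,1]^d
for n in [100, 1000, 10000, 100000]:            # illustrative grid of sample sizes
    X = rng.standard_normal((n, d))             # Gaussian random design
    y = X @ theta0 + rng.standard_normal(n)     # y ~ N(X theta_0, I_n)
    theta_mle = np.linalg.solve(X.T @ X, X.T @ y)
    mse_mle = np.sum((theta_mle - theta0) ** 2)
    # A DP estimate would be computed here and compared via the relative
    # efficiency E||theta_dp - theta0||^2 / E||theta_mle - theta0||^2.
    print(n, mse_mle)
\end{verbatim}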
\section{ Conclusion}
In this paper, we presented a detailed case-study of the problem of differentially private linear regression. We clarified the relationships between various quantities of the problem as they appear in the private and non-private information-theoretic lower bounds. We also surveyed the existing algorithms and highlighted that the main drawback of using these algorithms relative to their non-private counterparts is that they cannot adapt to data-dependent quantities. This is particularly relevant for linear regression, where the ordinary least squares algorithm works optimally for a large class of different settings.
We proposed \textsc{AdaOPS}{} and \textsc{AdaSSP}{} to address this issue and showed that they both work on unbounded domains. Moreover, they smoothly interpolate between the two regimes studied in \citet{bassily2014private} and behave nearly optimally for every instance. We tested the two algorithms on 36 real-life data sets from the UCI machine learning repository and observed significant improvements over popular algorithms for almost all configurations of $\epsilon$.
Future work includes extending the result beyond linear regression and releasing off-the-shelf packages for adaptive differentially private learning.
\section*{Acknowledgements}
The author thanks the anonymous reviewers for helpful feedback and Zichao Yang for sharing the 36 UCI regression data sets used in \citep{yang2015carte}.
\section{Introduction}
Define the fractional maximal function as
\begin{equation*}
M_\alpha f (x) = \sup_{t > 0} \left \lvert \frac{t^{\alpha}}{|B(x,t)|} \int_{B(x,t)} f \, dy \right \rvert
\end{equation*}
for $\alpha \in [0,n)$. The study of its regularity properties was initiated in \cite{KS2003} by Kinnunen and Saksman. They proved the pointwise inequality
\begin{equation}\label{Kinnunen Saksman}
|\nabla M_{\alpha}|f|(x)| \leq C M_{\alpha - 1}|f|(x), \quad \alpha \geq 1
\end{equation}
with a constant $C$ only depending on the dimension $n$ and $\alpha$. This inequality has two interesting consequences. First, $M_{\alpha}$ maps $L^{p}(\mathbb{R}^{n})$ into a first order Sobolev space. Second, as noted by Carneiro and Madrid \cite{CM2015}, the pointwise bound together with the Gagliardo--Nirenberg--Sobolev inequality implies
\begin{equation}
\label{intro:eq2}
\norm{\nabla M_{\alpha} f }_{L^{p}} \leq C \norm{ M_{\alpha-1} f }_{L^{p}} \leq C \norm{ f }_{L^{n/(n-1)}} \leq C \norm{\nabla f}_{L^{1}}
\end{equation}
for $\alpha\geq 1$ and $p = n/(n-\alpha)$. When $\alpha \in (0,1)$, inequality \eqref{Kinnunen Saksman} no longer helps, and the conclusion of \eqref{intro:eq2} is an open problem. When $M_{\alpha}$ is replaced by its non-centred variant, the analogous result is due to Carneiro and Madrid \cite{CM2015} for $n = 1$ and Luiro and Madrid \cite{LM2017} for $f$ radial and $n \geq 2$. For other aspects of the regularity of fractional maximal functions, see e.g.~ \cite{HKKT2015,HKNT2013} and the references therein.
Our first result is a smooth variant of the inequality \eqref{intro:eq2} for $\alpha \in (0,1)$ and $n \geq 2$. Define the lacunary fractional maximal function as
\[M_{\alpha}^{lac}f (x) := \sup_{ k \in \mathbb{Z} } \left \lvert \frac{2^{\alpha k}}{|B(0,2^{k})|} \int_{B(x,2^{k})} f \, dy \right \rvert. \]
For integrable $\varphi$ and $t > 0$, let $\varphi_t(x)=t^{-n}\varphi(x/t)$. Assume, for simplicity, that $\varphi$ is a positive Schwartz function and define the smooth fractional maximal function as
\[M_{\alpha}^{\varphi}f(x) = \sup_{t > 0} t^{\alpha} | \varphi_{t} * f(x) | .\]
The smoothness requirement can be substantially relaxed, see $\S\S$\ref{sec:smooth}.
\begin{theorem}
\label{thm:main solid balls}
Let $f \in \dot{\mathrm{BV}}(\mathbb{R}^n)$ and suppose that $\alpha \in (0,1)$ and $n \geq 2$. Then for $\mathcal{M}_\alpha \in \{ M_\alpha^{lac}, M_\alpha^\varphi\}$, there exists a constant $C$ only depending on dimension $n$, $\alpha$ and $\varphi$ such that
\[\norm{\nabla \mathcal{M}_\alpha f}_{L^{p}(\mathbb{R}^n)} \leq C | f |_{\mathrm{BV}(\mathbb{R}^n)} \]
for $p = n/(n-\alpha)$.
\end{theorem}
The proof of this theorem uses the $g$-function technique familiar from Stein's spherical maximal function theorem. The idea is to follow the scheme behind the short estimate \eqref{intro:eq2}. The Fourier transform is used to find a substitute for \eqref{Kinnunen Saksman} at the level of Besov spaces, from which the conclusion then follows by a refined Gagliardo--Nirenberg--Sobolev type embedding theorem \cite{CDDD2003}. The last step requires $n > 1$, whereas the smoothness condition on the maximal operator is imposed by the Fourier analysis. We stress that the right-hand side of the conclusion is the BV norm instead of the considerably larger homogeneous Hardy--Sobolev norm one might first expect. The detailed proof is given in $\S$\ref{sec:solid balls}, and all necessary definitions can be found in $\S$\ref{sec:definitions}. To the best of our knowledge, Fourier transform techniques have not been exploited effectively in the study of endpoint regularity of maximal functions prior to this work.
The background of the question \eqref{intro:eq2} goes back to Kinnunen's theorem \cite{Kinnunen1997,KL1998} asserting that the Hardy--Littlewood maximal function is bounded in $W^{1,p}$ with $p > 1$. His result was later extended to $W^{1,1}$ in the form
\begin{equation}
\label{intro:w11}
\| \nabla M f \|_{L^1(\mathbb{R}^n)} \leq C \| \nabla f \|_{L^1(\mathbb{R}^n)}
\end{equation}
by Tanaka \cite{Tanaka2002} when $n = 1$ and Luiro \cite{Luiro2017} when $n \geq 2$ and $f$ is radial. Here $M$ is the non-centred Hardy--Littlewood maximal function. The same inequality for $M_0$ (centred maximal function) was established by Kurka \cite{Kurka2010} when $n = 1$, and the question is open in dimensions $n \geq 2$. Kurka's theorem can be seen as the limiting case $\alpha = 0$ of \eqref{intro:eq2}.
In connection to \eqref{intro:w11}, maximal functions with smooth convolution kernels are better understood than the Hardy--Littlewood maximal function. Inequality \eqref{intro:w11} can be proved with sharp constant for many smooth kernels \cite{CFS2015,CS2013} whereas the best constant for centred Hardy--Littlewood maximal function is not known (for the non-centred maximal function \cite{AP2007} as well as for certain non-tangential maximal functions \cite{Ramos2017} the constant is one). Similarly, a Hardy--Sobolev bound corresponding to \eqref{intro:w11} is known for smooth maximal functions in all dimensions \cite{PPSS2017} whereas the progress for the standard maximal function is limited to the case of radial functions \cite{Luiro2017}. Finally, there are metric measure spaces where Kinnunen's theorem does not hold but suitable smoother maximal functions satisfy a Sobolev bound \cite{AK2010}. Theorem \ref{thm:main solid balls} can be seen as a part of this line of research attempting to understand \eqref{intro:eq2} and \eqref{intro:w11} first in the case of smooth maximal functions.
The second part of the paper studies the regularity of the spherical fractional maximal function
\begin{equation}
\label{sph:definition}
S_{\alpha} f(x) := \sup_{t > 0} |t^{\alpha} \sigma_t * f(x)| ,
\end{equation}
where $\sigma_t$ is the normalized surface measure of the sphere $\partial B(0,t)$. For $\alpha = 0$, one recovers the spherical maximal function of Stein \cite{Stein1976} ($n \geq 3$) and Bourgain \cite{Bourgain1986} ($n = 2$). For $\alpha > 0$, $L^{p} \to L^{q}$ bounds for this operator follow from the work of Schlag \cite{Schlag1997} ($n=2$) and Schlag and Sogge \cite{SS1997} ($n \geq 3$). It is natural to ask if the fractional spherical maximal function has regularizing properties similar to \eqref{Kinnunen Saksman}. Our result in this direction is the following.
\begin{theorem}
\label{thm:spherical_technical}
Let $n \geq 5$, $n/(n-2) < p \leq q < \infty$ and
\begin{equation*}
\alpha(p):=
\begin{cases}
\frac{n^2-2n-1}{n-1} -\frac{2n}{p(n-1)} & \text{if} \quad \frac{n}{n-2} < p \leq \frac{n^2+1}{n^2-2n-1} \\
\frac{n-1}{p} & \text{if} \quad \frac{n^2+1}{n^2-2n-1} < p \leq n-1 .
\end{cases}
\end{equation*}
Assume that
\[\frac{1}{q}=\frac{1}{p}- \frac{\alpha-1}{n}, \qquad 1 \leq \alpha < \alpha(p). \]
Then, for any $f \in L^p$, $S_{\alpha} f$ is weakly differentiable and
\[\left \lVert \nabla S_{\alpha} f\right \rVert_{L^{q}} \lesssim \norm{f}_{L^{p}}.\]
\end{theorem}
The proof of this theorem is also based on the use of the Fourier transform. When $q \geq 2$, we study $L^{p} \to L^{q}$ estimates for a maximal multiplier operator in analogy with the estimates in \cite{Schlag1997,SS1997,Lee2003} for the spherical maximal function. Since Theorem \ref{thm:spherical_technical} is a statement at the derivative level, the corresponding multiplier enjoys worse Fourier decay than $\widehat{\sigma}$. This forces us to study the behavior in $L^{p}$ for large $p$ more carefully than is needed to understand the $L^{p}$ mapping properties of the spherical maximal function. We take advantage of the sharp local smoothing estimate for the wave equation in $L^{n-1}(\mathbb{R}^{n})$, which is available whenever $n \geq 5$ thanks to recent advances in decoupling theory (see \cite{BD2015,Garrigos2009,Garrigos2010,Laba2002,Wolff2000} and \cite{BHS2018,HNS2011,LS2013,MSS1992,Sogge1991} for more on decoupling and local smoothing estimates). We remark that results for $n=4$ could be obtained upon further progress on local smoothing estimates.
\bigskip
\noindent \textbf{Acknowledgements.}
We would like to thank Juha Kinnunen for his question about regularising properties of the fractional spherical maximal function, which led to this work. We also thank Jonathan Hickman for discussions on the spherical maximal function and local smoothing estimates.
\section{Notation and Preliminaries}\label{sec:definitions}
\subsection{Notation}
All function spaces are defined over $\mathbb{R}^{n}$, and it is written, for instance, $L^{2}$ for $L^{2}(\mathbb{R}^{n})$. The letter $C$ denotes a generic constant whose value may vary from line to line. Its dependency on other parameters will be clear from the context. The notation $A \lesssim B$ is used if $A \leq C B$ for such a constant $C$, and similarly $A \gtrsim B$ and $A \sim B$. The Fourier transform of a tempered distribution $f \in \mathcal{S}'$ is denoted by $\widehat{f}$ or $\mathcal{F}(f)$ and its inverse Fourier transform by $\mathcal{F}^{-1}(f)$ or $f^{\vee}$; in particular for a Schwartz function $f \in \mathcal{S}$,
\[\widehat{f}(\xi) = \mathcal{F}f (\xi) = \int_{\mathbb{R}^{n}} e^{ - 2 \pi i x \cdot \xi} f(x) \, dx .\]
Given any multi-index $\gamma \in {\mathbb N}^n$, $\partial^\gamma$ denotes
$$
\partial^\gamma f =\partial_{x_1}^{\gamma_1} \cdots \partial_{x_n}^{\gamma_n} f.
$$
For any $\alpha \in {\mathbb R}$, the notation $(-\Delta)^{\alpha/2}$ is taken to denote the operator associated to the Fourier multiplier $|\xi|^\alpha$.
\subsection{Besov spaces and Littlewood--Paley pieces}
\label{subsec:besov}
Given a smooth function $\psi \in C^\infty_c$ supported in $\{ \xi \in {\mathbb R}^n: 2^{-1} < |\xi | < 2\}$ and such that
$$
\sum_{j \in \mathbb{Z}} \psi(2^{-j} \xi) = 1
$$
for $\xi \neq 0$, let $f_j$ denote the Littlewood--Paley piece of $f$ at frequency $2^{j}$, given by $\widehat{f_j}=\widehat{f}\psi(2^{-j} \cdot)$. The Besov seminorm for $\dot{B}_{p,q}^{s}$ for $s \in \mathbb{R}$ and $p,q \in [1,\infty] $ is defined as
\[\norm{f}_{\dot{B}_{p,q}^{s}} = \Big( \sum_{j \in \mathbb{Z}} 2^{q j s} \norm{f_j}_{L^{p}}^{q} \Big)^{1/q};\]
the seminorms defined through different Littlewood-Paley functions $\psi$ are comparable (see \cite[Chapter 6]{BL1976} for further details).
\subsection{BV space}
A function $f$ is said to have bounded variation, and denoted by $f \in \dot{\mathrm{BV}}$, if its variation
$$
|f|_{\mathrm{BV}}:=\sup \Big\{ \int_{{\mathbb R}^n} f \: \mathrm{div} (g); \:\: g \in C^1_c({\mathbb R}^n, {\mathbb R}^n), \: \| g\|_\infty \leq 1 \Big\}
$$
is finite, where $g=(g_1, \dots, g_n)$ and the $L^\infty$ norm is defined by
$$
\| g \|_{\infty}:= \| (\sum_{i=1}^n g_i^2 )^{1/2} \|_{L^\infty}.
$$
Note that if $f$ belongs to space $W^{1,1}$, integration by parts allows one to identify
$$
|f|_{\mathrm{BV}}=\int_{{\mathbb R}^n} |\nabla f|.
$$
See \cite[Chapter 5]{EG1992} for more.
\subsection{Finite differences}
\label{subsec:finitedifference}
Denote
\[D^{h}f(x) = \frac{f(x+h)- f(x)}{|h|}.\]
Recall (see e.g~\cite[Chapter 5, $\S$5.8, Theorem 3]{Evans2010}) that if there is a finite constant $A$ such that
\[ \left \lVert D^{h}f \right \rVert_{L^{p}} \leq A \]
for all $h \in \mathbb{R}^{n}$, then the weak derivatives of $f$ exist and
\[\norm{\nabla f }_{L^{p}} \leq C A \]
for a constant $C$ only depending on the dimension $n$. If $S$ is a sublinear operator that commutes with translations, then
\[ |D^{h}Sf| \leq |S D^{h}f|.\]
In particular, if $S$ is a maximal function and $f$ is a positive function, this allows us to reduce the question of differentiability to the boundedness of a maximal multiplier operator acting on Schwartz functions $f$.
\section{Endpoint results} \label{sec:solid balls}
\subsection{A model result}\label{subsec:model}
It is instructive to start first with a model case for Theorem \ref{thm:main solid balls}. This consists in the study of the single scale version of the (rough) fractional maximal function $M_{\alpha}$, defined as
\[M_{\alpha}^{*}f(x) = \sup_{1 \leq t \leq 2 } \left \lvert \frac{1}{|B(x,t)|} \int_{B(x,t)} f(y) \, dy \right \rvert . \]
\begin{theorem}
\label{thm:model_rough}
Let $0<\alpha < 1$, $p = n/(n-\alpha)$ and $n \geq 2$. Then there is a constant $C$ only depending on dimension $n$ and $\alpha$ such that for any $f \in \dot{B}_{p,1}^{1-\alpha}$
\[
\norm{ M_{\alpha}^{*} D^h f}_{L^{p}} \leq C \norm{f}_{\dot{B}_{p,1}^{1-\alpha}}
\]
uniformly on $h \in \mathbb{R}^{n}$.
\end{theorem}
By the discussion in $\S\S$\ref{subsec:finitedifference}, Theorem \ref{thm:model_rough} implies an $L^{p}$ bound for the gradient of $M_{\alpha}^{*}$. It will be shown in $\S\S$\ref{subsec:extensionFull} how the proof of the above estimate gives Theorem \ref{thm:main solid balls} for slightly smoother versions of the fractional maximal function, such as its lacunary version or maximal functions of convolution type with smooth kernels.
\begin{proof}
Write, for $f \in \mathcal{S}$,
\[M_{\alpha}^{*} (D^{h}f)(x) = \sup_{1 \leq t \leq 2} | \mathcal{F}^{-1}( (t|\xi|)^{\alpha} \widehat{1_{B(0,1)}}(t \xi) \mathcal{F}( T_h (-\Delta)^{(1-\alpha)/2} f ) )| \]
where $T_h$ is the operator defined by
\begin{equation}\label{Th definition}
\widehat{T_h g}(\xi) = \frac{e^{i \xi \cdot h} - 1}{|\xi||h|}\widehat{g}(\xi)=: a_h( \xi) \widehat{g}(\xi).
\end{equation}
Observe that $T_h$ is a bounded operator on $L^p$ uniformly in $h\in \mathbb{R}^{n}$ for all $1<p<\infty$ by the Mikhlin--Hörmander multiplier theorem (see, for instance \cite[Theorem 8.10]{Duo}); it is clear that
$$
|\partial^\gamma a_{h}(\xi)| \lesssim |\xi|^{-|\gamma|} \quad \quad \text{for all multi-indexes } \gamma \in {\mathbb N}_0^n
$$
with implicit constant independent of $h \in {\mathbb R}^n$. Thus, the operator $T_h$ plays no role in determining the range of boundedness for $M^*_\alpha D^h$.
Let $m(\xi) = |\xi|^{\alpha} \widehat{1_{B(0,1)}}( \xi)$ and $m_t(\xi):=m(t\xi)$ for all $t>0$.
For each $j \in {\mathbb Z}$, let $f_j=\check{\psi}_j \ast f$ denote the Littlewood-Paley piece of $f$ around the frequency $2^{j}$ as in $\S\S$\ref{subsec:besov}. Assume momentarily that the following holds.
\begin{proposition}
\label{prop:Lp}
Let $g \in \mathcal{S}$. Then for $p = n/(n-\alpha)$ and $0 <\alpha < n/2$,
\[\norm{ \sup_{1 \leq t \leq 2} |\mathcal{F}^{-1}(m_t \widehat{g}_j ) |}_{L^{p}} \lesssim (2^{j \alpha} 1_{\{j \leq 0\}} + 1_{\{j > 0\}} ) \norm{g_j}_{L^{p}}. \]
\end{proposition}
Then the proof may be concluded as follows. Decomposing the function $f$ into frequency localised pieces $f_j$ and applying Proposition \ref{prop:Lp} to the function $g = T_h (-\Delta)^{(1-\alpha)/2} f$ one has
\begin{align}
\norm{\sup_{1\leq t \leq 2} | \mathcal{F}^{-1}(m_t \widehat{g} ) | }_{L^{p}} & \leq \sum_{j \in {\mathbb Z}} \| \sup_{1\leq t \leq 2} |\mathcal{F}^{-1}(m_t \widehat{g}_{j}) | \|_{L^p} \notag \\
& \lesssim \sum_{j \in {\mathbb Z}} (2^{j \alpha } 1_{\{j \leq 0\}} + 1_{\{ j > 0 \}} ) \| g_{j} \|_{L^p} \notag \\
& \leq \sum_{j \in {\mathbb Z}} 2^{j(1-\alpha)} \| f_{j} \|_{L^p}\sim \| f \|_{\dot{B}_{p,1}^{1-\alpha}}, \label{eq:Besov_bad}
\end{align}
where the last step follows from the $L^{p}$ boundedness of $T_h$ and Young's convolution inequality.
\begin{remark}
By Bernstein's inequality, $2^{j(1-\alpha)}\| f_j \|_{L^p} \lesssim 2^j \| f_j \|_{L^1}$, so one may further bound $\| f \|_{\dot{B}^{1-\alpha}_{p,1}} \lesssim \| f \|_{\dot{B}_{1,1}^1}$ in \eqref{eq:Besov_bad}.
\end{remark}
It remains to prove Proposition \ref{prop:Lp}. This is done by interpolating an $L^2$ bound with an $L^1 \to L^{1,\infty}$ bound as in the proof of the spherical maximal function theorem that can be found in the textbooks \cite[Chapter XI, $\S$3.3]{bigStein} or \cite[Chapter 5.5]{GrafakosClassical2014}. Writing
\[
\mathcal{F}^{-1}(m_t \widehat{g}_j ) = t^{\alpha} \mathcal{F}^{-1}( \widehat{1_{B(0,1)}}(t \xi) (|\xi|^{\alpha}\widehat{g}_{j} ) ),
\]
it is clear that
\[
\sup_{1 \leq t \leq 2} |\mathcal{F}^{-1}(m_t \widehat{g})| \lesssim \sup_{1 \leq t \leq 2} | t^{-n} 1_{B(0,t)} * ((-\Delta)^{\alpha/2} g) | \leq M((-\Delta)^{\alpha/2} g)
\]
where $M$ is the Hardy--Littlewood maximal function. Bounds on $M$ and Young's convolution inequality then imply the following.
\begin{proposition}
Let $g \in \mathcal{S}$. Then
\begin{equation*}
\norm{ \sup_{1 \leq t \leq 2}| \mathcal{F}^{-1}(m_t \widehat{g}_j )| }_{L^{1,\infty}} \lesssim 2^{ j \alpha } \norm{g_j}_{L^{1}} .
\end{equation*}
\end{proposition}
The $L^{2}$ estimate follows by estimating the Fourier decay of $m$ after an application of a Sobolev embedding. This is the part of the proof that allows us to take advantage of better symbols $m$ later in $\S\S$\ref{sec:smooth}, so we write out the proof in detail.
\begin{proposition}
\label{prop:L2}
Let $g \in \mathcal{S}$. Then
$$
\norm{ \sup_{1 \leq t \leq 2} |\mathcal{F}^{-1}( m_t \widehat{g}_j ) |}_{L^{2}}
\lesssim ( 2^{ j \alpha } 1_{\{j \leq 0\}} + 2^{ j (- \frac{n}{2} + \alpha )} 1_{\{ j > 0\}} ) \norm{g_j}_{L^{2}}.
$$
\end{proposition}
\begin{proof}
Let $\tilde{m}(\xi) = \xi \cdot \nabla m(\xi)$, so that $\partial_t m_t(\xi) = t^{-1}\tilde{m}_t(\xi)$, and denote by $T_m$ and $T_{\tilde{m}}$ the operators associated to the multipliers $m$ and $\tilde{m}$. By the fundamental theorem of calculus,
\begin{align}
\sup_{1 \leq t \leq 2} |T_{m_t} g_j|
&\leq |T_{m} g_j| + 2 \left( \int_{1}^{2} |T_{ m_t} g_j | |T_{\tilde{m}_t} g_j | \frac{dt}{t} \right)^{1/2} \notag \\
&\leq |T_{m} g_j| + 2 \left( \int_{1}^{2} |T_{ m_t} g_j |^{2} \frac{dt}{t} \right)^{1/4} \left( \int_{1}^{2} |T_{\tilde{m}_t} g_j |^{2} \frac{dt}{t} \right)^{1/4} . \label{eq:sobolev embedding L2}
\end{align}
Taking the $L^2$ norm in the above expression, an application of the Cauchy--Schwarz inequality and Fubini's theorem reduces the problem to computing the $L^\infty$ norms of $m\psi_j$ and $\tilde{m}\psi_j$.
Recall that $\widehat{1_{B(0,1)}}(\xi) = |2\pi \xi|^{-n/2} J_{n/2}(2 \pi |\xi|)$, where $J_{n/2}$ denotes the Bessel function of order $n/2$, and
\[J_{n/2}(r) \lesssim r^{n/2} 1_{\{r \leq 1\}} + r^{-1/2} 1_{\{r > 1\}};\]
see, for instance, \cite[Appendix B]{GrafakosClassical2014} for further details. This immediately yields
\begin{equation}
\label{eq:lacunary_single}
\norm{m \psi_j }_{L^{\infty}} \lesssim 2^{j\alpha} 1_{\{j \leq 0\}} + 2^{j( -\frac{n+1}{2} + \alpha )} 1_{\{j >0\}}.
\end{equation}
Concerning $\tilde{m}$, the relation
$$
\frac{d}{dr}[r^{-n/2} J_{n/2}(r)]=-r^{-n/2}J_{n/2 +1}(r)
$$
and an analysis similar to the one carried out above leads to
$$\norm{\tilde{m} \psi_j}_{L^{\infty}} \lesssim 2^{j\alpha} 1_{\{j \leq 0\}} + 2^{j( -\frac{n-1}{2} + \alpha )}1_{\{j >0\}}.$$
Putting both estimates together in \eqref{eq:sobolev embedding L2} concludes the proof.
\end{proof}
Proposition \ref{prop:Lp} now follows by interpolation, and the proof of the model case is complete.
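More precisely, the interpolation exponents work out as follows. At $p = n/(n-\alpha)$, the interpolation weight $\theta$ is determined by $\frac{1}{p} = (1-\theta) + \frac{\theta}{2}$, that is, $\theta = \frac{2\alpha}{n} \in (0,1)$ since $0<\alpha<n/2$. For $j > 0$, combining the $L^{1} \to L^{1,\infty}$ bound $2^{j\alpha}$ with the $L^{2}$ bound of Proposition \ref{prop:L2} gives
\[
\big(2^{j\alpha}\big)^{1-\theta}\,\big(2^{j(-\frac{n}{2}+\alpha)}\big)^{\theta} = 2^{j(\alpha - \theta\frac{n}{2})} = 1,
\]
while for $j \leq 0$ both bounds equal $2^{j\alpha}$, which is precisely the content of Proposition \ref{prop:Lp}.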
\end{proof}
\subsection{Extension to the full supremum}\label{subsec:extensionFull}
From now on, we redefine $m$ to be $|\xi|^{\alpha}$ times the Fourier transform of an integrable function smoother than $1_{B(0,1)}$. Momentarily assume that $m$ satisfies
\begin{equation}\label{assumption extra decay}
\norm{ \sup_{1 \leq t \leq 2} | (m_t \widehat{g}_j )^\vee |}_{L^{p}} \lesssim (2^{j \alpha} 1_{\{j \leq 0\}} + 2^{-j\varepsilon} 1_{\{j > 0\}} ) \norm{g_j}_{L^{p}}
\end{equation}
with $p = n/(n-\alpha)$,
which we next show to be enough to conclude a bound as in Theorem \ref{thm:main solid balls}. The proof of \eqref{assumption extra decay} is postponed to $\S\S$\ref{sec:smooth}.
Inequality \eqref{assumption extra decay} rescales as
\begin{equation}\label{eq:rescaled}
\norm{ \sup_{2^{-k} \leq t \leq 2^{-k+1}} | (m_t \widehat{g}_{j+k} ) ^\vee|}_{L^{p}} \lesssim (2^{j \alpha} 1_{\{j \leq 0\}} + 2^{-j\varepsilon} 1_{\{j > 0\}}) \norm{g_{j+k}}_{L^{p}}.
\end{equation}
In order to use this bound, break the full supremum over all possible scales and use the embedding $\ell^p \subseteq \ell^\infty$,
$$
\sup_{t >0} | (m_t \widehat{g} )^\vee | = \sup_{k \in {\mathbb Z}} \sup_{2^{-k} \leq t \leq 2^{-k+1}} | (m_t \widehat{g} )^\vee | \leq \Big( \sum_{k \in {\mathbb Z}} \sup_{2^{-k} \leq t \leq 2^{-k+1}} | (m_t \widehat{g})^\vee |^p \Big)^{1/p}.
$$
Taking $L^p$ norm and using \eqref{eq:rescaled}, we see
\begin{equation*}
\norm{\sup_{t >0} | (m_t \widehat{g} )^\vee| }_{L^{p}} \lesssim \sum_{j \in {\mathbb Z}} (2^{ j \alpha } 1_{\{j \leq 0\}} + 2^{- j \varepsilon} 1_{\{ j > 0 \}} ) \Big( \sum_{k \in {\mathbb Z}} \| g_{j+k} \|_{L^p}^p \Big)^{1/p} .
\end{equation*}
Using the geometric decay to sum in $j \in {\mathbb Z}$ and recalling
\[\norm{g_{j+k}}_{L^{p}} = \norm{ (-\Delta)^{(1-\alpha)/2} f_{j+k} }_{L^{p}} \lesssim 2^{(j+k)(1-\alpha)}\norm{f_{j+k}}_{L^{p}}, \]
we obtain
$$
\Big( \sum_{k \in {\mathbb Z}} \| g_{j+k} \|_{L^p}^p \Big)^{1/p} \lesssim \| f \|_{\dot{B}^{1-\alpha}_{p,p}}.
$$
We then claim
\begin{equation}\label{eq:Besov_BV}
\| f \|_{\dot{B}^{1-\alpha}_{p,p}} \lesssim | f |_{\mathrm{BV}}
\end{equation}
for $n>1$ and $0<\alpha < n/2$. This will follow from a Gagliardo--Nirenberg--Sobolev type inequality.
\begin{proposition}[\cite{CDDD2003}]
Assume $\gamma>1$ or $\gamma<1-1/n$, and let $(s,q)$ satisfy $(s-1)q'/n=\gamma-1$ for some $1 < q \leq \infty$, where $1/q+1/q'=1$. Then, for any $0< \theta < 1$,
$$
\| f \|_{\dot{B}^t_{p,p}} \lesssim \| f \|_{\dot{B}_{q,q}^s}^{1-\theta} | f|_{\mathrm{BV}}^\theta
$$
where $\frac{1}{p}=\frac{1-\theta}{q} + \theta$ and $t=(1-\theta)s + \theta$.
\end{proposition}
Indeed, taking $\gamma=0$, $s=1-n/2$ and $\theta=1-2\alpha/n$, which are admissible for $n>1$ and $0<\alpha<n/2$, one has
$$
\| f \|_{\dot{B}_{p,p}^{1-\alpha}} \lesssim \| f \|_{\dot{B}^{1-n/2}_{2,2}}^{1-\theta} |f|_{\mathrm{BV}}^\theta.
$$
Applying Bernstein's and Minkowski's inequalities as well as Littlewood--Paley theory, we see
\begin{align*}
\| f \|_{\dot{B}^{1-\frac{n}{2}}_{2,2}} & \sim \Big( \sum_{j \in {\mathbb Z}} 2^{2j(1-\frac{n}{2})} \| f_j \|_{L^{2}}^2 \Big)^{1/2} \lesssim \Big( \sum_{j \in {\mathbb Z}} 2^{2j(1-\frac{n}{2})} 2^{2jn(\frac{n-1}{n}-\frac{1}{2})} \| f_j \|_{L^{\frac{n}{n-1}}}^2 \Big)^{1/2} \\
&= \Big( \sum_{j \in {\mathbb Z}} \| f_j \|_{L^{\frac{n}{n-1}}}^2 \Big)^{1/2} \leq \| \Big( \sum_{j \in {\mathbb Z}} |f_j|^2 \Big)^{1/2} \|_{L^{\frac{n}{n-1}}} \sim \| f \|_{L^{\frac{n}{n-1}}}.
\end{align*}
Inequality \eqref{eq:Besov_BV} then follows from the Gagliardo--Nirenberg--Sobolev inequality \cite[Theorem 5.6.1.~(i)]{EG1992}, and we conclude
$$
\| \sup_{t>0} |\mathcal{F}^{-1}(m_t \widehat{g}) | \|_{L^p} \lesssim \| f \|_{L^{n/(n-1)}}^{1-\theta} |f|_{\mathrm{BV}}^\theta \lesssim |f|_{\mathrm{BV}}.
$$
Thus it suffices to verify \eqref{assumption extra decay}. This is done separately in the cases when $m$ comes from a smooth kernel and when the maximal function is lacunary.
\subsection{Smooth kernel}
\label{sec:smooth}
Define the smooth fractional maximal function as follows. Let $\epsilon > 0$. Let $\varphi$ be a positive function with a radial $L^{1}$ majorant such that $|\widehat{\varphi}(\xi)| \lesssim_{\varphi}(1+ |\xi|)^{- n/2 - \epsilon}$. For instance, any positive Schwartz function or even
\[\varphi(x) = (1-|x|^{2})_{+}^{\epsilon} \]
with $\epsilon > 0$ will do (see Appendix B.5 in \cite{GrafakosClassical2014}). The subscript denotes the positive part as $f_+ = f \cdot 1_{\{f > 0\}}$. Now we want to analyse $M^{\varphi}_{\alpha},$ as defined in the introduction. A repetition of the proof of Proposition \ref{prop:L2} gives the $L^{2}$ bound
\[\norm{ \sup_{1 \leq t \leq 2} |\mathcal{F}^{-1}( (t |\xi|)^{\alpha} \widehat{\varphi}(t\xi) \widehat{g}_j ) |}_{L^{2}} \lesssim ( 1_{\{j \leq 0\}}2^{j \alpha} + 1_{\{j > 0\}}2^{j(- \frac{n}{2} + \alpha - \epsilon )} ) \norm{g_j}_{L^{2}}.\]
The $\epsilon$-decay gain in the above estimate continues to hold on $L^{n/(n-\alpha)}$, so the extra decay assumption \eqref{assumption extra decay} is satisfied for smooth convolution kernels. By $\S\S$\ref{subsec:extensionFull}, Theorem \ref{thm:main solid balls} holds in this case.
\subsection{Lacunary set of radii}
Similarly, there is a gain in the $L^2$ estimate when we study the lacunary fractional maximal function. Now $m(\xi) = |\xi|^{\alpha} \widehat{1_{B(0,1)}}( \xi)$ and
\[c_n M_{\alpha}^{lac}f (x)= \sup_{k \in {\mathbb Z}} | 2^{k\alpha-nk} \int_{B(x,2^{k})} f(y) \, dy| \leq \Big( \sum_{k \in {\mathbb Z}} | 2^{k\alpha-nk} \int_{B(x,2^{k})} f(y) \, dy|^p \Big)^{1/p} \]
so that it suffices to use a bound for a single dilate instead of a supremum bound. Thus, it is enough to use \eqref{eq:lacunary_single} to replace Proposition \ref{prop:L2} by
\[
\| |\mathcal{F}^{-1} (m \widehat{g}_j)| \|_{L^2} \lesssim ( 2^{j \alpha} 1_{\{j \leq 0\}} + 2^{j ( - \frac{n+1}{2} + \alpha ) } 1_{\{j > 0\}} ) \norm{g_j}_{L^{2}},
\]
which has an extra $1/2$-decay compared to Proposition \ref{prop:L2}.
After interpolation, this leads to an $\varepsilon$-decay gain in the $L^{n/(n-\alpha)}$ estimate, so that \eqref{assumption extra decay} (without the supremum) and Theorem \ref{thm:main solid balls} for the lacunary set of radii follow.
\section{Proof of Theorem \ref{thm:spherical_technical}}
Recall the definition \eqref{sph:definition}.
By the characterisation through finite differences described in $\S$2, the sublinearity of $S_\alpha$ and by density, it suffices to prove
$$
\|S_{\alpha}D^{h} f \|_{L^{q}} \lesssim \| f \|_{L^{p}}
$$
for all Schwartz functions $f$ uniformly in $h \in {\mathbb R}^n$.
Observe that by means of Fourier transform,
\begin{equation}\label{eq:in Fourier side}
S_\alpha D^h f (x)= \sup_{t > 0} \left \lvert \mathcal{F}^{-1} \left( t^{\alpha} |\xi| \widehat{\sigma}(t \xi) \mathcal{F} (T_{h}f) \right) (x) \right \rvert,
\end{equation}
where $T_h$ is the Fourier multiplier operator \eqref{Th definition}. As described in $\S\S$\ref{subsec:model}, $T_h$ is bounded on $L^p$ for all $1<p<\infty$ uniformly in $h \in {\mathbb R}^n$ by the Mikhlin--H\"ormander multiplier theorem, so it plays no role in determining the boundedness range for $S_\alpha D^h$; for this reason, $T_h f$ is identified with $f$ in the rest of this section.
\subsection{The case $q \geq 2$}
\label{sec:q>2}
It is enough to consider the single scale version of the maximal function in \eqref{eq:in Fourier side}: suppose we can prove
\begin{equation}
\label{sph:ss_red}
\norm{ \sup_{1 \leq t \leq 2} \lvert \mathcal{F}^{-1} ( t^{\alpha} |\xi| \widehat{\sigma}(t \xi) \widehat{f_j} ) \rvert}_{L^q} \lesssim (2^{j s_1 } 1_{\{ j \leq 0 \}} + 2^{-j s_2} 1_{\{ j > 0\}} ) \| f_j \|_{L^p}
\end{equation}
for $s_1, s_2 >0$. Then rescaling gives
$$
\norm{ \sup_{2^{-k} \leq t \leq 2^{-k+1}} \lvert \mathcal{F}^{-1} ( t^{\alpha} |\xi| \widehat{\sigma}(t \xi) \widehat{f_{j+k}} ) \rvert}_{L^q} \lesssim (2^{j s_1 } 1_{\{ j \leq 0 \}} + 2^{-j s_2} 1_{\{ j > 0\}} ) \| f_{j+k} \|_{L^p}
$$
under the relation $\frac{1}{q}=\frac{1}{p}-\frac{\alpha-1}{n}$, and arguing as in $\S\S$\ref{subsec:extensionFull}
\begin{multline*}
\norm{\sup_{t > 0} |\mathcal{F}^{-1}(t^\alpha |\xi| \widehat{\sigma}(t\xi) \widehat{f}) | }_{L^{q}} \lesssim \sum_{j \in \mathbb{Z}} (2^{j s_1 } 1_{\{ j \leq 0 \}} + 2^{-j s_2} 1_{\{ j > 0\}} ) \Big( \sum_{k \in \mathbb{Z}} \norm{f_{j+k}}_{L^{p}} ^{q} \Big)^{1/q} \\
\lesssim \| f \|_{L^p}
\end{multline*}
where the last inequality follows from Minkowski's inequality (since $q \geq p$), controlling the $\ell^{q}$ norm by the $\ell^{2}$ norm, and applying Littlewood--Paley theory to recognise the inner sum as the $L^{p}$ norm of $f$. The sum in $j$ converges since $s_1, s_2 >0$. Hence it suffices to prove \eqref{sph:ss_red}.
For low frequencies $j \leq 0$, we can use domination by the Hardy--Littlewood maximal function, Young's convolution inequality and Bernstein's inequality to see
\[\norm{ \sup_{1 \leq t \leq 2} \lvert \mathcal{F}^{-1} ( t^{\alpha} |\xi| \widehat{\sigma}(t \xi) \widehat{f_j} ) \rvert}_{L^q} \lesssim \norm{ M(-\Delta)^{1/2} f_j}_{L^q} \lesssim 2^{j(1+\alpha)} \norm{f_j}_{L^p}. \]
Hence it suffices to prove \eqref{sph:ss_red} for $j > 0$.
\subsection{A local smoothing estimate}
The Fourier transform of the spherical measure is
\[
\widehat{\sigma}(\xi)=2\pi |\xi|^{-\frac{n-2}{2}} J_{\frac{n-2}{2}}(2\pi |\xi|)=\sum_{\pm} a_{\pm} (\xi) e^{\pm 2 \pi i|\xi|},
\]
where the symbols $a_{\pm}$ are in the class $S^{-(n-1)/2}$, that is
$$
|\partial_\xi^\gamma a_{\pm} (\xi)| \lesssim (1+|\xi|)^{-\frac{n-1}{2} - |\gamma|}
$$
for all multi-indices $\gamma \in {\mathbb N}^n_0$ (c.f. \cite[Chapter VIII]{bigStein}). Hence
\[
\mathcal{F}^{-1}( \widehat{\sigma}(t\xi) \widehat{f})(x) = \sum_{\pm} \int_{\mathbb{R}^{n} } e^{2\pi i ( x \cdot \xi \pm t|\xi|)} a_\pm( t \xi) \widehat{f}(\xi) \, d\xi,
\]
so that the connection to half-wave propagator $ e^{it \sqrt{-\Delta}} f (x):=\int_{{\mathbb R}^n} e^{i x \cdot \xi} e^{ i t |\xi| } \widehat{f}(\xi) d\xi$ is evident. We will quote the following result:
\begin{proposition}[Consequence of \cite{BD2015}]
\label{thm:local smoothing so far}
For $n \ge 2, \, s \in {\mathbb R},$
\begin{equation*}
\Big(\int_{1}^2 \| e^{it \sqrt{-\Delta}} f \|_{L^{p}_{s-s_p + \theta}({\mathbb R}^n)}^p dt \Big)^{1/p} \lesssim \| f \|_{L^p_s(\mathbb{R}^{n})}
\end{equation*}
holds for $0 \le \theta < \frac{1}{p}$ and $s_p = (n-1)\big(\frac{1}{2} - \frac{1}{p}\big)$ whenever $p \geq \frac{2(n+1)}{n-1}$.
\end{proposition}
This can be found as Corollary 1.3 (i) in \cite{Garrigos2009} knowing that the conjectured value of $p_d$ in Table 1 of that paper has later been verified by \cite{BD2015}.
\begin{proposition}
\label{sph:prop}
Let $g$ be a Schwartz function and $j > 0$. For any $\epsilon > 0$
\[ \norm{ \sup_{1 \le t \le 2} | \sigma_t * g_j | }_{L^{n-1}} \lesssim_{\epsilon} 2^{j ( \epsilon - 1)} \norm{g_j}_{L^{n-1}} .\]
\end{proposition}
\begin{proof}
For $j>0$ and a smooth bump $\chi$ around $[1,2]$, we have
\begin{align*}
\| \sup_{1 \le t \le 2} | \sigma_t * g_j | \|_{L^{n-1}(\mathbb{R}^{n})} & \lesssim \| (1+\sqrt{-\partial_t^2})^r \chi \cdot \sigma_t * g_j \|_{L^{n-1}({\mathbb R}^{n+1})} \cr
& \lesssim 2^{j\left(r + s_p - \theta -\frac{n-1}{2} + \epsilon \right)} \| g_j \|_{L^{n-1}({\mathbb R}^n)} \cr
\end{align*}
where we used Sobolev embedding with $r > 1/(n-1)$, Proposition \ref{thm:local smoothing so far} with $p=n-1$ as well as Young's convolution inequality. Simplifying the exponent in accordance with Proposition \ref{thm:local smoothing so far}
\footnote{The full strength of \cite{BD2015} is only needed when $n=5$. When $n \geq 6$, the earlier results from \cite{Laba2002} will already do.},
we obtain the claim.
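For the reader's convenience, the exponent arithmetic (under the choices above, and assuming, as in the footnote, that $p=n-1$ is admissible in Proposition \ref{thm:local smoothing so far} for the dimensions considered) is: $s_p = (n-1)\big(\tfrac12-\tfrac1{n-1}\big)=\tfrac{n-3}{2}$, and choosing $r>\tfrac{1}{n-1}$ and $\theta<\tfrac{1}{n-1}$ within $\epsilon$ of each other gives
\[
r + s_p - \theta - \frac{n-1}{2} \leq \frac{n-3}{2} - \frac{n-1}{2} + \epsilon = \epsilon - 1 .
\]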
\end{proof}
\subsection{$L^{p} \to L^{q}$ estimates}
To finish the proof of \eqref{sph:ss_red}, we prove $L^{p} \to L^{q}$ estimates following the interpolation scheme of Lee \cite{Lee2003} enhanced with the sharp local smoothing estimate. Denote
\[S_j^{*} f (x) := \sup_{1 \le t \le 2} |\mathcal{F}^{-1}( \widehat{\sigma}( t \xi ) |\xi| \widehat{f_j}(\xi))(x)|, \]
where $\widehat{f_j} = \widehat{f} \psi_j$ still stands for Fourier localization at the level of a Littlewood--Paley piece of frequency $2^j$.
\begin{proposition}
Let $P$ be the open convex polygon with vertices
\begin{align*}
A&= \Big( \frac{n-2}{n}, \frac{2}{n} \Big), \quad B=\Big(\frac{n^2-2n-1}{n^2+1}, \frac{2(n-1)}{n^2+1} \Big)\\
C&=\Big(\frac{1}{n-1}, \frac{1}{n-1} \Big), \quad D= \Big( \frac{n-2}{n}, \frac{n-2}{n} \Big).
\end{align*}
Then
\[\norm{ S_j^{*} f }_{L^q} \lesssim 2^{- \varepsilon j} \norm{ f_j }_{L^p}\]
for some $\varepsilon > 0$ and all $j > 0$ provided that $(1/p,1/q) \in P$.
\end{proposition}
\begin{proof}
Since $\supp ( \widehat{\sigma}\cdot \psi_j ( t \cdot )) \subset \{ |\xi| \sim 2^{j} \}$, we can assume that $\widehat{f}$ is supported in an annulus around $|\xi| = 2^{j}$. We use the following bounds:
\begin{align}
\norm{ S_j^{*}f }_{L^1} &\lesssim 2^{2j} \norm{ f }_{L^1} \nonumber \\
\norm{ S_j^{*}f }_{L^\infty} &\lesssim 2^{2j} \norm{ f }_{L^1} \nonumber \\
\label{lplq}
\norm{ S_j^{*}f }_{L^{n-1}} &\lesssim_{\delta} 2^{j \delta} \norm{ f }_{L^{n-1}}, \quad \textrm{for all} \ \delta > 0 \\
\norm{ S_j^{*}f }_{L^2} &\lesssim 2^{- \frac{n-4}{2}j} \norm{ f }_{L^2} \nonumber \\
\norm{ S_j^{*}f }_{L^{\frac{2(n+1)}{n-1}}} &\lesssim 2^{- j \frac{n^2-4n-3}{2n+2}} \norm{ f }_{L^2} \nonumber.
\end{align}
To verify \eqref{lplq}, use Proposition \ref{sph:prop} as well as Young's convolution inequality to obtain
\[\norm{ S_j^{*}f }_{L^{n-1}} \lesssim_\delta 2^{-j(1- \delta)} \norm{ (-\Delta)^{1/2} f }_{L^{n-1}} \lesssim 2^{j \delta} \norm{ f }_{L^{n-1}}.\]
The other inequalities follow similarly, that is, by borrowing the corresponding bounds for the spherical maximal function (inequalities (1.7)--(1.10) in \cite{Lee2003}), and applying Young's convolution inequality. Interpolating the bounds above, we obtain the claimed proposition.
\end{proof}
For each $p > 1$, we want to find the values of $\alpha$ such that $(1/p,1/q) \in P$ when $(\alpha-1)/n = 1/p-1/q$ and $q \geq 2$. Under these constraints, this happens when
\[
\frac{n}{n-2} < p \leq \frac{n^2+1}{n^2-2n-1}, \quad \alpha < \frac{n^2-2n-1}{n-1} -\frac{2n}{p(n-1)}
\]
or
\[
\frac{n^2+1}{n^2-2n-1} < p \leq n-1 ,\quad \alpha < \frac{n-1}{p} .
\]
This concludes the proof for the case $q \geq 2$. Notice that the restriction $q \ge 2$ is not dictated by the validity of the $L^{p} \to L^{q}$ estimates but was required in order to upgrade the single-scale bounds to bounds for the full maximal operator in $\S\S$\ref{sec:q>2}.
\subsection{The case $q \leq 2$}
Next we remove the assumption $q \geq 2$. Let
\[T^{*} f(x) = \sup_{t > 0} |\mathcal{F}^{-1} ( (t|\xi|)^{\alpha} \widehat{\sigma}(t\xi) \widehat{f}(\xi))(x)|.\]
The operator in \eqref{eq:in Fourier side} can be written as
\[S_{\alpha} D^h f = T^{*} I_{\alpha-1} T_h f, \]
where $\widehat{I_{\alpha-1}f} = |\xi|^{1-\alpha} \widehat{f}$ is the Riesz potential of order $\alpha -1$ and $T_h$ is as in \eqref{Th definition}. As discussed in $\S\S$\ref{subsec:model}, $T_h$ is bounded on $L^{p}$ for all $p > 1$. Also, by the Hardy--Littlewood--Sobolev inequality, $I_{\alpha -1}$ is bounded from $L^p$ to $L^q$ for $p,q$ obeying $\frac{\alpha-1}{n} = \frac{1}{p}-\frac{1}{q}$. Therefore, it is enough to analyse the operator $T^*$.
Let $m(\xi) = |\xi|^{\alpha} \widehat{\sigma}(\xi)$ and take a Littlewood--Paley function $\psi$ (as in $\S$\ref{sec:definitions}). We define $m_{1} = \sum_{j > 0} \psi_j m$ and $m_0 = \sum_{j \leq 0} \psi_j m$. Take $T_j^{*}$, $j \in \{0,1\}$, to be defined as $T^*$ but with $m$ replaced by $m_j$. Then
\[
T^*f \le T_0^{*}f+ T^*_1f.
\]
We first bound $T_0^{*}$. A straightforward computation shows that $m_0$ is bounded and for any multi-index $\beta \in \mathbb{N}^{n}$ with $|\beta| = k$, $k \leq n +1$
\[ | \partial_{\xi}^{\beta} m_0(\xi) | \lesssim |\xi|^{\alpha-k} \]
so that
\[\norm{ (1 + | \cdot |)^{n+1} \mathcal{F}^{-1}( m_0 ) }_{L^{\infty}} \lesssim 1 . \]
Consequently
\[T_0^{*}f \lesssim Mf\]
and boundedness in any $L^{p}$ with $p> 1$ follows from that of the Hardy--Littlewood maximal function.
To bound $T_1^{*}$, we use a part of Theorem B from \cite{RdF1986}:
\begin{theorem}[Rubio de Francia \cite{RdF1986}]
Let $m$ be a function in $C^{s+1}({\mathbb R}^n)$ for some integer $s > n/2$ such that $|D^{\gamma}m(\xi)| \lesssim |\xi|^{-a},$ for all $|\gamma| \le s+1$. Suppose also that $a > \frac{1}{2}.$ Then the maximal
multiplier operator $T^* f := \sup_{t > 0} |\mathcal{F}^{-1}( m(t \cdot) \widehat{f})|$ is bounded in $L^r,$ for
\[
\frac{2n}{n+2a -1} < r \le 2.
\]
\end{theorem}
Since $\sum_{j > 0} \psi_j m $ is smooth and satisfies $|D^{\gamma}m(\xi)| \lesssim |\xi|^{-a},$ for all $\gamma \in \mathbb{N}^{n}$ with $a = \frac{n-1}{2} - \alpha$, we can apply the theorem to conclude the proof whenever
\[
\frac{2n}{2n - 2 -2\alpha} < q \le 2, \quad a > \frac{1}{2}
\]
which is equivalent to $p > \frac{n}{n-2}$ and $\alpha < \frac{n-2}{2} < \alpha(p)$. However, given $p > \frac{n}{n-2}$, the condition $\alpha < \frac{n-2}{2}$ is automatically satisfied whenever $q \leq 2$. Hence $\alpha < \alpha(p)$ is an active constraint only when $q > 2$.
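For completeness, the arithmetic behind this equivalence, using $a = \frac{n-1}{2}-\alpha$ and $\frac1q = \frac1p - \frac{\alpha-1}{n}$, reads
\[
q > \frac{2n}{n+2a-1} = \frac{2n}{2n-2-2\alpha}
\iff \frac1q < \frac{n-1-\alpha}{n}
\iff \frac1p < \frac{n-2}{n},
\]
while $a > \frac12$ is exactly $\alpha < \frac{n-2}{2}$; moreover, $q \leq 2$ together with $p > \frac{n}{n-2}$ forces $\alpha \leq 1 + \frac{n}{p} - \frac{n}{2} < \frac{n-2}{2}$.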
\qed
|
{
"timestamp": "2018-11-16T02:14:30",
"yymm": "1803",
"arxiv_id": "1803.02581",
"language": "en",
"url": "https://arxiv.org/abs/1803.02581"
}
|
\section{Introduction}
T2K \cite{T2K} is a long-baseline neutrino oscillation experiment in Japan. It uses a muon neutrino or antineutrino beam produced at the Japan Proton Accelerator Research Complex (J-PARC) located in Tokai.
There are three main detectors placed along the beamline:
\begin{itemize}
\item INGRID at $280$\,m from the target: it is composed of an iron-scintillator sandwich whose goal is to monitor the neutrino flux and its direction \cite{INGRID}.
\item ND280 at $280$\,m: it contains several sub-detectors all included in the UA1 magnet ($0.2$\,T). The main tracker volume consists of three Time Projection Chambers (TPCs) and two Fine-Grained Detectors (FGDs) in between. The FGDs are layers of scintillator bars which are used as an active target material, of around $1$\,ton each. There is also a $\pi^0$ detector (P0D) upstream, and the whole detector is surrounded by an electromagnetic calorimeter (ECal).
ND280 is used to measure the neutrino flux before any oscillation and to constrain flux and cross-section model parameters.
\item Super-Kamiokande at $295$\,km: it is located $1$\,km deep in the Kamioka mine. It is a 50\,kton water Cherenkov detector ($22.5$\,kton fiducial volume). It can distinguish between muons and electrons, and the energy reconstruction relies on the lepton kinematics, assuming a charged-current quasi-elastic interaction.
\end{itemize}
The latter two detectors are $2.5$\,degrees off-axis from the neutrino beam, allowing the experiment to have a narrow band muon neutrino beam peaked at $600$\,MeV, which corresponds to the first oscillation maximum at Super-Kamiokande.
The T2K collaboration has already measured muon (anti-)neutrino disappearance and electron (anti-)neutrino appearance \cite{nuAna}\cite{anuAna}. Recently, the experiment has also observed first hints of maximal CP violation \cite{CP}.
T2K's original goal was to reach a total of $7.8 \times 10^{21}$ protons on target (POT) in 2021, split between neutrino and antineutrino modes. An extension of the T2K running up to 2026, to achieve $20 \times 10^{21}$ protons on target, is under consideration \cite{T2KII}, as presented in fig.~\ref{T2K_II_POT}. An upgrade of the J-PARC beam is scheduled, in order to reach a projected beam power of $1.3$\,MW.
The primary goal of this extension (T2K-II) is to achieve $>3\sigma$ sensitivity for CP violation in electron (anti-)neutrino appearance. Figure \ref{T2K_II} shows the expected significance of excluding CP conservation ($\delta_{CP} =0,\pi$) as a function of delivered POT for true $\delta_{CP} = -\pi/2$. It also clearly shows the impact of the current systematic errors on the sensitivity.
An upgrade of ND280 is therefore needed to reduce the flux and cross-section systematic uncertainties that can be constrained by the near detector, in particular by increasing the angular acceptance for the particles created in neutrino interactions, which will help to better understand these interactions.
\begin{figure}[!h]
\begin{minipage}[c]{0.45\linewidth}
\includegraphics[width=\linewidth]{fig/targetPOT.pdf}
\caption{Expected beam power and Protons-on-Target accumulation. Plot taken from \cite{T2KII}. \label{T2K_II_POT}}
\end{minipage}
\hspace{.2in}
\begin{minipage}{0.55\linewidth}
\includegraphics[width=\linewidth]{fig/dcp_t2k2.pdf}
\caption{Sensitivity to CP violation as a function of POT for T2K Phase 2 with predicted 2016 systematic errors. The $\delta_{CP}=-\frac{\pi}{2}$ and normal mass hierarchy are assumed to be true. Plot taken from \cite{T2KII}.\label{T2K_II}}
\end{minipage}
\end{figure}
\section{The proposed upgraded ND280 detector}
Current ND280 analyses have shown that the detector has good acceptance only for forward tracks (with respect to the neutrino direction), as shown in fig.~\ref{current}. The efficiency for high-angle tracks (going perpendicular to the beam direction) is low, due to the geometry of the FGD bars (vertical tracks only cross one or two bars, which is not sufficient for reconstruction) and to the fact that there are no TPCs above or below the FGDs. The identification of backward tracks is limited by the track-sense reconstruction, which requires measuring the time of flight of the particles.
In the proposed upgraded design \cite{upgrade}, shown in fig.~\ref{upgrade}, a new tracker is added, consisting of a two-ton horizontal plastic scintillator target (about $1.8 \times 0.6 \times 2$\,m$^3$) sandwiched between two new horizontal TPCs. This tracker would be surrounded by Time-of-Flight counters (made of plastic scintillator) to measure the direction of the tracks, reject Out-of-Fiducial-Volume events and potentially provide particle identification.
The total mass of active target for neutrino interactions increases from $2.2$\,tons (the two FGDs) to $4.3$\,tons (the two FGDs + the new target), allowing the doubling of the expected statistics for a given exposure.
\begin{figure}
\caption*{\emph{Colors:} \textcolor{orange}{current FGDs}, \textcolor{blue}{current TPCs}, \textcolor{gray}{P0D}, \textcolor{violet}{ECal}, \textcolor{magenta}{new target}, \textcolor{cyan}{new TPCs}, \textcolor{green!50!black}{new ToF counters}}
\begin{minipage}[c]{0.48\linewidth}
\centering
\begin{tikzpicture}[scale=0.72]
\draw[line width=1.5, -latex] (-6.5, 0)--(-5.8, 0) node[midway,above] {$\nu$};
\draw[-latex] (-6.8, -2)--(-6.5, -1.6) node[right] {x};
\draw[-latex] (-6.8, -2)--(-6.8, -1.4) node[above] {y};
\draw[-latex] (-6.8, -2)--(-6.2, -2) node[right] {z};
\fill[draw, fill=violet!50] (-5.7,-2.3) rectangle (-2.52,2.3);
\fill[draw, fill=violet!50] (-2.48,-2.3) rectangle (3.6,2.3);
\fill[draw, fill=black!30] (-5.7, -2) rectangle (-2.5, 2);
\fill[draw, fill=blue!30] (-2.5,-2) rectangle (-1, 2);
\fill[draw, fill=orange!30] (-1,-2) rectangle (-0.5, 2);
\fill[draw, fill=blue!30] (-0.5,-2) rectangle (1, 2);
\fill[draw, fill=orange!30] (1,-2) rectangle (1.5, 2);
\fill[draw, fill=blue!30] (1.5,-2) rectangle (3, 2);
\fill[draw, fill=violet!60] (3,-2) rectangle (3.6,2);
\fill[draw, white] (-3,-2.8) rectangle (3,-2.4);
\draw[line width=1.2, green!50!black, -latex] (-0.7,0.2)--(3.6, 0.5) node[midway,yshift=5pt,rotate=4,green!50!black] {\textbf{reconstructed}};
\draw[line width=1.2, red, -latex] (-0.7,0.2)--(-0.55, 2.3);
\draw[line width=1.2, red, -latex] (-0.7,0.2)--(-5.7, -1.8) node[midway,yshift=6pt,rotate=22] {\textbf{lower efficiency}};
\end{tikzpicture}
\caption{Current design of ND280 detector and its limitations.\label{current}}
\end{minipage}\hfill
\begin{minipage}[c]{0.48\linewidth}
\centering
\begin{tikzpicture}[scale=0.72]
\draw[line width=1.5, -latex] (-6.5, 0)--(-5.8, 0) node[midway,above] {$\nu$};
\fill[draw, fill=violet!50] (-5.7,-2.3) rectangle (-2.52,2.3);
\fill[draw, fill=violet!50] (-2.48,-2.3) rectangle (3.6,2.3);
\fill[draw, fill=black!30] (-5.7, -2) rectangle (-5.5, 2);
\fill[draw, fill=magenta!30] (-5.5, -0.5) rectangle (-2.5, 0.5);
\fill[draw, fill=cyan!30] (-5.5, -0.5) rectangle (-2.5,-2);
\fill[draw, fill=cyan!30] (-5.5, 0.5) rectangle (-2.5, 2);
\fill[draw, fill=blue!30] (-2.5,-2) rectangle (-1, 2);
\fill[draw, fill=orange!30] (-1,-2) rectangle (-0.5, 2);
\fill[draw, fill=blue!30] (-0.5,-2) rectangle (1, 2);
\fill[draw, fill=orange!30] (1,-2) rectangle (1.5, 2);
\fill[draw, fill=blue!30] (1.5,-2) rectangle (3, 2);
\fill[draw, fill=violet!60] (3,-2) rectangle (3.6,2);
\draw[line width=1.5, green!50!black] (-5.5, -2) rectangle (-2.5, 2);
\fill[draw, white] (-3,-2.8) rectangle (3,-2.4);
\draw[line width=1.2, green!50!black, -latex] (-0.7,0.2)--(3.6, 0.5);
\draw[line width=1.2, green!50!black, -latex] (-4,0.2)--(-3.9, 2.3);
\draw[line width=1.2, green!50!black, -latex] (-0.7,0.2)--(-5.7, -1.8);
\end{tikzpicture}
\caption{Proposed upgraded design of ND280 detector.\label{upgrade}}
\end{minipage}
\end{figure}
\clearpage
Different technologies for the new scintillating target are under consideration:
\begin{itemize}
\item The first option is a new type of detector called Super-FGD \cite{SuperFGD}, consisting of a matrix of $1 \times 1 \times 1$\,cm$^3$ cubes made of extruded plastic scintillator. Each cube is crossed by three wavelength shifting (WLS) fibers along the three directions X, Y and Z, as shown in fig.~\ref{SuperFGD}. R\&D is ongoing inside the collaboration.
In this configuration, each hit produces light in the three fibers (giving at least three times more light yield than a classic scintillator). This provides three views (XZ, YZ, XY) that can be used to reconstruct neutrino events, even with low-momentum tracks in any direction (for instance, the proton threshold is expected to decrease from $450$\,MeV/c in the current FGD to $300$\,MeV/c in the Super-FGD). This detector could also provide a better separation of electrons and converted photons than the current FGD.
\item The second option consists of relying on the known technology of the current FGDs, by building a horizontal FGD (scintillator bars in the X and Z directions), in order to detect particles propagating perpendicularly to the beam, even though such a detector provides only two views and has limited forward acceptance.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{fig/superfgd.png} \hspace{0.3cm}
\includegraphics[width=0.46\linewidth]{fig/superfgd_cube.png}
\caption{Left: Schematic view of Super-FGD detector. Right: picture of a small prototype of the Super-FGD, taken from \cite{SuperFGD}.}
\label{SuperFGD}
\end{figure}
\section{Performance}
Simulations with GEANT4 \cite{GEANT4} and GENIE \cite{GENIE} have been performed in order to compare the performance of the current-like detector and the proposed upgraded configuration. Figure~\ref{eff} shows the selection efficiency of $\nu_{\mu}$ charged-current (CC) interactions, as a function of muon angle. We see a large improvement for muons emitted at high-angle (perpendicular to the beam direction) in the new target, thanks to the two horizontal TPCs, and backward in the closest FGD, thanks to timing information provided by Time-of-Flight counters.
This study only considers muons reconstructed and identified in a TPC. Performance may improve further by considering muons stopping in the target (as the SuperFGD is expected to provide an acceptance in all directions that is higher than $90\%$).
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{fig/eff_costheta_numuCC.pdf}
\caption{Selection efficiency of $\nu_{\mu}$CC events as a function of the muon polar angle, with muons detected in TPC. The solid lines correspond to ND280 upgrade configuration and the dashed lines correspond to current-like configuration. Plot taken from \cite{upgrade}.}
\label{eff}
\end{figure}
Based on these efficiencies, sensitivity studies are done in order to assess the impact of the upgrade on oscillation and physics analyses. Table \ref{BANFF} presents the improvement on some of the systematic uncertainties of the neutrino oscillation measurement.
\begin{table}[!tbp]
\centering
\caption{Sensitivity to some flux and cross-section parameters of interest for the current ND280 and the upgrade configuration. Values taken from \cite{upgrade}.}
\begin{tabular}{c|c|c}
\hline
\hline
Parameter & Current-like ($\%$) & Upgrade-like ($\%$) \\
\hline
SK flux normalisation ($ 0.6 < \text{E} < 0.7$\,GeV) & $2.9$ & $2.1$ \\
MA$_{QE}$ (GeV/c$^2$) & $2.6$ & $1.8$ \\
$\nu_{\mu}$ 2p2h normalisation & $9.5$ & $5.9$ \\
2p2h shape on Carbon & $15.6$ & $9.4$ \\
\hline
\hline
\end{tabular}
\label{BANFF}
\end{table}
\section{Conclusion}
T2K plans to extend its data taking up to 2026, to be followed by the Hyper-K project \cite{HyperK}, where the mass of the far detector will be increased by a factor of $10$. In this context, a near-detector upgrade is a necessary step in order to reduce the uncertainty on the neutrino event rate prediction at the far detector and to look for the first evidence of CP violation in the leptonic sector.
An upgraded design, consisting of a new plastic scintillator detector sandwiched between two horizontal TPCs, is proposed. An Expression of Interest was submitted to the CERN SPSC in January 2017 (CERN-SPSC-2017-002 and SPSC-EOI-015). R\&D is ongoing for both the TPCs and the SuperFGD, a novel scintillator detector.
First performance studies have shown that it is possible to better cover the phase space of neutrino interactions, which allows one to reduce the uncertainties on parameters used in the oscillation analysis. Studies of $\nu_e$ interactions and of electron-photon separation in the new target are ongoing.
All these improvements, with respect to the current detector design, will help to better understand and constrain neutrino interaction models.
|
{
"timestamp": "2018-03-08T02:08:19",
"yymm": "1803",
"arxiv_id": "1803.02645",
"language": "en",
"url": "https://arxiv.org/abs/1803.02645"
}
|
\section{\label{sec:level1}Introduction\protect\\
}
The Stern-Gerlach experiment provided evidence for the existence of spin as an intrinsic, non-classical property [1]. A beam of silver atoms traveling through an inhomogeneous magnetic field is deflected up or down depending on their spin. Strangely, this experiment did not work with beams of electrons [2]. Bohr and Pauli emphasized that free electrons cannot be spin-polarized by exploiting magnetic fields, because of the combined effects of the Lorentz force and the quantum uncertainty principle. This conclusion relied on the concept of classical particle trajectories and became a general argument in the scientific literature [3-5].
One of the first efforts refuting Bohr and Pauli's statement used a longitudinal magnetic-field configuration instead of the standard transverse Stern-Gerlach geometry; in this way, complete spin splitting has been reported within a quantum-mechanical analysis [6-9]. In recent theoretical studies, the use of gratings and electromagnetic fields has resulted in spin separation for electrons. Tang proposed a non-paraxial spin-polarized Talbot effect for an electron beam scattered from a grating of magnetic nanostructures [10]. Also, a transverse Stern-Gerlach magnet which diffracts electrons by a magnetic phase grating was discussed by McGregor \textit{et al.} [11]. They indicated that, by applying a current to the solenoids, a spin-dependent phase difference is created between the two arms of the Mach-Zehnder interferometer [12]. Moreover, a novel space-variant Wien filter, named the ``q-filter'', which is composed of space-variant orthogonal electric and magnetic fields, can act as an efficient spin-polarization filter. This filter couples spin angular momentum to orbital angular momentum for electron, neutron, or atom beams [13-14].
The Kapitza-Dirac (KD) effect, the quantum-mechanical diffraction of an electron on a periodic spatial structure formed by a standing light wave, has been shown to offer a possible way to detect the spin of electrons [15]. Ahrens \textit{et al.} theoretically showed complete spin-flip transitions by applying a relativistic treatment of the KD effect [16-17]. It was demonstrated that electrons interacting with circularly polarized counter-propagating monochromatic standing waves can be separated according to their spin state [18]. In the meantime, Dellweg \textit{et al.} reported a similar way of generating spin-polarized electrons by using bichromatic ($\omega$:$2\omega$) laser beams in the Kapitza-Dirac effect [19-20]. Dellweg also showed that the spin dynamics of electrons in the bichromatic KD effect depends on the polarization of the laser beams, and therefore the spin direction of the output beams can be controlled [21-22].
One major result of these studies is that an initial electron with spin up is transferred to a scattered electron with spin down and vice versa; this is a symmetric spin-flipping dynamics. In two special cases, circular polarization in the KD effect with equal frequencies [18] and a combination of linear and circular polarization in the bichromatic KD effect with a frequency ratio of 2 [22], the electron spin does not flip and preserves its state.
In the present paper, we theoretically discuss the bichromatic KD effect arising from the interaction of a laser beam with frequency $\omega$ and a counter-propagating laser beam with frequency $3\omega$. In these fields, the electron exchanges four photons and the four-photon bichromatic KD (4PBKD) effect occurs. We investigate the spin polarization of the electron using variously polarized counter-propagating laser beams with a frequency ratio of three. We focus on an electron whose momentum is parallel to the laser beam axis, so that the interaction term $ \vec{p}\cdot\vec{A} $ becomes insignificant. Our article is organized as follows: in Sec. II we show that in bichromatic standing waves with a frequency ratio of 1:3 the electron exchanges four photons; then, based on the $ \textit{S} $ matrix approach, the transition amplitude for the resonant state is calculated and the polarization-dependent Rabi frequency is obtained. In Sec. III, numerical solutions of the time-dependent Dirac equation in momentum space are used to clarify the relativistic quantum dynamics and to examine the scattering probabilities for the different polarizations of the two-color beams. Atomic units are used throughout unless otherwise stated.
\section{Theoretical Description}
\subsection{\label{sec:level2}Electron Dispersion in Bragg Regime}
The interaction between an electron and two counter-propagating laser fields of frequencies $ \omega_i\ (i=1,2)$ proceeds via the absorption of some photons from laser beam 1 and the stimulated emission of some photons into laser beam 2. In the case of an intense bichromatic standing wave, the multiphoton interaction between the electron and the laser field becomes more likely. The absorption of $ N_a $ photons of frequency $ \omega_1 $ and the emission of $ N_e $ photons of frequency $ \omega_2 $ conserve energy and momentum if $N_a\omega_1=N_e\omega_2$. In the case $ \omega_1=\omega $ and $ \omega_2=3\omega $, a free electron absorbs three $ \omega $ photons and emits one $ 3\omega $ photon, or vice versa. Since a photon has energy $ c \hbar k $ and momentum $\hbar k $, the total transferred energy is $\triangle E=c (N_a k_1-N_e k_2) $ and the total transferred momentum is $ \triangle p= (N_ak_1+N_ek_2) $. In the presence of the fundamental frequency and the third harmonic, with $ N_a=3$ and $N_e=1$, the electron momentum change after the interaction is $ 6k $.
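Explicitly, with $k_1=k$, $k_2=3k$, $N_a=3$ and $N_e=1$, this bookkeeping gives
\[
\triangle E = c\,(3k-3k)=0, \qquad \triangle p = 3k+3k = 6k,
\]
so the scattering leaves the electron energy unchanged while its momentum changes by $6k$.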
The secant of the relativistic energy-momentum relation and the quantum pathway that increases the electron momentum by $ 6k $ are shown in Fig. 1. The total exchange of energy and momentum of the electron with the laser beams is represented by the horizontal dashed line.
Its slope is given by $ s=\frac{\triangle E}{\triangle p} $ and it connects the initial and final momenta of the diffracted electron [17].
All pathways in the Bragg regime start and end exactly on the dispersion-relation secant. Theoretically, other transitions are possible as well, but we focus on the resonant two-state quantum dynamics in the Bragg regime. Absorption of one $ 3\omega $ photon and emission of three $ \omega $ photons leads to the same result.
\begin{figure}
\resizebox{86mm}{68mm}{\includegraphics{dispersion}
\begin{picture}(0.8,0.6)
\put(-14,25) {$\dfrac{p}{mc^2}$}
\end{picture}
\begin{picture}(0.7,0.1)
\put(5,200) {$\dfrac{\varepsilon(p)}{mc^2}$}
\end{picture}
\caption{\label{fig:epsart} Sketch of the dominant pathway of the four-photon KD effect in bichromatic standing waves. The energy-momentum dispersion relation is shown in the Bragg regime, and each wiggly arrow represents a photon.}
\end{figure}
\subsection{\label{sec:level2}\textit{S} Matrix Approach}
The evolution of the relativistic electron in the four-photon Kapitza-Dirac process is described quantum mechanically by the Dirac equation
\begin{equation}
\left(
i\hbar\slashed{\partial}+\dfrac{e}{c}\slashed{A}(x)-m
\right)\psi(x)=0,
\end{equation}
in which $\slashed{A} $ denotes the Feynman slash, $ \slashed{A}=\gamma\cdot A $, and $\gamma$ stands for the Dirac matrices.
An analytical treatment of multiphoton stimulated Compton scattering, such as bichromatic Kapitza-Dirac scattering, has been accomplished through the $ \textit{S} $ matrix approach with suitable approximations [22]. The Dirac equation has a well-known solution, the Volkov state, for an electron in an external plane-wave potential. By using the fundamental laser mode and Volkov states for the incident electron, the remaining four-photon Kapitza-Dirac process can be represented within first-order perturbation theory [23-24]. The calculation of the $ \textit{S} $ matrix is the same as that used in the three-photon KD effect, except that a further photon participates in the interaction.
In the presence of the third harmonic, the $ \textit{S} $ matrix for the transition from $ p $ to $ p^\prime $ by absorbing three photons from $ A_1 $, the beam with the fundamental frequency, and emitting one photon into $ A_2 $, the beam with frequency $ 3\omega$, is given by
\begin{eqnarray}
S &\approx& \frac{ie}{cV} \int d^4x\, \bar{u}_{p', s'} \Bigg(
\slashed{A}_2^{(+)} \tilde{J}_2 e^{i \left( p' - p - 3k_1 \right) \cdot x} \nonumber\\
& & \left. - \frac{e}{2 c} \left[ \frac{\slashed{A}_1^{(-)} \slashed{k}_1 \slashed{A}_2^{(+)}}{k_1 \cdot p'} + \frac{\slashed{A}_2^{(+)} \slashed{k}_1 \slashed{A}_1^{(-)}}{k_1 \cdot p} \right]
\tilde{J}_1 e^{i \left( p' - p - k_1 \right) \cdot x}
\right) u_{p, s} \nonumber\\
&\approx& \frac{i e}{2} T \bar{u}_{p', s'} \left[
a_2 \tilde{J}_2 \bar{\slashed{\epsilon}}_2
- \frac{e a_1 a_2}{4 c} \tilde{J}_1 \left( \frac{\slashed{\epsilon}_1 \slashed{k}_1 \bar{\slashed{\epsilon}}_2}{k_1 \cdot p'}
+ \frac{\bar{\slashed{\epsilon}}_2 \slashed{k}_1 \slashed{\epsilon}_1}{k_1 \cdot p} \right)
\right] u_{p, s}. \nonumber\\
\end{eqnarray}
Here, $\slashed{A}_1^{(-)} = \frac{1}{2} a_1 \slashed{\epsilon}_1 e^{-i k_1 \cdot x}$ is the component that describes the absorption of one photon from $ A_1 $, and $\slashed{A}_2^{(+)} = \frac{1}{2} a_2 \bar{\slashed{\epsilon}}_2 e^{i k_2 \cdot x}$, where $\bar{\slashed{\epsilon}}_2=\epsilon_2^*\cdot\gamma$, is the component that describes the emission of one photon into $ A_2 $. Also, $ \tilde{J}_{1,2}$ are generalized Bessel functions. In this derivation only the resonant scattering process, which fulfills the Bragg condition, was considered. Therefore the $d^4x$ integration results in the factor $c VT $, with $ T $ the interaction time and $ V$ the quantization volume [22]. The initial and scattered electron momenta are set to $p=(p^0,p_x,0,-3 \hbar k)$ and $p^\prime=(p^0,p_x,0,+3 \hbar k)$, respectively. The Dirac spinors for these momenta are related to the Pauli spinors $\chi_s$ by
\begin{equation}
u_{(p,s)}=\frac{1}{\sqrt{2mc(p^0+mc)}}\quad \begin{pmatrix}
(p^0+mc)\chi_s \\
\vec{p}.\vec{\sigma}\chi_s
\end{pmatrix}.
\quad
\end{equation}
We can calculate a part of Eq. (2) as
\begin{equation}
\left( \bar{u}_{p', s'} \bar{\slashed{\epsilon}}_2 u_{p, s} \right)_{s', s}
= -\frac{p_x}{m c} \vec{\epsilon}_2^{~*} \cdot \vec{e}_x + i \frac{3\omega}{m c^2} \left( \vec{\epsilon}_2^{~*}\times\vec{e}_z \right)\cdot \vec{\sigma}
\end{equation}
and in a similar way
\begin{eqnarray}
& & \left( \bar{u}_{p',s'}\left[
\frac{\slashed{\epsilon}_1 \slashed{k}_1 \bar{\slashed{\epsilon}}_2}{k_1 \cdot p'} + \frac{\bar{\slashed{\epsilon}}_2 \slashed{k}_1 \slashed{\epsilon}_1}{k_1 \cdot p}
\right] u_{p,s} \right)_{s', s} \nonumber\\
&\approx& \dfrac{2 \vec{\epsilon}_1 \cdot \vec{\epsilon_2^{~*}}}{m c}
+ \frac{6 i \omega}{m^2 c^3} \left( \vec{\epsilon}_1 \times \vec{\epsilon_2}^{*} \right)\cdot \vec{\sigma}.
\end{eqnarray}
Also from the Taylor series of the generalized Bessel functions, we can estimate
\begin{eqnarray}
\tilde{J}_1 \approx -3\frac{e a_{1}}{m c^2} \frac{p_{x}}{m c} \vec{\epsilon_1} \cdot \vec{e_x}, \nonumber\\
\tilde{J}_2
\approx \frac{e^2 a_1^2}{m^2 c^4} \left( \frac{36}{8} \frac{p_x^2}{m^2 c^2} \left( \vec{\epsilon_1} \cdot \vec{e_x} \right)^2 - \frac{3}{8}\vec{\epsilon}_1^{~2} \right).
\end{eqnarray}
Putting all this together, the $ \textit{S} $ matrix of Eq. 2 for small transverse momentum is estimated as follows
\begin{eqnarray}
& S\approx \dfrac{i}{2} T\dfrac{e^{3}a_1^{2}a_2}{m^{3}c^{6}}\left[
\dfrac{3}{8\hbar}p_x c \vec{\epsilon_1}^{2} \vec{\epsilon_2}^{*} .\vec{e_x}+
\dfrac{3}{4\hbar}p_x c \vec{\epsilon_1}.\vec{e_x} \vec{\epsilon_1}.\vec{\epsilon_2}^{*} \right.\nonumber\\
&\left. -\dfrac{9i}{8}\omega \vec{\epsilon_1}^2
(\vec{\epsilon_2}^{*}\times\vec{e_z}). \vec{\sigma}\right]
=\dfrac{i}{2}T\xi_1^2\xi_2\hat{\Omega}.
\end{eqnarray}
Here $ \xi_{1,2}=\dfrac{ea_{1,2}}{mc^2}$ are the common dimensionless field amplitudes used in atomic physics. It is clear that, for electrons moving entirely along the $ z $ axis, the transition from $ p $ to $ p^\prime $ depends on the beam polarization vectors $ \vec{\epsilon_i} $ and on the direction of the electron spin along the $z$ axis.
Generally, for a combination of the fundamental laser beam with a higher-harmonic laser beam, $ \hat{\Omega} $ has the common form presented in Eq. 7 and only the coefficients of each term change. For an electron with $ p_x=p_y=0 $, the third term of Eq. 7 plays the essential role in determining the spin state of the scattered electron. As a result, we expect that an electron parallel to the beam axis in bichromatic ($\omega$:$3\omega$) laser beams shows a spin dynamics similar to that in bichromatic ($\omega$:$2\omega$) standing waves.
\section{Numerical Results}
In this section, numerical results are presented for spin-dependent Kapitza-Dirac scattering in bichromatic counter-propagating laser fields with a frequency ratio of 3. Taking into account the combined vector potential and rewriting the Dirac equation in momentum space, we obtain a system of coupled ordinary differential equations, which is solved numerically with a Crank-Nicolson scheme. The quantities of interest are the absolute squares of the expansion coefficients, which give the scattering probability of the electron into a particular quantum state. The coefficients $ c_n^{\zeta}\ (n=0,\pm1,\pm2,...)$ represent electrons with momentum $p_n=(p_x,0,nk)$. The index $ \zeta\in\lbrace+\uparrow,+\downarrow,-\uparrow,-\downarrow \rbrace$ labels the sign of the energy and the spin direction. These states can be denoted by $\vert nk,\zeta\rangle $.
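As an illustration of the time-stepping idea only (this is not the momentum-space Dirac solver used for the figures below, and the frequency, time step and state labels are arbitrary choices of ours), a norm-preserving Crank-Nicolson step for a toy two-level Rabi problem can be sketched as follows:
\begin{verbatim}
import numpy as np

# Toy two-level Rabi problem: c = (c_initial, c_scattered), hbar = 1,
# constant coupling H = (Omega/2) * sigma_x.  One Crank-Nicolson step is
#   c_{n+1} = (1 + i dt H / 2)^{-1} (1 - i dt H / 2) c_n ,
# which is unitary, so the total occupation probability is conserved.
Omega = 2.0 * np.pi                      # illustrative Rabi frequency
H = 0.5 * Omega * np.array([[0., 1.], [1., 0.]], dtype=complex)

dt, steps = 1.0e-3, 2000
U = np.linalg.inv(np.eye(2) + 0.5j * dt * H) @ (np.eye(2) - 0.5j * dt * H)

c = np.array([1.0, 0.0], dtype=complex)  # start fully in the initial state
history = []
for n in range(steps):
    c = U @ c
    history.append(np.abs(c) ** 2)       # occupation probabilities vs time

# |c_scattered|^2 follows sin^2(Omega * t / 2) up to O(dt^2) accuracy.
\end{verbatim}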
In the four-photon KD effect, the vector potential for the bichromatic field $ (\omega:3\omega) $ can be described in the form of
\begin{equation}
\vec{A} =A_1\left[\cos (k(z\pm ct))\hat{\epsilon}_1\right]+A_2\left[\cos (3k(z\pm ct))\hat{\epsilon}_2\right],
\end{equation}
where $A_1$ and $A_2 $ are the amplitudes of the standing waves and $\epsilon_1$, $\epsilon_2$ are the polarization vectors. An electron with initial longitudinal momentum $p_z=-3k $ in the presence of this vector potential is scattered into the mirror mode with longitudinal momentum $p_z=+3k $ by exchanging a momentum of $ 6k $. In all of the following calculations, we start with longitudinal momentum $p_z=-3k $ and spin projection either up or down, while all other momentum modes are initially empty. As mentioned above, in the Bragg regime the transfer of population is restricted to the final state $p_z= +3k $ and the occupation probability of the other momentum modes is very small. When the electron has a momentum component perpendicular to the laser beam direction, all four states $\vert-3,+\uparrow\rangle $, $\vert-3,+\downarrow\rangle$, $\vert+3,+\uparrow\rangle $ and $\vert+3,+\downarrow\rangle $ participate in the interaction. It is worth noting that, since we focus on the effect of $\vec{\sigma}\cdot\vec{B}$ in this work, we choose the electron momentum to be parallel to the laser beam axis; in practice, however, the influence of $ \vec{p}\cdot\vec{A} $ cannot be ignored.
In all simulations, the laser fields are switched on and off smoothly with $\sin^2 $ ramps lasting five laser periods, with a flat top in between. To study spin effects in four-photon KD diffraction, the following polarizations of the bichromatic standing wave are considered:
\begin{figure}[t]
\resizebox{86mm}{68mm}{\includegraphics{bothlinearwcos}}
\caption{\label{fig:epsart}Time evolution of the occupation probability in four-photon KD effect with linear fundamental beam and counter propagating linear third harmonic. The overall laser intensity is $ 3 \times10^{22}$ W$\textnormal{cm}^{-2}$ with wavelength $\lambda=1.03$ nm, field parameters for beam $\omega$ and $3\omega$ are $A_1=18 \times10^{3}$ eV and $A_2=6 \times10^{3}$ eV respectively. The electron enters the laser fields with $p_z=-3k$ along the laser field direction.}
\end{figure}
\begin{figure}[b]
\resizebox{86mm}{68mm}{\includegraphics{bothlinear.eps}}
\caption{\label{fig:epsart} Rabi oscillation in both linear polarization setup for vector potential mentioned in Eq. 10. All other laser and electron parameters are same as Fig. 2.}
\end{figure}
\subsection{\label{sec:level2}Linear Polarization for Two Laser Beams (Lin-Lin)}
In the first setup, the fundamental field and its third harmonic are linearly polarized along the $ x $ axis
\begin{equation}
\vec{A} =A_1\left[\cos(k(z-ct))\hat{e}_x\right]+A_2\left[\cos (3k(z+ct))\hat{e}_x\right].
\end{equation}
The incident electron momentum is $-3\hbar k$ along the laser propagation direction, with no component parallel to the polarization direction. The overall laser intensity in the Lin-Lin setup is $ I=\frac{\omega^2A_1^2+(3\omega)^{2}A_2^2}{8\pi c} $, evaluated when the beam amplitudes are at their maximum. The photon energy of the fundamental laser beam, $\hbar\omega$, considered for all numerical solutions is $1.2\times10^3 $ eV. Fig. 2 presents the typical behavior of an electron in both linear laser beams. For an electron injected with spin up, a Rabi oscillation takes place between $\vert-3,+\uparrow\rangle $ and $\vert+3,+\downarrow\rangle $. The interaction in this field configuration is independent of the initial electron spin state: an electron in the initial momentum and spin-down state $\vert-3,+\downarrow\rangle $ is likewise scattered into the reflected momentum and spin-up state $\vert+3,+\uparrow\rangle $, with a Rabi oscillation similar to that in Fig. 2. This spin-flipping symmetry also exists in the three-photon bichromatic $ (\omega:2\omega) $ KD effect with linear polarization [22].
The oscillation in the bichromatic four-photon KD effect is sinusoidal and exhibits two distinct peaks of different size in Fig. 2. That is, the probability $ \vert c_{-3}^{+\uparrow}\vert^2 $ undergoes a sinusoidal oscillation whose minima alternate between 0.16 and 0.0, while $ \vert c_{+3}^{+\downarrow}\vert^2 $ oscillates similarly but with maxima alternating between 0.84 and 1.0. The Rabi oscillation period is about $ 2.8 $ fs, and the simulations show that these two distinct peaks of $ \vert c_{-3}^{+\uparrow}\vert^2 $ and $ \vert c_{+3}^{+\downarrow}\vert^2 $ persist even when the standing-wave amplitudes are changed.
The vector potential $ A(x)=f(t)[A_1(k_1\cdot x)+A_2(k_2\cdot x)] $ with a slow envelope function $f(t)$, used for the bichromatic $ (\omega:2\omega) $ KD effect, yields the typical single-peak Rabi oscillation [22]. Our results confirm that, with the analogous vector potential for the bichromatic $ (\omega:3\omega) $ KD effect,
\begin{equation}
\vec{A} =f(t)\left( A_1\left[\cos(kz)\hat{e}_x\right]+A_2\left[\cos(3kz)\hat{e}_x\right] \right),
\end{equation}
the typical sinusoidal oscillatory behavior appears, as shown in Fig. 3. The Rabi cycle is fully developed in Fig. 3 and the Rabi oscillation period is $ 2\pi/\Omega_R=0.84 $ fs. Comparing the vector potentials of Eq. 9 and Eq. 10, it is evident that the presence of the $\cos(\omega_{i=1,2}t)$ time dependence in the numerical solutions results in the different oscillation behavior.
\subsection{\label{sec:level2}Co-rotating Circular Fundamental and Third Harmonic Fields (Cir-Cir)}
We now consider an electron in two circularly polarized, bichromatic, counter-propagating but co-rotating waves given by
\begin{eqnarray}
&\vec{A} =\frac{A_1}{\sqrt{2}}\left[\cos (k(z-ct))\left(\hat{ e}_x+i\hat{e}_y\right)\right] \nonumber\\
& +\frac{A_2}{\sqrt{2}}\left[\cos (3k(z+ct))\left(\hat{e}_x+i\hat{e}_y\right)\right].
\end{eqnarray}
If the electron is initially in the spin-up state with momentum $ -3k $, no diffraction occurs in this field configuration, as shown in the upper panel of Fig. 4. In contrast, for an electron with spin down and initial momentum $ -3k $ (lower panel), a Rabi oscillation between the two states $\vert-3,+\downarrow\rangle $ and $\vert+3,+\uparrow\rangle $ takes place. This spin-dependent diffraction implies that electrons can be separated based on their initial spin state in circularly polarized laser beams. The shape of the Rabi oscillation in the Cir-Cir setup for an electron with spin down is interesting: two peaks still exist, but with peak maxima of 1.0 and 0.96 their difference is smaller than in the previous setup. As shown in Fig. 4, during two intervals of about $0.56 $ fs the mode populations do not transfer and the electron maintains its momentum and spin; in fact, as in Fig. 2, the momentum and spin of the electron do not change instantaneously. Also in this setup, the vector potential of Eq. 11 without the $\cos(\omega_i t )$ time dependence results in the typical sinusoidal Rabi oscillation similar to Fig. 3.
\begin{figure}[t]
\centering
\resizebox{86mm}{68mm}{\includegraphics{2cir.eps}}
\caption{\label{fig:epsart}Time evolution of the occupation probability in the four-photon KD effect with co-rotating circular bichromatic waves. The combined laser intensity is $ 7\times10^{22}$ W$\textnormal{cm}^{-2} $ with wavelength $\lambda=1.03$ nm, and the field amplitude for both beams is $A_1=A_2=30\times10^{3}$ eV. The electron enters the laser fields with $p_z=-3k$ along the laser direction. The upper panel shows that an electron with spin up does not scatter in this setup.}
\end{figure}
\subsection{\label{sec:level2}Combination of Linear and Circular Polarization Fields (Lin-Cir)}
To examine the result of the last section about spin separation, we focus on a bichromatic setup in which the fundamental beam is linearly polarized and the third harmonic beam is circularly polarized
\begin{eqnarray}
&\vec{A} =A_1\left[\cos (k(z-ct))\left(\hat{ e}_x\right)\right] \nonumber\\
& +\frac{A_2}{\sqrt{2}}\left[\cos (3k(z+ct))\left(\hat{e}_x+i\hat{e}_y\right)\right].
\end{eqnarray}
When the high-frequency beam is chosen to be circularly polarized, as shown in Fig. 5, only the population starting from $\vert-3,+\uparrow\rangle $ travels to the $\vert+3,+\downarrow\rangle$ state, and a Rabi oscillation takes place. The Rabi period is $2.8$ fs and, with the same parameters as the Lin-Lin setup, the mode population does not transfer completely: the maximum Rabi amplitude of the $\vert+3,+\downarrow\rangle$ state only reaches 0.84. When the electron has spin down and momentum $ -3k$, i.e. the state $\vert-3,+\downarrow\rangle $, no scattering occurs at all. According to the $ \textit{S}$ matrix in Eq. 7, setting $ \epsilon_1=\hat{ e}_x $ and $ \epsilon_2=(\hat{ e}_x+i\hat{ e}_y)/\sqrt{2} $ leads to a $ \sigma_-$ term in the Rabi frequency $\hat{\Omega} $, so the symmetry between the spin up $ \rightarrow $ down and spin down $ \rightarrow $ up transitions disappears.
\begin{figure}[t]
\centering
\resizebox{86mm}{68mm}{\includegraphics{lincir.eps}}
\caption{\label{fig:epsart}Time evolution of the occupation probability in the four-photon KD effect with a hybrid setup in which the fundamental laser beam is linearly and the third harmonic circularly polarized. The other simulation parameters are the same as in Fig. 2. In the upper panel the electron beam is initially spin up and spin-flipping transitions occur. The electron beam with spin down (lower panel) is not diffracted.}
\end{figure}
For the case of a circularly polarized fundamental-frequency beam and a linearly polarized third-harmonic beam, the spin-flipping symmetry still exists, i.e., choosing $ c_{-3}^{+\uparrow}(t=0)=1 $ or $ c_{-3}^{+\downarrow}(t=0)=1 $, the population is transferred to $\vert+3,+\downarrow\rangle $ and $\vert+3,+\uparrow\rangle $, respectively. We find a Rabi oscillation similar to the upper panel of Fig. 5 for both electron spin states. In fact, the spin asymmetry arises only when the high-frequency laser beam is circularly polarized. As in the setup with both beams linearly polarized, if the $ \cos(\omega_i t)$ time dependence is not included, we find a typical Rabi oscillation.
\subsection{\label{sec:level2} Combination of Linear - Elliptical Polarization Fields (Lin-Ellip)}
When an elliptically polarized laser beam with frequency $ \omega $ and $ \epsilon_1=(2\hat{ e}_x+i\hat{ e}_y)/\sqrt{5}$ is combined with a linearly polarized laser beam of frequency $ 3\omega $ and $\epsilon_2=\hat{ e}_x $, Rabi oscillations occur for both the spin-up and spin-down states of the electron. The numerical results clearly demonstrate the spin-flipping symmetry. For an electron with momentum $ -3k$, whether the initial spin is up or down, we obtain the Rabi oscillation shown by the dashed line in Fig. 6. For $ A_1=3A_2=18\times10^3 $ eV the maximum Rabi amplitude in this setup is $0.25 $.
In the case of linear polarization for the fundamental frequency and elliptical polarization for the third harmonic, the vector potential is given by
\begin{eqnarray}
&\vec{A} =A_1\left[\cos (k(z-ct))\left(\hat{ e}_x\right)\right] \nonumber\\
& +\frac{A_2}{\sqrt{5}}\left[\cos (3k(z+ct))\left(2\hat{e}_x+i\hat{e}_y\right)\right].
\end{eqnarray}
Solving the Dirac equation for this vector potential gives a Rabi oscillation for the electron with initial spin up, shown by the solid line in Fig. 6. Exactly the same Rabi oscillation occurs if the initial electron spin is down. Therefore, with the vector potential given by Eq. 13, the diffraction probability does not depend on the spin orientation of the incident electrons, and with elliptical polarization of the high harmonic, unlike the circular polarization case, the electron cannot be spin polarized.
In fact, the Rabi frequency obtained from the $ S$ matrix in Eq. 7 also predicts this symmetric spin-flipping behavior. The unequal polarization amplitudes bring out $ \sigma_x $ or $ \sigma_y $ components in the third term of the Rabi frequency of Eq. 7; under these conditions the separation of electrons according to their initial spin states vanishes.
\begin{figure}[t]
\centering
\resizebox{86mm}{68mm}{\includegraphics{linellp.eps}}
\caption{\label{fig:epsart}Time evolution of the occupation probability in the four-photon KD effect for a linearly polarized fundamental laser beam and a third harmonic with polarization $ (2\hat{ e}_x+i\hat{ e}_y)/\sqrt{5} $ (solid line). The dashed line shows the case of an elliptically polarized fundamental beam and a linearly polarized third harmonic. The fundamental laser wavelength is $\lambda=1.03$\,nm and the field amplitudes are the same as in Fig. 2. For an incident electron with spin up or spin down, Rabi transitions between the two resonant states occur in any combination of the Lin-Ellip configuration.}
\end{figure}
\section{Conclusion}
Four-photon Kapitza-Dirac scattering occurs in the bichromatic standing wave when the electron beam absorbs three photons from the laser field with frequency $ \omega $ and emits one photon into the counter-propagating laser beam with frequency $ 3\omega $. The initial electron spin and the photon helicity are two factors that can affect the polarization of the free electron in the bichromatic $(\omega:3\omega)$ KD effect. In this work it is shown that, when the fundamental laser beam is linearly and the third harmonic circularly polarized, the Rabi oscillation occurs only for electrons with spin up; when the initial electrons have spin down, no diffraction occurs and the symmetric spin-flipping effect disappears.
In this study, we showed that the Rabi oscillation in the bichromatic $(\omega:3\omega)$ vector potential has two unequal peaks that change with the beam amplitudes and the laser intensity. This modified oscillation is due to the rapidly time-varying $\cos(\omega_i t )$ part of the vector potential. Our results also indicate that these two peaks in the Rabi oscillation exist for other harmonics, such as the second and fourth harmonics. The Rabi frequency obtained from the analytical $\textit{S}$ matrix is consistent with the numerical results, and it predicts that, even for other harmonics with circular polarization, the electron is diffracted according to its initial spin state.
The momentum and spin of free electrons in the four-photon bichromatic KD effect can therefore be controlled by a suitable laser field configuration. A spin-unpolarized incident electron beam with momentum $ -3k$, after interaction with a linearly polarized fundamental laser beam and a circularly polarized third harmonic, is split by the Rabi oscillation into the two portions $\vert-3,+\uparrow\rangle $ and $\vert+3,+\downarrow\rangle $; the electrons with momentum $+3k$ are entirely spin polarized. By choosing the electron momentum along the laser beams and circular polarization for the high-frequency laser beam in the bichromatic KD effect, this electron spin filter works for other harmonics participating in the Kapitza-Dirac effect as well.
\section{References}
|
{
"timestamp": "2018-03-08T02:10:45",
"yymm": "1803",
"arxiv_id": "1803.02748",
"language": "en",
"url": "https://arxiv.org/abs/1803.02748"
}
|
\section{Introduction}
The task of person re-identification (Re-ID) is to judge whether two person images represent the same person or not, and it has widespread applications in video surveillance. Viewpoint changes pose two challenges: the variation of a person's pose and misalignment.
Many existing methods address the challenges above by extracting cross-view invariant features~\cite{Alpher13}~\cite{Alpher14}~\cite{Alpher16}~\cite{Alpher27}~\cite{Alpher19}. These methods focus on extracting local features, including hand-crafted features and deep learning features, from horizontal stripes of a person image and fuse them into a description vector as the representation. Though these methods usually work under the assumption of at most a slight vertical misalignment, they ignore the typically widespread horizontal misalignment. From the perspective of human perception, the images captured by two cameras for the same person should share many common components (body parts, front and back pose, belongings), which is what allows people to decide whether the two input images represent the same person or not. Based on this principle, methods like DCSL~\cite{Alpher06} employ deep convolutional networks to learn the correspondence among these components and have shown promising performance. DCSL uses deep convolutional networks like GoogLeNet~\cite{Alpher09} to extract the semantic-components representation. For bottom layers, the discriminative region in each feature map learned by DCSL corresponds to one component of a person, such as a bag, the head, or the body. For high layers, the learned regions still keep their shapes and spatial locations while becoming more abstract. However, the feature regions of the same components from two views of the same person seldom have consistent spatial scales and locations because of viewpoint changes. For example, the component ``bag'' is located on opposite sides in the two images in Figure~\ref{fig:example}. Existing methods like DCSL, however, ignore this problem.
\begin{figure}
\includegraphics[width=1\textwidth]{example}
\caption{An example of misalignment for the component ``bag'' in two different images where the feature maps are extracted with the GoogLeNet.}
\label{fig:example}
\end{figure}
To address the challenges above, in this work we present a deep convolutional pyramid person matching network (PPMN). Pyramid matching based on convolution operations is employed to compute the responses of the same semantic-component in different images. To further capture the variation of spatial scale and the location misalignment, we exploit convolutions with flexible kernel sizes so that most of the semantic-components are matched within the same subwindows. Since convolution with large kernels increases the number of parameters and the computation, we reduce the complexity by introducing the atrous convolution structure~\cite{Alpher23}, which has been used in other convNet-based tasks such as image segmentation and object detection, and which enlarges the field-of-view without adding parameters or computation by inserting zeros between consecutive filter values. In particular, we employ multi-rate atrous convolution layers to construct the Pyramid Matching Module and produce the correspondence representation between the semantic-components. With this correspondence representation, we learn the final similarity value to decide whether the two input images depict the same person.
The proposed framework is evaluated on three real-world datasets. Extensive experiments on these benchmark datasets demonstrate the effectiveness of our approach against the state-of-the-art, especially on the rank-1 recognition rate.
The main contributions of this paper are as follows:
(1) We propose an end-to-end deep convolutional framework for person Re-ID, in which image representation learning and cross-person correspondence learning are jointly optimized so that the image representation adapts to the task of person Re-ID.
(2) The proposed framework maps a person's semantic-components to the deep feature space and employs the pyramid matching strategy based on atrous convolution to identify the common components of the person.
\section{Related Work}
In the literature, most existing efforts on person Re-ID are carried out along two lines: discriminative representation learning and effective matching strategy learning. For image representation, a number of approaches focus on designing robust descriptors against misalignment and variation. Early studies employ hand-crafted features including the HSV color histogram~\cite{Alpher01}, SIFT~\cite{Alpher02}, and LBP~\cite{Alpher03}, or combinations thereof. Recently, several deep convolutional architectures \cite{Alpher27}~\cite{Alpher06} have been proposed for person Re-ID and have shown significant improvements over hand-crafted features.
For the matching strategy, the essential idea behind metric learning is to find a mapping from the feature space to a distance space so as to minimize the intra-personal variance while maximizing the inter-personal margin. Many approaches have been proposed based on this idea, including LMNN~\cite{Alpher13} and KISSME~\cite{Alpher14}. Recently, some efforts jointly learn the representation and the classifier in a unified deep architecture. For example, patch-based methods~\cite{Alpher16}~\cite{Alpher27} decompose images into patches and perform patchwise distance measurement to capture the spatial relationship. Part-based methods~\cite{Alpher19} divide a person into equal parts and jointly perform bodywise and partwise correspondence learning, since pedestrians generally remain upright. Different from all the above efforts, which focus on feature distance measurement, our method aims at learning the correspondence of semantic-components based on semantics-aware features and is robust to the variation and misalignment caused by viewpoint changes.
\section{Our Architecture}
Figure \ref{fig:architecture} illustrates our network's architecture. The proposed architecture extracts the semantics-aware representations for a pair of input person images. The features are then concatenated to feed into the Pyramid Matching Module to learn the correspondence of semantic-components. Finally, softmax activations are employed to compute the final decision which indicates the probability that the image pair represents the same person. The details of the architecture are explained in the following subsections.
\subsection{Learning representation for Images}
The ImageNet-pretrained GoogLeNet employed in this work is able to capture semantic features for most objects in this task, as the ImageNet dataset contains a large number of object types for more than 100000 concepts. In our architecture, these semantic features are extracted with two parameter-shared GoogLeNets, one for each image of the input pair. As shown in Figure \ref{fig:architecture}, the GoogLeNets are adapted to the Re-ID task by finetuning on a Re-ID dataset and decompose the person image into semantic components such as bag, head, and body. The particular components are readily recognizable in the visualizations of the bottom layers' output, e.g. the Conv1 layer. The visualizations for higher layers such as the Conv5 layer are more abstract but still keep the shapes and spatial locations. For notational simplicity, we refer to the convNet as a function $f_{CNN}( \boldsymbol X; \boldsymbol \theta)$, which takes $ \boldsymbol X$ as input and $ \boldsymbol \theta$ as parameters. Given an input pair of images resized to $160\times80$ from two cameras, A and B, the GoogLeNets separately output 1024 feature maps of size $10\times5$ as the representations of the images. We denote this process as follows:
\begin{equation}
\label{equ:imgPre}
\{\boldsymbol R^A, \boldsymbol R^B\}=\{f_{CNN}(\boldsymbol I^A; \boldsymbol \theta_1), f_{CNN}(\boldsymbol I^B; \boldsymbol \theta_1)\}
\end{equation}
where $\boldsymbol R^A$ and $\boldsymbol R^B$ denote the representations of images $\boldsymbol I^A$ and $\boldsymbol I^B$, respectively. $\boldsymbol \theta_1$ are the shared parameters.
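For concreteness, a minimal PyTorch-style sketch of this parameter sharing is given below. It is illustrative only (our experiments use Caffe and GoogLeNet); the placeholder \texttt{Backbone} merely reproduces the weight sharing and the $1024\times10\times5$ output shape for a $160\times80$ input.
\begin{verbatim}
# Illustrative sketch of the parameter-shared feature extraction; the
# Backbone below is a stand-in for the truncated GoogLeNet, and only the
# weight sharing and the 1024 x 10 x 5 output shape follow the text.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Placeholder f_CNN(X; theta_1) mapping 3 x 160 x 80 -> 1024 x 10 x 5."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 1024, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.features(x)

backbone = Backbone()                         # one set of parameters theta_1
img_a = torch.randn(1, 3, 160, 80)            # image from camera A
img_b = torch.randn(1, 3, 160, 80)            # image from camera B
r_a, r_b = backbone(img_a), backbone(img_b)   # shared weights, applied twice
print(r_a.shape, r_b.shape)                   # [1, 1024, 10, 5] each
\end{verbatim}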
\begin{figure}
\includegraphics[width=1\textwidth]{new_architecture}
\caption{The proposed architecture of the deep convolutional Pyramid Person Matching Network (PPMN). Given a pair of person images as input, the parameter-shared GoogLeNets generate the semantics-aware representation. Semantic components such as the bag and head are visible in the output of the Conv1 layer. With the extracted features, the Pyramid Matching Module learns the correspondence of these semantic components based on multi-scale atrous convolution layers. Finally, softmax activations give the final decision of whether the image pair depicts the same person.}
\label{fig:architecture}
\vspace*{-2em}
\end{figure}
\subsection{Pyramid Matching Module using Atrous Convolution}
Based on the semantic representations of persons, the problem of person matching reduces to matching the semantic-components. The challenges are the variations in spatial scale and the misalignment of locations of the semantic-components caused by viewpoint changes. As shown in Figure \ref{fig:example}, the same bag belonging to the same person is located on the right side in one image but on the left side in the other. To deal with these challenges, we employ atrous convolution with multi-scale kernels to construct a module, called the Pyramid Matching Module, based on the pyramid matching strategy. We give two examples in Figure~\ref{fig:fusion_layer} to explain how this module works. In the left column, the component ``head'' has similar spatial shapes and locations in the two images. Its correspondence is easy to learn with a general convolution operation, which computes the responses of two feature regions in closely located windows (the field-of-views) in the two images. The component ``bag'', in contrast, has completely different shapes and locations in the two images, and thus different field-of-views, so a larger field-of-view is required for the convolution. Accordingly, we employ atrous convolution to obtain a large field-of-view. The Pyramid Matching Module consists of three branches of $3\times3$ atrous convolution with rates 1, 2, and 3, which provide field-of-views of size $3\times3$, $5\times5$, and $7\times7$, respectively. With the representations concatenated as $\{\boldsymbol R^A,\boldsymbol R^B\}$, the proposed module computes the correspondence distribution $\boldsymbol S_{PPM} = \{ \boldsymbol S_{r=1}, \boldsymbol S_{r=2}, \boldsymbol S_{r=3} \}$, in which the value at each location $(i, j)$ indicates the correspondence probability at that location and $r$ is the rate of the atrous convolution. We formulate this matching strategy as follows:
\begin{align}
\label{equ:ppm}
\boldsymbol S_{PPM} ={} & \{\boldsymbol S_{r=1}, \boldsymbol S_{r=2}, \boldsymbol S_{r=3}\} \notag \\
={} & \{f_{CNN}(\{\boldsymbol R^A,\boldsymbol R^B\}; \{\boldsymbol \theta^1_2, \boldsymbol \theta^2_2, \boldsymbol \theta^3_2\})\}
\end{align}
where $\boldsymbol \theta^r_2(r=1,2,3)$ are the parameters of the matching branch with rate $r$. We use $\boldsymbol \theta_2 = \{ \boldsymbol \theta^1_2, \boldsymbol \theta^2_2, \boldsymbol \theta^3_2 \}$ as the parameters of our module.
We fuse the concatenated correspondence maps $\boldsymbol S_{PPM}$ with learned parameters $\boldsymbol \theta_3$, which encode the weights of the different matching branches, and output the fused correspondence representation $\boldsymbol S_{fusion}$. Inspired by \cite{Alpher06}, we further downsample $\boldsymbol S_{fusion}$ by max-pooling so as to preserve the most discriminative correspondence information and align the result over a larger region. We then obtain the final correspondence representation $\boldsymbol S_{final}$:
\begin{align}
\label{equ:weights}
\boldsymbol S_{final} = f_{CNN}(\{\boldsymbol S_{r=1}, \boldsymbol S_{r=2}, \boldsymbol S_{r=3}\}; \boldsymbol \theta_3)
\end{align}
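A minimal sketch of this module in PyTorch is shown below. It is illustrative only: the branch and output channel widths, the $1\times1$ fusion convolution, and the $2\times2$ pooling are assumptions, while the three $3\times3$ atrous branches with rates 1, 2, and 3 follow the description above.
\begin{verbatim}
# Sketch of the Pyramid Matching Module: three 3x3 atrous branches with
# rates 1, 2, 3, a learned fusion (theta_3), and max-pooling. Channel
# widths and the 1x1 fusion / 2x2 pooling choices are assumptions.
import torch
import torch.nn as nn

class PyramidMatching(nn.Module):
    def __init__(self, in_ch=2048, branch_ch=512, out_ch=512):
        super().__init__()
        # dilation=r with padding=r keeps the 10 x 5 spatial size;
        # the effective field-of-view of each branch is (2r+1) x (2r+1)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, dilation=r, padding=r)
            for r in (1, 2, 3)
        ])
        self.fuse = nn.Conv2d(3 * branch_ch, out_ch, kernel_size=1)  # theta_3
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
    def forward(self, r_a, r_b):
        x = torch.cat([r_a, r_b], dim=1)             # {R^A, R^B}: 2048 x 10 x 5
        s = [branch(x) for branch in self.branches]  # S_{r=1}, S_{r=2}, S_{r=3}
        s_fusion = self.fuse(torch.cat(s, dim=1))    # S_fusion
        return self.pool(s_fusion)                   # S_final

ppm = PyramidMatching()
s_final = ppm(torch.randn(1, 1024, 10, 5), torch.randn(1, 1024, 10, 5))
\end{verbatim}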
\subsection{The unified framework and Learning}
We apply two fully connected layers to encode the correspondence representation $\boldsymbol S_{final}$ into an abstract vector of size 1024. The vector is then passed to a softmax layer with two units $\boldsymbol S(\boldsymbol S_{final}; \boldsymbol \theta_4)$, namely $\boldsymbol S_0(\boldsymbol S_{final}; \boldsymbol \theta_4)$ and $\boldsymbol S_1(\boldsymbol S_{final}; \boldsymbol \theta_4)$. We represent the probability that the two images in the pair, $\boldsymbol I^A$ and $\boldsymbol I^B$, depict the same person with the softmax activations computed on these units:
\begin{equation}
\label{equ:softmax}
p = \frac {\exp(\boldsymbol S_1(\boldsymbol S_{final}; \boldsymbol \theta_4))}{\exp(\boldsymbol S_0(\boldsymbol S_{final}; \boldsymbol \theta_4))+\exp(\boldsymbol S_1(\boldsymbol S_{final}; \boldsymbol \theta_4))}
\end{equation}
We reformulate this approach as a unified framework with $\boldsymbol \theta = \{\boldsymbol \theta_1, \{\boldsymbol \theta^r_2\}, \boldsymbol \theta_3, \boldsymbol \theta_4 \}$, where $r=1,2,3$, based on Eqs.~\eqref{equ:imgPre}--\eqref{equ:weights}:
\begin{align}
\label{equ:unified}
S(\boldsymbol S_{final}, \boldsymbol \theta_4) ={} & f_{CNN}(\{\boldsymbol S_{r=1}, \boldsymbol S_{r=2}, \boldsymbol S_{r=3}\}; \boldsymbol \theta_4, \boldsymbol \theta_3) \notag \\
={} & f_{CNN}(\{\boldsymbol I^A,\boldsymbol I^B\}; \boldsymbol \theta_4, \boldsymbol \theta_3, \{\boldsymbol \theta^r_2\},\boldsymbol \theta_1) \notag \\
={} & f_{CNN}(\{\boldsymbol I^A,\boldsymbol I^B\}; \boldsymbol \theta)
\end{align}
We optimize this framework by minimizing the widely used cross-entropy loss over a training set of $N$ pairs:
\begin{align}
\label{equ:loss}
\boldsymbol L_\theta = - \frac{1}{N} \sum^N_{n=1} [ l_n \log p_n + (1-l_n) \log (1-p_n) ]
\end{align}
where $l_n$ is the binary label indicating whether the $n$-th input pair depicts the same person.
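The following short NumPy check evaluates Eqs.~\eqref{equ:softmax} and \eqref{equ:loss} on toy scores; it is purely illustrative.
\begin{verbatim}
# Toy numerical check of the two-unit softmax probability and the
# cross-entropy loss; scores and labels below are made up.
import numpy as np

def pair_probability(s0, s1):
    """p = exp(S_1) / (exp(S_0) + exp(S_1)) for one image pair."""
    m = max(s0, s1)                       # shift for numerical stability
    e0, e1 = np.exp(s0 - m), np.exp(s1 - m)
    return e1 / (e0 + e1)

def cross_entropy(labels, probs):
    labels = np.asarray(labels, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

scores = [(0.2, 2.1), (1.5, -0.3), (0.0, 0.4)]   # (S_0, S_1) for three pairs
probs = [pair_probability(s0, s1) for s0, s1 in scores]
print(cross_entropy([1, 0, 1], probs))
\end{verbatim}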
\begin{figure}
\includegraphics[width=1\textwidth]{fusion_layer}
\caption{
Illustration of the correspondence learning with the Pyramid Matching Module. Left: the component ``head'' has similar spatial shapes and locations. Right: the component ``bag'' has completely different shapes and locations. We match these components by computing their responses in the corresponding windows and take convolutions with a multi-scale field-of-view, which are robust to the location misalignment and scale variation posed by viewpoint changes.
}
\label{fig:fusion_layer}
\end{figure}
\section{Experiments}
\subsection{Datasets and Protocol}
We compare our proposed architecture with the state-of-the-art approaches on three person Re-ID datasets, namely CUHK03~\cite{Alpher27}, CUHK01~\cite{Alpher28} and VIPeR~\cite{Alpher29}. All the approaches are evaluated with Cumulative Matching Characteristics (CMC) by single-shot results, which characterize a ranking result for every image in the gallery given a probe image. Our experiments are conducted on the datasets with 10 random initializations in training and the average results are provided. Table~\ref{table:dataset} lists the description of each dataset and our experimental settings with the training and testing splits.
\begin{table}
\caption{Datasets and settings in our experiments. The settings for CUHK01 dataset include the 100 test IDs and 486 test IDs. }
\label{table:dataset}
\begin{center}
\begin{tabular}{c|c|c|c}
\toprule
Dataset & CUHK03 & CUHK01 & VIPeR \\
\midrule
identities & 1360 & 971 & 632 \\
images & 13164 & 3884 & 1264 \\
views & 2 & 2 & 2 \\
train IDs & 1160 & 871;485 & 316 \\
test IDs & 100 & 100;486 & 316 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Training the Network}
The proposed architecture is implemented in the widely used deep learning framework Caffe~\cite{Alpher30} with an NVIDIA TITAN X GPU. Training takes about 40--48 hours for 160K iterations with batch size 100. We use stochastic gradient descent to update the weights of the network, with momentum $\gamma = 0.9$ and weight decay $\mu = 0.0002$. We start with a base learning rate of $\eta^{0} = 0.01$ and gradually decrease it as training progresses using a polynomial decay policy with the power set to $0.5$.
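For reference, this schedule can be written as the short sketch below, assuming the usual polynomial form $\eta = \eta^{0}(1-\mathrm{iter}/\mathrm{max\_iter})^{power}$ with the 160K iterations and $power=0.5$ quoted above.
\begin{verbatim}
# Polynomial ("poly") learning-rate decay, assuming the standard form
# eta = eta_0 * (1 - iter / max_iter) ** power.
def poly_lr(iteration, base_lr=0.01, max_iter=160000, power=0.5):
    return base_lr * (1.0 - iteration / max_iter) ** power

for it in (0, 40000, 80000, 120000, 160000):
    print(it, round(poly_lr(it), 5))   # 0.01, 0.00866, 0.00707, 0.005, 0.0
\end{verbatim}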
\textbf{Data Augmentation}. To make the model robust to the image translation variance and to further augment the training dataset, for every original training image, we sample 5 images around the image center, with translation drawn from a uniform distribution in the range $[-8,8]\times[-4,4]$ for an original image of size $160\times80$.
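A small sketch of this augmentation step is given below; padding-and-cropping is one possible way to realize the shifts and is an assumption, while the five samples per image and the $[-8,8]\times[-4,4]$ offset range follow the text.
\begin{verbatim}
# Translation augmentation: five shifted copies per 160 x 80 image, with
# offsets drawn uniformly from [-8, 8] x [-4, 4]. Edge padding is an
# implementation choice, not taken from the paper.
import numpy as np

def sample_translations(image, n_samples=5, max_dy=8, max_dx=4, rng=None):
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    padded = np.pad(image, ((max_dy, max_dy), (max_dx, max_dx), (0, 0)),
                    mode="edge")
    crops = []
    for _ in range(n_samples):
        dy = int(rng.integers(-max_dy, max_dy + 1))
        dx = int(rng.integers(-max_dx, max_dx + 1))
        crops.append(padded[max_dy + dy : max_dy + dy + h,
                            max_dx + dx : max_dx + dx + w])
    return crops

augmented = sample_translations(np.zeros((160, 80, 3), dtype=np.float32))
\end{verbatim}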
\textbf{Hard Negative Mining (hnm)}. The negative pairs far outnumber the positive pairs, which can lead to data imbalance. Moreover, among these negative pairs there are still cases that are hard to distinguish. To address these difficulties, we first subsample the negatives to obtain three times as many negatives as positives and train our network. We then use the trained model to classify all negative pairs and retain the top-ranked ones on which the model performs worst for retraining the network.
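The two-stage procedure can be summarized by the sketch below; \texttt{model.score} is a hypothetical callable returning the same-person probability of a pair, and the training calls are elided.
\begin{verbatim}
# Sketch of hard negative mining. `model.score(a, b)` is a hypothetical
# same-person probability; training steps are only indicated by comments.
import numpy as np

def mine_hard_negatives(model, positives, negatives, rng=None):
    rng = rng or np.random.default_rng()
    # Stage 1: random 3:1 negative:positive subsample for the first model.
    idx = rng.choice(len(negatives), size=3 * len(positives), replace=False)
    stage1 = [negatives[i] for i in idx]
    # ... train `model` on positives + stage1 here ...

    # Stage 2: score all negatives; keep the highest-scoring (hardest) ones,
    # i.e. non-matching pairs the model most confidently calls a match.
    scores = np.array([model.score(a, b) for a, b in negatives])
    hardest = np.argsort(-scores)[: 3 * len(positives)]
    return [negatives[i] for i in hardest]   # retained for retraining
\end{verbatim}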
\begin{table}
\begin{center}
\caption{Comparison of state-of-the-art results on CUHK03. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:CUHK03}
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{labelled CUHK03} &
\multicolumn{3}{c}{detected CUHK03} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\
\midrule
KISSME & 14.17 & 37.46 & 52.20 & 11.70 & 33.45 & 45.69 \\
LMNN & 7.29 & 19.64 & 30.74 & 6.25 & 17.87 & 26.60 \\
LOMO+LSTM & - & - & - & 57.30 & 80.10 & 88.30 \\
LOMO+XQDA & 52.20 & 82.23 & 92.14 & 46.25 & 78.90 & 88.55 \\
\midrule
FPNN & 20.65 & 50.94 & 67.01 & 19.89 & 49.41 & - \\
ImprovedDL & 54.74 & 86.50 & 93.88 & 44.96 & 76.01 & 81.85 \\
PIE(R)+Kissme & - & - & - & 67.10 & 92.20 & 96.60 \\
SICIR & - & - & - & 52.17 & - & - \\
DCSL(no hnm) & 78.60 & 97.76 & 99.30 & - & - & - \\
DCSL(hnm) & 80.20 & 97.73 & 99.17 & - & - & - \\
\midrule
PPMN(no hnm) & 83.20 & 97.50 & 99.25 & 77.60 & \textbf{96.10} & \textbf{98.60} \\
PPMN(hnm) & \textbf{85.50} & \textbf{98.20} & \textbf{99.50} & \textbf{80.63} & 95.62 & 98.07 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Experimental Results}
In this section, we compare PPMN with several recent methods, including hand-crafted-feature-based methods: KISSME~\cite{Alpher14}, LMNN~\cite{Alpher13}, LOMO+LSTM~\cite{Alpher19}, and LOMO+XQDA~\cite{Alpher32}; and deep-learning-feature-based methods: FPNN~\cite{Alpher27}, ImprovedDL~\cite{Alpher16}, Pose Invariant Embedding (PIE(R)+Kissme)~\cite{Alpher34}, Single-Image and Cross-Images Representation learning (SICIR)~\cite{Alpher22}, and DCSL~\cite{Alpher06}. We report the evaluation results in Table~\ref{table:CUHK03}.
\begin{table}
\begin{center}
\caption{Comparison of state-of-the-art results on CUHK01 dataset. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:CUHK01}
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{CUHK01(100 test IDs)} &
\multicolumn{3}{c}{CUHK01(486 test IDs)} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\
\midrule
KISSME & 29.40 & 60.18 & 74.44 & - & - & - \\
LMNN & 21.17 & 48.51 & 62.98 & 13.45 & 31.33 & 42.25\\
\midrule
FPNN & 27.87 & 59.64 & 73.53 & - & - & - \\
ImprovedDL & 65.00 & 89.00 & 94.00 & 47.53 & 71.60 & 80.25 \\
PIE(R)+Kissme & - & - & - & - & - & - \\
SICIR & 71.80 & - & - & - & - & - \\
DCSL(no hnm) & 88.00 & 96.90 & 98.10 & - & - & - \\
DCSL(hnm) & 89.60 & 97.80 & 98.90 & 76.54 & \textbf{94.24} & 97.49 \\
\midrule
PPMN(no hnm) & 92.10 & \textbf{99.50} & \textbf{99.95} & - & - & - \\
PPMN(hnm) & \textbf{93.10} & 98.80 & 99.80 & \textbf{77.16} & 92.80 & \textbf{97.53} \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
We evaluate PPMN on both the labelled and detected CUHK03 datasets. From Table~\ref{table:CUHK03}, our method achieves an improvement of 5.30\% (85.50\% vs. 80.20\%) on the labelled dataset and an improvement of 23.33\% (80.63\% vs. 57.30\%) on the detected dataset. Table~\ref{table:CUHK01} also reports the top recognition rates on the CUHK01 dataset with 100 test IDs and 486 test IDs. PPMN achieves the best rank-1 and rank-5 recognition rates of 93.10\% and 99.50\% (vs. 89.60\% and 96.90\%, respectively, by the next best method) with 100 test IDs, which means that in most cases the correct person appears within the first five returned results given 100 candidate images. For the setting with 486 test IDs, we finetune the network on the half-CUHK01 training set from the model pre-trained on CUHK03 and achieve an improvement of 0.62\% (77.16\% vs. 76.54\%) over DCSL on the rank-1 recognition rate under the same training protocol. The experimental results also demonstrate the effect of hard negative mining, which provides an absolute gain of over 1.00\% compared with the same model without hard negative mining. Following the setup of~\cite{Alpher16}, we pre-train the network on the CUHK03 and CUHK01 datasets and finetune on the training set of VIPeR. As shown in Table~\ref{table:VIPeR}, PPMN achieves the best rank-1, rank-5, and rank-10 recognition rates, with an improvement of 1.20\% (45.82\% vs. 44.62\%) at rank-1.
\begin{table}
\begin{center}
\caption{Comparison of state-of-the-art results on VIPeR dataset. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{table:VIPeR}
\begin{tabular}{c|ccc}
\toprule
\multirow{2}*{Methods} &
\multicolumn{3}{c}{VIPeR}\\
\cline{2-4}
& r=1 & r=5 & r=10 \\
\midrule
KISSME &19.60 & 48.00 & 62.20\\
LMNN & - & - & - \\
LOMO+LSTM & 42.40 & 68.70 & 79.40 \\
LOMO+XQDA & 40.00 & 68.13 & 80.51\\
\midrule
FPNN & - & - & -\\
ImprovedDL & 34.81 & 63.61 & 75.63\\
PIE(R)+Kissme & 27.44 & 43.01 & 50.82 \\
SICIR & 35.76 & - & -\\
DCSL(hnm) & 44.62 & 73.42 & 82.59\\
\midrule
PPMN(hnm) & \textbf{45.82} & \textbf{74.68} & \textbf{86.08} \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we have developed a novel deep convolutional architecture for person re-identification. We employ a deep convNet, GoogLeNet, to map a person's semantic components to the required feature space. Based on the pyramid matching strategy, we design a module to address the misalignment and variation issues posed by viewpoint changes. We demonstrate the effectiveness and promise of our method through extensive evaluations on multiple datasets. The results indicate that our method achieves a remarkable improvement over the state of the art.
\section{Introduction}
Pioneered by the seminal works of Basko \cite{basko_metalinsulator_2006} and Gornyi \cite{gornyi_interacting_2005},
the many-body localization (MBL) transition is defined as a dynamical phase transition occurring at finite energy density in a disordered, isolated, interacting many-body system. Conceptually, MBL occurs when Anderson localization \cite{anderson_absence_1958,Fleishman_Interactions_1980} survives inter-particle interactions. In MBL systems, under unitary time evolution, local observables fail to thermalize to their ergodic values.
Typical MBL models are disordered spin chains with short-ranged interactions in one dimension \cite{oganesyan_localization_2007,znidaric_many-body_2008,Berkelbach_Conductivity_2010,pal_many-body_2010,kjall_many-body_2014,luca_ergodicity_2013,pekker_encoding_2014,Bar_Lev_Dynamics_2014,Bar_Lev_Absence_2015,Agarwal_Anomalous_2015,luitz_many-body_2015,yu_finding_2015,Yu2016_bimodal,serbyn_criterion_2015,bera_many-body_2015,luitz_extended_2016,singh_signatures_2016,pollmann_efficient_2015,khemani_obtaining_2015,lim_nature_2015,luitz_long_2016,luitz_anomalous_2016,bera_local_2016,serbyn_universal_2016,Khemani_Critical_2017,Luitz_Information_2017,Imbrie_Diagonalization_2016}.
More recently, systems with itinerant degrees of freedom have been explored including disordered Hubbard or t-J models \cite{Prelovsek_2016PRB_absence,Jakub_2018arXiv,Bar_Lev_delocalized,Lemut_Complete_2017}. This focus has been partially motivated by cold-atom experiments \cite{Schreiber2015, Kondov2015, Bordia_2016PRL_Coupling,Luschen_Evidence_2016}.
Phenomenologically, the full MBL (FMBL) phase is characterized by a complete set of local integrals of motion (LIOM) \cite{Joel_unbound_2012,serbyn_local_2013,imbrie_many-body_2014,huse_phenomenology_2014,chandran_constructing_2015,Chandran2014,ros_integrals_2015,
Pekker_Fixed_2017,inglis_accessing_2016,pekker_encoding_2014,Wahl2016,Imbrie_Review_2017,
monthus_many-body_2016}, or by the existence of a small-bond-dimension unitary tensor network (UTN) which diagonalizes the MBL Hamiltonian \cite{pekker_encoding_2014,Vidal_spectral_2015}. A key application of the LIOMs or the UTN is to explain the entanglement behavior of the MBL system: they imply that the entanglement of eigenstates is area law and that entanglement grows logarithmically under time evolution after a quench \cite{Issac_lightcone}.
In this work, we report on a microscopic Hamiltonian which goes beyond the FMBL or ergodic phases. We show that this microscopic Hamiltonian has both constant (area law) as well as logarithmically entangled (log law) eigenstates. These eigenstates are interspersed throughout the spectrum (i.e. they don't make up a mobility edge). We then show how to probe separately the area-law and log-law eigenstates through time-evolution from simple product states giving potential access to these different types of states through cold-atom experiments.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{shematic_phase_diagram.pdf}
\caption{Schematic phase diagram of the spin-disordered Hubbard chain at large disorder in the 2D plane labeled by the quantum numbers $j$ and $m$.
The eta-pairing raising operator $\eta_+$ acts as $\eta_+ |E, S^z_\textrm{total},j,m\rangle = |E+U, S^z_\textrm{total},j,m+1\rangle$ moving eigenstates horizontally in this figure. The particle-hole transformation maps $|E, S^z_\textrm{total}, j, m \rangle$ to $|E-2mU, S^z_\textrm{total}, j, -m \rangle$, equivalent to a mirror symmetry about $m=0$. In the $S^z_\textrm{total} = 0$ sector, the top left corner of the triangle is the vacuum state. \textbf{Region I}
(\textcolor{blue}{blue}) contains the reference states, which are annihilated by $\eta_-$ (left edge) or $\eta_+$ (right edge), and comprises a mixture of area-law and log-law states.
\textbf{Region II} (\textcolor{pink}{pink}) is the region where the entanglement entropy of every eigenstate acquires a logarithmic correction due to repeated application of $\eta_+$ (from the left edge) or $\eta_-$ (from the right edge).
}
\label{fig:EE_Hmap}
\end{figure}
\subsection{Overview of Results}
We consider a one-dimensional Hubbard model with spin disorder
\begin{equation}
H = -t \sum_{i\sigma} (c^{\dagger}_{i\sigma} c_{i+1\sigma} + h.c.) + \sum_i U n_{i\uparrow} n_{i\downarrow} + \sum_i h_i S^{z}_{i},\label{eq:HB_spin}
\end{equation}
where $S^{z}_{i}=(n_{i\uparrow} - n_{i\downarrow})/2$, and the disordered magnetic field $h_i \in [-W,W]$ is sampled uniformly. We focus on the case of $U=1, t=1$.
To scaffold this discussion, we first note that Eq.~\eqref{eq:HB_spin} has a pseudo-spin SU(2) symmetry \cite{Yang_1989PRL_eta,Zhang_1990PRL_eta} (see Sec.~\ref{sec:Model}), which allows us to label our eigenstates by four quantum numbers $|E,S^z_\textrm{total},j,m\rangle$ including the energy $E$, total $S_z$ and two quantum numbers $j$ and $m$ associated with the pseudo-spin symmetry. Because this symmetry is a continuous non-abelian symmetry, we don't expect to have a fully-MBL phase \cite{Potter_Symmetry_2016,Protopopov_Effect_2017}. Unless otherwise noted, we work with $S_\textrm{total}^z=0$ (although our results generically apply to all $S_\textrm{total}^z$) and separately consider the entanglement of eigenstates in different quantum number sectors. Fig.~\ref{fig:EE_Hmap} is a diagram of available quantum numbers and this figure will set the framework in which we discuss our results.
From the pseudo-spin algebra, one can analytically build towers of excited states of increasing $m$ using the eta-pairing raising and lower operators $\eta_+$ and $\eta_-$ \cite{Yang_1989PRL_eta,Zhang_1990PRL_eta}; every application of $\eta_+$ moves states horizontally in Fig.~\ref{fig:EE_Hmap}. The blue line (region I) are the eigenstates at the bottom of the towers which we call reference states. We then consider eigenstates in region II which are generated from a reference state by the application of $\eta_+^N$ for a given constant $N$. A common feature of this group of excited states is the large number of double occupancies. We will show that all such eigenstates have, at least, an additive logarithmic correction to their von Neumann entanglement entropy with respect to the reference state; this violates the area-law entanglement for a typical MBL phase. We accomplish this by identifying a particular sector of the reduced density matrix for these states that leads to the logarithmic correction (see Sec.~\ref{sec:EE_corrections}). This extends results of Ref.~\onlinecite{Vafek_2016arXiv_eta} which recently showed such corrections in the case of the vacuum state (top left of Fig.~\ref{fig:EE_Hmap}) with applications to a non-disordered Hubbard model. Additionally, we show that any eigenstate which is made of only singlons has an exact logarithmic correction.
We then numerically consider a number of disorder realizations of the Hamiltonian in Eq.~\eqref{eq:HB_spin} for $L=8$ at large $W$ (see Sec.~\ref{sec:Entanglement_of_reference_states}), using the slope of the cut-averaged entanglement entropy (SCAEE) introduced in Ref.~\onlinecite{Yu2016_bimodal}. We find that the reference eigenstates contain a mixture of area-law and log-law states and that the full spectrum of eigenstates in region II does indeed exhibit a logarithmic increase in entanglement (see Fig.~\ref{fig:dEE_lnv_1-v}).
Having characterized the eigenstates, we then discuss how to separately probe the localization physics of the area-law and the log-law states dynamically using time evolution (see Sec.~\ref{sec:Time_Evolve}), which would allow cold-atom experiments to directly access this physics. We identify two extreme cases of product states: all single occupancies at quarter filling, which overlaps primarily with area-law states, and all double occupancies at half filling, which overlaps with log-law states. We find that in the former case the entanglement entropy grows logarithmically and the charge imbalance does not relax, as is typical in an MBL system, while in the latter case the entanglement entropy grows in a power-law (but not linear) fashion and the charge imbalance tends to fully relax, indicating behavior that is delocalized but not ergodic.
\section{Introduction to Pseudo-spin algebra} \label{sec:Model}
\subsection{Pseudo-spin $SU(2)$ symmetry}
It is easy to see that the spin disorder breaks the spin-rotation symmetry of $H$.
To show that the pseudo-spin $SU(2)$ symmetry remains intact,
one can introduce the eta-pairing operators \cite{Yang_1989PRL_eta,Zhang_1990PRL_eta},
written here specifically for the 1D chain:
\begin{equation}
\eta_- = \sum_{i} (-1)^i c_{i\uparrow} c_{i\downarrow}, \ \eta_+ = \eta^{\dagger}_-, \ \eta_0 = \frac{1}{2}(\hat{N}-L),
\end{equation}
where $L$ is the number of sites (which must be even) and $\hat{N}$ is the total electron number operator.
The eta-pairing operators generate an $SU(2)$ algebra because
\begin{equation}\label{eq:eta_algebra}
[\eta_0, \eta_{\pm}] = \pm \eta_{\pm}, \quad [\eta_+, \eta_-] = 2\eta_0.
\end{equation}
To prove that pseudo-spin symmetry is preserved, one can straightforwardly check that
\begin{equation}\label{eq:H_eta_commutation}
[H, \eta_{\pm}] = \pm U \eta_{\pm}, \ [H, \eta_0] = 0, \ [H, \vec{\eta}^2]=0,
\end{equation}
where the total pseudo-spin operator $\vec{\eta}^2$ is
\begin{equation}
\vec{\eta}^2 = \frac{1}{2}(\eta_+ \eta_- + \eta_- \eta_+) + \eta^2_0.
\end{equation}
Therefore, $\{H, S^z_\textrm{total}, \vec{\eta}^2, \eta_0 \}$ is a complete set of commuting observables.
For simplicity, we will denote the eigenstate of $\{ \vec{\eta}^2, \eta_0 \}$ as $|j, m\rangle$, with
\begin{equation}
\vec{\eta}^2 |j, m\rangle = j(j+1) |j, m\rangle, \quad \eta_0 |j, m\rangle = m |j, m\rangle,
\end{equation}
where $|m| \le j$.
Because of Eq.~\eqref{eq:eta_algebra} and \eqref{eq:H_eta_commutation},
$\eta_{\pm}$ are a pair of ladder operators for $\eta_0$ and the Hamiltonian $H$.
Consider a simultaneous eigenstate $|E, S^z_\textrm{total}, j, m \rangle$. Applying $\eta_+$ to this eigenstate will lead to
$|E+U, S^z_\textrm{total}, j, m+1 \rangle $, and vice versa for $\eta_-$.
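These relations are easy to verify numerically. The sketch below builds the fermionic operators for a short open chain by a Jordan--Wigner construction and checks Eqs.~\eqref{eq:eta_algebra} and \eqref{eq:H_eta_commutation} directly; the chain length $L=4$ and the disorder draw are arbitrary choices for illustration.
\begin{verbatim}
# Numerical check of the pseudo-spin algebra for a small open chain
# (t = U = 1, random fields h_i); L = 4 is chosen only for illustration.
import numpy as np

L = 4                                   # sites; 2L fermionic modes
n_modes, dim = 2 * L, 2 ** (2 * L)
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation

def mode(i, spin):                      # up modes first, then down modes
    return i + (0 if spin == 'up' else L)

def annihilate(m):                      # Jordan-Wigner string of Z's
    ops = [Z] * m + [a] + [I2] * (n_modes - m - 1)
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

c = {(i, s): annihilate(mode(i, s)) for i in range(L) for s in ('up', 'down')}
cd = {k: v.conj().T for k, v in c.items()}
n = {k: cd[k] @ c[k] for k in c}

eta_m = sum((-1) ** i * c[(i, 'up')] @ c[(i, 'down')] for i in range(L))
eta_p = eta_m.conj().T
eta_0 = 0.5 * (sum(n.values()) - L * np.eye(dim))

h = np.random.uniform(-1.0, 1.0, L)     # any disorder strength works here
H = sum(n[(i, 'up')] @ n[(i, 'down')] for i in range(L))              # U = 1
H = H + sum(0.5 * h[i] * (n[(i, 'up')] - n[(i, 'down')]) for i in range(L))
for i in range(L - 1):                                                # t = 1
    for s in ('up', 'down'):
        H = H - (cd[(i, s)] @ c[(i + 1, s)] + cd[(i + 1, s)] @ c[(i, s)])

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(eta_0, eta_p), eta_p))       # [eta_0, eta_+] = +eta_+
print(np.allclose(comm(eta_p, eta_m), 2 * eta_0))   # [eta_+, eta_-] = 2 eta_0
print(np.allclose(comm(H, eta_p), eta_p))           # [H, eta_+] = U eta_+
\end{verbatim}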
\subsection{Particle-hole transformation}
The particle-hole (PH) transformation
\begin{equation}\label{eq:PHT}
c_{i\uparrow} \to (-1)^i c^{\dagger}_{i\downarrow}, \quad c_{i\downarrow} \to (-1)^i c^{\dagger}_{i\uparrow}
\end{equation}
has some consequences for the eigenstates.
Firstly, $S^z_i$ is invariant under the PH transformation. Secondly, we have
\begin{equation}
\eta_\pm \to \eta_{\mp}, \ \eta_0 \to -\eta_0, \ \vec{\eta}^2 \to \vec{\eta}^2.
\end{equation}
Thirdly, the Hamiltonian $H$ transforms as
\begin{equation}
H \to H - 2 U\eta_0.
\end{equation}
Therefore, from the wave function's perspective, an eigenstate $|E, S^z_\textrm{total}, j, m \rangle$ under the PH transformation becomes $|E-2mU, S^z_\textrm{total}, j, -m \rangle$.
Both these eigenstates will have the same entanglement; therefore, both the right and left edge of Fig.~\ref{fig:EE_Hmap} can be considered reference states.
\section{Entanglement entropy of eta-pairing states} \label{sec:EE_corrections}
The pseudo-spin $SU(2)$ algebra has direct influence on the eigenstates' energy and entanglement entropy.
Given a reference eigenstate $|\psi_{\text{ref}}\rangle$ of $H$,
one can build a tower of highly excited states with $\eta_+$
\begin{equation}
|\psi^N\rangle = \mathcal{A}_N \eta_+^N |\psi_{\text{ref}}\rangle, \quad N \in \mathbb{N}^+,
\end{equation}
where $\mathcal{A}_N$ is the normalization factor.
With increasing $N$, $|\psi^N\rangle$ has increasing energy and doublon number, until it is eventually annihilated.
We will call these excited states the eta-pairing states, and we only consider reference states $|\psi_\text{ref}\rangle$ that are annihilated by $\eta_-$ and have a relatively small number of electrons (see Appendix \ref{app:reference_state} for details).
In this section, we prove two things: (A) eta-pairing states (with large enough $N$) have, at least, a logarithmically increasing entanglement with respect to their reference state, and (B) eta-pairing states (with large enough $N$) whose reference state consists of only singlons have exactly a logarithmically increasing entanglement.
To accomplish this, we decompose $|\psi_\text{ref}\rangle = \sum_t |\psi_t\rangle$ into a linear superposition of terms labeled by a property $t$ that is preserved under the application of $\eta_+^N$. The reduced density matrix is then block diagonal, with blocks labeled by $t$, i.e. $\rho_t = \mathrm{tr}_B |\psi_t\rangle\langle \psi_t|$, and the entanglement entropy is a sum of the contributions of these individual blocks. To determine the change of entanglement, we therefore need only consider how the entanglement of each term $|\psi_t\rangle$ changes.
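To make this additivity explicit (a standard identity rather than a model-specific result), write $\rho_A = \bigoplus_t \rho_t$ with block weights $p_t = \mathrm{tr}\,\rho_t$; then
\begin{equation}
S(\rho_A) = -\mathrm{tr}\,\rho_A \ln \rho_A = -\sum_t \mathrm{tr}\,\rho_t \ln \rho_t = \sum_t p_t\, S\!\left(\rho_t/p_t\right) - \sum_t p_t \ln p_t ,
\end{equation}
so each block may be analyzed independently, exactly as done below.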
To prove (A), the property $t$ is the spin polarization in the subsystem $A$. We consider only the term where $S_{A,z}$ is maximally polarized (i.e. $S_{A,z}=K/2$ for a system of $K$ electrons) and show that this term has a logarithmically increasing entanglement. To prove (B), the property $t$ is the singlon number in subsystem $A$ and we can show that every term has a logarithmically increasing entanglement. Proving the logarithmic increase in entanglement uses a similar approach to Ref. \onlinecite{Vafek_2016arXiv_eta}. In the subsections below we detail these claims.
\subsection{Maximally Polarized Sector}\label{subsec:max_polarization}
In this subsection, we show that, for any eta-pairing eigenstate in region II of Fig.~\ref{fig:EE_Hmap}, the entanglement entropy grows at least logarithmically, with the contribution coming from the maximally polarized sector of the reduced density matrix.
Consider an eta-pairing state built on a many-body reference state with $K$ electrons. Without loss of generality, let $S_z=0$. Decompose $|\psi_{ref}\rangle$ into terms of fixed $S_{A,z}$. Notice that the operation of $\eta_+$ only adds doublons to a basis vector and therefore, can't change the value of $S_{A,z}$ except by destroying the state. Since $S_z$ is fixed, $S_{B,z}$ will not change either when $\eta_+$ is applied. When we trace out $B$ to calculate the reduced density matrix of $A$, the terms $|\psi_{t'}\rangle\langle \psi_t|$ where $t\neq t'$ will vanish because $|\psi_{t'}\rangle$ and $| \psi_t\rangle $ have different values $S_{B,z}$. As a result, the reduced density matrix will be block diagonal according to $S_{A,z}$.
Take a reference state with $K$ electrons for the disordered Hubbard model.
\begin{equation}\label{many_wf}
|\psi_{\text{ref}}\rangle = \sum_{(i_1,\sigma_1), \cdots, (i_K,\sigma_K)} \alpha_{(i_1,\sigma_1), \cdots, (i_K,\sigma_K)} c^{\dagger}_{i_1,\sigma_1} \cdots c^{\dagger}_{i_K,\sigma_K} |0\rangle,
\end{equation}
which satisfies $K \ll L/2$ and $\eta_- |\psi_{\text{ref}}\rangle = 0$.
We consider the block of the reduced density matrix with maximum $S_{A,z}$, for which there are $K/2$ spin-up electrons in $A$ and $K/2$ spin-down electrons in $B$.
\begin{align}
|\psi_{K/2} \rangle &= \sum_{(i_1,\uparrow), \cdots, (i_K,\downarrow)} \alpha_{(i_1,\uparrow), \cdots, (i_K,\downarrow)} c^{\dagger}_{i_1,\uparrow} \cdots c^{\dagger}_{i_K,\downarrow} |0\rangle \\
& = \sum_{i \in I,j \in J} \alpha_{i,j} \{c^{\dagger}_{i \uparrow}\} \{c^{\dagger}_{j \downarrow}\} |0\rangle.
\end{align}
$I$ is the set of site sequences with only spin-up electrons. $\{c^{\dagger}_{i \uparrow}\}$ is the product of $c^{\dagger}_{\uparrow}$ from a particular site sequence $i$. Similar notation is used for $J$ and $\{c^{\dagger}_{j \downarrow}\}$ for the case of spin-down electrons. We then perform a Schmidt decomposition on $|\psi_{K/2} \rangle$.
\begin{align}
|\psi_{K/2} \rangle &= \sum_{i \in I,j \in J} \alpha_{i,j} \{c^{\dagger}_{i \uparrow}\} \{c^{\dagger}_{j \downarrow}\} |0\rangle\\
&= \sum_{i \in I,j \in J} \sum_{k} u_{i k}d_{kk}v_{kj} \{c^{\dagger}_{i \uparrow}\} \{c^{\dagger}_{j \downarrow}\} |0\rangle\\
&= \sum_{k} d_{kk} (\sum_{i \in I} u_{ik} \{c^{\dagger}_{i \uparrow}\} ) (\sum_{j \in J} v_{kj} \{c^{\dagger}_{j \downarrow}\}) |0\rangle \\
&= \sum_{k} \alpha_{k} |k_{\uparrow,A}\rangle |k_{\downarrow,B}\rangle
\end{align}
Notice that we construct Schmidt vectors with $K/2$ spin-up singlons in $A$ and $K/2$ spin-down singlons in $B$. By using the same contour integral technique as Ref. \onlinecite{Vafek_2016arXiv_eta}, the reduced density matrix has the following form.
\begin{equation}
\begin{split}
\rho^{\text{single}}_{A} &= tr_B [\mathcal{A}_N \eta_+^N |\psi_{K/2} \rangle \langle \psi_{K/2}| \mathcal{A}_N \eta_-^N]\\
&=(\mathcal{A}_N N!)^2 \oint_o \oint_o \frac{dz_1dz^*_2}{(2 \pi)^2}
\sum_{k} \alpha^2_{k} \langle k_{\downarrow,B}| \\
& \frac{e^{z^*_2\eta_{-,B}}}{(z^*_2)^{N+1}} \frac{e^{z_1\eta_{+,B}}}{z_1^{N+1}} |k_{\downarrow,B}\rangle e^{z_1\eta_{+,A}} |k_{\uparrow,A}\rangle \langle k_{\uparrow,A}| e^{z^*_2\eta_{-,A}}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\rho^{\text{single}}_{A} &= (\mathcal{A}_N N!)^2 \oint_o \oint_o \frac{dz_1dz^*_2}{(2 \pi)^2}
\sum_{k} \alpha^2_{k} \frac{(1+z_1 z^*_2)^{L_B-K/2}}{(z_1 z^*_2)^{N+1}}\\
& e^{z_1\eta_{+,A}} |k_{\uparrow,A}\rangle \langle k_{\uparrow,A}| e^{z^*_2\eta_{-,A}}
\end{split}
\end{equation}
where
\begin{equation}
\eta_{-,A/B} = \sum_{i \in A/B} (-1)^i c_{i\uparrow} c_{i\downarrow}, \quad \eta_{+,A/B} = \eta^{\dagger}_{-,A/B}.
\end{equation}
After carrying out the contour integrals, we get
\begin{equation}
\rho^{\text{single}}_{A} = \sum_{k} \alpha^2_{k} \sum_{n} \lambda_{n} |N-n,k_{\uparrow,A}\rangle \langle N-n,k_{\uparrow,A}|
\end{equation}
\begin{equation}
\lambda_{n} = \frac{C_{L_A-K/2}^{N-n} C_{L_B-K/2}^{n} }{C_{L-K}^N},
\end{equation}
\begin{equation}
|N-n,k_{\uparrow,A}\rangle = \mathcal{A}_{N-n}^{L_A} \eta^{N-n}_{+,A} |k_{\uparrow,A} \rangle.
\end{equation}
where $C^k_n = n!/[k!(n-k)!]$ is the binomial coefficient and $\mathcal{A}_{N-n}^{L_A}$ is the normalization constant of Eq.~\eqref{eq:AN_normalization} with $L$ replaced by the subsystem size $L_A$. It is crucial to choose the maximally polarized sector so that $|k_{\uparrow,A} \rangle$ is a reference state of $\eta_{+,A}$.
The entanglement entropy can be calculated as follows
\begin{align}
S &= - \sum_{k,n} \alpha^2_{k} \lambda_{n} \ln (\alpha^2_{k} \lambda_{n})\\
&= - \sum_{k,n} \alpha^2_{k} \lambda_{n} \ln (\alpha^2_{k}) -\sum_{k,n} \alpha^2_{k} \lambda_{n} \ln (\lambda_{n})\\
&= - \sum_{k} \alpha^2_{k} \ln (\alpha^2_{k}) - \beta_{A} \sum_{n} \lambda_{n} \ln (\lambda_{n})\\
&= S_{ref} - \beta_{A} \sum_{n} \lambda_{n} \ln (\lambda_{n}) \label{eq:S_max_polarization}
\end{align}
where $S_{ref}=- \sum_{k} \alpha^2_{k} \ln (\alpha^2_{k})$,
$\beta_{A}=\sum \alpha_{k}^2 = \sum_{k} d^2_{kk}$. Since the sum of the square of the singular values is equal to the Frobenius norm of the matrix, it follows that $\beta_{A}=\sum_{i \in I,j \in J} \alpha^2_{i,j}$. Eq.~\eqref{eq:S_max_polarization} is checked numerically with system size $L=8$ and reference electron number $K=2$ in Fig. \ref{fig:S_max_Sz_K2}.
Naturally, we are interested in the limit of highly excited states, large system size and large heat bath size. Therefore we take the limit of $L_B \gg L_A$, $N \gg L_A - K/2$, and $L_A \gg K/2$. In this case, one can simplify $S-S_\text{ref}$ using the Stirling approximation,
replace the summation by an integral, and finally apply a saddle point approximation, which leads to
\begin{equation}
S-S_\text{ref} \approx \frac{\beta_{A}}{2} (1 + \ln[2 \pi \nu (1-\nu) (L_A-K/2)]) \label{eq:many_ref}
\end{equation}
where $\nu=N/(L-K)$ is the fraction of available sites occupied by doublons. Notice that the symmetry of $\nu (1-\nu)$ about $\nu=\frac{1}{2}$ is a consequence of particle-hole symmetry (see Fig.~\ref{fig:dEE_lnv_1-v}). The logarithmic contribution $\ln{(L_A-K/2)}$ to the total von Neumann entanglement entropy is now evident.
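The quality of this estimate can be checked directly from the exact $\lambda_n$: the short script below compares $-\sum_n \lambda_n \ln \lambda_n$ with $\frac{1}{2}(1+\ln[2\pi\nu(1-\nu)(L_A-K/2)])$, i.e. Eq.~\eqref{eq:many_ref} with $\beta_A=1$; the specific values of $L$, $L_A$, $K$, and $N$ are arbitrary choices deep inside the assumed limit.
\begin{verbatim}
# Compare the exact block entropy  -sum_n lambda_n ln(lambda_n)  against the
# saddle-point estimate, with beta_A set to 1; parameters are illustrative.
import math

L, L_A, K, N = 2000, 40, 2, 800   # chosen so L_B >> L_A >> K/2 and N >> L_A
M, a = L - K, L_A - K // 2        # available slots in total and in A
nu = N / M

# lambda as a function of m = N - n, the number of doublons in subsystem A
lam = [math.comb(a, m) * math.comb(M - a, N - m) / math.comb(M, N)
       for m in range(a + 1)]
exact = -sum(x * math.log(x) for x in lam if x > 0)
saddle = 0.5 * (1 + math.log(2 * math.pi * nu * (1 - nu) * a))
print(exact, saddle)              # the two agree to within a few percent
\end{verbatim}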
\subsection{Reference states with only singlons}\label{subsec:singlon}
In this subsection, we show that, for any eta-pairing eigenstate in region II of Fig. \ref{fig:EE_Hmap} whose reference state contains only singlons, the entanglement entropy grows exactly logarithmically in the thermodynamic limit.
Given a reference $|\psi_{ref}\rangle$ with only singlons, we decompose it into terms of fixed singlon number $i$ in subsystem $A$. Notice that the operation of $\eta_+$ only adds doublons to a basis vector and therefore, can't change the value of $i$ except by destroying the state. Since the total singlon number is fixed in the reference state, the singlon number in subsystem $B$ will not change either when $\eta_+$ is applied. When we trace out $B$ to calculate the reduced density matrix of $A$, the terms $|\psi_{t'}\rangle\langle \psi_t|$ where $t\neq t'$ will vanish because $|\psi_{t'}\rangle$ and $| \psi_t\rangle $ have different values in terms of singlon number in subsystem $B$. As a result, the reduced density matrix will be block diagonal according to the singlon number $i$ in subsystem $A$.
Take a many particle reference state in the form of Eq.~\eqref{many_wf} with all $K$ electrons to be singlons. We consider the component $|\psi_i\rangle$ with $i$ singlons in subsystem $A$. Following the same calculation as in the previous subsection, we perform a Schmidt decomposition and have
\begin{equation}
|\psi_{i} \rangle = \sum_{k} \alpha_{i,k} |k_{i}\rangle |k_{K-i}\rangle
\end{equation}
where we construct Schmidt vectors of $i$ singlons in $A$ and $K-i$ singlons in $B$.
Using the same contour integral technique, one can show that the reduced density matrix from this reference state has the following form.
\begin{equation}
\rho^{\text{single}}_{i,A} = \sum_{k,n} \alpha^2_{i,k} \lambda_{i,n} |\phi^A_{i,n}\rangle \langle \phi^A_{i,n}|
\end{equation}
where $\lambda_{i,n}=\frac{C_{L_B-(K-i)}^{n} C_{L_A-i}^{N-n}}{C_{L-K}^N}$ and $\{|\phi^A_{i,n}\rangle\}$ form an orthonormal basis.
It follows that the entanglement entropy takes the form
\begin{equation}
S^{\text{single}}_{i,A} = S_{i,ref} - \beta_{i,A} \sum_{n} \lambda_{i,n} \ln \lambda_{i,n} \label{eq:singlon_EE2}
\end{equation}
where $S_{i,ref}=-\sum_{k} \alpha^2_{i,k} \ln \alpha^2_{i,k}$ and $\beta_{i,A}=\sum_k \alpha^2_{i,k} = \sum_{(i_1,\sigma_1), \cdots, (i_K,\sigma_K)} \alpha^2_{(i_1,\sigma_1), \cdots, (i_K,\sigma_K)}$ with the constraint that there are $i$ singlons in $A$ and $K-i$ singlons in $B$.
Summing up the entanglement entropy contribution from different singlon number $i$ sectors, the total entanglement entropy is
\begin{equation}
S_{total}=\sum_i S_{i,ref} - \sum_{i,n} \beta_{i,A} \lambda_{i,n} \ln \lambda_{i,n} \label{eq:singlon_EE}
\end{equation}
Eq.~\eqref{eq:singlon_EE} is consistent with the numerical results for system size $L=8$ and reference electron number $K=2$ shown in Fig.~\ref{fig:S_singlon_K2}.
In the thermodynamic limit of $L_B \gg L_A$, $N \gg L_A - K$, $L_A \gg K$, and $N+K<L$, we can apply the Stirling and saddle-point approximations to each $S^{\text{single}}_{i,A}$, which yields an additive $\ln(L_A-i)$ contribution to the entanglement entropy. This holds for every sector $i\leq K$ as long as the reference state contains only singlon occupancies. Therefore, the eta-pairing states built on reference states with only singlon occupancies have logarithmically more entanglement entropy than their reference states.
\section{Numerical Evaluation of Entanglement}
\label{sec:Entanglement_of_reference_states}
In the previous section, we proved that there is, at least, an additive logarithmic increase in entanglement over the reference state in various contexts. While this rules out eta-pairing states being area law, these proofs are not sufficient to determine the actual entanglement entropy, because we know neither the entanglement of the reference state nor whether the increase in entanglement is greater than logarithmic. In this section we address these questions numerically.
The simplest reference states we can consider are ground states. We consider the ground state of the Hamiltonian in Eq.~\eqref{eq:HB_spin} for $W=4$ in the sector with total $S_z=0$ and $K=2$ for $L=400$ sites. This state is close to the top of region I in Fig.~\ref{fig:EE_Hmap}. We apply $\eta_+$ many times and see a clear logarithmic increase in entanglement (see Fig.~\ref{fig:S_eta_pairing_states_sample}). Strictly speaking, under open boundary conditions the pseudo-spin symmetry is no longer exact, but this does not appear to be a problem at large system sizes. While our proof in section \ref{subsec:max_polarization} does not forbid a faster growth of entanglement, we do not observe one in this case.
We then consider the entanglement entropy of eigenstates in the middle of the spectrum for a system of size $L=8$. For various disorder strengths, we compute the cut-averaged entanglement entropy (CAEE) and the slope of the cut-averaged entanglement entropy (SCAEE)~\cite{Yu2016_bimodal}. Note that the SCAEE equals 1 at all $L_A$ for an infinite-temperature volume-law state and is zero for large enough $L_A$ for an area-law state. See Fig.~\ref{fig:SCAEE} for a histogram of these results. We find the SCAEE consistent with a volume law at small disorder strengths. At larger disorder strengths, however, there is a broad distribution of the SCAEE, with some states exhibiting area-law behavior and some exhibiting sub-volume, non-area-law behavior.
Together with the unusual behavior of the CAEE in Fig.~\ref{fig:CAEE}, this suggests a non-ergodic, non-MBL phase in the model.
To understand this better, we start by considering the probability density of the SCAEE for reference states in various quantum-number sectors at subsystem size $l=2$ for system size $L=8$ (see Fig.~\ref{fig:SCAEE_edge_sectors}). We find a bimodal distribution, with one peak centered at zero and the other at a non-zero value much less than 1, again suggesting a mix of area-law and sub-volume-law states (see Fig.~\ref{fig:SCAEE_edge_sectors}(bottom), Fig.~\ref{fig:CAEEavg_SCAEE_filter_ln}, Fig.~\ref{fig:CAEEall_SCAEE_filter}). We check this by considering the disorder-averaged entanglement for both peaks. The peak centered at zero is clearly area-law, while the other peak is consistent with logarithmically growing entanglement (see Fig.~\ref{fig:SCAEE_edge_sectors}).
We then proceed to consider the difference in entanglement between the reference states and the eta-pairing states. The numerical results (see Fig.~\ref{fig:dEE}) are consistent with all states having a logarithmic increase in entanglement. This is interesting given that this regime lies outside the applicability of the proof and much of the entanglement comes from sectors other than the maximally polarized one. See Fig.~\ref{fig:CAEE06} for the version of this curve without disorder averaging.
Finally, we note that for a reference state of only singlons, Eq.~\eqref{eq:singlon_EE} exactly gives the (logarithmic) increase in entanglement. While our reference states do not typically contain only singlons, we can check the efficacy of this formula as a function of the number of non-singlons in the system. We find (see Fig.~\ref{fig:EE_err_Nsavg}) that the formula is still applicable, with small deviations, when the average number of non-singlons is close to zero.
\begin{figure}[H]
\centering
\includegraphics[scale=0.9]{S_L400_W4.pdf}
\caption{
\textbf{Top:} Entanglement entropy $S(L_A)$ of $\mathcal{A}_N \eta_+^N|\psi_{ref}\rangle$ for various $N$ (starting at $N=0$ for the bottom curve) with $|\psi_{ref}\rangle$ as a single disordered realization of the open-boundary condition two-particle ground state of Eq.~\eqref{eq:HB_spin} with $W=4,t=1$ and $U=1$.
\textbf{Bottom:} Entanglement entropy difference between $\mathcal{A}_N \eta_+^N|\psi_{ref}\rangle$ and $|\psi_{ref}\rangle$ for various $N$.
}\label{fig:S_eta_pairing_states_sample}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{L8_SCAEE_Hist_l2.pdf}
\caption{Probability density of SCAEE at $W=1,5,8,12,20$, for $L=8$ and a subsystem size of $\ell=2$.
}\label{fig:SCAEE}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{L8_P0_W14_RS3_SCAEE_c2_edge_sectors.pdf}
\includegraphics[scale=0.6]{L8_P0_W14_RS3_CAEEavg_SCAEE_filter_ln.pdf}
\caption{
\textbf{Top:} The SCAEE histograms at reference states corresponding to (from left to right) $j=0,1,2,3$ with $L=8$, subsystem size $l=2$, $S_z=0$ and $W=14$.
\textbf{Bottom:} Mean cut-averaged entanglement entropy (CAEE) vs. $\ln L_A$ for different reference states [$j=0$ (\textcolor{blue}{blue curve}), $j=1$ (\textcolor{YellowOrange}{orange curve}), $j=2$ (\textcolor{OliveGreen}{green curve}), $j=3$ (\textcolor{red}{red curve})], for the eigenstates with SCAEE value on the left (right) side of the dashed line, corresponding to the left (right) figure.
}
\label{fig:SCAEE_edge_sectors}
\end{figure}
\section{Time evolution after quantum quench} \label{sec:Time_Evolve}
We have given evidence for a Hamiltonian which has both area-law and log-law eigenstates. Here we show how these different eigenstates can be probed using time-evolution. In the process this will give further evidence for the two types of states as well as supply a physical picture for why we might expect this difference.
To accomplish this, our goal will be to find states that are simple to prepare, such as product states, that have overlap with primarily area-law or log-law eigenstates and then consider the effect of time-evolution on these states. We will consider two product states: a quarter filled singlon state, shown in Fig.~\ref{fig:Initial_states}(top), and a half filled doublon state, shown in Fig.~\ref{fig:Initial_states}(bottom).
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{L8_P0_W14_RS3_dEE_lnLA_all.pdf}
\caption{
The entanglement entropy difference vs. $\ln(L_A)$ for $L=8$, $S_z=0$, and $W=14$ in different quantum number sectors [$j=4$ (\textcolor{black}{black curve}), $j=3$ (\textcolor{YellowOrange}{yellow curve}), $j=2$ (\textcolor{blue}{blue curve}), $j=1$ (\textcolor{red}{red curve})]. The entanglement entropy for each quantum number sector is averaged over all eigenstates in the sector obtained from exact diagonalization.
}
\label{fig:dEE}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{L8_P0_W14_RS3_EE_err_Nsavg_K2.pdf}
\caption{Entanglement entropy difference for $L=8$, $S_z=0$ and $W=14$ between Eq.~\eqref{eq:singlon_EE} and the exact entanglement entropy vs. average non-singlon number in the quantum number sector $j=3,m=0$. The entanglement entropy difference is averaged over all $L_A$. When the non-singlon number is close to zero, Eq.~\eqref{eq:singlon_EE} provides a good description for the total entanglement entropy.}
\label{fig:EE_err_Nsavg}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.8]{L8_P0_W14_RS3_SCAEE_Doublon_Scatter_-2_2_.pdf}
\caption{SCAEE vs. doublon expectation value for $L=8$, $l=2$, $S_z=0$ and $W=14$ in quantum number sector $j=2,m=-2$. For doublon expectation value close to zero, we see area-law entanglement.}
\label{fig:SCAEE_doublon}
\end{figure}
For a product state $|p\rangle$ (in the occupation basis) the number of single occupancies $n_1$ and double occupancies $n_2$ fix the expected value of the quantum numbers $(j,m)$,
\begin{equation}
\langle p| \eta_0 | p\rangle = -\frac{(L-n_1-2n_2)}{2}
\end{equation}
and
\begin{equation}
\langle p| \vec{\eta}^2 | p\rangle = \frac{L-n_1-2n_2}{2}\left(\frac{L-n_1-2n_2}{2} + 1\right) + n_2.
\end{equation} \vspace{0.1 cm}\\
\textbf{Quarter filled singlon state: } To find area-law states we must avoid states with overlap in region II (which cannot be area-law) and instead focus on those with high overlap with the reference states. A product state $|p\rangle$ with only single occupancies is always an
eigenstate of $\{ \vec{\eta}^2, \eta_0\}$ with eigenvalues $(\frac{L-n_1}{2}, -\frac{L-n_1}{2})$ and is therefore a linear superposition of reference states within a single quantum number sector. Moreover, among those states, we find (see Fig.~\ref{fig:SCAEE_doublon}) that states with a low doublon number are area-law states. An area-law state should be `many-body localized', so we generically expect that time evolution starting from such states should not equilibrate. \vspace{0.2 cm} \\
\textbf{Half filled doublon state: } On the other hand, to find log-law states, we can look for product states with high overlap in region II. While the single and double occupancy numbers do not fix the quantum number sector, they localize the state around a given sector. For $n_2 < \frac{L}{2}$, as $n_2$ increases, $\langle \eta_0 \rangle$ grows towards 0 while $\langle \vec{\eta}^2 \rangle$ decreases towards $\frac{L}{2}$; for $n_2 > \frac{L}{2}$, as $n_2$ increases, $\langle \eta_0 \rangle$ approaches $\frac{L}{2}$ while $\langle \vec{\eta}^2 \rangle$ increases towards $\frac{L}{2}(\frac{L}{2} + 1)$. In either case, the half-filled doublon state is composed of eta-pairing states high up on long pseudo-spin ladders, which have logarithmic corrections to their entanglement entropy. For such a log-law state we expect less localization than for an MBL state. \vspace{0.2 cm}
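Evaluating the two expectation values above for the product states used in our quench (with $L=8$, as in the exact-diagonalization runs) gives the following quick check:
\begin{verbatim}
# Evaluate <eta_0> and <eta^2> for the two initial product states (L = 8):
# quarter-filled all-singlon (n1 = 4, n2 = 0) and half-filled all-doublon
# (n1 = 0, n2 = 4).
def eta_expectations(n1, n2, L=8):
    m = -(L - n1 - 2 * n2) / 2
    j2 = (L - n1 - 2 * n2) / 2 * ((L - n1 - 2 * n2) / 2 + 1) + n2
    return m, j2

print(eta_expectations(n1=4, n2=0))  # (-2.0, 6.0): single (j, m) = (2, -2) sector
print(eta_expectations(n1=0, n2=4))  # ( 0.0, 4.0): m = 0, so each component sits
                                     # j steps above its reference state
\end{verbatim}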
Both product states start with zero entanglement entropy and a highly imbalanced charge distribution between even and odd sites. By considering the time evolution of the doublon number in both the quarter filling and half filling settings, we can verify that the doublon number remains largely localized (see Fig.~\ref{fig:ED_doublon}).
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{Initial_states_QF_HF.pdf}
\caption{
\textbf{A:} A schematic diagram for the quarter filled singlon state used in unitary time evolution.
\textbf{B:} A schematic diagram for half filled doublon state used in unitary time evolution.
Both states have a charge imbalance per electron of 1.}
\label{fig:Initial_states}
\end{figure}
With these two initial product states, we investigate the time evolution of the von Neumann entanglement entropy and the charge imbalance
\begin{equation}
I = \frac{\sum_j (-1)^j (n_{j \uparrow}+n_{j \downarrow})}{\sum_j (n_{j \uparrow}+n_{j \downarrow})}.
\end{equation}
We also look at staggered magnetization in Appendix \ref{app:staggered_M}.
The main goal is to see the rate of entanglement entropy growth and whether the charge imbalance relaxes.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{ED_Doublon_vs_t_2panels.pdf}
\caption{Ensemble-averaged doublon number for $W=14$, $L=8$, for the quarter filled singlon and half filled doublon initial states. The same samples are used as in the first column of Fig.~\ref{fig:compare_S}. The initial doublon number is zero for the quarter filled state and four for the half filled state.}
\label{fig:ED_doublon}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.95]{S_Charge_vs_t_2x2_L8_L12_W14_v2.pdf}
\caption{Ensemble averaged von Neumann entropy $S$ for $W=14$ with quarter filled singlon and half filled doublon initial product states. \textbf{First column:} $L=8$, exact diagonalization and periodic boundary conditions. Results are averaged over 400 (top) and 150 (bottom) samples respectively. \textbf{Second column:} $L=12$, TEBD, and open boundary conditions. Results were averaged over 210 samples. For the quarter filling case (\textcolor{blue}{blue curves}), the entropy grows logarithmically with respect to time. For the half filling case (\textcolor{red}{red curves}), the entropy grows as a power law with $t$, with the power law exponent equal to 0.245 for $L=8$, and $0.29$ for $L=12$. \textbf{Third \& Fourth columns:} Ensemble averaged charge imbalance $I$ using the same samples and parameters as the first and the second column respectively.
}\label{fig:compare_S}
\end{figure*}
The real-time evolution simulations are carried out separately for $L=8$, using exact diagonalization (ED), and for $L=12$, using the time-evolving block decimation (TEBD) method based on the open-source ITensor library \cite{ITensor}. We consider disorder strengths $W=14$ and $W=24$. In each simulation, the entanglement entropy, charge imbalance, and staggered magnetization are averaged over disorder realizations.
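For the $L=8$ runs, the quench protocol amounts to the simple sketch below (not the production code behind Fig.~\ref{fig:compare_S}): the state is evolved exactly, and the half-chain entropy and the imbalance $I$ defined above are recorded. The Hamiltonian, the initial product state, and the per-site density operators are assumed inputs (they can be built, e.g., as in the operator check of Sec.~\ref{sec:Model}), and the many-body basis is assumed ordered so that reshaping the state vector implements the spatial cut.
\begin{verbatim}
# Minimal ED quench sketch: exact evolution of a product state, tracking the
# half-chain von Neumann entropy and the even/odd charge imbalance. H, psi0
# and n_ops (total density operator per site) are assumed inputs; the basis
# is assumed ordered so that reshape(dim_A, -1) implements the spatial cut.
import numpy as np
from scipy.linalg import expm

def entanglement_entropy(psi, dim_A):
    s = np.linalg.svd(psi.reshape(dim_A, -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def charge_imbalance(psi, n_ops):
    dens = np.array([np.vdot(psi, op @ psi).real for op in n_ops])
    signs = np.array([(-1) ** j for j in range(len(dens))])
    return float((signs * dens).sum() / dens.sum())

def quench(H, psi0, n_ops, dim_A, times):
    out = []
    for t in times:
        psi_t = expm(-1j * H * t) @ psi0          # exact unitary evolution
        out.append((t, entanglement_entropy(psi_t, dim_A),
                    charge_imbalance(psi_t, n_ops)))
    return out
\end{verbatim}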
We find (see Fig.~\ref{fig:compare_S}) that the quarter filled singlon case exhibits logarithmic growth of the entanglement entropy and a charge imbalance that, after an initial decay, never relaxes, stabilizing around a non-zero value. This is as expected for a many-body localized state. On the other hand, the half filled doublon case exhibits power-law growth of the entanglement as well as a charge imbalance which decays quickly to zero. Although this is suggestive of thermalization, the growth of the entanglement is significantly slower than the linear growth expected in an ergodic phase. We attribute this difference to the logarithmic, as opposed to volume-law, entanglement of the eigenstates.
There is a simple physical picture consistent with these results. Since a double occupancy has $S_z=0$, the spin-up and spin-down electrons can hop together through a second-order process, which leads to full charge delocalization in the half filled setting. Single occupancies, however, cannot hop freely due to the spin disorder, which prevents full charge delocalization in the quarter filling case. Under the spin disorder potential, the two electrons of a double occupancy thus tend to hop together, driving charge relaxation.
From the above analysis, it is clear that the quarter filled singlon product state behaves as many-body localized, while the half filled doublon product state is neither fully ergodic nor MBL. Moreover, via time evolution we see that we can directly probe the area-law and log-law parts of the spectrum, opening up the possibility that this effect can be seen experimentally.
\section{Conclusion}
Within a spin-disordered Hubbard chain at large disorder, we find a number of area-law and log-law eigenstates. Our results are presented in the context of the quantum numbers of the pseudo-spin symmetry of this model (see Fig.~\ref{fig:EE_Hmap}). Using analytic arguments related to pseudo-spin symmetry, we showed that there is, at least, an additive logarithmic entanglement difference between the states in region I and those in region II. We present numerical results which suggest that this difference is indeed logarithmic. Moreover, we show numerical evidence that the states in region II are all log-law, while the states in region I are partially area-law and partially log-law, with the area-law states preferentially having a smaller expected number of doublons. We then consider two product states which have primary overlap with area-law or log-law eigenstates respectively. We find that under time evolution the product state composed primarily of area-law eigenstates behaves like an MBL state, with a localized charge imbalance and logarithmic growth of entanglement. On the other hand, the product state composed primarily of log-law eigenstates has a charge imbalance which relaxes and an entanglement entropy which grows polynomially but sublinearly.
While our focus in this work has been on large disorder, from Fig.~\ref{fig:SCAEE} we can see that at small disorder ($W=1$) this system eventually transitions to an ergodic phase which consists of primarily volume-law eigenstates. Interestingly, at this disorder, there is clear bimodality in the entanglement entropy of eigenstates. Moreover, we might anticipate that there is a transition around $W=5$ where there is a surprisingly broad spread of entanglement entropies (see Fig.~\ref{fig:SCAEE} and Fig.~\ref{fig:CAEE}).
Our work provides a concrete microscopic Hamiltonian that demonstrates the existence of a non-ergodic, non-MBL phase in a one-dimensional system. Such phases will have neither local integrals of motion nor small unitary tensor networks. This work opens up the possibility of entanglement structures beyond the area law of many-body localized states in disordered systems.
\section*{Acknowledgment}
This project is part of the Blue Waters sustained-petascale computing project,
which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the State of Illinois.
Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This material is based upon work supported by the U.S. Department of Energy, Office of Science under Award Number FG02-12ER46875. Di acknowledges useful discussions with Tianci Zhou. BKC thanks Andrei Bernevig for mentioning his eta-pairing work.
|
{
"timestamp": "2018-03-09T02:00:23",
"yymm": "1803",
"arxiv_id": "1803.02838",
"language": "en",
"url": "https://arxiv.org/abs/1803.02838"
}
|
\section{\label{sec:level1}Introduction}
The multi-particle entangled Greenberger-Horne-Zeilinger (GHZ) state shows unique non-local correlations which are essential for understanding the fundamental principles of quantum entanglement \cite{Greenberger89, Carvacho17} and has important applications in quantum information protocols \cite{Zhao04, Kempe99}. Many ingenious schemes for the creation of GHZ states in atomic systems have been previously proposed using a multi-step or a single-step process \cite{Muller09, Ostmann17, Saffman10}. In this paper we present a single-step scheme for GHZ state creation employing Rydberg dipole blockade and Stimulated Raman Adiabatic Passage (STIRAP) \cite{Bergmann98, Vitanov17}, using a single control atom and an ensemble of target atoms. Rydberg states, which are high-lying atomic levels, exert long-range dipole forces on the atoms in their vicinity when excited, effectively blocking the excitation of more than one atom to the same Rydberg state \cite{Lukin01,Saffman10}. This phenomenon of `dipole blockade' provides an atomic control that acts on multiple atoms at the same time, which is necessary for generating entanglement between the atoms of the ensemble within the blockade radius. Approaches to create a multi-particle GHZ state by using Electromagnetically Induced Transparency and adiabatic passage along with Rydberg blockade have been previously studied \cite{Muller09, Unanyan2002, Moller08}. The fidelity of the GHZ states obtained at the end of these protocols is an important figure of merit. Because of radiative decay from the excited Rydberg states of the ensemble atoms, the fidelity of the GHZ states obtained in these schemes is adversely affected \cite{Saffman10}.\\
Here we propose a different theoretical scheme to realize the creation of a multi-particle GHZ state in an ensemble of $\Lambda$-type three-level Rydberg atoms which is robust to radiative decay from the excited Rydberg levels of the ensemble atoms. In this setup, the control atom and the ensemble of target atoms are assumed to be independently addressable. This can be achieved by storing them in two separate trapping potentials in close proximity, or in a lattice where the control atom can be efficiently addressed. This setup is similar to the one discussed in the proposal by M\"uller et al. \cite{Muller09}. \\
The control atom has a three level structure as is shown in Fig. (\ref{fig:Fig1a}). The two metastable levels $|0\rangle$ and $|1\rangle$ determine the state of the control atom. Level $|0\rangle$ is connected to the excited Rydberg level $|R\rangle$ via a control pulse with Rabi frequency given by $\Omega_{c}(t)$.
\begin{figure}
\centering
\subfloat[Atomic level structure]{\label{fig:Fig1a}\includegraphics[width=0.23\textwidth]{Fig1a.png}}~
\subfloat[Pulse Scheme]{\label{fig:Fig1b}\includegraphics[width=0.23\textwidth]{Fig1b.png}}
\caption{(a)The atomic level structure of the control atom and the target ensemble atoms: The control atom has two metastable states $|0\rangle$ and $|1\rangle$. The level $|0\rangle$ interacts with the excited Rydberg level $|R\rangle$ via Rabi frequency $\Omega_{c}(t)$. $\delta_{R}$ is the detuning between the carrier frequency of the light pulse and the frequency of transition between the levels $|0\rangle$ and $|R\rangle$. The level $|1\rangle$ is isolated from the other levels. Each target atom has a $\Lambda$ type level structure with two metastable states, $|g\rangle$ and $|s\rangle$. They interact with the excited Rydberg level $|r\rangle$ via Gaussian pulses having Rabi frequencies $\Omega_{g}(t)$ and $\Omega_{s}(t)$ respectively. The detuning for both the pulses is given by $\delta$. (b) The pulse sequences: This protocol begins with a Gaussian [$\Omega_{c}(t)$] $\pi$ pulse having a standard deviation given by $T_{c}$ to take the control atom from $|0\rangle$ to $|R\rangle$. It is then followed by counter-intuitive STIRAP pulse sequence with Gaussian profiles, each having $T (\gg T_{c})$ standard deviation. $\tau$ is the time interval between the peaks of these two STIRAP pulses. Finally, another control $\pi$ pulse is used to bring the control atom back to state $|0\rangle$.\label{fig:1}}
\end{figure}
Level $|1\rangle$ is chosen such that dipolar transitions between $|1\rangle$ and $|0\rangle$ as well as $|R\rangle$ are forbidden. An ensemble of N target Rydberg atoms is considered to be within the blockade radius of the excited control atom. The level structure of the ensemble atoms and the corresponding pulse sequence acting on them is shown in Fig. (\ref{fig:1}). Every ensemble atom has two metastable ground states, namely, $|g\rangle$ and $|s\rangle$, and one Rydberg excited level $|r\rangle$. All the ensemble atoms are initialized in the $|g\rangle$ state. This GHZ state creation protocol begins with a control $\pi$ pulse having Rabi frequency $\Omega_{c}(t)$ which is used to excite the control atom. If the control atom is in state $|1\rangle$, the control pulse has no effect. On the other hand, if it is in state $|0\rangle$, the control $\pi$ pulse excites the atom to the Rydberg level $|R\rangle$. Due to the long-range dipole-dipole interactions between the excited Rydberg level $|R\rangle$ and the Rydberg levels $|r\rangle$, the Rydberg levels of the target ensemble atoms undergo an energy shift given by the frequency $\Delta$. In the absence of this energy shift, the condition for adiabatic population transfer of the ensemble atoms from the ground state $|g^{N}\rangle=\otimes_{j=1}^{N}|g\rangle_{j}$ to $|s^{N}\rangle=\otimes_{j=1}^{N}|s\rangle_{j}$ via the counter-intuitive STIRAP pulse sequence $\Omega_{s}(t)$ and $\Omega_{g}(t)$ [Fig. (\ref{fig:1})] is satisfied. The parameters of the system are set up in such a way that when the control atom is excited to $|R\rangle$, the induced energy shift $\Delta$ in the ensemble atoms disrupts the STIRAP condition for population transfer from $|g^{N}\rangle$ to $|s^{N}\rangle$. Due to the added detuning, the population remains in the state $|g^{N}\rangle$ after the application of the STIRAP pulses. Finally, another control $\pi$ pulse is used to bring the control atom back to its original state. When the control atom is prepared in the $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ superposition state at the beginning of the protocol and finally measured in the superposition basis, the ensemble atoms are projected onto an N-particle GHZ state.\\
If the conditions for STIRAP are met, the instantaneous eigenstate occupied by the ensemble atoms has no contribution from the level $|r\rangle$ at all times. Hence, this protocol is insensitive to the radiative decay losses from the excited Rydberg level of the ensemble atoms. \\
Let us now analyze this scheme in detail and study the dependence of the STIRAP transfer conditions on the parameters of the system. In Sec. \ref{sec:level2} we discuss the dynamics of the control atom. This is followed by the discussion of the transfer mechanism in the target atoms and the adiabaticity conditions required for efficient transfer in Sec. \ref{sec:level3}. Numerical simulations of this protocol for realistic parameters are then presented in Sec. \ref{sec:level4} . In Sec. \ref{sec:level5} we conclude the discussion.
\section{\label{sec:level2}The control atom}
The Hamiltonian for the control atom interacting with the classical control field, in the field-interaction representation and under the rotating wave approximation (RWA), is given below:
\begin{eqnarray}
\frac{H_{C}(t)}{\hbar}&=& \delta_{R}|R\rangle\langle R|+\frac{\Omega^{*}_{c}(t)}{2}|0\rangle\langle R|+\frac{\Omega_{c}(t)}{2}|R\rangle\langle 0|
\label{eq:conH}
\end{eqnarray}
The energy levels are measured relative to the ground state energy $\hbar\omega_{0}=0$. In Eq. (\ref{eq:conH}), $\delta_{R}\equiv\omega_{R}-\omega_{c}$ is the detuning between the frequency of transition from $|0\rangle$ to $|R\rangle$ (denoted by $\omega_{R}$) and the optical frequency of the control pulse, $\omega_{c}$. As noted previously, $\Omega_{c}(t)$ is the Rabi frequency of the control pulse with a Gaussian temporal profile given below.
\begin{eqnarray}
\Omega_{c}(t)&=&\Omega_{c0}\exp\big[{-\frac{(t-\tau_{c})^2}{2T_{c}^{2}}\big]}
\end{eqnarray}
We will assume $\Omega_{c0}$ to be real in all the calculations hereafter. The level $|1\rangle$ is isolated from the levels $|0\rangle$ and $|R\rangle$. For $\delta_{R}=0$, on solving the Schr\"odinger equation for a general wave function, $|\Psi(t)\rangle = c_{0}(t)|0\rangle+c_{R}(t)|R\rangle$, with $|c_{0}(-\infty)| = 1$, we get:
\begin{eqnarray}
|c_{0}(\infty)|^{2}&=&\cos^{2}\Theta\\
|c_{R}(\infty)|^{2}&=&\sin^{2}\Theta\\
\Theta \equiv \int_{-\infty}^{\infty}\frac{\Omega_{c}(t')}{2}dt' &=& \Omega_{c0}T_{c}\sqrt{\frac{\pi}{2}}
\end{eqnarray}
For complete transfer of population from $|0\rangle$ to $|R\rangle$ state, $\Theta$ should be an odd multiple of $\frac{\pi}{2}$. Thus, we need:\\
\begin{eqnarray}
\Omega_{c0}T_{c}&=& (2p+1)\sqrt{\frac{\pi}{2}},~~~~p\in\mathbb{Z}
\label{eq:oddmul}
\end{eqnarray}
To check the robustness of this transfer against variations in the Rabi frequency, we look at the derivative of $|c_{R}(\infty)|$ with respect to $\Omega_{c0}$.
\begin{eqnarray}
\frac{\partial|c_{R}(\infty)|}{\partial\Omega_{c0}}&=&-T_{c}\sqrt{\frac{\pi}{2}}\cos(\Omega_{c0}T_{c}\sqrt{\frac{\pi}{2}})
\label{eq:robust}
\end{eqnarray}
Eq.~(\ref{eq:robust}) implies that smaller values of $T_{c}$ provide more robustness against variations in $\Omega_{c0}$. For $\delta_{R}\neq 0$, an analytic solution for the Gaussian form of the Rabi frequency is difficult to derive. Hence, we examine the dependence of $|c_{R}(\infty)|^{2}$ on different values of $\Omega_{c0}$, $\delta_{R}$ and $T_{c}$ numerically in Fig. (\ref{fig:2}). For the value of $T_{c} = 0.1T$, where $T$ is the standard deviation of the Gaussian STIRAP pulses, we see from Fig. (\ref{fig:Fig2a}) that the population gets completely transferred to the $|R\rangle$ state when $\Omega_{c0}T = 6.2$ and $\delta_{R}T = 0$. From Fig. (\ref{fig:Fig2b}), we see that there are multiple periodic values of $\Omega_{c0}T$ for which complete population transfer to the excited level can be achieved via a $\pi$ pulse, as expected from Eq. (\ref{eq:oddmul}), for $T_{c}=1T$. As $\delta_{R}T$ becomes larger, the fraction of population in the excited state decreases and eventually becomes zero. The effect of larger values of $\delta_{R}T$ is more prominent for larger values of $T_{c}$. As derived in Eq. (\ref{eq:robust}), we see that smaller values of $T_{c}$ provide more robust transfer against variations in $\Omega_{c0}$ and $\delta_{R}$.
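As a quick numerical check of the resonant pulse-area result, the short Python sketch below integrates the Schr\"odinger equation for the two-level control atom of Eq. (\ref{eq:conH}) with a Gaussian $\pi$ pulse chosen according to Eq. (\ref{eq:oddmul}); the parameter values are illustrative placeholders and the sketch is not meant to reproduce the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Check of |c_R(inf)|^2 = sin^2(Theta) for a resonant Gaussian pi pulse.
T = 1.0                              # STIRAP pulse width sets the time unit
Tc = 0.1 * T                         # control-pulse width
Omega_c0 = np.sqrt(np.pi / 2) / Tc   # pi-pulse condition with p = 0
delta_R = 0.0                        # resonant drive

def rhs(t, c):
    Oc = Omega_c0 * np.exp(-t ** 2 / (2 * Tc ** 2))
    H = np.array([[0.0, Oc / 2.0],
                  [Oc / 2.0, delta_R]])      # basis {|0>, |R>}
    return -1j * (H @ c)

sol = solve_ivp(rhs, (-8 * Tc, 8 * Tc), np.array([1.0, 0.0], dtype=complex),
                rtol=1e-10, atol=1e-12)
Theta = Omega_c0 * Tc * np.sqrt(np.pi / 2)
print(abs(sol.y[1, -1]) ** 2, np.sin(Theta) ** 2)   # both close to 1
\end{verbatim}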
\begin{figure}
\centering
\subfloat[$T_{c} = 0.1 T$]{\label{fig:Fig2a}\includegraphics[width=0.24\textwidth]{Fig2a.png}}~
\subfloat[$T_{c} = 1 T$]{\label{fig:Fig2b}\includegraphics[width=0.24\textwidth]{Fig2b.png}}
\caption{(a) The coefficient of population in state $|R\rangle$ transferred from $|0\rangle$, $|c_{R}(\infty)|^{2}$, due to the control $\pi$ pulse is plotted as a function of scaled detuning $\delta_{R}T$ and scaled peak control Rabi frequency $\Omega_{c0}T$ for a value of $T_{c} = 0.1 T$. (b) Same as plot (a) but for value of $T_{c}=1 T$. We see that smaller values of $T_{c}$ are more robust to variations in detuning and peak Rabi frequency.}\label{fig:2}
\end{figure}
\section{\label{sec:level3}The target ensemble}
In this section, we will derive the conditions that are necessary to maintain adiabatic transfer of the ensemble atoms from $|g^{N}\rangle$ to $|s^{N}\rangle$ when the control atom is in state $|1\rangle$ and to remain in the state $|g^{N}\rangle$ when the control atom is in the $|0\rangle$ state. The Hamiltonian for ensemble atoms interacting with the counter-intuitive STIRAP pulse sequence in the RWA is given below:
\begin{eqnarray}
\frac{H_{T}(t)}{\hbar}&=& \sum_{j=1}^{N}\big[(\omega_{r}^{0}-\delta_{g})|g\rangle_{j}\langle g|+(\omega_{r}^{0}-\delta_{s})|s\rangle_{j}\langle s|\big]\nonumber\\
&+&\sum_{j=1}^{N}\big[\frac{\Omega^{*}_{g}(t)}{2}e^{-\textit{i}\omega^{0}_{r}t}|g\rangle_{j}\langle r|+\frac{\Omega^{*}_{s}(t)}{2}e^{-\textit{i}\omega^{0}_{r}t}|s\rangle_{j}\langle r| \nonumber\\
&&+\text{h.c.}\big]
\label{eq:tarH}
\end{eqnarray}
In Eq. (\ref{eq:tarH}), $\hbar\omega_{r}^{0}$ is the energy of the excited level $|r\rangle$. With the energies of the states $|g\rangle$ and $|s\rangle$ denoted by $\hbar\omega_{g}^{0}$ and $\hbar\omega_{s}^{0}$ respectively, $\delta_{g(s)} = \omega_{r}^{0}-\omega_{g(s)}^{0}-\omega_{g(s)}$ are the detunings of these levels with respect to the optical frequencies $\omega_{g}$ and $\omega_{s}$ of the STIRAP pulses shown in Fig. (\ref{fig:Fig1b}). The corresponding Rabi frequencies $\Omega_{g}(t)$ and $\Omega_{s}(t)$ are defined as follows:
\begin{eqnarray}
\Omega_{g}(t)&=&\Omega\exp{\big[-\frac{(t-\frac{\tau}{2})^{2}}{2T^{2}}\big]}\label{eq:omg}\\
\Omega_{s}(t)&=&\Omega\exp{\big[-\frac{(t+\frac{\tau}{2})^{2}}{2T^{2}}\big]}
\label{eq:oms}
\end{eqnarray}
In Eqs. (\ref{eq:omg})-(\ref{eq:oms}), $\Omega$ is the peak Rabi frequency of the Gaussian STIRAP pulses, $\tau$ is the time separation between the peaks of the two pulses and $T$ is the standard deviation. We can simplify the Hamiltonian in Eq. (\ref{eq:tarH}) by setting $\omega_{r}^{0}=0$ and assuming the two-photon resonance condition for the system, i.e. $\delta_{g}=\delta_{s}=\delta$ \cite{Bergmann98}. Boosting the energy of all the levels by $\delta$, we get the modified Hamiltonian for the target ensemble as:
\begin{eqnarray}
\frac{H_{T}(t)}{\hbar}&=& \sum_{j=1}^{N}\big[\delta|r\rangle_{j}\langle r|+\frac{\Omega^{*}_{g}(t)}{2}|g\rangle_{j}\langle r|+\frac{\Omega^{*}_{s}(t)}{2}|s\rangle_{j}\langle r| \nonumber\\
&&+\text{h.c.}\big]
\label{eq:tarH2}
\end{eqnarray}
We will restrict the basis set for the analysis of this system to states containing at most one Rydberg excitation, by assuming that all the atoms are within the Rydberg blockade radius of each other. We can rewrite the Hamiltonian in Eq. (\ref{eq:tarH2}) in the symmetric Fock state basis set defined by:
\begin{eqnarray}
\Sigma_{\mu,\nu}&=&\sum_{j}|\mu\rangle_{j}\langle \nu| = a_{\mu}^{\dagger}a_{\nu};\\
|g^{N-n};s^{n};r^{0}\rangle& =& \sqrt{\frac{(N-n)!}{N!n!}}\Sigma_{s,g}^{n}|g^{N}\rangle\\
|g^{N-n-1};s^{n};r^{1}\rangle &=& \sqrt{\frac{(N-n-1)!}{N!n!}}\Sigma_{s,g}^{n}\Sigma_{r,g}|g^{N}\rangle
\end{eqnarray}
There are in all (2N+1) states in this basis set, namely,
\begin{eqnarray}
\big\{|g;s;r\rangle_{N}\big\}&=&\big\{|g^{N};s^{0};r^{0}\rangle,..,|g^{N-n};s^{n};r^{0}\rangle,..|g^{0};s^{N};r^{0}\rangle, \nonumber\\
& &|g^{N-1};s^{0};r^{1}\rangle,..,|g^{N-n-1};s^{n};r^{1}\rangle,..|g^{0};s^{N-1};r^{1}\rangle\big\} \nonumber\\
\label{eq:set}
\end{eqnarray}
As shorthand notation, we use $|g^{N}\rangle \equiv |g^{N};s^{0};r^{0}\rangle$ and $|s^{N}\rangle \equiv |g^{0};s^{N};r^{0}\rangle $. The corresponding Hamiltonian in the Fock number basis is then:
\begin{eqnarray}
\frac{H_{T}(t)}{\hbar}&=&\delta \sigma_{r}^{+}\sigma_{r}^{-}+\big[\frac{\Omega_{g}^{*}(t)}{2}a_{g}^{\dagger}\sigma_{r}^{-}+\frac{\Omega_{s}^{*}(t)}{2}a_{s}^{\dagger}\sigma_{r}^{-}+\text{h.c.}\big]\nonumber\\
\label{eq:tarH3}
\end{eqnarray}
where:
\begin{eqnarray}
\sigma_{r}^{+}|r^{0}\rangle &=& |r^{1}\rangle ~~~~~~\sigma_{r}^{-}|r^{0}\rangle = 0\\
\sigma_{r}^{-}|r^{1}\rangle &=& |r^{0}\rangle ~~~~~~\sigma_{r}^{+}|r^{1}\rangle = 0
\end{eqnarray}
Using the properties of block tri-diagonal matrices, it can be shown that the Hamiltonian in Eq. (\ref{eq:tarH3}), when expressed as a matrix in the basis set defined by Eq. (\ref{eq:set}), always has 0 as an eigenvalue. The characteristic equation of this Hamiltonian is invariant under $\delta \rightarrow -\delta$ together with $\lambda \rightarrow -\lambda$. This structure implies that the remaining 2N eigenvalues for detuning $-\delta$ are the negatives of those for detuning $\delta$. With the new definitions given in Eqs. (\ref{eq:om0})-(\ref{eq:the}), we are set to explore the eigenstructure of this system.
\begin{eqnarray}
\Omega_{0}(t) &\equiv& \sqrt{\Omega_{g}^{2}(t)+\Omega_{s}^{2}(t)} \label{eq:om0}\\
\tan\theta(t) \equiv \frac{\Omega_{g}(t)}{\Omega_{s}(t)};&~~~&\tan\varphi(t) \equiv \frac{\Omega_{0}(t)}{\delta}\label{eq:the}
\end{eqnarray}
\begin{figure}
\centering
\subfloat[$|c_{s}(\infty)|^{2}$]{\label{fig:Fig3a}\includegraphics[width=0.25\textwidth]{Fig3a.png}}~
\subfloat[$|c_{s}(\infty)|^{2}+|c_{g}(\infty)|^{2}$]{\label{fig:Fig3b}\includegraphics[width=0.25\textwidth]{Fig3b.png}}\\
\subfloat[$|c_{s^{5}}(\infty)|^{2}$]{\label{fig:Fig3c}\includegraphics[width=0.25\textwidth]{Fig3c.png}}~
\subfloat[$|c_{s^{5}}(\infty)|^{2}+|c_{g^{5}}(\infty)|^{2}$]{\label{fig:Fig3d}\includegraphics[width=0.25\textwidth]{Fig3d.png}}\\
\caption{(a) Coefficient of population in state $|s\rangle$ for a target ensemble with 1 atom after the application of the STIRAP pulses, as a function of the scaled peak Rabi frequency $\Omega T$ and scaled detuning $\delta T$ for $\tau = 1.4T$. (b) Total population in the states $|s\rangle$ and $|g\rangle$ after the STIRAP pulses for a single target atom as a function of $\Omega T$ and $\delta T$. (c) Same as plot (a) but for a target ensemble of 5 atoms. (d) Same as plot (b) for N = 5 atoms. We see that as the number of target atoms goes up, the parameter space for adiabatic transfer from $|g^{N}\rangle$ to $|s^{N}\rangle$, or for no transfer, gets modified as per the conditions derived in Eqs. (\ref{eq:a})-(\ref{eq:c}).}\label{fig:3}
\end{figure}
On solving for the eigenvalues of this system, we find that the non-zero eigenenergies are:
\begin{eqnarray}
E^{N}_{\pm n} &=& \frac{\hbar\Omega_{0}(t)}{2}[\cot\varphi(t)\pm\sqrt{n+\cot^{2}\varphi(t)}], ~~~~ n = 1,..,N\nonumber\\
\end{eqnarray}
The corresponding eigenstates are denoted by $|\lambda_{\pm n}^{N}\rangle$. The eigenstate with eigenenergy 0 is given as:
\begin{eqnarray}
|O(t)\rangle &=& \sum_{n=0}^{N}(-1)^{N-n}\alpha^{N}_{n}(t)|g^{N-n};s^{n};r^{0}\rangle\\
\alpha^{N}_{n}(t)&=&\sqrt{\frac{N!}{n!(N-n)!}}\cos^{N-n}(\theta(t))\sin^{n}(\theta(t))
\end{eqnarray}
State $|O(t)\rangle$ is the N-particle STIRAP state. As $t\rightarrow -\infty$, $|O(-\infty)\rangle = |g^{N}\rangle$, and as $t\rightarrow \infty$, $|O(\infty)\rangle = |s^{N}\rangle$. If this system evolves adiabatically, then the population of the target ensemble can be coherently transferred from $|g^{N}\rangle$ to $|s^{N}\rangle$. This eigenstate with eigenvalue 0 has no contribution from the excited level $|r\rangle$ for any number of ensemble atoms at all times. It is also independent of the detuning $\delta$. In the STIRAP process our aim is to keep the target ensemble in the instantaneous eigenstate $|O(t)\rangle$ at all times. Adiabatic population transfer along this eigenstate implies that this protocol is insensitive to spontaneous emission from the excited level $|r\rangle$. This is a key feature of this scheme which provides us with a robust mechanism of population transfer even in the presence of decay. Numerical studies in the presence of decay are described in Sec. \ref{sec:level4}. \\
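This structure is easy to verify numerically. The following Python sketch builds the Hamiltonian of Eq. (\ref{eq:tarH3}) in the basis of Eq. (\ref{eq:set}) for arbitrary snapshot values of $\Omega_{g}$, $\Omega_{s}$ and $\delta$ (illustrative numbers, not model parameters) and checks that a zero eigenvalue with no weight on the $|r^{1}\rangle$ subspace is present.
\begin{verbatim}
import numpy as np

# Fock-basis Hamiltonian of the target ensemble and its zero-eigenvalue state.
N = 5
Og, Os, delta = 3.0, 1.5, 2.0     # arbitrary snapshot values

def fock_hamiltonian(N, Og, Os, delta):
    dim = 2 * N + 1
    H = np.zeros((dim, dim))
    # indices 0..N   -> |g^{N-n}; s^n; r^0>
    # indices N+1..2N -> |g^{N-n-1}; s^n; r^1>, n = 0..N-1
    for n in range(N):
        r1 = N + 1 + n
        H[n, r1] = H[r1, n] = 0.5 * Og * np.sqrt(N - n)
        H[n + 1, r1] = H[r1, n + 1] = 0.5 * Os * np.sqrt(n + 1)
        H[r1, r1] = delta
    return H

H = fock_hamiltonian(N, Og, Os, delta)
w, v = np.linalg.eigh(H)
k = np.argmin(np.abs(w))
print("smallest |eigenvalue|:", abs(w[k]))
print("weight of this state on |r^1> subspace:", np.sum(v[N + 1:, k] ** 2))
\end{verbatim}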
The condition for maintaining adiabatic transfer along the $|O(t)\rangle$ state is summarized by the adiabaticity criterion discussed in \cite{Comparat09} given as:
\begin{eqnarray}
\sum_{m\neq 0}\big|\frac{\hbar\langle m|\dot {O}(t)\rangle}{E_{0}-E_{m}}\big|\ll 1
\label{eq:adcond}
\end{eqnarray}
In the above Eq. (\ref{eq:adcond}), $E_{0}$ is the eigenenergy of the eigenstate $|O(t)\rangle$ and the sum is taken over all the other eigenstates $|m\rangle$ with eigenenergies $E_{m}$.\\
From here onwards, we will assume $\Omega$ to be real. On analyzing the eigenstates $|\lambda^{N}_{\pm 1}\rangle$ corresponding to eigenenergies $E^{N}_{\pm 1}$, we find that the projection of state $|\lambda^{N}_{\pm 1}\rangle$ onto the $|r^{0}\rangle$ subspace is co-linear with $|\dot{O}(t)\rangle$:
\begin{eqnarray}
\langle \lambda^{N}_{+ 1}(t)|\dot{O}(t)\rangle &=& \frac{\dot{\theta}(t)\sqrt{N}\sin(\frac{\varphi(t)}{2})}{\cot(\frac{\varphi(t)}{2})}\\
\langle \lambda^{N}_{- 1}(t)|\dot{O}(t)\rangle &=& \frac{-\dot{\theta}(t)\sqrt{N}\cos(\frac{\varphi(t)}{2})}{\tan(\frac{\varphi(t)}{2})}
\end{eqnarray}
The eigen-structure is such that for any value of N, all the eigenstates except the zeroth eigenstate have non-zero projections in the $|r^{1}\rangle$ subspace. From the orthonormality properties of the eigenvectors we can deduce that:
\begin{eqnarray}
\langle \lambda^{N}_{\pm n}|P_{r^{0}}P^{\dagger}_{r^{0}}|\lambda^{N}_{\pm m}\rangle = \langle \lambda^{N}_{\pm n}|P_{r^{1}}P^{\dagger}_{r^{1}}|\lambda^{N}_{\pm m}\rangle = 0 ~~\forall~~n\neq m\nonumber\\
\end{eqnarray}
Here, $P^{\dagger}_{r^{0}}$ and $P^{\dagger}_{r^{1}}$ are projection operators for the $|r^{0}\rangle$ and $|r^{1}\rangle$ subspace respectively. From the above deduction we can conclude that only the $|\lambda^{N}_{\pm 1}\rangle$ eigenstates contribute to the sum in Eq. (\ref{eq:adcond}). On simplifying the adiabatic condition we get:
\begin{eqnarray}
\dot{\theta}(t) &\ll&\frac{\Omega_{0}(t)}{2\sqrt{N}}f(\varphi(t))\label{eq:thetdot}\\
f(\varphi(t))&=&\frac{\sin\frac{\varphi(t)}{2}\cos\frac{\varphi(t)}{2}}{\sin^{3}\frac{\varphi(t)}{2}+\cos^{3}\frac{\varphi(t)}{2}}\label{eq:adexp}
\end{eqnarray}
Substituting the expressions for $\Omega_{0}(t)$ and $\dot{\theta}(t)$ in Eq. (\ref{eq:thetdot}), the adiabaticity condition is rewritten in Eq. (\ref{eq:adexp2}). Here, we have scaled all the variables with $T$, thus, $\tilde{\Omega}\equiv\Omega T$, $\tilde{\tau}\equiv\frac{\tau}{T}$ and similarly $\tilde{\delta}$ and $\tilde{t}$.
\begin{eqnarray}
1&\ll&\sqrt{\frac{2}{N}}\frac{\tilde{\Omega}}{\tilde{\tau}} \exp{(-\frac{(\tilde{t}^{2}+\frac{\tilde{\tau}^{2}}{4})}{2})}\cosh^{3/2}(\tilde{t}\tilde{\tau})f(\varphi(\tilde{t}))
\label{eq:adexp2}
\end{eqnarray}
Since the Rabi frequencies and detuning are positive, $0 \leq \varphi(t) < \frac{\pi}{2}$. The function $f(\varphi(t))$ is monotonically increasing in $\varphi(t)$ over this range. For the strictest adiabaticity condition, we should therefore consider the limit $\varphi(t)\rightarrow 0$. In this limit, $f(\varphi(t)) = \frac{\Omega_{0}(t)}{2\delta}$, valid for $\delta \gg \Omega_{0}(t)$. On the other hand, when $\varphi(t) \rightarrow \frac{\pi}{2}$, we get $f(\varphi(t)) = \frac{1}{\sqrt{2}}$, corresponding to $\delta \rightarrow 0$. During the population transfer, i.e. when $\Omega_{0}(\tilde{t})$ is appreciable, the $\tilde{t}$ dependence of the RHS of Eq. (\ref{eq:adexp2}) is singly peaked with a maximum at $\tilde{t}=0$ for $\tilde{\tau}$ up to about 1.4, and becomes doubly peaked with a local minimum at $\tilde{t}=0$ as $\tilde{\tau}$ is increased further. It is thus sufficient to study Eq. (\ref{eq:adexp2}) at $\tilde{t}=0$ for all values of $\tilde{\tau}$. Incorporating the above simplifications, the adiabaticity condition is given as:
\begin{eqnarray}
1&\ll&\frac{\tilde{\Omega}^2}{\sqrt{N}\tilde{\tau}\tilde{\delta}} \exp{(-\frac{\tilde{\tau}^{2}}{4})}~~~~\text{when}~~~\tilde{\delta}\gg\tilde{\Omega}
\label{eq:adconddel}
\end{eqnarray}
It is worthwhile to keep in mind that when $\delta \rightarrow 0$, this condition becomes:
\begin{eqnarray}
1&\ll&\frac{\tilde{\Omega}}{\sqrt{N}\tilde{\tau}} \exp{(-\frac{\tilde{\tau}^{2}}{8})}
\label{eq:adcondnodel}
\end{eqnarray}
Note the dependence of the adiabaticity conditions in Eq. (\ref{eq:adconddel}) and Eq. (\ref{eq:adcondnodel}) on the number of atoms in the ensemble. The condition for adiabatic transfer along the $|O\rangle$ eigenstate becomes stricter by $\sqrt{\text{N}}$ for an ensemble of N atoms. The optimum value of $\tau$ can be obtained numerically. When all other parameters are fixed, the condition $\tilde{\delta} \ll \tilde{\Omega}^{2}$ for the adiabatic transfer is similar to what was proved by Vitanov and Stenholm in 1997 \cite{Vitanov97} for a single atom case.\\
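To make the $\sqrt{N}$ scaling explicit, the small snippet below simply evaluates the right-hand sides of Eq. (\ref{eq:adconddel}) and Eq. (\ref{eq:adcondnodel}) for a few ensemble sizes; the chosen values of $\tilde{\Omega}$, $\tilde{\tau}$ and $\tilde{\delta}$ are illustrative placeholders.
\begin{verbatim}
import numpy as np

# Adiabaticity margins (the "much greater than 1" quantities) vs. N.
def margin_large_delta(Omega, tau, delta, N):
    # RHS of the condition for delta >> Omega
    return Omega ** 2 / (np.sqrt(N) * tau * delta) * np.exp(-tau ** 2 / 4)

def margin_zero_delta(Omega, tau, N):
    # RHS of the condition for delta -> 0
    return Omega / (np.sqrt(N) * tau) * np.exp(-tau ** 2 / 8)

for N in (1, 5, 10):
    print(N, margin_zero_delta(9.5, 1.4, N), margin_large_delta(9.5, 1.4, 20.0, N))
\end{verbatim}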
Let us now understand the condition required for the atomic population to remain in the state $|g^{N}\rangle$ when the added detuning due to Rydberg dipole-dipole interaction is introduced. For a single atom case, as long as $\tilde{\delta}\gg\tilde{\Omega}$, we can reduce the three level system to a two level system. In this case, the condition for adiabatic transfer from $|g\rangle$ to $|s\rangle$ is simply $\tilde{\delta}\ll\tilde{\Omega}^{2}$, ignoring the effects of $\tilde{\tau}$. On the other hand, the condition to remain in the $|g\rangle$ state is $\tilde{\Omega}^2 \ll \tilde{\delta}$ which is obtained by making the effective coupling between levels $|g\rangle$ and $|s\rangle$ small \cite{Vitanov97}. This situation changes a little in the presence of more than one atom. In this case, when we enforce that the effective couplings are kept small, the condition for the ensemble state to remain in the state $|g^{N}\rangle$ is modified to:
\begin{eqnarray}
\sqrt{N}\tilde{\Omega}^{2} \ll \tilde{\delta}~~~\text{when}~~~\tilde{\Omega}\ll\tilde{\delta}
\end{eqnarray}
Thus, we can conclude that for the ensemble state to be transferred to $|s^{N}\rangle$ state from the initial state $|g^{N}\rangle$, assuming $\tilde{\tau}$ is fixed, we must have:
\begin{eqnarray}
\tilde{\delta}_{|1\rangle} &\ll& \frac{\tilde{\Omega}^{2}}{\sqrt{N}} ~~~\text{when}~~~\tilde{\delta}_{|1\rangle}\gg\tilde{\Omega}
\label{eq:a}\\
1 &\ll& \frac{\tilde{\Omega}}{\sqrt{N}} ~~~\text{when}~~~\tilde{\delta}_{|1\rangle}\rightarrow 0
\label{eq:b}
\end{eqnarray}
Also, for the ensemble state to remain in the $|g^{N}\rangle$ state, we must have:
\begin{eqnarray}
\tilde{\delta}_{|0\rangle} &\gg&\sqrt{N}\tilde{\Omega}^{2} ~~~\text{when}~~~\tilde{\delta}_{|0\rangle}\gg\tilde{\Omega}
\label{eq:c}
\end{eqnarray}
In the above equations $\tilde{\delta}_{|0\rangle}$ and $\tilde{\delta}_{|1\rangle}$ are the detunings of ensemble atoms when the control atom is in state $|0\rangle$ and $|1\rangle$ respectively. For our protocol to work efficiently, our system should satisfy the conditions given in Eq. (\ref{eq:a}) or Eq. (\ref{eq:b}) along with Eq. (\ref{eq:c}). Thus, we can take $\tilde{\delta}_{|0\rangle} = \tilde{\delta}_{|1\rangle} + \tilde{\Delta}$. \\
To understand the implications of the adiabaticity conditions derived in this section, we numerically evolve the Hamiltonian for the ensemble atoms given in Eq. (\ref{eq:tarH3}) for different values of $\Omega T$ and $\delta T$. In Fig. (\ref{fig:3}) we plot the population of the ensemble atoms in state $|s^{N}\rangle$ for $N = 1$ and $5$, denoted by the coefficient $|c_{s^{N}}(\infty)|^2$. To compare this with the population that remained in the initial state $|g^{N}\rangle$, we plot the total population in the states $|s^{N}\rangle$ and $|g^{N}\rangle$ after the completion of the protocol. This sum is denoted as $|c_{g^{N}}(\infty)|^{2}+|c_{s^{N}}(\infty)|^{2}$. For $N=1$, we see from Fig. (\ref{fig:Fig3a}) that the population gets completely transferred to the $|s\rangle$ state for $\tilde{\Omega}^{2}\gg\tilde{\delta}$. It is clear from Fig. (\ref{fig:Fig3b}) that there is only a small portion of the parameter space, where $\tilde{\Omega}\approx\tilde{\delta} < 3$, in which the adiabatic transfer of population described above does not take place for $N=1$. This situation changes as the number of atoms in the target ensemble increases, since more intermediate states then become available. For $N=5$, as seen from Fig. (\ref{fig:Fig3c}), the condition for adiabatic transfer from $|g^{5}\rangle$ to $|s^{5}\rangle$ becomes stricter compared to that for $N=1$. Portions of the parameter space defined by $\tilde{\Omega}$ and $\tilde{\delta}$ open up where the adiabaticity conditions fail. This region clearly divides the parameter space into two sections: one which allows the adiabatic transfer of population from $|g^{N}\rangle$ to $|s^{N}\rangle$ with high fidelity, marked out by the condition $\tilde{\Omega}^{2}\gg\sqrt{N}\tilde{\delta}$, and the other where the population remains in $|g^{N}\rangle$ with unit probability. The Rydberg-Rydberg interaction between the control and the ensemble atoms provides a tunable mechanism to increase or decrease the effective value of $\tilde{\delta}$, such that the target atoms are always in either of these two high-fidelity regions depending on the state of the control atom.
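The type of calculation behind Fig. (\ref{fig:3}) can be sketched in a few lines of Python: the Schr\"odinger equation for the Fock-basis Hamiltonian of Eq. (\ref{eq:tarH3}) is integrated and the transferred population $|c_{s^{N}}(\infty)|^{2}$ is compared for a small and a large detuning, mimicking the two states of the control atom. The parameter values below are illustrative and the basis ordering follows Eq. (\ref{eq:set}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, T, tau, Omega = 5, 1.0, 1.4, 5.0    # illustrative values

def hamiltonian(t, delta):
    Og = Omega * np.exp(-(t - tau / 2) ** 2 / (2 * T ** 2))
    Os = Omega * np.exp(-(t + tau / 2) ** 2 / (2 * T ** 2))
    dim = 2 * N + 1
    H = np.zeros((dim, dim))
    # indices 0..N: |g^{N-n}; s^n; r^0>; indices N+1..2N: |g^{N-n-1}; s^n; r^1>
    for n in range(N):
        r1 = N + 1 + n
        H[n, r1] = H[r1, n] = 0.5 * Og * np.sqrt(N - n)
        H[n + 1, r1] = H[r1, n + 1] = 0.5 * Os * np.sqrt(n + 1)
        H[r1, r1] = delta
    return H

def final_sN_population(delta):
    rhs = lambda t, c: -1j * (hamiltonian(t, delta) @ c)
    c0 = np.zeros(2 * N + 1, dtype=complex)
    c0[0] = 1.0                              # start in |g^N>
    sol = solve_ivp(rhs, (-6 * T, 6 * T), c0, rtol=1e-8, atol=1e-10)
    return abs(sol.y[N, -1]) ** 2            # index N is |s^N>

print("detuning delta*T = 0   :", final_sN_population(0.0))    # transfer regime
print("detuning delta*T = 500 :", final_sN_population(500.0))  # blocked regime
\end{verbatim}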
\section{\label{sec:level4}Numerical results}
Before we start analyzing the numerical simulations for the control and target system together, let us introduce the effect of decoherence due to spontaneous emissions from the excited Rydberg states for the control atom and the target ensemble. \\
Assuming no collisions, the master equation for the density matrix, $\rho$, with $M$ number of spontaneous emission decay channels is given below:
\begin{eqnarray}
\dot{\rho}&=&\frac{\textit{i}}{\hbar}[\rho,H]+\hat{L}(\rho)\\
\hat{L}(\rho)&=&-\frac{1}{2}\sum_{m=1}^{M}(C_{m}^{\dagger}C_{m}\rho\text{+}\rho C_{m}^{\dagger}C_{m})\text{+}\sum_{m=1}^{M}C_{m}\rho C^{\dagger}_{m}
\end{eqnarray}
For the control atom, we have only one decay channel with the decay rate $\Gamma_{0R}$, namely,
\begin{eqnarray}
\hat{C}_{0R}&=&\sqrt{\Gamma_{0R}}|0\rangle\langle R|
\end{eqnarray}
For the target ensemble atoms, there are two decay channels with rates $\Gamma_{gr}$ and $\Gamma_{sr}$ defined as:
\begin{eqnarray}
\hat{C}_{gr}&=&\sqrt{\Gamma_{gr}}|g\rangle\langle r|\\
\hat{C}_{sr}&=&\sqrt{\Gamma_{sr}}|s\rangle\langle r|
\end{eqnarray}
In the forthcoming numerical calculations, we have chosen $\Gamma_{gr}=\Gamma_{sr}\equiv\Gamma_{r}$. It is straightforward to extend the master equation calculations to a system with more than one target atom using the Fock number state basis. \\
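As an illustration of how these decay channels enter the calculation, the sketch below integrates the master equation for a single target atom driven by the STIRAP pulses, using a plain vectorized Lindblad right-hand side rather than the full multi-atom Fock-basis treatment; the numerical values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Master equation for one target atom (basis {|g>, |s>, |r>}) with the two
# decay channels C_gr and C_sr, driven by the counter-intuitive STIRAP pulses.
T, tau, Omega, delta, Gamma_r = 1.0, 1.4, 9.5, 0.0, 0.05   # illustrative values

def hamiltonian(t):
    Og = Omega * np.exp(-(t - tau / 2) ** 2 / (2 * T ** 2))
    Os = Omega * np.exp(-(t + tau / 2) ** 2 / (2 * T ** 2))
    return np.array([[0.0, 0.0, Og / 2.0],
                     [0.0, 0.0, Os / 2.0],
                     [Og / 2.0, Os / 2.0, delta]], dtype=complex)

# Jump operators |g><r| and |s><r|, both with rate Gamma_r
C_ops = [np.sqrt(Gamma_r) * np.outer([1, 0, 0], [0, 0, 1]),
         np.sqrt(Gamma_r) * np.outer([0, 1, 0], [0, 0, 1])]

def rhs(t, y):
    rho = y.reshape(3, 3)
    H = hamiltonian(t)
    drho = -1j * (H @ rho - rho @ H)
    for C in C_ops:
        CdC = C.conj().T @ C
        drho += C @ rho @ C.conj().T - 0.5 * (CdC @ rho + rho @ CdC)
    return drho.ravel()

rho0 = np.zeros((3, 3), dtype=complex)
rho0[0, 0] = 1.0                                  # start in |g>
sol = solve_ivp(rhs, (-6 * T, 6 * T), rho0.ravel(), rtol=1e-8, atol=1e-10)
rho_f = sol.y[:, -1].reshape(3, 3)
print("final population in |s>:", rho_f[1, 1].real)
\end{verbatim}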
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig4.png}
\caption{The population in level $|s^{N}\rangle$ after the STIRAP pulses for different numbers of atoms in the target ensemble, N, and varying spontaneous emission rate $\Gamma_{r}T$. The values of the other parameters are $\delta T =0$, $\Omega T = 9.5$ and $\tau = 1.4T$. We see that the population transfer does not depend significantly on the decay rate and takes values higher than 0.99 for the typical range $\Gamma_{r}T\approx 0.01\sim0.1$.}\label{fig:4}
\end{figure}
We will first study the effect of decay due to spontaneous emission on target ensembles with different numbers of atoms. We choose the value of $T=1~\mu$s and $\tilde{\tau} = 1.4$ for all the numerical results hereafter. From Fig. (\ref{fig:4}), we see that even for an ensemble of about ten atoms, the population transferred to the $|s^{N}\rangle$ state from the $|g^{N}\rangle$ state is greater than 99\% for realistic values of the Rydberg level spontaneous emission rate of about $\Gamma_{r}\approx 0.01\sim 0.1$ MHz. As discussed above, spontaneous emission from the Rydberg excited levels of the target ensemble atoms does not affect this protocol, which makes it a very robust scheme. \\
Having laid the groundwork we will now look at the simulation of GHZ state creation. The total Hamiltonian for this system is:
\begin{eqnarray}
H_{Tot}(t)&=&H_{C}(t)+H_{T}(t)+\hbar\Delta |R\rangle \sigma_{r}^{+}\sigma_{r}^{-}\langle R |
\label{eq:totH}
\end{eqnarray}
The expressions for $H_{C}(t)$ and $H_{T}(t)$ are given in the Eq. (\ref{eq:conH}) and Eq. (\ref{eq:tarH3}) respectively. The interaction between the target ensemble and the control atom is introduced via the last term in Eq. (\ref{eq:totH}) with the interaction strength given by frequency $\Delta$.\\
We solve the Schr\"odinger equation numerically in the basis set $\big\{|0\rangle,|1\rangle,|R\rangle\big\}\otimes\big\{|g;s;r\rangle_{N}\big\}$ defined in Eq. (\ref{eq:set}) with the Hamiltonian defined by Eq. (\ref{eq:totH}), for the control atom in the initial state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and the ensemble atoms initialized in the $|g^{N}\rangle$ state. In Fig. (\ref{fig:5}) we plot the modulus squared of the coefficients corresponding to the components $|0\rangle|g^{N}\rangle$, $|0\rangle|s^{N}\rangle$, $|1\rangle|g^{N}\rangle$ and $|1\rangle|s^{N}\rangle$ of the state vector as it evolves with time, in the absence of any decay from the excited levels of the control and the target atoms. The final state obtained after measuring the control atom in the $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ state has a fidelity of 0.97 with respect to the GHZ state $|\phi\rangle =\frac{1}{\sqrt{2}}(|g^{N}\rangle+|s^{N}\rangle)$ for a target ensemble with N = 5 atoms and an interaction strength $\tilde{\Delta}=500$.
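The final projection and fidelity evaluation amount to a few lines. The sketch below assumes a joint state vector stored as an array of shape (3, 2N+1), with the control basis $\{|0\rangle,|1\rangle,|R\rangle\}$ and the ensemble Fock basis of Eq. (\ref{eq:set}); purely for illustration it uses the ideal pre-measurement state $\frac{1}{\sqrt{2}}(|0\rangle|g^{N}\rangle+|1\rangle|s^{N}\rangle)$ as a placeholder.
\begin{verbatim}
import numpy as np

# Project the control atom onto (|0>+|1>)/sqrt(2) and compute the fidelity of
# the ensemble state with |phi> = (|g^N> + |s^N>)/sqrt(2).
N = 5
dim = 2 * N + 1                           # ensemble Fock-basis dimension
psi = np.zeros((3, dim), dtype=complex)   # rows: control |0>, |1>, |R>
psi[0, 0] = 1 / np.sqrt(2)                # |0>|g^N>  (placeholder ideal state)
psi[1, N] = 1 / np.sqrt(2)                # |1>|s^N>

ens = (psi[0] + psi[1]) / np.sqrt(2)      # projection onto (|0>+|1>)/sqrt(2)
ens /= np.linalg.norm(ens)                # renormalize after the measurement

phi = np.zeros(dim, dtype=complex)
phi[0] = phi[N] = 1 / np.sqrt(2)          # GHZ target state of the ensemble
print("fidelity:", abs(np.vdot(phi, ens)) ** 2)   # = 1 for the ideal state
\end{verbatim}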
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Fig5.png}
\caption{Implementation of the protocol for N=5: Time evolution of the squared co-efficients of $|0\rangle|g^{N}\rangle$, $|0\rangle|s^{N}\rangle$, $|1\rangle|g^{N}\rangle$ and $|1\rangle|s^{N}\rangle$ under the influence of the Hamiltonian in Eq. (\ref{eq:totH}) with the initial condition $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|g^{N}\rangle$. Chosen parameters: $\Omega_{c0}T=6.2$, $\delta_{R}T=0$, $T_{c} = 0.1T$, $\Omega T=5$, $\delta T =0$, $\tau = 1.4T$, $\Delta T = 500$, $\Gamma_{r}T = \Gamma_{R}T = 0$, $\tau_{c}=\tau+4(T+T_{c})$.}\label{fig:5}
\end{figure}
Note that for this simulation, $T=1~\mu$s, which means that the entire operation takes only about 15-20 $\mu$s. Typical excited Rydberg level lifetimes for $n\gtrsim 60$ are of the order of 100 $\mu$s \cite{Saffman05}. Since the gate operation time is much shorter than the excited level lifetime, we can improve the fidelity by increasing the value of $\tilde{\Delta}=\Delta T$ simply by increasing the width of the STIRAP pulses, without necessarily exciting the atoms to much higher Rydberg levels. In Fig. (\ref{fig:6}), we plot the fidelity of the obtained final ensemble state with respect to the GHZ state $|\phi\rangle$ as a function of the interaction strength $\tilde{\Delta}$ for target ensembles having 1 and 5 atoms. The fidelity for a single target atom is above 98\% for $\tilde{\Delta}$ of 100 or more. On the other hand, the fidelity for the 5-atom ensemble is 98\% and higher for values of $\tilde{\Delta}=$ 600 and above.\\
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig6.png}
\caption{The fidelity of the final ensemble state with respect to $|\phi\rangle$ for N = 1 and 5 as a function of the interaction strength $\Delta T$, with the initial condition $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|g^{N}\rangle$. Parameters used in the simulation: $\Omega_{c0}T=6.2$, $\delta_{R}T=0$, $T_{c} = 0.1T$, $\delta T =0$, $\tau = 1.4T$, $\Gamma_{r}T = \Gamma_{R}T = 0$, $\tau_{c}=\tau+4(T+T_{c})$.}\label{fig:6}
\end{figure}
As we have already seen, spontaneous emission from the excited levels of the target atoms does not affect this protocol as long as the adiabaticity conditions are satisfied. What about spontaneous emission from the excited level of the control atom? In Fig. (\ref{fig:7}) we show the decrease in the fidelity of the final density matrix with respect to the state $|\phi\rangle$, for the same initial conditions as above, due to the decay from the $|R\rangle$ level. The plot shows the results for target ensembles of a single atom and of 5 atoms, with $\tilde{\Omega}=3.5$, $\tilde{\Delta}=200$ and $\tilde{\Omega}=5$, $\tilde{\Delta}=500$ respectively, and $\tilde{\delta}=0$. As expected, the rate of the decrease is the same for both cases, since it is not influenced by the number of target atoms. The fidelity is seen to drop from 99\% to 97\% for the single-atom target ensemble when the value of $\Gamma_{r}T$ increases to 0.01, whereas for the target ensemble with 5 atoms, the fidelity drops from 97\% to 95\%. \\
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig7.png}
\caption{Fidelity of the target ensemble density matrix, after measurement of the control atom in the superposition state, with respect to the state $|\phi\rangle$ for N = 1 and 5 and different values of $\Gamma_{R}T=\Gamma_{r}T$, numerically evaluated with the initial condition $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|g^{N}\rangle$. Parameters: $\Omega_{c0}T=6.2$, $\delta_{R}T=0$, $T_{c} = 0.1T$, $\delta T =0$, $\tau = 1.4T$, $\Delta T = 200$ for N = 1, $\Delta T = 500$ for N = 5, $\tau_{c}=\tau+4(T+T_{c})$.}\label{fig:7}
\end{figure}
It is possible to compensate for the losses due to spontaneous emission from the control atom by exciting it to higher Rydberg levels. This would serve the dual purpose of providing a longer excited-level lifetime as well as a stronger Rydberg dipole interaction \cite{Saffman10}, which would in turn improve the overall fidelity of the protocol.
\section{\label{sec:level5}Conclusion}
In conclusion, we have presented a protocol to create an N-particle GHZ state with a single control atom and an ensemble of N target atoms, based on the principles of Rydberg dipole blockade and STIRAP. We have discussed the conditions under which adiabatic transfer of the target ensemble population from one ground state to the other is facilitated, subject to the state of the control atom. The biggest advantage of this scheme is that it is not affected by the decay from the excited Rydberg levels of the target ensemble atoms as long as the conditions for adiabatic transfer are satisfied. Spontaneous emission from the excited Rydberg level of the control atom leads to a decrease in the fidelity of the protocol. This can be mitigated by exciting the control atom to a higher principal quantum number.\\
\section*{Acknowledgement}
TG would like to thank Prof. Luming Duan, University of Michigan, for helpful discussions and his valuable guidance throughout this project.
|
{
"timestamp": "2018-06-25T02:03:08",
"yymm": "1803",
"arxiv_id": "1803.02844",
"language": "en",
"url": "https://arxiv.org/abs/1803.02844"
}
|
\section{INTRODUCTION}
In the primordial material in the Solar System, such as comets and meteorites, molecular D/H ratios are often higher than
the elemental D/H ratio, which is $2\times 10^{-5}$ \citep{geiss98}. For example, the D/H ratio of water is found to be
$(1.6-5)\times 10^{-4}$ in comets \citep[e.g.][]{mumma11,altwegg15}. Since the zero-point energy of D-bearing species is
lower than that of the normal isotope by up to a few hundred K, the significant deuterium
enrichment is considered to originate in chemistry at low temperatures.
There are two possible sites of such low-temperature chemistry: molecular clouds prior to star formation and outer regions of protoplanetary disks.
In molecular clouds, the temperature is $\sim 10$ K and high D/H ratios are found for various molecules. For example,
even a doubly deuterated molecule, D$_2$CO, has been detected in prestellar cores, and the $n$(D$_2$CO)/$n$(H$_2$CO) ratio has been derived to be $0.01-0.1$
\citep{bacmann03}, where $n$($i$) denotes the number density of
species $i$. In such dense cold cores, where HD is the primary reservoir of deuterium,
deuterated H$_3^+$ is produced via exothermic exchange reactions; e.g.,
\begin{equation}
{\rm H_3^+ + HD \rightarrow H_2D^+ + H_2}. \label{h2dp}
\end{equation}
The high D/H ratio
of H$_3^+$ propagates to other molecules via ion-molecule reactions in the gas phase.
The atomic D/H ratio is also enhanced through the electron recombination of H$_3^+$ and H$_2$D$^+$. The high atomic D/H ratio propagates to icy molecules
through hydrogenation/deuteration of atoms and molecules on grain surfaces.
The ions H$_2$D$^+$ and D$_2$H$^+$ are actually detected at the center of the prestellar core L1544
\citep{caselli03,vastel06}. In such a dense cold region, the D/H ratio of H$_3^+$ is further enhanced by the freeze-out of CO, which is otherwise the major reactant of H$_3^+$ and its isotopologs.
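To illustrate the temperature dependence behind this enrichment, the following back-of-the-envelope Python sketch evaluates a steady-state estimate of the $n({\rm H_2D^+})/n({\rm H_3^+})$ ratio, balancing reaction (\ref{h2dp}) against its backward reaction and destruction by CO and electrons. The rate coefficients, exothermicity, and abundances used are rough illustrative values, not those adopted in detailed network models.
\begin{verbatim}
import numpy as np

# Rough steady-state estimate of n(H2D+)/n(H3+) in cold gas.
# All numbers are illustrative order-of-magnitude values.
k_f = 1.7e-9        # cm^3 s^-1, assumed forward rate of H3+ + HD
dE = 230.0          # K, assumed exothermicity of the exchange reaction
x_HD = 3.0e-5       # n(HD)/n(H2), ~ 2 x elemental D/H
k_CO = 2.0e-9       # cm^3 s^-1, assumed H2D+ + CO rate
k_e = 1.0e-6        # cm^3 s^-1, assumed electron recombination rate

def h2dp_over_h3p(T, x_CO=1.0e-4, x_e=1.0e-8):
    """x_CO and x_e are abundances relative to H2."""
    k_r = k_f * np.exp(-dE / T)          # backward rate (para-H2), detailed balance
    return k_f * x_HD / (k_r + k_CO * x_CO + k_e * x_e)

for T in (10.0, 20.0, 30.0, 50.0):
    print(T, h2dp_over_h3p(T))           # enrichment is suppressed as T rises
print(10.0, h2dp_over_h3p(10.0, x_CO=1.0e-6))   # CO freeze-out boosts it further
\end{verbatim}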
Protoplanetary disks are also partially ionized by X-rays and/or cosmic rays, and the temperature is $\lesssim 20$ K in the outer midplane region, so that
deuterium enrichment can proceed in disks, as well.
In fact, DCN, DCO$^+$, and N$_2$D$^+$ have been detected in several disks \citep[e.g.][]{vandishoeck03, qi08, huang15}. While a stable neutral
species such as DCN can originate in interstellar ice and then be delivered to the disk and desorbed there, molecular ions are apparently
formed in situ, considering their short destruction timescales. \cite{vandishoeck03} used the JCMT telescope to derive a disk-averaged
$n$(DCO$^+$)/$n$(HCO$^+$) ratio of 0.035 in TW Hya, providing evidence of ongoing deuterium enrichment in protoplanetary disks.
Although the deuterium enrichment in the primordial material in the Solar System can originate in interstellar chemistry,
at least partially, it has been debated if and to what extent the molecular D/H ratios are modified in disks \citep{aikawa99,willacy07,cleeves14}.
Spatially resolved observations of deuterated ions can specify the region of active deuteration in protoplanetary disks.
The emission of DCO$^+$ is spatially resolved in the disks around TW Hya \citep{qi08} and HD 163296 \citep{mathews13}.
In both disks, the DCO$^+$ emission shows a ring structure, which is considered to reflect the production of DCO$^+$ by the reaction
\begin{equation}
{\rm H_2D^+ + CO \rightarrow DCO^+ + H_2}. \label{dcop}
\end{equation}
The exchange reaction (\ref{h2dp}) to form H$_2$D$^+$ is exothermic only by $\lesssim 250$ K, and thus its backward reaction becomes active at $T\gtrsim 30$ K.
\cite{mathews13} found that DCO$^+$ emission shows a ring structure of radius $r= 110-160$ au, and argued that the outer radius of the DCO$^+$
ring corresponds to the CO snow line. The ion DCO$^+$ is expected to be abundant in the region with temperature $T\sim 19-21$ K;
at higher temperatures ($T\gtrsim 21$ K), CO and ortho-H$_2$ (see below) would become abundant enough to destroy the precursor molecules, H$_3^+$ and H$_2$D$^+$, while
at $T\lesssim 19$ K, CO abundance would be too low to make DCO$^+$ abundant \citep{mathews13}.
The DCN emission in TW Hya disk, on the other hand, is centrally peaked.
This molecule is expected to form primarily via the reaction of N atoms with CHD, which is formed by the dissociative recombination of CH$_2$D$^+$. Since the exothermicity of the exchange reaction
\begin{equation}
{\rm CH_3^+ + HD \rightarrow CH_2D^+ + H_2} \label{ch2dp}
\end{equation}
is higher than that of reaction (\ref{h2dp}), DCN can be abundant even at $T\gtrsim 30$ K \citep{MBH89,oberg12}.
Recent observations of deuterated species in disks, however, challenge the above scenario. \cite{qi15} observed DCO$^+$ in HD 163296 with a higher spatial resolution,
and found that the inner edge of the DCO$^+$ ring is at 40 AU, which is much closer to the central star than derived by \cite{mathews13}.
\cite{huang17} observed six protoplanetary disks, spatially resolving DCO$^+$, H$^{13}$CO$^+$, H$^{13}$CN, and DCN in most of them.
While the DCO$^+$ emission tends to be spatially more extended than the DCN emission, the relative distributions of
DCO$^+$ and DCN vary among the disks. While the DCN emission is more compact than the DCO$^+$ emission in HD 163296,
the radial intensity profiles of DCO$^+$ and DCN are similar in AS 209, for example. Such variations indicate that multiple paths
to deuteration occur in the disks.
On the theoretical side, deuterium chemistry in protoplanetary disks has long been investigated \citep[e.g.][]{aikawa99, willacy07, cleeves14}.
Recent progress in the study of deuterium fractionation includes the evaluation of the state-to-state rate coefficients of the H$_3^+$ + H$_2$ system \citep{hugo09}
and the re-evaluation of the exothermicity of reaction (\ref{ch2dp}) \citep{roueff13}. Since the ground-state
energy of ortho-H$_2$ (o$\mathchar`-$H$_2$) is higher than that of para-H$_2$ (p$\mathchar`-$H$_2$), the backward reactions of exchange reactions such as (\ref{h2dp}) are less
endothermic with o$\mathchar`-$H$_2$ than with p$\mathchar`-$H$_2$. The deuterium enrichment would thus be less efficient if o$\mathchar`-$H$_2$ is abundant.
Although several groups have presented the deuterium chemistry in disks coupled with ortho-para chemistry,
they concentrated mostly on molecular species such as HDO rather than DCO$^+$ and DCN, or made an assumption that the ortho-para chemistry
is locally thermalized \citep{albertsson14, cleeves14, cleeves16}.
One of the exceptions is the paper by \cite{teague15}; these authors observed HCO$^+$ and DCO$^+$ lines in DM Tau, and calculated molecular evolution in a disk model to be compared with their observations.
While they solved for the spin states of H$_2$ and H$_3^+$, they did not seem to adopt the updated exothermicity for reaction (\ref{ch2dp}).
\cite{roueff13} calculated the exothermicity of reaction (\ref{ch2dp}) for p-H$_2$ to be 654 K, which is higher than
the previous estimate of 370 K \citep{smith82}.
\cite{favre15} adopted this updated energy difference for reaction (\ref{ch2dp}) in their chemical model, and showed that it enhances
the abundance of DCO$^+$ in the warm surface layers of the disk. Their model, however, does not include the ortho-para chemistry,
and implicitly assumes that all H$_2$ is in the para state. The assumption would not be appropriate in the warm layers.
In the present work, we investigate the deuteration paths of HCO$^+$, N$_2$H$^+$, and HCN in protoplanetary disks using an updated
chemical reaction network with deuterium fractionation and ortho-para chemistry. Instead of constructing best-fit models of
observed disks, we investigate the chemistry in template disk models. The effects of elemental C/O ratio, grain size, vertical mixing, and exclusion of cosmic rays are also studied.
The rest of the paper is organized as follows.
Our chemical reaction network and protoplanetary disk models are described in \S 2. Section 3 presents the results of our numerical calculations; i.e.,
abundance and column density distributions of HCO$^+$, N$_2$H$^+$, HCN and their deuterated isotopologs in disk models.
The model results are qualitatively compared with observations in \S 4, and our conclusions are summarized in \S 5.
In appendices, we also present an analysis of the OPRs of H$_2$, H$_3^+$, and H$_2$D$^+$, and analytical formulae of abundances of
H$_2$D$^+$, DCO$^+$ and N$_2$D$^+$ in the cold midplane.
\section{MODELS}
We adopt basically the same disk structure and chemical reaction network as in \cite{aikawa15}. Although basic descriptions of the model and network
are given in this section, more detailed descriptions can be found in \cite{aikawa15}, \cite{furuya13}, and \cite{aikawa06}.
\subsection{Disk Models}
In the present work, we investigate deuterium chemistry in template disk models, rather than constructing a model for a specific object.
The radial distribution of the gas column density in our models is thus rather arbitrary; it is determined so that the mass accretion rate
($10^{-8} M_{\odot}$/yr) is constant at all radii for a constant viscosity parameter $\alpha$ (see \S 3.3). Note that we do not explicitly take into account the
radial accretion of gas and dust in solving the chemical reaction network.
The central star is assumed to be a T Tauri star of mass $M_*$=0.5 $M_{\odot}$, surface temperature $T_*= 4000$ K, and radius $R_*=0.5 R_{\odot}$.
We refer to the UV and X-ray spectrum of TW Hya, and adopt the luminosities of $L_{\rm UV}=10^{31}$ erg s$^{-1}$ and
$L_{\rm X}=10^{30}$ erg s$^{-1}$ \citep{herczeg02, kastner02}\citep[see also][]{nomura05, nomura07}.
The vertical density distribution is set by hydrostatic equilibrium. The temperatures of gas and dust are obtained by solving two-dimensional
radiative transfer and the balance between cooling and heating. The temperature and density structures are solved self-consistently.
The gas and dust temperatures are the same in the midplane, while gas is
warmer than dust at the disk surface, which is basically a dense photon-dominated region.
In our fiducial model, the grains have grown to a maximum size $a_{\rm max}$ of 1 mm, while the minimum size is $a_{\rm min}=0.01$ $\mu$m, with a dust-to-gas mass ratio of 0.01 throughout the disk.
The grain size distribution is assumed to follow the power law $dn(a)/da\propto a^{-3.5}$, where $n(a)$ is the number density of grains with size $a$.
For comparison, we also adopt a disk model with dust appropriate to dark clouds, which we label dark cloud dust. The maximum grain size of the latter model is 10 $\mu$m
\citep{weingartner01}.
The density and temperature distributions of the two disk models are shown in Figure \ref{dist_phys}. The temperature is generally higher in the model with dark cloud dust,
because such dust absorbs stellar radiation more efficiently. While we assume that the disk is static (i.e. no diffusion or accretion),
we also calculate models with vertical mixing \citep{furuya13}, in order to investigate its effect on deuterium chemistry.
\subsection{Chemical Model}
The gas-grain chemical reaction network is based on \cite{garrod06}, but has been updated to include reactions that are effective at high temperatures ($T\gtrsim 100$ K)
\citep{harada10,harada12}. Our network includes up to triply deuterated species, and nuclear spin states of H$_2$, H$_3^+$, and their isotopologs \citep{hugo09,hincelin14, coutens14, furuya15}.
We refer to \cite{roueff13} for the reaction rate coefficients and exothermicities/endothermcities of the exchange reaction (\ref{ch2dp})
and analogous reactions with multi-deuteration and spin states.
Ultraviolet radiation from the central star and the interstellar radiation field causes photodissociation and photoionization in the gas phase; the rate coefficients are calculated by convolving
the dissociation/ionization cross sections of molecules and the UV spectrum at each position in the disk.
Self- and mutual shielding of H$_2$, HD, CO, N$_2$, and C atoms are taken into account \citep{draine96, visser09, wolcott11, li13, kamp00}.
The shielding of D$_2$ is not considered in the present model,
but we confirmed that it does not affect our results, because D$_2$ is much less abundant than HD at the disk surface, where photodissociation is effective.
Photodissociation of icy molecules is also considered in our model \citep[e.g.][]{furuya13}.
The ionization sources in the disk are X-rays, cosmic rays, and the decay of radioactive nuclei. We assume a cosmic-ray ionization rate of $5\times 10^{-17}$ s$^{-1}$
with an attenuation length of 96 g cm$^{-2}$ \citep{umebayashi81}. Considering that the penetration of cosmic rays can be suppressed by stellar winds, we also run a model without
cosmic rays \citep{aikawa99b,cleeves14b}; the disk midplane is then mainly ionized by the decay of radioactive nuclei with a rate of $10^{-18}$ s$^{-1}$.
Layers above the midplane are mainly ionized by stellar X-rays; the ionization rate could reach $10^{-14}$ s$^{-1}$ at the disk surface, for example
\citep[see Figure 1 in][]{aikawa15}.
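For reference, the cosmic-ray and radioactive-decay contributions described above can be written as a one-line function of the overlying gas column, as in the sketch below; it uses one-sided vertical attenuation, omits the position-dependent X-ray term, and the column densities fed to it are placeholders.
\begin{verbatim}
import numpy as np

# Sketch of the non-X-ray ionization rate: attenuated cosmic rays plus a
# constant radioactive-decay contribution.
zeta_CR = 5.0e-17       # s^-1, unattenuated cosmic-ray ionization rate
zeta_RA = 1.0e-18       # s^-1, ionization rate from radioactive decay
Sigma_att = 96.0        # g cm^-2, cosmic-ray attenuation column

def ionization_rate(Sigma):
    """Sigma: gas column density above the point, in g cm^-2."""
    return zeta_CR * np.exp(-Sigma / Sigma_att) + zeta_RA

for Sigma in (1.0, 10.0, 100.0, 300.0):
    print(Sigma, ionization_rate(Sigma))
\end{verbatim}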
We adopt a two-phase model, which consists of a gas phase and an undifferentiated grain ice mantle. The sticking probability is assumed to be unity when a neutral atom or molecule
collides with a grain surface, except for H and D atoms, for which we adopt the temperature-dependent sticking probability of \cite{tielens85}.
Adsorption energies of molecules on grain surfaces are generally taken from \cite{garrod06}.
The adsorption energies of deuterated species are generally set to be the same as those of normal species. One exception is the D atom, the adsorption energy of which is set to be 21 K higher
than that of the H atom, which is assumed to be 600 K \citep{caselli02}. The adsorption energies of CO, N$_2$ and HCN, the abundance of which we present in the following section, are set to be
1150 K, 1000 K, and 3370 K, respectively. The value for HCN is adopted from the Temperature Programmed Desorption (TPD) experiment by \cite{noble13}.
The dependence of molecular abundances on the adsorption energies of CO and N$_2$ are presented in \cite{aikawa15}.
In addition to thermal desorption, we take into account three non-thermal desorption processes: photodesorption, stochastic heating
by cosmic rays, and reactive desorption \citep[e.g.][]{oberg09a,hase93,garrod07}. The efficiency of chemical desorption is set to be
$10^{-4}-10^{-2}$ depending on the reaction, following \cite{garrod07}.
Recent laboratory experiments show that the efficiency depends on the molecular composition of the grain surface \citep{minissale16}. Including such an effect would require a three-phase model,
which discriminates the ice surface from the bulk ice mantle; we postpone this to future work.
We assume that the surface reactions occur via the Langmuir-Hinshelwood mechanism; i.e. the adsorbed species diffuse on grain surfaces via thermal hopping, and react with each other when they meet, before they
desorb. The modified rate of \cite{caselli98} is adopted to ensure that the rate of hydrogenation (deuteration) is not higher than the adsorption rate of H atoms (D atoms)
when the number of such atoms on a grain is small ($\lesssim 1$).
The barrier for thermal hopping ($E_{\rm diff}$) is set to one-half of the adsorption energy ($E_{\rm ads}$). Lower values from 0.3 to 0.4 have also been used in astrochemical models \citep[e.g.][]{ruaud16, penteado17};
the surface reactions become more rapid with a lower ratio of $E_{\rm diff}/E_{\rm ads}$. The formation efficiencies of H$_2$ and its isotopologs on grain surfaces become lower in warmer regions
(e.g. $T\gtrsim$ a few $10$ K) in the disk, since the physisorbed H (and D) atoms are desorbed more promptly. When the temperature is high enough, however, a bare grain surface appears, on which H atoms can
be chemisorbed. At such high temperatures, we assume that the formation rate of H$_2$ is 0.2 times the sticking rate of H atoms onto grain surfaces, following \citet{cazaux04,cazaux10} \citep[see eq. 19 of][]{furuya13}.
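As a rough illustration of the Langmuir-Hinshelwood treatment described above, the sketch below evaluates a two-body surface rate coefficient with $E_{\rm diff}=0.5E_{\rm ads}$ and caps the hopping rate of the light reactant by its accretion rate, in the spirit of the modified rates of \cite{caselli98}; the site density, the number of sites per grain, and the form of the cap are illustrative assumptions rather than the exact prescription of our code.
\begin{verbatim}
import numpy as np

KB = 1.380649e-16   # erg K^-1
AMU = 1.66054e-24   # g
NSITE = 1.0e6       # binding sites per grain (assumed)
NS_CM2 = 1.5e15     # surface density of binding sites [cm^-2] (assumed)

def nu0(E_ads_K, mass_amu):
    """Characteristic vibrational frequency [s^-1] of an adsorbed species."""
    return np.sqrt(2.0 * NS_CM2 * KB * E_ads_K / (np.pi**2 * mass_amu * AMU))

def sweep_rate(E_ads_K, mass_amu, T_dust, f_diff=0.5):
    """Rate [s^-1] at which one adsorbed particle visits all NSITE sites,
    with E_diff = f_diff * E_ads as adopted in the model."""
    return nu0(E_ads_K, mass_amu) * np.exp(-f_diff * E_ads_K / T_dust) / NSITE

def lh_rate(E1, m1, E2, m2, T_dust, n_grain, kappa=1.0, acc_rate=None):
    """Langmuir-Hinshelwood rate coefficient [cm^3 s^-1]; if acc_rate (the
    accretion rate of H or D atoms per grain, s^-1) is given, the sweeping
    rate of the first (light) reactant is capped by it, mimicking the
    spirit of the modified rates."""
    r1 = sweep_rate(E1, m1, T_dust)
    if acc_rate is not None:
        r1 = min(r1, acc_rate)
    r2 = sweep_rate(E2, m2, T_dust)
    return kappa * (r1 + r2) / n_grain

# Example: H + CO on 10 K grains with a grain number density of 1e-4 cm^-3
print(lh_rate(600.0, 1.0, 1150.0, 28.0, 10.0, 1e-4, acc_rate=1e-5))
\end{verbatim}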
The elemental abundance of deuterium is set to be $1.5\times 10^{-5}$ relative to hydrogen \citep{linsky03}.
The initial molecular abundances are determined by considering the molecular evolution from a cloud formation stage
to a collapse phase to form a protostar. \cite{furuya15} first calculated molecular evolution behind the shock front of colliding HI gas. \cite{furuya16} then calculated the subsequent molecular evolution
in the hydrostatic prestellar core and in its collapse phase to form a protostar, as in \cite{aikawa12}. We adopt the fiducial model of \cite{furuya16} for our initial abundances; the shock model is run until the column density of the post shock gas
reaches a visual extinction of 2 mag, and the duration of the hydrostatic prestellar phase is set to be $10^6$ yr. Then the core collapses to form a protostar in $2.5\times 10^5$ yr, and the
molecular evolution in the infalling envelope is calculated until the protostar is $9.3\times 10^4$ years old. While \cite{furuya16} calculated a spatial distribution of molecular abundances in a protostellar core,
our initial abundances for the present work are adopted from the infalling fluid parcel which reaches a radius
of 60 au in the core. The initial abundances of major species are presented in Table \ref{initial}; since the temperature ($\sim$ 144 K) and density ($\sim 5\times 10^7$ cm$^{-3}$) are high in the infalling fluid parcel at 60 au,
ices and ions are not abundant.
While the OPR of H$_2$ is initially $3.2 \times 10^{-3}$, it reaches thermal equilibrium or steady state in a relatively short timescale (see Appendix A).
Our results thus do not significantly depend on the initial OPR of H$_2$.
\section{RESULTS}
In the following, we consider the spatial distributions of HCO$^+$, DCO$^+$, N$_2$H$^+$, N$_2$D$^+$, HCN, and DCN, and describe the major formation pathways of the deuterated species.
We present the results at $3\times 10^5$ yr, which is earlier than a typical age of T Tauri stars ($\sim 1$ Myr), because we do not include radial
accretion and/or mixing in the present model. The abundances of HCO$^+$ and N$_2$H$^+$ (and their isotopologs) are strongly coupled with
CO and N$_2$, which are gradually converted to less volatile species such as CO$_2$ and NH$_3$, and depleted from the gas phase.
The timescale of this conversion in the gas phase is, however, $\sim 5\times 10^5 \left(\frac{\zeta_{\rm He}}{2.5\times 10^{-17} {\rm s}^{-1}}\right)^{-1}$ yr, where $\zeta_{\rm He}$ is the ionization rate of He atom.
It is comparable to or longer than the accretion timescale of the disk at $r\lesssim 100$ au \citep{furuya14}\citep[see also][]{furuya13}.
In other words, gaseous CO and N$_2$ can be supplied from the outer radius by viscous accretion and the radial drift of dust grains with ice mantles.
The abundances of CO and N$_2$ would thus be underestimated if we chose 1 Myr in our static models. The temporal variation of the abundance of each molecular species is also described in the following.
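As a back-of-the-envelope check of this argument, the short sketch below evaluates the scaling quoted above for a few illustrative values of $\zeta_{\rm He}$.
\begin{verbatim}
def t_conv_yr(zeta_he):
    """Gas-phase CO/N2 conversion timescale [yr], using the inverse scaling
    with the He ionization rate quoted in the text."""
    return 5.0e5 * (zeta_he / 2.5e-17) ** (-1)

for zeta in (2.5e-17, 5.0e-18, 1.0e-18):
    print(f"zeta_He = {zeta:.1e} s^-1  ->  t_conv ~ {t_conv_yr(zeta):.1e} yr")
\end{verbatim}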
\subsection{Fiducial Model}
\subsubsection{HCO$^+$ and DCO$^+$}
Let us first consider HCO$^+$ and DCO$^{+}$. The former is mainly formed by the reaction CO + H$_3^+$. As a reference, distributions of gaseous CO abundance and its column density are shown in
Figure \ref{dist_1mm_1} (a) and (b), while those of HCO$^+$ and DCO$^+$ are shown in Figure \ref{dist_1mm_1}, panels (c), (d), and (f).
The horizontal axis depicts the radius ($r$) of the disk. In panels (a), (c), (d), and (e), the vertical axis is the distance from the midplane $z$, normalized by the radius, i.e. $z/r$, while the vertical axis
is the column density for panels (b) and (f). In the panels showing the abundance distributions (a, c, and d), the dotted lines depict the positions where the X-ray ionization rate is equal to the cosmic-ray
ionization rate ($5\times 10^{-17}$ s$^{-1}$) (the upper dotted line) and to the ionization rate by the decay of radioactive nuclei ($1\times 10^{-18}$ s$^{-1}$) (the lower dotted line). The long dashed line
indicates the CO snow surface, i.e. the positions where the CO abundances in the gas phase and in the ice mantle should be equal,
if the abundances are determined simply by adsorption onto grains and thermal desorption. In the layer below the long dashed line, CO is expected to be mainly in the ice mantle, while
elsewhere it is expected to be abundant in the gas phase. The snow line is defined as the radius at which the snow surface crosses the disk midplane.
In our model, CO is gradually converted to less volatile molecules such as CO$_2$ ice and CH$_3$OH ice and thus depleted even in the regions inside the CO snow line ($\sim 20$ au) and in the layer above the CO snow surface \citep{furuya14}.
Although we present the results at $3\times 10^5$ yr, which is shorter than the timescale of chemical conversion in the
gas phase, chemical conversion on grain surfaces is more efficient than in the gas phase.
In the midplane at 40 au $\lesssim r \lesssim 100$ au, CO (both in the gas phase and in ice mantle), and thus HCO$^+$, are deficient ($n(i)/n({\rm H})\lesssim 10^{-11}$), and such regions
with low CO and HCO$^+$ abundances extend inwards over time.
For example, the gaseous CO abundance is as low as $5\times 10^{-9}$ and the HCO$^+$ abundance is $1\times 10^{-12}$ in the midplane at $r=10$ au at $t=1$ Myr.
It should be noted that ALMA observations recently revealed such depletion of gaseous CO inside the CO snow line in TW Hya \citep{nomura16, schwarz16} \citep[see also][]{zhang17}.
In the midplane at inner radii ($r\lesssim 10$ au), the HCO$^+$ abundance is limited by the low ionization degree, which occurs at high density.
At $r\lesssim 70$ au, molecular ions such as H$_3^+$ and HCO$^+$ are deficient at $z/r\sim0.15$, because
photoionization makes atomic ions (e.g. S$^+$) the dominant positive charge carriers in such upper layers; the resulting abundant electrons then quickly destroy molecular ions via recombination.
Figure \ref{dist_1mm_1} (e) depicts the major formation pathways of DCO$^+$ at each position in the disk. Comparing panels (d) and (e), we can see that
DCO$^+$ is abundant in the layer below the CO snow surface (i.e. at the lower $z$) due to reaction (\ref{dcop}), despite the relatively low CO abundance in the gas phase.
In the midplane, the abundance of deuterated H$_3^+$ increases towards larger radii, where the temperature is lower and the chemical conversion of HD is less efficient (see Appendix B).
The chemical conversion of CO is also less efficient at outer radii ($\gtrsim 200$ au). The DCO$^+$ abundance thus increases
outwards at $r\gtrsim 100$ au. In the upper layers ($z/r\gtrsim 0.2$), on the other hand, DCO$^+$ is mainly formed via
\begin{equation}
{\rm HCO}^+ + {\rm D} \rightarrow {\rm DCO}^+ + {\rm H} + 796 {\rm K}. \label{hcopd}
\end{equation}
Since the reaction is exothermic by 796 K \citep{adams85}, the backward reaction is less efficient than the forward reaction even at warm temperatures.
These results are basically the same as described in \cite{oberg15}. One update is that, following \citet{roueff13},
we set the exothermicities of reaction (\ref{ch2dp}) and its multi-deuterated counterparts to values higher than those previously determined \citep{smith82} and
used in \cite{oberg15}. For example, the exothermicities of CH$_3^+$ + p-H$_2$ and CH$_3^+$ + o-H$_2$ are set to 660 K and 489 K, respectively.
Although we do not distinguish the nuclear spin state of CH$_3^+$, this omission does not affect our results, since the difference in exothermicities among the reactions with ortho- and
para-CH$_3^+$ is only 6 K \citep{roueff13}.
Although DCO$^+$ is also formed via the reaction between CO and CH$_4$D$^+$, where the ion is produced via reaction (\ref{ch2dp}) and subsequent radiative association with H$_2$,
this path is the major formation path of DCO$^+$ only in limited regions, i.e. the green regions in panel (e).
This is in contrast to the model of \cite{favre15}, in which reaction (\ref{ch2dp}) significantly contributes to the DCO$^+$ formation in warm surface layers and thus enhances the DCO$^+$ column density.
Although the direct comparison between our model results and those of \cite{favre15}
is not straightforward due to the differences in disk physical structure, knowledge of the spin state of H$_2$ would be a key to differentiate between the two results.
While \cite{favre15} assumed all H$_2$ in the para form, and used a constant exothermicity of 654 K for reaction (\ref{ch2dp}), we found that the OPR of H$_2$ is almost thermal
at each position in the disk (see Appendix A); the effective exothermicity of reaction (\ref{ch2dp}) thus decreases as o-H$_2$ increases with temperature.
Figure \ref{rate} shows the ratio of the backward to forward reaction rate coefficients, based on \cite{roueff13}.
For the solid line, we assumed that the OPR of H$_2$ is thermal. The dashed line, on the other hand, shows the ratio when all H$_2$ is in the para state, so that the exothermicity of the forward
reaction is constant at 654 K. The dotted line depicts the ratio of the former (i.e. the ratio shown with the solid line) to the latter (the dashed line); it reaches a maximum at the temperature
of $\sim 30$ K, which seems to correspond to the layer with abundant DCO$^+$ claimed by \cite{favre15}.
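The sense of this behavior can be reproduced with the simple estimate below, which weights a Boltzmann factor for each H$_2$ spin channel by the thermal OPR; interpreting the 660 K and 489 K values as effective endothermicities of the backward channels with para- and ortho-H$_2$, and neglecting the state-specific factors contained in the \cite{roueff13} rate coefficients, are simplifying assumptions made only for illustration, so the estimate captures the magnitude of the enhancement but not the detailed shape of the curves in Figure \ref{rate}.
\begin{verbatim}
import numpy as np

B_H2 = 85.3   # rotational constant of H2 in K

def opr_thermal(T, jmax=10):
    """Thermal ortho-to-para ratio of H2 (nuclear spin weights 3 and 1)."""
    J = np.arange(jmax + 1)
    g = (2 * J + 1) * np.where(J % 2 == 1, 3, 1)
    pop = g * np.exp(-B_H2 * J * (J + 1) / T)
    return pop[1::2].sum() / pop[0::2].sum()

def kb_over_kf(T, opr):
    """Illustrative backward/forward ratio for CH3+ + HD <-> CH2D+ + H2,
    weighting each spin channel by a Boltzmann factor with the assumed
    endothermicities (660 K for para-H2, 489 K for ortho-H2)."""
    f_para = 1.0 / (1.0 + opr)
    return f_para * np.exp(-660.0 / T) + (1.0 - f_para) * np.exp(-489.0 / T)

for T in (20.0, 30.0, 50.0):
    thermal = kb_over_kf(T, opr_thermal(T))
    para_only = kb_over_kf(T, 0.0)   # all H2 in para, as in Favre et al. (2015)
    # roughly an order-of-magnitude enhancement over the para-only case
    print(T, thermal / para_only)
\end{verbatim}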
It should be noted, however, that the contributions of reaction (\ref{ch2dp}) become more significant when the elemental C/O ratio is higher than unity (\S 3.2).
Figure \ref{dist_1mm_1} (f) shows the radial distributions of column densities of HCO$^+$ (dashed lines) and DCO$^+$ (solid lines).
The blue, green, and red lines depict the values at $t=1\times 10^5$ yr, $3\times 10^5$ yr (the fiducial value), and $9.3 \times 10^5$ yr, respectively.
Both HCO$^+$ and DCO$^+$ increase inwards from $r\sim 40$ au, which is slightly outside the CO snow line ($\sim 20$ au).
In the midplane inside $r \sim 40$ au, thermal desorption of CO becomes non-negligible, which makes HCO$^+$ the dominant charge carrier,
while H$_3^+$ and its isotopologs dominate at larger radii \citep[see eq. (23) in][]{aikawa15}.
Inside a radius of $\sim 4$ au, the DCO$^+$ abundance is low in the midplane; the warm temperature lowers the atomic D/H ratio, and the D atom abundance is also lowered
by reaction with HS. Outside 50 au, the column density of DCO$^+$ increases outwards due to its enhancement in the outer midplane, while the HCO$^+$ column density is relatively flat.
Both column densities gradually decrease with time, as CO is converted to less volatile species. The decline of the DCO$^+$ column density is more significant than that of HCO$^+$,
since DCO$^+$ has its abundance peak in the midplane at outer radii, where CO conversion is efficient.
\subsubsection{N$_2$H$^+$ and N$_2$D$^+$}
Figure \ref{dist_1mm_2} shows distributions of molecular abundances, column densities, and major formation pathways of deuterium isotopologs as in Figure \ref{dist_1mm_1}, but for N$_2$,
N$_2$H$^+$, and N$_2$D$^+$.
N$_2$H$^+$ is formed by N$_2$ + H$_3^+$, and is destroyed by recombination with electrons and proton transfer to CO. Thus
its abundance depends on N$_2$, CO, H$_3^+$ and electrons \citep{aikawa15}; the abundance has a peak
around the CO snow surface, where the abundance ratio of CO to electrons is $\sim 10^3$, and in the layer above the CO snow surface,
where gaseous CO is converted to less volatile species to be frozen onto grains.
In the midplane, N$_2$ is gradually converted to less volatile species such as NH$_3$ ice, and thus the region of low N$_2$H$^+$ abundance basically expands with time,
although N$_2$H$^+$ becomes abundant at $t=9.3\times 10^5$ yr around the radius of several au, where gaseous CO is reduced by the conversion effect.
The formation pathways of N$_2$D$^+$ are similar to those of DCO$^+$; N$_2$D$^+$ in the midplane is formed by the reaction of N$_2$ with deuterated H$_3^+$,
while N$_2$D$^+$ is formed by
\begin{equation}
{\rm N}_2{\rm H}^+ + {\rm D} \rightarrow {\rm N}_2{\rm D}^+ + {\rm H} + 550 {\rm K} \label{n2hpd}
\end{equation}
above the CO snow surface. In the upper layers, the ratio of N$_2$D$^+$/N$_2$H$^+$ is lower than that of DCO$^+$/HCO$^+$, because the exothermicity of reaction (\ref{n2hpd}) (550 K) is lower than that of reaction (\ref{hcopd}).
The column densities of both N$_2$H$^+$ and N$_2$D$^+$ have a peak at 20 au $\lesssim r \lesssim$ 50 au. The inner boundary corresponds to the snow line of CO, the main reactant with N$_2$H$^+$, while the outer boundary
is slightly outside the N$_2$ snow line, for a reason similar to that discussed for HCO$^+$. Outside $r\sim 150$ au, the N$_2$D$^+$ column density increases outwards, since it is abundant in the outer midplane.
\subsubsection{HCN and DCN}
Figure \ref{dist_1mm_3} shows the distributions of HCN and DCN, the formation pathways of DCN, and the column densities of HCN and DCN. Since the desorption energy of HCN is high (3370 K), the abundance
of gaseous HCN is $\lesssim 10^{-10}$ outside a radius of a few au, where the temperature is $\lesssim 100$ K. HCN in the ice mantle, on the other hand, is as abundant as $10^{-7}-10^{-6}$ in the midplane.
Towards the outer ($r\gtrsim 30$ au) midplane, HCN forms by the dissociative recombination of H$_2$CN$^+$, which forms by H$_3^+$ + CN $\rightarrow$ HCN$^+$ + H$_2$ followed by
HCN$^+$ + H$_2$ $\rightarrow$ H$_2$CN$^+$ + H. In the upper layers and inner radii, it forms by H + H$_2$CN and N + HCO. The precursor molecules
H$_2$CN and HCO are produced by CH$_3$ + N and CH$_2$ + O, respectively. HCN is also produced by reactive desorption after the grain-surface reaction of H + CN.
The major formation paths of DCN are mostly the deuterated version of the above reactions. One exception is
\begin{equation}
{\rm HCN + D \rightarrow DCN + H,} \label{schilke}
\end{equation}
which is effective at the disk surface, depicted by the green region in Figure \ref{dist_1mm_3} (c). In our model, the activation barrier of both the forward and backward reaction of (\ref{schilke})
is set to be 500 K, following \cite{schilke92}.
A more recent quantum chemical calculation (Kayanuma, private communication) evaluates this barrier to be 3271 K, but also predicts that the effect of the barrier would be significantly lowered by tunneling.
Our result does not change, even if we assume a higher barrier, e.g. 1800 K, for reaction (\ref{schilke}).
Another exception is C$_3$H$_4$D$^+$ + N, which dominates in the midplane at $r\sim 6$ au, i.e. the blue region.
In this warm midplane region (40 K $< T < $ 50 K), hydrocarbons are also deuterated via
\begin{equation}
{\rm C_2H_2^+ + HD \rightarrow C_2HD^+ + H_2,} \label{c2h2p}
\end{equation}
which is exothermic by 550 K.
The major exothermic exchange reaction that initiates the enhancement of the DCN/HCN ratio is thus reaction (\ref{h2dp}) in the outer midplane, while reactions (\ref{ch2dp}) and (\ref{c2h2p}) dominate
in the upper layers and in inner radii.
It should be noted that HCND$^+$ and DCNH$^+$ are not distinguished in our reaction network, which means that the network of DCN and DNC is partially mixed in regions where the recombination
of HDCN$^+$ is their major formation path.
Due to the contribution from the outer midplane, the column density of DCN increases outwards at $r\gtrsim 20$ au.
Inside a radius of a few au, both the HCN and DCN column densities are high, since they are relatively abundant in the $z/r\sim 0.06$ layer and around the midplane ($z/r \sim 0$). In the upper of these two layers, large hydrocarbons in ice mantles such as H$_5$C$_3$N
serve as reservoirs of carbon and nitrogen, while in the high-density midplane, the species that react with and destroy C-bearing molecules, such as O atoms, are deficient.
\subsection{Elemental Abundances}
Observations in recent years suggest that the surface and molecular layers are deficient in elemental carbon and oxygen, especially in relatively cold disks.
\cite{hogerheijde11} detected ground-state rotational emission lines of H$_2$O in the disk around TW Hya using the Herschel Space Observatory and found that the emission is significantly weaker than expected
from disk models. \cite{du17} obtained and analyzed the water emission of 13 protoplanetary disks. They compared the observational data with disk models to show that the abundance of gas-phase oxygen needs to be
reduced by a factor of at least $\sim 100$ to be consistent with the observational upper limits and positive detections, if the dust-to-gas mass ratio is 0.01.
\cite{meijerink09} compared radiative transfer models with the mid-infrared spectrum of H$_2$O taken by {\it Spitzer} and found that water vapor is significantly depleted
in the disk surface beyond a radius of $\sim 1$ au. They proposed that water vapor is depleted by the vertical cold finger effect; turbulent diffusion transports the water vapor
from the disk surface to the layer below the snow surface, where water can freeze out and be transported to the midplane via dust settling.
\cite{favre13} found that the gaseous CO abundance is low ($10^{-6}-10^{-5}$) even in the warm molecular layer (above the CO snow surface) in the disk of TW Hya by comparing C$^{18}$O ($J=2-1$) and HD ($J=1-0$)
emission line intensities. \cite{kama16} combined various emission lines including CI and OI towards TW Hya and HD100546; carbon and oxygen were found to be strongly depleted from the gas phase in the disk of TW Hya, while the depletion
is moderate in the disk of HD100546. \cite{kama16} also presented an analytical model of the vertical cold finger effect to show that a combination of turbulent mixing and settling of large dust grains depletes C- and O-bearing volatiles
from the surface and molecular layers of disks by locking them in the ices in the midplane \citep[see also][]{krijt16, xu17}.
Since H$_2$O is less volatile than CO, such a depletion mechanism would be more efficient for oxygen than carbon, which could enhance the C/O ratio in the surface and molecular layers.
\cite{kama16} found the C/O ratio to be higher than unity in the disk of TW Hya. \cite{bergin16} observed C$_2$H emission, which is bright in a ring region, in TW Hya and DM Tau. In order to reproduce the bright
C$_2$H emission in disk models, a high C/O ratio ($>1$) is required together with a strong UV field.
In order to investigate the effect of the depletion of elemental carbon and oxygen on deuterium chemistry, we performed a calculation with our fiducial disk model as in \S 3.1, but with a modified set of initial abundances.
We set the initial abundance of H$_2$O to zero and reduce the CO abundance by an order of magnitude, while the abundances of other species are the same as in our fiducial model.
Even with this reduced abundance, CO is still the major carbon carrier, although CH$_4$ is the most abundant among the other C-bearing species (see Table 1). The major oxygen carriers in the initial condition are
CO, H$_2$CO, and CO$_2$. The elemental C/O ratio is 1.43.
Figure \ref{dist_COdep_1} shows the distributions of CO, HCO$^+$ and DCO$^+$ abundances (panels a, c, and d), their column densities (panels b and f) and major formation pathways of DCO$^+$ (panel e).
While the spatial distributions of the molecular abundances and column densities of HCO$^+$ and DCO$^+$ are basically similar to the fiducial model (Figure \ref{dist_1mm_1}), there are several notable differences.
Firstly, CO depletion via chemical conversion to CO$_2$ ice is less effective above the CO snow surface in the outer radius ($r\gtrsim 20$ au) than in the fiducial model, which is natural considering the reduced
oxygen abundance. HCO$^+$ and DCO$^+$ thus become more abundant in the layer above the CO snow surface, in which DCO$^+$ is mainly formed by reaction (\ref{ch2dp}).
The high C/O ratio enhances the abundance of hydrocarbons and thus the importance of reaction (\ref{ch2dp}).
In spite of the reduced CO abundance, the column densities of HCO$^+$ and
DCO$^+$ are similar to those in the fiducial model, except that the depression of the DCO$^+$ column density at $r\sim 50$ au is more modest and the HCO$^+$ column density is reduced at the central region ($r \lesssim$
a few au), in which HCO$^+$ exists mostly in the surface layer.
Figure \ref{dist_COdep_2} (a-f) shows the distributions of N$_2$, N$_2$H$^+$, and N$_2$D$^+$, their column densities, and the major formation pathways of N$_2$D$^+$. The distributions of N$_2$ and N$_2$D$^+$ are
similar to those in our fiducial model (Figure \ref{dist_1mm_2}), while N$_2$H$^+$ is reduced in the layer above the CO snow surface, in which the gaseous CO abundance exceeds that in the fiducial model.
Figure \ref{dist_COdep_2} (g-j) shows the distributions of HCN and DCN, their column densities, and the major formation pathways of DCN. Their abundances and column densities are significantly higher than in
the fiducial model, since the high C/O ratio enhances the abundance of hydrocarbons, which react with N atoms to form HCN.
We also calculated a model in which the initial abundance of H$_2$O is totally depleted but CO is not (the C/O ratio then becomes 1.10); the results are quite similar to the model described above,
except that CO is more abundant inside the CO snow line, which lowers the N$_2$H$^+$ abundance.
\subsection{Dust Grain Sizes}
Figure \ref{dist_ism_1} shows distributions of gaseous CO, HCO$^+$, and DCO$^+$ (panels a, c and d), the main formation paths of DCO$^+$ (panel e), and their column densities (panels b and f)
in the model with dark cloud dust. The initial abundances are the same as in our fiducial model.
Since the total surface area of grains is larger than in the fiducial model, the freeze-out of CO
and its subsequent conversion to other molecules are more efficient in the dark cloud dust model than in the mm-sized grain model, especially
in the midplane regions. In the layer above the midplane, on the other hand, UV radiation is more efficiently attenuated, so that the molecular layer extends to larger $z$ than in Figure \ref{dist_1mm_1}.
At early times (e.g. $t=1\times 10^5$ yr), HCO$^+$ is abundant in the layer above the CO snow surface, and in the midplane inside the CO snow line. Since the contribution from the upper layers is
significant, the radial distribution of the HCO$^+$ column density is rather flat; although it slightly increases inwards around the CO snow line ($r\sim 100$ au), the increment is not significant compared with
that in Figure \ref{dist_1mm_1} at $r\sim 40$ au. HCO$^+$ in the midplane decreases with time, as CO is converted to less volatile species. Its column density, however,
varies by less than a factor of three, due to its constantly high abundance in the upper ($z/r\gtrsim 0.3$) layers.
The DCO$^+$ ion is mainly formed by reaction (\ref{hcopd}), since CO is depleted in the midplane, where reaction (\ref{dcop}) would otherwise be efficient. At early times, D atoms are abundant
in the upper ($z/r\gtrsim 0.2$) warm layers, where the reformation of HD is inefficient. DCO$^+$ at that stage is thus abundant in the upper layers, and its column density distribution is relatively flat
at $r\gtrsim 30$ au, including the radius around the CO snow line.
Then DCO$^+$ decreases as D atoms are incorporated into hydrocarbons, and CO inside the snow line is converted to less volatile species. Its column density thus decreases significantly with time.
Even at $9.3 \times 10^5$ yr, DCO$^+$ in the midplane does not significantly contribute to its column density except around a radius of a few tens of au, where the column density has a sharp peak.
This is in contrast to the model with mm dust grains, in which DCO$^+$ in the midplane mostly determines its column density distribution.
In order to check the significance of the updated exothermicity of reaction (\ref{ch2dp}), we also ran a model in which its exothermicity is set to 370 K.
Compared with the model with this old value, the DCO$^+$ abundance is enhanced at $0.3 \lesssim z/r \lesssim 0.4$ at a radius of several tens of au in the present model, although
reaction (\ref{hcopd}) is the dominant formation path for DCO$^+$ there.
Figure \ref{dist_ism_2} shows the distributions of N$_2$, N$_2$H$^+$, HCN and their deuterated isotopologs, their column densities, and the major formation pathways of the deuterated isotopologs in the
model with dark cloud dust.
N$_2$H$^+$ is abundant in the upper layers, where H$_3^+$ is the dominant ion, and in the midplane region between the snow lines of CO ($\sim 120$ au) and N$_2$ ($\sim 230$ au). Its abundance in the midplane, however,
decreases as N$_2$ is converted to NH$_3$ ice in a few $\times$ $10^5$ yr \citep{furuya14}. The distribution of N$_2$D$^+$ is similar to that of N$_2$H$^+$, but its abundance in the upper ($z/r\gtrsim 0.3$) layer is
lower than the maximum abundance in the midplane. The column density of N$_2$D$^+$, therefore, decreases significantly as N$_2$ is converted to NH$_3$ ice in the midplane.
The major deuteration path is via H$_2$D$^+$ in the midplane, and via D atoms (reaction \ref{n2hpd}) in the upper layers.
Outside a radius of $r\sim 10$ au, gaseous HCN and DCN are distributed mostly in the upper ($z/r\gtrsim 0.2$) layers, where they form
via the recombination of H$_2$CN$^+$ (HDCN$^+$) and reaction H$_2$CN (HDCN) + H.
The radial distributions of their column densities gradually increase outwards at $r\gtrsim 10$ au. At the innermost radii ($r\lesssim 3$ au), on the other hand,
the temperature is high enough ($\gtrsim 100$ K) to desorb HCN and DCN from ice mantles.
We also calculated molecular abundances in the disk model with the maximum grain (pebble) size of 10 cm, and found that the distributions of gaseous molecular abundances are qualitatively similar to those in our fiducial model (i.e. the model with mm-sized dust). Some notable differences are as follows. Firstly, the midplane temperature is slightly higher in the model with cm dust.
Secondly, CO is converted to CO$_2$ ice more efficiently in the model with cm dust inside the CO snow line (around a radius of a few tens of au), because the OH radical, which reacts with CO
to produce CO$_2$, is produced from H$_2$O ice due to the higher UV flux.
Thirdly, gaseous HCN and DCN are more abundant in the model with cm dust outside a radius of a few au; a deeper penetration of UV and slightly warmer temperature enhance their precursors,
N atoms and hydrocarbons, in the gas phase. Inside their snow line ($\lesssim 2$ au), on the other hand, HCN and DCN are less abundant in the model with cm dust,
since more refractory carbon chains accumulate in the ice mantle.
\subsection{Turbulent Mixing}
Protoplanetary disks are considered to be turbulent, most probably due to magneto-rotational instability \citep{balbus91}. The direct measurement of non-thermal velocity dispersion $v$
has been one of the major challenges in radio observations of disks. The dispersion is basically subsonic with a Mach number $\mathcal{M}=v/c_s \sim 0.2-0.4$ \citep{teague16}, where
$c_s$ is the sound speed. While a very low velocity
dispersion $\mathcal{M}<0.03$ is derived in HD163296 \citep{flaherty15}, \cite{teague16} took into account the uncertainties in flux calibration to derive an upper limit of $\mathcal{M}\sim0.16$.
Since the disk has vertical temperature gradients, and since the mixing timescale in the vertical direction is shorter than that in the radial direction \citep[e.g.][]{aikawa96}, the vertical mixing could alter the molecular D/H
ratios \citep{furuya13, albertsson14}. In this subsection, we investigate the effect of vertical turbulent mixing on chemistry including the D/H ratios.
The diffusion coefficient is of the same order as the kinematic viscosity coefficient, $\alpha c_s H$, where $H$ is the scale height of the disk.
Although there could be a slight difference between the two values \citep[e.g.][]{johansen05}, we assume that the values of the two coefficients are the same in the present work.
The non-dimensional parameter $\alpha$ is equal to $(v/c_s)(l/H)\approx(v/c_s)^2$, where $l$ is the size of the turbulent eddy. The value of $\alpha$ is thus estimated to be $\lesssim 10^{-2}$
in protoplanetary disks. Figure \ref{diff_neutral} and Figure \ref{diff_ion} show distributions of neutral and ionic species, respectively, in models with a diffusion coefficient $\alpha= 10^{-3}$ (top row) and $10^{-2}$ (middle row).
The bottom panels show the
molecular column densities for the model without diffusion (solid lines) and with a diffusion coefficient of $\alpha=10^{-3}$ (dashed) and $10^{-2}$ (dotted).
CO and N$_2$ are also shown in Figure \ref{diff_neutral} in addition to HCN and DCN, since DCO$^+$ and N$_2$D$^+$ are chemically coupled to these precursor species.
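To connect the non-dimensional $\alpha$ introduced above to a physical mixing timescale, the sketch below evaluates $D=\alpha c_s H$ and the corresponding vertical mixing time $H^2/D=1/(\alpha\Omega_{\rm K})$; the stellar mass and mean molecular weight adopted here are illustrative assumptions and not necessarily the parameters of our disk model.
\begin{verbatim}
import numpy as np

KB, MH = 1.380649e-16, 1.6726e-24                          # erg/K, g
G, MSUN, AU, YR = 6.674e-8, 1.989e33, 1.496e13, 3.156e7    # cgs units

def diffusion_coefficient(alpha, T, r_au, mstar_msun=0.5, mu=2.3):
    """D = alpha * c_s * H [cm^2 s^-1], with H = c_s / Omega_K."""
    cs = np.sqrt(KB * T / (mu * MH))
    omega_k = np.sqrt(G * mstar_msun * MSUN / (r_au * AU) ** 3)
    return alpha * cs * (cs / omega_k)

def mixing_time_yr(alpha, r_au, mstar_msun=0.5):
    """Vertical mixing timescale H^2 / D = 1 / (alpha * Omega_K), in years."""
    omega_k = np.sqrt(G * mstar_msun * MSUN / (r_au * AU) ** 3)
    return 1.0 / (alpha * omega_k) / YR

# Example: alpha = 1e-3 at r = 50 au and T = 25 K gives D ~ 2e15 cm^2 s^-1
# and a vertical mixing time of order 1e5 yr, comparable to the chemical
# timescales discussed above.
print(diffusion_coefficient(1e-3, 25.0, 50.0), mixing_time_yr(1e-3, 50.0))
\end{verbatim}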
The effect of turbulence is twofold. First, it transports ices to the disk surface, where the ices are thermally desorbed and photodissociated. Second, it transports H atoms and
radicals from the disk surface to the midplane, where they contribute to grain-surface reactions. In the static model, CO is efficiently converted to CO$_2$ via
the grain-surface reaction of CO + OH in the layer above the CO snow surface (\S 3.1). In the model with diffusion, on the other hand, CO is more abundant in this layer,
since CO ice in the midplane is transported to this layer and sublimated, while CO$_2$ ice is transported to the disk surface to be photodissociated.
Thus the column density of gaseous CO is slightly higher in the models with diffusion at $r\gtrsim 60$ au.
In the midplane at 10 au $\lesssim r \lesssim 50$ au, on the other hand, CO is more efficiently converted
to CO$_2$ ($\alpha=10^{-3}$) and CH$_3$OH ($\alpha=10^{-2}$) in the models with diffusion, due to the enhanced abundances of OH and H atoms.
Inside a radius of $\sim 40$ au, HCO$^+$ is mostly on the disk surface, and is increased by the diffusion of CO in the turbulent disk.
The major nitrogen carriers are N$_2$ and NH$_3$ in our models.
In the models with diffusion, the N$_2$ column density is reduced at $r\gtrsim 50$ au, while it is enhanced at smaller radii, compared with the model without diffusion.
In the cold outer midplane region, NH$_3$ is more
efficiently formed via H atoms coming from the disk surface, while in the inner warm regions, midplane temperatures are too high to (re)form NH$_3$ via grain-surface hydrogenation, and NH$_3$ is transported to the
disk surface to be photodissociated \citep{furuya14}. The high abundance of N$_2$ in the midplane enhances the abundance of N$_2$H$^+$ at 10 au $\lesssim r \lesssim 50$ au.
Diffusion of CO ice from the midplane and of H atoms from the disk surface enhance the abundances of hydrocarbons around and below the CO snow surface, and thus
HCN and DCN, as well. Inside a radius of a few au, HCN and DCN abundances in the midplane are significantly reduced by the vertical mixing.
Turbulence transports the large hydrocarbons, which represent the major carbon reservoir near the midplane of the static model, to the disk surface, and O atoms to the midplane, which destroy C-bearing species.
The vertical diffusion also affects the deuterium chemistry. First, it makes the depletion of HD in the midplane and its conversion to HDO and NH$_2$D ices (see Appendix B) less efficient via the transport of the deuterated ices to the disk
surface, where they can be dissociated. Thus in the model with mixing, HD and D atoms are more abundant in the midplane at $r\gtrsim 40$ au.
Secondly, D atoms are transported towards the midplane and enhance deuteration via reactions such as (\ref{hcopd}).
\subsection{Cosmic-ray Ionization}
So far we have assumed that cosmic rays provide a minimum ionization rate of $5\times 10^{-17}$ s$^{-1}$ in the midplane, where X-ray penetration is significantly attenuated.
Cosmic rays, however, may be excluded by stellar winds with magnetic fields \citep[e.g.][]{umebayashi81,cleeves13a}.
In order to check the effect of the exclusion of cosmic rays, we investigated the model without cosmic-ray ionization.
Here, the model disk is ionized by stellar X-rays, which dominate in and above the molecular layer, and the decay of radioactive nuclei, which sets a minimum ionization rate of $1\times 10^{-18}$ s$^{-1}$
in the midplane (below the lower dotted line in Figure \ref{cr18}), given
the $^{26}$Al abundance derived from the analysis of meteorites for the formation stage of the Solar System \citep{umebayashi09}. Although the ionization rate could be smaller depending on the abundances of
radioactive nuclei and the surface density of the disk \citep{umebayashi09, cleeves13b}, the effect
of a low ionization rate is already apparent in the model presented here, and it is straightforward to extrapolate the results to an even lower ionization rate, at least qualitatively.
Figure \ref{cr18} shows the distributions and column densities of HCO$^+$, N$_2$H$^+$, HCN and their deuterated isotopologs at $t=3\times 10^5$ yr. The solid lines depict the column density in the model without cosmic-ray ionization,
while the dashed lines depict our fiducial model. As expected, the abundances of molecular ions are suppressed by the low ionization rate.
It should be noted that the dependence of
molecular ion column densities on cosmic-ray ionization rate would vary among disk models. In the model with dark cloud dust, the midplane abundances of these molecular ions are lower due to the more efficient freeze-out
of C-bearing and N-bearing molecules, and thus the X-ray dominated layer contributes more to the column density than in the model with mm-sized grains. Thus the decline of molecular ion column densities due to the exclusion
of cosmic rays would be less significant in the disks with dark cloud dust.
We found that the column density of HCO$^+$ in the model without cosmic-ray ionization, for example, is similar to that in Figure \ref{dist_ism_1} in the dark cloud dust model. The column density of DCO$^+$, on the other hand,
is reduced by about an order of magnitude at 10 au $\lesssim r \lesssim $ 100 au compared with Figure \ref{dist_ism_1}.
Comparing the right column in Figure \ref{cr18} with that in Figure \ref{dist_1mm_3}, we can see that both HCN and DCN are reduced in the outer ($r\gtrsim 10$ au) midplane
in the model without cosmic-ray ionization. Cosmic rays play a key role in forming N atoms and hydrocarbons such as CH$_3$ from N$_2$ and CO, respectively. N atoms are formed, with a small branching ratio, by the
recombination of N$_2$H$^+$, which is produced by protonation of N$_2$. CO reacts with He$^+$ to form C$^+$, which goes through successive reactions with H$_2$ and electrons to form hydrocarbons.
Inside a radius of $\sim 3$ au, on the other hand, column densities of HCN and DCN are slightly higher in the model without cosmic rays than in Figure \ref{dist_1mm_3}, since the production of O atoms,
which destroy them, from water
and CO$_2$ is suppressed by the low ionization rate.
In the disk with
dark cloud dust, both HCN and DCN are mostly abundant in the warm X-ray dominated layers, and thus their column densities do not depend much on the midplane ionization rate, except at $r\lesssim 3$ au,
where DCN is more abundant in the model without cosmic rays.
\section{Discussion}
The motivation of the present work is to investigate the major deuteration paths and their efficiency in disk models with an updated gas-grain chemical network, rather than to construct a best-fit model for
a specific object. It is, however, worth comparing our model results with recent observations of deuterated species and their normal isotopologs.
Table \ref{obs} summarizes the morphologies of integrated intensity maps of H$^{13}$CO$^+$, DCO$^+$, N$_2$H$^+$, N$_2$D$^+$, H$^{13}$CN, and DCN in two full disks around T Tauri stars (AS 209 and IM Lup),
two transition disks around T Tauri stars (V4046 and LkCa 15) and two disks around Herbig Ae stars (MWC 480 and HD 163296). It is mainly based on \cite{huang17}, and is supplemented by \cite{oberg11},
\cite{huang15}, \cite{salinas17}, and \cite{flaherty17}.
Comparison should be qualitative, rather than quantitative, since disk physical structures are expected to vary among objects, including the radial distribution of midplane temperature, which sets the location
of the CO snow line. While we compare the distribution of molecular lines with the estimated position of the CO snow line and temperature distribution for some disks,
the derivation of temperature distributions in disks is not straightforward due to the opacity effect and temperature gradient in the vertical direction.
The CO sublimation temperature, and thus the position of the CO snow line, could also depend on the ice composition, i.e. whether its surface is water-rich or not, while the desorption energy of each molecule is set to be constant in our model.
It should also be noted that we compare the radial profiles (distributions) of molecular column density in our models with the observed distributions of line emissions.
\cite{huang17} observed the $J=3-2$ emission lines of DCO$^+$, DCN, H$^{13}$CO$^+$, and H$^{13}$CN, while \cite{huang15} observed the $J=3-2$ line of N$_2$D$^+$.
Using the RADEX code \citep{radex07}, we estimate that the opacity of the $J=3-2$ emission lines of HCO$^+$, DCO$^+$, N$_2$H$^+$, and HCN reaches unity when the molecular column density is
a few $\times 10^{12}$ cm$^{-2}$, for a gas temperature of 30 K. These emission lines from our model disks are thus expected to be optically thin except for limited
regions where the molecular column densities have a peak; the radial profile of the emission (e.g. ring emission) basically reflects the column density distributions.
\subsection{Molecular Ions}
In IM Lup, DCO$^+$ emission shows a double-ring structure with radii of 110 au and 310 au, while H$^{13}$CO$^+$ has a single ring structure located at $\sim$ 130 au.
\cite{oberg15} attributed the H$^{13}$CO$^+$ ring and the inner ring of DCO$^+$ to the CO snow line. The central dip (i.e. a small region of low brightness)
in the molecular emission lines is suggested to be caused by
the subtraction of the optically thick dust continuum. Thus we cannot tell if H$^{13}$CO$^+$ and DCO$^+$ decrease inwards at $r\lesssim 95$ au.
The location of the DCO$^+$ outer ring coincides with the edge of the disk observed in the millimeter continuum. \cite{oberg15} argued that in this outermost radius the dust opacity is reduced, which enhances
the photodesorption of CO and thus the DCO$^+$ abundance. Our model with mm-sized grains is consistent with this observation. The HCO$^+$ and DCO$^+$ column densities increase inwards around (slightly outside)
the CO snow line. Beyond the CO snow line, on the other hand, the DCO$^+$ column density increases outwards, as in the outer ring in IM Lup, although desorption via cosmic-rays
is more efficient than photodesorption in the cold midplane in our model.
Another T Tauri disk, AS 209, shows more compact emission of H$^{13}$CO$^+$ and DCO$^+$ than does IM Lup. While H$^{13}$CO$^+$ shows a ring of $r\sim 50$ au, the radial size of the DCO$^+$ emission is similar to, or slightly more extended
than, 90 au, at which the continuum emission shows a break. DCO$^+$ also has a central dip, which is shallower than that of H$^{13}$CO$^+$ \citep{huang17}. The midplane dust temperature distribution is estimated
from continuum observations \citep{andrews09}; it is about 20 K at $r\sim 60$ au. Hence the ring-like emission of H$^{13}$CO$^+$ could coincide with the CO snow line. Our models indicate that DCO$^+$ inside the
CO snow line could be formed via reactions (\ref{hcopd}) and CH$_4$D$^+$ + CO.
\cite{perez12} found that the dust opacity index $\beta$ increases outwards, which suggests that grains have grown (at least) to mm-sizes inside 80 au, while grains are still small at the outer radius.
The absence of an outer DCO$^+$ ring could be due to efficient freeze-out (and chemical conversion) of CO molecules on small dust grains. C$^{18}$O emission, however, shows a local peak
at $r\sim 150$ au, indicating enhanced non-thermal desorption of CO there \citep{huang16}. It is not straightforward, either, to explain in our model why the central dip of H$^{13}$CO$^+$ is clearer than that of DCO$^+$.
Although the HCO$^+$ abundance in the midplane could decline at small radii due to a low ionization degree, or sublimation of H$_2$CO, H$_2$S and H$_2$O, which have higher proton affinities than CO, HCO$^+$ is the dominant charge carrier at the disk
surface, which makes the distribution of HCO$^+$ column density relatively flat.
In our model with CO and H$_2$O depletion, the HCO$^+$ column density shows a ring-like structure with a depression at $r\lesssim$ a few au, which indicates that the disk surface might be deficient in carbon at the dip radius.
A disk structure model specific for AS 209 and a chemical model with isotope selective CO photodissociation might also be necessary
to account for the central dip of H$^{13}$CO$^+$.
In addition to HCO$^+$, N$_2$D$^+$ is detected in AS 209. Its emission is offset from the disk center, and has a peak around the outer edge of the $^{13}$CO emission, which is consistent with our models.
LkCa15 is a transition disk with a central hole of $40-50$ au in the dust continuum. Distributions of H$^{13}$CO$^+$ and DCO$^+$ emission in LkCa15 are qualitatively similar to those in IM Lup;
the H$^{13}$CO$^+$ emission is rather compact, peaking at $\sim 40$ au, while the DCO$^+$ emission is more extended than the dust disk of radius $\sim 200$ au, although the ring-like structure is much less clear in LkCa 15.
\cite{pietu06} derived a disk temperature of 22$\pm1$ K at $r=100$ au. Thus DCO$^+$ emission comes from both inside and outside the CO snow line. The ring of H$^{13}$CO$^+$ emission, on the other hand, could be linked to
the hole seen in the dust continuum.
Another transition disk, V4046, has a central hole of radius $\sim 29$ au in the dust continuum. While H$^{13}$CO$^+$ emission is diffuse, extending to $\sim 200$ au, DCO$^+$ emission has a ring-like structure
with its peak at $\sim 70$ au \citep{huang17}. \cite{rosenfeld12} estimated the temperature distribution to be $T(r)=115 (r/10 {\rm au})^{-0.63}$ K; i.e. the temperature is 27 K at $r=100$ au. The DCO$^+$ ring thus could be
caused by reactions (\ref{hcopd}) and CH$_4$D$^+$ + CO. It is consistent with our model that the H$^{13}$CO$^+$ distribution is rather flat and extends inwards compared with that of DCO$^+$. \cite{rosenfeld13}
determined that most of the dust mass is confined to a ring with a peak at $r=37$ au and FWHM of 16 au. The disk may not extend beyond the CO snow line, where DCO$^+$ could have another (outer)
emission peak.
In MWC 480, both H$^{13}$CO$^+$ and DCO$^+$ emissions have a peak at $\sim 40$ au \citep{huang17}. Despite the relatively high luminosity of the central star (11.5 $L_{\odot}$), the disk temperature is rather low.
\cite{pietu06} constrained the disk temperature to be $\sim 20$ K at a radius of $20-30$ au. \cite{akiyama13} observed $^{12}$CO ($J=3-2$), $^{12}$CO ($J=1-0$) and $^{13}$CO ($J=1-0$), and estimated a gas temperature
of $\sim 13-15$ K for the layer traced by $^{13}$CO ($J=1-0$) at the radius of 100 au.
HCO$^+$ and DCO$^+$ emission thus seems to originate in the region around and slightly outside the CO snow line.
In HD 163296, the H$^{13}$CO$^+$ ($J=3-2$) line features an emission ring peaking at $r \sim 50$ au, and also has a more extended component with a break at $r\sim 200$ au.
The DCO$^+$ emission shows a rather broad ring with a peak at $r\sim 70$ au \citep{huang17}. The temperature distribution in the disk of HD 163296 was derived by \cite{isella16} from CO observations;
the midplane temperature is 23 K at $r \sim 80$ au. \cite{qi15}, on the other hand, derived the location of the CO snow line to be $r\sim 90$ au from observations of N$_2$H$^+$ and CO isotopologs.
HCO$^+$ and DCO$^+$ emission thus originates both inside and outside of the CO snow line.
Recently, \cite{salinas17} reported spatially resolved emission of DCO$^+$, N$_2$D$^+$, and DCN. Their analysis shows that the DCO$^+$ emission can be divided into three ring-like components at 70 au, 150 au, and 260 au \citep[see also][]{flaherty17}.
The innermost and second rings coincide with the emission peaks of DCN and N$_2$D$^+$, respectively, while the outermost ring could arise from the non-thermal desorption of CO, as in the outer ring of IM Lup
\citep{salinas17}. In our fiducial model, the radial distribution of the DCO$^+$ column density can be divided into three components at $r\sim$ several au, 10-30 au, and an outermost radius ($\gtrsim 100$ au).
The innermost peak coincides with the local peak of DCN, while DCO$^+$ originates in non-thermal desorption of CO in the midplane at the outermost radius, which could be consistent with \cite{salinas17}.
The second component at $r\sim 10-30$ au, however, is located inside the local peak of the N$_2$D$^+$ column density ($\gtrsim 20-100$ au).
So far, the DCO$^+$ emission detected in disks always has a central hole or dip \citep{huang17, qi15}.
In our models, the DCO$^+$ distribution does have a dip. Right outside the dip, DCO$^+$ is mainly formed by reaction (\ref{hcopd}), while D atoms are destroyed mainly by the reaction with HS inside the dip.
Although the sublimation of S-bearing species such as H$_2$S seems to be a key for the decline of D atoms in our model, a derivation of the physical condition for the decline of D atoms and thus of DCO$^+$
may not be straightforward, since D atoms are chemically active. It should also be noted that a central dip could be caused by subtraction of
optically thick dust continuum, rather than by a chemical effect, in some disks \citep{huang17, salinas17}.
\subsection{HCN and DCN}
In our models, with both mm-sized dust and dark cloud dust, the radial distributions of the HCN and DCN column densities are centrally peaked, although the
DCN/HCN ratio is low ($10^{-5}-10^{-4}$) at the central region ($r\lesssim 10$ au) due to high temperature. Their column densities drop sharply outside this high temperature region, and then slightly
increase outwards, with the gradient steeper for DCN. In the models with vertical diffusion, the central peak disappears, since hydrocarbons, from which HCN and DCN are formed, are transported to the disk surface to be destroyed.
In the disk of IM Lup, H$^{13}$CN is not detected and DCN emission is weak and diffuse \citep{huang17},
while both H$^{13}$CN and DCN are clearly detected and have a central dip \citep{huang17} in the disk of AS 209.
In these disks, HCN and DCN are not centrally peaked, possibly because the temperature is not high enough to desorb HCN at the radius traced by current observations
(beam size of $\sim 60$ au), or because turbulent mixing is at work.
V4046 and HD163296, on the other hand, show centrally peaked H$^{13}$CN emission and a central dip in DCN emission.
If these features simply reflect their column density distributions, they are difficult to account for with our fiducial model, in which both HCN and DCN are most abundant at the innermost radii.
Our models with turbulent diffusion, however, might produce the observed feature; while DCN is abundant only in the outer cold regions, HCN is abundant
at the disk surface even inside 20 au, which could be bright due to high temperatures.
\section{SUMMARY}
We investigated deuterium chemistry coupled with the nuclear spin-state chemistry of H$_2$ and H$_3^+$ and their isotopologs in protoplanetary disks. Our principal
findings are as follows:
\begin{enumerate}
\item{We have found multiple paths for deuterium enrichment. The exchange reactions with D atoms, such as HCO$^+$ + D, are found to be effective,
in addition to H$_3^+$ + HD, CH$_3^+$ + HD, and C$_2$H$_2^+$ + HD, which had been considered in previous studies.}
\item{As discussed in Appendix A, the OPR of H$_2$ is found to be almost thermal, as long as the cosmic rays ionize the disk with a rate $\sim 10^{-17}$ s$^{-1}$.
In the cold ($\sim$ 10 K) midplane, however, the OPR reaches its minimum value,
which is higher than the thermal value. The minimum value is determined by the balance between the rates of H$_2$ formation, which sets the OPR to be 3,
and spin conversion mainly via ion-molecule reactions involving protons and H$_3^+$.
The OPR could reach the thermal value in such cold, dense regions, if we take into account the spin conversion on grain surfaces, which has recently been found in
laboratory experiments \citep{ueta16}. We have also analyzed the OPR of H$_3^+$ and H$_2$D$^+$,
and the abundances of H$_2$D$^+$, DCO$^+$, and N$_2$D$^+$ in the cold midplane, in part by the use of derived analytical formulae.}
\item{In our models, the contribution of the exchange reaction CH$_3^+$ + HD is found to be less significant than that described in \cite{favre15}. The increase of the H$_2$ OPR with temperature
enhances the backward reaction at $T\gtrsim 20$ K.}
\item{In the disk model with mm-sized grains, reduced freeze-out rates enhance the gaseous molecular abundances in the cold midplane.
DCO$^+$, N$_2$D$^+$, and DCN in the outer midplane thus contribute significantly to their column densities. The radial distribution
of the DCO$^+$ column density has a double-ring structure, similar to the DCO$^+$ emission observed in IM Lup. While the outer ring is caused by the enhanced deuteration
of H$_3^+$ and less efficient chemical conversion of HD and CO in the outer radii in our model,
the inner ring is linked to the CO snow line and the depletion of D atoms due to reactions with, e.g., HS. N$_2$D$^+$, on the other hand, is more
abundant outside the CO snow line. Gaseous DCN decreases inward, except in the central hot region ($T\gtrsim 100$~K) where it is thermally desorbed.}
\item{If the elemental C/O ratio is higher than unity due to sedimentation of H$_2$O ice, hydrocarbons become abundant. The exchange reaction CH$_3^+$ + HD,
which eventually leads to the ion CH$_4$D$^+$, thus contributes to
form DCO$^+$ via the reaction CH$_4$D$^+$ + CO in the warm molecular layer. The column densities of gaseous HCN and DCN are also enhanced by an order of magnitude compared with
the fiducial model. The spatial distributions of molecular abundances and radial profiles of molecular column densities are, however, qualitatively similar to those in our fiducial model.
}
\item{In the disk model with dark cloud dust, the freeze-out of molecules is more severe in the outer midplane, while the disk surface is better
shielded from UV radiation than in the model with mm-sized grains. The disk surface layer thus harbors abundant gaseous molecules and contributes
to the column densities (and emissions) of HCO$^+$, DCO$^+$, N$_2$H$^+$, HCN and DCN.
One exception is N$_2$D$^+$, which is not abundant in the disk surface.}
\item{Turbulence helps to prevent chemical conversion of molecules to less volatile species by transporting ices from the midplane to the
disk surface, where ices are desorbed and photodissociated, but it also enhances the formation of saturated or less volatile molecules in the midplane by transporting
H atoms and radicals from the disk surface. For example, NH$_3$ ice formation is hampered and thus N$_2$ and N$_2$H$^+$ abundances tend to be
enhanced inside the N$_2$ snow line. Turbulence also transports D atoms from the disk surface
to the lower layers, which helps the formation of DCO$^+$ and N$_2$D$^+$. At the innermost radii, the abundances of HCN and DCN are
significantly reduced by the turbulence; their icy components and hydrocarbons are transported to the disk surface and destroyed, while O atoms are
transported to the midplane to react with C-bearing species.}
\item{If the penetration of cosmic rays is hampered by the stellar wind, the midplane ionization rate decreases by more than one order of magnitude.
Column densities of molecular ions decline, although the decrement varies with radius, species, and the dust grain sizes in the disk model.
HCN and DCN also decrease at $r >$ several au,
since the cosmic-ray ionization is needed to form their precursors, N atoms and hydrocarbons, from N$_2$ and CO, respectively.}
\end{enumerate}
\acknowledgments
This work is supported by JSPS KAKENHI Grant Numbers 23540266, 16H00931, and 17K14245. We would like to thank Hideko Nomura for providing the disk physical models, and Jane Huang, Megumi Kayanuma, and Liton Majumdar for helpful discussions.
We are grateful to the anonymous referee for helpful comments, which improved the manuscript.
E. H. wishes to acknowledge the support of the National Science Foundation through grant AST-1514844.
\section{Introduction}
Self-propelled micro-/nano-swimmers have garnered increased interest over the past few decades \cite{marchetti-review,ebbenshowse-review,abp-review,sntln-shelley-review}. One important motive has been the abundance of self-propelled particles in nature. This includes the vast majority of bacteria \cite{bergbook,bacteria-lauga}, sperm cells \cite{sperm-woolley,sperm-alvarez-review}, and algae \cite{green-algae-goldstein,algae-drescher}. Inspired by nature, many artificial swimmers have been realized that swim in fluid environments using different mechanisms \cite{artificial-sperm,ramin-colloidal-prl,paxton}. Among other (bio)technological applications, artificial swimmers bring the prospect of drug delivery on the nano-scale \cite{drugdelivery-lauga}. Biological or artificial, the motion of small-scale swimmers in fluid media is governed by low-Reynolds-number hydrodynamics \cite{purcell}, given the small sizes and low self-propulsion speeds that are typical of these particles.
Swimmers are most commonly found in confined environments (e.g., microfluidic channels or physiological pathways), and are subject to a form of shear. The self-propulsion of biological swimmers is an inherent feature that helps them follow or avoid the different forms of external stimuli (e.g., nutrients, chemical toxins, light, etc.) in the environment. Given the dynamic nature of fluid environments (especially biological/physiological ones), the swimmers have to change orientation every so often to adapt to, and optimally survive in, the changing environment. This pattern of motion is known as run-and-tumble \cite{runtumble-elgetigompper,runtumble-ramin}, and is complemented by translational and rotational diffusion of the active particles. The presence of external shear in the environment only complicates the strategy that would need to be adopted by the swimmers in their motion \cite{stocker-natrev,ecoli-upstream,ecoli-jefferey}. In confined environments, active particles show a propensity to move toward and accumulate on confining boundaries (e.g., channel walls) \cite{sperm-woolley,near-wall-classic,berke-near-wall}. This tendency of swimmers has led to focused interest on the near-wall behavior of active particles \cite{ardekani-nearwall,brown-uni_near-wall,Mathijssen2015,zoettl}.
The mechanics of swimmer self-propulsion in fluid environments can be described using the notion of a force dipole: Two equal and opposite forces, on (by) the fluid by (on) the particle \cite{batchelor1970}. The vast majority of biological swimmers are asymmetrically shaped, exhibiting a form of chirality. Artificial swimmers, too, commonly exhibit chirality, due to fabrication inaccuracies, or indeed by design, for different tasks and purposes desired of the particles \cite{ghosh-fischer,yeomans-lowen-circle-swimmer,chiral-clusters}. The asymmetry leads to misalignment between the line of self-propulsion and the force dipole, hence a torque experienced by chiral swimmers that works as an extra factor (alongside shear and rotational diffusion) affecting swimmer orientation. The chiral geometry gives rise to hydrodynamic coupling between translational and rotational motion of the low-Reynolds-number swimmers \cite{kraft2013}, with the active particles rotating simultaneously with their translational motion. The repeated rotation and translation patterns of motion result in chiral swimmers following helical trajectories in three dimensions (3D), and circular trajectories in two dimensions (2D) \cite{lowen-review-chiral}. While the circular (2D) swimming of micro-organisms was realized and studied long ago \cite{jennings1901}, it was thanks to the development of advanced 3D tracking and imaging techniques that the true 3D swimming of biological swimmers was brought to light \cite{crenshaw1996}. The chemotaxis of biological swimmers (such as sperm cells) toward attractants in the fluid environment is also known to follow helical paths \cite{julicher-prl-chiral,sperm-chiral-ribbon}.
As swimmer geometry is key to the helical pattern of motion, artificial swimmers have also been designed to follow helical paths. As one example among many, biomimetic bacteria that use artificial flagella have been shown to follow helical trajectories in two directions, with the swimming induced by particle chirality \cite{artificial-flagella}. Even camphor particles, with a nearly spherical geometry, have been shown to exhibit helical swimming \cite{camphor}. With swimmer geometry crucial to the performance of chiral active particles, the structures of artificial swimmers can be optimized for best performance, e.g., significantly increased self-propulsion speeds \cite{chiral-geom-opt}.
In this work, we study the steady-state behavior of a dilute suspension of chiral swimmers confined between the walls of a planar channel and subject to externally imposed shear with a linear profile across the channel width (Couette flow). We model the chiral swimmers as spheroidal particles of varying aspect ratio, and report on the effect of particle chirality and aspect ratio on their overall swimming behavior. Given the importance of the near-wall behavior of active particles, we choose the top wall of the channel (with no loss of generality) as the test region in which to display our results. We specifically report on how the population splitting of active particles into distinct, oppositely swimming (downstream and upstream) sub-populations, which arises for non-chiral swimmers \cite{popsplit-paper} when the imposed shear rate surpasses a threshold, is altered qualitatively when the swimmers are chiral and have finite thickness.
\section{Model and continuum method}
\label{sec:model}
The physical specifications of the system we study are shown in Fig. \ref{channel}. We consider an active suspension of chiral self-propelled particles that we model as spheroids with major and minor axes of lengths $a$ and $b$, respectively, giving aspect ratio $\lambda=a/b$. The swimmer orientation, denoting its active self-propulsion direction, is represented by the orientation vector $\mathbf{p}$ that makes an angle $\theta$ with the positive horizontal axis. Both levogyre and dextrogyre chirality are schematically depicted, corresponding to counter-clockwise (CCW) and clockwise (CW) rotations of the swimmers, or positive and negative angular speed $\Omega$, respectively.
The active suspension is confined by the walls of a channel of half-width $H$, and an external flow is imposed on the system, which we assume to have a linear (Couette) profile $\mathbf{u}_f(y)$ directed along the horizontal axis $x$ with shear rate $\dot{\gamma}={\partial u_f(y)}/{\partial y}=U_{m}/(2H)$, where $u_f$ is the flow speed, $y$ is the direction perpendicular to the flow, and $U_{m}$ is the maximum flow velocity, which can be realized by moving the top wall (at $y=+H$) at speed $U_m$ while the bottom wall (at $y=-H$) remains stationary. With this structure of the imposed flow, the torque $\tau_{f}$ it exerts on the chiral swimmers always acts to rotate them in the clockwise (CW) direction. Levogyre chirality thus acts in opposition to, and dextrogyre chirality in concert with, the torque exerted by the flow on the active particles; this can be seen from the schematic of Fig. \ref{schematic}.
\floatsetup[figure]{style=plain,subcapbesideposition=top}
\begin{figure}[t!]
\sidesubfloat[]
\label{channel}
\includegraphics[width=0.85\linewidth]{fig1a
}\\
\sidesubfloat[]
\label{torques}
\includegraphics[width=0.85\linewidth]{fig1b
}
\caption{(a) A spheroidal self-propelled particle with major and minor axes of lengths $a$ and $b$ swimming downstream (i.e., with an orientation vector $\mathbf{p}$ making an angle $-\pi/2<\theta<\pi/2$ with the positive horizontal axis) near the top wall of a channel subjected to an imposed Couette flow. The swimmers are chiral, with angular speed $\Omega$; both levogyre ($\Omega>0$) and dextrogyre ($\Omega<0$) chiralities are permitted. The torque from the flow (always clockwise in the current setting) acting on the active particles is shown as $\tau_{f}$; (b) For a given imposed shear, there are two angular speeds at which a fraction of the downstream-swimming (majority) chiral particles flip their swimming direction to upstream, leading to the emergence of a minority population. At the smaller angular speed (right schematic), the conversion is dominated by the imposed shear, and at the larger angular speed (left schematic), chirality overtakes the effect of the imposed shear and leads to a second population splitting, marking re-entrant bimodality of the active suspension.
}
\label{schematic}
\end{figure}
We adopt a continuum model of swimmer behavior that has been presented and discussed in a number of studies \cite{saintillan-pof,ezhilan}. The model is based on the Smoluchowski equation, expressing conservation of swimmer number, and solves for the probability distribution function (PDF), $\Psi(y,\theta)$. We consider the steady-state behavior of the active suspension, hence the absence of time among the independent variables; the symmetry of the problem also implies $x$-independence. The particle geometry (finite aspect ratio) enters the shear-induced part of the rotational flux velocity of the swimmers, $\dot{\theta}=\dot{\gamma}(\beta \cos(2\theta)-1)/2$, where $\beta$ is the Bretherton shape parameter \cite{bretherton} given, in terms of the particle aspect ratio $\lambda$, as $\beta=(\lambda^2-1)/(\lambda^2+1)$; particle chirality contributes the additional angular speed $\Omega$. The Smoluchowski equation governing the system behavior then takes the following form, after non-dimensionalization of the vertical coordinate with the channel half-width, $y\rightarrow y/H$:
\begin{align} \label{smol-nond}
\xi^{2}\frac{\partial^{2}\Psi}{\partial y^{2}}+\frac{\partial^{2}\Psi}{\partial \theta^{2}} &=\frac{\partial}{\partial \theta}\left\lbrace\Psi\left[\frac{1}{2}Pe_{f}(\beta \cos(2\theta)-1)+\Gamma\right]\right\rbrace\nonumber\\
&+\frac{\partial}{\partial y}(2Pe_{s}\Psi \sin\theta),
\end{align}
where $\Gamma=\Omega/D_R$ is the dimensionless angular speed of the chiral swimmers, and $Pe_s=V_s/(2HD_R)$ and $Pe_f=U_m/(2HD_R)$ are the swim and flow P\'eclet numbers, representing the (relative) strengths of active self-propulsion and imposed shear, respectively. The third dimensionless parameter in Eq. \eqref{smol-nond}, $\xi^{2}=D_{t}/(D_RH^{2})$, serves as a measure of channel confinement of the swimmers. Here, $D_{t}$ and $D_R$ are the translational and rotational diffusion coefficients of the spheroidal swimmer, for which we use the expressions derived by Koenig \cite{koenig}, which corrected the original formulae of Perrin \cite{perrin}.
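To make the competition between the two rotational drives concrete, the following minimal Python sketch (an illustration added here, not part of the original analysis) evaluates the dimensionless rotational drift appearing inside the $\partial/\partial\theta$ term of Eq. \eqref{smol-nond}, $\tfrac{1}{2}Pe_{f}(\beta \cos 2\theta-1)+\Gamma$, and checks whether it can vanish at some orientation; the function names and the chosen parameter values are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def rotational_drift(theta, Pe_f, Gamma, beta):
    # dimensionless rotational drift: shear (Jeffery-type) term plus chirality
    return 0.5 * Pe_f * (beta * np.cos(2.0 * theta) - 1.0) + Gamma

def has_fixed_orientation(Pe_f, Gamma, beta):
    # the drift vanishes for some theta iff cos(2*theta) = (Pe_f - 2*Gamma)/(Pe_f*beta)
    # has a solution, i.e. iff |Pe_f - 2*Gamma| <= Pe_f*beta
    return abs(Pe_f - 2.0 * Gamma) <= Pe_f * beta

lam = 4.0                                # baseline aspect ratio
beta = (lam**2 - 1.0) / (lam**2 + 1.0)   # Bretherton parameter, approx 0.88
Pe_f = 25.0                              # baseline flow Peclet number
for Gamma in (-100.0, 0.0, 10.0, 100.0):
    print(Gamma, has_fixed_orientation(Pe_f, Gamma, beta))
\end{verbatim}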
All calculations for the numerical solution of the governing Eq. \eqref{smol-nond} were done in COMSOL Multiphysics v5.2a; our previous work \cite{popsplit-paper} contains the details. In addition to the dimensionless parameters defined under Eq. \eqref{smol-nond}, we use the following simulation parameters (all derivable from the three dimensionless numbers of the governing equation): $n_s$, the number of particle lengths (major axis) that the particle swims in a unit of time; $n_H$, the ratio of the channel half-width $H$ to the particle length (major axis); and $n_F$, the ratio of the maximum imposed flow speed (at the top channel wall), $U_{m}$, to the swimmer self-propulsion speed $V_{s}$.
\section{Results and Discussion}
\subsection{Specifications of the baseline parameters}
\label{baseline}
Our baseline (reference) active suspension consists of prolate spheroidal particles with aspect ratio $\lambda=4$, corresponding to the aspect ratio of an \textit{E. coli} bacterium with $a=2\,\mu {\mathrm{m}}$, $b=0.5\,\mu {\mathrm{m}}$. To observe the effect of particle aspect ratio on system behavior, we keep the major axis length fixed at $a=2\,\mu {\mathrm{m}}$ and vary $b$ over a wide range, covering particles ranging from nearly spherical to needle-shaped. We stress that our goal is to analyze the generic behavior of spheroidal chiral
swimmers over a wide range of values for the system parameters introduced in Section \ref{sec:model}, rather than any particular swimmer-specific features.
Since we base our analysis on a dimensionless representation, the results reported for a fixed set of dimensionless parameters apply to any set of actual (dimensional) parameter values (such as channel width, swimmer semi-axis dimensions, self-propulsion velocity, etc.) as long as they map to the same values of the dimensionless parameters. For the sake of concreteness, however, we adopt the following baseline, so that the actual parameters take these values unless otherwise stated: $V_s=2\,\mu {\mathrm{m}}/{\mathrm{s}}$ and $U_{m}=200\,\mu {\mathrm{m}}/{\mathrm{s}}$, corresponding to self-propulsion and shear factors $n_{s}=0.5$ and $n_F=100$, respectively; and $n_{H}=5$, giving the channel a width of $2H=2n_{H}a=20\,\mu {\mathrm{m}}$. The translational and rotational diffusion coefficients derived from these parameter values are $D_t=2.3\times 10^{-13}\,{\mathrm{m}}^{2}/{\mathrm{s}}$ and $D_R=0.2/{\mathrm{s}}$. The dimensionless parameters then take the values $Pe_s=0.25$, $Pe_f=25$, and $\xi=0.1$.
\subsection{Effect of chirality on swimmer distribution}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{fig2}
\caption{Rescaled orientational PDF (normalized by total concentration in the channel) of spheroidal chiral swimmers of aspect ratio $\lambda=4$ (corresponding, e.g., to $a=2\,\mu {\mathrm{m}}$ and $b=0.5\,\mu {\mathrm{m}}$) on the top wall of the channel, when they exhibit angular speeds of different magnitudes and directions, including the non-chiral case ($\Gamma=0$).\label{distribution}}
\end{figure}
For the baseline set of parameter values, Fig. \ref{distribution} shows the rescaled swimmer PDF across the whole $(0,2\pi)$ range of swimmer orientation angles $\theta$. The horizontal axis ($\theta$) has been set to start at $\theta=-\pi/2$ and end at $\theta=3\pi/2$, so that (for clearer display) the first and second halves of the axis correspond to active particles swimming {\em downstream} and {\em upstream}, respectively. The plot shows the effect of the angular speed $\Gamma$ of the chiral active particles on the swimmer distribution, where (given the importance of near-wall swimmer behavior) we have chosen the top wall of the channel as the test region to display our results. The case of non-chiral swimmers ($\Gamma=0$) is shown for comparison. It can be seen that when the swimmers are non-chiral, the swimmer population splits into majority (downstream) and minority (upstream) sub-populations, represented by the two peaks in the swimmer PDF. This shows that, with no chirality in the picture, the imposed flow has sufficient strength to overturn the swimming direction of a sizeable fraction of active particles from downstream (the direction dictated by shear) to upstream; this is the `population splitting' phenomenon that we discussed thoroughly in our previous work \cite{popsplit-paper}. Increasing the angular speed from $\Gamma=0$ to $\Gamma=50$ (levogyre chirality) is seen to lead to a suppression of the minority (and increase of the majority) population peak, and hence a lowering of the bimodal ratio (defined here as the ratio of the minority peak to the majority peak population) from $R_{bm}\simeq1/3$ to $R_{bm}\simeq1/11$.
Staying with levogyre chirality, increasing the angular speed further, to $\Gamma=100$, is seen to lead to a complete suppression of population splitting. In fact, as is also the case for dextrogyre chiral swimmers of the same angular speed, i.e., $\Gamma=-100$, the distribution of active particles becomes close to uniform, or at least more `even', across all possible swimmer orientations. We showed in our previous work that when the strength of imposed shear surpasses a certain threshold, an active suspension of (non-chiral) swimmers undergoes a transition from a unimodal to a bimodal distribution. Figure \ref{distribution} shows that, with the imposed shear unchanged, changing the angular speed of chiral swimmers alone may also lead to transitions between unimodal (UM) and bimodal (BM) phases. In fact, Fig. \ref{distribution} suggests that increasing the signed angular speed of spheroidal chiral particles from $-100$ to $100$ leads to two transitions: a UM-to-BM transition, followed by a BM-to-UM transition.
\subsection{Effect of chirality on swimmer populations}
By continuous variation of the angular speed, Fig. \ref{populations} provides a closer look into the transitions that an active suspension of chiral swimmers goes through as angular speed $\Gamma$ is varied over a wide range: From dextrogyre chirality with rapid rotation ($\Gamma=-100$) to the non-chiral scenario, and from there to levogyre chirality with rapid rotation ($\Gamma=100$). The plot shows how downstream- and upstream-swimming populations of chiral active particles (as fractions of total swimmer population) vary with angular speed $\Gamma$, with everything else remaining intact, as per our baseline set of parameter values mentioned earlier. We have used boxes of two different colors to represent the two (UM and BM) phases. An immediate observation is that the system goes through four (rather than two) transitions as $\Gamma$ is varied over the range. Starting from $\Gamma=-100$ towards $\Gamma=100$, there is a first UM-to-BM transition (at $\Gamma_{ub_{1}}$), then a BM-to-UM transition (at $\Gamma_{bu_{1}}$), followed by a second UM-to-BM transition (at $\Gamma_{ub_{2}}$), and at last a second BM-to-UM transition (at $\Gamma_{bu_{2}}$). In statistical terms, the bimodality occurring at $\Gamma_{ub_{2}}$ is \textit{re-entrant}.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{fig3}
\caption{Fractions of the total swimmer population swimming downstream and upstream, at different angular speeds of the levogyre/dextrogyre chiral swimmers. Boxes of two different colors correspond to the two phases of the system: Unimodal (UM) and bimodal (BM). The chiral swimmers are spheroidal with aspect ratio $\lambda=4$ (major and minor axes of lengths $a=2\,\mu {\mathrm{m}}$ and $b=0.5\,\mu {\mathrm{m}}$, respectively).}
\label{populations}
\end{figure}
We start discussing the results illustrated in Fig. \ref{populations} by looking at the vertical midline, which represents the non-chiral ($\Gamma=0$) scenario. As was shown earlier (Fig. \ref{distribution}), with non-chiral swimmers the imposed shear has sufficient strength to cause population splitting; the swimmer population is split into about 70\% swimming downstream and the remaining 30\% swimming upstream: the regime is bimodal (BM). Dextrogyre chirality works in concert with the imposed flow (see Fig. \ref{channel}), and larger negative (CW) angular speeds enhance the population splitting of swimmers; hence the decreasing downstream (majority) and increasing upstream (minority) populations with increasing angular speed of dextrogyre chiral swimmers. However, beyond $\Gamma_{ub_{1}}$ (i.e., for CW rotation faster than a certain rate), the system is seen to enter the unimodal phase. The coexistence of population splitting and a unimodal distribution of swimmers might seem contradictory, yet the distinction between a population and a population \textit{peak} clears the potential ambiguity. Populations represent the fractions of active particles swimming down- or upstream, and while there can be two oppositely swimming sub-populations, the swimmer distribution can still be unimodal if there is no visible \textit{peak} associated with the minority population, as was shown to be the case earlier in Fig. \ref{distribution}, where for $\Gamma=-100$ and $\Gamma=100$ the swimmers were seen to be more evenly distributed compared to those with smaller angular speeds. At CW angular speeds larger (in magnitude) than $\Gamma_{ub_{1}}$, the rotation of the chiral swimmers is so fast, and the whole ($0,2\pi$) range of orientation angles is spanned at such a rapid rate, that the smaller of the two peaks (i.e., the minority population \textit{peak}) recedes, giving rise to a unimodal distribution (while the \textit{minority} population still exists). For the exact same reason, there is a transition from bimodality to unimodality at levogyre angular speeds larger than $\Gamma_{bu_{2}}$.
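The distinction between populations and peaks can be made operational. The Python sketch below (added here for illustration, with hypothetical helper names and a toy orientational PDF) integrates a PDF over the downstream and upstream halves of the orientation axis to obtain the two population fractions, and separately counts local maxima to decide between the unimodal and bimodal labels.
\begin{verbatim}
import numpy as np

def populations_and_modality(theta, psi):
    # theta: uniform grid on [-pi/2, 3*pi/2), assumed periodic; psi: orientational PDF
    dtheta = theta[1] - theta[0]
    psi = psi / (np.sum(psi) * dtheta)          # normalise the PDF
    down = theta < np.pi / 2                    # downstream: -pi/2 < theta < pi/2
    frac_down = np.sum(psi[down]) * dtheta
    frac_up = 1.0 - frac_down
    # count local maxima on the periodic grid (number of population peaks)
    left, right = np.roll(psi, 1), np.roll(psi, -1)
    n_peaks = int(np.sum((psi > left) & (psi > right)))
    return frac_down, frac_up, n_peaks

# toy PDF: a dominant downstream peak plus a small upstream peak
theta = np.linspace(-np.pi / 2, 3 * np.pi / 2, 720, endpoint=False)
psi = np.exp(4 * np.cos(theta)) + 0.3 * np.exp(4 * np.cos(theta - np.pi))
print(populations_and_modality(theta, psi))
\end{verbatim}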
The other two transitions are different, in that they correspond to actual onsets of population splitting. As we move, along the positive horizontal axis in Fig. \ref{populations}, from the non-chiral situation towards positive (CCW) angular speeds for levogyre chiral swimmers, the downstream population is seen to (sharply) increase, to the point that all chiral active particles are swimming downstream, and there is no minority population. This occurs at $\Gamma_{bu_{1}}$, marking the first of two bimodal-to-unimodal transitions for levogyre chiral swimmers. As illustrated in Fig. \ref{schematic}, positive (CCW) angular speed opposes the effect of the imposed Couette flow in exerting a CW torque on the spheroidal swimmers. The angular speed $\Gamma_{bu_{1}}$ is the maximum opposition the imposed flow can bear before its effect in splitting the swimmers into downstream (majority) and upstream (minority) populations is totally cancelled out by that of levogyre chirality. As indicated by the middle UM box in Fig. \ref{populations}, the active suspension remains in the unimodal phase for increasingly large CCW angular speeds, yet at $\Gamma_{ub_{2}}$, there is re-entrant bimodality, i.e., there is a transition from the unimodal to the bimodal phase for the second time. For CCW angular speeds larger than $\Gamma_{ub_{2}}$, the downstream population starts decreasing and the upstream population increasing, and therefore the re-entrant bimodality coincides with the onset of a second population splitting. While the first population splitting was initiated by the imposed \textit{flow} attaining sufficient strength to flip the swimming direction of a sizeable fraction of swimmers, this second population splitting is \textit{chirality}-induced. At $\Gamma=\Gamma_{ub_{2}}$, the CCW angular speed of the chiral swimmers has reached sufficient magnitude to give rise to a population splitting of its own: having overcome the counteracting effect of imposed shear, the CCW torque has become sufficiently strong to flip the orientation of some of the swimmers from downstream to upstream. As schematically shown in Fig. \ref{torques}, the flipping of swimming direction from downstream to upstream can occur under the dominating effect of CW torque (shear-induced), or the dominating effect of CCW torque (chirality-induced). While acting in opposing directions, both shear and chirality can give rise to the conversion of a fraction of downstream-swimming particles to upstream-swimming particles. Figure \ref{populations} also shows that as the angular speed of levogyre chiral swimmers is increased beyond the point of transition to re-entrant bimodality ($\Gamma=\Gamma_{ub_{2}}$), the downstream-swimming population decreases, until at some angular speed it becomes equal to the upstream-swimming population. Beyond this point, i.e., for yet larger angular speeds of the levogyre chiral swimmers, the majority and minority populations change places (i.e., exchange orientations), with the upstream-swimming population now forming the majority.
\floatsetup[figure]{style=plain,subcapbesideposition=top}
\begin{figure}[t!]
\centering
\sidesubfloat[]
\centering
\label{phdiag_v2_a}
\includegraphics[width=0.925\linewidth]{fig4a
}\\
\sidesubfloat[]
\centering
\label{phdiag_v2_b}
\hspace{-4mm} \includegraphics[width=0.925\linewidth]{fig4b
}
\caption{Phase diagrams showing the transitions of spheroidal chiral swimmers between unimodal (UM) and bimodal (BM) phases as flow factor $n_F$ (representing imposed flow strength) and angular speed $\Gamma$ (due to chirality) are varied along vertical and horizontal axes of the phase diagrams, respectively; (a) phase diagrams for two different particle aspect ratios: $\lambda=4$ (the reference situation) and $\lambda=10$; (b) phase diagrams for two different swimmer self-propulsion strengths, represented by swim P\'eclet numbers $Pe_s=0.25$ and $Pe_s=0.125$ (corresponding to swim speeds $V_s=2\,\mu {\mathrm{m}}/{\mathrm{s}}$, the reference value, and $V_s=1\,\mu {\mathrm{m}}/{\mathrm{s}}$, respectively).
}
\label{phdiags}
\end{figure}
\subsection{Phase diagrams}
The data presented in Fig. \ref{populations} are obtained with a given imposed Couette flow, characterized by flow P\'eclet number $Pe_f=25$ or flow factor $n_F=100$ (corresponding to shear rate $\dot{\gamma}=10\,{\mathrm{s}}^{-1}$), from our baseline set of parameter values. The repeated transitions between the two phases of the system (UM and BM) were shown to occur as a result of competition between the torques due to shear and chirality. As shear rate and chirality are crucial in determining system behavior, we present phase diagrams in Fig. \ref{phdiags} that have flow factor $n_{F}$ (representing shear rate) on the vertical and angular speed $\Gamma$ of the chiral swimmers on the horizontal axis. The baseline situation is shown in both Figs. \ref{phdiag_v2_a} and \ref{phdiag_v2_b} (using black lines) for comparison: It shows the four transitions that can occur for a given strength of imposed flow as swimmer chirality is varied over the $\left[ -150,150\right]$ range. It also shows that the number of transitions will depend on the imposed shear rate. At shear rates (or flow factors $n_F$; we shall use the two related parameters interchangeably in our qualitative discussions) smaller than that required to initiate population splitting of \textit{non}-chiral swimmers, chiral swimmers will not experience population splitting either, regardless of the angular speed sign or magnitude; there are no transitions in this range of imposed shear rate. At imposed flows stronger than this threshold (we have shown the threshold flow factor in Fig. \ref{phdiag_v2_b} as $n_{F0}$), there will be two or four transitions between UM and BM regimes, depending on how large the shear rate is. The data in Figs. \ref{distribution} and \ref{populations} pertained to $n_{F}=100$, at which (as can also be seen from Fig. \ref{phdiags}) there are four transitions, with the factors contributing to each of the four discussed above. It can be seen from Fig. \ref{phdiags} that the angular speeds for all four transitions are larger (in magnitude) at larger shear rates. For the two transitions at the largest angular speeds ($\Gamma_{ub_{1}}$ and $\Gamma_{bu_{2}}$), it is the rapid spanning of the whole $[0,2\pi)$ range of orientation angles that suppresses the effect of imposed shear in giving the active particles a preferred swimming direction. It is therefore expected that a stronger imposed flow requires faster rotation (a larger magnitude of angular speed) for chirality to dominate and lead to a transition. The BM-to-UM transition at $\Gamma_{bu_{1}}$ also occurs at larger angular speeds for stronger imposed flow, as the transition angular speed is the maximum that the imposed flow can withstand before it loses (effective) strength for the initiation of population splitting: a stronger imposed flow can withstand larger angular speeds due to chirality. The UM-to-BM transition at $\Gamma_{ub_{2}}$ occurs when CCW rotation due to levogyre chirality overcomes the effect of imposed shear and gives rise to a population splitting of its own. Stronger flow would have to be overcome by larger (CCW) angular speeds, i.e., more `power' from chirality.
A less trivial observation from Fig. \ref{phdiags} is that for a range of imposed shear rates, all larger than that required to initiate population splitting of non-chiral swimmers, only two transitions occur. This implies that for the second population splitting to take place (at $\Gamma_{ub_{2}}$), even though it is driven by chirality, the imposed shear rate needs to be greater than a threshold. Chirality of the active particles, on its own and without an imposed shear beyond a certain strength, cannot give rise to population splitting of the swimmers. That a threshold shear rate is required is expected for shear-driven population splitting; Fig. \ref{phdiags} shows that the same holds for chirality-driven population splitting of the swimmers. At weaker imposed flow, the angular speed $\Gamma_{bu_{1}}$, at which the first BM-to-UM transition takes place, is the point beyond which the swimmer distribution starts verging towards increased evenness (eventually taking the shape of a nearly uniform distribution), so that, in effect, it coincides with the angular speed $\Gamma_{bu_{2}}$, never giving a minority population peak the chance to arise.
\subsection{Effect of swimmer aspect ratio}
Figure \ref{phdiag_v2_a} shows the effect of swimmer aspect ratio on the behavior of an active suspension of chiral swimmers subject to imposed shear. It can be seen that particle aspect ratio mostly affects the transitions specific to levogyre chiral swimmers, occurring at $\Gamma_{bu_{1}}$ and $\Gamma_{ub_{2}}$: in both cases, the angular speed (due to chirality) at which the transition occurs is smaller (at a given imposed shear rate) for thinner (larger aspect ratio) swimmers. This can be explained by the larger rotational diffusivity of thinner particles. As the effect of the imposed shear is to orient the chiral swimmers along the direction of flow, i.e., horizontally (in or against the flow), the increased rotational diffusion of thinner swimmers is a hindrance to this task, in that a larger $D_R$ implies a larger resistance against remaining in a given direction. Since the ability of the imposed flow to bring about population splitting (by aligning the chiral swimmers horizontally) is weakened for thinner chiral particles, the flow loses its ability to maintain the population splitting at an angular speed smaller than that for a swimmer of smaller aspect ratio; hence the smaller $\Gamma_{bu_{1}}$. The angular speed $\Gamma_{ub_{2}}$ marking the onset of chirality-driven population splitting is also smaller for thinner particles, because the larger rotational diffusion of the chiral swimmers makes it easier for the CCW chiral torque to overcome the effect of the imposed shear in horizontally aligning the active spheroidal particles.
\subsection{Effect of swimmer self-propulsion strength}
The effect of self-propulsion speed on the behavior of the confined active suspension of spheroidal chiral swimmers can be seen from the phase diagram of Fig. \ref{phdiag_v2_b}. In contrast with swimmer aspect ratio, the self-propulsion speed is seen to affect the transitions at the largest angular speeds much more than the other two transitions. This can be explained by the fact that the latter two transitions arise from a competition between imposed shear and chirality, in which stronger or weaker active self-propulsion does not have a major say. The transitions to a unimodal distribution at large magnitudes of the angular speed (due to chirality), by contrast, occur when all orientation angles are spanned at a very rapid rate, leading to an even distribution of swimmers across all $\theta$. Stronger swimmer self-propulsion works to orient the chiral particles vertically toward the channel walls, and in doing so resists the rapid spanning of the whole circle of orientations by large angular speeds, which is what leads to the nearly uniform distributions.
\section{Conclusions}
We presented a quantitative analysis of the behavior of a dilute active suspension of spheroidal chiral swimmers, in confinement, subjected to imposed shear. Having shown in previous work \cite{popsplit-paper} that imposed flow beyond a certain strength gives rise to the splitting of swimmers into distinct downstream and upstream populations, we showed here that for chiral swimmers the picture is considerably more nuanced, with the occurrence of population splitting (as a characteristic of the response of an active suspension to shear) showing strong dependence on swimmer chirality: the angular speed and the direction of rotation (levogyre/dextrogyre chirality). We attributed two phases to the system, corresponding to the presence of one or two peaks in the swimmer distribution function across all orientation angles; namely, the unimodal and bimodal phases, respectively. Using phase diagrams covering a wide range of chiralities and imposed shear rates, we showed that the active suspension can switch states (transition between phases) upon modest changes in the angular speed of the swimmers and/or the shear rate of the imposed flow. Considering variations of the chiral swimmer angular speed at a given imposed shear rate, we observed re-entrant bimodality in the active suspension, meaning that under otherwise identical circumstances, chiral swimmers with a given angular speed $\Omega_1$ can be in the bimodal phase, while those with a larger angular speed $\Omega_2$ can be in the unimodal \textit{or} bimodal phase depending on how much larger $\Omega_2$ is than $\Omega_1$. As the difference $\Omega_{2}-\Omega_{1}$ is gradually increased from $0$, the chiral swimmers pass through the bimodal, unimodal, bimodal (again) and unimodal (again) phases. We further showed, based again on phase diagrams, that the state of the active suspension is notably different for chiral swimmers of different aspect ratios (albeit propelling at the same speed and in the same direction, and subject to the same imposed shear rate). We also showed that otherwise identical swimmers (subject to the same shear rate) may or may not flip their swimming direction to the exact opposite of that dictated by the imposed flow, depending on their self-propulsion speed. These observations suggest the possibility of using imposed shear as a control factor to sort the chiral swimmers in an active suspension according to particle aspect ratio, self-propulsion, or angular speed. With these three features characterizing swimmers of different types, applications can be envisaged for sorting/separating biological or artificial self-propelled particles of different specifications.
\begin{acknowledgments}
A.N. acknowledges partial support from Iran Science Elites Federation (ISEF) and the Associateship Scheme of The Abdus Salam International Centre for Theoretical Physics (Trieste, Italy). We thank M. Kheyri and M.R. Shabanniya for useful discussions.
\end{acknowledgments}
|
{
"timestamp": "2018-03-08T02:12:17",
"yymm": "1803",
"arxiv_id": "1803.02801",
"language": "en",
"url": "https://arxiv.org/abs/1803.02801"
}
|
\section{Introduction} \label{sec:Introduction}
\subsection{Multi-instance learning}
In supervised learning, the training data consists of $K$ objects, $\mathbf{x}$, with corresponding class labels, $y$; $\{(\mathbf{x}_1,y_1), \ldots, (\mathbf{x}_k,y_k), \ldots, (\mathbf{x}_K,y_K)\}$.
An object is typically a vector of $d$ feature values, $\mathbf{x}_k = (x_{k1}, \ldots, x_{kd})$, called an {\it instance}.
In multi-instance (MI) learning, each object consists of several instances.
The set $\mathbb{X}_k = \{ \mathbf{x}_{k1}, \ldots, \mathbf{x}_{kn_{k}} \}$, where the $n_k$ elements are vectors of length $d$, is referred to as {\it bag}.
The number of instances, $n_k$, varies from bag to bag, whereas the vector length is constant.
In supervised MI learning, the training data consists of $K$ sets and their corresponding class labels, $\{(\mathbb{X}_1, y_1), \ldots, (\mathbb{X}_k, y_k), \ldots, (\mathbb{X}_K,y_K)\}$.
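As an illustration of this data layout (not part of the original formulation), a bag can be stored as an array of its instances with a single label attached to the bag as a whole; the snippet below is a minimal Python sketch with made-up dimensions and labels.
\begin{verbatim}
import numpy as np

# A toy MI training set: each bag is an (n_k x d) array of unlabelled instances,
# and one class label is attached to the bag as a whole.
rng = np.random.default_rng(0)
d = 3
bags = [rng.normal(size=(int(rng.integers(2, 6)), d)) for _ in range(4)]  # n_k varies per bag
labels = np.array([1, 0, 1, 0])                                           # y_k, one per bag
print([b.shape for b in bags], labels)
\end{verbatim}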
Figure~\ref{fig:Benign} shows an image (bag), $k$, of benign breast tissue \cite{Gelasca2008Evaluation}, divided into $n_k$ segments with corresponding feature vectors (instances) $\mathbf{x}_{k1}, \ldots, \mathbf{x}_{kn_k}$ \cite{Kandemir2014Empowering}.
Correspondingly, figure~\ref{fig:Malignant} shows malignant breast tissue.
\begin{figure}[t!]
\centering
\subfloat[Benign]{
\includegraphics[width=0.3\textwidth]{fig1a.jpg} \label{fig:Benign}}
\subfloat[Malignant]{
\includegraphics[width=0.3\textwidth]{fig1b.jpg} \label{fig:Malignant}}
\caption{Breast tissue images. The image segments are not labelled.}
\label{fig:Breast}
\end{figure}
The images in the data set have class labels, the individual segments do not.
This is a key characteristic of MI learning: the instances are not labelled.
MI learning includes instance classification \cite{Doran2016MultipleInstance}, clustering \cite{Zhang2009Multiinstance}, regression \cite{Zhang2009Multiinstance}, and multi-label learning \cite{Zhou2012Multiinstance, Tang2017Deep}, but this article will focus on bag classification.
MI learning can also be found as integrated parts of end-to-end methods for image analysis that generate patches, extract features and do feature selection \cite{Tang2017Deep}.
See also \cite{Wang2018Revisiting} for an overview and discussion on end-to-end neural network MI learning methods.
The term MI learning was introduced in an application of molecules (bags) with different shapes (instances), and their ability to bind to other molecules \cite{Dietterich1997Solving}.
A molecule binds if at least one of its shapes can bind.
In MI terminology, the classes, $C$, in binary classification are referred to as positive, $pos$, and negative, $neg$.
The assumption that a positive bag contains at least one positive instance, and a negative bag contains only negative instances is referred to as the standard MI assumption.
Many new applications violate the standard MI assumption, such as image classification \cite{Xu2016Multipleinstance} and text categorisation \cite{Qiao2017Diversified}.
Consequently, successful algorithms meet more general assumptions, see e.g.\ the hierarchy of Weidmann et al.~\cite{Weidmann2003Twolevel} or Foulds and Frank's taxonomy \cite{Foulds2010Review}.
For a more recent review on MI classification algorithms, see e.g.\ \cite{Cheplygina2015Multiple}.
Carbonneau et al.~\cite{Carbonneau2018Multiple} discussed sample independence and data sparsity, which we address in Section~\ref{sec:Bagtoclass}.
Amores \cite{Amores2013Multiple} presented the three paradigms of instance space (IS), embedded space (ES), and bag space (BS).
IS methods aggregate the outcome of single-instance classifiers applied to the instances of a bag, whereas ES methods map the instances to a vector, and then use a single-instance classifier.
In the BS paradigm, the instances are transformed to a non-vectorial space where the classification is performed, avoiding the detour via single-instance classifiers.
The non-vectorial space of probability functions has not yet been introduced to the BS paradigm, despite its analytical benefits.
Whereas both Carbonneau et al.~\cite{Carbonneau2018Multiple} and Amores \cite{Amores2013Multiple} defined a bag as a set of feature vectors, Foulds and Frank \cite{Foulds2010Review} stated that a bag can also be modelled as a probability distribution.
The distinction is necessary in analysis of classification approaches, and both viewpoints offer benefits, see Section~\ref{sec:Point} for a discussion.
\subsection{The non-vectorial space of probability functions} \label{sec:ProbSpace}
From the probabilistic viewpoint, an instance is a realisation of a random vector, $X$, with probability distribution $P(X)$ and sample space $\mathcal{X}$.
The posterior probability, $P(C|\mathbb{X}_k)$, is an effective classifier if the standard MI assumption holds, since it is known beforehand to be
\begin{align*}
\begin{split}
P(pos|\mathbb{X}_k) &= \begin{cases} 1 \text{ if any } \mathbf{x}_{ki} \in \mathcal{X}^+, \, i = 1, \ldots, n_k\\
0 \text{ otherwise, } \end{cases}
\end{split}
\end{align*}
where $\mathcal{X}^+$ is the positive instance space, and the positive and negative instance spaces are disjoint.
Bayes' rule, $P(C|X) \propto P(X|C)P(C)$, can be used when the posterior probability is unknown.
An assumption commonly used to estimate the probability distribution of an instance given the class, $P(X|C)$, is that instances from bags of the same class are independent and identically distributed (i.i.d.) random samples, but this is a poor description of MI learning.
As an illustrative example, let the instances be the colour of image segments from the class {\it sea}.
Image $k$ depicts a clear blue sea, whereas image $\ell$ depicts a deep green sea, and instance distributions are clearly dependent not only on class, but also on bag.
The random vectors in $\mathbb{X}_k$ are i.i.d., but have a different distribution than those in $\mathbb{X}_{\ell}$.
An important distinction between uncertain objects, whose distribution depends solely on the class label \cite{Jiang2013Clustering, Kriegel2005Densitybased}, and MI learning is that the instances of two bags from the same class are not from the same distribution.
The dependency structure in MI learning can be described by a hierarchical distribution (Eq.~\ref{eq:GeneralHierarchical}), where a bag, $B$, is defined as the probability distribution of its instances, $P(X|B)$, and the bag space, $\mathcal{B}$, is a set of distributions.
\subsection{Dissimilarities in MI learning}
Dissimilarities in MI learning can be categorised as instance-to-instance, bag-to-bag or bag-to-class.
Amores \cite{Amores2013Multiple} implicitly assumed metricity for dissimilarity functions \cite{Scholkopf2000Kernel} in the BS paradigm, but there is nothing inherent to MI learning that imposes these restrictions.
The non-metric Kullback-Leibler (KL) information \cite{Kullback1951Information} is an example of a divergence: a dissimilarity measure between two probability distributions.
Divergences have not been used in MI learning, due to the lack of a probability function space defined for the BS paradigm, despite the benefit of analysis independent of specific data sets \cite{Gibbs2002Choosing}.
The $f$-divergences \cite{Ali1966General,Csiszar1967Informationtype} have desirable properties for dissimilarity measures, including minimum value for equal distributions, but there is no complete categorisation of divergences.
The KL information is a non-symmetric $f$-divergence, often used in both statistics and computer science, and is defined as follows for two probability density functions (pdfs) $f_k(\mathbf{x})$ and $f_\ell(\mathbf{x})$:
\begin{align} \label{eq:KLinformation}
D_{KL}(f_k, f_\ell) = \int f_k(\mathbf{x}) \log \frac{f_k(\mathbf{x})}{f_\ell(\mathbf{x})} d\mathbf{x}.
\end{align}
An example of a symmetric $f$-divergence is the Bhattacharyya (BH) distance, defined as
\begin{align} \label{eq:BHdistance}
D_{BH}(f_k, f_\ell) = - \log \int \sqrt{ {f_k(\mathbf{x})}{f_\ell (\mathbf{x})}} d\mathbf{x},
\end{align}
and can be a better choice if the absolute difference, and not the ratio, differentiates the two pdfs.
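To make the two divergences concrete, the following Python sketch (added for illustration; the Gaussian densities, the grid and the helper names are our own assumptions) approximates Eqs.~\ref{eq:KLinformation} and \ref{eq:BHdistance} by Riemann sums on a one-dimensional grid.
\begin{verbatim}
import numpy as np

def gauss_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def kl_information(fk, fl, dx):
    # D_KL(f_k, f_l) = int f_k log(f_k / f_l) dx, Riemann-sum approximation
    mask = fk > 0
    return np.sum(fk[mask] * np.log(fk[mask] / fl[mask])) * dx

def bhattacharyya(fk, fl, dx):
    # D_BH(f_k, f_l) = -log int sqrt(f_k f_l) dx
    return -np.log(np.sum(np.sqrt(fk * fl)) * dx)

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
fk = gauss_pdf(x, 0.0, 1.0)
fl = gauss_pdf(x, 2.0, 1.0)
print(kl_information(fk, fl, dx), bhattacharyya(fk, fl, dx))
\end{verbatim}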
The appropriate divergence for a specific task can be chosen based on identified properties, e.g.\ for clustering \cite{Mollersen2016DataIndependent}, or a new dissimilarity function can be proposed \cite{Mollersen2015Divergencebased}.
This article aims to identify properties for bag classification, and we make the following contributions:
\begin{itemize}
\item Presenting the hierarchical model for general, non-standard MI assumptions (Section~\ref{sec:Hierarchical}).
\item Introduction of bag-to-class dissimilarity measure (Section~\ref{sec:Bagtoclass}).
\item Identification of two properties for bag-to-class divergence (Section~\ref{sec:Properties}).
\item A new bag-to-class dissimilarity measure for sparse training data (Section~\ref{sec:Classconditional}).
\end{itemize}
In Section~\ref{sec:Data}, the KL information and the new dissimilarity measure are applied to data sets and the results are reported.
Bags defined in the probability distribution space, in combination with bag-to-class divergence, constitute a new framework for MI learning, which is compared to other frameworks in Section~\ref{sec:Discussion}.
\section{Related work} \label{sec:Related}
The feature vector set viewpoint seems to be the most common, but the probabilistic viewpoint was introduced already in 1998, then under the i.i.d.\ given class assumption \cite{Maron1998Framework}.
This assumption has been used in approaches such as estimating the expectation by the mean \cite{Xu2004Logistic}, or estimation of class distribution parameters \cite{Tax2011Bag}, but has also been criticised \cite{Zhou2009Multiinstance}.
The hierarchical distribution was introduced for learnability theory under the standard MI assumption for instance classification \cite{Doran2016MultipleInstance}, and we expand the use for more general assumptions.
Dissimilarities in MI learning have been categorised as instance-to-instance or bag-to-bag \cite{Amores2013Multiple,Cheplygina2016DissimilarityBased}.
The bag-to-prototype approach in \cite{Cheplygina2016DissimilarityBased} offers an in-between category, but the theoretical framework is missing.
Bag-to-class dissimilarity has not been studied within the MI framework, but was used under the i.i.d.\ given class assumption for image classification in \cite{Boiman2008In}, where also the sparseness of training sets was addressed: if the instances are aggregated on class level, a denser representation is achieved.
Many MI algorithms use dissimilarities, e.g.\ graph distances \cite{Lee2012Bridging}, Hausdorff metrics \cite{Scott2005Generalized},
functions of the Euclidean distance \cite{Cheplygina2015Multiple, RuizMunoz2016Enhancing}, and distribution parameter based distances \cite{Cheplygina2015Multiple}.
The performances of dissimilarities on specific data sets have been investigated \cite{Cheplygina2015Multiple, Tax2011Bag, Cheplygina2016DissimilarityBased, RuizMunoz2016Enhancing, Sorensen2010DissimilarityBased}, but more analytical comparisons are missing.
A large class of commonly used kernels are also distances \cite{Scholkopf2000Kernel}, and hence, many kernel-based approaches in MI learning can be viewed as dissimilarity-based approaches.
In \cite{Wei2017Scalable}, the Fisher kernel is used as input to a support vector machine (SVM), whereas in \cite{Zhou2009Multiinstance} and \cite{Qiao2017Diversified} the kernels are an integrated part of the methods.
The non-vectorial graph space was used in \cite{Zhou2009Multiinstance, Lee2012Bridging}.
We introduce the non-vectorial space of probability functions as an extension within the BS paradigm for bag classification through dissimilarity measures between distributions.
The KL information was applied in \cite{Boiman2008In}, and is a much-used divergence function.
It is closely connected to the Fisher information \cite{Kullback1951Information} used in \cite{Wei2017Scalable} and to the cross entropy used as loss function in \cite{Wang2018Revisiting}.
We propose a conditional KL information in Section~\ref{sec:Classconditional}, which differs from the earlier proposed weighted KL information \cite{Sahu2003Fast} whose weight is a constant function of $X$.
\section{Theoretical background} \label{sec:Theoretical}
\subsection{Hierarchical distributions} \label{sec:Hierarchical}
A bag is the probability distribution from which the instances are sampled.
The generative model of instances from a positive or negative bag follows a hierarchical distribution
\begin{align}\label{eq:GeneralHierarchical}
\begin{aligned}
X|B & \sim P(X | B) \,\,\, & X|B \sim P(X | B)\\
B & \sim P(B|pos) \,\,\, \,\,\, \text{ or } & B \sim P(B|neg),
\end{aligned}
\end{align}
respectively.
The common view in MI learning is that a bag consists of positive and negative instances, which corresponds to a bag being a mixture of a positive and a negative distribution.
Consider tumour images labelled $pos$ or $neg$, with instances extracted from segments.
Let $f(\mathbf{x}|\theta^+_k)$ and $f(\mathbf{x}|\theta^-_k)$ denote the pdfs of positive and negative segments, respectively, of image $k$.
The pdf of bag $k$ is a mixture distribution
\begin{align*}
f_{k}(\mathbf{x}) = p_{k} f(\mathbf{x}|\theta_k^+) + (1-p_{k})f(\mathbf{x}|\theta_k^-),
\end{align*}
where $p_k = \sum_{i = 1}^{n_k} \tau_i/n_k$, with $\tau_i = 1$ if instance $i$ is positive and $\tau_i = 0$ otherwise.
The probability of positive segments, $\pi_{k}$, depends on the image's class label, and hence $\pi_k$ is sampled from $P(\Pi_{pos})$ or $P(\Pi_{neg})$.
The characteristics of positive and negative segments vary from image to image.
Hence, $\theta^+_k$ and $\theta^-_k$ are realisations of random variables, with corresponding probability distributions $P(\Theta^+)$ and $P(\Theta^-)$.
The generative model of instances from a positive (negative) bag is
\begin{align} \label{eq:Hierarchical}
\begin{split}
X|\mathcal{T}, \Theta^+,\Theta^- & \sim \begin{cases}
P(X|\tau = 1) = P(X|\Theta^+)\\
P(X|\tau = 0) = P(X|\Theta^-)
\end{cases} \\
\mathcal{T}|\Pi_{pos(neg)} & \sim \begin{cases}
P(\tau = 1) = \Pi_{pos(neg)}\\
P(\tau = 0) = 1-\Pi_{pos(neg)}
\end{cases} \\
\Pi_{pos(neg)} & \sim P(\Pi_{pos(neg)}), \Theta^+ \sim P(\Theta^+), \Theta^- \sim P(\Theta^-).
\end{split}
\end{align}
The corresponding sampling procedure from positive (negative) bag, $k$, is \\
Step 1: Draw $\pi_{k}$ from $P(\Pi_{pos(neg)})$, $\theta^+_k$ from $P(\Theta^+)$, and $\theta^-_k$ from $P(\Theta^-)$. These three parameters define the bag. \\
Step 2: For $i = 1, \ldots, n_k$, draw $\tau_i$ from $P(\mathcal{T}|\pi_{k})$, draw $\mathbf{x}_i$ from $P(X|\theta_k^+)$ if $\tau_i = 1$, and from $P(X|\theta_k^-)$ otherwise.
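The two-step sampling procedure can be written down directly; the Python sketch below (an illustration we add, with arbitrarily chosen one-dimensional Gaussian instance distributions and a Beta prior for $\Pi_{pos}$, none of which are prescribed by the model) draws one bag according to Steps 1 and 2.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_bag(n_k, positive_bag=True):
    # Step 1: draw the bag-level parameters (illustrative distributional choices)
    pi_k = rng.beta(2, 8) if positive_bag else 0.0     # P(Pi_pos); P(Pi_neg = 0) = 1
    theta_plus = rng.normal(15.0, 2.0)                 # P(Theta+): positive-instance mean
    theta_minus = rng.normal(0.0, 2.0)                 # P(Theta-): negative-instance mean
    # Step 2: draw the instances given the bag parameters
    tau = rng.random(n_k) < pi_k                       # instance labels (unobserved in practice)
    x = np.where(tau, rng.normal(theta_plus, 1.0, n_k),
                      rng.normal(theta_minus, 1.0, n_k))
    return x

print(sample_bag(50, positive_bag=True)[:5])
print(sample_bag(50, positive_bag=False)[:5])
\end{verbatim}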
By imposing restrictions, assumptions can be accurately described, e.g.\ the standard MI assumption:
at least one positive instance in a positive bag: $P(p_k \geq 1/n_k) = 1$;
no positive instances in a negative bag: $P(\Pi_{neg} = 0) = 1$;
the positive and negative instance spaces are disjoint.
Eq.~\ref{eq:Hierarchical} is the generative model of MI problems, assuming that the instances have unknown class labels and that the distributions are parametric.
The parameters $\pi_k$, $\theta_k^+$ and $\theta_k^-$ are i.i.d.\ samples from their respective distributions, but are not observed and are hard to estimate, due to the very nature of MI learning: The instances are not labelled.
Instead, $P(X|B)$ can be estimated from the observed instances, and a divergence function can serve as classifier.
\subsection{Bag-to-class dissimilarity} \label{sec:Bagtoclass}
The training set in MI learning consists of the instances, since the bag distributions are unknown.
Under the assumption that the instances from each bag are i.i.d.\ samples, the KL information has a special role in model selection, both from the frequentist and the Bayesian perspective.
Let $f_{bag}(\mathbf{x})$ be the sample distribution (unlabelled bag), and let $f_k(\mathbf{x})$ and $f_\ell(\mathbf{x})$ be two models (labelled bags).
Then the expectation over $f_{bag}(\mathbf{x})$ of the log ratio of the two models, $E \{ \log ( f_k(\mathbf{x})/f_\ell(\mathbf{x})) \} $, is equal to $D_{KL}(f_{bag}, f_\ell)- D_{KL}(f_{bag}, f_k)$.
In other words, the log ratio test reveals the model closest to the sampling distribution in terms of KL information \cite{Eguchi2006Interpreting}.
From the Bayesian viewpoint, the Akaike Information Criterion (AIC) reveals the model closest to the data in terms of KL information, and is asymptotically equivalent to Bayes factor under certain assumptions \cite{Kass1995Bayes}.
The i.i.d.\ assumption is not inherent to the probability distribution viewpoint, but the asymptotic results for the KL information rely on it.
In many applications, such as image analysis with sliding windows, the instances are best represented as dependent samples, but the dependencies are hard to estimate, and the independence assumption is often the best approximation.
Doran and Ray \cite{Doran2016MultipleInstance} showed that the independence assumption is an approximation of dependent instances, but comes with the cost of slower convergence.
If the bag sampling is sparse, the dissimilarity between $f_{bag}(\mathbf{x})$ and the labelled bags becomes somewhat arbitrary w.r.t.\ the true label of $f_{bag}(\mathbf{x})$.
The risk is high for ratio-based divergences such as the KL information, since $f_k(\mathbf{x})/f_\ell(\mathbf{x}) = \infty$ for $\{\mathbf{x}: f_\ell(\mathbf{x}) = 0, f_k(\mathbf{x}) > 0\}$.
The bag-to-bag KL information is asymptotically the best choice of divergence function, but this is not the case for sparse training sets.
Bag-to-class dissimilarity makes up for some of the sparseness by aggregation of instances.
Consider an image segment of colour {\it deep green}, which appears in {\it sea} images, but not in {\it sky} images, and a segment of colour {\it white}, which appears in both classes (waves and clouds).
If the combination {\it deep green} and {\it white} does not appear in the training set, then a bag-to-bag KL information will result in infinite dissimilarity for all bags, regardless of class, but the bag-to-class KL information will be finite for the {\it sea} class.
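This effect is easy to reproduce with discrete toy distributions. In the sketch below (our own illustration, with made-up colour proportions), no single training bag contains both {\it deep green} and {\it white}, so every bag-to-bag KL information is infinite, while the bag-to-class KL information to the aggregated {\it sea} class is finite.
\begin{verbatim}
import numpy as np

def kl_discrete(p, q):
    # KL information for discrete distributions; infinite when q = 0 where p > 0
    mask = p > 0
    return np.inf if np.any(q[mask] == 0) else np.sum(p[mask] * np.log(p[mask] / q[mask]))

# colours: [deep green, white, blue]
bag       = np.array([0.5, 0.5, 0.0])        # unlabelled bag: deep green + white
sea_bag_1 = np.array([0.5, 0.0, 0.5])        # training 'sea' bag: deep green + blue
sea_bag_2 = np.array([0.0, 0.5, 0.5])        # training 'sea' bag: white + blue
sea_class = 0.5 * (sea_bag_1 + sea_bag_2)    # aggregated class distribution
print(kl_discrete(bag, sea_bag_1), kl_discrete(bag, sea_bag_2), kl_discrete(bag, sea_class))
\end{verbatim}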
Let $P(X|C) = \int_{\mathcal{B}} P(X|B) dP_\mathcal{B} (B|C)$ be the probability distribution of a random vector from the bags of class $C$.
Let $D(P(X|B),P(X|pos))$ and $D(P(X|B),P(X|neg))$ be the divergences between the unlabelled bag and each of the classes.
Choice of divergence is not obvious, since $P(X|B)$ is different from both $P(X|pos)$ and $P(X|neg)$, but can be done by identification of properties.
\section{Properties for bag-level classification} \label{sec:Divergence}
\subsection{Properties for bag-to-class divergences} \label{sec:Properties}
We here propose two properties for bag-to-class divergences regarding infinite bag-to-class ratio and zero instance probability.
Let $P_{bag} = P(X|B)$, $P_{pos} = P(X|pos)$ and $P_{neg} = P(X|neg)$.
Denote the divergence between an unlabelled bag and the reference distribution, $P_{ref}$, by $D(P_{bag}, P_{ref})$.
As a motivating example, consider the following: A positive bag, $P_a$, is a continuous uniform distribution $\mathcal{U}(a, a + \delta)$, sampled according to $P(A) = \mathcal{U}(\eta, \zeta - \delta)$.
A negative bag, $P_{a'}$, is $\, \mathcal{U}(a', a'+\delta')$ sampled according to $P(A') = \mathcal{U}(\eta', \zeta'-\delta')$, and let $ \eta' < \zeta$ so that there is an overlap between the two classes.
For both positive and negative bags, we have that $P_{pos}/P_{bag} = \infty$ for a subspace of $\mathcal{X}$ and $P_{neg}/P_{bag} = \infty$ for a different subspace of $\mathcal{X}$, merely reflecting that the variability in instances within a class is larger than within a bag, as illustrated in Fig.~\ref{fig:Uniform}.
\begin{figure}[!h]
\centering
\includegraphics[width = 1\textwidth]{fig2.jpg}
\caption{The pdf of a bag with uniform distribution and the pdfs of the two classes.}
\label{fig:Uniform}
\end{figure}
If $P_{bag}$ is a sample from the negative class and $P_{bag}/P_{pos}= \infty$ for some subspace of $\mathcal{X}$, it can easily be classified.
From the above analysis, large bag-to-class ratio should be reflected in large divergence, whereas large class-to-bag ratio should not.
{\bf Property 1:} For the subspace of $\mathcal{X}$ where the bag-to-class ratio is larger than some $M$, the contribution to the total divergence, $D_{\mathcal{X}_M}$, approaches the maximum contribution as $M \rightarrow \infty$. For the subspace of $\mathcal{X}$ where the class-to-bag ratio is larger than $M$, the contribution to the total divergence, $D_{\mathcal{X}_M^*}$, {\it does not} approach the maximum contribution as $M \rightarrow \infty$:
\begin{align*}
\begin{split}
\mathcal{X}_M: & P_{bag}/P_{ref} > M, \, \, \, \,
\mathcal{X}_M^*: P_{ref}/P_{bag} > M \\
M \rightarrow \infty :&
\begin{cases}
D_{\mathcal{X}_M}(P_{bag},P_{ref}) \rightarrow \max(D_{\mathcal{X}_M}(P_{bag},P_{ref})) \\
D_{\mathcal{X}_M^*}(P_{bag},P_{ref})\centernot \rightarrow \max(D_{\mathcal{X}_M^*}(P_{bag},P_{ref})).
\end{cases}
\end{split}
\end{align*}
Property 1 cannot be fulfilled by a symmetric divergence.
As a second motivating example, consider the same positive class as before, and the two alternative negative classes defined by:
\begin{align*}
\begin{aligned}
P_{neg} =
\begin{cases}
P(A'= \eta') = 0.5\\
P(A'= \eta'+2\delta') = 0.5
\end{cases}
\end{aligned}
&&
\begin{aligned}
P_{neg'} =
\begin{cases}
P(A'= \eta') = 0.5\\
P(A'= \eta' + 2\delta') = 0.25 \\
P(A'= \eta' + 3\delta') = 0.25.
\end{cases}
\end{aligned}
\end{align*}
For bag classification, the question becomes: from which class is a specific bag sampled?
It is equally probable that a bag $P_{\eta'} = P(X|A'= \eta')$ comes from each of the two negative classes, since $P_{neg}$ and $P_{neg'}$ only differ where $P_{\eta'} = 0$, and we argue that $D(P_{\eta'},P_{neg})$ should be equal to $D(P_{\eta'},P_{neg'})$.
{\bf Property 2:} For the subspace of $\mathcal{X}$ where $P_{bag}$ is smaller than some $\epsilon$, the contribution to the total divergence, $D_{\mathcal{X}_\epsilon}$, approaches zero as $\epsilon \rightarrow 0$:
\begin{align*}
\begin{split}
\mathcal{X}_\epsilon &: P_{bag} < \epsilon, \, \epsilon > 0 \\
\epsilon \rightarrow 0 & : D_{\mathcal{X}_\epsilon}(P_{bag},P_{ref}) \rightarrow 0.
\end{split}
\end{align*}
The KL information is the only divergence that fulfils these two properties among the non-symmetric divergences listed in \cite{Taneja2006Generalized}.
As there is no complete list of divergences, it is possible that other divergences that the authors are not aware of fulfil these properties.
\subsection{A class-conditional dissimilarity for MI classification} \label{sec:Classconditional}
In the {\it sea} and {\it sky} images example, consider an unlabelled image with a {\it pink} segment, e.g.\ a boat.
If {\it pink} is absent in the training set, then the bag-to-class KL information will be infinite for both classes.
We therefore propose the following property:
{\bf Property 3:} For the subspace of $\mathcal{X}$ where both class probabilities are smaller than some $\epsilon$, the contribution to the total divergence, $D_{\mathcal{X}_\epsilon}$, approaches zero as $\epsilon \rightarrow 0$:
\begin{align*}
\begin{split}
\mathcal{X}_\epsilon &: P_{ref} < \epsilon, \, P'_{ref} < \epsilon \\
\epsilon \rightarrow 0 & : D_{\mathcal{X}_\epsilon}(P_{bag},P_{ref}) \rightarrow 0.
\end{split}
\end{align*}
We present a class-conditional dissimilarity that accounts for this:
\begin{align} \label{eq:ClassConditional}
{cKL}(f_{bag},f_{pos}|f_{neg}) = \int \frac{f_{neg}(\mathbf{x})}{f_{pos}(\mathbf{x})} f_{bag}(\mathbf{x}) \log \frac{f_{bag}(\mathbf{x})}{f_{pos}(\mathbf{x})} d\mathbf{x},
\end{align}
which also fulfils Properties 1 and 2.
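A numerical approximation of the class-conditional dissimilarity in Eq.~\ref{eq:ClassConditional} follows the same pattern as for the KL information. The Python sketch below (added for illustration, with arbitrary Gaussian class and bag densities and hypothetical helper names) weights the bag-to-class KL integrand by the ratio $f_{neg}/f_{pos}$.
\begin{verbatim}
import numpy as np

def gauss_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def cKL(f_bag, f_pos, f_neg, dx, eps=1e-12):
    # class-conditional KL: the bag-to-class KL integrand weighted by f_neg / f_pos,
    # approximated by a Riemann sum over the grid
    mask = f_bag > eps
    w = f_neg[mask] / f_pos[mask]
    return np.sum(w * f_bag[mask] * np.log(f_bag[mask] / f_pos[mask])) * dx

x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]
f_bag = gauss_pdf(x, 1.0, 1.0)       # unlabelled bag
f_pos = gauss_pdf(x, 0.0, 4.0)       # positive-class density
f_neg = gauss_pdf(x, 5.0, 4.0)       # negative-class density
print(cKL(f_bag, f_pos, f_neg, dx))
\end{verbatim}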
\subsection{Bag-level divergence classification} \label{sec:Algorithm}
We propose two similar methods based on either the ratio of bag-to-class divergences, $rD\big(f_{bag}, f_{pos} ,f_{neg} \big) = D\big(f_{bag}(\mathbf{x}), f_{pos}(\mathbf{x}))\big) / D\big(f_{bag}(\mathbf{x}),f_{neg}(\mathbf{x})\big )$, or the class-conditional dissimilarity in Eq.~\ref{eq:ClassConditional}.
We propose using the KL information (Eq.~\ref{eq:KLinformation}) or the Bhattacharyya distance (Eq.~\ref{eq:BHdistance}), but any divergence function can be applied.
Given a training set $\{(\mathbb{X}_1, y_1), \ldots, (\mathbb{X}_k, y_k), \ldots, (\mathbb{X}_K,y_K)\}$ and a set, $\mathbb{X}_{bag}$, of instances drawn from an unknown distribution, $f_{bag}(\mathbf{x})$, with unknown class label $y_{bag}$, let $\mathbb{X}_{neg (pos)}$ denote the set of all $\mathbf{x}_{ik} \in \big (\mathbb{X}_k, y_k = neg (pos)\big )$. The bag-level divergence classification follows the steps:
\begin{align} \label{eq:Algorithm}
1. & \text{ Estimate pdfs: Fit }\hat{f}_{neg}(\mathbf{x}) \text{ to } \mathbb{X}_{neg} \text{, } \hat{f}_{pos}(\mathbf{x}) \text{ to } \mathbb{X}_{pos}, \text{ and } \hat{f}_{bag}(\mathbf{x}) \text{ to } \mathbb{X}_{bag}. \nonumber\\
2. & \text{ Calculate divergences: } D\big(\hat{f}_{bag}(\mathbf{x}), \hat{f}_{neg}(\mathbf{x}))\big) \text{ and } D\big(\hat{f}_{bag}(\mathbf{x}),\hat{f}_{pos}(\mathbf{x})\big ), \nonumber \\
& \text{ or } cKL\big (\hat{f}_{bag}(\mathbf{x}), \hat{f}_{pos}(\mathbf{x})| \hat{f}_{neg}(\mathbf{x})\big ) \text{ by integral approximation. } \nonumber \\
3. & \text{ Classify according to: } \\
& y_{bag} =
\begin{cases}
pos \text{ if } rD\big(\hat{f}_{bag}, \hat{f}_{pos} ,\hat{f}_{neg} \big) < t \nonumber \\
neg \text{ otherwise. }
\end{cases}\\
& \text{ or } \nonumber \\
& y_{bag} =
\begin{cases}
pos \text{ if } cKL\big (\hat{f}_{bag}, \hat{f}_{pos}| \hat{f}_{neg}\big ) < t \nonumber \\
neg \text{ otherwise. }
\end{cases}
\end{align}
Common methods for pdf estimation are Gaussian mixture models (GMMs) and kernel density estimation (KDE).
The integrals in step 2 are commonly approximated by importance sampling or Riemann sums. In rare cases, e.g.\ when the distributions are Gaussian, the divergences can be calculated directly.
The threshold $t$ can be pre-defined based on, e.g.\ misclassification penalty and prior class probabilities, or estimated from the training set by leave-one-out cross-validation.
When the feature dimension is high and the number of instances in each bag is low, pdf estimation becomes unreliable.
A solution is to estimate separate pdfs for each dimension, calculate the corresponding divergences $D_1, \ldots, D_{Dim}$, and treat them as inputs to a classifier that replaces step 3.
Code is available at https://github.com/kajsam/Bag-to-class-divergence.
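As a minimal illustration of steps 1--3 in (\ref{eq:Algorithm}), the sketch below implements the classifier for one-dimensional instances in Python, assuming SciPy's Gaussian KDE in place of the kernels discussed above; all function names are ours and do not correspond to the published code.
\begin{verbatim}
# Minimal sketch of the bag-level divergence classifier for 1-D instances.
# Gaussian KDE (SciPy) stands in for the kernels discussed in the text;
# evaluation points are drawn from the fitted bag density, so Monte Carlo
# means approximate expectations under f_bag.
import numpy as np
from scipy.stats import gaussian_kde

def divergences(bag, X_neg, X_pos, n_imp=2000, eps=1e-12):
    f_bag, f_neg, f_pos = (gaussian_kde(v) for v in (bag, X_neg, X_pos))
    z = f_bag.resample(n_imp).ravel()
    fb, fn, fp = f_bag(z) + eps, f_neg(z) + eps, f_pos(z) + eps
    kl_pos = np.mean(np.log(fb / fp))            # KL(f_bag || f_pos)
    kl_neg = np.mean(np.log(fb / fn))            # KL(f_bag || f_neg)
    ckl = np.mean((fn / fp) * np.log(fb / fp))   # class-conditional KL
    return kl_pos / kl_neg, ckl                  # (rKL, cKL)

def classify(bag, X_neg, X_pos, t=1.0, rule="cKL"):
    rkl, ckl = divergences(bag, X_neg, X_pos)
    score = ckl if rule == "cKL" else rkl
    return "pos" if score < t else "neg"
\end{verbatim}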
\section{Experiments} \label{sec:Data}
\subsection{Simulated data} \label{sec:Sim}
The following study exemplifies the difference between BH distance ratio, $rBH$, KL information ratio, ${rKL}$, and $cKL$ as classifiers for sparse training data.
The minimum dissimilarity bag-to-bag classifiers are also implemented, based on KL information and BH distance.
The number of instances from each bag is $50$, the number of bags in the training set is varied from $1$ to $25$ from each class, and the number of bags in the test set is $100$.
Each bag and its instances are sampled as described in Eq.~\ref{eq:Hierarchical}, and the area under the receiver operating characteristic (ROC) curve (AUC) serves as performance measure.
For simplicity, we use Gaussian distributions in one dimension for {\it Sim 1}-{\it Sim 4}:
\begin{align*}
\begin{aligned}
X^- & \sim \mathcal{N} (\mu^-, \sigma^{2-}) \\
\mu^- &\sim \mathcal{N} (0,10)\\
\sigma^{2-} & = |\zeta^-|, \, \zeta^- \sim \mathcal{N} (1,1) \\
\Pi^- & = \pi^-
\end{aligned}
&&
\begin{aligned}
X^+ & \sim \mathcal{N} (\mu^+, \sigma^{2+}) \\
\mu^+ & \sim \mathcal{N} (\nu^+,10) \\
\sigma^{2+} & = |\zeta^+|, \, \zeta^+ \sim \mathcal{N} (\eta^+,1)\\
\Pi^+ & = 0.10.
\end{aligned}
\end{align*}
\noindent {\it Sim 1:} $\nu^+ = 15, \, \eta^+ = 1, \, \pi^- = 0$:
No positive instances in negative bags. \\
{\it Sim 2:} $\nu^+ = 15, \, \eta^+ = 1, \, \pi^- = 0.01$:
Positive instances in negative bags.\\
{\it Sim 3:} $\nu^+ = 0, \, \eta^+ = 100, \, \pi^- = 0$:
Positive and negative instances have the same expectation of the mean, but unequal variance. \\
{\it Sim 4:} $P(\nu^+= 15) = P(\nu^+ = -15) = 0.5, \, \eta^+ = 1, \, \pi^- = 0.01$:
Positive instances are sampled from two distributions with unequal mean expectation.
We add {\it Sim 5} and {\it Sim 6} for the discussion on instance labels in Section~\ref{sec:Discussion}, as follows:
{\it Sim 5} is an uncertain object classification problem, where the positive bags are lognormal densities with $\mu = \log(10)$ and $\sigma^2 = 0.04$, and the negative bags are Gaussian mixture densities with $\mu_1 = 9.5$, $\mu_2 = 13.5$, $\sigma^2 = 2.5$, and $\pi_1 = 0.9$.
These two densities are nearly identical, see \cite[p.\ 15]{McLachlan2000Finite}.
In {\it Sim 6}, the parameters of {\it Sim 5} are i.i.d.\ observations from Gaussian distributions, each with $\sigma^2 = 1$ for the Gaussian mixture, and $\sigma^2 = 0.04$ for the lognormal distribution.
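For concreteness, the sketch below shows one possible way to sample a single bag for {\it Sim 2}, under our reading of the hierarchical model: the second argument of $\mathcal{N}(\cdot,\cdot)$ is taken as a variance and $\Pi$ as the per-instance probability of drawing from the positive component; the helper name is ours.
\begin{verbatim}
# Sketch of sampling one Sim 2 bag (50 instances); parameter names follow
# the text, the helper itself is ours.  N(., .) is read as (mean, variance).
import numpy as np
rng = np.random.default_rng(0)

def sample_bag(label, n_inst=50, nu_pos=15.0, eta_pos=1.0,
               pi_neg=0.01, pi_pos=0.10):
    mu_neg  = rng.normal(0.0, np.sqrt(10.0))
    var_neg = abs(rng.normal(1.0, 1.0))
    mu_pos  = rng.normal(nu_pos, np.sqrt(10.0))
    var_pos = abs(rng.normal(eta_pos, 1.0))
    pi = pi_pos if label == "pos" else pi_neg
    from_pos = rng.random(n_inst) < pi           # positive-instance indicator
    return np.where(from_pos,
                    rng.normal(mu_pos, np.sqrt(var_pos), n_inst),
                    rng.normal(mu_neg, np.sqrt(var_neg), n_inst))

train_neg = [sample_bag("neg") for _ in range(10)]
train_pos = [sample_bag("pos") for _ in range(10)]
\end{verbatim}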
Figure~\ref{fig:SparseTraining} shows the estimated class densities and two estimated bag densities for {\it Sim 2} with $10$ negative bags in the training set.
\begin{figure}[!h]
\centering
\subfloat[]{
\includegraphics[width = 1\textwidth]{fig3a.jpg}} \\
\subfloat[]{
\includegraphics[width = 1\textwidth]{fig3b.jpg}}
\caption{(a) One positive bag in the training set gives a small variance for the class pdf. (b) With ten positive bags in the training set, the variance has increased.}
\label{fig:SparseTraining}
\end{figure}
We use the following details for the algorithm in (\ref{eq:Algorithm}): KDE fitting: Epanechnikov kernel with estimated bandwidth varying with the number of observations. Integrals: Importance sampling.
Classifier: $t$ is varied to give the full range of sensitivities and specificities necessary to calculate AUC.
Table~\ref{tab:Simulations1} shows the mean AUCs for $50$ repetitions.
\begin{table}[h!]
\centering
\caption{AUC$\cdot100$ for simulated data.}
\label{tab:Simulations1}
\begin{tabular}{|c|c||c|c|c||c|c|c||c|c|c|}
\hline
& Bags & \multicolumn{3}{c||}{$neg$: 5} & \multicolumn{3}{c||}{$neg$: 10} & \multicolumn{3}{c|}{$neg$: 25} \\
\hline
Sim: & {$pos$}: & ${rBH}$ & ${rKL}$ & ${cKL}$ & ${rBH}$ & ${rKL}$ & ${cKL}$ & ${rBH}$ & ${rKL}$ & ${cKL}$ \\
\hline
& 1 & 61 & 69 & 85 & 62 & 72 & 89 & 61 & 73 & 92 \\
1& 5 & 63 & 75 & 86 & 64 & 82 & 94 & 68 & 84 & 97 \\
& 10 & 69 & 86 & 87 & 73 & 91 & 95 & 75 & 91 & 98 \\
\hline
& 1 & 57 & 61 & 75 & 59 & 61 & 78 & 58 & 55 & 75 \\
2 & 5 & 59 & 67 & 79 & 60 & 68 & 84 & 62 & 63 & 85 \\
& 10 & 64 & 77 & 80 & 66 & 78 & 86 & 68 & 72 & 86 \\
\hline
& 1 & 51 & 55 & 71 & 52 & 58 & 73 & 50 & 57 & 74 \\
3 & 5 & 53 & 61 & 76 & 53 & 66 & 81 & 52 & 65 & 83 \\
& 10 & 58 & 73 & 78 & 58 & 76 & 84 & 57 & 76 & 87 \\
\hline
& 1 & 55 & 61 & 70 & 56 & 62 & 73 & 56 & 58 & 69 \\
4& 5& 56 & 63 & 75 & 57 & 64 & 81 & 59 & 59 & 80 \\
& 10 & 60 & 74 & 77 & 62 & 76 & 85 & 63 & 69 & 84 \\
\hline
& 1 & 64 & 61 & 62 & 67 & 63 & 66 & 64 & 62 & 67 \\
5& 5 & 73 & 69 & 63 & 74 & 70 & 67 & 75 & 71 & 72 \\
& 10 & 74 & 70 & 62 & 75 & 73 & 69 & 76 & 74 & 72 \\
\hline
& 1 & 68 & 68 & 67 & 66 & 68 & 68 & 68 & 71 & 68 \\
6& 5 & 65 & 64 & 67 & 68 & 68 & 69 & 70 & 71 & 74 \\
& 10 & 66 & 64 & 66 & 70 & 69 & 72 & 72 & 73 & 74 \\
\hline
\end{tabular}
\end{table}
\subsection{Breast tissue images}
Breast tissue images (see Fig.~\ref{fig:Breast}) with corresponding feature vectors are used as example.
Following the procedure in \cite{Kandemir2014Empowering}, the principal components are used for dimension reduction, and $4$-fold cross-validation is used so that $\hat{f}_{neg}(x)$ and $\hat{f}_{pos}(x)$ are fitted only to the instances in the training folds.
For pdf estimation, GMMs are fitted to the first principal component, using an EM-algorithm, with number of components chosen by minimum AIC.
In addition, KDE as in Section~\ref{sec:Sim} and KDE with a Gaussian kernel and optimal bandwidth \cite{Sheather1991Reliable} are used.
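As a rough sketch of this estimation step, assuming scikit-learn and with all helper names ours, the minimum-AIC GMM fit on the first principal component could look as follows.
\begin{verbatim}
# Sketch of the pdf estimation for the breast tissue study: project onto the
# first principal component and fit a GMM whose number of components is
# chosen by minimum AIC.  scikit-learn is used purely for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def first_pc(instances):
    return PCA(n_components=1).fit_transform(instances).ravel()

def fit_gmm_min_aic(x, max_components=5):
    x = np.asarray(x).reshape(-1, 1)
    fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
            for k in range(1, max_components + 1)]
    return min(fits, key=lambda g: g.aic(x))
\end{verbatim}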
\begin{table}[h!]
\centering
\caption{AUC$\cdot 100$ for breast tissue images.}
\label{tab:Breast}
\begin{tabular}{|c|c|c|c|}
\hline
& KDE (Epan.) & KDE (Gauss.) & GMMs \\
\hline
${cKL}$ & 90 & 92 & 94 \\
\hline
${rKL}$ & 82 & 92 & 96 \\
\hline
\end{tabular}
\end{table}
\subsection{Benchmark data} \label{sec:Benchmark}
We here present the results for 7 benchmark datasets\footnote{https://figshare.com/articles/MIProblems\_A\_repository\_of\_multiple\_instance\_learning\_datasets/6633983} together with the results of five other methods as reported in the cited publications.
The datasets have relatively few instances per bag compared to the dimensionality.
For detailed descriptions and references, see \cite{Cheplygina2015Multiple}.
We use the following details for the algorithm in (\ref{eq:Algorithm}):
KDE fitting: Gaussian kernel with optimal bandwidth. Integrals: Importance sampling. Classifier: Support vector machine (SVM) with linear kernel. \\
\texttt{
for d = 1: Dim \\
\indent Fit $f_{imp,d}(x)$ to $\mathbb{X}_{bag,d}$ and sample $\mathbf{z} = [z_{1}, \ldots, z_{n_{imp}}]$ from $f_{imp,d}(x)$. \\
\indent 1. Fit $\hat{f}_{neg,d}(x)$, $\hat{f}_{pos,d}(x)$, $\hat{f}_{bag,d}(x)$ using KDE. \\
\indent 2. Approximate $r\hat{D}_d$ or $c\hat{D}_d$ using $f_{imp,d}(x)$ and $\mathbf{z}$.\\
end \\
3. Input $r\hat{\mathbf{D}} = [r\hat{D}_1, \ldots, r\hat{D}_{Dim}]$ or $c\hat{\mathbf{D}} = [c\hat{D}_1, \ldots, c\hat{D}_{Dim}]$ to SVM.
}
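A hedged Python counterpart of the pseudocode above, with SciPy's Gaussian KDE and scikit-learn's linear SVM as stand-ins and all names ours, is sketched below.
\begin{verbatim}
# Sketch of the per-dimension variant: one divergence per feature dimension,
# and the resulting vector replaces step 3 as input to a linear SVM.
# bag, X_neg and X_pos are arrays of shape (n_instances, Dim).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import SVC

def ckl_1d(bag_d, neg_d, pos_d, n_imp=1000, eps=1e-12):
    f_bag, f_neg, f_pos = (gaussian_kde(v) for v in (bag_d, neg_d, pos_d))
    z = f_bag.resample(n_imp).ravel()
    fb, fn, fp = f_bag(z) + eps, f_neg(z) + eps, f_pos(z) + eps
    return np.mean((fn / fp) * np.log(fb / fp))

def bag_features(bag, X_neg, X_pos):
    return np.array([ckl_1d(bag[:, d], X_neg[:, d], X_pos[:, d])
                     for d in range(bag.shape[1])])

# Training stage (illustrative):
# features = np.vstack([bag_features(b, X_neg, X_pos) for b in train_bags])
# clf = SVC(kernel="linear").fit(features, train_labels)
\end{verbatim}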
10 times 10-fold cross-validation is used, except for the {\it 2000-Image} dataset, where 5 times 2-fold cross-validation is used as in \cite{Wei2017Scalable} and \cite{Zhou2009Multiinstance}.
In \cite{Cheplygina2016DissimilarityBased}, one 10-fold cross-validation was performed, and the standard error was reported. In \cite{Wang2018Revisiting}, 5 times 10-fold cross-validation was performed. In \cite{Qiao2017Diversified}, several parameters are optimised for each data set, which prevents a fair comparison, and there was no reported deviation/error.
The accuracies and the standard deviations are presented in Table~\ref{tab:Benchmark1} and Table~\ref{tab:Benchmark2}, where the highest accuracies for each data set and those within one standard deviation are marked in bold.
\begin{table}[h!]
\centering
\caption{Accuracy and standard deviation/error for benchmark data sets.}
\label{tab:Benchmark1}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \small Musk1 & \small Musk2 & {\small Fox} & {\small Tiger} & {\small Elephant} \\
\hline
\footnotesize{MI-Net(DS)\cite{Wang2018Revisiting}} & \footnotesize {\bf 89.4} (9.3) & \footnotesize 87.4 (9.7) & \footnotesize 63.0 (8.0) & \footnotesize {\bf 84.5} (8.7) & \footnotesize {\bf 87.2} (7.2) \\
\hline
\footnotesize{miFV$_{def}$\cite{Wei2017Scalable}} & \footnotesize {\bf 87.5} (10.6) & \footnotesize 86.1 (10.6) & \footnotesize 56.0 (9.9) & \footnotesize 78.9 (9.1) & \footnotesize 78.9 (9.1) \\
\hline
\footnotesize{miGraph\cite{Zhou2009Multiinstance}} & \footnotesize {\bf 88.9} (3.3) & \footnotesize {\bf 90.3} (2.6) & \footnotesize 61.6 (2.8) & \footnotesize {\bf 86.0} (1.6) & \footnotesize {\bf 86.8} (0.7)\\
\hline
\footnotesize{$D^{RS}$\cite{Cheplygina2016DissimilarityBased}} & \footnotesize {\bf 89.3} (3.4) & \footnotesize 85.5 (4.7) & \footnotesize 64.4 (2.2) & \footnotesize 81.0 (4.6) & \footnotesize {\bf 80.4} (3.5)\\
\hline
\footnotesize{$DivDict$\cite{Qiao2017Diversified}} & \footnotesize 87.7 & \footnotesize 89.1 & \footnotesize 65.0 & \footnotesize 80.0 & \footnotesize 90.67 \\
\hline
\footnotesize{rBH} & \footnotesize 64.4 (3.1) & \footnotesize 69.2 (3.2) & \footnotesize {\bf 71.5} (1.2) & \footnotesize 70.1 (1.3) & \footnotesize {\bf 81.7} (1.7) \\
\hline
\footnotesize{cKL} & \footnotesize 74.0 (1.9) & \footnotesize 69.9 (2.0) & \footnotesize 65.8 (2.1) & \footnotesize {\bf 85.0} (1.4) & \footnotesize 71.1 (3.3)\\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{Accuracy and standard deviation/error for benchmark data sets.}
\label{tab:Benchmark2}
\begin{tabular}{|c|c|c|c|}
\hline
& {\small 2000 - Image} & {\small Alt.atheism} \\
\hline
\footnotesize{MI-Net(DS)} & - & \footnotesize {\bf 86.0} (13.4)\\
\hline
\footnotesize{miFV$_{def}$} & \footnotesize {\bf 87.5} (7.2) & - \\
\hline
\footnotesize{miGraph} & \footnotesize 72.1 & \footnotesize{65.5 (4.0)} \\
\hline
\footnotesize{$D^{RS}$} & - & \footnotesize{44.0 (4.5)} \\
\hline
\footnotesize{rBH} & \footnotesize {\bf 90.0} (6.4) & \footnotesize 62.0 (2.6)\\
\hline
\footnotesize{cKL} & \footnotesize 80.1 (10.5)& \footnotesize {\bf 85.5} (1.4)\\
\hline
\end{tabular}
\end{table}
\subsection{Results} \label{sec:Results}
The general trend in Table~\ref{tab:Simulations1} is that $cKL$ gives higher AUC than $rKL$, which in turn gives higher AUC than $rBH$, in line with the divergences' properties for sparse training sets.
The same trend can be seen with a Gaussian kernel and optimal bandwidth (numbers not reported).
The gap between $cKL$ and $rKL$ narrows with larger training sets.
In other words, the benefit of $cKL$ increases with sparsity.
This can be explained by the $\infty/\infty$ risk of $rKL$, as seen in Figure~\ref{fig:SparseTraining}(a).
Increasing $\pi^+$ also narrows the gap between $rKL$ and $cKL$, and eventually (at approximately $\pi^+ = 0.25$), $rKL$ outperforms $cKL$ (numbers not reported).
{\it Sim 1} and {\it Sim 3} are less affected because the ratio $\pi^+/\pi^-$ is already $\infty$.
The minimum bag-to-bag classifier gives a single sensitivity-specificity outcome, and the KL information outperforms the BH distance.
Compared to the ROC curve, as illustrated in Fig.~\ref{fig:ROC_SEP}, the minimum bag-to-bag KL information classifier exceeds the bag-to-class dissimilarities only for very large training sets, typically of 500 bags or more, and then at the expense of extensive computation time.
\begin{figure}[!h]
\centering
\includegraphics[height = 0.5\textheight]{fig4.jpg}
\caption{An example of ROC curves for $cKL$, $rKL$ and $rBH$ classifiers. The performance increases when the number of positive bags in the training set increases from $1$ (dashed line) to $10$ (solid line). The sensitivity-specificity pairs for the bag-to-bag KL and BH classifiers are displayed for $100$ positive and negative bags in the training set for comparison.}
\label{fig:ROC_SEP}
\end{figure}
{\it Sim 5} is an example in which the absolute difference, and not the ratio, differentiates the two classes, and $rBH$ has the superior performance.
When the extra hierarchy level is added in {\it Sim 6}, the three classifiers perform comparably.
The breast tissue study shows that the simple divergence-based approach can outperform more sophisticated algorithms.
$rKL$ is more sensitive than $cKL$ to choice of density estimation method.
$rKL$ performs better than $cKL$ with GMM, and both exceed the AUC of $0.90$ of the original algorithm.
Table~\ref{tab:Breast} shows how the performance can vary between two common pdf estimation methods that do not assume a particular underlying distribution.
Both KDE and GMM are sensitive to chosen parameters or parameter estimation method, bandwidth and number of components, respectively, and no method will fit all data sets.
In general, KDE is faster, but more sensitive to bandwidth, whereas GMM is more stable.
For bags with very few instances the benefits of GMM cannot be exploited, and KDE is preferred.
The benchmark data study shows that the proposed method combined with a standard classifier obtains results comparable with state-of-the-art algorithms, with the exception of the Musk data sets, where the number of instances per bag is low. In $Musk1$, more than half of the bags contain fewer than 5 instances, and in $Musk2$, one fourth of the bags contain fewer than 5 instances.
Few instances per bag prevent good distribution estimation, and since the proposed method is based on bag distributions, the result is not surprising.
The algorithms perform in the same range, although they are conceptually very different:
{\it MI-Net} is a neural network approach, {\it miFV} is a kernel approach, {\it miGraph} is a graph approach, {\it D$^{RS}$} is a dissimilarity approach, and {\it DivDict} is a diverse dictionary approach.
\section{Discussion} \label{sec:Discussion}
\subsection{Point-of-view} \label{sec:Point}
The theoretical basis of the bag-to-class divergence approach relies on viewing a bag as a probability distribution, and hence fits into the branch of collective assumptions of the Foulds and Frank taxonomy \cite{Foulds2010Review}.
The probability distribution estimation can be seen as extracting bag-level information from a set $\mathbb{X}$, and hence falls into the BS paradigm of Amores \cite{Amores2013Multiple}.
The probability distribution space is non-vectorial, different from the distance-kernel spaces in \cite{Amores2013Multiple}, and divergences are used for classification.
In practice, the evaluation points of the importance sampling give a mapping from the set $\mathbb{X}$ to a single vector, $\hat{f}_{bag}(\mathbf{z})$.
The mapping concurs with the ES paradigm, and the same applies for the graph-based methods.
From that viewpoint, the bag-to-class divergence approach expands the distance branch of Foulds and Frank to include a bag-to-class category in addition to instance-level and bag-level distances.
However, the importance sampling is a technicality of the algorithm, and we argue that the method belongs to the BS paradigm.
When the divergences are used as input to a classifier as in Section~\ref{sec:Benchmark}, the ES paradigm is a better description.
Carbonneau et al.~\cite{Carbonneau2018Multiple} assume underlying instance labels, and from a probability distribution viewpoint this corresponds to posterior probabilities, which are in practice inaccessible.
In {\it Sim 1 - Sim 4}, the instance labels are inaccessible through observations without previous knowledge about the distributions.
In {\it Sim 6}, the instance label approach is not useful, due to the similarity between the two distributions:
\begin{align} \label{eq:NoInstance}
\begin{aligned}
X|\Theta^+ & \sim P(X | \Theta^+) \\
\Theta^+ & \sim P(\Theta^+)
\end{aligned}
&&
\begin{aligned}
X|\Theta^- & \sim P(X | \Theta^-) \\
\Theta^- & \sim P(\Theta^-),
\end{aligned}
\end{align}
where $P(X | \Theta^+)$ and $P(X | \Theta^-)$ are the lognormal and the Gaussian mixture, respectively.
Eq.~\ref{eq:Hierarchical} is just a special case of Eq.~\ref{eq:NoInstance}, where $\Theta^+$ is the random vector $\{ \Theta , \Pi_{pos}\}$.
Without knowledge about the distributions, discriminating between training sets following the generative model of Eq.~\ref{eq:Hierarchical} and Eq.~\ref{eq:NoInstance} is only possible for a limited number of problems.
Even the uncertain objects of {\it Sim 5} are difficult to discriminate from MI objects based solely on the observations in the training set.
\subsection{Conclusions and future work}
Although the bag-to-bag KL information has the minimum misclassification rate, the typical bag sparseness of MI training sets is an obstacle. This is partly overcome by bag-to-class dissimilarities, and the proposed class-conditional KL information accounts for additional sparsity of bags.
The bag-to-class divergence approach addresses three main challenges in MI learning.
(1) Aggregation of instances according to bag label, together with the additional class-conditioning, provides a solution to the bag sparsity problem.
(2) The bag-to-bag approach suffers from extensive computation time, which is avoided by the bag-to-class approach.
(3) Viewing bags as probability distributions gives access to analytical tools from statistics and probability theory, and comparisons of methods can be done on a data-independent level through identification of properties.
The properties presented here are not an exhaustive list, and any extra knowledge should be taken into account whenever available.
A more thorough analysis of the proposed function, $cKL$, will identify its weaknesses and strengths, and can lead to improved versions as well as alternative class-conditional dissimilarity measures and a more comprehensive tool.
The diversity of data types, assumptions, problem characteristics, sampling sparsity, etc. is far too large for any one approach to be sufficient.
The introduction of divergences as an alternative class of dissimilarity functions, and of the bag-to-class dissimilarity as an alternative to the bag-to-bag dissimilarity, has added new tools to the MI toolbox.
\section*{Acknowledgements}
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
\section*{Bibliography}
\section{Introduction}
\label{sec::intro}
Even though optical vortices were studied theoretically as early as the 1940s~\cite{Stratton1941},
their usefulness in modern physics and technology has been realized only recently. Both theoretical and experimental advances in the field of optical and
matter vortex waves have been reviewed in recent articles devoted to light~\cite{Review3} and electrons~\cite{Review1,Review2}, and in the reviews
on Bose-Einstein condensates~\cite{Fetter2009,SRHM2010} and quantum fluids of light~\cite{CC2013}.
In this paper, we consider the electron vortex states (EVS) generated in strong-field ionization. EVS in quantum mechanics can
arise in various scenarios. For instance, they manifest themselves in the quantum Hall, De~Haas-van~Alphen, and Shubnikov-De~Haas effects as
collective properties of condensed-matter electrons in solids~\cite{Kittel1987,Abrikosov1988,Marder2010}. Related to this is the appearance
of impurity resonant states in two-dimensional quantum wells~\cite{KKvortex1,KKvortex2}. Such states, observed in crossed magnetic and electric fields,
have the vortex-like structure. Positions of their singularities (i.e., points at which the electron wave function vanishes and
its phase is not uniquely defined~\cite{BCK}) can be controlled by external fields. Namely, for certain
field parameters, the vortex singularities can be aligned and the usually short-lived resonances become long-lived
ones~\cite{KKvortex1,KKvortex2}. Note that the propagation of EVS in magnetic fields has also been discussed in both the Aharonov-Bohm
and Landau configurations~\cite{BSVN2012}, showing their distinctive phase properties. The creation of EVS in angle-resolved
photoemission of electrons from solids and their relation to the Berry phase has been studied in Ref.~\cite{TN2015}.
Moreover, the Stern-Gerlach-type measurement of electrons with large orbital angular momenta has been analyzed~\cite{HGM2017}.
It is particularly important in light of the current paper that EVS can be generated in laser-assisted quantum processes, such as scattering~\cite{SSF2014} and
ionization~\cite{DHMMMS2015,DGGSVB2016,DMMHMS2017,PKJEBW2017} in strong laser fields. Supplementary to these investigations is the analysis of electron
recombination~\cite{MHSSF2014} and scattering~\cite{Ivanov2012,ISSF2016,KKSS2017} in the absence of the laser pulse, or the propagation of EVS
in a strong laser wave~\cite{BG2005,Karlovets2012,HMASF2014,BBC2015}. In addition, free-electron vortex states have been studied recently in~\cite{BDN2011,BB2017,Barnett2017}.
The aim of this paper is to investigate ionization by intense and short laser pulses resulting in electron states of very large orbital angular momenta.
For this purpose, we shall focus on the high-energy portion of the ionization spectrum. In order to neglect spin effects, we limit ourselves
to nonrelativistic laser pulse intensities of roughly $10^{16}$~W/cm$^2$. The reason is that, for high-energy ionization by near infrared laser fields,
the spin corrections are marginal at these intensities. We find, however, that other corrections (such as the recoil and relativistic mass corrections) already play
a role and have to be incorporated into the nonrelativistic theory. Our new quasi-relativistic treatment is an extension of~\cite{KKpress}
and, in the regime of parameters considered in the current paper, agrees very well with the fully relativistic approach. In order to select optimal conditions for the generation of EVS we shall discuss the
concept of the ionization spiral, around which the probability distribution of high-energy ionization is peaked. We will show that, if the electron momenta
of vortex states follow the spiral, the EVS of large orbital angular momenta are created with significant probabilities.
The paper is organized as follows. While in Sec.~\ref{sec:transprob} we define the transition probabilities for arbitrarily
normalized states, in Sec.~\ref{sec::planewave} we apply this general scheme to the plane-wave states of well defined momenta. Some properties of EVS,
together with the notation used in this paper, are discussed in Sec.~\ref{sec:twistfree}. Also, the transition probabilities and amplitudes involving EVS
are discussed there. The generalization to the electron scattering vortex states for static and spherically symmetric potentials is elucidated in
Sec.~\ref{sec:ScatteringTwist}. Ionization of a one-electron system is discussed in Sec.~\ref{sec:probdist}. In particular, in Sec.~\ref{sec:theory},
we derive the exact differential probability distribution of ionization to a vortex state. The lowest-order Born approximation is discussed in Sec.~\ref{sec:CSFA},
together with the importance of the recoil and mass corrections. In Secs.~\ref{model} and~\ref{sec:comparison}, we define the shape of the laser
pulse and introduce two quasi-relativistic approximations. We show that, in the high-energy portion of the ionization spectrum (i.e., for kinetic energies of the order
of 1~keV or larger), the relativistic mass corrections become significant for the considered Ti:Sapphire laser field. Sec.~\ref{sec:twist} is devoted to the
creation of EVS. In order to generate such states efficiently in strong-field ionization, it is necessary to choose properly the parameters
of the final electron momenta. Namely, they have to follow the ionization spiral which is discussed in Sec.~\ref{sec:ionspiral}. Probability distributions
of EVS as well as their properties are analyzed in Sec.~\ref{sec:OAM}. Finally, in Sec.~\ref{sec:Conclusions} we summarize our results and draw perspectives
for further investigations.
Throughout the paper we keep $\hbar=1$. Unless otherwise stated, in our numerical
analysis we use relativistic units (rel. units) such that $\hbar=m_{\rm e}=c=1$, where $m_{\rm e}$ is the electron rest mass.
\section{Transition probabilities}
\label{sec:transprob}
Let us start with the most general situation when the time-evolution of a system is described by a unitary operator $\op{S}$, $(\op{S}^{\dagger}\op{S}=\op{I})$ and,
in the remote past, it is prepared in an initial state $\ket{\mathrm{in}}$ such that $\scal{\mathrm{in}}{\mathrm{in}}=N_{\mathrm{in}}<\infty$. We further assume that
in the far future the Hilbert space of the system is spanned by a set of orthogonal states $\ket{\Lambda}$,
\begin{equation}
\scal{\Lambda'}{\Lambda}=N_{\Lambda}\delta_{\Lambda'\Lambda},
\label{tp1}
\end{equation}
that satisfy the completeness relation,
\begin{equation}
\sum_{\Lambda}\frac{1}{N_{\Lambda}}\ket{\Lambda}\bra{\Lambda}=\op{I}.
\label{tp2}
\end{equation}
In general, $\Lambda$ is a multi-index labeling these states and it contains both continuous and discrete parameters. For the continuous parameters,
the symbol $\delta_{\Lambda'\Lambda}$ in~\eqref{tp1} has to be replaced by the Dirac delta distribution, whereas the sum over $\Lambda$ in Eq.~\eqref{tp2} refers to integration.
The transition probability amplitude from the initial $\ket{\mathrm{in}}$ to the final state $\ket{\Lambda}$ is defined as the matrix element
of the corresponding evolution operator $\op{S}$,
\begin{equation}
\mathcal{A}_{\mathrm{in}}(\Lambda)=\bra{\Lambda}\op{S}\ket{\mathrm{in}}.
\label{tp3}
\end{equation}
It follows from the unitarity of $\op{S}$ that these amplitudes satisfy the sum rule,
\begin{equation}
\sum_{\Lambda}\frac{1}{N_{\mathrm{in}}N_{\Lambda}}|\mathcal{A}_{\mathrm{in}}(\Lambda)|^2=1.
\label{tp4}
\end{equation}
Hence, the transition probabilities are equal to
\begin{equation}
\mathcal{P}_{\mathrm{in}}(\Lambda)=\frac{1}{N_{\mathrm{in}}N_{\Lambda}}|\mathcal{A}_{\mathrm{in}}(\Lambda)|^2.
\label{tp5}
\end{equation}
Of course, the Hilbert space of the system can be spanned by a different set of orthogonal and complete states $\ket{\Xi}$, labeled by a multi-index $\Xi$.
In this case, the corresponding transition probabilities are
\begin{equation}
\mathcal{P}_{\mathrm{in}}(\Xi)=\frac{1}{N_{\mathrm{in}}N_{\Xi}}|\mathcal{A}_{\mathrm{in}}(\Xi)|^2,
\label{tp6}
\end{equation}
with
\begin{equation}
\mathcal{A}_{\mathrm{in}}(\Xi)=\sum_{\Lambda}\frac{\scal{\Xi}{\Lambda}}{N_{\Lambda}}\mathcal{A}_{\mathrm{in}}(\Lambda).
\label{tp7}
\end{equation}
This defines how to transform the probability amplitudes when calculated in different bases. To illustrate this general approach, we consider now free-electron states.
\subsection{Free-electron plane-wave states}
\label{sec::planewave}
For a free electron, the plane-wave states $\ket{\bm{p}}$, where $\bm{p}$ is the electron momentum, are the most common choice of the states $\ket{\Lambda}$.
Their wave function in the position representation is
\begin{equation}
\scal{\bm{x}}{\bm{p}}=\ee^{\ii\bm{p}\cdot\bm{x}}.
\label{tp8}
\end{equation}
Hence,
\begin{align}
\scal{\bm{p}'}{\bm{p}}=\int \dd^3x\, \scal{\bm{p}'}{\bm{x}}\scal{\bm{x}}{\bm{p}}&=(2\pi)^3\delta^{(3)}(\bm{p}-\bm{p}'), \label{tp9}\\
\frac{1}{(2\pi)^3}\int\dd^3p\scal{\bm x}{\bm p}\scal{\bm p}{{\bm x}'}&=\delta^{(3)}(\bm{x}-\bm{x}'), \label{tp9new}
\end{align}
and the transition probability distribution,
\begin{equation}
\mathcal{P}_{\mathrm{in}}(\bm{p})=\frac{1}{(2\pi)^3N_{\mathrm{in}}}|\mathcal{A}_{\mathrm{in}}(\bm{p})|^2=\frac{1}{(2\pi)^3N_{\mathrm{in}}}|\bra{\bm{p}}\op{S}\ket{\mathrm{in}}|^2,
\label{tp10}
\end{equation}
satisfies the completeness relation,
\begin{equation}
\int\dd^3p\, \mathcal{P}_{\mathrm{in}}(\bm{p})=1.
\label{tp11}
\end{equation}
\subsection{Free-electron vortex states}
\label{sec:twistfree}
Now, we choose a different basis of free-electron states. In order to define them, we choose first an arbitrary unit vector in space $\bm{n}_{\|}$,
which is uniquely determined by the polar and azimuthal angles $\theta_{\mathrm{T}}$ and $\varphi_{\mathrm{T}}$, respectively. This vector together with
two other vectors, $\bm{n}_{\bot,1}$ and $\bm{n}_{\bot,2}$,
\begin{align}
\bm{n}_{\bot,1}=&\begin{pmatrix}
\cos\theta_{\mathrm{T}}\cos\varphi_{\mathrm{T}} \cr \cos\theta_{\mathrm{T}}\sin\varphi_{\mathrm{T}} \cr -\sin\theta_{\mathrm{T}}
\end{pmatrix}, \,
\bm{n}_{\bot,2}=\begin{pmatrix}
-\sin\varphi_{\mathrm{T}} \cr \cos\varphi_{\mathrm{T}} \cr 0
\end{pmatrix}, \nonumber \\
\bm{n}_{\|}=&\begin{pmatrix}
\sin\theta_{\mathrm{T}}\cos\varphi_{\mathrm{T}} \cr \sin\theta_{\mathrm{T}}\sin\varphi_{\mathrm{T}} \cr \cos\theta_{\mathrm{T}}
\end{pmatrix},
\label{twisttriad}
\end{align}
constitute a triad of right-handed orthogonal unit vectors~\cite{KK2014a,KK2014b}.
The new states are defined as free-electron states which are eigenvectors of $\op{L}_{\|}=\bm{n}_{\|}\cdot \opb{L}$, where $\opb{L}=\opb{x}\times\opb{p}$ is the
orbital angular momentum operator. We will refer to them as {\it free-electron vortex states}.
The triad of vectors~\eqref{twisttriad} defines a cylindrical coordinate system, in which a position vector $\bm{x}$ can be decomposed such that
\begin{equation}
\bm{x}=x_{\|}\bm{n}_{\|}+x_{\bot}(\bm{n}_{\bot,1}\cos\varphi_x+\bm{n}_{\bot,2}\sin\varphi_x),
\label{tw1}
\end{equation}
and similarly for a momentum vector ${\bm p}$. One can show that the free-electron vortex states $\ket{p_{\|},p_{\bot},m}$ (also called the twisted or Bessel states)
in position representation have the form
\begin{equation}
\scal{\bm{x}}{p_{\|},p_{\bot},m}=\ii^m \ee^{\ii p_{\|}x_{\|}}J_m(p_{\bot}x_{\bot})\ee^{\ii m\varphi_x},
\label{tw3}
\end{equation}
where the parallel and perpendicular components of the electron momentum are
\begin{equation}
p_{\|}=\bm{p}\cdot\bm{n}_{\|} \quad \textrm{and} \quad p_{\bot}=\sqrt{\bm{p}^2-p_{\|}^2},
\label{tw2}
\end{equation}
and the integer $m$ is called the topological charge. The free-electron wave functions~\eqref{tw3} fulfill the orthogonality condition~\eqref{tp1},
\begin{equation}
\scal{p'_{\|},p'_{\bot},m'}{p_{\|},p_{\bot},m}=\frac{(2\pi)^2}{p_{\bot}}\delta(p_{\|}-p'_{\|})\delta(p_{\bot}-p'_{\bot})\delta_{mm'},
\label{tw4}
\end{equation}
which follows from the property of the Bessel functions (see, e.g.,~\cite{TS1972,AG1993,Sneddon}),
\begin{equation}
\int_0^{\infty} x_{\bot}\dd x_{\bot}\, J_m(p'_{\bot}x_{\bot})J_m(p_{\bot}x_{\bot})=\frac{1}{p_{\bot}}\delta(p_{\bot}-p'_{\bot}).
\label{tw6}
\end{equation}
Hence, the following completeness relation~\eqref{tp2} for the wave functions~\eqref{tw3} holds
\begin{align}
\frac{1}{(2\pi)^2}\sum_{m=-\infty}^{\infty}\int_{-\infty}^{\infty}\dd p_{\|}&\int_0^{\infty}p_{\bot}\dd p_{\bot}
\scal{\bm{x}'}{p_{\|},p_{\bot},m}
\nonumber \\
\times &\scal{p_{\|},p_{\bot},m}{\bm{x}}=\delta^{(3)}(\bm{x}-\bm{x}').
\label{tw5}
\end{align}
Now, our aim is to determine the probability amplitude of a transition to a free-electron vortex state~\eqref{tw3} knowing the respective probability amplitudes
to the plane-wave states (see, Sec.~\ref{sec::planewave}). Since the EVS wave functions~\eqref{tw3} depend on the choice of the coordinate system, which is defined by
the angles $\theta_{\rm T}$ and $\varphi_{\rm T}$, we will attach the same subscript to the momentum labeling the plane waves, $\ket{{\bm p}_{\rm T}(\varphi)}$.
This is to emphasize that these states are determined in the cylindrical coordinates~\eqref{twisttriad}. In this case,
the electron momentum $\bm{p}_{\rm T}(\varphi)$ can be parametrized by the angle $\varphi\in [0,2\pi]$ (see, Fig.~\ref{twistmomentum00}),
\begin{align}
\bm{p}_{\rm T}(\varphi)=&\bm{p}_{\|}+\bm{p}_{\bot}(\varphi)=p_{\mathrm{T}}\cos\beta_{\mathrm{T}}\bm{n}_{\|} \nonumber \\
&+p_{\mathrm{T}}\sin\beta_{\mathrm{T}}(\bm{n}_{\bot,1}\cos\varphi+\zeta_H \bm{n}_{\bot,2}\sin\varphi).
\label{twistmom1}
\end{align}
Here, we understand that the momentum $\bm{p}_{\|}$ is parallel to the axis ${\bm n}_{\|}$ and it has the origin at the cone's apex; therefore, it
is independent of $\varphi$. The perpendicular component $\bm{p}_{\bot}(\varphi)$, on the other hand, rotates on the cone's circular base of radius
$p_{\mathrm{T}}\sin\beta_{\mathrm{T}}$. The direction of rotation is controlled by the sign of $\zeta_H=\pm$, which determines the helicity of the vortex state.
Without loss of generality, we assume that $\zeta_H=1$. Also, we will call the momenta~\eqref{twistmom1}
the \textit{family of twisted momenta} and the parameter $\varphi$ the \textit{twist angle}.
\begin{figure}
\includegraphics[width=6cm]{xtwistmomentum00.eps}
\caption{The twisted momentum $\bm{p}_{\rm T}(\varphi)$ circulating on the lateral surface of the cone with apex at the origin of coordinates, the opening angle
$2\beta_{\mathrm{T}}$, and its side length $p_{\mathrm{T}}$. The symmetry axis is defined by the polar and azimuthal angles $\theta_{\mathrm{T}}$ and $\varphi_{\mathrm{T}}$,
respectively. Here, $\bm{p}_{\rm T}(\varphi)$ is parametrized by the angle $0\leqslant \varphi\leqslant 2\pi$ [see, Eq.~\eqref{twistmom1}] for $\zeta_H=1$. While the momentum
$\bm{p}_{\|}$, parallel to the symmetry axis and fixed at the cone's apex, is independent of $\varphi$ and has the length $p_{\|}=p_{\mathrm{T}}\cos\beta_{\mathrm{T}}$,
the perpendicular component $\bm{p}_{\bot}(\varphi)$ rotates on the cone's circular base of radius $p_{\bot}=p_{\mathrm{T}}\sin\beta_{\mathrm{T}}$.
}
\label{twistmomentum00}
\end{figure}
Now, by applying \eqref{tw1} and the generating function for the Bessel functions,
\begin{equation}
\ee^{\ii x\cos\varpi}=\sum_{m=-\infty}^{\infty}\ii^m J_m(x)\ee^{\ii m\varpi},
\label{tw8}
\end{equation}
we find that the state $|\bm{p}_{\rm T}(\varphi)\rangle$, in the position representation, can be expanded as
\begin{equation}
\scal{\bm{x}}{\bm{p}_{\rm T}(\varphi)}=\ee^{\ii \bm{x}\cdot\bm{p}_{\rm T}(\varphi)}=\sum_{m=-\infty}^{\infty}\ee^{-\ii m\varphi}\scal{\bm{x}}{p_{\|},p_{\bot},m},
\label{tw9}
\end{equation}
where $p_{\|}=p_{\mathrm{T}}\cos\beta_{\mathrm{T}}$ and $p_{\bot}=p_{\mathrm{T}}\sin\beta_{\mathrm{T}}$, in accordance with the definition~\eqref{twistmom1}.
It follows from Eq.~\eqref{tw9} that
\begin{equation}
\ket{p_{\|},p_{\bot},m}=\frac{1}{2\pi}\int_0^{2\pi}\dd\varphi\, \ee^{\ii m\varphi}\ket{\bm{p}_{\rm T}(\varphi)}.
\label{tw10}
\end{equation}
Hence, the transition probability amplitude to the vortex state, $\mathcal{A}_{\mathrm{in}}(p_{\|},p_{\bot},m)$, can be expressed in terms of the transition
probability amplitudes to the plane-wave states, $\mathcal{A}_{\mathrm{in}}(\bm{p}_{\rm T}(\varphi))$, such that
\begin{equation}
\mathcal{A}_{\mathrm{in}}(p_{\|},p_{\bot},m)=\frac{1}{2\pi}\int_0^{2\pi}\dd\varphi\, \ee^{-\ii m\varphi}\mathcal{A}_{\mathrm{in}}(\bm{p}_{\rm T}(\varphi)).
\label{tw11}
\end{equation}
For completeness, we also write that
\begin{equation}
\mathcal{A}_{\mathrm{in}}(\bm{p}_{\rm T}(\varphi))=\sum_{m=-\infty}^{\infty}\ee^{\ii m\varphi}\mathcal{A}_{\mathrm{in}}(p_{\|},p_{\bot},m) ,
\label{tw11a}
\end{equation}
which follows directly from Eq.~\eqref{tw9}.
Finally, according to the general formula~\eqref{tp5}, we arrive at the transition probability distribution to the free-electron vortex state $\ket{p_{\|},p_{\bot},m}$,
\begin{equation}
\frac{\dd^2\mathcal{P}_m}{\dd p_{\|}\dd p_{\bot}}\equiv\mathcal{P}_m(p_{\|},p_{\bot})=\frac{p_{\bot}}{(2\pi)^2N_{\mathrm{in}}}|\mathcal{A}_{\mathrm{in}}(p_{\|},p_{\bot},m)|^2,
\label{tw12}
\end{equation}
where $\mathcal{A}_{\mathrm{in}}(p_{\|},p_{\bot},m)$ can be obtained from~\eqref{tw11}.
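For illustration, the short numerical sketch below evaluates Eqs.~\eqref{tw11} and~\eqref{tw12} by sampling a plane-wave amplitude along the twisted momenta and taking its discrete Fourier transform in the twist angle; the amplitude used at the bottom is a made-up test function and not the ionization amplitude considered later.
\begin{verbatim}
# Vortex-state amplitudes as Fourier coefficients in the twist angle.
# amp_plane(phi) is any callable returning the plane-wave amplitude along
# the twisted momenta; the example at the bottom is a made-up test function.
import numpy as np

def vortex_amplitudes(amp_plane, n_phi=512):
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    a = amp_plane(phi)                    # complex samples on the circle
    coeffs = np.fft.fft(a) / n_phi        # discrete (1/2pi) int exp(-i m phi) A dphi
    m = np.fft.fftfreq(n_phi, d=1.0 / n_phi).astype(int)
    return m, coeffs

def vortex_probability(m, coeffs, p_perp, n_in=1.0):
    return p_perp / (2 * np.pi) ** 2 / n_in * np.abs(coeffs) ** 2

# An amplitude peaked around phi = 0 spreads over many topological charges m.
m, c = vortex_amplitudes(lambda phi: np.exp(-8.0 * (1.0 - np.cos(phi))))
P_m = vortex_probability(m, c, p_perp=1.0)
\end{verbatim}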
In order to describe ionization, which is the main topic of this paper, one has to calculate the transition to the final scattering state.
For this reason, we will demonstrate next that the same formulation as presented here for the free-electron states [cf., Eq.~\eqref{tw10}]
carries over to the scattering states of the electron.
\subsection{Scattering vortex states}
\label{sec:ScatteringTwist}
Consider the scattering states of an electron interacting with a static and spherically symmetric atomic potential. There are two types of such states:
the ones with outgoing spherical waves, $\psi^{(+)}_{\bm{p}}(\bm{x})$, and the ones with incoming spherical waves, $\psi^{(-)}_{\bm{p}}(\bm{x})$~\cite{RodbergThaler1967};
both labeled by the asymptotic electron momentum ${\bm p}$. These two wave functions are not independent, since
\begin{equation}
\bigl[\psi^{(-)}_{\bm{p}}(\bm{x})\bigr]^*=\psi^{(+)}_{-\bm{p}}(\bm{x}).
\label{tw14}
\end{equation}
Similar to~\cite{Taylor1972}, if considered in the abstract Hilbert space, we shall denote these states as $\ket{\bm{p};+}$ and $\ket{\bm{p};-}$, respectively.
The question is: how can one construct the corresponding scattering vortex states from the known states $\ket{\bm{p};\pm}$?
Based on Eq.~\eqref{tw14}, we understand that it is sufficient to define the scattering vortex state for either $\ket{\bm{p};+}$ or $\ket{\bm{p};-}$. We shall
do this for the latter, since it is the scattering state with the incoming spherical waves that has to be accounted for in
the transition probability amplitude of ionization. On the other hand, when analyzing recombination one should use $\ket{\bm{p};+}$ instead~\cite{JKE2000}.
For a spherically symmetric and static potential the time-independent Schr\"odinger equation is rotationally invariant.
Since the boundary conditions imposed on the scattering states depend only on scalars with respect to rotations (i.e., $\bm{x}^2$, $\bm{p}^2$, and $\bm{p}\cdot\bm{x}$),
the exact solution to the Schr\"odinger equation also depends only on these combinations. This property is used, for instance, in the partial wave analysis
of scattering by a spherically symmetric potentials~\cite{BCK,Taylor1972}. The exact solution of scattering problem for the Coulomb potential can serve as
an example of this general property.
Having this in mind, we write the scattering state with incoming spherical waves, in position representation, as
\begin{equation}
\scal{\bm{x}}{\bm{p};-}=\psi^{(-)}_{\bm{p}}(\bm{x})=f_{\psi}^{(-)}(\bm{x}^2,\bm{p}^2,\bm{p}\cdot\bm{x}),
\label{tw15}
\end{equation}
where $f_{\psi}^{(-)}$ is an \textit{a priori} unknown function of its arguments. In our case, the momentum in~\eqref{tw15} is the twisted
momentum ${\bm p}_{\rm T}(\varphi)$ [Eq.~\eqref{twistmom1}]. Since $\bm{p}^2_{\rm T}(\varphi)=p_{\|}^2+p_{\bot}^2$ and
\begin{equation}
\bm{p}_{\rm T}(\varphi)\cdot\bm{x}=p_{\|}x_{\|}+p_{\bot}x_{\bot}\cos(\varphi_x-\varphi),
\label{tw16}
\end{equation}
the wave function~\eqref{tw15} can be Fourier decomposed,
\begin{equation}
\scal{\bm{x}}{\bm{p}_{\rm T}(\varphi);-}=\sum_{m=-\infty}^\infty\ee^{-\ii m\varphi}\scal{\bm{x}}{p_{\|},p_{\bot},m;-}.
\label{tw16a}
\end{equation}
One can show that
\begin{equation}
\scal{\bm{x}}{p_{\|},p_{\bot},m;-}=\ee^{\ii m\varphi_x}f_{\psi,m}^{(-)}(x_{\|},x_{\bot};p_{\|},p_{\bot}),
\label{tw18}
\end{equation}
with
\begin{align}
f_{\psi,m}^{(-)} &(x_{\|},x_{\bot};p_{\|},p_{\bot})=\frac{1}{2\pi} \int_0^{2\pi}\dd\varpi \, \ee^{-\ii m\varpi} \label{tw19} \\
&\times f_{\psi}^{(-)}\bigl(x_{\|}^2+x_{\bot}^2,p_{\|}^2+p_{\bot}^2,p_{\|}x_{\|}+p_{\bot}x_{\bot}\cos\varpi \bigr), \nonumber
\end{align}
is an eigenfunction of the operator $\op{L}_{\|}=\bm{n}_{\|}\cdot\opb{L}$ with the eigenvalue $m$; hence, it defines the scattering vortex wave function with the incoming
spherical waves. As it follows from Eq.~\eqref{tw16a}, the scattering vortex state is
\begin{equation}
\ket{p_{\|},p_{\bot},m;-}=\frac{1}{2\pi}\int_0^{2\pi}\dd\varphi\, \ee^{\ii m\varphi}\ket{\bm{p}_{\rm T}(\varphi);-},
\label{tw17}
\end{equation}
which is an analogue of Eq.~\eqref{tw10}. As a consequence, for spherically symmetric potentials, the expressions for the amplitudes and probability distributions
[Eqs.~\eqref{tw11} and \eqref{tw12}, respectively] remain unchanged if the plane-wave state $\ket{\bm{p}}$ is replaced by the scattering one $\ket{\bm{p};-}$.
Note also that, if the final energy of the electron is sufficiently large, the Born approximation can be applied. In its lowest order, this is equivalent to approximating
the final scattering state by a plane wave. Hence, in the zeroth-order Born approximation, the function $f_{\psi}^{(-)}$ defined in~\eqref{tw15} becomes the plane
wave $\ee^{\ii\bm{p}\cdot\bm{x}}$ and we recover the Bessel states discussed above.
\section{Ionization distributions}
\label{sec:probdist}
After these general remarks, we present now the theoretical treatment of strong-field ionization leading to generation of EVS.
\subsection{General formulation}
\label{sec:theory}
Consider a single-electron system whose time-evolution is governed by the Hamiltonian,
\begin{equation}
\op{H}(t)=\op{H}_0+\op{V}+\op{H}_I(t),
\label{pd1}
\end{equation}
where $\op{H}_0$ is the free-particle Hamiltonian, $\op{V}$ corresponds to the static interaction, and $\op{H}_I(t)$ accounts for the interaction with
the laser field, which is always assumed to act for a finite time $T_{\rm p}$, i.e., $\op{H}_I(t)$ vanishes for $t<0$ and $t>T_{\mathrm{p}}$.
Here, we also define the atomic Hamiltonian,
\begin{equation}
\op{H}_A=\op{H}_0+\op{V},
\label{pd2}
\end{equation}
and the so-called Volkov Hamiltonian,
\begin{equation}
\op{H}_V(t)=\op{H}_0+\op{H}_I(t).
\label{pd3}
\end{equation}
For these three Hamiltonians we introduce the evolution operators,
\begin{align}
\op{U}(t,t')=&\hat{\mathcal{T}}\exp\Bigl( -\ii\int_{t'}^t \dd\tau \op{H}(\tau) \Bigr) , \nonumber \\
\op{U}_A(t,t')=&\ee^{-\ii\op{H}_A (t-t')} , \nonumber \\
\op{U}_V(t,t')=&\hat{\mathcal{T}}\exp\Bigl( -\ii\int_{t'}^t \dd\tau \op{H}_V(\tau) \Bigr) ,
\label{pd4}
\end{align}
where $\hat{\mathcal{T}}$ is the time-ordering operator.
We assume that the atomic Hamiltonian $\op{H}_A$ has both discrete and continuous eigenenergies such that
\begin{equation}
\op{H}_A\ket{B}=E_B\ket{B}, \,
\op{H}_A\ket{\bm{p};-}=E_{\bm{p}}\ket{\bm{p};-},
\label{pd5}
\end{equation}
where ${\bm p}$ is the asymptotic momentum of the electron. Because the corresponding eigenstates $\ket{B}$ and $\ket{{\bm p};-}$ fulfill the relations
\begin{align}
&\scal{B'}{B}=\delta_{B,B'}, \, \scal{B}{\bm{p};-}=0, \nonumber \\
&\scal{\bm{p}';-}{\bm{p};-}=(2\pi)^3\delta^{(3)}(\bm{p}-\bm{p}'),
\label{pd6}
\end{align}
we can write that
\begin{equation}
\sum_B\ket{B}\bra{B}+\int\frac{\dd^3p}{(2\pi)^3}\ket{\bm{p};-}\bra{\bm{p};-}=\op{I}.
\label{pd7}
\end{equation}
Now, in order to describe ionization, one typically calculates the transition probability amplitude from a bound state $\ket{B}$ to a scattering state $\ket{\bm{p};-}$,
\begin{equation}
\mathcal{A}_B(\bm{p};-)=\bra{\bm{p};-}\op{S}\ket{B}=\lim_{t\rightarrow\infty}\lim_{t'\rightarrow -\infty}\mathcal{A}_B(\bm{p};t,t'),
\label{pd8}
\end{equation}
where $\op{S}=\op{U}(\infty,-\infty)$ and
\begin{equation}
\mathcal{A}_B(\bm{p};t,t')=\bra{\bm{p};-}\op{U}(t,t')\ket{B}.
\label{pd9}
\end{equation}
Using here the Lippmann-Schwinger equation,
\begin{equation}
\op{U}(t,t')=\op{U}_A(t,t')-\ii\int\dd\tau\op{U}(t,\tau)\op{H}_I(\tau)\op{U}_A(\tau,t'),
\label{pd12}
\end{equation}
and the property $\bra{\bm{p};-}\op{U}_A(t,t')\ket{B}=0$, we arrive at the following expression for the ionization probability amplitude,
\begin{equation}
\mathcal{A}_B(\bm{p};-)=-\ii\int_0^{T_{\mathrm{p}}}\dd t\int\dd^3 x \bigl[\Psi^{(-)}_{\bm{p}}(\bm{x},t)\bigr]^{*}\op{H}_I(t)\psi_B(\bm{x},t),
\label{pd13}
\end{equation}
where $\psi_B(\bm{x},t)=\ee^{-\ii E_B t}\scal{\bm{x}}{B}$ and
\begin{equation}
\bigl[\Psi^{(-)}_{\bm{p}}(\bm{x},t)\bigr]^{*}=\scal{\Psi^{(-)}_{\bm{p}}(t)}{\bm{x}}=\bra{\bm{p};-}\op{U}(T_{\mathrm{p}},t)\ket{\bm{x}}.
\label{pd14}
\end{equation}
Here, we emphasize that the state $\ket{\Psi^{(-)}_{\bm{p}}(t)}$ satisfies the Schr\"odinger equation with the full Hamiltonian $\op{H}(t)$.
Finally, the total probability of ionization equals
\begin{equation}
\mathcal{P}_B=\int\frac{\dd^3p}{(2\pi)^3}|\mathcal{A}_B(\bm{p};-)|^2,
\label{pd10}
\end{equation}
and its momentum distribution is
\begin{equation}
\frac{\dd^3\mathcal{P}_B}{\dd^3p}\equiv\mathcal{P}_B(\bm{p};-)=\frac{1}{(2\pi)^3}|\mathcal{A}_B(\bm{p};-)|^2.
\label{pd11}
\end{equation}
Similarly, the probability amplitude for ionization from the bound state $\ket{B}$ to the final vortex state $\ket{p_{\|},p_{\bot},m;-}$ is defined as
\begin{align}
\mathcal{A}_B(p_{\|},p_{\bot},&m;-)=\bra{p_{\|},p_{\bot},m;-}\op{S}\ket{B} \label{pd8t}\\
=&\frac{1}{2\pi}\int_0^{2\pi}\dd\varphi\, \ee^{-\ii m\varphi}\bra{\bm{p}_{\rm T}(\varphi);-}\op{S}\ket{B} \nonumber \\
=& \frac{1}{2\pi}\int_0^{2\pi}\dd\varphi\, \ee^{-\ii m\varphi}\mathcal{A}_B(\bm{p}_{\rm T}(\varphi);-), \nonumber
\end{align}
or,
\begin{equation}
\mathcal{A}_B(\bm{p}_{\rm T}(\varphi);-)=\sum_{m=-\infty}^{\infty}\ee^{\ii m\varphi}\mathcal{A}_B(p_{\|},p_{\bot},m;-),
\label{pd8ta}
\end{equation}
which follows from the previous section. Hence, the probability distribution of ionization resulting in generation of EVS can be defined as
\begin{equation}
\frac{\dd^2\mathcal{P}_{B,m}}{\dd p_{\|}\dd p_{\bot}}\equiv\mathcal{P}_{B,m}(p_{\|},p_{\bot};-)=\frac{p_{\bot}}{(2\pi)^2}|\mathcal{A}_B(p_{\|},p_{\bot},m;-)|^2.
\label{pd11t}
\end{equation}
Note that this is the most general nonrelativistic description of ionization.
We will show next that, for the parameters used in this paper, relativistic corrections already play a role and have to be incorporated
into the nonrelativistic theory.
\subsection{Corrected quasi-relativistic SFA}
\label{sec:CSFA}
From recent experimental~\cite{PressExp} and theoretical~\cite{KKpress,Bandrauk1,Bandrauk2,Bandrauk3,TD2012,R2013,HLH2017,Ivan} investigations, it has become clear that,
for near infrared pulses of intensities of the order of $10^{14}$~W/cm$^2$ or larger, the effects related to the radiation pressure~\cite{Lebiediew}
can be detected in photoionization spectra. These effects are accounted for in the relativistic theories based on the Dirac or Klein-Gordon equations. Comparisons
between the relativistic Dirac and nonrelativistic Schr\"odinger approaches show how the latter has to be modified in order to obtain a good agreement with the relativistic
treatment for intensities up to $10^{16}$~W/cm$^2$~\cite{KKpress}. This goal can be achieved using the quasi-relativistic strong-field approximation which
for free-free transitions in intense laser fields has been considered by Ehlotzky~\cite{Ehlo} (see, also~\cite{EJK1998}), whereas for bound-free transitions
by Krajewska and Kami\'nski~\cite{KKpress}. Below, we outline briefly the key ingredients of the corrected (as compared to~\cite{KKpress}) quasi-relativistic strong-field approximation (QRSFA),
which is necessary in the regime of parameters used in this paper.
Generally speaking, the strong-field approximation (SFA) is applicable for high-energy ionization if the kinetic energy of photoelectrons is much larger than the ionization potential of the initial
bound state, $E_{\mathrm{kin}}(\bm{p})\gg |E_B|$. This condition is very well satisfied here. In such a case,
it is justified to expand the full scattering state $\ket{\Psi^{(-)}_{\bm{p}}(t)}$ [Eq.~\eqref{pd14}] in a Born series with respect to the binding potential and, in its lowest order,
to approximate this state by the Volkov solution, $\ket{\psi_{\bm p}^{(0)}(t)}$~\cite{Volkov,VolkovRev1,VolkovRev2}. The latter has a different form, depending on the
framework we use.
\subsubsection{Relativistic corrections}
\label{qrsfa}
Following Ref.~\cite{KKpress}, we assume that in the QRSFA the interaction Hamiltonian $\op{H}_I(t)$, in the velocity gauge, is
\begin{equation}
\op{H}_I(t)=-\frac{e}{m_{\mathrm{e}}}\bm{A}(\phi)\cdot\opb{p}+\frac{e^2}{2m_{\mathrm{e}}}\bm{A}^2(\phi),
\label{pd19}
\end{equation}
where ${\bm A}(\phi)$ is the vector potential describing the laser pulse with a phase $\phi=\omega t-{\bm k}\cdot{\bm x}$. Here, we introduce
the fundamental frequency of field oscillations $\omega$ that is related to the pulse duration $T_{\rm p}$ such that $\omega=2\pi/T_{\rm p}$.
The wave vector ${\bm k}$ is defined as ${\bm k}=(\omega/c)\bm{n}$ with a unit vector $\bm{n}$ determining the propagation direction of the laser pulse.
As stated before, the electromagnetic potential vanishes outside the interval $0 < \phi <2\pi$. Having specified $\op{H}_I(t)$, we know the
exact form of the Volkov Hamiltonian~\eqref{pd3} and, hence, also the Volkov evolution operator, $\hat{U}_V(t,0)$.
The Volkov state $\ket{\psi^{(0)}_{\bm{p}}(t)}$ originates from the free-electron state $\ket{\bm{p}}$ which evolves in time in the presence
of a laser pulse, meaning that
\begin{equation}
\ket{\psi^{(0)}_{\bm{p}}(t)}=\op{U}_V(t,0)\ket{\bm{p}}.
\label{pd18}
\end{equation}
As a result, we obtain the Volkov wave function,
\begin{align}
\psi^{(0)}_{\bm{p}}(\bm{x},t)=\exp\Bigl[&-\ii E_{\mathrm{kin}}(\bm{p})t+\ii\bm{p}\cdot\bm{x} \label{pd20} \\
&+\ii\int_0^{\phi}\dd\phi'\Bigl(\frac{e\bm{A}(\phi')\cdot \bm{p}}{N(\bm{p},\bm{k})}
-\frac{e^2\bm{A}^2(\phi')}{2N(\bm{p},\bm{k})}\Bigr)\Bigr]. \nonumber
\end{align}
Note that for the nonrelativistic theory and the dipole approximation:
$\phi=\omega t$, $N(\bm{p},\bm{k})=\omega m_{\mathrm{e}}$, $E_{\rm kin}({\bm p})\equiv E_{\rm kin}^{(0)}({\bm p})={\bm p}^2/(2m_{\rm e})$, and the function $\psi^{(0)}_{\bm{p}}(\bm{x},t)$ in~\eqref{pd20} is the exact solution of the
Schr\"odinger equation. Its generalization, the way it was introduced in~\cite{KKpress}, accounts for two relativistic corrections referred to as the retardation and
recoil corrections. While we recapture below the essence of these modifications, a new aspect of our approach is to account for the relativistic mass corrections.
The {\it retardation correction}, stating that $\phi=\omega t-{\bm k}\cdot{\bm x}$, reflects the fact that the laser pulse is a propagating wave.
It follows from~\cite{KKpress} that for near infrared laser fields of intensities up to $10^{16}$~W/cm$^2$, this correction is negligibly small and will be omitted
in our further analysis. Hence, we shall assume that $\phi\approx\omega t$ in Eq.~\eqref{pd20}.
The {\it recoil corrections} account for the recoil of the electron during the exchange of momenta with the laser photons, meaning that
\begin{equation}
N(\bm{p},\bm{k})=p\cdot k=\frac{\omega}{c}(\sqrt{\bm{p}^2+(m_{\mathrm{e}}c)^2}-\bm{p}\cdot\bm{n}).
\label{pd22}
\end{equation}
Note that in the nonrelativistic limit: $N(\bm{p},\bm{k})\approx \omega m_{\rm e}$, or if further terms of the nonrelativistic expansion of~\eqref{pd22} are considered~\cite{Nordsieck},
\begin{equation}
N(\bm{p},\bm{k})\approx \omega m_{\mathrm{e}}\Bigl(1-\frac{1}{m_{\mathrm{e}}c}\bm{p}\cdot\bm{n} \Bigr).
\label{pd21}
\end{equation}
This modification of the nonrelativistic Volkov wave function, if compared with the relativistic SFA, is sufficient in describing the radiation pressure effects for
intensities up to $10^{15}$~W/cm$^2$. It fails, however, for larger intensities~\cite{KKpress}. For this reason, we shall keep in the following
$N({\bm p},{\bm k})$ as defined in~\eqref{pd22} (see, Appendix~\ref{rsfa}). Note that the momentum of the parent ion is also changed during the ionization process.
However, due to its large mass, it is commonly assumed that this correction only marginally modifies the probability distributions of photoelectrons,
although it contributes to the overall momentum balance~\cite{Bandrauk1,Bandrauk2,Bandrauk3}.
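To indicate the size of the recoil correction, the short numerical sketch below compares the exact factor $N(\bm{p},\bm{k})$ of Eq.~\eqref{pd22} with its nonrelativistic limits for a photoelectron of roughly 1~keV emitted along the propagation direction; the numbers are purely illustrative.
\begin{verbatim}
# Recoil factor N(p,k): exact form vs. nonrelativistic limits, in
# relativistic units (m_e = c = 1); omega cancels in the printed ratios.
import numpy as np

m_e, c, omega = 1.0, 1.0, 1.0
E_kin = 1000.0 / 511.0e3                    # ~1 keV in units of m_e c^2
p = np.sqrt(E_kin**2 + 2.0 * m_e * E_kin)   # relativistic |p|
p_dot_n = p                                 # emission along n

N_exact  = (omega / c) * (np.sqrt(p**2 + (m_e * c)**2) - p_dot_n)
N_zeroth = omega * m_e                      # dipole-like limit
N_first  = omega * m_e * (1.0 - p_dot_n / (m_e * c))

print(N_zeroth / N_exact, N_first / N_exact)   # roughly 1.06 and 1.00
\end{verbatim}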
It appears that the {\it relativistic mass corrections} start to significantly influence ionization for near infrared laser fields and intensities larger than
$10^{15}$~W/cm$^2$. It follows from the Klein-Gordon or Dirac equations that the electron kinetic energy, $E_{\mathrm{kin}}(\bm{p})$, is equal to
\begin{align}
E_{\mathrm{kin}}(\bm{p})&=\sqrt{(m_{\mathrm{e}}c^2)^2+(c\bm{p})^2}-m_{\mathrm{e}}c^2 \label{pd24} \\
&\approx \frac{\bm{p}^2}{2m_{\mathrm{e}}}-\frac{\bm{p}^4}{8m_{\mathrm{e}}^3c^2} \, \dots .\nonumber
\end{align}
Keeping this in mind, we ask the question: When can we neglect the higher-order mass corrections in the Volkov wave~\eqref{pd20}? Since $E_{\mathrm{kin}}(\bm{p})t$ appears
there in the phase, the nonrelativistic approximation is acceptable if
\begin{equation}
\frac{[E^{(0)}_{\mathrm{kin}}(\bm{p})]^2}{2m_{\mathrm{e}}c^2}T < \pi,
\label{pd25}
\end{equation}
where $E^{(0)}_{\mathrm{kin}}(\bm{p})$ is the nonrelativistic kinetic energy of the photoelectron introduced before and $T$ is a characteristic time
of the electron-laser-field interaction. For long pulses, we can assume that this time equals the duration of a single cycle,
$T=2\pi/\omega_{\mathrm{L}}$, where $\omega_{\mathrm{L}}$ is the laser carrier frequency. For short pulses, $T$ denotes the pulse duration $T_{\rm p}$.
Since these two times are comparable, in our rough estimate we will choose the former one. Hence, the nonrelativistic approximation for the kinetic energy
of photoelectrons is applicable if
\begin{equation}
\frac{[E^{(0)}_{\mathrm{kin}}(\bm{p})]^2}{m_{\mathrm{e}}c^2\omega_{\mathrm{L}}} < 1.
\label{pd26}
\end{equation}
Specifically, for the Ti:Sapphire laser, the nonrelativistic theory breaks down when the kinetic energy of photoelectrons is at least
\begin{equation}
\sqrt{m_{\mathrm{e}}c^2\omega_{\mathrm{L}}}\approx 860\,\mathrm{eV}.
\label{pd27}
\end{equation}
While this estimate seems to be independent of the laser field intensity, for intensities not exceeding $10^{15}\,\mathrm{W/cm}^2$ the probability of detecting
such energetic photoelectrons is extremely small. In this case, our estimate has no practical importance. With increasing intensity, however, the high energy portion
of the spectrum contributes more significantly to the overall ionization probability, as shown, for instance, in Refs.~\cite{KKsuper,CKKsuper,KKcomb,KCKspiral1,KCKspiral2}.
This situation will be analyzed closely in our numerical simulations, where the full relativistic kinetic energy will be accounted for.
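As a quick numerical check of the criterion~\eqref{pd26} for the Ti:Sapphire parameters used below ($\omega_{\rm L}=1.5498$~eV, $m_{\rm e}c^2=511$~keV), the sketch below evaluates its left-hand side for a few photoelectron energies; it exceeds unity for kinetic energies just below 1~keV, which is why the mass correction is retained in our numerics.
\begin{verbatim}
# Left-hand side of the nonrelativistic-validity criterion for Ti:Sapphire.
mc2     = 511.0e3      # electron rest energy in eV
omega_L = 1.5498       # carrier photon energy in eV

for E_kin in (200.0, 500.0, 1000.0, 2000.0):     # nonrelativistic E_kin in eV
    lhs = E_kin**2 / (mc2 * omega_L)
    print(f"E_kin = {E_kin:6.0f} eV : {lhs:5.2f}")
\end{verbatim}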
\subsubsection{Probability amplitude of ionization}
\label{sec:amplitudes}
It follows from the above definitions that the probability amplitude of ionization~\eqref{pd13} in the lowest-order
Born approximation with respect to the final electron state, denoted now as $\mathcal{A}(\bm{p})$, is
\begin{equation}
\mathcal{A}(\bm{p})=-\ii\int_{-\infty}^{\infty}\dd t\int\dd^3x \,\ee^{-\ii\bm{p}\cdot\bm{x}+\ii G(\omega t,\bm{p})}\op{H}_I(t)\psi_B(\bm{x}),
\label{pad8}
\end{equation}
where
\begin{align}
G(\phi,\bm{p})=\int_0^{\phi}\dd\phi'\Bigl(&\frac{E_{\mathrm{kin}}(\bm{p})-E_B}{\omega} \nonumber \\
&-\frac{e\bm{A}(\phi')\cdot \bm{p}}{N(\bm{p},\bm{k})}+\frac{e^2\bm{A}^2(\phi')}{2N(\bm{p},\bm{k})}\Bigr)
\label{pad8a}
\end{align}
and $\psi_B(\bm{x})=\scal{\bm{x}}{B}$ is the bound state wave function of energy $E_B$, which follows from the Schr\"odinger equation. As stated above, while the
retardation corrections are neglected in~\eqref{pad8} and~\eqref{pad8a}, the recoil corrections are fully accounted for by taking $N({\bm p},{\bm k})$ defined in Eq.~\eqref{pd22}.
We will demonstrate later on that, for the considered parameters, $E_{\rm kin}({\bm p})$ has to be treated relativistically, according to~\eqref{pd24}.
This actually follows from the relativistic formulation of the SFA which, for convenience of the reader, is presented in Appendix~\ref{rsfa}.
\subsection{Model}
\label{model}
In our model, the laser pulse is described by the electric field $\bm{\mathcal{E}}(\phi)$,
\begin{equation}
\bm{\mathcal{E}}(\phi)=F_1(\phi)\bm{\varepsilon}_1+F_2(\phi)\bm{\varepsilon}_2,
\label{pad3}
\end{equation}
where two real polarization vectors $\bm{\varepsilon}_1$ and $\bm{\varepsilon}_2$
fulfill the relation $\bm{n}=\bm{\varepsilon}_1 \times\bm{\varepsilon}_2 $. As already stated, the pulse lasts for time $T_{\mathrm{p}}$ and, hence,
$\omega=2\pi/T_{\mathrm{p}}$. The two real functions $F_j(\phi)$ ($j=1,2$) determine the shape of the pulse in the plane-wave front
approximation~\cite{Neville} such that
\begin{equation}
F_j(\phi)=\mathcal{N}\omega\sin^2\Bigl(\frac{\phi}{2}\Bigr)\sin(N_{\mathrm{osc}}\phi+\delta_j)\cos(\delta+\delta_j)
\label{pad5}
\end{equation}
for $\phi\in [0,2\pi]$ and 0 otherwise. Here, the real constant $\mathcal{N}$ determines the time-averaged intensity of the laser pulse (cf. Ref.~\cite{KKsuper} for details).
The polarization properties of the field are controlled by the angles $\delta_j$ and $\delta$. We choose in the following: $\delta_j=(j-1)\pi/2$ and $\delta=\pi/4$
for a circularly polarized laser light. The number of cycles is denoted by $N_{\rm osc}$, which allows us to define the laser carrier frequency, $\omega_{\rm L}=N_{\rm osc}\omega$.
As it follows from Eq.~\eqref{pad3}, the vector potential has the form
\begin{equation}
\bm{A}(\phi)=f_1(\phi)\bm{\varepsilon}_1+f_2(\phi)\bm{\varepsilon}_2,
\label{pad6}
\end{equation}
with
\begin{equation}
f_j(\phi)=-\int_0^{\phi}\dd\phi'\, F_j(\phi'),
\label{pad7}
\end{equation}
and it vanishes for $\phi<0$ and $\phi>2\pi$.
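For illustration only, a minimal numerical sketch (in Python; it is not part of the original derivation) of how the shape functions~\eqref{pad5} and the vector potential components~\eqref{pad7} can be evaluated on a phase grid is given below. The normalization $\mathcal{N}$ and the fundamental frequency $\omega$ are set to placeholder values; in the paper they are fixed by the time-averaged intensity and the pulse duration.
\begin{verbatim}
import numpy as np

# Placeholder units: the actual NORM and OMEGA follow from the
# time-averaged intensity and the pulse duration T_p, respectively.
N_OSC = 3                    # number of cycles
DELTA = np.pi / 4            # circular polarization
DELTA_J = [0.0, np.pi / 2]   # delta_j; j = 0, 1 here <-> j = 1, 2 in the text
NORM, OMEGA = 1.0, 1.0

def F(j, phi):
    """Shape function F_j(phi), Eq. (pad5); zero outside [0, 2*pi]."""
    phi = np.asarray(phi, dtype=float)
    env = NORM * OMEGA * np.sin(phi / 2.0) ** 2
    osc = np.sin(N_OSC * phi + DELTA_J[j]) * np.cos(DELTA + DELTA_J[j])
    return np.where((phi >= 0.0) & (phi <= 2.0 * np.pi), env * osc, 0.0)

# f_j(phi) = -int_0^phi F_j dphi', Eq. (pad7), via trapezoidal sums.
phi = np.linspace(0.0, 2.0 * np.pi, 4001)
dphi = np.diff(phi)
f = [-np.concatenate(([0.0], np.cumsum(
        0.5 * (F(j, phi)[1:] + F(j, phi)[:-1]) * dphi)))
     for j in (0, 1)]
\end{verbatim}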
We use the above model to describe a circularly polarized Ti:Sapphire laser pulse, with the laser carrier frequency $\omega_{\rm L}=1.5498$~eV (wavelength $\lambda=800$~nm).
While in the following we assume that the pulse consists of three cycles ($N_{\rm osc}=3$), we want to emphasize that we arrive at the same general conclusions for other short
pulse durations. Such short pulses can be generated experimentally as reported, for instance, in Ref.~\cite{Hung2016}. Moreover, as presented in the captions of
the figures, we will consider the time-averaged intensities of the order of $10^{16}$~W/cm$^2$.
Our numerical illustrations will concern ionization of a helium ion He$^+$ (i.e., $Z=2$) in the ground state. As it follows from the Dirac equation,
the energy of such a state is $E_B^{\rm rel}=m_{\mathrm{e}}c^2\sqrt{1-Z^2\alpha^2}$, where $\alpha\approx 1/137$ is the fine-structure constant.
When taking the nonrelativistic limit, we obtain
\begin{align}
E_B^{\rm rel}-m_{\rm e}c^2&=m_{\mathrm{e}}c^2(\sqrt{1-Z^2\alpha^2}-1) \nonumber \\
&\approx -\frac{1}{2}Z^2\alpha^2m_{\mathrm{e}}c^2-\frac{1}{8}Z^4\alpha^4m_{\mathrm{e}}c^2\,\dots ,
\label{pad1}
\end{align}
where the lowest-order term in $\alpha$ corresponds to the nonrelativistic ground-state energy of a hydrogen-like ion, as it follows
from the Schr\"odinger equation (for He$^+$, $|E_B|=\frac{1}{2}Z^2\alpha^2m_{\mathrm{e}}c^2\approx 54$~eV). Let us note that for He$^+$ and for the Ti:Sapphire laser field,
\begin{equation}
\frac{Z^4\alpha^4m_{\mathrm{e}}c^2}{8\omega_{\mathrm{L}}}\approx 2\times 10^{-3}\ll 1.
\label{pad2}
\end{equation}
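For orientation, this estimate follows directly by inserting the numerical values quoted in this Section, $m_{\mathrm{e}}c^2\approx 5.11\times 10^{5}$~eV, $\alpha\approx 1/137$, $Z=2$, and $\omega_{\mathrm{L}}\approx 1.55$~eV,
\begin{equation*}
\frac{Z^4\alpha^4m_{\mathrm{e}}c^2}{8\omega_{\mathrm{L}}}
\approx\frac{16\times(1/137)^4\times 5.11\times 10^{5}\,\mathrm{eV}}{8\times 1.55\,\mathrm{eV}}
\approx 1.9\times 10^{-3}.
\end{equation*}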
Thus, we can neglect the relativistic corrections to the binding energy in our QRSFA (see Appendix~\ref{rsfa}). However, for heavier ions ($Z\gtrsim10$), this assumption is questionable
and it becomes necessary to treat the ionization from the ground state in the relativistic framework.
\subsection{Comparison between different approximations}
\label{sec:comparison}
According to the general theory presented in Sec.~\ref{sec:theory}, the total probability of ionization in the QRSFA is
\begin{equation}
\mathcal{P}_{\mathrm{ion}}=\int\frac{\dd^3p}{(2\pi)^3} |\mathcal{A}(\bm{p})|^2,
\label{pad8aa}
\end{equation}
where ${\cal A}({\bm p})$ is given by~\eqref{pad8}. In the following, we consider two versions of this equation. When in Eq.~\eqref{pad8a},
\begin{itemize}
\item[i)]
there are no mass corrections,
\begin{equation}
E_{\mathrm{kin}}(\bm{p})\approx\frac{\bm{p}^2}{2m_{\mathrm{e}}},
\label{pad10}
\end{equation}
\item[ii)]
mass corrections are fully accounted for,
\begin{equation}
E_{\mathrm{kin}}(\bm{p})=\sqrt{(m_{\mathrm{e}}c^2)^2+(c\bm{p})^2}-m_{\mathrm{e}}c^2.
\label{pad11}
\end{equation}
\end{itemize}
Irrespective of which of these substitutions is made in~\eqref{pad8a}, in both cases we define the triply-differential probability distribution as
\begin{equation}
\frac{\dd^3\mathcal{P}}{\dd E_{\mathrm{kin}}\dd^2\Omega_{\bm{p}}}=\frac{m_{\mathrm{e}}|\bm{p}|}{(2\pi)^3} |\mathcal{A}(\bm{p})|^2,
\label{pad8b}
\end{equation}
or, if expressed in atomic units,
\begin{equation}
\mathcal{P}(\bm{p})=\alpha^2m_{\mathrm{e}}c^2 \frac{\dd^3{\cal P}}{\dd E_{\mathrm{kin}}\dd^2\Omega_{\bm{p}}}.
\label{pad8c}
\end{equation}
Note also that the recoil corrections are fully accounted for in both these quasi-relativistic approaches.
\begin{figure}
\includegraphics[width=7cm]{xfi16Compare170914.eps}
\caption{Energy probability distributions of ionized electrons, in atomic units, calculated from different theories: the relativistic SFA (thick solid blue line),
the QRSFA accounting only for the recoil corrections (thin solid red line), and the QRSFA accounting for both
the recoil and mass corrections (thick dashed cyan line). In the upper panel, we present the results for the time-averaged intensity of
the laser pulse $5\times 10^{16}$~W/cm$^2$ and the polar and azimuthal angles of emission: $\theta_{\bm{p}}=0.4719\pi$ and $\varphi_{\bm{p}}=0$, respectively.
In the lower panel, we plot the same but for the time-averaged intensity of $10^{16}$~W/cm$^2$ and the polar angle $\theta_{\bm{p}}=0.4874\pi$.
}
\label{fiCompare170914}
\end{figure}
These two versions of the QRSFA will be compared with the relativistic treatment based on the Dirac equation.
Note that the relativistic SFA accounts exactly for the recoil, retardation, and mass corrections. In this approximation, the total probability of ionization is
\begin{equation}
\mathcal{P}_{\mathrm{ion}}=\frac{1}{2}\sum_{\lambda,\lambda_{\rm i}=\pm}\int\frac{\dd^3p}{(2\pi)^3} |\mathcal{A}_{\lambda_{\rm i}\lambda}(\bm{p})|^2,
\label{prob}
\end{equation}
where we have summed over the final and averaged over the initial electron spin states, $\lambda$ and $\lambda_{\rm i}$, respectively.
Here, $\mathcal{A}_{\lambda_{\rm i}\lambda}(\bm{p})$ is given in Appendix~\ref{rsfa} [Eq.~\eqref{ampDirac}]. Based on~\eqref{prob},
we define the spin-independent triply-differential probability distribution of ionization,
\begin{equation}
\frac{\dd^3\mathcal{P}}{\dd E_{\bm p}\dd^2\Omega_{\bm{p}}}=\frac{m_{\mathrm{e}}|\bm{p}|}{2(2\pi)^3}\sum_{\lambda,\lambda_{\rm i}=\pm}
|\tilde{\mathcal{A}}_{\lambda_{\rm i}\lambda}(\bm{p})|^2,
\label{probnew}
\end{equation}
with $\displaystyle\tilde{\cal A}_{\lambda_{\rm i}\lambda}({\bm p})={\cal A}_{\lambda_{\rm i}\lambda}({\bm p})\sqrt{\frac{E_{\bm p}}{m_{\rm e}c^2}}$. When expressed in atomic units,
\begin{equation}
\mathcal{P}(\bm{p})=\alpha^2m_{\mathrm{e}}c^2 \frac{\dd^3 {\cal P}}{\dd E_{\bm p}\dd^2\Omega_{\bm{p}}},
\label{probnewnew}
\end{equation}
it represents the quantity to be compared with~\eqref{pad8c}.
In Fig.~\ref{fiCompare170914}, we compare the high-energy spectra of photoelectrons calculated from the relativistic SFA (thick solid blue line),
the QRSFA without the mass corrections (thin solid red line), and the QRSFA fully accounting for them (dashed cyan line). Note also that,
in both quasi-relativistic approaches, the recoil corrections are taken into account.
As expected based on our theoretical analysis, for a three-cycle Ti:Sapphire laser pulse of the nonrelativistic intensity $I=10^{16}\,\mathrm{W/cm}^2$
(lower panel), not only the recoil corrections, but also
the relativistic mass corrections play a significant role in the energy spectra of photoelectrons around 1600~eV. With increasing
intensity and photoelectron kinetic energy (although still nonrelativistic), the role of these corrections becomes even more important (upper panel).
By comparing the results derived from the Dirac equation and the quasi-relativistic approach accounting fully for the mass
corrections, one can conclude that both approaches lead to almost identical distributions. Also, it shows that the effects related
to the retardation corrections are negligible, which has been already shown in~\cite{KKpress}. Furthermore, for $I=10^{16}\,\mathrm{W/cm}^2$,
all the considered cases show probability distributions which are qualitatively similar, although their peak values depend on the corrections applied.
In fact, by scaling all these distributions to their maximum values (i.e., by presenting them in `arbitrary units') one would get nearly identical curves.
On the other hand, while at intensities close to $5\times10^{16}\,\mathrm{W/cm}^2$ (upper panel) the results accounting for recoil and mass corrections still
agree very well with the ones obtained from the Dirac theory, this is not the case for the QRSFA neglecting the mass corrections.
Not only does it differ considerably, but it also leads to negligibly small (compared to the full relativistic treatment) probabilities for high-energy ionization.
\section{Generation of vortex states}
\label{sec:twist}
In our further analysis, we will use the QRSFA in which we take into account the recoil and mass corrections fully [i.e., the version ii) above].
We have selected this specific approach as, for the laser field intensities and photoelectron kinetic energies considered here,
it very well coincides with the relativistic theory.
In order to proceed, we write the corrected Volkov wave function \eqref{pd20} in the abstract form
\begin{equation}
\psi^{(0)}_{\bm{p}}(\bm{x},t)=\bra{\bm{x}}\op{U}_{\mathrm{QR-B}}(t)\ket{\bm{p}},
\label{gad1}
\end{equation}
where
\begin{align}
\label{gad2}
\op{U}_{\mathrm{QR-B}}(t)=&\exp\Bigl[-\ii E_{\mathrm{kin}}(\opb{p})t \\
&+\ii\int_{-\infty}^{\omega t}\dd\phi'\Bigl(\frac{e\bm{A}(\phi')\cdot \opb{p}}{N(\opb{p},\bm{k})}-\frac{e^2\bm{A}^2(\phi')}{2N(\opb{p},\bm{k})}\Bigr)\Bigr] \nonumber
\end{align}
and the integration over $\phi'$ has been extended to $-\infty$ as the vector potential vanishes for $\phi'<0$.
This allows us to represent the amplitude~\eqref{pad8} in the form~\eqref{tp3}, i.e.,
\begin{equation}
\mathcal{A}(\bm{p})=\bra{\bm{p}}\op{S}_{\mathrm{QR-B}}\ket{B},
\label{gad3}
\end{equation}
where
\begin{equation}
\op{S}_{\mathrm{QR-B}}=-\ii \int_{-\infty}^{\infty}\dd t \,\op{U}_{\mathrm{QR-B}}^{\dagger}(t)\op{H}_I(t)\ee^{-\ii\op{H}_At},
\label{gad4}
\end{equation}
as $\op{H}_I(t)$ vanishes for $t<0$ and $t>T_{\mathrm{p}}$. Thus, we can formally interpret $\op{S}_{\mathrm{QR-B}}$ as the evolution operator for the transition
from a bound state to the high-energy continuum in the quasi-relativistic and Born approximations. This also shows that the probability amplitudes of ionization
into vortex states can be calculated from $\mathcal{A}(\bm{p})$ by the Fourier decomposition \eqref{tw11a},
\begin{align}
\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))&=\sum_{m=-\infty}^{\infty}\ee^{\ii m\varphi}\bra{p_{\|},p_\perp,m}\op{S}_{\mathrm{QR-B}}\ket{B}\nonumber\\
&=\sum_{m=-\infty}^{\infty}\ee^{\ii m\varphi}\mathcal{A}_m(p_{\|},p_{\bot}).
\label{gad5}
\end{align}
Here, we have changed the notation from ${\cal A}(p_{\|},p_\perp,m)$ to $\mathcal{A}_m(p_{\|},p_{\bot})$ in order to separate the discrete variable $m$
from the remaining two continuous ones, $p_{\|}$ and $p_{\bot}$.
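Numerically, the coefficients $\mathcal{A}_m(p_{\|},p_{\bot})$ can be extracted from samples of $\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))$ on a uniform grid of twist angles by a discrete Fourier transform. A minimal Python sketch of this step (for illustration only; it is not the code used for the figures below) reads:
\begin{verbatim}
import numpy as np

def oam_amplitudes(amp_on_circle):
    """OAM decomposition, Eq. (gad5): A_m from samples of A(p_T(phi)).

    amp_on_circle holds complex samples on phi_k = 2*pi*k/K, k = 0..K-1;
    since A(p_T(phi)) = sum_m exp(i m phi) A_m, the inverse relation
    A_m = (1/2pi) int exp(-i m phi) A(p_T(phi)) dphi reduces on the
    grid to a discrete Fourier transform (valid for |m| < K/2).
    """
    samples = np.asarray(amp_on_circle, dtype=complex)
    K = samples.size
    a_m = np.fft.fft(samples) / K       # A_m for m = 0, 1, ..., K-1 (mod K)
    m = np.fft.fftfreq(K, d=1.0 / K)    # reinterpret indices as signed m
    order = np.argsort(m)
    return m[order].astype(int), a_m[order]

# Sanity check: a pure vortex exp(i*m0*phi) gives |A_m| peaked at m = m0.
phi = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
m, a_m = oam_amplitudes(np.exp(1j * 645 * phi))
assert m[np.argmax(np.abs(a_m))] == 645
\end{verbatim}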
\subsection{Ionization spiral}
\label{sec:ionspiral}
\begin{figure}
\includegraphics[width=5cm]{xspiraltouchp.eps}
\caption{Schematic representation of the kinematics in momentum space considered in this paper. The thick line represents the ionization spiral
$\bm{p}_{\mathrm{S}}(\phi)$ with the red (lighter) line corresponding to the ramp-up portion of the laser pulse, and the dark green (darker) line
to the ramp-down portion. The twisted momentum, $\bm{p}_{\mathrm{T}}(\varphi)$, rotates on the surface of the semitransparent blue cone such that, for a particular value of $\varphi=\varphi_0$, it touches the ionization spiral, i.e., there exists a phase $\phi=\phi_0$ such that $\bm{p}_{\mathrm{S}}(\phi_0)=\bm{p}_{\mathrm{T}}(\varphi_0)=\bm{p}_0$. In our analysis, we choose $\phi_0=\pi$ and $\varphi_0=0$.
Note that, for visual purposes, the vertical and horizontal axes are not to scale.
}
\label{spiraltouchp}
\end{figure}
As it has been shown in~\cite{KCKspiral1,KCKspiral2}, the high-energy ionization is unlikely unless the photoelectron momentum $\bm{p}$ approaches ${\bm p}_{\rm S}(\phi)$,
which is parametrized by the laser phase $\phi\in[0,2\pi]$ such that
\begin{equation}
\bm{p}^{\bot}_{\rm S}(\phi)=e\bm{A}(\phi), \quad p^{\|}_{\rm S}(\phi)=\frac{e^2\bm{A}^2(\phi)}{2m_{\mathrm{e}}c\sqrt{1-Z^2\alpha^2}}.
\label{gad7}
\end{equation}
Here, $\bm{p}^{\bot}_{\rm S}$ and $p^{\|}_{\rm S}$ are the perpendicular and parallel components of momentum ${\bm p}_{\rm S}(\phi)$ with respect to the direction
of propagation of the laser pulse ${\bm n}$, and have to be distinguished from the cylindrical coordinates introduced in Sec.~\ref{sec:twistfree}. They define
a curve in momentum space,
\begin{equation}
\bm{p}_{\mathrm{S}}(\phi)=\bm{p}^{\bot}_{\rm S}(\phi)+p^{\|}_{\rm S}(\phi)\bm{n},
\label{gad8}
\end{equation}
which we will call the {\it ionization spiral}. Several properties of the ionization probability distribution can be deduced from this analytical prediction~\eqref{gad8}.
For instance, for the laser pulse parameters considered in Fig.~\ref{fiCompare170914}, $\bm{p}_{\mathrm{S}}(\pi)$ (i.e., the value of ${\bm p}_{\rm S}$ at the pulse
maximum) defines the polar and azimuthal angles ($\theta_{\bm p}$ and $\varphi_{\bm p}$, respectively) at which the ionized electron is detected with the locally largest
probability distribution. These values are presented in the caption of Fig.~\ref{fiCompare170914}. Also, the kinetic energy corresponding to $\bm{p}_{\mathrm{S}}(\pi)$ determines the central energy
of the probability distribution, i.e., the energy at which the distribution is peaked. Note, however, that the predictions arising from the momentum spiral are valid
only for the high-energy ionization (i.e., for sufficiently intense pulses)~\cite{KCKspiral1,KCKspiral2}. Thus, even though we define
$\bm{p}_{\mathrm{S}}(\phi)$ for all possible laser phases $\phi$, its interpretation as photoelectron momentum detected with maximum probability is only valid for
the high-energy portion of ionization spectrum. Based on our numerical analysis, we can roughly quantify what `the high-energy portion of ionization spectrum' means.
Namely, it relates to photoelectron kinetic energies larger than $10|E_B|$~\cite{CKKsuper}.
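For illustration only (this is not the authors' numerical code), the spiral~\eqref{gad7}--\eqref{gad8} can be tabulated directly from the vector potential; here \texttt{eA\_of\_phi} is assumed to return the 3-vector $e\bm{A}(\phi)$ and \texttt{n\_dir} the unit propagation vector $\bm{n}$:
\begin{verbatim}
import numpy as np

def ionization_spiral(eA_of_phi, n_dir, Z=2, me_c=1.0):
    """Ionization spiral p_S(phi), Eqs. (gad7)-(gad8).

    eA_of_phi(phi) must return the 3-vector e*A(phi), perpendicular to
    the unit propagation vector n_dir; me_c is m_e*c in the chosen
    units and alpha is approximated by 1/137.
    """
    n_dir = np.asarray(n_dir, dtype=float)
    def p_S(phi):
        eA = np.asarray(eA_of_phi(phi), dtype=float)
        p_par = (eA @ eA) / (2.0 * me_c * np.sqrt(1.0 - (Z / 137.0) ** 2))
        return eA + p_par * n_dir        # p_S = p_S_perp + p_S_par * n
    return p_S
\end{verbatim}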
\begin{figure}
\includegraphics[width=6cm]{xf5x16amp2phase170920.eps}
\caption{Modulus squared of the probability amplitude of ionization ${\cal A}({\bm p}_{\rm T}(\varphi))$, in relativistic units, (upper panel)
and the derivative of its phase (bottom panel) as functions of the twist angle $\varphi$. While the time-averaged intensity of the laser pulse is
$5\times 10^{16}\,\mathrm{W/cm}^2$, the remaining parameters of the pulse are specified in Sec.~\ref{model}. In cylindrical coordinates
defined by the angles $\theta_{\mathrm{T}}=0.37\pi$ and $\varphi_{\mathrm{T}}=0$, the photoelectron final momentum is such that $p_{\|}=0.17m_{\mathrm{e}}c$
and $p_{\bot}=0.055m_{\mathrm{e}}c$. In addition, we take: $\phi=\pi$, $\delta p_{\|}=\delta p_{\bot}=\delta\varphi_{\mathrm{T}}=0$, and $\delta\theta_{\mathrm{T}}=-0.1\pi$,
meaning that $\beta_{\rm T}=0.1\pi$.
The solid blue line represents the results based on the Dirac equation (i.e., the relativistic SFA) with the initial and final electron spin projections on the direction
of laser pulse propagation. The dashed red line is for the quasi-relativistic approach ii) specified in Sec.~\ref{sec:comparison}. The results presented here are limited to twist angles
for which the modulus of the probability amplitude is sufficiently different from 0; otherwise, the determination of the phase is erratic.
}
\label{xf5x15amp2phase170920}
\end{figure}
\begin{figure}
\includegraphics[width=6cm]{xf1x16amp2phase170920.eps}
\caption{The same as in Fig.~\ref{xf5x15amp2phase170920}, but for the laser field intensity $10^{16}\,\mathrm{W/cm}^2$ and for the parameters:
$p_{\|}=0.075m_{\mathrm{e}}c$, $p_{\bot}=0.024m_{\mathrm{e}}c$, $\theta_{\mathrm{T}}=0.387\pi$, and $\varphi_{\mathrm{T}}=0$. The remaining parameters are still:
$\phi=\pi$, $\delta p_{\|}=\delta p_{\bot}=\delta\varphi_{\mathrm{T}}=0$, $\delta\theta_{\mathrm{T}}=-0.1\pi$, and $\beta_{\rm T}=0.1\pi$.
}
\label{xf1x16amp2phase170920}
\end{figure}
As we have stated above, in high-energy ionization, the photoelectrons with momenta far away from the spiral~\eqref{gad8} are emitted with very small probabilities.
Therefore, for an arbitrary choice of twisted momenta $\bm{p}_{\mathrm{T}}(\varphi)$, a very weak ionization signal is expected. A stronger signal will be obtained
only for those momenta $\bm{p}_{\mathrm{T}}(\varphi)$ which, for some values of the twist angle $\varphi$, approach $\bm{p}_{\mathrm{S}}(\phi)$ in momentum space. Now,
we shall construct such $\bm{p}_{\mathrm{T}}(\varphi)$.
Let us select a particular laser phase $\phi_0$ and define the momentum $\bm{p}_0=\bm{p}_{\mathrm{S}}(\phi_0)$, which points in the direction determined by the polar and
azimuthal angles $\theta_0$ and $\varphi_0$, respectively. Next, we fix the angles $\theta_{\mathrm{T}}$ and $\varphi_{\mathrm{T}}$ as follows
\begin{equation}
\theta_{\mathrm{T}}=\theta_0+\delta\theta_{\mathrm{T}}, \quad \varphi_{\mathrm{T}}=\varphi_0+\delta\varphi_{\mathrm{T}},
\label{gad9}
\end{equation}
with arbitrary increments $\delta\theta_{\mathrm{T}}$ and $\delta\varphi_{\mathrm{T}}$. These two angles ($\theta_{\rm T}$ and $\varphi_{\rm T}$) determine the cylindrical
coordinates with symmetry axis $\bm{n}_{\|}$ and two perpendicular vectors, $\bm{n}_{\bot,1}$ and $\bm{n}_{\bot,2}$~\eqref{twisttriad}. In this system of coordinates, we have
\begin{equation}
p_{0\|}=\bm{p}_0\cdot\bm{n}_{\|}, \quad p_{0\bot}=\sqrt{\bm{p}_0^2-p_{0\|}^2},
\label{gad10}
\end{equation}
and so, the family of twisted momenta $\bm{p}_{\mathrm{T}}(\varphi)$ is defined,
\begin{align}
\label{gad11}
\bm{p}_{\mathrm{T}}(\varphi)&=(p_{0\|}+\delta p_{\|})\bm{n}_{\|} \\
&+(p_{0\bot}+\delta p_{\bot})(\bm{n}_{\bot,1}\cos\varphi+\zeta_H\bm{n}_{\bot,2}\sin\varphi).
\nonumber
\end{align}
As before, we choose the helicity of the vortex state such that $\zeta_H=1$. In principle, the increments $\delta p_{\|}$ and $\delta p_{\bot}$ can be chosen
arbitrarily. However, they should be close to 0 for the twisted momenta $\bm{p}_{\mathrm{T}}(\varphi)$ to approach the spiral $\bm{p}_{\mathrm{S}}(\phi)$.
Such a choice of $\bm{p}_{\mathrm{T}}(\varphi)$ is schematically illustrated in Fig.~\ref{spiraltouchp} for $\phi_0=\pi$ (i.e., when both the strength of the laser
pulse and the length of $\bm{p}_0$ are maximum), $\delta p_{\|}=\delta p_{\bot}=\delta\varphi_{\mathrm{T}}=0$, and for $\delta\theta_{\mathrm{T}}=-0.1\pi$.
This means that the twisted momenta rotate on a cone with the half-opening angle $\beta_{\rm T}=0.1\pi$.
For these parameters, the curves $\bm{p}_{\mathrm{S}}(\phi)$ and $\bm{p}_{\mathrm{T}}(\varphi)$ are tangent to each other for $\phi=\pi$ and $\varphi=0$.
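A minimal Python sketch of this kinematic construction (for illustration only; the triad convention of Eq.~\eqref{twisttriad} is replaced here by the standard spherical-coordinate triad, and the pulse is assumed to propagate along the $z$ axis) is:
\begin{verbatim}
import numpy as np

def twisted_momenta(p0, d_theta=-0.1 * np.pi, d_varphi=0.0,
                    dp_par=0.0, dp_perp=0.0, zeta_H=1.0):
    """Family of twisted momenta p_T(phi), Eqs. (gad9)-(gad11).

    p0 is the reference momentum p_S(phi_0); the symmetry axis n_par
    is tilted from it by (d_theta, d_varphi), cf. Eq. (gad9).
    """
    p0 = np.asarray(p0, dtype=float)
    th = np.arccos(p0[2] / np.linalg.norm(p0)) + d_theta
    ph = np.arctan2(p0[1], p0[0]) + d_varphi
    n_par = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph), np.cos(th)])
    n_p1 = np.array([np.cos(th) * np.cos(ph),
                     np.cos(th) * np.sin(ph), -np.sin(th)])
    n_p2 = np.cross(n_par, n_p1)
    p_par0 = p0 @ n_par                            # Eq. (gad10)
    p_perp0 = np.sqrt(p0 @ p0 - p_par0 ** 2)
    def p_T(phi):                                  # Eq. (gad11)
        return ((p_par0 + dp_par) * n_par
                + (p_perp0 + dp_perp) * (np.cos(phi) * n_p1
                                         + zeta_H * np.sin(phi) * n_p2))
    return p_T
\end{verbatim}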
In Figs.~\ref{xf5x15amp2phase170920} and~\ref{xf1x16amp2phase170920}, we present the modulus squared and the phase derivative of the probability amplitude
of ionization $\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))$ as functions of the twist angle $\varphi$. While Fig.~\ref{xf5x15amp2phase170920}
relates to a time-averaged laser pulse intensity $I=5\times 10^{16}\,\mathrm{W/cm}^2$, Fig.~\ref{xf1x16amp2phase170920} is obtained for $I=10^{16}\,\mathrm{W/cm}^2$.
It can be seen in both figures that the probability amplitudes of ionization are large for $\varphi$ close to 0. This is the case discussed above in relation to
Fig.~\ref{spiraltouchp}, when the twisted momentum $\bm{p}_{\mathrm{T}}(\varphi)$ approaches the ionization spiral. This confirms numerically our earlier hypothesis.
In addition, we observe a significant dependence of the amplitude phase,
\begin{equation}
\Phi(\bm{p}_{\mathrm{T}}(\varphi))=\arg \mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi)),
\label{gad13}
\end{equation}
and its derivative,
\begin{equation}
\Phi'(\bm{p}_{\mathrm{T}}(\varphi))=\frac{\dd}{\dd \varphi}\Phi(\bm{p}_{\mathrm{T}}(\varphi)),
\label{gad13a}
\end{equation}
on the twist angle $\varphi$. It is also worth noting that Figs.~\ref{xf5x15amp2phase170920} and~\ref{xf1x16amp2phase170920} present the results based on the QRSFA
accounting fully for the mass corrections and electron recoil (dashed red curve) and based on the relativistic SFA (solid blue curve). A very good agreement between both theories
is observed not only for the modulus of the ionization probability amplitudes but also for the amplitude phases~\eqref{gad13}, up to an irrelevant constant term.
Once again we see that our quasi-relativistic approach correctly describes the high-energy ionization in the considered regime of parameters.
\subsection{OAM distributions}
\label{sec:OAM}
In this section, using the quasi-relativistic description, we will analyze probability distributions of generating the EVS carrying large
orbital angular momenta $m$. We will refer to them as the orbital angular momenta (OAM) distributions, $|{\cal A}_m(p_{\|},p_\perp)|^2$, where ${\cal A}_m(p_{\|},p_\perp)$
is implicitly defined in~\eqref{gad5}.
\begin{figure}
\includegraphics[width=7cm]{xfi1x16oam170920.eps}
\caption{The OAM distribution (upper panel), in relativistic units, for the family of vortex states represented in Fig.~\ref{xf1x16amp2phase170920}. The discrete derivative of the phase of the probability amplitude [cf., Eq.~\eqref{gad15}]
is also presented (lower panel). For visual purposes, in both panels the points corresponding to the integer values of $m$ have been connected by the solid line.
}
\label{xfi1x16oam170920}
\end{figure}
\begin{figure}
\includegraphics[width=7cm]{xfi1x16v1oam170920.eps}
\caption{The same as in Fig.~\ref{xfi1x16oam170920} but for $p_{\|}=0.078m_{\mathrm{e}}c$, $p_{\bot}=0.012m_{\mathrm{e}}c$, $\theta_{\mathrm{T}}=0.437\pi$, and $\varphi_{\mathrm{T}}=0$.
Moreover, $\phi=\pi$, $\delta p_{\|}=\delta p_{\bot}=\delta\varphi_{\mathrm{T}}=0$, and $\delta\theta_{\mathrm{T}}=-0.05\pi$; i.e., now the opening angle of the cone of twisted
momenta is two times smaller than in Fig.~\ref{xfi1x16oam170920}.
}
\label{xfi1x16v1oam170920}
\end{figure}
Some properties of OAM distributions can be anticipated already from Figs.~\ref{xf5x15amp2phase170920} and~\ref{xf1x16amp2phase170920}.
It follows from these figures that the phase derivative, $\Phi'(\bm{p}_{\mathrm{T}}(\varphi))$, is large. Because of the definition~\eqref{gad5} and general
properties of the Fourier transform, one can conclude that if $\Phi'(\bm{p}_{\mathrm{T}}(\varphi))$ takes large values then the EVS with substantial topological charges $m$ will be generated.
Additionally, since the second and higher derivatives of $\Phi(\bm{p}_{\mathrm{T}}(\varphi))$ are also significantly different from zero (contrary to what is observed
for the supercontinuum in ionization~\cite{KKsuper}, but similarly to what is predicted for the Compton process~\cite{KK2014a,KK2014b}), we can expect that the OAM distributions
will attain a chirp. This is illustrated in Fig.~\ref{xfi1x16oam170920}.
In the upper panel of Fig.~\ref{xfi1x16oam170920}, we show the discrete OAM distribution, $|\mathcal{A}_m(p_{\|},p_{\bot})|^2$, for the time-averaged
laser field intensity $10^{16}$~W/cm$^2$. The electron final momenta, calculated in the coordinate system determined by the angles
$\theta_{\mathrm{T}}=0.387\pi$ and $\varphi_{\mathrm{T}}=0$, are: $p_{\|}=0.075\, m_{\mathrm{e}}c$ and $p_{\bot}=0.024\, m_{\mathrm{e}}c$.
Note that in order to obtain the OAM probability distribution out of this figure, one has to multiply $|\mathcal{A}_m(p_{\|},p_{\bot})|^2$
by $p_{\bot}/(2\pi)^2$. Here, we observe a chirp-type structure with the dominant peak centered at $m=645$, which roughly corresponds to the maximum
value of $\Phi'(\bm{p}_{\mathrm{T}}(\varphi))$ presented in Fig.~\ref{xf1x16amp2phase170920}. Such a coincidence is in full agreement with
the general property of the Fourier transform, which states that the linear part of the phase is responsible for the `shift' of the Fourier components.
In our case, this shift occurs towards positive values of $m$. Had we considered the opposite circular polarization of the laser pulse [i.e., $\delta=-\pi/4$
in Eq.~\eqref{pad5}] we would observe an identical shift, but towards negative values. Moreover, for the higher laser pulse intensity $5\times 10^{16}\,\mathrm{W/cm}^2$,
as expected from the lower panel of Fig.~\ref{xf5x15amp2phase170920}, the probability distribution acquires its maximum values for larger $m$, $m\approx 3200$.
In the lower panel of Fig.~\ref{xfi1x16oam170920}, we plot the discrete derivative of the phase of the probability amplitude,
\begin{equation}
\Phi_m(p_{\|},p_{\bot})=\arg \mathcal{A}_m(p_{\|},p_{\bot}),
\label{gad14}
\end{equation}
defined as
\begin{equation}
\Delta\Phi_m(p_{\|},p_{\bot})=\Phi_m(p_{\|},p_{\bot})-\Phi_{m-1}(p_{\|},p_{\bot}) \mod 2\pi .
\label{gad15}
\end{equation}
Except for particular values of $m$, for which the ionization probability is very small, the phases of $\mathcal{A}_m(p_{\|},p_{\bot})$ increase approximately linearly with $m$, i.e.,
\begin{equation}
\Phi_m(p_{\|},p_{\bot})\approx\Phi_0(p_{\|},p_{\bot})+m\pi \mod 2\pi .
\label{gad16}
\end{equation}
Due to this regularity, the inverse discrete Fourier transform leads to the smooth dependence of $|\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))|$ and $\Phi(\bm{p}_{\mathrm{T}}(\varphi))$ on the twist angle $\varphi$.
In Fig.~\ref{xfi1x16v1oam170920}, we show the same as in Fig.~\ref{xfi1x16oam170920} but for a cone of twisted momenta that is twice as narrow, i.e., the half-opening angle of the cone
is now $\beta_{\mathrm{T}}=0.05\pi$. Both figures exhibit a similar behavior, except that the probability distribution is now peaked at roughly two times smaller values
of $m$, namely, $m\approx 325$. Similar studies carried out for a larger opening angle, with $\beta_{\mathrm{T}}=0.2\pi$, show that the maximum of the OAM distribution
is shifted towards larger values of the topological charge. Specifically, for $\beta_{\rm T}=0.2\pi$ such maximum is found at $m\approx 1120$. This demonstrates that, by changing the angles of
electron propagation $\theta_{\mathrm{T}}$ and $\varphi_{\mathrm{T}}$, one can select a group of vortex states of topological charges $m$ gathered around a specific value.
\begin{figure}
\includegraphics[width=7.5cm]{xfiPar2D171002.eps}
\caption{Color mappings of ionization probability distributions $|\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))|^2$ (upper panel) and $|\mathcal{A}_m(p_{\|},p_{\bot})|^2$
(lower panel) for the fixed $p_{\|}=0.075\, m_{\mathrm{e}}c$, and for the polar and azimuthal angles $\theta_{\mathrm{T}}=0.387\pi$ and $\varphi_{\mathrm{T}}=0$.
The maxima of the OAM distribution (lower panel) depend linearly on the photoelectron perpendicular momentum $p_{\bot}$. Both distributions are for
$\delta\theta_{\mathrm{T}}=-0.1\pi$ and $\delta\varphi_{\mathrm{T}}=0$.
}
\label{xfiPar2D171002}
\end{figure}
The family of twisted momenta~\eqref{gad11} with $\delta p_{\|}=\delta p_{\bot}=0$ represents the optimal choice for the generation of EVS photoelectron states.
This is well seen in Figs.~\ref{xfiPar2D171002} and~\ref{xfiPer2D171002}. Note that in both figures we refer to the cylindrical coordinate system
such that $\theta_{\mathrm{T}}=0.387\pi$ and $\varphi_{\mathrm{T}}=0$. Specifically, in the upper panel of Fig.~\ref{xfiPar2D171002} we show the color mapping of the probability
distribution $|{\cal A}({\bm p}_{\rm T}(\varphi))|^2$ as a function of the perpendicular momentum of
the final electron $p_\bot$ and the twist angle $\varphi$. Here, the results are for the fixed value of the electron parallel momentum
$p_{\|}=0.075\, m_{\mathrm{e}}c$. As expected, $|\mathcal{A}(\bm{p}_{\mathrm{T}}(\varphi))|^2$ reaches its maximum value at the twist angle $\varphi=0$ (i.e., when the
twisted momenta $\bm{p}_{\mathrm{T}}(\varphi)$ touch the ionization spiral $\bm{p}_{\mathrm{S}}(\phi)$ at the pulse maximum, $\phi=\pi$). This happens for
$p_{\bot}=0.024\, m_{\mathrm{e}}c$, in agreement with the results presented in Fig.~\ref{xf1x16amp2phase170920}. Also, the probability distribution
presented in the upper panel of Fig.~\ref{xfiPer2D171002} peaks at the exact same values.
\begin{figure}
\includegraphics[width=7.5cm]{xfiPer2D171002.eps}
\caption{The same as in Fig.~\ref{xfiPar2D171002} but as a function of $p_{\|}$ and for the fixed momentum $p_{\bot}=0.024\, m_{\mathrm{e}}c$.
Note that the maxima of the OAM distribution (lower panel) are independent of the photoelectron parallel momentum $p_{\|}$.
}
\label{xfiPer2D171002}
\end{figure}
In the lower panels of Figs.~\ref{xfiPar2D171002} and~\ref{xfiPer2D171002} we show the OAM distributions $|\mathcal{A}_m(p_{\|},p_{\bot})|^2$, which
consist of many parallel stripes. While the ones with the largest topological charge dominate, the sidebands characterized by smaller $m$ gradually disappear.
If we consider the case of fixed $p_{\|}$ (Fig.~\ref{xfiPar2D171002}), one can observe that the positions of maxima of the distribution change linearly with $p_{\bot}$.
This is understandable since the orbital angular momentum in the ${\bm n}_{\|}$-direction is a linear function of $p_\bot$, with a slope $x_{\bot}$.
The quantity $x_\bot$ can be interpreted as a perpendicular size of the EVS wave packet. Specifically, based on data plotted in Fig.~\ref{xfiPar2D171002},
we estimate for those vortex states that $x_\bot\approx 10$~nm. On the other hand, by considering the case of fixed $p_{\bot}$ (Fig.~\ref{xfiPer2D171002}),
the maxima of the OAM distribution are located at specific values of the topological charge, independently of $p_{\|}$. This means that, for the given $m$ and
$p_{\bot}$, the ionization probability distribution as a function of $p_{\|}$ forms a broad supercontinuum, similar to the one observed for photoelectrons
with linear momenta~\cite{KKsuper,KKcomb,KCKspiral1,KCKspiral2}. This opens up the possibility of employing such photoelectron wave packets in 5-d electron diffraction.
Such a technique, an extension of 4-d diffraction, which is based on the use of femtosecond electron wave packets (see, e.g., \cite{ESI2000,SM2011,Baum2013,IABPR2014,Oxley2017}),
would be able to probe helical (or magnetic) properties of matter at different times.
\section{Conclusions}
\label{sec:Conclusions}
We have studied generation of the EVS in ionization by short and intense laser pulses. For this purpose, we have developed a quasi-relativistic approach
going beyond our recent formulation presented in~\cite{KKpress}. As we have shown for near infrared laser pulses and intensities of the order of $10^{16}$~W/cm$^2$,
our modified QRSFA, which accounts for the recoil and relativistic mass corrections, gives results in good quantitative agreement with the relativistic SFA.
We have used this approach to demonstrate that vortex states of large topological charge (approaching 1000) are generated under such conditions.
It follows from our investigations that such states are detected provided that the family of twisted momenta
approaches the ionization spiral. The latter defines the region in momentum space where ionization occurs with significant probability~\cite{KCKspiral1,KCKspiral2}.
We have shown that, for the fixed perpendicular electron momentum $p_{\bot}$ and topological charge $m$, the ionization spectrum forms a
supercontinuum~\cite{KKsuper,KCKspiral1,KCKspiral2}. This means that the EVS might be interesting and important subjects for further studies,
as they can probe a new degree of freedom (namely, chirality) in electron diffraction experiments. In order to generate few femtosecond or
attosecond electron vortex wave packets, the creation of photoelectrons of relativistic energies is necessary~\cite{KCKspie}. In this case, however,
the free-electron states of well defined orbital angular momentum cannot be defined (see, e.g., Refs.~\cite{BB2017,Barnett2017}). This problem is going
to be explored in our further investigations.
\section*{Acknowledgements}
This work is supported by the National Science Centre (Poland) under Grant No. 2014/15/B/ST2/02203.
|
{
"timestamp": "2018-03-08T02:05:55",
"yymm": "1803",
"arxiv_id": "1803.02574",
"language": "en",
"url": "https://arxiv.org/abs/1803.02574"
}
|
\section{Discussion and CONCLUSION}
\label{Sec:Disc}
The experiments presented above show that the proposed methods can compensate for missing markers better than the state of the art, when the gap is long, especially when the motion is complex.
Unlike most of the alternatives, our method does not rely on future frames. This property makes it suitable for online use, where markers are reconstructed as they are collected.
Another notable property of the proposed method is that it can recover markers which are missing over a long period of time.
According to our experiments, the LSTM-based architecture models the correlations in the human body and hence recovers missing markers better than the window-based architecture.
In summary, the proposed methods can be used to recover markers over many frames in an accurate and stable way.
\section{Acknowledgements}
Authors would like to thank Simon Alexanderson and Judith Butepage for the useful discussions. This PhD project is supported by Swedish Foundation for Strategic Research Grant No.: RIT15-0107.
The data used in this project was obtained from mocap.cs.cmu.edu.
The database was created with funding from NSF EIA-0196217.
\begin{figure}[t]
\centering
\vspace{-2mm}
\subfloat[Ground truth markers]{~~~\includegraphics[width=0.815\linewidth]{boxing_frames_GT.png}~~~}~~\\
\subfloat[15 (out of 41) markers are missing]{~~~\includegraphics[width=0.815\linewidth]{boxing_noisy.png}~~~}\\
\subfloat[Burke{[3]} reconstruction result]{~~~\includegraphics[width=0.815\linewidth]{boxing_burke.png}~~~}\\
\subfloat[LSTM (ours) reconstruction result]{~~~\includegraphics[width=0.815\linewidth]{boxing_our.png}~~~}
\caption{Three keyframes from the boxing test sequence, illustration of the reconstruction using the Burke and LSTM (ours) methods.}
\label{fig:VisualResults}
\end{figure}
\section{Dataset}
\label{sec:Data}
We evaluate our method on the popular benchmark CMU Mocap dataset \cite{CMUdata}. This database contains
2235 mocap sequences of 144 different subjects. We use the recordings of 25 subjects, sampled at the rate of 120 Hz, covering a wide range of activities, such as boxing, dancing, acrobatics and running.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{Markers.png}
\vspace{-2mm}
\caption[Marker placement]{Marker placement in the CMU Mocap dataset (mocap.cs.cmu.edu).}
\label{fig:markers}
\end{figure}
\vspace{-2mm}
\subsection{Preprocessing}
We start preprocessing by transforming every mocap sequence into the hips-center coordinate system. First, joint angles from the BVH file are transformed into the 3D coordinates of the joints. The 3D coordinates are then translated to the center of the hips by subtracting the hip coordinates from each marker.
We then normalize the data into the range $[-1,1]$ by subtracting the mean pose over the whole dataset and then dividing all values by the absolute maximal value in the dataset.
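As an illustration, a minimal Python sketch of this preprocessing (not the implementation released with this paper; the forward-kinematics step from joint angles to 3D positions is omitted, and the index of the hips-center marker is an assumption of the sketch) could look as follows:
\begin{verbatim}
import numpy as np

def preprocess(sequences, hip_index=0):
    """Hip-centering and normalization sketch (not the released code).

    sequences: list of arrays of shape (T, n_markers, 3) holding 3D
    marker positions; hip_index marks the hips-center marker and is
    an assumption of this sketch.
    """
    centered = [s - s[:, hip_index:hip_index + 1, :] for s in sequences]
    flat = [c.reshape(len(c), -1) for c in centered]        # (T, 3n)
    all_frames = np.concatenate(flat, axis=0)
    mean_pose = all_frames.mean(axis=0)
    max_abs = np.abs(all_frames - mean_pose).max()          # -> [-1, 1]
    return [(f - mean_pose) / max_abs for f in flat], mean_pose, max_abs
\end{verbatim}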
\subsection{Data Explanation}
\label{sec:marker}
The CMU dataset contains 3D positions of a set of markers, which were recorded by the mocap system at CMU. An example of the marker placement during capture can be seen in Figure~\ref{fig:markers}. All details can be found in the dataset description \cite{CMUdata}.
The human pose at each time-frame $t$ is represented as a vector of the marker 3D coordinates: $\mathbf{x}_t = [x_{i,t}, y_{i,t}, z_{i,t}] _{i=1:n} $, where $n$ denotes the number of markers used during the mocap collection. In the CMU data, $n=41$, and the dimensionality of a pose is $3n=123$.
A sequence of poses is denoted $\mathbf{x} = [\mathbf{x}_t]_{t=1:T}$.
\subsection{Training, Validation, and Test Data Configurations}
\label{sec:dataset_config}
\par The \textit{validation} dataset contains 2 sequences from each of the following motions: pantomime, sports, jumping, and general motions\footnote{pantomime (subjects 32 and 54), sports (subjects 86 and 127), jumping (subject 118), and general motions (subject 143)}.
The \textit{test} dataset contains
basketball, boxing and jump-turn\footnote{102\_03 (basketball), 14\_01 (boxing), and 85\_02 (jump-turn).} sequences.
The \textit{training} dataset contains all the sequences not used for validation or testing, from 25 different folders in the CMU Mocap Dataset (such as 6, 14, 32, 40, 141, and 143), which include the testing motion types as well.
Subjects from the training dataset were also present in the test and validation datasets. Generalization to the novel subjects and motion types was tested experimentally.
\section{Experiments}
\label{secExp}
Following common practice \cite{peng2015hierarchical,wang2016human,burke2016estimating}, we use the Root Mean Squared Error (RMSE) over the missing markers to measure the reconstruction error.
\iffalse
\begin{equation}
rmse(\mathbf{x},\mathbf{\hat{x}}) = \sqrt []{ \frac{ \mid \mid \mathbf{x} -\mathbf{\hat{x}} \mid \mid ^2} {n_e}} ,
\end{equation}
where $\mathbf{x}$ is the original motion, $\mathbf{\hat{x}}$ is the recontructed one and $n_e$ is the amount of missing markers.
\fi
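For clarity, the metric can be written as a short function (a sketch, not the actual evaluation code), where the error is averaged only over the coordinates that were actually missing:
\begin{verbatim}
import numpy as np

def rmse_missing(x_true, x_rec, missing_mask):
    """RMSE over the missing marker coordinates only.

    missing_mask is a boolean array of the same shape as the pose
    sequences, True where a coordinate was missing in the input.
    """
    diff = (np.asarray(x_true) - np.asarray(x_rec))[missing_mask]
    return np.sqrt(np.mean(diff ** 2))
\end{verbatim}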
\textbf {Implementation Details:}
All methods were implemented using TensorFlow~\cite{abadi2016tensorflow}. The code is publicly available\footnote{https://github.com/Svito-zar/NN-for-Missing-Marker-Reconstruction}.
For \textbf{training} purposes, we extract short sequences from the dataset with a sliding window, then shuffle them and feed them to the network. The training was done using the Adam optimizer \cite{kingma2014adam} with a batch size of 32.
The \textbf{hyperparameters} for both architectures were optimized w.r.t.~the validation dataset (Section~\ref{sec:dataset_config}) using grid search.
Table \ref{hyperparams} contains the main hyper-parameters.
\begin{table}
\caption{Hyperparameters for our NNs.\\ $\alpha$ is initial learning rate, $\Delta t$ is sequence length.}
\begin{tabular}
{|l|c|c|c|c|c|}
\hline
NN-type & Width & Depth & Dropout & $\alpha$ & $\Delta t$\\
\hline
LSTM& 1024 & 2 & 0.9 & 0.0002 & 64 \\
Window& 512 & 2 & 0.9 & 0.0001 & 20\\
\hline
\end{tabular}
\label{hyperparams}
\end{table}
\subsection{Comparison to the State of the Art}
First, the models presented above are evaluated in the same setting as most of the other random missing marker reconstruction methods. A specific fraction of randomly chosen markers (10\%, 20\%, or 30\%) is removed over a few time-frames, and each method is applied to recover them. The length of the gap was sampled from a Gaussian distribution with mean 10 and standard deviation 5, following the state-of-the-art settings \cite{wang2016human}. The reconstruction error is then measured.
There is randomness in the system: in the initialization of the network weights and in the choice of missing markers. Therefore, every experiment is repeated 3 times, and the mean and standard deviation of the error are reported.
Table \ref{CompWithOthers} compares the performance of our system with 3 state-of-the-art methods and with the simplest solution (linear interpolation) as a baseline, on 3 action classes from the CMU Mocap dataset. We repeated the experiments from \cite{burke2016estimating} using the same hyperparameters as in their original paper. The results of the Wang method \cite{wang20163d} were taken from the diagram in their paper. Finally, the error measures of the Peng method \cite{peng2015hierarchical} were rescaled, since their paper averages the error over all the markers, whereas we average only over the missing markers.
Table \ref{CompWithOthers} shows that standard interpolation outperforms all the state-of-the-art methods, including ours. A probable reason is that the duration of the gap is short (less than 0.1~s), so it is easy to interpolate between existing frames.
We will, therefore, study a more challenging scenario, in which markers are missing over longer periods and more markers are missing. In that scenario we can compare only with the Burke method, as it is the only one with an available implementation.
\begin{table}[t]
\caption{Comparison to the state of the art in missing marker reconstruction. RMSE in marker position is in cm. A training set comprises all activities. $^*$The numbers from \cite{wang20163d} were extracted from a diagram.}
\vspace{-2mm}
\centering
\subfloat[10\% of the markers in each input frame are missing.]{
\begin{tabular}{|l|c|c|c|}
\hline
Method & Basketball & Boxing & Jump turn\\
\hline\hline
Interpolation & 0.64$\pm 0.03$ & 1.06$\pm 0.12$ & \textbf{1.74}$\pm 0.3$\\
Wang \cite{wang20163d} & \textbf{0.4}$^*$ & \textbf{0.5}$^*$ & n.a.\\
Peng \cite{peng2015hierarchical} & n.a. & n.a. & n.a.\\
Burke \cite{burke2016estimating} & 4.56 $\pm 0.17$ & 3.47$\pm 0.19$ & 15.97$\pm 1.34$ \\
Window (ours) & 2.34 $\pm 0.27$ & 2.61 $\pm 0.21$ & 4.4 $\pm 0.5$ \\
LSTM (ours) & 1.21$\pm 0.02$ &1.44$\pm 0.02$ & 2.52$ \pm 0.3$\\
\hline
\end{tabular}}
\par\medskip
\subfloat[20\% of the markers in each input frame are missing.]{
\begin{tabular}{|l|c|c|c|}
\hline
Method & Basketball & Boxing & Jump turn\\
\hline\hline
Interpolation & \textbf{0.67}$\pm 0.04$ & \textbf{1.09}$\pm 0.07$ & \textbf{1.91}$\pm 0.31$
\\
Wang \cite{wang20163d} & 1.6$^*$ & 1.5$^*$ & n.a.\\
Peng \cite{peng2015hierarchical} & n.a. & 4.94 & 5.12 \\
Burke \cite{burke2016estimating} & 4.18 $\pm 0.48$ & 3.98$\pm 0.07$ & 27.1$\pm 1.21$ \\
Window (ours) & 2.42 $\pm 0.32$ & 2.77 $\pm 0.13$ & 4.3 $\pm 0.75$ \\
LSTM (ours) & 1.34$\pm 0.01$ & 1.58$\pm 0.04$ & 2.67$\pm 0.2$\\
\hline
\end{tabular}}
\par\medskip
\subfloat[30\% of the markers in each input frame are missing.]{
\begin{tabular}{|l|c|c|c|}
\hline
Method & Basketball & Boxing & Jump turn\\
\hline\hline
Interpolation & \textbf{0.7}$\pm 0.1$ & \textbf{1.21}$\pm 0.14$ & \textbf{2.29}$\pm 0.3$
\\
Wang \cite{wang20163d} & 0.9$^*$ & 0.9$^*$ & n.a.\\
Peng \cite{peng2015hierarchical} & n.a. & 4.36 & 4.9 \\
Burke \cite{burke2016estimating} & 4.23 $\pm 0.57$ & 4.01$\pm 0.26$ & 34.9 $\pm 2.55$ \\
Window (ours) & 2.33 $\pm 0.13$ & 2.63 $\pm 0.08$ & 4.53 $\pm 0.48$ \\
LSTM (ours) & 1.48$\pm 0.03$&1.75$\pm 0.07$ & 3.1$ \pm 0.25$\\
\hline
\end{tabular}}
\label{CompWithOthers}
\end{table}
\subsection{Gap duration analysis}
In the next experiments, we varied the length of the gap and kept the number of missing markers fixed at 5. As before, we averaged the performance over 3 runs.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Gap_duration.png}
\caption{Dependency on the duration of the gap. Basketball motion. 5 missing markers.}
\label{fig:length_analysis}
\end{figure}
Figure \ref{fig:length_analysis} shows that our methods can be applied to gaps of any length, while the performance of the other methods degrades steadily as the gap length increases. Interpolation-based methods struggle to reconstruct markers when gaps become longer. Our method, in contrast, propagates information about the marker positions through the hidden state and is hence robust to long gaps.
\subsection{Very long gaps}
\begin{figure}[p]
\centering
\vspace{-2mm}
\subfloat[Basketball: 3 markers missing]{~~~\includegraphics[width=0.92\linewidth]{basketball_long_3mm.png}~~~}~~\\
\subfloat[Basketball: 30 markers missing]{~~~\includegraphics[width=0.92\linewidth]{basketball_long_30mm.png}~~~}\\
\subfloat[Boxing: 3 markers missing]{~~~\includegraphics[width=0.92\linewidth]{boxing_long_3mm.png}~~~}\\
\subfloat[Boxing: 30 markers missing]{~~~\includegraphics[width=0.92\linewidth]{boxing_long_30mm.png}~~~}
\caption{A few markers were missing over 5 seconds. All the markers were present for 1s before and after the gap.}
\label{fig:long_gap}
\end{figure}
In the following experiment, the same markers were missing over a long period of time. The measurement period started 1.5 seconds into the clip to avoid artifacts. Then, for 1 second, all markers were present, followed by a 5-second window during which certain markers were missing the entire time. Afterwards, all the markers were present again. Each experiment was repeated 5 times and the mean result was recorded.
We can clearly see in Figure \ref{fig:long_gap} that while interpolation and Burke~\cite{burke2016estimating} quickly lose track of the markers, our methods stay stable and accurate. This holds for all the scenarios. Figure \ref{fig:long_gap}(b,d) shows that all methods except interpolation degrade significantly when most of the markers are missing. This indicates that those methods use information about the other markers, not only about the past or the future of a particular marker.
\subsection{Visualization of the Results}
Figure~\ref{fig:VisualResults} illustrates the reconstruction results for one of the test sequences, boxing.
The subject is boxing with their right arm. The observed marker cloud (Figure~\ref{fig:VisualResults}b) is missing 15 markers.
Our reconstruction result (Figure~\ref{fig:VisualResults}d) is visually close to the ground truth (Figure~\ref{fig:VisualResults}a), which is also supported by the numerical errors.
\subsection{Generalization}
\begin{table}
\caption{Generalization test for the LSTM network. 20\% of markers missing. The complete dataset contains motions from all subjects and of all types. Then all motions of the test subject were removed. Finally, all recordings with the same type of motion were removed. The reconstruction error is given in cm.}
\vspace{-3mm}
\begin{tabular}{|l|c|c|}
\hline
Motion / dataset & Basketball & Boxing \\
\hline
Complete & 7.9 $\pm{ 0.14}$ & 2.08 $\pm{ 0.5}$ \\
w/o the subject & 9.93 $\pm{ 0.96}$ & 2.13 $\pm{ 0.42}$ \\
w/o the motion & 8.54 $\pm{ 1.04}$ & 2.57 $\pm{ 1.18}$ \\
\hline
\end{tabular}
\label{generalization_lstm}
\end{table}
\begin{table}
\caption{Generalization test for the Window-based network. 20\% of markers missing. The same setup as in Table \ref{generalization_lstm}.}
\vspace{-3mm}
\begin{tabular}{|l|c|c|}
\hline
Motion / dataset & Basketball & Boxing \\
\hline
Complete & 5.59 $\pm{ 0.29}$ & 3.54 $\pm{ 0.15}$ \\
w/o the subject & 5.68 $\pm{ 0.48}$ & 4.37 $\pm{ 1.13}$ \\
w/o the motion & 6.52 $\pm{ 0.54}$ & 4.58 $\pm{ 1.51}$ \\
\hline
\end{tabular}
\label{generalization_window}
\end{table}
Up to now, our models, as well as the baselines, have been trained with all motions and all individuals.
In the final experiments, we evaluated the generalization capability with respect to motion type and individual. To this end, we removed all the recordings of the test subject from the training data.
Furthermore, for each test motion, we created a training set where this motion was removed.
We then evaluated our networks while having 20\% of the markers missing for gaps of 100 frames (almost 1 second).
Table \ref{generalization_lstm} illustrates the results for the LSTM-based network and Table \ref{generalization_window} for the Window-based method. We can observe that the performance drop is not dramatic: it is less than 25\% and depends on the motion type and the network architecture. It is important to note that the variance is significantly higher for the ``generalization'' scenarios, meaning that the system is less stable.
\iffalse
In the next experiment, we evaluate if our network can generalize to a new scenario as well. Table \ref{generalization_2} illustrates results of experiments when missing rate (mr) at testing and training are different. The network performs best when trained on the highest amount of noise. We must note that the performance degrades if our system is tested in different conditions as trained, but this degradation is negligible.
\begin{table}
\caption{Generalization test on a motion basketball. Mr stays for missing rate.}
\begin{tabular}{|l|c|c|c|}
\hline
Result & 10\% mr in training & 20\% mr in tr. & 30\% mr in tr.\\
\hline
10\% mr in test & 1.66 & 1.5 & 1.79 \\
20\% mr in test & 1.82 & 1.58 & 1.87\\
30\% mr in test & 2.15 & 1.74 & 1.9\\
\hline
\end{tabular}
\label{generalization_2}
\end{table}
\fi
\iffalse
The following experiments compare our system with the method of Burke and Lasenby \cite{burke2016estimating} (for which we have access to the implementation) in this setup. We focus on the markers of the hand because this part of the body is the most articulated and the hardest to reconstruct.
It should be noted that while the method of Burke and Lasenby \cite{burke2016estimating} requires a complete sequence, our method only uses the past frames, making it suitable for online estimation.
Results for the basketball test sequence, frames 20-110, are illustrated in Figure~\ref{fig:long_term}a. Results for the other motions, frames 480-570, are illustrated in Figure~\ref{fig:long_term}b,c,d. We can observe that the Burke and Lasenby method performs much better for the "boxing" sequence in comparison to "basketball". The probable reason for that is that the "basketball" sequence is much longer and hence provides more information for interpolation. Our system performs better than the one of Burke and Lasenby when just one marker is missing but degrades steeply when the number of missing markers increase. During the training, the data had random missing markers, so either previous time-steps or the neighboring markers were present for each marker, but in the test condition those markers have neither information about the previous time-step, nor about their neighbors, so our system could not recover them.
\fi
This experiment indicates that our systems can recover unseen motions, performed by unseen individuals, albeit
with slightly worse performance.
\section{Introduction}
Often a digital representation of human motion is needed. This representation is useful in a wide range of scenarios: mapping an actor's performance to a virtual avatar (in movie productions or in the game industry); predicting or classifying a motion (in robotics);
trying on clothes in a digital mirror; etc.
A common way to obtain this digital representation is through marker-based optical motion capture (mocap) systems. Such systems use a large number of cameras to triangulate the positions of optical markers. These are then used to reconstruct the motion of the objects to which the markers are attached.
All motion capture systems suffer, to a greater or lesser degree, from missing marker detections, due to occlusions (fewer than two cameras see the marker) or marker detection failures.
In this paper, we propose a method for
reconstruction of missing markers to create a more complete pose estimate (see Figure~\ref{fig:intro}). The method exploits knowledge about spatial and temporal correlation in human motion, learned from data examples to remove position noise and fill in missing parts of the pose estimate (Section~\ref{secMeth}).
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{intro_pic.pdf} %
\caption{Illustration of our method for missing marker reconstruction. Due to errors in the capturing process, some markers are not captured. The proposed method exploits spatial and temporal correlations to reconstruct the pose of the missing markers.}
\label{fig:intro}
\end{figure}
A number of methods have been proposed within the Graphics community to address the problem of
missing marker reconstruction. The traditional approach \cite{burke2016estimating,peng2015hierarchical} is interpolation within the current sequence. Wang et al.\ proposed a method that exploits motion examples to learn typical correlations \cite{wang2016human}. The novelty of our method with respect to theirs is that, while they learn linear dependencies, we employ a Neural Network (NN) methodology, which enables modeling of more complicated spatial and temporal correlations in sequences of human poses.
In Section~\ref{secExp} we demonstrate the effectiveness of our network, showing that our method outperforms the state of the art in missing marker reconstruction in various conditions.
Finally, we discuss our results in Section~\ref{Sec:Disc}.
\section{Method overview}
\label{secMeth}
In the following section, we give a mathematical problem formulation and an overview of the proposed approach.
\subsection{Missing Markers}
\label{subsec_Noise}
Missing markers in real life correspond to the failure of a sensor in the motion capture system.
In our experiments, we use mocap data without missing markers.
Missing markers are emulated by nullifying some marker positions in each frame. This process can be mathematically formulated as a multiplication of the mocap frame $\mathbf{x}_t$ by a binary matrix $M_t$:
\begin{equation}
\label{eq:missing}
\mathbf{\hat{x}}_t = C(\mathbf{x}_t) = M_t \mathbf{x}_t,
\end{equation}
where $M_t \in \{0,1\}^{3n \times 3n}$ is a binary matrix such that all 3 coordinates of any marker are either missing or present at the same time.
Every marker is missing over a few time-frames. The percentage of missing values is usually referred to as the \textit{missing rate}.
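A minimal sketch of this corruption step (illustration only; the choice of which markers to drop and for how long follows the experimental protocol described in Section~\ref{secExp}) is:
\begin{verbatim}
import numpy as np

def corrupt(x_t, missing_rate, n_markers=41, rng=None):
    """Emulate missing markers: x_hat_t = M_t x_t.

    x_t is a pose vector of length 3*n_markers; whole markers (all 3
    coordinates) are zeroed out with probability missing_rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_t = np.asarray(x_t, dtype=float)
    present = rng.random(n_markers) >= missing_rate
    mask = np.repeat(present, 3).astype(x_t.dtype)   # diagonal of M_t
    return mask * x_t, mask
\end{verbatim}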
\subsection{Missing Marker Reconstruction as Function Approximation}
Missing marker reconstruction is defined in the following way: Given a human motion sequence $\mathbf{\hat{x}}$ corrupted by missing markers, the goal is to reconstruct the true pose $\mathbf{x}_t$ for every frame $t$.
We approach missing markers reconstruction as a function approximation problem: The goal is to learn a reconstruction function $R$ that approximates the inverse of the corruption function $C$ in Eq.~(\ref{eq:missing}). This function would map the sequence of corrupted poses to an approximation of the true poses:
\begin{equation}
\mathbf{x} = R(\mathbf{\hat{x}}) \approx C^{-1}(\mathbf{\hat{x}})
\end{equation}
The corruption $C$ discards information, so it is not invertible. However, its inverse can be approximated by learning
spatial and temporal correlations in human motion in general, from a set of other pose sequences.
We propose to use a Neural Network (NN) approach to learn $R$, as NNs are well known to be powerful tools for function approximation \cite{hornik1989multilayer}.
We employ two different types of neural network models, which are described in the following sections. Both of them use the principle of the Denoising Autoencoder \cite{vincent2008extracting}: during training, additive Gaussian noise is injected into the input:
\begin{equation}
\label{eq:missing_noise}
\mathbf{\hat{x}}_t = \hat{C}(\mathbf{x}_t) = M_t \bigl( \mathbf{x}_t + \mathcal{N}(0, \alpha\,\sigma(X))\bigr),
\end{equation}
where $\sigma(X)$ is the standard deviation of the training dataset and $\alpha$ is a coefficient of proportionality, which we call \textit{the noise parameter}; it was experimentally set to 0.3.
Denoising is commonly used to regularize encoder-decoder NNs, and our experiments showed it to be beneficial in our application as well.
The network thus learns to remove noise while reconstructing missing values. During testing, no noise is injected. Our two methods are compared to each other and to the state of the art in missing marker reconstruction in Section~\ref{secExp}.
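A sketch of the corresponding training-time corruption (illustration only; it combines the masking described above with the additive Gaussian noise, $\alpha=0.3$) is:
\begin{verbatim}
import numpy as np

def corrupt_for_training(x_t, missing_rate, sigma_train, alpha=0.3,
                         n_markers=41, rng=None):
    """Training corruption: M_t * (x_t + N(0, alpha * sigma_train))."""
    rng = np.random.default_rng() if rng is None else rng
    x_t = np.asarray(x_t, dtype=float)
    noisy = x_t + rng.normal(0.0, alpha * sigma_train, size=x_t.shape)
    present = rng.random(n_markers) >= missing_rate
    return np.repeat(present, 3).astype(x_t.dtype) * noisy
\end{verbatim}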
\section{Neural Network Architectures}
In this section, the two versions of the method are explained.
\begin{figure}
\centering
\subfloat[LSTM-based]{~~~\includegraphics[width=0.27\linewidth]{LSTM-based.pdf}~~~}~~
\subfloat[Window-based]{~~~\includegraphics[width=0.35\linewidth]{window-based.pdf}~~~}
\vspace{-2mm}
\caption{Illustration of the two architecture types. (a) LSTM-based architecture (Section~\ref{subsec:LSTM}). (b) Window-based architecture (Section~\ref{subsec:Window}).}
\label{fig:models}
\end{figure}
\subsection{LSTM-Based Neural Network Architecture}
\label{subsec:LSTM}
Long Short-Term Memory (LSTM) \cite{lstm1997} is a special type of Recurrent Neural Network (RNN). It was designed as a solution to the vanishing gradient problem \cite{hochreiter1998vanishing} and has become a default choice for many problems that involve sequence-to-sequence mapping \cite{sutskever2014sequence, donahue2015long, shi2017end}.
\iffalse
\begin{equation}
c_t = f_t c_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c )
\end{equation}
\begin{equation}
f_t = \sigma (W_{xf} * x_t + W_{hf} h_{t-1} + W_{cf} c{t-1} + b_{f} )
\end{equation}
\begin{equation}
o_t = \sigma (W_{xo} x_t + W_{ho} h_{t-1}
\end{equation}
\fi
Our network is based on LSTM cells and is illustrated in Figure~\ref{fig:models}a. The input at each time step is a corrupted pose $\mathbf{\hat{x}}_t$, and the output layer produces the corresponding true pose $\mathbf{x}_t$.
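A minimal TensorFlow/Keras sketch of such a network (illustration only, not the released implementation; width, depth, and learning rate follow Table~\ref{hyperparams}, while the table's dropout value of 0.9 is read here as a keep probability, i.e., a drop rate of 0.1) is:
\begin{verbatim}
import tensorflow as tf

DIM = 123   # 41 markers x 3 coordinates (Section "Data Explanation")

def build_lstm_model():
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(1024, return_sequences=True, dropout=0.1,
                             input_shape=(None, DIM)),
        tf.keras.layers.LSTM(1024, return_sequences=True, dropout=0.1),
        # Map the hidden state back to a full pose at every time step.
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(DIM)),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(2e-4), loss="mse")
    return model
\end{verbatim}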
\subsection{Window-based Neural Network architecture}
\label{subsec:Window}
An alternative approach is to use a range of previous time-steps explicitly, and to train a regular Fully Connected Neural Network (FCNN) with the current pose along with a short history, i.e., a window of poses over time $(t - \Delta t):t$.
This network is illustrated in Figure~\ref{fig:models}b. The input layer is a window of concatenated corrupted poses $[\mathbf{\hat{x}}^T_{t-\Delta t}, ..., \mathbf{\hat{x}}^T_t]^T$. The output layer is the corresponding window of true poses $[\mathbf{x}^T_{t-\Delta t}, ..., \mathbf{x}^T_t]^T$. In between, there are a few hidden fully connected layers.
This structure is inspired by the sliding time-window-based method of B\"utepage et al.~\cite{butepage2017deep}, but is adapted to pose reconstruction. For example, there is no bottleneck middle layer and there are fewer layers in general, to create a tighter coupling between the corrupted and real pose rather than learning a high-level and holistic mapping of a pose. We also use a window length of $T=10$, instead of 100, based on the performance on the validation dataset.
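A corresponding Keras sketch of the window-based network (illustration only; width and depth follow Table~\ref{hyperparams}, the window length $T=10$ is the value quoted above, and the hidden activation is an assumption of this sketch) is:
\begin{verbatim}
import tensorflow as tf

DIM, WINDOW = 123, 10   # pose size and window length

def build_window_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu",
                              input_shape=(WINDOW * DIM,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(WINDOW * DIM),  # reconstructed pose window
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model
\end{verbatim}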
\section{Related work}
\label{sec:relWork}
The task of modeling human motion from mocap data has been studied quite extensively in the past. Here we give a short review of the works most closely related to ours.
\subsection{Missing Marker Reconstruction}
It is possible to do 3D pose estimation even with affordable sensors such as Kinect. However, all motion capture systems suffer to some degree from missing data.
This has created a need for methods for \textit{missing marker reconstruction}.
The missing marker problem has traditionally been formulated as a matrix completion task. Peng et al.~\cite{peng2015hierarchical} solve it by non-negative matrix factorization, using the hierarchy of the body to break the motion into blocks. Wang et al.~\cite{wang2016human} follow the idea of decomposing the motion and perform dictionary learning for each body part; they train their system separately for each type of motion. Burke and Lasenby~\cite{burke2016estimating} apply PCA first and then Kalman smoothing in the lower-dimensional space. Gloersen and Federolf~\cite{gloersen2016predicting} used weighted PCA to reconstruct markers. Taylor et al.~\cite{taylor2007modeling} applied a Conditional Restricted Boltzmann Machine based on binary latent variables. Both Taylor et al.~\cite{taylor2007modeling} and Gloersen and Federolf~\cite{gloersen2016predicting} were limited to cyclic motions, such as walking and running. Most of these methods are based on linear algebra and make strong assumptions about the data: each marker is often assumed to be present at least at one time-step in the sequence.
Moreover, due to the linear models, they often struggle to reconstruct irregular and complex motion.
The limitations discussed above motivate a neural network approach to the missing marker problem. Mall et al.~\cite{mall2017deep} successfully applied a deep neural network based on bidirectional LSTMs to denoise human motion and recover missing markers. Our approach is similar to theirs, but we use a simpler network, which requires less data and fewer computational resources. We also experiment with two different ways to handle the sequential character of the problem, whereas they consider only one.
\subsection{Denoising}
Another related task which can be tackled with our networks is removing additive noise from the marker data. Recently, Holden~\cite{holden2018robust} applied a similar approach to this problem, employing a neural network that takes noisy markers as input and returns a clean body pose as output. The main difference is that our network is also capable of reconstructing missing values and takes sequential information into account.
\subsection{Prediction}
A highly related problem is to predict human motion some time into the future.
State-of-the-art methods try to ensure continuity either by using Recurrent Neural Networks (RNNs) \cite{fragkiadaki2015recurrent,jain2016structural} or by feeding many time-frames at the same time \cite{butepage2017deep}. While our focus is not on prediction, our network architectures are inspired by those methods; since our application is different, the architectures are adapted accordingly.
Another related paper is the work of B\"utepage et al.~\cite{butepage2017deep}, who use a sliding window and a Fully Connected Neural Network (FCNN) to do motion prediction and classification. Again, since our problem is different, we modify their network,
using a much shorter window length, fewer layers, and no bottleneck.
\section{Introduction}
\subsection{One-relator groups}
The beginnings of combinatorial group theory are often identified with
Dehn's articulation of the word, conjugacy and isomorphism problems
\cite{dehn_unendliche_1911}, and Magnus' solution of the word problem
for one-relator groups was an early triumph of the subject
\cite{magnus_identitatsproblem_1932}. The contemporary approach to
these decision problems takes the geometric route: to solve them in a
class of groups $\mathcal{C}$, one first shows that the groups in
$\mathcal{C}$ admit some kind of geometric structure. The fundamental
example is the class of word-hyperbolic groups, for which the word,
conjugacy and isomorphism problems have all been solved. Related
techniques can be applied to handle other important classes:
3-manifold groups, sufficiently small-cancellation groups and fully
residually free groups, to name a few.
After a century of progress, it is remarkable that the class of
one-relator groups is still almost untouched by geometric techniques,
and the conjugacy and isomorphism problems remain wide open. Many
one-relator groups are word-hyperbolic -- all one-relator groups with
torsion, and a randomly chosen one-relator group is $C'(1/6)$ -- but
there is also a menagerie of non-hyperbolic examples, including
Baumslag--Solitar groups, Baumslag's example
\cite{baumslag_non-cyclic_1969}, Gersten's free-by-cyclic group
\cite{gersten_automorphism_1994}, fundamental groups of two-bridge
knot complements, and the recent examples of Gardam--Woodhouse
\cite{gardam_geometry_2017}.
In this paper, we present theorems about the structure of
one-relator groups which begin to suggest a general geometric
classification. The starting point for these results is a recent
result established independently by the authors \cite{louder-wilton}
and by Helfer--Wise \cite{helfer-wise}: the presentation complex ${X}$
of a torsion-free one-relator group has \emph{non-positive
immersions}, meaning that every connected, finite complex ${Y}$ that
immerses into ${X}$ either has $\chi({Y})\leq 0$ or Nielsen
reduces\footnote{See Definition \ref{defn: Nielsen reduction} for the
definition of Nielsen reduction. For now it suffices to know that
Nielsen reduction is stronger than homotopy equivalence and weaker
than collapsibility.} to a point. We investigate the negatively
curved analogue of this definition.
\begin{definition}\label{defn: Negative immersions}
A compact 2-complex $X$ has \emph{negative immersions} if, for every
immersed, compact, connected 2-complex $Y\looparrowright X$, either
$\chi(Y)<0$ or $Y$ Nielsen reduces to a graph.
\end{definition}
On the face of it, negative immersions should be a difficult condition
to check, since it applies to all immersed compact complexes $Y$.
However, there turns out to be a connection with a quantity defined by
Puder \cite{puder_primitive_2014}.
\begin{definition}\label{defn: Primitivity rank}
Let ${F}$ be a free group and $w\in{F}\setminus\{1\}$. The
\emph{primitivity rank} of $w$ is
\[
\pi(w)=\min\{\rk{\KK} \mid w\in \KK<{F}\mbox{ and }w\mbox{ not primitive in }\KK\}\in\mathbb{N}\cup\{\infty\}~,
\]
where, by convention, $\pi(w)=\infty$ if $w$ is primitive in
${F}$, since in that case $w$ is primitive in every subgroup $\KK$
containing $w$. Note that $\pi(1)=0$, since $1$ is an imprimitive
element of the trivial subgroup.
\end{definition}
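For orientation, we record two simple examples (cf.~\cite{puder_primitive_2014}):
\[
\pi(a^n)=1 \quad (n\geq 2)\qquad\mbox{and}\qquad \pi([a,b])=2~,
\]
since $a^n$ is an imprimitive element of the rank-one subgroup $\langle a\rangle$, whereas $[a,b]$ is not a proper power (so it is primitive in every cyclic subgroup containing it) but lies in the commutator subgroup of $\langle a,b\rangle$ and is therefore imprimitive there.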
In fact, negative immersions for the presentation complex ${X}$ of a
one-relator group $\GG={F}/\ncl{w}$ are governed by $\pi(w)$. Note
that $\pi(w)$ is computable -- see Lemma \ref{lem: Finitely many
w-subgroups}.
\begin{theorem}[Negative immersions for one-relator groups]\label{thm: Negative immersions}
The presentation complex of the one-relator group ${F}/\ncl{w}$
has negative immersions if and only if $\pi(w)>2$.
\end{theorem}
Theorem \ref{thm: Negative immersions} follows from Lemma \ref{lem:
Negative immersions}, which is a finer classification of immersions
from complexes with sufficiently large Euler characteristic.
Non-positive immersions constrains the subgroup structure of a group:
it follows that every finitely generated subgroup has finitely
generated second homology \cite[Corollary 1.6]{louder-wilton}. Indeed,
Wise conjectured that the fundamental groups of complexes with
non-positive immersions are \emph{coherent}, i.e.\ every finitely
generated subgroup is finitely presented.
Our next theorem asserts that negative immersions also constrains the
subgroup structure of a one-relator group. Recall that a group $G$ is
called \emph{$k$-free} if every subgroup generated by $k$ elements is
free.
\begin{theorem}[Low-rank subgroups of one-relator groups]
\label{thm: f.g. subgroups}
\label{thm: k-free}
Let $\GG={F}/\ncl{w}$ be a one-relator group with
$\pi(w)>1$. There is a finite collection $P_1,\dotsc,P_n$ of
one-ended, one-relator subgroups of $\GG$ with the following
property. Let $H<\GG$ be a finitely generated subgroup.
\begin{enumerate}[(i)]
\item If $\rk{H}<\pi(w)$ then $H$ is free.
\item If $\rk{H}=\pi(w)$ then $H$ is either free or conjugate
into some $P_i$.
\end{enumerate}
In particular, the one-relator group $\GG$ is $(\pi(w)-1)$--free.
\end{theorem}
The $P_i$ are defined in Subsection~\ref{subsection: primitivity rank
and w subgroups}. Theorem~\ref{thm: k-free} is a cousin of Magnus'
Freiheitssatz, which says that if $H$ is a proper free factor of a
free group ${F}$ and the natural map $H\to{F}/\ncl{w}$ is not
injective then $w$ is in fact conjugate into
$H$~\cite{magnus-freiheitssatz}. Theorem~\ref{thm: k-free} follows
immediately from Lemma \ref{lem: Universal property of w-subgroups},
which applies to homomorphisms from groups of low rank to $\GG$.
Taken together, Theorems \ref{thm: Negative immersions} and \ref{thm:
k-free} imply that one-relator groups with negative immersions have
a similar subgroup structure to hyperbolic groups.
\begin{corollary}\label{cor: Subgroups of NI groups}
Let $w$ be an element of a free group ${F}$. If the one-relator
  group $\GG={F}/\ncl{w}$ has negative immersions then $\GG$ contains
  no Baumslag--Solitar subgroups, and every abelian subgroup of
  $\GG$ is locally cyclic.
\end{corollary}
A famous question in geometric group theory asks whether or not a
group with a finite classifying space and without Baumslag--Solitar
subgroups must be hyperbolic \cite[Question
1.1]{bestvina_questions_????}. Lyndon's identity theorem implies that
presentation complexes of torsion-free one-relator groups are
classifying spaces, so in light of Corollary \ref{cor: Subgroups of NI
groups}, the case of one-relator groups with negative immersions is
of immediate interest.
\begin{conjecture}\label{conj: One-relator hyperbolization}
Every one-relator group with negative immersions is hyperbolic.
\end{conjecture}
A positive resolution of Conjecture \ref{conj: One-relator
hyperbolization} would resolve the conjugacy and isomorphism
problems for the class of one-relator groups with negative immersions.
Of course, one can also ask whether one-relator groups with negative
immersions have other conjectural properties of hyperbolic groups,
such as residual finiteness and surface subgroups.
Since $\pi(w)=1$ if and only if the corresponding one-relator group
has torsion, and these are known to be hyperbolic by the B. B. Newman
Spelling Theorem~\cite{newman-spelling,hruska-wise-spelling}, the
remaining case of interest is $\pi(w)=2$. In this case, our
techniques provide the following result.
\begin{corollary}\label{cor: NN associate properties}
Let $w$ be an element of a free group ${F}$. If $\pi(w)=2$ then
the one-relator group $\GG={F}/\ncl{w}$ contains a subgroup
$P<\GG$ with the following properties:
\begin{enumerate}[(i)]
\item $P$ is a two-generator, one-relator group;
\item every two-generator subgroup of $\GG$ is either free or
conjugate into $P$.
\end{enumerate}
\end{corollary}
We call the subgroup $P$ the \emph{peripheral subgroup} of $\GG$ (we
cannot currently prove that $P$ is an isomorphism invariant of $\GG$).
Corollary \ref{cor: NN associate properties} suggests the following
natural counterpart to Conjecture \ref{conj: One-relator
hyperbolization}.
\begin{conjecture}\label{conj: rel hyp}
Suppose $\pi(w)=2$. Then $\GG$ is hyperbolic relative to $P$.
\end{conjecture}
Conjectures~\ref{conj: One-relator hyperbolization} and~\ref{conj: rel
hyp} provide a conceptual explanation for the fact that all known
examples of pathological one-relator groups have two generators. We
are unable to say anything new about two-generator one-relator groups.
\subsection{The dependence theorem}
In 1959, Lyndon proved that a non-trivial commutator in a free group
$F$ cannot be expressed as a square \cite{lyndon-abc}. In this
paper, we view Lyndon's theorem as the first in a line of
\emph{dependence theorems} for free groups, which bound the rank of
the target of a homomorphism in which certain elements are forced
either to be conjugate or to have roots.
\begin{thm}[Lyndon, 1959]
Let $H=\langle a,b\rangle$, $v=[a,b]$, $n=2$. Consider the group
$\Delta=H*_{v=w^n}\langle w\rangle$. If $f\colon\pio{\Delta}\to F$
is a surjective homomorphism onto a free group then $\rk{F}=1$.
\end{thm}
Shortly afterwards, the hypotheses of Lyndon's theorem were weakened
to cover the case when $n\geq 2$; see, for
example,~\cite[Lemma~36.4]{baumslag-aspects}. The commutator
$v=[a,b]$ in Lyndon's theorem cannot be replaced by an arbitrary
element of the free group; indeed, adjoining a root to a generator $a$
exhibits a map in which the rank of the target group does not go down.
We need a hypothesis that excludes generators.
\begin{definition}
A malnormal collection of cyclic subgroups
$\{\langle v_j\rangle\}$ is called \emph{independent} if there
exists a free splitting $H=H'*\langle v_k\rangle$ of $H$, for
some $k$, with $v_j$ conjugate into $H'$ for $j\neq k$.
Otherwise, $\{\langle v_j\rangle\}$ is called \emph{dependent}.
\end{definition}
Note that the singleton $\{\langle v\rangle\}$ is dependent if and only
if $v$ is not primitive. Using the theory of pro-$p$ groups, Baumslag
generalized Lyndon's theorem to all dependent words $v$
\cite{baumslag}.
\begin{thm}[Baumslag, 1962]
Let $H$ be a free group, $v$ a dependent element of $H$ and
$n>1$. If $\Delta=H*_{v=w^n}\langle w\rangle$ and
$f\colon\pio{\Delta}\to F$ is a surjective homomorphism onto a free
group, then $\rk{F}<\rk{H}$.
\end{thm}
We now introduce the data for a more general dependence theorem. Let
$H_1,\ldots, H_l$ be free groups and $\{\langle v_{i,j}\rangle\}$
a malnormal collection of non-trivial cyclic subgroups of $H_i$.
For each $i$ and $j$, let $n_{i,j}$ be a positive integer. We
associate a graph of groups
$\Delta=\Delta(\{H_i\},\langle w \rangle,\{\langle
v_{i,j}\rangle\},\{n_{i,j}\})$ to these data as follows. There are $l$
vertices labelled by the $H_i$, arranged around one central vertex
labelled $\langle w\rangle$. For each $i$ and $j$, there is an edge
which attaches the subgroup $\langle v_{i,j}\rangle$ to the
index-$n_{i,j}$ subgroup of the vertex group
$\langle w\rangle$.\footnote{When $l=1$ or $m=1$ we will drop the
indices $i$ or $j$ as appropriate, to minimize notation.}
A dependence theorem relates these data to the rank of a possible free
image of $\pio{\Delta}$. For instance, Lyndon's theorem is the case
when $l=m=1$, $H=\langle a,b\rangle$, $v=[a,b]$ and $n=2$. A more
general theorem of this form was proved by the first author
\cite{adjoiningroots}.
\begin{thm}[Louder, 2013]
Let $H_1,\ldots, H_l$ be free groups,
$\{\langle v_{i,j}\rangle\}$ a malnormal collection of non-trivial
cyclic subgroups of $H_i$ and $n_{i,j}$ positive integers. Let
$\Delta$ be the associated graph of groups and let
  $f\colon \pio{\Delta}\to F$ be a surjective homomorphism to a free
group with $f\vert_{H_i}$ injective for each $i$. If the family
$\{\langle v_{i,j}\rangle\}$ is dependent for each $i$, and
$\sum_{i,j} n_{i,j}>1$, then
\[
\rk{F}-1<\sum_{i}(\rk{H_i}-1)~.
\]
\end{thm}
Baumslag's theorem, and hence Lyndon's, follows immediately. Indeed,
if $f\vert_{H}$ is not injective, the conclusion holds
automatically, and otherwise the theorem applies. A 1983 theorem of
Stallings in a similar spirit also follows {\cite[Theorem
5.3]{stallings-surfaces}}; we discuss Stallings' theorem in
Subsection \ref{subsec: stallings}. The main theorem of
\cite{adjoiningroots} is in fact more general than stated above, and
applies to arbitrary acylindrical graphs of free groups with cyclic
edge groups.
Another kind of dependence theorem constrains the integers
$n_{i,j}$ in terms of the ranks of the $H_i$. A prototypical
result here is provided by a theorem of Duncan and Howie, which
extends and quantifies Lyndon's theorem by bounding from below the
genus of a proper power \cite{duncan-howie}. The Duncan--Howie
theorem can be stated as follows.
\begin{thm}[Duncan--Howie, 1991]
Let $\Sigma$ be a compact, orientable surface of genus $g$ with one
boundary component $v=\partial\Sigma$ and let $H=\pio{\Sigma}$. If
$\Delta=H*_{v=w^n}\langle w\rangle$ and $f\colon\pio{\Delta}\to F$ is
a homomorphism onto a free group, $f(v)\neq 1$, then
$n \leq\rk{H}-1=2g-1$.
\end{thm}
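For instance, taking $g=1$, so that $H=\pio{\Sigma_{1,1}}=\langle a,b\rangle$ and $v=[a,b]$, the bound becomes
\[
n\leq 2g-1=1~,
\]
which recovers the strengthened form of Lyndon's theorem recorded above: a non-trivial commutator in a free group is not a proper power.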
Just as Lyndon's theorem was generalized from surfaces to more general dependent families of elements, so the Duncan--Howie theorem can be extended to arbitrary dependent malnormal families of elements. The following theorem, proved by the authors and also Helfer--Wise, answered Wise's \emph{$w$-cycles conjecture}, which was made in connection with the question of whether or not one-relator groups are coherent.
\begin{thm}[Louder--Wilton, Helfer--Wise]
Let $H$ be a free group, $\{\langle v_j\rangle\}$ a malnormal
collection of non-trivial cyclic subgroups of $H$ and $n_j$
positive integers. Let $\Delta$ be the associated graph of groups
  and let $f\colon \pio{\Delta}\to F$ be a homomorphism to a free
group with $f\vert_{H}$ injective. If the family
$\{\langle v_j\rangle\}$ is dependent then
$\sum_{j} n_j\leq \rk{H}-1$.
\end{thm}
Despite the fifty-nine years of work documented above, there are
simple examples that do not fall within the scope of these theorems.
\begin{example}\label{exa: Borromean rings}
Let $H$ be the fundamental group of the three-punctured torus
  $\Sigma_{1,3}$, let $u_1,u_2,u_3$ represent the boundary components,
and take $n_j=1$ for $j=1,2,3$. Then $\pio{\Delta}$ is the
fundamental group of the space obtained by identifying the boundary
components of $\Sigma_{1,3}$, and we may ask about the ranks of
possible free groups $F$ surjected by
$\pio{\Delta}$. The natural covering map
$\Sigma_{1,3}\to\Sigma_{1,1}$, where $\Sigma_{1,1}$ is the
once-punctured torus, induces a surjection onto the free group of
rank two.
The theorems of Stallings and the first author only predict that the
rank of a free image should be at most three, and so one is
naturally led to wonder whether there is a surjection from
$\pio{\Delta}$ to the free group of rank three that is injective on
$\langle u_i\rangle$.
\end{example}
The main tool developed in this paper is a dependence theorem for free
groups, which implies all of the above. It gives a precise
relationship between the integers $n_{i,j}$ and the ranks of the
free groups $H_i$ and $F$.
\begin{theorem}\label{introthm: main}
Let $H_1,\ldots,H_l$ be free groups,
$\{\langle v_{i,j}\rangle\}$ a malnormal collection of non-trivial
cyclic subgroups of $H_i$ and $n_{i,j}$ positive integers. Let
$\Delta$ be the associated graph of groups and let
  $f\colon \pio{\Delta}\to F$ be a surjective homomorphism to a free
group with $f\vert_{H_i}$ injective for each $i$. If the family
$\{\langle v_{i,j}\rangle\}$ is dependent for each $i$, then
\[
\sum_{i,j} n_{i,j}-1\leq
\sum_i(\rk{H_i}-1)-(\rk{F}-1)~.
\]
\end{theorem}
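To illustrate the numerology, consider Example~\ref{exa: Borromean rings}: granting that the three boundary subgroups form a malnormal, dependent family, we have $l=1$, $\rk{H}=4$ and $n_1=n_2=n_3=1$, so any surjection $f\colon\pio{\Delta}\to F$ that is injective on $H$ satisfies
\[
(1+1+1)-1\leq(\rk{H}-1)-(\rk{F}-1)=4-\rk{F}~,
\]
that is, $\rk{F}\leq 2$; this bound is attained by the covering map described in the example.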
As stated, Theorem \ref{introthm: main} does not strictly generalize
the Duncan--Howie theorem, since the map $f$ in Theorem \ref{introthm:
main} is required to be injective on the $H_i$. Theorem
\ref{maintheorem} relaxes the injectivity hypothesis to a hypothesis
of `diagrammatic irreducibility', which is weak enough to encompass
the Duncan--Howie theorem; see Corollary \ref{dhcorollary} for
details.
The connection between the dependence theorem and one-relator groups goes via an estimate on the Euler characteristic of the one-relator pushout of a branched map; the reader is referred to Definitions \ref{defn: Branched map} and \ref{def: one-relator pushout} for the relevant terms. A special case of the estimate, which is a direct consequence of Corollary \ref{monotonicity}, can be stated as follows.
\begin{corollary}
\label{cor: Intro monotonicity}
Let $f\colon Y\looparrowright X$ be an immersion from a compact, connected two-complex $Y$ to the presentation complex $X$ of a one-relator group $G=F/\ncl{w}$, with $w$ not a proper power. If $Y$ has no free faces then
\[
\chi(Y)\leq\chi(\hatt Y)~,
\]
where $\hatt{Y}$ is the one-relator pushout of $f$.
\end{corollary}
As well as the applications to non-positive immersions mentioned above, this estimate on Euler characteristics also gives new proofs of Magnus' Freiheitssatz and Lyndon's asphericity theorem; see Theorem \ref{thm: Magnus Lyndon}.
\subsection*{Acknowledgements}
The second author was funded by EPSRC Standard Grant EP/L026481/1.
\section{Graphs and graphs of graphs}
\subsection{Graphs}
\label{graphs}
A (directed) graph $G$ is a tuple $G=(V_G,E_G,\iota,\tau)$, where
$V_G$ and $E_G$ are sets, the vertices and edges of $G$, respectively,
and $\iota\colon E_G\to V_G$ and $\tau\colon E_G\to V_G$ are incidence
maps. When convenient we suppress the subscript $G$.
A \emph{morphism of graphs} is a map $f\colon G\to G'$ such that
$f(V_G)\subseteq V_{G'}$ and $f(E_G)\subseteq E_{G'}$, and such that
$f\circ\alpha=\alpha\circ f$ for $\alpha\in\{\iota,\tau\}$. A morphism
of graphs is an \emph{immersion} if, for distinct edges $e\neq e'$ with
$\alpha(e)=\alpha(e')$ for some $\alpha\in\{\iota,\tau\}$, we have $f(e)\neq f(e')$.
The valence of a vertex $v\in V_G$ is denoted $\nu(v)$.
A \emph{bipartite graph} is a graph
$B=(C_B\sqcup U_B,E_B,\sigma,\lambda)$, where the vertex set $V_B$ is
divided into two sets $C_B$ and $U_B$ with edge maps $\sigma$ and
$\lambda$ such that $\sigma(E)\subseteq C_B$ and
$\lambda(E)\subseteq U_B$. A bipartite graph is \emph{simple} if its
edges are determined by their endpoints, i.e., if
$\lambda(e)=\lambda(e')$ and $\sigma(e)=\sigma(e')$ then $e=e'$. In
this case we think of $E_B$ as lying in $C_B\times U_B$. As usual, we
will avoid writing subscripts when possible.
A \emph{morphism of bipartite graphs} is a morphism of graphs
$\alpha\colon B\to D$ such that $\alpha(C_B)\subseteq C_D$ and
$\alpha(U_B)\subseteq U_D$. Note that
$\alpha\circ\beta=\beta\circ\alpha$ for $\beta\in\{\lambda,\sigma\}$.
Given a graph $G$ the \emph{geometric realization} of $G$ is the
1-complex
\[
\real{G}=(V_G\sqcup
(E_G\times\left[-1,1\right]))/\{(e,-1)\sim\iota(e),(e,1)\sim\tau(e)\}~,
\]
and if $f\colon G\to G'$ is a morphism of graphs, the realization of
$f$ is the map
\begin{equation*}
\real{f}(x)=
\begin{cases}
f(x) & \mbox{ if } x\in V_G \\
(f(e),t) & \mbox{ if } x=(e,t)\in e\times\left[-1,1\right]
\end{cases}
\end{equation*}
We define $\pio{G}=\pio{\real{G}}$, and $f_*=\real{f}_*$ for a
morphism $f\colon G\to G'$.
\subsection{Graphs of graphs}
The construction below appears in various guises in the
papers~\cite{dicks-shnc,louder-mcreynolds,adjoiningroots}.
Let $h\colon {\Gamma}\to {{\Omega}}$ and $w\colon S\to {{\Omega}}$ be
morphisms of (directed) graphs. Recall that the \emph{fibre product}
is the graph
\[
{\Gamma}\times_{{\Omega}} S=\{(x,y)\in{\Gamma}\times S\mid h(x)=w(y)\}
\]
as in \cite{stallings-folding}. Let
$\rho\colon P\to{\Gamma}\times_{{\Omega}}S$ be a morphism from a graph
$P$ to the fiber product ${\Gamma}\times_{{\Omega}} S$. Let
$\lambda\colon P\to{\Gamma}$ and $\sigma\colon P\to S$ be the maps
induced by the projections ${\Gamma}\times S\to {\Gamma}$ and
${\Gamma}\times S\to S$, respectively.
These data determine a square complex as the graph of graphs
associated to ${{\Omega}}$, ${\Gamma}$, $S$, ${P}$,
$\lambda$, and $\sigma$: let
\[
\real{W}=(\real{{\Gamma}}\sqcup\real{S}\sqcup
(\real{P}\times\left[-1,1\right]))/\{(p,-1)\sim\lambda(p),(p,1)\sim\sigma(p)\}~.
\]
Alternatively, $\real{W}$ is the adjunction space
\[
\left(\real{{\Gamma}}\sqcup \real{S}\right)\cup_f\left(\real{P}\times\left[-1,1\right]\right)~,
\]
where
$f\colon\real{P}\times\{-1,1\}\to
\real{S}\sqcup\real{{\Gamma}}$ is defined by
$f(y,-1)=\lambda(y)$ and $f(y,1)=\sigma(y)$. The
realization $\real{P}$ sits horizontally in
$\real{W}$ as $\real{P}\times\{0\}$.
\subsection{Resolving}\label{ss: Resolving}
In Wise's terminology, the realization $\real{W}$ is a
$VH$-complex, and the maps $\sigma$ and $\lambda$ are
the attaching maps of the \emph{horizontal} graph-of-graphs structure
on $\real{W}$. We now turn our attention to the
\emph{vertical} graph-of-spaces structure on
$\real{W}$.
The vertical vertex-graphs are the components of the graph with vertex
set $V_{{\Gamma}}\sqcup V_S$, edge set $V_{P}$, and edge maps
$\lambda$ and $\sigma$ (suitably restricted). The vertical
edge-graphs are the components of the graph with vertex set
$E_{{\Gamma}}\sqcup E_S$ and edge set $E_{P}$, and edge maps
again $\lambda$ and $\sigma$ (suitably restricted). We collect these
together into a bipartite graph ${W}$, whose
realization is the disjoint union of the realizations of the vertical
vertex- and edge-graphs of ${{W}}$. That is,
${W}$ is the bipartite graph with vertex set
$V_{W}=V_{{\Gamma}}\sqcup E_{{\Gamma}}\sqcup V_S\sqcup
E_S$, edge set
$E_{W}=V_{P}\sqcup E_{P}$, and edge
maps induced by $\lambda$ and $\sigma$.
The maps of graphs $S,{\Gamma},{P}\to {{\Omega}}$ determine a
map (of sets) $f\colon {W}\to {{\Omega}}$, which in
turn extends to a map of realizations that sends vertical
vertex-graphs to vertices and vertical edge-graphs to midpoints of
edges.
We now \emph{resolve} this map, by factoring it through the underlying
graph ${{\Gamma_u}}$ of the vertical decomposition of ${{W}}$. That is,
${{\Gamma_u}}$ is the graph defined by letting $V_{{{\Gamma_u}}}$ be the set of
connected components of $f^{-1}(V_{{\Omega}})$ and $E_{{{\Gamma_u}}}$ be the
set of connected components of $f^{-1}(E_{{\Omega}})$. The map
$f\colon {W}\to {{\Omega}}$ factors through a map
$m\colon {W}\to {{\Gamma_u}}$, and there is an induced morphism of graphs
$l\colon {{\Gamma_u}}\to {{\Omega}}$.
\[
{W}\stackrel{m}{\longrightarrow} {{\Gamma_u}}\stackrel{l}{\longrightarrow} {{\Omega}}
\]
The graph ${{\Gamma_u}}$ is the pushout of ${\Gamma}$ and $S$ along ${P}$
in the category of (directed) graphs. There are injective maps of sets
$i_S\colon S\into {W}$ and $i_{{\Gamma}}\colon {\Gamma}\into {W}$,
and we will also denote by $w$ the composition
$m\circ i_S\colon S\to {{\Gamma_u}}$, even though, strictly speaking $w$ is
a map from $S$ to ${{\Omega}}$. We denote by ${{\Gamma_u^I}}$ the graph
obtained by Stallings folding the map $l\colon {{\Gamma_u}}\to {{\Omega}}$
to an immersion. Note that $\chi({{\Gamma_u^I}})\geq\chi({{\Gamma_u}})$.
\begin{figure}[ht]
\centering
\begin{tikzcd}
& & S\arrow[swap]{dr}{i_S} \arrow{dr} \arrow{drr}{w} \arrow[bend left=13]{drrrr}{w}\\
{P}\arrow{r}{\rho} \arrow[bend left=13]{urr}{\sigma} \arrow[bend right=13, swap]{drr}{\lambda} & {\Gamma}\times_{{\Omega}}S\arrow[swap]{ur}{\pi_S} \arrow{dr}{\pi_{{\Gamma}}} & & {W} \arrow{r}{m} & {{\Gamma_u}} \arrow{r}\arrow[bend left=16]{rr}{l} & {{\Gamma_u^I}}\arrow{r}& {{\Omega}}\\
& & {\Gamma} \arrow{ur}{i_{{\Gamma}}} \arrow{urr} \arrow{urr} \arrow[bend right=13, swap]{urrrr}{h}
\end{tikzcd}
\caption{The graphs ${\Omega}$, $S$, ${\Gamma}$, and $P$ are given,
and $W$, ${\Gamma_u}$ and, ${\Gamma_u^I}$ are constructed.}
\label{fig: big diagram}
\end{figure}
For each $x\in {{\Gamma_u}}$ denote by ${W}_{x}$ the
(connected) graph $m^{-1}(x)$; ${W}_{x}$ is
bipartite, with vertices $S_{x}=S\cap {W}_{x}$
and ${\Gamma}_{x}={\Gamma}\cap {W}_{x}$, and edges
${P}_{x}={P}\cap {W}_{x}$. The incidence maps
$\iota$ and $\tau$ from $S$, ${\Gamma}$, and ${P}$ induce maps
$\tau\colon {W}_e\to {W}_{\tau(e)}$ and
$\iota\colon {W}_e\to {W}_{\iota(e)}$.
The natural map $f\colon {W}\to {{\Omega}}$ factors
through $ m\colon {W}\to {{\Gamma_u}}$. The points of
$ {{\Gamma_u}}$ are in bijection with the connected components of the
fibers of the map $f$. The graph ${W}$ expresses
$\real{W}$ as a graph of graphs over
$\real{{\Gamma_u}}$ as follows.
\[
\real{W} = \left(\coprod_{v\in V_{{\Gamma_u}}}
\real{W}_v\sqcup \coprod_{e\in E_{{\Gamma_u}}}(
\real{W}_e\times\left[-1,1\right])\right)/\{(x,-1)\sim\real{\iota}(x),(x,1)\sim\real{\tau}(x)\}~.
\]
The vertical (cellular) graphs $\real{W}_e$ are two-sided,
transversely oriented, and sit vertically in $\real{W}$ as
$\real{W}_e\times\{0\}$. See Figure~\ref{hortovert}. The horizontal
and vertical one-cells in $\real{W}$ are the one-cells of
$\real{{\Gamma}}$ and $\real{S}$, and of $\sqcup \real{W}_v$,
respectively. The connected components of the horizontal graph
$\real{P}\times\{0\}$ are the \emph{horizontal hyperplanes} and the
$\real{W}_e\times\{0\}$ are the \emph{vertical hyperplanes}. The
homomorphism
$\real{f}_*\colon\pio{\real{W}}\to\pio{\real{{\Omega}}}$ factors
through a surjection
$\real{m}_*\colon\pio{\real{W}}\twoheadrightarrow\pio{\real{{\Gamma_u}}}$.
\begin{figure}[ht]
\centerline{
\includegraphics[width=.7\textwidth]{squares2.pdf}
}
\caption{$S$ and ${\Gamma}$ are horizontal in ${W}$,
and we resolve the map ${W}\to {{\Omega}}$ by
passing to connected components of preimages of points. The graphs
that result are vertical, and dual to the horizontal graphs. }
\label{hortovert}
\end{figure}
As usual, $\chi(G)$ will denote the Euler characteristic of a graph
$G$. The graph ${W}$, however, plays a special role,
as it combinatorially encodes all the important features of the graph
of graphs $\real{W}$. This motivates the following
definition.
\begin{definition}
The \emph{characteristic} of ${W}$ is the alternating sum
\begin{align*}
\chi({W}) &= \sum_{v\in V_{{\Gamma_u}}}\chi({W}_v)-\sum_{e\in E_{{\Gamma_u}}}\chi({W}_e)\\
&= \chi({\Gamma})+\chi(S)-\chi({P})~.
\end{align*}
\end{definition}
Note that $\chi({W})$ is the Euler characteristic of
$\real{W}$ as a CW--complex.
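To see this, recall that $\real{W}$ is obtained by gluing $\real{P}\times\left[-1,1\right]$ to $\real{{\Gamma}}\sqcup\real{S}$ along $\real{P}\times\{-1,1\}$, so that
\[
\chi(\real{W})=\chi(\real{{\Gamma}})+\chi(\real{S})+\chi(\real{P})-2\chi(\real{P})=\chi({\Gamma})+\chi(S)-\chi({P})~.
\]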
\begin{figure}
\centerline{
\includegraphics[width=.8\textwidth]{resolving3.pdf}
}
\caption{Schematic of ${W}$. The realization $\real{W}$ is the
graph of spaces obtained by gluing the ends of
$\real{P}\times I$ to $\real{{\Gamma}}$ and $\real{S}$ using
$\lambda$ and $\sigma$. We think of $\real{{\Gamma}}$ and
$\real{S}$ as running through $\real{W}$ horizontally. The
vertical graph-of-graphs structure on $\real{W}$ is
cartoonishly depicted above, with vertex spaces $\real{W}_v$
and edge spaces $\real{W}_e$.}
\label{schematic}
\end{figure}
\subsection{The dependence theorem}
\label{subsec: dependence}
The formalism we have developed is applied in the following setting.
Let ${F}$ and $H_1,\ldots, H_l$ be free groups and
${{\Omega}},{\Gamma}_1,\ldots, {\Gamma}_l$ graphs with
${F}=\pio{{\Omega}}$ and $H_i=\pio{{\Gamma}_i}$ for
all $i$. Let $h_i\colon H_i\to {F}$ be
homomorphisms of free groups, realized by a morphism of graphs
$h\colon {\Gamma}\to {{\Omega}}$, where
${\Gamma}=\coprod_i {\Gamma}_i$.
As in the introduction, we fix malnormal families of cyclic subgroups
$\{\langle v_{i,j}\rangle\}$ of $H_i$, realized by morphisms of
graphs $\lambda_i\colon {P}_i\to {\Gamma}_i$, where ${P}_i$ is a
disjoint union of circles. Let ${P}=\coprod_i {P}_i$ and
$\lambda=\coprod \lambda_i\colon{P}\to {\Gamma}$. In the setting of a
dependence theorem, there is a map $w\colon S\to {{\Omega}}$, where
$S$ is a circle, which represents the generator of a cyclic subgroup
of ${F}$ into which the $h_i(v_{i,j})$ are all conjugate. In
particular, the map $h\circ\lambda$ also factors as $w\circ\sigma$,
where $\sigma\colon {P}\to S$ is a map of graphs.
The link between the dependence theorem and immersions into
one-relator complexes can be explained as follows. Suppose
$\real w\colon \real{S}\to \real{{\Omega}}$ is the attaching map
defining the presentation complex $X$ of a one-relator group, and that the
map $\lambda\colon \real{P}\to \real{{\Gamma}}$ is the
coproduct of the attaching maps defining a complex $Y$ that maps to
$X$. In Section~\ref{subsec: one-relator pushouts} we will see that
the realization of the pushout ${\Gamma_u}$ of ${\Gamma}$ and $S$
along $P$ is the one-skeleton of a ``best'' one-relator
complex $\hatt Y$ that the map $Y\to X$ factors through. The
dependence theorem implies that when $Y$ cannot be simplified in an
obvious way, i.e. when $Y$ doesn't have any free faces, then
$\chi(Y)\leq\chi(\hatt Y)$.
\begin{definition}\label{defn: Boundary}
Let $S$, ${\Gamma}$, ${P}$, ${{\Omega}}$,
${W}$, and ${{\Gamma_u}}$ be as above. The
\emph{boundary} of ${W}$ is
\[
\partial{W}=
\{e\in E_{{\Gamma}}\mid \vert\lambda^{-1}(e)\vert = 1\}~.
\]
The boundary of
$\real{W}$ is
\[
\partial\real{W}=
  \bigcup_{e\in\partial{W}}
\overline{e\times(-1,1)}
\subseteq
\real{{\Gamma}}~.
\]
The boundary of a two-complex $Y$ is the closure of its free faces.
\end{definition}
The boundary of $W$ consists of those edges of ${\Gamma}$ that are
hit by precisely one element of ${P}$. By construction
$\partial\real{W}=\partial Y$. When ${W}$ has nonempty
boundary, the complex $Y$ can be simplified by a collapse, and we call
this circumstance `independent' (since, when $S$ is a circle, it
corresponds to the group-theoretic notion of independence given in the
introduction). We will also be interested in a strengthening of this,
in which the whole image of $S$ in ${{\Gamma_u}}$ (and therefore in
${\Omega}$) is covered at least twice by the boundary.
\begin{definition}\label{defn: Dependent}
  The map $\lambda\colon P\to{\Gamma}$ is \emph{independent} if
  $\partial{W}\neq \varnothing$; otherwise, it is called
  \emph{dependent}. The map $\lambda\colon P\to{\Gamma}$ is
  \emph{strongly independent} if, for all $e\in w(E_S)$,
  $\vert\partial{W}\cap{W}_e\vert\geq 2$; otherwise, it is
called \emph{weakly dependent}.
\end{definition}
When considering a map of complexes $Y\to X$, we want to assume
that the attaching maps are immersions and that no adjacent pair of
2-cells of $Y$ cancel, in the sense that they map to opposite copies
of the 2-cell of $X$. These assumptions correspond to the hypothesis
that $W$ is `diagrammatically irreducible', which is motivated by the
notion of a reduced disc diagram.
\begin{definition}
We say that $W$ is \emph{diagrammatically irreducible} if the
restriction
$\rho\vert_{E_{P}}\colon E_{P}\to E_{{\Gamma}}\times E_S$ is an
embedding and the maps $\sigma$, $w$ are immersions.
\end{definition}
We record some consequences of diagrammatic irreducibility in the
following lemma.
\begin{lemma}
\label{lem: DI properties}
Let ${{\Omega}}$, ${\Gamma}$, $S$, ${P}$, ${W}$, and ${{\Gamma_u}}$ be as above.
\begin{enumerate}[(i)]
\item If $\rho\vert_{E_{P}}$ is injective then ${W}_e$ is a
simple bipartite graph for all $e\in E_{{\Gamma_u}}$.
\item If $\sigma\colon {P}\to S$ and $w\colon S\to {{\Omega}}$ are
immersions then $\alpha\colon {W}_e\to {W}_{\alpha(e)}$ maps
${P}_e$ injectively to ${P}_{\alpha(e)}$ and $S_e$ injectively
to $S_{\alpha(e)}$.
\end{enumerate}
\end{lemma}
The proof is left as an easy exercise. If ${W}$ is
diagrammatically irreducible and $h\colon {\Gamma}\to {{\Omega}}$ is
an immersion then the maps
$\alpha\colon {W}_e\to {W}_{\alpha(e)}$ are embeddings,
although we will not use this fact.
\begin{lemma}\label{lem: Circle valence}
Let ${{\Omega}}$, ${\Gamma}$, $S$, ${P}$, ${W}$, and ${{\Gamma_u}}$ be
as above. If $S$ is connected and $\sigma\colon {P}\to S$ is a
covering map then, for each
$s\in S_{x}\subset {W}_{x}$,
$\nu(s)=\deg(\sigma)$.
\end{lemma}
We can now state the dependence theorem in the form in which we prove
it.
\begin{theorem}[Dependence theorem]
\label{maintheorem}
Let ${{\Omega}}$, ${\Gamma}$, $S$, ${P}$, ${W}$, and ${{\Gamma_u}}$ be
as above. Suppose further that ${W}$ is diagrammatically
irreducible, $S$ is a circle, $w\colon S\to {{\Omega}}$ is
indivisible, and that $\sigma$ is a covering map. If
$\lambda\colon {P}\to {\Gamma}$ is weakly dependent then
\[
\chi({\Gamma})+\deg(\sigma)-1\leq\chi({{\Gamma_u}})~.
\]
\end{theorem}
Usually, following~\cite{stallings-folding}, subgroups of free groups
are represented by immersions of connected graphs, so for the purposes
of generalizing the theorems of Baumslag and Stallings it is safe to
restrict to immersions of connected ${\Gamma}\to{{\Omega}}$. However,
in order to strengthen the Duncan--Howie theorem we need to allow maps
that are not immersions.
If $W$ is diagrammatically irreducible and weakly dependent then
$\chi({{\Gamma_u}})\leq -1$, and in this case Theorem \ref{maintheorem}
implies the inequality
\[
\chi({\Gamma})+\deg(\sigma)\leq 0~,
\]
which is precisely Wise's $w$-cycles conjecture
\cite{helfer-wise,louder-wilton}.
\begin{question}
Are there ${W}$ as above, with $\lambda$ dependent, such that
\[
\chi({\Gamma})+\deg(\sigma)-1=\chi({{\Gamma_u}})
\]
for all $\deg(\sigma)\geq 2$ and $\chi({{\Gamma_u}})\leq -1$? For
$\lambda$ weakly dependent?
\end{question}
We next explain how Theorem \ref{maintheorem} implies Theorem
\ref{introthm: main}.
\begin{proof}[Proof of Theorem \ref{introthm: main}]
We may assume that $f(w)$ is indivisible in $F$: if there are
  $v\in F$, $k\geq 1$, such that $f(w)=v^k$ then, since $f$ is
surjective, if
\[
\sum_{i,j} kn_{i,j}-1\leq
\sum_i(\rk{H_i}-1)-(\rk{F}-1)~,
\]
then certainly
\[
\sum_{i,j} n_{i,j}-1\leq
\sum_i(\rk{H_i}-1)-(\rk{F}-1)~.
\]
We take ${{\Omega}}$ to be a rose with ${F}=\pio{{\Omega}}$ a free
group, ${\Gamma}$ to be a graph immersing into ${{\Omega}}$, for
which the components have fundamental groups $H_i$, and
$\lambda\colon {P}\to {\Gamma}$ an immersion of a disjoint union of
circles into ${\Gamma}$ that represent the family $\{v_{i,j}\}$. As
explained above, these factor through a common circle
$w\colon S\to {{\Omega}}$ which induces the maps
$\sigma\colon {P}\to S$ and $\lambda\colon {P}\to {\Gamma}$. We
may therefore construct the adjunction space ${W}$. The map
$\pio{\Delta}=\pio{\real{W}}\to F$ is surjective and factors
through $m_*\colon \pio{W}\to \pio{{\Gamma_u}}$ so
$\chi(F)=\chi({{\Omega}})\geq\chi({{\Gamma_u}})$.
The graph ${W}$ is diagrammatically irreducible since, for each
$i$, the subgroups $\{\langle v_{i,j}\rangle\}$ are a malnormal
family. As observed above, because the $\{\langle v_{i,j}\rangle\}$
are dependent, it follows that the map
$\lambda\colon {P}\to {\Gamma}$ is dependent, in particular weakly
dependent, and Theorem \ref{maintheorem} applies. After noting that
\[
\deg(\sigma) = \sum_{i,j} n_{i,j}~,
\]
that $\chi({{\Omega}})=1-\rk{F}$ and that
$\chi({\Gamma})=\sum_i(1-\rk{H_i})$, the result follows.
\end{proof}
\section{One-relator pushouts}
\label{subsec: one-relator pushouts}
We now explain the link between the dependence theorem and maps to
one-relator complexes. We work in the category of combinatorial
2-complexes and branched maps.
\begin{definition}\label{defn: Branched map}
Let $D\subseteq\mathbb{C}$ be the unit disc in the complex plane,
and let $p_n\colon D\to D$ be the map defined by $z\mapsto z^n$. A
cellular map of two-complexes $f\colon Y\to X$ is a \emph{branched
map} if it satisfies the following conditions:
\begin{enumerate}[(i)]
\item $f$ restricts to a homeomorphism on each 1-cell of $Y$;
\item $f$ induces an immersion on the link of each 0-cell of $Y$;
\item for each 2-cell $e$ of $Y$, there is a 2-cell $e'$ of $X$ so
that $f(e)\subseteq e'$, and $e$ and $e'$ can be parametrized so
that $f\vert_e$ agrees with some $p_n$.
\end{enumerate}
\end{definition}
Let $f\colon Y\to X$ be a branched map, and let $e$ be a two-cell of
$Y$, with $e'$ the corresponding two-cell in $X$. The \emph{degree of
branching} of $e$ is the number $n_e$ such that $e\to e'$ is
parametrized as $z\mapsto z^{n_e}$. Clearly
\begin{equation}
\sum(n_e-1)=\deg(\sigma)-\#\{e\mid e\mbox{ is a two-cell in }Y\}~.\label{cells}
\end{equation}
\begin{remark}
A cellular map $f\colon Y\to X$ which is combinatorial on
1-skeleta and induces an immersion on links of vertices is homotopic
to a branched map.
\end{remark}
Let $X$ be the presentation complex of a one-relator group and let
$f\colon Y\looparrowright X$ be a branched map from a compact connected
2-complex $Y$ with at least one 2-cell to $X$. We consider the poset
$\mathcal{O}(Y,X)$ defined as follows. The objects are diagrams
\[
Y\fontop{f_Z}Z\fontop{g_Z}X~,
\]
where both $f_Z$ and $g_Z$ are cellular, $g_Z$ maps the two-cell of
$Z$ homeomorphically to the two-cell of $X$, $f_Z$ is surjective, and
$f=g_Z\circ f_Z$. We usually abuse notation and use $Z$ to denote the
diagram. For $Z_1,Z_2\in\mathcal{O}(Y,X)$, we write $Z_2\leq Z_1$ if
there is a factorization of $f$
\[
Y\fontop{f_{Z_1}}Z_1\fontop{h} Z_2\fontop{g_{Z_2}}X
\]
so that $g_{Z_1}=g_{Z_2}\circ h$ and $f_{Z_2}=h\circ f_{Z_1}$.
\begin{lemma}\label{lem: One-relator pushout}
The poset $\mathcal{O}(Y,X)$ has finitely many objects and has a
unique maximal object $\hatt{Y}$.
\end{lemma}
\begin{proof}
Consider an object $Z$. Since the map $f_Z$ is surjective, $Z$ is
determined by the collections of points of $Y$ that are identified
by $f_Z$. Since $Y$ is compact, it follows that there are only
finitely many elements in $\mathcal{O}$. Choose $Z_1$ and $Z_2$, and
let $W$ be the image of the map $Y\to Z_1\times_X Z_2$; $W$ is a
one-relator complex which dominates both $Z_1$ and $Z_2$. Since
$\mathcal{O}$ is finite it follows that there is a unique maximal
element.
\end{proof}
\begin{definition}[One-relator pushout]
\label{def: one-relator pushout}
$\hatt Y$ is the \emph{one-relator pushout} of $Y$. The
\emph{immersed one-relator pushout} $\hatt{Y}^I$ is the result of
folding the 1-skeleton of $\hatt{Y}$ to an immersion to the
1-skeleton of $X$.
\end{definition}
In the context of one-relator complexes, the dependence theorem gives
a relation between the Euler characteristics of $Y$ and $\hatt Y$,
which we explain next. Given $Z\in\mathcal{O}$, we have a diagram as
follows.
\begin{center}
\begin{tikzcd}
{P}\arrow{r}{\sigma}\arrow{d}{\lambda} & S\arrow{d}{w} \\
Y^{(1)} \arrow{r}{h}& Z^{(1)}
\end{tikzcd}
\end{center}
Here, ${P}$ is a collection of circles, $\lambda$ represents the
attaching maps for the two cells of $Y$, and $w$ is the attaching map
for the two-cell of $Z$. The map $\sigma$ restricts to a degree-one
map on each connected component of ${P}$, so $\deg(\sigma)$ is the
number of two-cells of $Y$. Setting ${\Gamma}=Y^{(1)}$, there is a
natural map from ${{\Gamma_u}}$, the pushout of ${\Gamma}$ and $S$ along
${P}$, to $Z^{(1)}$. Letting $w$ be the lift of $w$ to ${{\Gamma_u}}$, the
natural map from the one-relator complex ${{\Gamma_u}}\cup_w D\to\hatt Y$ is
an isomorphism since $\hatt Y$ is maximal.
The boundary $\partial Y$, as in Definition \ref{defn: Boundary}, is
the closure of the free faces of $Y$, and $\partial{W}=\partial
Y$. We can now state the dependence theorem's consequence in this
context.
\begin{corollary}[One-relator pushout inequality]
\label{monotonicity}
Let $f\colon Y\to X$ be a branched map from a compact connected one-
or two-complex to a one-relator complex $X=\real{{\Omega}}\cup_wD$,
with $w$ not a proper power. If the restriction
$f\vert_{\partial Y}\colon\partial Y\to w(S)$ is not at least
two-to-one then
\[
\chi(Y)+\sum(n_e-1)\leq\chi(\hatt Y)~.
\]
\end{corollary}
\begin{proof}
By~(\ref{cells}),
\[
\chi(Y)+\sum(n_e-1)=\chi({\Gamma})+\deg(\sigma)~,
\]
  so if $\chi(Y)+\sum(n_e-1)>\chi(\hatt Y)$ then
  $\chi({\Gamma})+\deg(\sigma)>\chi({{\Gamma_u}})+1$ and, by the dependence
  theorem, for each edge $e\in w(E_S)\subseteq E_{{\Gamma_u}}$,
  $\vert\partial {W}\cap {W}_e\vert\geq 2$. The map
  $\partial{W}\to w(S)$ is therefore at least two-to-one, and since
  $\partial Y=\partial\real{W}$, so is $f\vert_{\partial Y}$.
\end{proof}
Clearly $\chi(\hatt Y^I)\geq\chi(\hatt Y)$ since the one-skeleton
${{\Gamma_u^I}}$ of $\hatt Y^I$ is obtained from the one-skeleton ${\Gamma_u}$ of
$\hatt Y$ by folding.
\section{Proof of the dependence theorem}
\subsection{Stackings}
As well as the adjunction space, the second tool that we will use is
the notion of a \emph{stacking} from \cite{louder-wilton}. In that
paper, a stacking of a map $\real{w}\colon\real{S}\to\real{{\Omega}}$
was defined to be a lift of $w$ to an embedding into
$\real{{\Omega}}\times\mathbb{R}$ (where $\mathbb{R}$ denotes the real
numbers). Here, we use an equivalent, combinatorial version of the
definition. Given an injection of sets $\alpha\colon C\to D$ and a
total order $\leq$ on $D$, we let $\alpha^*(\leq)$ denote the pullback
order on $C$.
\begin{definition}[Stacking]
\label{def:stacking}
Let $w\colon S\looparrowright {{\Omega}}$ be an immersion of graphs. A
\emph{stacking} of $w$ is a collection of orders $\leq_x$ on
$w^{-1}(x)$ for $x\in w(S)$, such that
$\alpha^*(\leq_{\alpha(e)})=\leq_e$ for each $e\in w(E_S)$ and
$\alpha\in\{\iota,\tau\}$.
\end{definition}
\begin{figure}[ht]
\centering
\includegraphics[width=.4\textwidth]{uuvuvvUUVUVV.pdf}
\caption{A stacking gives an inclusion
$\widetilde{\real{w}}\colon \real{S}\into
\real{{\Omega}}\times\mathbb{R}$ and vice-versa. This is a picture
of a stacking of the word $w=uuvuvvUUVUVV$ in the rose with two
petals. This word can be written as a commutator in two
inequivalent ways (see~\cite{bestvina-feighn-counting}).}
\label{stackingfigure}
\end{figure}
\begin{lemma}[{Loo-roll lemma~\cite[Lemma~17]{louder-wilton}}]
\label{stackings}
Any indivisible immersion $w\colon S\looparrowright {{\Omega}}$ from a
circle to a graph has a stacking.
\end{lemma}
For the rest of the paper we will write realizations in normal rather
than boldface font.
\subsection{Computing the characteristic of ${W}$}
In this subsection, we observe that Theorem \ref{maintheorem} can be
proved by estimating the Euler characteristic of a certain chain
complex $\mathcal{C}$ naturally associated to ${W}$. All
coefficients are in a fixed but arbitrary field.
Let ${W}$ be as in Subsection~\ref{graphs}, not necessarily
diagrammatically irreducible. Considering the vertical decomposition of
${{W}}$ as a graph of graphs over ${\Gamma_u}$, and using the fact that
vertex and edge spaces are connected, the Mayer--Vietoris sequence
implies that
\[
\chi({W})=\chi({{\Gamma_u}})-\chi(\mathcal{C})
\]
where $\mathcal{C}$ is the chain complex
\[
0\to\bigoplus_{e\in E_{{{\Gamma_u}}}}H_1({W}_e)\to\bigoplus_{v\in V_{{{\Gamma_u}}}}H_1({W}_v)\to 0~.
\]
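Unwinding the definitions, since each ${W}_v$ and ${W}_e$ is connected this amounts to the computation
\[
\chi({W})=\sum_{v\in V_{{\Gamma_u}}}\big(1-\dim H_1({W}_v)\big)-\sum_{e\in E_{{\Gamma_u}}}\big(1-\dim H_1({W}_e)\big)=\chi({{\Gamma_u}})-\chi(\mathcal{C})~.
\]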
Clearly $\chi({W})=\chi({\Gamma})+\chi(S)-\chi({P})$. In
Theorem~\ref{maintheorem}, $S$ is a circle, ${P}$ is a union of
circles, so $\chi(S)=\chi({P})=0$ and $\chi(W)=\chi({\Gamma})$.
Rearranging, $\chi({\Gamma})+\chi(\mathcal{C})=\chi({{\Gamma_u}})$, and it
suffices to show that $\chi(\mathcal{C})\geq\deg(\sigma)-1$ whenever
$W$ is diagrammatically irreducible and
$\lambda\colon {P}\to {\Gamma}$ is weakly dependent.
\subsection{Fiberwise filtering ${W}$}
\label{fiberwisefiltering}
Let ${W}$ be diagrammatically irreducible and consider the chain
complex $\mathcal{C}$
indexed by the graph ${{\Gamma_u}}$. In this subsection we use stackings to
replace $\mathcal{C}$ by a pair of chain complexes $\mathcal{C}^{\pm}$,
indexed by $S$, whose characteristics are easy to compute.
Let
${W}_{x}=(S_{x}\sqcup{\Gamma}_{x},{P}_{x},\lambda,\sigma)$
be a (bipartite) vertex or edge graph of ${W}$, where
$S_{x}={W}_{x}\cap S$,
${\Gamma}_{x}={W}_{x}\cap {\Gamma}$,
${P}_{x}={W}_{x}\cap {P}.$ For each
$s\in S_{x}$, let $P_s=\sigma^{-1}(s)$.
Suppose that $w\colon S\to {{\Omega}}$ has a stacking, which we pull
back to a stacking of $w\colon S\to {{\Gamma_u}}$. For $s\in S_{x}$
define
\[
{W}_{x}^+(s)={\Gamma}_{x}\cup\{t\mid t\leq_{x} s\}\cup\{p\mid\sigma(p)\leq_{x} s\}
\]
and
\[
{W}_{x}^-(s)={\Gamma}_{x}\cup\{t\mid s\leq_{x} t\}\cup\{p\mid s\leq_{x} \sigma(p) \}~.
\]
Let $s+1$ be the successor of $s$ and $s-1$ be the predecessor of $s$,
when defined, and interpret ${W}_{x}^+(s-1)$ as
${\Gamma}_{x}$ if $s$ is minimal and
${W}_{x}^-(s+1)$ as ${\Gamma}_{x}$ if $s$ is
maximal. The order $\leq_{x}$ gives two filtrations of
${W}_{x}$ by the sublevel sets
${W}_{x}^{\pm}(s)$.
\begin{align}
{\Gamma}_{x}\subsetneq\dotsb\subsetneq {W}^{+}_{x}(s-1)\subsetneq {W}^{+}_{x}(s)\subsetneq
{W}^{+}_{x}(s+1)\subsetneq\dotsb\subsetneq {W}_{x} \label{upfiltration}
\end{align}
and
\begin{align}
{\Gamma}_{x}\subsetneq \dotsb \subsetneq {W}^{-}_{x}(s+1)\subsetneq {W}^{-}_{x}(s)\subsetneq
{W}^{-}_{x}(s-1)\subsetneq\dotsb\subsetneq {W}_{x}~. \label{downfiltration}
\end{align}
For $s\in S_{x}$, define
\[
A^{\pm}(s)=H_1({W}_{x}^{\pm}(s))/H_1({W}_{x}^{\pm}(s\mp 1))~.
\]
The quotient group $A^{\pm}(s)$ represents the additional first
homology gained when going from ${W}_{x}^{\pm}(s\mp 1)$ to
${W}_{x}^{\pm}(s)$. See Figure~\ref{filtration}. Summing
over $s\in S_{x}$, we have
\begin{align}
H_1({W}_{x})\cong\bigoplus_{s\in S_{x}}A^{\pm}(s)~.\label{filterdecomposition}
\end{align}
Since
$\alpha\colon {W}_e\to {W}_{\alpha(e)}$
is injective on $S$--vertices and $\alpha^*(\leq_{\alpha(e)})=\leq_e$,
there are induced restrictions
\[
{W}_e^{\pm}(s)\to {W}_{\alpha(e)}^{\pm}(\alpha(s))
\]
such that
\[
\alpha({W}_e^{\pm}(s\mp 1))\subseteq {W}_{\alpha(e)}^{\pm}(\alpha(s\mp
1))\subseteq {W}_{\alpha(e)}^{\pm}(\alpha(s)\mp 1)~.
\]
Because ${W}$ is diagrammatically irreducible, each
$\alpha\colon {P}_e\to {P}_{\alpha(e)}$ is injective, so
$\alpha\colon {P}_s\to {P}_{\alpha(s)}$ is as well, so there are induced
injections
\begin{align}
\alpha\colon A^{\pm}(s)\into A^{\pm}(\alpha(s))~.\label{boundaryonelevel}
\end{align}
Again, summing over $s\in S_{x}$, there are maps
\begin{align}
\alpha\colon\bigoplus_{s\in S_e}A^{\pm}(s)\into\bigoplus_{s\in S_{\alpha(e)}}A^{\pm}(s)~. \label{bipartiteinclusion}
\end{align}
\begin{figure}[ht]
\centerline{
\includegraphics[width=.6\textwidth]{edgemap.pdf}
}
\caption{The map $\alpha\colon {W}_e\to {W}_{\alpha(e)}$ is
injective on ${P}_e$ and induces an injection
$A^{\pm}(s)\into A^{\pm}(\alpha(s)).$ In this example two vertices
of ${\Gamma}_e$ are identified in ${\Gamma}_{\alpha(e)}$. The map
$\alpha$ respects the sublevelset filtrations (\ref{upfiltration})
and (\ref{downfiltration}). Here we have drawn $S_{x}$ as
sitting ``above'' the ${\Gamma}_{x}$ so this picture
should be thought of as illustrating the
filtration~(\ref{upfiltration}).}
\label{inclusion}
\end{figure}
We now define a pair of auxiliary chain complexes $\mathcal{C}^{\pm}$
by replacing each $H_1({W}_{x})$ in
$\mathcal{C}$ via the isomorphism (\ref{filterdecomposition}), taking
the sum of the maps from~(\ref{bipartiteinclusion}) as the boundary
map.
\begin{align}
\mathcal{C}^{\pm}=\left(0\to\bigoplus_{e\in E_{{{\Gamma_u}}}}\bigoplus_{s\in S_e}A^{\pm}(s)\to\bigoplus_{v\in V_{{{\Gamma_u}}}}\bigoplus_{s\in S_v}A^{\pm}(s)\to 0\right)\label{fcomplex}
\end{align}
By~(\ref{filterdecomposition}),
$\chi(\mathcal{C}^{\pm})=\chi(\mathcal{C})$. Since
\[
V_S=\bigsqcup_{v\in V_{{{\Gamma_u}}}}S_v\mbox{ and } E_S=\bigsqcup_{e\in E_{{{\Gamma_u}}}}S_e~,
\]
after reindexing,~(\ref{fcomplex}) becomes
\[
\mathcal{C}^{\pm}=\left(0\to \bigoplus_{e\in E_S}A^{\pm}(e) \to \bigoplus_{v\in V_S}A^{\pm}(v)\to 0\right)~,
\]
with boundary maps coming from~(\ref{boundaryonelevel}).
These auxiliary chain complexes enable us to relate $\chi(\mathcal{C})$ to the vector spaces $A^{\pm}(s)$ that come from the filtrations of the ${W}_{x}$.
\begin{lemma}
\label{charc}
Suppose $S$ is a circle. Then
\[
\max\{\dim(A^{\pm}(s))\mid s\in S\}\leq \chi(\mathcal{C})~.
\]
\end{lemma}
The proof uses the following naive estimate.
\begin{remark}\label{rem: Naive estimate}
Let $a_1,\dotsc,a_n$ and $b_1,\dotsc,b_{n-1}$ be non-negative
integers, and suppose that $a_i\geq b_i\leq a_{i+1}$ for $i=1\dotsc
n-1$. Then
\[
a_1-b_1+\dotsb-b_{n-1}+a_n\geq \max\{a_i,b_i\}~.
\]
\end{remark}
\begin{proof}[Proof of Lemma~\ref{charc}]
Pick an edge $g\in w(E_S)\subseteq E_{{\Gamma_u}}$, and let $m^+$
and $m^-$ be the minimal and maximal elements of $S_g$ with respect
to the order $\leq_g$. Since $m^{\pm}$ is minimal/maximal,
\[
V_{W_g^{\pm}(m^{\pm})}={\Gamma}_g\cup\{m^{\pm}\}
\]
and
\[
E_{W_g^{\pm}(m^{\pm})}=\{p\mid\sigma(p)=m^{\pm}\}~.
\]
By Lemma~\ref{lem: DI properties} ${W}_g$ is simple, so if
$p\in E_{W_g^{\pm}(m^{\pm})}$ then $p$ is determined by
  $\lambda(p)$, and ${W}_g^{\pm}(m^{\pm})$ is therefore ${\Gamma}_g$ with
$\lambda({P}_{m^{\pm}})$ coned off, so $A^+(m^+)=A^-(m^-)=0$.
Removing $m^{\pm}$ from $S$ therefore doesn't change the
characteristic of the chain complexes $\mathcal{C}^{\pm}$, i.e.
\[
\chi(\mathcal{C}^{\pm})=\chi(\mathcal{C}^{\pm}\vert_{S\setminus{m^{\pm}}})
\]
where
\[
\mathcal{C}^{\pm}\vert_{S\setminus m^{\pm}}=\left(0\to \bigoplus_{e\in
E_S\setminus m^{\pm}}A^{\pm}(e) \to \bigoplus_{v\in
V_S}A^{\pm}(v)\to 0\right).
\]
The chain complex $\mathcal{C}^{\pm}\vert_{S\setminus m^{\pm}}$ is
over an interval $S\setminus m^{\pm}$, which makes its Euler
characteristic easy to estimate. Label and reorient $S$ so that
$V_S=\{v^{\pm}_1,\dotsc,v^{\pm}_n\}$ and
$E_S=\{m^{\pm},e^{\pm}_1,\dotsc,e^{\pm}_{n-1}\}$ with
  $\iota(e^{\pm}_i)=v^{\pm}_i$ (for $i=1,\dotsc, n-1$) and
$\tau(e^{\pm}_i)=v^{\pm}_{i+1}$ (for $i=1,\dotsc, n-1$). Set
$a^{\pm}_i=\dim(A^{\pm}(v^{\pm}_i))$ and
$b^{\pm}_j=\dim(A^{\pm}(e^{\pm}_j))$. Then
\[
\chi(\mathcal{C})=\chi(\mathcal{C}^{\pm})=a^{\pm}_1-b^{\pm}_1+a^{\pm}_2-\dotsb+a^{\pm}_{n-1}-b^{\pm}_{n-1}+a^{\pm}_n~.
\]
Since $\alpha\colon A^{\pm}(e)\to A^{\pm}(\alpha(e))$ is injective,
$a^{\pm}_i\geq b^{\pm}_i\leq a^{\pm}_{i+1}$ for $i=1,\dotsc ,n-1$, and
\[
\chi(\mathcal{C})\geq\max\{a^{\pm}_i,b^{\pm}_i\}=\max\{\dim(A^{\pm}(s))\mid
s\in S\}\geq 0
\]
by Remark \ref{rem: Naive estimate}.
\end{proof}
\begin{remark}
It is not clear from the start that $\chi(\mathcal{C})$ is
non-negative. It follows from Mayer--Vietoris that the chain
  complexes $\mathcal{C}^{\pm}\vert_{S\setminus m^{\pm}}$, and therefore
$\mathcal{C}^{\pm}$, have their homology concentrated in
dimension $0$.
\[
\chi(\mathcal{C}^{\pm})=\dim(H_0(\mathcal{C}^{\pm}))
\]
The special case $\chi({\Gamma})=\chi({{\Gamma_u}})$ is of some
interest since it implies the theorems of Baumslag and Stallings.
In these cases $\chi(\mathcal{C})=0$, and by Lemma~\ref{charc}
$\dim(A^{\pm}(s))=0$ for all $s\in
S$. By~(\ref{filterdecomposition}),
$H_1({W}_{x})=0$ for all
$x\in{{\Gamma_u}}$, but a connected graph with trivial
homology is a tree. If $\deg(\sigma)\geq 2$ then no
$s\in S_{x}$ has valence one, so there are at least two
valence-one vertices in ${\Gamma}_{x}$, hence $\lambda$ is
  strongly independent, and therefore independent. This case is argued
differently in the paper~\cite{adjoiningroots}. There it was shown
directly that the vertices in ${\Gamma}_{x}$ are cutpoints
in $W_{x}$, and acylindricity of the
associated graph of groups $\Delta$ then implied that the edge and
vertex spaces are trees. Since this is not true in general, we use
stackings to argue indirectly that if
$\chi(\mathcal{C})<\deg(\sigma)-1$ then the edge spaces have
``treelike'' features, and ultimately, valence one
vertices.
\end{remark}
\subsection{The up-down lemma and the proof of Theorem~\ref{maintheorem}}
The final ingredient of the proof of the dependence theorem is the \emph{up-down lemma}. To formulate it, we first recapitulate some of the discussion from
Section~\ref{fiberwisefiltering} in general terms.
Consider a finite bipartite graph
$B=(V_B=C_B\sqcup U_B,E_B,\sigma,\lambda)$ with an order $\leq$ on
$C_B$. For $c\in C$ define
\[
B^+(c)=U\cup\{c'\mid c'\leq c\}\cup\{e\mid\sigma(e)\leq c\}
\]
and
\[
B^-(c)=U\cup\{c'\mid c'\geq c\}\cup\{e\mid\sigma(e)\geq c\}~.
\]
Let
\[
A^{\pm}(c)=H_1(B^{\pm}(c))/H_1(B^{\pm}(c\mp 1))~,
\]
where we interpret $B^+(c-1)$ as $U$ if $c$ is minimal and $B^-(c+1)$
as $U$ if $c$ is maximal. A vertex $c\in C$ is \emph{good} if
\[
\max\{\dim(A^{\pm}(c))\}=\nu(c)-1~.
\]
A vertex $u\in U$ is \emph{good} if it has valence one.
\begin{figure}[ht]
\centerline{
\includegraphics[width=\textwidth]{filtration-allsteps.pdf}
}
\caption{Illustration of a filtration associated to an order $\leq$
on a (simple) bipartite graph $B$. The elements of $U$ are all
drawn at the same level, and elements of $C$ are placed
vertically. To keep the pictures uncluttered we omit elements of
$U$ which aren't connected to vertices in $C\cap B^+(c)$. The
number below each graph is the dimension of $A^+(c)$ for the
vertex $c$ added at that stage. The graph $B$ has $6+6$ vertices
and $18$ edges, for a characteristic of $-6$, and is connected
    with first Betti number $0+0+2+2+1+2$.}
\label{filtration}
\end{figure}
\begin{lemma}[Up-down lemma]
\label{updownlemma}
Let $B$ be a simple connected bipartite graph which is not a
point. Let $\leq$ be an order on $C$. Then
\[
    \vert\{p\in C\cup U\mid p\mbox{ is good}\}\vert \geq 2~.
\]
\end{lemma}
\begin{proof}
The proof is by induction on $\vert C\vert$. Suppose that
$\vert C\vert=1$. If $\vert U \vert=1$ then $C=\{c\}$, $U=\{u\}$,
$c$ has valence $1$, $\dim(A^{\pm}(c))=\nu(c)-1=0$, so $c$ is
good, and $\vert\lambda^{-1}(u)\vert=1$ so $u$ has valence one, so
is good. If $\vert U\vert\geq 2$ then there are $\vert U\vert\geq 2$
valence one vertices.
Suppose that $\vert C\vert\geq 2$, and let ${m^-}$ and ${m^+}$
be the maximal and minimal elements of $C$, respectively. If
${m^-}$ and ${m^+}$ are both good then we are done.
The long exact sequence for the pair $(B,B^+({m^-}-1))$
reduces to the exact sequence
\begin{equation}
0\to A^+({m^-}) \to H_1(B,B^+({m^-}-1))\to
H_0(B^+({m^-}-1))\to H_0(B)\to 0~.\label{les}
\end{equation}
Since $B\setminus B^+({m^-}-1)$ has one vertex ${m^-}$ and has
$\nu({m^-})$ edges connecting $B^+({m^-}-1)$ to
${m^-}$, the relative homology group $H_1(B,B^+({m^-}-1))$ is
$\nu({m^-})-1$ dimensional. Since $B$ is connected,
$\dim(H_0(B))=1$. Suppose now that ${m^-}$ is not good. Since
$B$ is simple, $\dim(A^-({m^-}))=0$, and since ${m^-}$ is not
good, $\dim(A^+({m^-}))<\nu({m^-})-1$, so by~(\ref{les})
$\dim(H_0(B^+({m^-}-1)))>1$, and $B^+({m^-}-1)$ is therefore
not connected and $B\setminus {m^-}$ has at least two connected
components.
Let $B_{m^-}$ be the closure of a connected component of
$B\setminus {m^-}$ which doesn't contain ${m^+}$. By induction
on $\vert C\vert$, $B_{m^-}$ has at least two good vertices, one
of which is not ${m^-}$. Let $g$ be this vertex. If ${m^+}$ is
good then ${m^+}$ and $g$ are both good. Argue symmetrically if
${m^-}$ is good and ${m^+}$ is not good.
Thus we assume both ${m^-}$ and ${m^+}$ are not good. Again,
let $B_{m^-}$ be the closure of a connected component of
$B\setminus {m^-}$ which doesn't contain ${m^+}$, and let
$B_{m^+}$ be the closure of a connected component of
$B\setminus {m^+}$ which doesn't contain ${m^-}$. The vertices
${m^-}$ and ${m^+}$ are good in $B_{m^-}$ and $B_{m^+}$,
respectively, and $B_{m^-}$ and $B_{m^+}$ are disjoint. By
induction on $\vert C\vert$, $B_{m^-}$ and $B_{m^+}$ each
contain at least two good vertices, at least one of which is not
${m^-}$ or ${m^+}$, respectively. A good vertex in
$B_{m^-}$ which is not ${m^-}$ is good in $B$, and a good
vertex in $B_{m^+}$ which is not ${m^+}$ is good in $B$ as
well, so $B$ has at least two good vertices.
\end{proof}
\begin{figure}[ht]
\centerline{
\includegraphics[width=.9\textwidth]{updownlemma2.pdf}
}
\caption{Illustration for Lemma~\ref{updownlemma}. In this case
neither ${m^-}$ nor ${m^+}$ is good. We picture $U$ as
sitting below ${m^-}$ and above ${m^+}$.}
\end{figure}
With the up-down lemma in hand, we can finally prove the dependence theorem.
\begin{proof}[Proof of Theorem~\ref{maintheorem}]
We prove the contrapositive. Suppose that
\[
\chi({\Gamma})+\deg(\sigma)-1>\chi({{\Gamma_u}})~.
\]
Our goal is to prove that ${W}$ is strongly independent.
By Lemma~\ref{charc}, $\chi(\mathcal{C})$ is bounded from below by
\[
\max_{x\in {{\Gamma_u}}}\max_{s\in S_{x}}\{\dim(A^{\pm}(s))\}~,
\]
so
\[
\chi({\Gamma}) +
\max_{x\in {{\Gamma_u}}}\max_{s\in S_{x}}
\{\dim(A^{\pm}(s))\}\leq
\chi({\Gamma})+\chi(\mathcal{C}) = \chi({{\Gamma_u}})~.
\]
Since $\chi({\Gamma})+\deg(\sigma)-1>\chi({{\Gamma_u}})$ by assumption,
\begin{align}
\max_{x\in {{\Gamma_u}}}\max_{s\in S_{x}}\{\dim(A^{\pm}(s))\}<\deg(\sigma)-1~.\label{nearlythere}
\end{align}
To show that ${W}$ is strongly independent, we need to show that
$\vert\partial {W}\cap {W}_e\vert\geq 2$ for each
$e\in w(E_S)\subseteq E_{{\Gamma_u}}$.
To that end, choose $e\in w(E_S)$ and apply the up-down lemma to ${W}_e$ by
setting $B={W}_e$, $C_B=S_e$, $U_B={\Gamma}_e$, $E_B={P}_e$, and $\leq=\leq_e$.
Lemma~\ref{lem: Circle valence} asserts that
$\deg(\sigma)=\nu(s)$, so~(\ref{nearlythere}) implies that
$\dim(A^{\pm}(s))<\nu(s)-1$ for all $s\in S_e$. In particular,
no vertex in $S_e$ is good. Since the up-down lemma guarantees two
good vertices in ${W}_e$, it follows that there are two good vertices
in ${\Gamma}_e$. A good vertex in ${\Gamma}_e$ has valence one, so
$\vert\partial {W}\cap {W}_e\vert\geq 2$.
This is true for all $e\in w(E_S)$, which is precisely what it
means for the map $\lambda\colon {P}\to {\Gamma}$ to be strongly independent.
\end{proof}
\section{Stallings; Magnus and Lyndon; Duncan--Howie}
\label{applications}
In this section we show how the dependence theorem implies its
predecessors mentioned in the introduction. We have already seen that
it implies Theorem \ref{introthm: main}, which in turn implies
Baumslag's theorem. We next state a generalization of Stallings'
theorem and explain how it follows as well. In the following
subsection we explain how the dependence theorem implies Magnus'
Freiheitssatz and Lyndon's asphericity theorem. Finally, we explain
how the dependence theorem implies a strengthening of the theorem of
Duncan--Howie.
\subsection{Conjugacy and homology}
\label{subsec: stallings}
A homomorphism of free groups $f\colon H\to{F}$
induces a map
$f_{\sim}\colon
H/\negthinspace\negthinspace\sim\thinspace\to{F}/\negthinspace\negthinspace\sim$
on sets of conjugacy classes. A 1983 theorem of Stallings, which we
also think of as a kind of dependence theorem, relates $f_{\sim}$ to
the induced map on abelianizations,
$f_{\#}\colon H_1(H)\to H_1({F})$ \cite[Theorem
5.3]{stallings-surfaces}.
\begin{thm}[Stallings]
Let $f\colon H\to{F}$ be an injection of free
groups. If $f_{\#}$ is injective then so is $f_\sim$.
\end{thm}
\noindent(A homomorphism $f$ for which $f_\sim$ is injective is
sometimes called a \emph{Frattini embedding}; cf.\
\cite{olshanskii_conjugacy_2004}.)
In this section we quantify Stallings' theorem, and compare how badly
$f_\sim$ and $f_\#$ may fail to be injective. In the case of $f_\#$,
the failure of injectivity is measured by the rank of the kernel. To
measure the failure of $f_\sim$ to be injective, we define
\[
\gamma(f)=\max_{\left[v\right]\in{F}/\negthinspace\sim}\{\vert f^{-1}_{\sim}(\left[ v\right])\vert\}\in\mathbb{N}\cup\{\infty\}~,
\]
the maximal number of conjugacy classes in $H$ that are identified
in ${F}$. Using this terminology, Stallings' theorem
asserts: if $\gamma(f)>1$ then $\rk{\ker(f_\#)}>0$.
The main result of this section is a corollary of the dependence theorem that strengthens Stallings' theorem by comparing $\rk{\ker(f_\#)}$ to $\gamma(f)$.
\begin{corollary}
\label{stallings}
Let $f\colon H\to{F}$ be an injection of free groups. Then
\[
\rk{\ker(f_{\#})}\geq\gamma(f)-1~.
\]
\end{corollary}
\begin{proof}
The proof is by induction on $m=\gamma(f)$. In the base case, $m=1$,
there is nothing to prove, so we assume that $m\geq 2$. We may also
assume that $H$ and ${F}$ are finitely generated. Let
$u_1,\dotsc,u_m$ be a collection of non-conjugate elements realizing
$\gamma(f)$. For each $u_j$ let $v_j$ be an element of $H$ such
that $u_j= v_j^{k_j}$, with $k_j\geq 1$ maximal, so
$\{\langle v_j\rangle\}$ forms a malnormal family of cyclic
subgroups of $H$. There is some $w\in {F}$ with the property
that each $f(v_j)$ is conjugate to $w^{n_j}$ for some unique integer
$n_j$. As in the introduction, these data define a graph of groups
$\Delta$ and $f$ extends to a homomorphism
$\phi\colon \pio{\Delta}\to{F}$. Let
$L=\phi(\pio{\Delta})<{F}$.
Since $f(H)$ is contained in $L$ we have $\rk{f_\#}\leq\rk{L}$
and so the rank-nullity lemma applied to $f_\#$ gives
\[
\rk{\ker(f_\#)}= \rk{H}-\rk{f_\#}\geq \rk{H}-\rk{L}~.
\]
If the malnormal family $\{\langle v_j\rangle\}$ is dependent then
Theorem \ref{introthm: main} implies that
\[
\rk{H}-\rk{L} \geq \sum_{j=1}^m n_j-1\geq m-1~.
\]
These two estimates together imply the result, so it remains to deal
with the case in which $\{\langle v_j\rangle\}$ is independent.
After permuting indices and conjugating the $v_j$ appropriately,
this means that
\[
H=\KK*\langle v_m\rangle
\]
and $v_j\in \KK$ for $j<m$. Therefore, by the inductive hypothesis
applied to $f\vert_{\KK}\colon \KK\to{F}$, we have
$\rk{\ker(f_{\#}\vert_{H_1(\KK)})}\geq m-2$. Since
$f(v_1)^{n_m}$ is conjugate to $f(v_m)^{n_1}$, the class
\[
c = n_m [v_1] - n_1 [v_m]
\]
is non-zero in $H_1(H)$, is contained in the kernel of $f_{\#}$,
but is not in $H_1(\KK)$. Therefore,
\[
\rk{\ker(f_{\#})}\geq\rk{\ker(f_{\#}\vert_{H_1(\KK)})}+1\geq m-1
\]
as required.
\end{proof}
\begin{remark}
Corollary~\ref{stallings} is sharp. Let
${F}=\langle a,b\rangle$ and
\[
H=\langle a,bab^{-1},\dotsc,b^{n-2}ab^{2-n},b^{n-1}ab^{1-n}\rangle
\]
with $f$ the inclusion map. The $n$ basis elements $b^iab^{-i}$ of $H$ are conjugate in
${F}$ and $\rk{\ker(f_{\#})}=n-1$.
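To spell this out, using only standard facts about free groups:
distinct members of a basis of a free group are never conjugate
(their images in the abelianization already differ), so the
$b^iab^{-i}$ represent $n$ distinct conjugacy classes of $H$ which
become a single class in ${F}$, whence $\gamma(f)\geq n$. On the
other hand $f_{\#}$ sends each basis class $[b^iab^{-i}]$ to
$[a]\in H_1({F})$, so $\ker(f_{\#})$ consists of the classes
$\sum_{i=0}^{n-1}c_i[b^iab^{-i}]$ with $\sum_ic_i=0$ and is spanned
by the $n-1$ classes $[b^iab^{-i}]-[a]$, $1\leq i\leq n-1$. Thus
$\rk{\ker(f_{\#})}=n-1$, and Corollary~\ref{stallings} forces
$n-1\geq\gamma(f)-1\geq n-1$, so the bound is attained.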
\end{remark}
\subsection{The Freiheitssatz and Lyndon asphericity}
We again consider a one-relator group $\GG={F}/\ncl{w}$, with the
word $w$ realized as usual by an immersion of graphs
$w\colon S\looparrowright {{\Omega}}$. Note that $w$ may be a proper power $v^k$, where $k\geq 1$ is assumed to be maximal. In this section we show how
Corollary~\ref{monotonicity} implies the Freiheitssatz and Lyndon
asphericity. In what follows $X$ is the presentation complex
${{\Omega}}\cup_wD$ of the one-relator group $\GG$, where
$w\colon{S}\to{{\Omega}}$ is the attaching map of the two-cell, and
$Z={{\Omega}}\cup_vD$ is the presentation complex of the one-relator
group ${F}/\ncl{v}$. There is a
natural map $q\colon X\to Z$, equal to the identity on $\Omega$ and a $k$-fold branched cover on $D$. Note that $q$ is \emph{not} a branched map in the sense of Definition \ref{defn: Branched map} if $k>1$.
\begin{definition}[Surface diagram]
A \emph{singular surface diagram} in $X$ is a cellular map, branched
over two-cells, $f\colon Y\to X$, where $Y$ is a cell complex such that
the link of every vertex in $Y$ is a union of points, circles and intervals. A singular surface diagram $f\colon Y\to X$ is
\emph{reduced} if the induced map $q\circ f$ is a branched map.
\end{definition}
This definition agrees with the usual notions of reduced disk and
sphere diagram. The following theorem, which is the main theorem of this section, is a common generalization of Magnus' Freiheitssatz and Lyndon asphericity.
\begin{theorem}[Magnus, Lyndon]\label{thm: Magnus Lyndon}
Let $X$ be the presentation complex of a one-relator group, and $f\colon Y\to X$ a reduced singular surface diagram. If $\chi(Y)\geq 1$ then $w(S)\subseteq f(\partial Y)$.
\end{theorem}
\begin{proof}
If $w(S)\not\subseteq f(\partial Y)$ then certainly
$w(S)\not\subseteq q(f(\partial Y))$, so we may replace $X$ by $Z$.
Let $\hatt Y$ be the one-relator pushout of the map $Y\to Z$. By
Corollary~\ref{monotonicity}, $\chi(Y)\leq\chi(\hatt Y)$, so
$\chi(\hatt Y)\geq 1$. Since $\hatt Y$ is one-relator, and $v$ is
indivisible, $\hatt Y$ is the disk $D$, and ${\Gamma_u}=\partial D$ is a
circle. Since $Y\to Z$ is a branched map, it doesn't fold faces, but
since $Y\to Z$ factors through $D$, no two two-cells in $Y$ share an
edge. Thus $Y$ is a tree of disks. See Figure~\ref{fig: tree of
disks}. In this case, $\partial Y$ clearly surjects onto $w(S)$.
\end{proof}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.6\textwidth]{treeofdisks.pdf}
\end{center}
\caption{$Y$ is a tree of disks.}
\label{fig: tree of disks}
\end{figure}
Magnus' Freiheitssatz \cite{magnus-freiheitssatz}, corresponding to the case when $Y$ is a disk, and Lyndon asphericity \cite{lyndon-cohomology,cockcroft_two-dimensional_1954}, corresponding to the case where $Y$ is a sphere, follow immediately.
\begin{corollary}[Magnus' Freiheitssatz]\label{cor: Freiheitssatz}
Let $X$ be the presentation complex of a one-relator group $\GG={F}/\ncl{w}$. If $Y\to X$ is a reduced disk diagram, then $\partial Y$ surjects onto $w(S)$.
\end{corollary}
\begin{corollary}[Lyndon asphericity]\label{cor: Lyndon asphericity}
Let $X$ be the presentation complex of a one-relator group $\GG={F}/\ncl{w}$. If $Y$ is homeomorphic to a 2-sphere, then no combinatorial map $Y\to X$ is reduced.
\end{corollary}
\subsection{Roots of products of commutators}
\begin{definition}
Let ${F}$ be a free group. The \emph{genus} or
\emph{commutator length} of an element $v\in{F}$ is
defined to be the minimal $g\in\mathbb{N}$ such that
\[
v=[a_1,b_1]\ldots[a_g,b_g]~.
\]
\end{definition}
The Duncan--Howie theorem is an estimate on the commutator length of a
proper power $v=w^n$: it asserts that $n\leq 2g-1$
\cite{duncan-howie}. Here, we view it as a dependence theorem about
maps $H\to F$ where $H$ is the fundamental group of
a surface $\Sigma$ with boundary, and $\partial\Sigma$ maps to powers
of conjugates of $w$. In this section, we prove another corollary of
Theorem \ref{maintheorem}, which strengthens the Duncan--Howie
theorem.
\begin{corollary}
\label{dhcorollary}
Let ${F}$ be a free group and consider $v$ a
non-trivial element which is both a $k$--th power and a product of
$g$ commutators, that is, there are $a_i,b_i,w\in{F}$
with $1\leq i\leq g$ and
\[
v=[a_1,b_1]\ldots[a_g,b_g]=w^k~.
\]
Then
\[
\rk{\langle a_1,\ldots,a_g,b_1,\ldots,b_g,w\rangle} +k-1\leq 2g~.
\]
\end{corollary}
Since $v$ is non-trivial, the subgroup
$\langle a_1,\ldots,a_g,b_1,\ldots,b_g,w\rangle$ cannot be cyclic (in a
cyclic group every product of commutators is trivial), so
$\rk{\langle a_1,\ldots,a_g,b_1,\ldots,b_g,w\rangle} \geq 2$ and it
follows that $k\leq 2g-1$, recovering the Duncan--Howie estimate.
\begin{figure}[ht]
\centerline{
\includegraphics[width=.8\textwidth]{mobius.pdf}
}
\caption{When $\Sigma$ is orientable the map
$\lambda\colon {P}\to {\Gamma}$ is diagrammatically irreducible
since otherwise $\Sigma$ contains a M\"obius band.}
\label{fig: Mobius}
\end{figure}
\begin{proof}[Proof of Corollary~\ref{dhcorollary}.]
Represent the subgroup $\langle a_1,\ldots,a_g,b_1,\ldots,b_g\rangle<{F}$ by a map
$f\colon\Sigma\to {{\Omega}}$ from an orientable surface of genus
$g$ with one boundary component, so that $f\vert_{\partial\Sigma}$
represents the element $v$. We may assume that $f$ doesn't pinch any
simple closed curves, and that $w$ is indivisible in
$\langle a_1,\ldots,a_g,b_1,\ldots,b_g,w\rangle$. By
\cite{culler-surfaces}, we may realize $\Sigma$ as the mapping
cylinder of $\lambda\colon {P}\to {\Gamma}$, where ${P}$ is a
circle representing the boundary of $\Sigma$, with a morphism of
graphs $h\colon {\Gamma}\to {{\Omega}}$ representing
$\langle a_1,\ldots,a_g,b_1,\ldots,b_g\rangle$. Orientability of
$\Sigma$ implies that $\lambda$ is diagrammatically irreducible. See
Figure~\ref{fig: Mobius}. The induced map from the pushout ${{\Gamma_u}}$ surjects onto
$\langle a_1,\ldots,a_g,b_1,\ldots,b_g,w\rangle$, and the inequality
then follows from the dependence theorem.
\end{proof}
\section{Subgroups of one-relator groups}
The results of this section show how $\pi(w)$ controls the subgroup
structure of the one-relator group $\GG=F/\ncl{w}$.
\subsection{Primitivity rank and $w$--subgroups}
\label{subsection: primitivity rank and w subgroups}
Recall the definition of the primitivity rank $\pi(w)$ from the introduction
(Definition \ref{defn: Primitivity rank}). We start with a few simple
observations.
\begin{enumerate}[(i)]
\item The word $w$ is primitive in ${F}$ if and only
if $\pi(w)=\infty$.
\item Unless $w$ is primitive, $\rk{{F}}$ is an upper
bound for $\pi(w)$.
\item The word $w$ is a proper power if and only if $\pi(w)=1$.
\end{enumerate}
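For instance, if $a$ and $b$ are distinct members of a basis of
${F}$ then $\pi(a)=\infty$ by (i) and $\pi(a^2)=1$ by (iii). A
slightly less trivial illustration is $w=[a,b]$: a non-trivial
commutator in a free group is never a proper power (this also follows
from the Duncan--Howie estimate of the previous section, with $g=1$),
so $\pi(w)>1$ by (iii); on the other hand the image of $w$ in
$H_1(\langle a,b\rangle)$ is trivial, whereas a primitive element
always has a non-zero image in the abelianization, so $w$ is
imprimitive in the rank-two subgroup $\langle a,b\rangle$ and
$\pi([a,b])=2$.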
We now turn to the second definition needed for the main lemma.
\begin{definition}\label{defn: w-subgroup}
Let ${F}$ be a free group and
$w\in{F}$ a non-trivial element. A subgroup $\KK$ of
${F}$ is a \emph{$w$--subgroup} if:
\begin{enumerate}[(i)]
\item $\KK$ contains $w$ as an imprimitive element;
\item $\rk{\KK}=\pi(w)$; and
\item every proper overgroup $\KK'$ of $\KK$ in ${F}$ has
$\rk{\KK'}>\rk{\KK}$.
\end{enumerate}
\end{definition}
In the easiest case $w$--subgroups are cyclic; this occurs if and only
if $\pi(w)=1$, i.e.\ when $w$ is a proper power $u^k$.
\begin{example}\label{ex: w-subgroups of powers}
If $w=u^k\in{F}$ with $k>1$ and $u$ not a proper
power then $\langle u\rangle$ is the unique $w$--subgroup of
${F}$. It is well-known that the inclusion
$\langle u\rangle/\ncl{w}\to {F}/\ncl{w}$ is
injective \cite[Proposition II.5.17]{lyndon-schupp}.
\end{example}
So when $\pi(w)=1$, a $w$--subgroup is unique and malnormal. In fact,
malnormality holds in general.
\begin{lemma}\label{lem: w-subgroups of free groups are malnormal}
If $\KK<{F}$ is a $w$--subgroup then $\KK$ is
malnormal. In particular, if $w^g\in\KK$ then $g\in\KK$.
\end{lemma}
\begin{proof}
Let $g\in{F}$; then $\KK<\langle \KK,g\rangle$ and
$\rk{\langle \KK,g\rangle}\leq\rk{\KK}+1$. If $k_1^g=k_2$ for
$k_1,k_2\in\KK\setminus 1$ then there is a non-trivial relation
between $\KK$ and $g$ and so, since free groups are Hopfian,
$\rk{\langle \KK,g\rangle}\leq\rk{\KK}$. Therefore, by the definition of a
$w$--subgroup, $\langle \KK,g\rangle=\KK$, so $g\in\KK$.
\end{proof}
Uniqueness in the case $\pi(w)=1$ extends to finiteness in general,
and the finite list of $w$--subgroups is computable. Computability
was touched on in \cite[p. 66]{puder_measure_2015}.
\begin{lemma}\label{lem: Finitely many w-subgroups}
There are only finitely many $w$--subgroups in a free group
${F}$, and there is an algorithm that lists them.
\end{lemma}
\begin{proof}
If ${F}$ is the fundamental group of a based finite graph
${{\Omega}}$, then any finitely generated subgroup $\KK$ can be
realized by a based immersion of finite graphs
$\Lambda\looparrowright {{\Omega}}$, and if $w$ is contained in $\KK$ then
the immersion $w\colon S\to {{\Omega}}$ lifts to $\Lambda$. We only
need to consider subgroups $\KK$ for which $w$ is not contained in a
proper free factor, and for such subgroups $\KK$, every edge of
$\Lambda$ is in the image of $w$. In fact, every edge of $\Lambda$
is hit at least twice by $w$, so we only need to consider the
finitely many based immersions $\Lambda\looparrowright {{\Omega}}$ with
$\vert \Lambda\vert\leq\vert w\vert/2$. For each such
$\Lambda\looparrowright {{\Omega}}$, Whitehead's algorithm decides whether
or not $w$ is contained in a free factor of $\KK$. Keep those
$\Lambda$ of minimal rank, and of these the $w$--subgroups are the
maximal ones with respect to inclusion: $\KK<\KK'$ if and only if
the based immersion $\Lambda\to {{\Omega}}$ factors through the
based immersion $\Lambda'\to {{\Omega}}$, which can be checked
trivially.
\end{proof}
If we realize ${F}$ as the fundamental group of a core graph
${{\Omega}}$ and $w$ by an immersion $w\colon S\to {{\Omega}}$ then
each of the finitely many $w$--subgroups $\KK_i$ is realized by an
immersion of core graphs $\Lambda_i\looparrowright {{\Omega}}$. We may then
define complexes $Q_i=\Lambda_i\cup_{w} D$ (where $w$ denotes the lift of
$w$ to $\Lambda_i$, which is unique by
Lemma~\ref{lem: w-subgroups of free groups are malnormal}), which come equipped with immersions
$Q_i\looparrowright X$. These play a key role in the classification of
immersions $Y\looparrowright X$ with $\chi(Y)=2-\pi(w)$.
\begin{definition}\label{defn: w-subgroups of O}
If $\KK_i<{F}$ is a $w$--subgroup we also call
$P_i=\KK_i/\ncl{w}$ a \emph{$w$--subgroup} of
$\GG={F}/\ncl{w}$.
\end{definition}
The $w$--subgroups come equipped with homomorphisms $P_i\to \GG$
induced by the immersions $Q_i\looparrowright X$. The name `$w$--subgroup'
turns out to be justified, since by Theorem \ref{thm: Q is a subgroup}
these homomorphisms are injective.
\begin{remark}\label{rem: w-subgroups are one-ended}
A one-relator group defined by an imprimitive element is not
free. See \cite[Proposition 5.10]{lyndon-schupp}. In particular, the
$w$-subgroups $P_i$ are one-ended.
\end{remark}
\subsection{Nielsen equivalence}
\label{subsection: nielsen equivalence}
This section introduces the strong version of homotopy equivalence
that plays a role in our main results.
We will consider $w$ as defining a one-relator group
$\GG={F}/\ncl{w}$. As usual, we realize this topologically: we
consider ${F}$ as the fundamental group of a graph ${{\Omega}}$ and
$w$ (up to conjugacy) as an immersion of a circle
$w\colon S\to {{\Omega}}$ that defines the attaching map for a
2-complex $X$ with a single 2-cell; for instance, $X$ could be the
natural presentation complex for $\GG$. We work with combinatorial
maps of 2-complexes -- that is, maps that send $n$-cells to $n$-cells,
for each $n$. A map of 2-complexes $Y\to X$ is an \emph{immersion} if
it is a local injection; in this case, we write $Y\looparrowright X$. A
branched map is an immersion if and only if it is an immersion when
restricted to one-skeleta and the branching index of each two-cell is
one.
\begin{definition}\label{defn: Nielsen reduction}
Let $ Y$ be a 2-complex. An \emph{edge collapse} of $ Y$ is a
continuous surjection $f\colon Y\to Z$ of 2-complexes such that
there are finitely many zero-cells $z_1,\dotsc,z_n$ of $ Z$ so
that each $f^{-1}(\{z_i\})$ is a disjoint union of closed embedded
one-cells of $ Y$ and $\vert f^{-1}(z)\vert=1$ for $z\notin\{z_1,\dotsc,z_n\}$. A
\emph{face collapse} of $ Y$ is an inclusion $f\colon Z\into Y$ such
that $Y\setminus Z$ consists of a disjoint collection of open
1-cells $e_1,\dotsc,e_n$ and open 2-cells $g_1,\dotsc,g_n$, so that the
attaching map for $g_i$ traverses $e_i$ exactly once, and $e_i$ is
traversed only by $g_i$. Edge and face collapses are homotopy
equivalences. Let $\stackrel{n}{\rightarrow}$ be the reflexive and transitive relation
generated by:
\begin{enumerate}[(i)]
\item $ Y\stackrel{n}{\rightarrow} Z$ if there is an edge collapse
$f\colon Y\to Z$ \emph{or} $f\colon Z\to Y$; and
\item $ Y\stackrel{n}{\rightarrow} Z$ if there is a face collapse
$f\colon Z\into Y$.
\end{enumerate}
If $ Y\stackrel{n}{\rightarrow} Z$ then we say that $ Y$ \emph{Nielsen
reduces}, or simply \emph{reduces}, to $ Z$.
\end{definition}
A 2-complex $Y$ that admits a face collapse $Z\into Y$ is said to \emph{have a free
face}.
Complexes that Nielsen reduce to graphs can also be characterized
algebraically. The following theorem is an easy consequence of the fact that any pair of bases of a free group are related by Nielsen moves \cite[Proposition I.4.1]{lyndon-schupp}.
\begin{proposition}
\label{prop: reducible iff basis}
A two-complex $Y$ Nielsen reduces to a graph if and only if
the conjugacy classes represented by the attaching maps for the
two-cells of $ Y$ have representatives which are a sub-basis
of the free group $\pio{Y^{(1)}}$.
\end{proposition}
We will make use of the following technical fact about Nielsen
reduction.
\begin{lemma} \label{lem: nielsenreduces}
Let $Y$ be a 2-complex. If $U\looparrowright Y$ is an
immersion of 2-complexes and $Y\stackrel{n}{\rightarrow} Z$ then there is a
two-complex $V$ immersing in $Z$ such that
$U\stackrel{n}{\rightarrow} V$. In particular, if $U$ immerses in $Y$
and $Y$ Nielsen reduces to a graph then $U$ Nielsen
reduces to a graph.
\end{lemma}
\begin{proof}
Suppose that $f\colon U\looparrowright Y$ and that $g\colon Y\to Z$ is an
edge collapse. Let $\sim$ be the equivalence relation on $ Y$ given
by $y\sim y'$ if and only if $g(y)=g(y')$. Pull $\sim$ back to an
equivalence relation $f^*(\sim)$ on $U$. Since $f$ is an immersion
the map $U\to U/f^*(\sim)=V$ is an edge collapse, and there is an
obvious immersion $f/\sim\colon V\to Z$.
If $g\colon Z\to Y$ is an edge collapse and $f\colon U\to Y$ is an
immersion then the projection $U\times_{Y} Z\to Z $ is an
immersion and the projection $U\times_{Y} Z\to U$ is an edge
collapse.
If $g\colon Z\into Y$ is a face collapse and $f\colon U\to Y$ is an
immersion set $V=f^{-1}(g( Z))$. The inclusion map $V\into U$ is a
face collapse, and the restriction $f\vert_{V}$ is an immersion.
\end{proof}
\subsection{One-relator pushouts and primitivity rank}
We can now classify immersions of finite complexes $Y\looparrowright X$ when
$\chi(Y)$ is sufficiently large: specifically, when
$\chi(Y)\geq 2-\pi(w)$.
\begin{lemma}
\label{lem: Negative immersions}
Let $\GG={F}/\ncl{w}$ be a one-relator group as above, and $X$ a
presentation complex of $\GG$, with $w$ represented by an immersion
$w\colon S\looparrowright {{\Omega}}$. Let $Y\looparrowright X$ be an immersion
from a compact connected one- or two-complex $Y$ to $X$. Suppose
that $\chi(Y)\geq 2-\pi(w)$, that $Y$ has no free faces, and that
the one-skeleton of $Y$ is a core graph.
\begin{enumerate}[(i)]
\item If $\chi(Y)>2-\pi(w)$ then $Y$ reduces to a graph.
\item If $\chi(Y)=2-\pi(w)$ then either $Y$ reduces to a graph or
$Y\looparrowright X$ factors through some $Q_i\looparrowright X$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $Y$ has no free faces, Corollary~\ref{monotonicity} implies
that $\chi(\hatt Y^I)\geq \chi(Y)$.
We first prove item (i). Suppose that $\chi(Y)>2-\pi(w)$. If
$\pio{{\Gamma_u^I}}$ is the subgroup of ${F}$ corresponding to the
1-skeleton of $\hatt{Y}^I$,
\[
\rk{\pio{{\Gamma_u^I}}}=2-\chi(\hatt Y^I)\leq 2-\chi(Y)<\pi(w)~.
\]
Since $\pio{{\Gamma_u^I}}$ is a subgroup of
${F}$ of rank less than $\pi(w)$, $w$ represents a
primitive element of $\pio{{\Gamma_u^I}}$. By
Proposition~\ref{prop: reducible iff basis}, $\hatt Y^I$ reduces to
a graph so, by Lemma~\ref{lem: nielsenreduces}, $Y$ reduces to a
graph.
The proof of item (ii) is similar. If $Y$ is a graph there is
nothing to prove. If $w$ is primitive in $\pio{{\Gamma_u^I}}$ then, as in
the previous paragraph, $Y$ reduces to a graph. Otherwise,
$\rk{\pio{{\Gamma_u^I}}}=\pi(w)$ and $w$ is not primitive in
$\pio{{\Gamma_u^I}}$, so there is a $w$--subgroup $\KK_{i}$ of ${F}$
containing $\pio{{\Gamma_u^I}}$. Since $Y^{(1)}$ is a core graph,
${{\Gamma_u^I}}$ is also a core graph, and so the immersion
${{\Gamma_u^I}}\looparrowright {{\Omega}}$ factors through
$Q_i^{(1)}=\Lambda_i$. Therefore $\hatt Y^I\looparrowright X$ factors
through $Q_{i}\looparrowright X$, and so $Y\looparrowright X$ also factors
through $Q_{i}$.
\end{proof}
\subsection{Homomorphisms from finitely generated groups}
In this section we combine the observations from the previous
subsections and finally prove Theorem~\ref{thm: f.g. subgroups}. The
first lemma provides a tool for promoting results about immersions to
results about subgroups.
\begin{lemma}\label{lem: Folding 2-complexes}
A combinatorial map of finite 2-complexes $X\to Y$ factors as
\[
X\to Z\looparrowright Y
\]
where $X\to Z$ is surjective and $\pi_1$-surjective.
\end{lemma}
\begin{proof}
Folding shows that the map of 1-skeletons factors as
\[
X^{(1)}\to Z^{(1)}\looparrowright Y^{(1)}
\]
where $X^{(1)}\to Z^{(1)}$ is surjective and $\pi_1$-surjective. We
now construct $Z$ by pushing the attaching maps of the 2-cells of
$X$ forward to $Z^{(1)}$ and identifying any 2-cells with the same
image in $Y$ and equal boundary maps. The resulting map $X\to Z$ is
surjective and $\pi_1$-surjective. It remains to check that the
natural map $Z\to Y$ is an immersion.
Since $Z\to Y$ is combinatorial, it can only fail to be locally
injective at a point $z\in Z$ if two higher-dimensional cells
incident at $z$ have the same image in $Y$. Since the map of
1-skeleta is an immersion, this can only occur if two 2-cells
$e_1,e_2$ in $Z$, incident at $z$, have the same image in $Y$.
Since the attaching maps of $e_1$ and $e_2$ agree at $z$ and
$Z^{(1)}\to Y^{(1)}$ is an immersion, it follows that the attaching
maps of $e_1$ and $e_2$ agree everywhere. Therefore, $e_1$ and $e_2$
are equal in $Z$ by construction.
\end{proof}
This has the following useful consequence.
\begin{lemma}\label{lem: immersing fp subgroups}
Let $Y$ be a finite 2-complex, and let $f\colon H\to\pio{Y}$ be a
homomorphism from a finitely presented group. Then there is an
immersion from a finite, connected 2-complex $g\colon Z\looparrowright Y$
and a surjection $h\colon H\to \pio{Z}$ such that $f=g_*\circ h$.
\end{lemma}
\begin{proof}
Let $\langle x_1,\ldots, x_m\mid r_1,\ldots, r_n\rangle$ be a finite
presentation for $H$. Let $R \to Y$ be a combinatorial map from a
rose $R$ with petals corresponding to the $x_i$. Each relator $r_j$
is the boundary of a singular disc diagram $D_j\to Y$. Let $X$ be
constructed by gluing the $D_j$ to $R$ along their boundaries.
There is a combinatorial map $X\to Y$ realizing the homomorphism
$f$. Applying Lemma \ref{lem: Folding 2-complexes}, $X\to Y$
factors through an immersion $Z\looparrowright Y$.
\end{proof}
For homomorphisms from finitely generated groups, we obtain the
following, slightly weaker, result.
\begin{lemma}\label{lem: Immersing fg subgroups}
Let $Y$ be a finite 2-complex, and let $f\colon H\to\pio{Y}$ be a
homomorphism from an $n$--generator group. There is a sequence of
$\pi_1$-surjective immersions of finite, connected 2-complexes without free faces
\[
Z_0\looparrowright Z_1\looparrowright\cdots \looparrowright Z_i\looparrowright\cdots~,
\]
an immersion $g$ from the direct limit $Z=\varinjlim Z_i$ into $Y$
and a $\pi_1$-surjection $h\colon H\to\pio{Z}$ such that
$f=g_*\circ h$. Furthermore, we may take $\rk{\pio{Z_0}}\leq n$.
\end{lemma}
\begin{proof}
Consider a sequence of surjections of groups
\[
H_0\to H_1\to\cdots\to H_i\to\cdots\to H
\]
so that each $H_i$ is finitely presented and
$\varinjlim H_i=H$. We may of course take $H_0$ to be free of
rank $n$. As in the proof of Lemma \ref{lem: immersing fp
subgroups}, each $H_i$ may be realized as the fundamental group
of a compact 2-complex $X_i$ so that the homomorphisms
$H_i\to H_{i+1}$ are realized by combinatorial maps
$X_i\to X_{i+1}$.
We now use Lemma \ref{lem: Folding 2-complexes} repeatedly to improve these maps to immersions. For each $i$, set $Z'_{i,0}=X_i$ and define $Z'_{i,j}$ inductively as the result of applying Lemma \ref{lem: Folding 2-complexes} to the map $Z'_{i,j-1}\to Z'_{i+1,j-1}$, to obtain a factorization
\[
Z'_{i,j-1}\to Z'_{i,j}\looparrowright Z'_{i+1,j-1}~.
\]
Since the maps $Z'_{i,j-1}\to Z'_{i,j}$ are surjective maps of finite complexes, they eventually stabilize at some finite stage $j(i)$; let $Z'_i=Z'_{i,j(i)}$. For each $i$, let $Z_i$ be the result of collapsing any free faces of $Z'_i$. Since the preimage of a free face under an immersion is a free face, each immersion $Z'_i\looparrowright Z'_{i+1}$ restricts to an immersion $Z_i\looparrowright Z_{i+1}$. This yields a sequence of immersions of finite complexes
\[
Z_0\looparrowright Z_1\looparrowright\ldots\looparrowright Z_i\looparrowright\ldots\looparrowright Y
\]
as required, and by construction the homomorphism $H\to\pio{Y}$
factors through a $\pi_1$-surjection to $Z=\varinjlim Z_i$.
\end{proof}
In general, when one applies Lemma \ref{lem: Folding 2-complexes}
there may be no relation between the Euler characteristics of the
complexes $X$ and $Z$. However, we will obtain some control using a
theorem of Howie. Recall that a group is \emph{locally indicable} if
every non-trivial finitely generated subgroup has infinite
abelianization.
\begin{thm}[{\cite[Corollary~4.2]{howie-pairs}}]\label{thm: Howie}
If $X$ is a 2-complex and $Y\subseteq X$ is a connected subcomplex
such that $\pio{Y}$ is locally indicable and $H_2(X,Y)=0$ then the
map $\pio{Y}\to\pio{X}$ induced by inclusion is injective.
\end{thm}
We use Howie's theorem to prove the following lemma, which can also be
deduced from earlier results of Stallings
\cite[p171]{stallings-homology}.
\begin{lemma}\label{homologylemma}
If $X$ is a connected, aspherical 2-complex and $\pio{X}$ is
generated by $n$ elements, then
\[
\chi(X)\geq 1-n
\]
with equality if and only if $\pio{X}$ is free on $n$ generators.
\end{lemma}
\begin{proof}
Let $x_1,\ldots,x_n$ be a generating set for $\pio{X}$. Since $X$ is
2-dimensional and $b_1(X)\leq n$ it is clear that $\chi(X)\geq 1-n$.
Conversely, if $\pio{X}$ is free on $n$ generators then, since $X$ is
aspherical, $H_2(X)=0$ and $b_1(X)=n$, so $\chi(X)=1-n$; it therefore
suffices to show $\pio{X}$ is free on the $x_i$ if
$\chi(X)=1-n$. We can realize the $x_i$ by a combinatorial
$\pi_1$-surjection of a rose $f\colon R\to X$. Let $M$ be the mapping
cylinder of $f$, a 2-complex homotopy-equivalent to $X$. If
$\chi(M)=1-n$ then $H_2(M)=0$ and the natural map $H_1(R)\to H_1(M)$
is injective. Therefore, by the long exact sequence of a pair,
$H_2(M,R)=0$ and so by Howie's theorem, $\pio{R}\to\pio{M}$
is injective, since free groups are locally indicable. Therefore,
$\pio{M}=\pio{X}$ is free on the $x_i$.
\end{proof}
We can now prove the group-theoretic analogue of Lemma~\ref{lem:
Negative immersions}, from which Theorem \ref{thm: k-free} follows
immediately.
\begin{lemma}\label{lem: Universal property of w-subgroups}
Let $\GG={F}/\ncl{w}$ be a one-relator group with $\pi(w)>1$, and
let $f\colon H\to\GG$ be a homomorphism from a finitely generated
group $H$.
\begin{enumerate}[(i)]
\item If $\rk{H}<\pi(w)$ then $f$ factors through a free group.
\item If $\rk{H}=\pi(w)$ and $H$ is not free of rank $\pi(w)$
then either $f$ factors through a free group or $f(H)$ is
conjugate into some $w$--subgroup $P_k$.
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma \ref{lem: Immersing fg subgroups}, there is a sequence of $\pi_1$-surjective immersions of finite, connected 2-complexes without free faces
\[
Z_0\looparrowright Z_1\looparrowright\cdots\looparrowright Z_i\looparrowright\cdots
\]
so that $f$ factors through $\pio{Z}$, where $Z=\varinjlim Z_i$.
Therefore, if $f$ does not factor through a free group, $\pio{Z}$ is
not free. Since free groups are Hopfian, $\pio{Z_i}$ is not free for
all but finitely many $i$, and so we may assume without loss of
generality that $\pio{Z_i}$ is not free for any $i$.
If $\rk{H}<\pi(w)$ then, for all $i$,
\[
\chi(Z_i)\geq 2-\rk{H}>2-\pi(w)
\]
by Lemma \ref{homologylemma}, and so $Z_i$ Nielsen reduces to a graph
by Lemma \ref{lem: Negative immersions}, which contradicts the
assumption that $\pio{Z_i}$ is not free. This proves item (i).
If $\rk{H}=\pi(w)$ then, similarly, $\chi(Z_i)\geq 2-\pi(w)$ for all
$i$, and since $\pio{Z_i}$ is not free, we must have
$\chi(Z_i)=2-\pi(w)$. Therefore, by Lemma \ref{lem: Negative
immersions}, each immersion $Z_i\looparrowright X$ factors through some
$Q_{k(i)}\looparrowright X$. Since there are only finitely many $Q_k$ by
Lemma \ref{lem: Finitely many w-subgroups}, there is a $k$ such that
$Z_i\looparrowright X$ factors through $Q_k$ for infinitely
many $i$, whence $f$ factors through $P_k$. This proves
item (ii).
\end{proof}
\subsection{$w$--subgroups are subgroups}
At last we can prove, as claimed, that the $P_i$ really are subgroups
of the one-relator group $\GG$.
\begin{theorem}\label{thm: Q is a subgroup}
Let ${F}$ be a free group with $w\in {F}$. The natural maps
$P_i\to\GG$ are injective.
\end{theorem}
\begin{proof}
We assume that $w$ is nontrivial and that $\pi(w)>1$, since the case
$\pi(w)=1$ is well-known, as noted in Example \ref{ex: w-subgroups of powers}.
Let $\gamma\colon S^1\to Q_i$ be an edge loop whose image in $X$ is
null-homotopic. Let $D$ be a van Kampen diagram for $\gamma$. Let
$R=Q_i\cup_\gamma D$, which comes equipped with a natural map
$R\to X$. By Lemma \ref{lem: Folding 2-complexes}, this factors as
\[
R\to Z\looparrowright X
\]
with $R\to Z$ a $\pi_1$-surjection; in particular, we obtain a
$\pi_1$-surjection $Q_i\looparrowright Z$. The complex $Z$ retracts to a
subcomplex $Y\subseteq Z$ without free faces, and since $Q_i$ has no
free faces the immersion $Q_i\to X$ factors through the retraction
to $Y$. Now, $H=\pio{Y}$ is generated by $\pi(w)$ elements and is
not free of rank $\pi(w)$ since it is a quotient of $P_i$, so by
Lemma \ref{homologylemma}, $\chi(Y)\geq 2-\pi(w)$. Therefore, by
Lemma \ref{lem: Negative immersions}, either $Y$ reduces to a graph
or it factors through some immersion $Q_j\looparrowright X$. But the
immersion $Q_i\looparrowright X$ factors through the immersion
$Q_i\looparrowright Y$, so by Lemma~\ref{lem: nielsenreduces}, if $Y$
reduces to a graph then $Q_i$ does too, contradicting the definition
of a $w$--subgroup.
Therefore $Y\looparrowright X$ factors through some $Q_j$. It follows that
$\KK_i<\KK_j$ (where these are the $w$--subgroups of ${F}$
corresponding to $Q_i$ and $Q_j$ respectively) so, by the definition
of a $w$--subgroup, $i=j$ and $Q_i\to Q_j$ is an isomorphism.
Therefore, $R$ retracts to $Q_i$, so $\gamma$ was already
null-homotopic in $Q_i$. This proves the theorem.
\end{proof}
Using Remark \ref{rem: w-subgroups are one-ended}, we see that $\pi(w)$ is an invariant of the isomorphism type of the one-relator group $\GG$.
\begin{corollary}
If $w\in{F}$ is a word in a free group then $\pi(w)$ is the minimal rank of a non-free subgroup of the one-relator group $\GG={F}/\ncl{w}$.
\end{corollary}
\subsection{The case $\pi(w)=2$}
As explained in the introduction, the results of the previous section
show that, when $\pi(w)>2$, the subgroup structure of
$\GG={F}/\ncl{w}$ is like the subgroup structure of a
hyperbolic group. In this section, we examine the case $\pi(w)=2$, and
notice that the non-negatively curved behaviour of $\GG$ is
concentrated in a particular subgroup. This follows from the next
result, which shows that in this case there is a unique $w$--subgroup
of ${F}$.
\begin{proposition}
\label{prop: Maximal rank-two}
Let ${F}$ be a free group and
$w\in {F}$ an indivisible, imprimitive, non-trivial
element. If $H_1$ and $H_2$ are rank-two subgroups of
${F}$ with $w$ contained in, but not primitive in,
both $H_1$ and $H_2$, then $\langle H_1,H_2\rangle$ also
has rank two.
If $\pi(w)=2$ then there is a unique $w$-subgroup.
\end{proposition}
\begin{proof}
Since $w$ is indivisible and imprimitive in both $H_1$ and
$H_2$, Theorem \ref{introthm: main} applies to give
\[
1\leq
(\rk{H_1}-1)+(\rk{H_2}-1)-(\rk{\langle H_1,H_2\rangle}-1)~,
\]
and since $\rk{H_1}=\rk{H_2}=2$, it follows that
$\rk{\langle H_1,H_2\rangle}=2$ as required. Moreover, $w$ is imprimitive
in $\langle H_1,H_2\rangle$: an element which is primitive in a free
group is primitive in every subgroup containing it, so primitivity of
$w$ in $\langle H_1,H_2\rangle$ would contradict its imprimitivity in $H_1$.
Suppose that $\pi(w)=2$. Let $\mathcal{H}=\{H_i\}$ be the set of
rank-two subgroups of ${F}$ so that $w\in H_i$ and $w$ is not
primitive in $H_i$; $\mathcal{H}$ is finite by Lemma \ref{lem:
Finitely many w-subgroups}, and since $\pi(w)=2$, $\mathcal{H}$ is
non-empty. Considering the partial order on $\mathcal{H}$ given by
inclusion, the previous paragraph implies that each pair
$H_i,H_j\in\mathcal{H}$ has an upper bound
$\langle H_i,H_j\rangle\in\mathcal{H}$, and it follows that $\mathcal{H}$ has a unique maximal
element $K$, which is necessarily the unique $w$--subgroup.
\end{proof}
Therefore, in this case, we drop the unnecessary subscript $i$ and
write $P$ for the $w$--subgroup of $\GG$. In light of
Conjecture~\ref{conj: rel hyp} we make the following definition.
\begin{definition}\label{defn: NN associate}
If $\pi(w)=2$ then $P$ is the \emph{peripheral subgroup} of $\GG$.
\end{definition}
We do not currently know how to prove that $P$ is uniquely defined in
$\GG$ up to isomorphism. However, Lemma \ref{lem: Universal property
of w-subgroups} shows that if
$\GG\cong{F}/\ncl{w}\cong{F}'/\ncl{w'}$
are isomorphic then the corresponding peripheral subgroups $P$ and
$P'$ are conjugate into each other, which somewhat justifies the term
`peripheral'. If Conjecture \ref{conj: rel hyp} held then $P$
would be malnormal in $\GG$, and therefore would be a well-defined
isomorphism invariant.
\bibliographystyle{amsalpha}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
|
{
"timestamp": "2018-03-08T02:08:55",
"yymm": "1803",
"arxiv_id": "1803.02671",
"language": "en",
"url": "https://arxiv.org/abs/1803.02671"
}
|
\section{Introduction}
It is well accepted that more than 80\% of the matter content of the universe is in the form of invisible dark matter (DM) \cite{Ade:2015xua}.
Although its particle physics nature remains unknown, some properties can be inferred from experiments: for example, we know DM has to be (almost) electrically neutral, colorless and stable (at least on cosmological timescales). The search for DM proceeds primarily on three fronts: direct detection experiments look for the recoil of nuclei after interaction with DM~\cite{Akerib:2016vxi,Aprile:2017iyp,Cui:2017nnn}; indirect detection searches seek Standard Model (SM) particles resulting from DM annihilation~\cite{Adriani:2008zr,TheFermi-LAT:2017vmf,Aguilar:2013qda}; finally, collider experiments aim at producing DM from SM states~\cite{Basalaev:2017hni,Aaboud:2017phn,Sirunyan:2018gka}.
Of course, DM could just be the lightest state of a whole dark sector, consisting of several other particles, which may carry electric or color charges.
Particularly interesting is the case in which the DM is accompanied by a long-lived particle (LLP) which travels a measurable distance before decaying~\cite{Kaplan:2009ag,Baumgart:2009tn,Dienes:2012yz,Kim:2013ivd,Co:2015pka,Hochberg:2015vrg,Davoli:2017swj}. LLPs have also been studied in different contexts of physics beyond the Standard Model (BSM), e.g. supersymmetry or composite-Higgs models \cite{Wells:2003tf,ArkaniHamed:2004fb,Arvanitaki:2012ps,ArkaniHamed:2012gw,Cui:2014twa,Csaki:2015uza,Barnard:2015rba}.
The possibility that the dark sector consists of colored particles in addition to the dark matter has also attracted recent interest \cite{deSimone:2014pda, Baker:2015qna, ElHedri:2017nny, Garny:2017rxs}.
In the context of LHC searches for the dark matter, this scenario is particularly remarkable because the phenomenology benefits from enhanced QCD-driven production rates of the colored partners.
A great deal of phenomenological properties of colored dark sectors are somewhat model-independent, in the sense that they only depend on the representation of the colored partner under the SU(3)$_c$ gauge group \cite{deSimone:2014pda}.
However, as we will discuss below, important features for collider phenomenology are indeed reliant on the interaction between the dark sector and the SM.
In this paper, we consider the SM augmented by a dark sector consisting of a DM particle and a nearly-degenerate colored state, in the adjoint representation of SU(3)$_c$.
This dark sector communicates with the SM via
a dimension-5 effective operator (the validity of effective theories for DM searches has been widely discussed in the literature, see e.g. Refs.~\cite{Busoni:2013lha,Morgante:2014kra,Busoni:2014uca,Busoni:2014haa,Busoni:2014sya,Bell:2015sza,DeSimone:2016fbz,Bruggisser:2016nzw,Kahlhoefer:2017dnp}).
Such a scenario is particularly interesting because the colored partner could hadronize in bound states like ordinary quarks and gluons. In a supersymmetric context this is a well-known possibility, and such bound states, originally introduced in Ref.~\cite{Farrar:1978xj}, are called $R$-hadrons. We use here the same terminology, although our considerations do not assume any underlying supersymmetry. For more recent papers about $R$-hadrons, see e.g. Refs.~\cite{Aaboud:2016uth,Aaboud:2016dgf,CMS:2017rlw,Bond:2017wut,Garny:2018icg}.
In addition, since the decay of the colored partner is governed by a suppressed non-renormalizable operator, such a bound state can easily travel macroscopic distances and leave tracks in the collider detector.
The paper is organized as follows: in section~\ref{sec:model}, we introduce the model and discuss some of its features and implications; in section~\ref{sec:analysis}, we consider LHC constraints derived from monojet and $R$-hadron searches, focusing on the interplay between them. Finally, we conclude in section~\ref{sec:conclusion}.
\section{Chromo-electric dipole dark matter}
\label{sec:model}
\subsection{Model}
\label{subsec:model}
Dark matter, despite being neutral, can be coupled to colored Standard Model particles. In order to allow such a coupling, colored particles within the dark sector are required~\cite{Pierce:2007ut,Hamaguchi:2014pja,Ibarra:2015nca,Ellis:2015vaa,Liew:2016hqo,Mitridate:2017izz,Garny:2017rxs,DeLuca:2018mzn,Biondini:2018pwp}.
In this work, we consider an extension of the minimal scenario, where the DM particle $\chi_1$ is accompanied by a slightly heavier
partner $\chi_2$. We denote the masses of these particles by $m_1$ and $m_2\equiv m_1+\Delta m$, respectively. Both $\chi_1$ and $\chi_2$ are Majorana fermions.
At the renormalizable level, scalar or fermionic partners in the fundamental representation of SU(3) can be responsible for the coupling of DM with the SM quarks~\cite{Garny:2014waa,Giacchino:2015hvk,Garny:2017rxs}.
If, instead, we are to consider a coupling to gluons, the lowest dimensional operator has $D=5$ and involves a colored partner $\chi_2$ in the adjoint representation of SU(3).
The free Lagrangian for the dark sector is:
\begin{equation}
\mathcal{L}_0=\frac{1}{2}\bar{\chi}_1\left(i\slashed{\partial}-m_1\right)\chi_1+\frac{1}{2}\bar{\chi}_2^a\left(i\slashed D-m_2\right)\chi_2^a\,,
\end{equation}
with $a$ being the color index in the adjoint representation.
The coupling to gluons can be attained via effective operators mimicking the (chromo-) electric and (chromo-)magnetic dipole moments, as follows:
\begin{equation}
\mathcal{L}_\mathrm{int}=\frac{i}{2 m_1}\bar{\chi}_2^a\sigma^{\mu\nu}\left(\mu_\chi-id_\chi\gamma^5\right)\chi_1 G^a_{\mu\nu}\,,
\label{chromo_int}
\end{equation}
where $\sigma^{\mu\nu}=i/2[\gamma^\mu,\gamma^\nu]$ and $G_{\mu\nu}^a$ is canonically normalized.
The two operators in Eq.~\eqref{chromo_int} give rise to similar phenomenology, and no interference effect arises in any of the observables we study in this paper. Therefore, for simplicity, in the rest of the paper we restrict our study to the operator proportional to $d_\chi$.
Effective operators describing dipole moments typically arise after integrating out heavy particles of the underlying ultraviolet theory at loop-level. If this is the case, the operator in Eq.~\eqref{chromo_int} should be further suppressed by $\alpha_s/(4\pi)$, and so the importance of higher-order operators may not be negligible. A more complete theoretical analysis of the origin of the interaction in Eq.~\eqref{chromo_int} and of the role of higher-order operators is left to a future work.
The interactions of the dark sector with the SM particles are then described by the parameters $\{m_1,\Delta m,d_\chi\}$. In particular, we require that $d_\chi \ll 1$: this interaction term, in fact, could be written as an effective term suppressed by $1/\Lambda$, with $\Lambda$ being the scale of some underlying new physics. It is then natural to formally identify $d_\chi\sim m_1/\Lambda$, which has to be small in order for the effective theory to be reliable.
For the values of $d_\chi$ considered in our analysis, the energy scales of the processes of interest are always well below the operator scale $\Lambda$, thus ensuring we are in the regime of valid effective field theory.
The simplest process leading to the decay of $\chi_2$ is $\chi_2\rightarrow \chi_1 g$, whose width, at leading order in $\Delta m/m_1$, is:
\begin{equation}
\label{eq:x2decay}
\Gamma_{\chi_{2}}=\frac{d_\chi^2}{\pi}\frac{\Delta m^3}{m_1^2}.
\end{equation}
Since $d_\chi$ is required to be small, we would naturally expect $\chi_2$ to be a long-lived particle with lifetime on the detector timescale, as will be explored later.
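To make these scales concrete, the following minimal Python sketch evaluates eq.~\eqref{eq:x2decay} together with the corresponding proper decay length $c\tau=\hbar c/\Gamma_{\chi_2}$; the function names and the benchmark point are illustrative choices of ours rather than inputs of the analysis.
\begin{verbatim}
import math

HBARC_GEV_M = 1.973269804e-16   # hbar*c in GeV*m

def width_chi2(d_chi, m1, dm):
    """Leading-order width of chi_2 -> chi_1 g, eq. (eq:x2decay), in GeV."""
    return d_chi ** 2 / math.pi * dm ** 3 / m1 ** 2

def proper_decay_length(d_chi, m1, dm):
    """Proper decay length c*tau in metres."""
    return HBARC_GEV_M / width_chi2(d_chi, m1, dm)

# Illustrative benchmark: m1 = 1 TeV, Delta m = 20 GeV, d_chi = 1e-6
print(proper_decay_length(1e-6, 1000.0, 20.0))   # centimetre-scale decay length
\end{verbatim}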
\subsection{Relic density}\label{sec:relic}
A first constraint on the parameter space can be obtained by requiring that the model reproduces the observed dark matter abundance $\Omega h^2=0.1194$ \cite{Ade:2015xua}. Such a relic density is determined by processes of the form $\sigma(\chi_i\chi_j\rightarrow \text{SM}\,\text{SM})$. The expressions for the corresponding cross sections, at leading order in $\Delta m/m_1$ and $m_f/m_1$ (where $m_f$ is the mass of a generic SM fermion $f$), are shown in table~\ref{table_sigma_chromo}, although the complete expressions, which can be found in Appendix \ref{app:sigma}, have been used in the calculations. In order to determine the relic density predicted by this model, two modifications to the standard procedure have, in principle, to be taken into account: first, if the mass splitting between $\chi_1$ and $\chi_2$ is small compared to their masses, co-annihilations must be included~\cite{Griest:1990kh,Baker:2015qna}; second, due to the color charge of $\chi_2$, Sommerfeld enhancement (introduced below) modifies the value of $\sigma(\chi_2\chi_2\rightarrow \text{SM}\,\text{SM})$.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|}\hline
\diagbox[dir=NW,width=2.6cm, height=1.2cm]{\;\;\;\;\;\;$ij$}{$SM SM$} & $q\,\bar q$ & $g\,g$\\\hline\hline
& $\dfrac{}{}$& \\[-1em]
11 & & $\dfrac{2\,d_\chi^4}{\pi}\,\dfrac{1}{m_1^2}\,$ \\
& & \\[-1em]\hline
& & \\[-1em]
$12$ & $\dfrac{d_\chi^2\,g_s^2}{96\pi}\,\dfrac{1}{m_1^2}v^2$ & $\dfrac{3\,d_\chi^2\,g_s^2}{16\pi}\,\dfrac{1}{m_1^2}$ \\
& & \\[-1em]\hline
& & \\[-1em]
$22$ &$\dfrac{3\,g_s^4}{256\pi}\,\dfrac{1}{m_1^2}$ & $\dfrac{27\,g_s^4}{512\pi}\,\dfrac{1}{m_1^2}\,+\, \mathcal O(d_\chi^2)$
\\[1em]\hline
\end{tabular}
\end{center}
\caption{\label{table_sigma_chromo}Different contributions to the effective cross-section $\langle\sigma v\rangle_{\chi_i\chi_j\to SM SM}$. The QCD coupling is denoted by $g_s$, while $v$ is the relative velocity in the $\chi_i\chi_j$ center-of-mass frame.}
\end{table}
As far as the co-annihilations are concerned, the effective cross-section which determines the observed abundance of DM in the universe is:
\begin{equation}
{\langle\sigma v\rangle}_\mathrm{eff}=\frac{1}{{\left(1+\alpha\right)}^2}\left({\langle\sigma v\rangle}_{11}+2\alpha{\langle\sigma v\rangle}_{12}+\alpha^2{\langle\sigma v\rangle}_{22}\right)\,,
\label{sigma_eff}
\end{equation}
where $\alpha\equiv (g_2/g_1){(1+\Delta m/m_1)}^{3/2}e^{-x\Delta m/m_1}$, $x\equiv m_1/T$, ${\langle\sigma v\rangle}_{ij}\equiv{\langle\sigma v\rangle}_{\chi_i\chi_j\to SM\,SM}$ and $g_i$ is the number of degrees of freedom of $\chi_i$.
The relic abundance is then related to this effective cross-section as:
\begin{equation}
\Omega h^2=\frac{0.03}{\displaystyle\int_{x_F}^\infty dx\,\frac{\sqrt{g_*}\,}{x^2}\,\frac{{\langle\sigma v\rangle}_\mathrm{eff}}{\SI{1}{\pico\barn}}}\,,
\label{relic_abundance}
\end{equation}
where $g_*$ is the number of relativistic degrees of freedom at the freeze-out temperature $T_F$, determined by the implicit equation:
\begin{equation}
x_F=25+\log\left[\frac{1.67\,n_F}{\sqrt{g_*x_F}}\,\frac{m_1}{\SI{100}{\giga\electronvolt}}\,\frac{{\langle\sigma v\rangle}_\mathrm{eff}}{\SI{1}{\pico\barn}}\right]\,,
\end{equation}
with $n_F= g_1(1+\alpha)$ being the effective number of degrees of freedom of the system $(\chi_1,\chi_2)$.
In the following, we take $g_*=106.75$ as a reference.
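A self-contained numerical sketch of this procedure is given below (in Python): it evaluates eq.~\eqref{sigma_eff}, solves the implicit equation for $x_F$ by fixed-point iteration, and integrates eq.~\eqref{relic_abundance} with a simple trapezoidal rule. The internal degrees of freedom $g_1=2$ and $g_2=16$ (a Majorana singlet and a Majorana color octet) and the commented benchmark inputs are our own illustrative assumptions.
\begin{verbatim}
import math

GSTAR = 106.75        # relativistic dof at freeze-out (reference value)
PB_GEV2 = 2.568e-9    # 1 picobarn in GeV^-2
G1, G2 = 2.0, 16.0    # assumed internal dof of chi_1 and chi_2

def alpha(x, dm_over_m1):
    """The ratio alpha defined below eq. (sigma_eff)."""
    return (G2 / G1) * (1.0 + dm_over_m1) ** 1.5 * math.exp(-x * dm_over_m1)

def sigma_eff(x, s11, s12, s22, dm_over_m1):
    """Effective cross-section of eq. (sigma_eff); inputs s_ij in GeV^-2."""
    a = alpha(x, dm_over_m1)
    return (s11 + 2.0 * a * s12 + a * a * s22) / (1.0 + a) ** 2

def x_freeze_out(m1, s11, s12, s22, dm_over_m1, n_iter=20):
    """Fixed-point iteration of the implicit equation for x_F."""
    xf = 25.0
    for _ in range(n_iter):
        nf = G1 * (1.0 + alpha(xf, dm_over_m1))
        sv_pb = sigma_eff(xf, s11, s12, s22, dm_over_m1) / PB_GEV2
        xf = 25.0 + math.log(1.67 * nf / math.sqrt(GSTAR * xf)
                             * (m1 / 100.0) * sv_pb)
    return xf

def omega_h2(m1, s11, s12, s22, dm_over_m1, steps=2000, x_max=1000.0):
    """Relic abundance from eq. (relic_abundance), trapezoidal integration."""
    xf = x_freeze_out(m1, s11, s12, s22, dm_over_m1)
    xs = [xf + i * (x_max - xf) / steps for i in range(steps + 1)]
    f = [math.sqrt(GSTAR) / x ** 2
         * sigma_eff(x, s11, s12, s22, dm_over_m1) / PB_GEV2 for x in xs]
    integral = sum(0.5 * (f[i] + f[i + 1]) * (xs[i + 1] - xs[i])
                   for i in range(steps))
    return 0.03 / integral

# Illustrative call (benchmark numbers only, masses in GeV, sigma in GeV^-2):
# print(omega_h2(m1=1000.0, s11=0.0, s12=1e-10, s22=5e-9, dm_over_m1=0.03))
\end{verbatim}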
As already mentioned, Sommerfeld enhancement also plays an important role in the determination of the relic abundance~\cite{ANDP:ANDP19314030302,Feng:2010zp,ElHedri:2016onc}: it is a non-perturbative effect due to the exchange of soft gluons between the colored particles in the initial state, and is therefore relevant for the self-annihilation of $\chi_2$. Model-independent discussions of this effect can be found in Refs.~\cite{deSimone:2014pda,ElHedri:2017nny}. These analyses assume that the relic DM density is dominated by QCD processes, remaining agnostic about the particular phenomenology deriving from the new BSM coupling. This is a reasonable assumption for the model we consider since, as already stated, it is natural (and indeed necessary) to assume that $d_\chi\ll1$. Co-annihilations in which the DM annihilation cross-section contributes negligibly to the relic density have recently been analyzed in Ref.~\cite{DAgnolo:2018wcn} in the more general context of \emph{sterile co-annihilations}.
When the final state is characterized by a single representation $Q$, the Sommerfeld-corrected cross-section is $\sigma_\text{Somm}=S\left(C_Q \alpha_s/\beta\right)\sigma_\text{Pert}$, where $S$ is the non-perturbative correction depending on the final representation (through the Casimir element $C_Q$) and on the velocity of the particles $\beta$. If, on the other hand, we have more than one possible final state representation, we need to consider the decomposition $R\otimes R'=\oplus_Q Q$, where $R$ and $R'$ are the initial state representations (in our case $R=R'=8$) and $Q$'s are the final state ones. Each representation $Q$ gives a contribution to the total cross-section and has its own value of $C_Q$. After group decomposition, the final result is given by eqs.~(2.24, 2.25) of Ref.~\cite{deSimone:2014pda}.
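For orientation, the sketch below implements the familiar $s$-wave Coulomb form of the correction, $S(\zeta)=\pi\zeta/(1-e^{-\pi\zeta})$, which enhances attractive channels ($\zeta>0$) and suppresses repulsive ones ($\zeta<0$); we stress that the sign convention for the argument and the group-theoretical weighting of the channels in $8\otimes8$ are those of Ref.~\cite{deSimone:2014pda} and are not reproduced here, so this snippet is only a schematic single-channel illustration.
\begin{verbatim}
import math

def sommerfeld_factor(zeta):
    """s-wave Coulomb factor S(zeta) = pi*zeta / (1 - exp(-pi*zeta)).

    zeta > 0: attractive channel (enhancement);
    zeta < 0: repulsive channel (suppression); S -> 1 as zeta -> 0.
    """
    if zeta == 0.0:
        return 1.0
    x = math.pi * zeta
    return x / (1.0 - math.exp(-x))

def sigma_somm(sigma_pert, coeff, alpha_s, beta):
    """Single-channel corrected cross-section S(coeff*alpha_s/beta)*sigma_pert."""
    return sommerfeld_factor(coeff * alpha_s / beta) * sigma_pert
\end{verbatim}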
As a result, the contour yielding the correct relic density, with and without the inclusion of such a non-perturbative effect, can be found in fig.~\ref{relic:chromo}. In this plot, we only consider the dominant contributions from QCD self-annihilations, not including the sub-leading contributions of processes proportional to $d_\chi$, which will be negligible if $d_\chi \ll 1$. This is actually well motivated from the previous discussion about the magnitude of $d_\chi$.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{ChromoRelic3.pdf}
\caption{\label{relic:chromo}
Contours corresponding to the measured relic abundance $\Omega h^2=0.1194\pm
0.0022\, (1\sigma)$, together with its 3-$\sigma$ bands, in the case of domination by QCD processes. The perturbative result and the result including Sommerfeld enhancement are both shown. Note that part of the parameter space is already excluded by LHC searches, as explained in section~\ref{sec:comparison} and shown in fig.~\ref{fig:chromo_comparison}.}
\end{figure}
While the relic density is dominated by QCD processes, we see from eq.~\eqref{eq:x2decay} that the decay width of $\chi_2$ is proportional to $d_\chi^2$, so that its decay length scales as $1/d_\chi^2$.
Therefore the smallness of $d_\chi$ leads to macroscopic decay lengths, which are an interesting feature we will use in our analysis.
From here on, we fix the mass splitting $\Delta m$ as a function of the mass of the DM candidate $m_1$,
using the Sommerfeld corrected curve in fig.~\ref{relic:chromo}.
This imposes the correct relic density for all points in parameter space that we consider. As a consequence, the decay length now only depends on the mass of $\chi_1$ and the coupling $d_\chi$. The full numerical results for the decay length can be found in fig.~\ref{fig:chromo_decay}, where we show contours of the proper decay length of $\chi_2$.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{ChromoLengths.pdf}
\caption{Proper decay lengths for the heavy partner $\chi_2$ in a parameter space where $\Gamma_{\chi_2}^{-1}$ is macroscopic (in the $\SI{}{\centi\meter}-\SI{}{\meter}$ range). The mass splitting $\Delta m$ is fixed, for given $m_1$, by the relic density as shown in fig.~\ref{relic:chromo}. Small values of $d_\chi$ and large values of $m_1$ give rise to larger decay lengths.}\label{fig:chromo_decay}
\end{figure}
\subsection{Departure from chemical equilibrium}
The co-annihilation paradigm, described in section~\ref{sec:relic}, implicitly assumes that chemical equilibrium is maintained until the DM freeze-out.
Under particular conditions, however, this assumption may fail~\cite{Garny:2017rxs}. This can happen, for instance, when the relic abundance is dominated by a SM (here, QCD) coupling: in this case the coupling characterizing the BSM physics ($d_\chi$ in the case at hand) remains unconstrained by the relic density, so very small values of this coupling are in principle allowed, leading to a possible breakdown of chemical equilibrium.
The important ratios to evaluate are $\Gamma_{\chi_i\chi_j}/H$, where $\Gamma_{\chi_i\chi_j}$ generically represents the rate of a process involving $\chi_i$ and $\chi_j$: it can be the scattering $\chi_2\,\chi_2\to\chi_1\,\chi_1$, the decay $\chi_2\to\chi_1\,g$, the conversions $\chi_2\,g\to\chi_1\,g$ and $\chi_2\,q\to\chi_1\,q$, as well as all the inverse reactions.
The rates of decay and conversion are proportional to $d_\chi^2$, while that of the scattering is proportional to $d_\chi^4$; the latter process is therefore expected to be the most sensitive to $d_\chi$ and to have the smallest rate.
When the largest of these rates $\Gamma_{\chi_i\chi_j}^{\mathrm{(max)}}$ is such that $\Gamma_{\chi_i\chi_j}^{\mathrm{(max)}}/H\lesssim1$, the assumption of chemical equilibrium (which eq.~\eqref{sigma_eff} relies on) ceases to be valid. If this is the case, a numerical integration of the complete set of Boltzmann equations, including conversions, is necessary. The ratios $\Gamma/H$ for these three processes (in the direction $\chi_2\to\chi_1$) are shown in fig.~\ref{fig:chromo_chem}.
Since the rate corresponding to $\langle\sigma v\rangle_{\chi_2\,g\leftrightarrow\chi_1\,g}\propto g_s^2\,d_\chi^2$ turns out to be the dominant contribution, scatterings with gluons are ultimately responsible for maintaining chemical equilibrium.
In order to test the possible breakdown of chemical equilibrium before and during freeze-out (the regime in which eq.~\eqref{sigma_eff} is valid), the ratio $\Gamma_{\chi_2\,\chi_1}/H\equiv H^{-1}\,n\langle\sigma v\rangle_{\chi_2\,g\leftrightarrow\chi_1\,g}$ can be investigated in the region $20\lesssim x\lesssim30$. From fig.~\ref{fig:chromo_chem}, we see that in this region $\Gamma_{\chi_2\,\chi_1}/H\sim10^{4}$ for $d_\chi=10^{-6}$. Since this rate scales as $d_\chi^2$, we can estimate the breakdown of chemical equilibrium to occur when:
\begin{equation}
\frac{\Gamma_{\chi_2\,\chi_1}}{H}\lesssim1\quad\Leftrightarrow\quad d_\chi\lesssim10^{-8}\,.
\end{equation}
This simple scaling argument is actually in agreement with the explicit result shown in fig.~\ref{fig:chromo_chem}.
\begin{figure}
\centering\includegraphics[width=0.9\textwidth]{ChromoChem.pdf}
\caption{Interaction rates for the case $m_1=\SI{1}{\tera\electronvolt}$ and $d_\chi=\SI{e-6}{}$. Different choices for the mass of the DM give similar results, and the scattering with gluons always turns out to be the relevant contribution for the determination of departure from chemical equilibrium.}\label{fig:chromo_chem}
\end{figure}
In the following, we therefore assume $d_\chi\gtrsim10^{-8}$, in order to be in the regime of chemical equilibrium.
\section{LHC searches}\label{sec:analysis}
In this section we analyze the constraints on the model coming from the two most important channels: $R$-hadrons and monojet.
In principle, it would also be possible to have limits from dijet-resonance bounds coming from the production and fragmentation of a bound state of two $\chi_2$ particles, similar to a gluinonium. Since this results in rather weak constraints, we have described it in Appendix~\ref{app:dijet}.
\subsection{$R$-hadron constraints}\label{sec:rhadrons}
The color charge of the $\chi_2$ particle implies that it can hadronize with SM particles on the detector timescale, forming particles analogous to the $R$-hadrons in supersymmetry. If stable on a detector timescale, these colorless composite states can be detected via an ionization signature as they travel through the detector at speeds significantly less than the speed of light.
We apply ATLAS constraints on the $\chi_2$ production cross-section from Ref.~\cite{Aaboud:2016uth}, which searches for $R$-hadrons at $\sqrt{s} = 13$ TeV with 3.2 fb$^{-1}$ of data. The relevant constraints are those on gluinos, since $\chi_2$ is a color octet.
We also consider an approximate high-luminosity (HL) projection of these limits to $\mathcal{L}=\SI{3000}{\per\femto\barn}$, using the procedure outlined in Ref.~\cite{Barnard:2015rba}, applied to the ATLAS analysis. The relevant results are the background counts in Table 3 of Ref.~\cite{Aaboud:2016uth} for the gluino search, which we rescale with the increased luminosity. We assume that the same efficiencies of Table 3 apply to the HL bounds. It should be noted that in the HL regime the
results are limited by systematics rather than statistics.
The signal simulations are the same for the current luminosity and for higher luminosities.
In order to simulate the pair production of $\chi_2$ particles at parton level we have used \textsc{Madgraph5\_aMC@NLO}~\cite{Alwall:2014hca}, where the model has been implemented using \textsc{FeynRules} \cite{Alloul:2013bka}, and apply the $R$-hadronisation routine from \textsc{Pythia~8.230}~\cite{Sjostrand:2007gs}. The probability of each $\chi_2$ being stable at least up to the edge of the ATLAS calorimeter is given by
\begin{equation}
\mathcal{P}(\ell > \ell_{\rm calo}) = \exp\left(-\frac{\ell_{\rm calo}}{\ell_T}\right),
\end{equation}
where $\ell_{\rm calo}=\SI{3.6}{\meter}$ is the transverse distance to the edge of the calorimeter and we defined $\ell_T=p_2^T/(m_2\,\Gamma_{\chi_2})$. This probability is applied on an event-by-event basis to find the effective cross-section of events yielding at least one $R$-hadron. This relies on the assumption that the lifetime of the resultant $R$-hadron is at least as long as the unhadronized $\chi_2$ lifetime. Following Ref.~\cite{Aaboud:2016uth}, we assume that 90\% of the $\chi_2$ form charged $R$-hadrons.
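A minimal sketch of this event-by-event reweighting is given below (all numerical inputs are placeholders, not our simulated spectra or the actual $\chi_2$ width); the complementary weight $1-\exp(-\ell_{\rm beam}/\ell_T)$ is used in section~\ref{sec:monojet_general} to isolate promptly decaying $\chi_2$'s.
\begin{verbatim}
# Sketch of the per-event survival weighting (placeholder inputs, not the
# actual analysis): each chi_2 is "calorimeter stable" with probability
# exp(-l_calo / l_T), with l_T = (pT/m2) * hbar*c / Gamma.
import numpy as np

HBARC = 1.973e-16     # GeV * m, converts a width in GeV into a length
l_calo = 3.6          # m, transverse distance to the edge of the calorimeter

def decay_length(pT, m2, Gamma):
    """Transverse decay length in metres."""
    return (pT / m2) * HBARC / Gamma

rng = np.random.default_rng(0)
pT = rng.exponential(scale=300.0, size=(10_000, 2)) + 50.0  # toy pT of the two chi_2 (GeV)
m2, Gamma = 1100.0, 2e-17                                   # GeV, placeholder mass and width

w = np.exp(-l_calo / decay_length(pT, m2, Gamma))           # P(reaching the calorimeter)
frac_stable = np.mean(1.0 - np.prod(1.0 - w, axis=1))       # >= 1 calorimeter-stable chi_2
print(frac_stable)   # multiplies the production cross-section
\end{verbatim}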
\begin{figure}[t]
\centering
\includegraphics[width = 0.525\linewidth]{contours}
\includegraphics[width = 0.375\linewidth]{d_limit}
\caption{\label{fig:r_hadrons}\emph{Left:} Contours showing the value of the coupling $d_\chi$ yielding a given production cross-section for calorimeter-stable $\chi_2$, overlayed with the region currently excluded by ATLAS and a high-luminosity projection of exclusions. \emph{Right:} Lower limit on $d_\chi$ from current (solid) and projected high-luminosity (dashed) $R$-hadron constraints as a function of the $\chi_2$ mass. Smaller values of $d_\chi$ will increase the $\chi_2$ decay length, exceeding the limit on the production cross-section of calorimeter-stable $\chi_2$. Note that $m_2$ is related to the DM mass by the values of $\Delta m$ given in fig.~\ref{relic:chromo}.}
\end{figure}
Contours showing the relationship between this effective production cross-section, $m_2$ and $d_\chi$ are shown in Fig.~\ref{fig:r_hadrons}, along with current and projected future ATLAS limits on the cross-section.
\subsection{Monojet }\label{sec:monojet_general}
A generic particle physics model for DM is usually sensitive to so-called `monojet' searches, where DM produced in a collider recoils from a high-energy jet, leaving a large missing energy ($E_T^{\rm miss}$) signature as it passes through the detector without interacting~\cite{Aaboud:2016tnv,Vannerom:2016pxv,Aaboud:2017phn,Sirunyan:2017qaj}.
For the chromo-electric model, the production processes leading to the monojet signature are of the form $p\,p\,\rightarrow\,\chi_i\,\chi_l\, j$ with $i, l \in \{1,2\}$. Since $d_\chi\ll 1$, the leading contribution comes from the QCD-mediated production channel $p\,p\,\rightarrow\,\chi_2\,\chi_2\, j$, as all other terms are proportional to powers of $d_\chi$. In this regime, the relic density profile shown in fig.~\ref{relic:chromo} applies.
We apply the latest monojet constraints from ATLAS~\cite{Aaboud:2017phn}, which searches for events with large missing energy and at least one high-energy jet, with center-of-mass energy of $\SI{13}{\tera\electronvolt}$ and integrated luminosity of $\SI{36.1}{\per\femto\barn}$.
Events are required to satisfy the conditions $E_T^{\rm miss}>\SI{250}{\giga\electronvolt}$, leading-$p_T>\SI{250}{\giga\electronvolt}$ and also
$|\eta|_{\mathrm{leading-jet }}<2.4$. In addition, a maximum of four jets with $p_T>\SI{30}{\giga\electronvolt}$ and $|\eta|<2.8$ are allowed, and the condition $\Delta\phi(\mathrm{jet},\boldsymbol p_T^{\rm{miss}})>0.4$ must be satisfied for each selected jet.
The analysis then uses ten different signal regions, which differ from each other by the choice of cut on $E_T^{\rm miss}$: in particular, the weakest one is denoted (for the inclusive analysis) by IM1, and requires $E_T^{\rm miss}>\SI{250}{\giga\electronvolt}$; while IM10 requires $E_T^{\rm miss}>\SI{1000}{\giga\electronvolt}$.
We simulate events at parton level using \textsc{Madgraph5\_aMC@NLO}~\cite{Alwall:2014hca}, then apply the same cuts as Ref.~\cite{Aaboud:2017phn}.
In models where a colored partner is produced at the LHC, monojet constraints will only apply if the colored partner decays promptly, i.e. within the beamline radius, $\ell_{\rm beam} = 2.5$ cm. Otherwise, if it enters the detector material, it will form an $R$-hadron within a very short timescale, roughly $\Lambda_{\mathrm{QCD}}^{-1}\sim\SI{e-24}{\second}$. We take this into account by considering the probability for each particle $\chi_2$ to decay with transverse decay length $\ell_T$ less than $\ell_{\rm beam}$ \cite{ElHedri:2017nny}:
\begin{equation}
\mathcal{P}(\ell_T<\ell_\text{beam})=1 - \exp\left(-\dfrac{\ell_\text{beam}}{\ell_T}\right)\, ,
\end{equation}
where $\ell_T=p_2^T(i)/(m_2\,\Gamma_{\chi_2})$ is the transverse distance traveled by $\chi_2$ in event $i$. Each event is weighted by this probability in order to find the effective cross-section for events in which $\chi_2$ decays promptly, before forming an $R$-hadron. We assume here that all colored particles that reach the detector material hadronize.
In order to obtain a limit on the number $N_\text{NP}$ of new physics events, for both current and future luminosities, we apply a $\chi^2$ analysis at $95\%$ CL, assuming unit efficiency and acceptance, according to~\cite{deSimone:2014pda}:
\begin{equation}
\chi^2=\frac{\left[N_\text{obs}-(N_\text{SM}+N_\text{NP})\right]^2}{N_\text{NP}+N_\text{SM}+\sigma_\text{SM}^2}\,,
\end{equation}
where the error on the SM background is assumed to be normally distributed.
To find the strongest constraint, we consider the different signal regions from Ref.~\cite{Aaboud:2017phn}, differing by the cut on $E_T^{\rm miss}$. For a given value of $m_1$, we use the ratio between our simulated cross section and the bound from the ATLAS paper and find that the strongest bound comes from IM9 (which requires $E_T^{\rm miss}>\SI{900}{\giga\electronvolt}$) as can be seen in fig.~\ref{signal_regions}. It should be noted that changing the mass varies both the value of the cross section and the kinematic distribution of the particles, so that the results from fig.~\ref{signal_regions} cannot be trivially recast into a bound on the mass.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\textwidth]{ChromoBinning.pdf}
\end{center}
\caption{\label{signal_regions} Ratio between the cross section from our model and the bound from Ref.~\cite{Aaboud:2017phn} in the case of $m_1=\SI{860}{\giga\electronvolt}$ as a function of the inclusive regions.}
\end{figure}
For our optimal bin, the number of events in this signal region is:
\begin{equation}
N_\text{SM} = 464\pm34\quad,\quad N_\text{obs}=468\,.
\end{equation}
Then the cross section of new physics (NP) has to satisfy the constraint $\sigma_{NP}<\SI{2.3}{\femto\barn}$ for $\mathcal L=\SI{36.1}{\per\femto\barn}$.
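This number can be cross-checked with a short calculation (a sketch assuming a one-parameter 95\% CL threshold $\chi^2=3.84$ and identifying the quoted $\pm34$ with the background uncertainty entering the denominator):
\begin{verbatim}
# Cross-check of the quoted bound: solve chi^2(N_NP) = 3.84 for the IM9 numbers.
import numpy as np

N_obs, N_SM, dN_SM = 468.0, 464.0, 34.0
lumi = 36.1          # fb^-1
chi2_95 = 3.84       # 95% CL, one parameter

# (N_obs - N_SM - N)^2 = chi2_95 * (N + N_SM + dN_SM^2)  ->  quadratic in N
delta, B = N_obs - N_SM, N_SM + dN_SM**2
b, c = -(2 * delta + chi2_95), delta**2 - chi2_95 * B
N_NP_95 = (-b + np.sqrt(b**2 - 4 * c)) / 2.0    # larger root

print(N_NP_95, N_NP_95 / lumi)   # ~85 events  ->  sigma_NP ~ 2.3 fb
\end{verbatim}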
Using this value and the procedure outlined earlier in this section, we find a lower bound on the mass of the DM of $\SI{860}{\giga\electronvolt}$ for $d_\chi \gtrsim 3\times 10^{-7}$. Full results are shown as the blue lines in Fig.~\ref{fig:chromo_comparison}. For smaller values of $d_\chi$, $\chi_2$ begins to travel into the detector and form $R$-hadrons before decaying, as discussed earlier in this section.
We extrapolate the monojet bound from Ref.~\cite{Aaboud:2017phn} to higher luminosity by considering the statistical and systematic uncertainties separately. The relative statistical error scales with the inverse square root of the number of events (and hence of the luminosity); on the other hand, it is generally not straightforward to predict how the relative systematic uncertainty will evolve with the luminosity. For this reason, we parametrize it in general as $\delta^{\mathrm{sys}}(\mathcal L_2)\equiv r\,\delta^{\mathrm{sys}}(\mathcal L_1)$.
Using the published upper bound on the cross-section of new physics, $\sigma_{NP}$, at a luminosity $\mathcal L_1$, we can then estimate the corresponding upper bound on $\sigma_{NP}$ at a different luminosity $\mathcal L_2$ as:
\begin{equation}
\sigma_{NP}^{(\mathcal L_2)}(r)\leq\sigma_{NP}^{(\mathcal L_1)}\,\sqrt{r^2+\left(\frac{\mathcal L_1}{\mathcal L_2}-r^2\right)\frac{N_1}{\delta N_1^2}}\,.
\label{projection_sigma}
\end{equation}
We carry out this HL projection to 3000 fb$^{-1}$, in an optimistic scenario where systematic uncertainties have been cut to half the current values. In this case, we find a limit on $m_1$ from monojet of $\SI{1020}{\giga\electronvolt}$. Results are shown as the dashed blue curve in Fig.~\ref{fig:chromo_comparison}.
For completeness, we also considered the extreme cases in which the systematics remain unchanged with respect to their current value or become completely negligible, obtaining the bounds $m_1>\SI{900}{\giga\electronvolt}$ and $m_1>\SI{1250}{\giga\electronvolt}$, respectively.
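For illustration, eq.~\eqref{projection_sigma} can be evaluated directly for these three systematics scenarios (a sketch; identifying $N_1$ and $\delta N_1$ with the IM9 numbers quoted earlier is our assumption):
\begin{verbatim}
# Evaluate the luminosity extrapolation for r = 1 (unchanged systematics),
# r = 0.5 (halved) and r = 0 (negligible).
import numpy as np

sigma_L1 = 2.3          # fb, current 95% CL bound on sigma_NP
L1, L2 = 36.1, 3000.0   # fb^-1
N1, dN1 = 464.0, 34.0   # IM9 background and its uncertainty (assumed)

def sigma_L2(r):
    return sigma_L1 * np.sqrt(r**2 + (L1 / L2 - r**2) * N1 / dN1**2)

for r in (1.0, 0.5, 0.0):
    print(r, round(sigma_L2(r), 2), "fb")
\end{verbatim}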
\subsection{Comparison between different searches}\label{sec:comparison}
\begin{figure}
\centering\includegraphics[width=0.8\textwidth]{ChromoComp.pdf}
\caption{Current (solid) and foreseen future (dashed) status of the parameter space as excluded by monojet and $R$-hadrons searches, in blue and orange, respectively. The mass splitting $\Delta m$ is fixed, for given $m_1$, by the relic density as shown in fig.~\ref{relic:chromo}. The assumptions made regarding the scaling of the error at high luminosity can be found in the text.}\label{fig:chromo_comparison}
\end{figure}
The chromo-electric dipole dark matter model has been analyzed in the light of different LHC signals, namely monojet and $R$-hadrons.
Since the different LHC signals are most effective in different regions of parameter space, it is important to understand the interplay between them. A first noteworthy feature is that the monojet analysis is insensitive to the value of the BSM coupling $d_\chi$ in most of the parameter space, but for sufficiently small couplings this search becomes ineffective because the colored partners form $R$-hadrons rather than decaying to the DM. Since the observable objects in this region of parameter space are the $R$-hadrons, the search for these states becomes the one giving the most stringent bound, as can be seen from fig.~\ref{fig:chromo_comparison}, where the results of the previous sections are summarized. Note that the $R$-hadron results are shown here in terms of $m_1$, since the relic density fixes $\Delta m$ for given $m_1$ and $d_\chi$.
The complementarity of the searches emerges from the fact that for $d_\chi\gtrsim\SI{3e-7}{}$ the most stringent bound is given by monojet searches, while for $d_\chi\lesssim\SI{3e-7}{}$ the $R$-hadron search gives the best result.
Furthermore, the different analyses are affected by different errors, meaning that increased luminosity has a distinct effect on each of them. This suggests that a high-luminosity projection might tell us which of these searches will become more interesting in the future. This as well is shown in fig.~\ref{fig:chromo_comparison}, where the role of higher luminosity in probing the parameter space is manifest.
As a side remark, we also checked the indirect detection limits by applying bounds on the self-annihilation rate derived from cosmic-antiproton fluxes~\cite{Cuoco:2017iax} to our model.
An upper bound on $d_\chi$, as a function of $m_1$, is obtained from the upper bound on the annihilation cross section $\sigma(\chi_1\chi_1\rightarrow g g)$. The bounds were found to be very weak compared to those from collider searches, and in a region where the requirement $d_\chi \ll 1$ was not satisfied, e.g. the upper limit on $d_\chi$ was found to be $d_\chi\leq0.2$ for $m_1=\SI{1}{\tera\electronvolt}$ and $d_\chi\leq1$ for $m_1=\SI{5}{\tera\electronvolt}$.
\section{Conclusions}
\label{sec:conclusion}
In this paper, we have explored a remarkable possibility for DM phenomenology at the LHC: the combination of monojet and $R$-hadron searches. We performed our analysis using a simple effective operator of dark matter, as a case study giving rise to such a situation.
Since the cosmological abundance is dominated by QCD interactions, the coupling of the effective operator $d_\chi$ is not fixed by the relic density requirement, but it remains a free parameter. The only assumption we make is $d_\chi\ll1$, in order for the effective theory to be reliable.
If $d_\chi$ is small enough, the chemical equilibrium can break down before the dark matter freeze-out. We analyzed such a situation and concluded that for the parameter space of interest for LHC searches ($d_\chi\gtrsim10^{-8}$) there is no need to take into account the breakdown of chemical equilibrium.
Our main analysis consisted of the combination of monojet and $R$-hadron searches, from which we found the regions of the $(m_1,d_\chi)$ parameter space excluded by current searches (see Figure \ref{fig:chromo_comparison}): while current monojet searches are able to exclude all points of the parameter space with $m_1\lesssim\SI{900}{\giga\electronvolt}$ for $d_\chi\gtrsim3\times10^{-7}$, current $R$-hadron results are instead able to constrain the parameter space at larger masses but smaller couplings. This complementarity is maintained in the higher-luminosity projections.
These results show once more the importance of finding complementary phenomenological signatures and the power of their combination in strengthening the reach of LHC searches for dark matter.
\acknowledgments
We would like to thank M.~Rinaldi, R.~Rattazzi and F.~Riva for insightful discussions.
\section{Introduction}
\label{Introduction}
The strong interaction, described by QCD in the framework of the Standard Model, still hides many mysteries, especially in the low-energy limit, the so-called non-perturbative regime.
Particularly interesting is the strong interaction involving the strange quark, which plays a peculiar role: it belongs to the light quarks, yet its mass of about $100\, MeV/c^2$ is much heavier than the few-$MeV/c^2$ up and down quark masses.
Since kaons and antikaons are the lowest-mass particles containing strange quarks, their low-energy interaction with nucleons and nuclei has for decades been the subject of intensive experimental and theoretical studies (for reviews see \cite{intro1,intro2}).
Effective field theories contain the appropriate degrees of freedom to describe physical phenomena occurring at the nucleon-meson scale, and chiral perturbation theory has been extremely successful in describing systems like pionic atoms; however, it is not directly applicable to kaonic systems, where non-perturbative coupled-channel techniques can be used (\cite{coupled}). These theories still await verification by experimental data.
There are two experimental approaches to probe the kaon-nucleus strong interaction,
both exploited at LNF-INFN.
One is by studying the scattering and the reaction channels between the kaon and the nucleus, to directly
search for bound states and extract the potential of the strong interaction;
this is the experimental method followed by the AMADEUS collaboration (\cite{amad,had20,had21}).
The other method consists in the precision X-ray measurement of the shift and the broadening of the
energy levels of kaonic atoms caused by the kaon-nucleus strong interaction.
This latter one, exploited by the SIDDHARTA and SIDDHARTA-2 collaborations (\cite{sid1,sid2,sid3,sid4,sid5,sid6,sid7}),
is of significant importance, since it is the only method able to provide direct information on the
kaon-nucleus system at threshold.
\subsection{Kaonic atoms}
\label{Introduction-2}
Kaonic atoms are formed when a negatively charged kaon is stopped in a target and replaces one of the electrons in an atom; because the $K^-$ mass is much larger than the $e^-$ one, the radius of this newly formed exotic atom is smaller than that of the ordinary atom, and the $K^-$ is captured at a distance from the nucleus corresponding to the $n\simeq25$ excited state.
The subsequent cascade of the antikaon to the ground level occurs via several different processes; in particular, the last transitions to the 1s level are radiative, with photons emitted in the X-ray region.
For light atoms, especially hydrogen and deuterium, a detectable energy
shift from the electromagnetic value of the ground state is expected, as well as
a broadening of the ground state level, caused by nuclear absorption (see fig.\ref{levels}).
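A back-of-the-envelope estimate of this capture level (our own illustration, not taken from the cited references) follows from requiring the kaonic orbit to have roughly the same Bohr radius as that of the displaced electron; with the $K^-p$ reduced mass $\mu\simeq324\, MeV/c^2$,
$$n \simeq \sqrt{\mu/m_e} \simeq \sqrt{324/0.511} \simeq 25\,.$$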
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{levels}
\caption{After the atomic capture of the kaon, a kaonic atom is formed in a highly excited state, and a fraction of the kaons cascades down to the ground state. The 1s level is shifted and broadened by the strong interaction (\cite{kmass}).}
\label{levels}
\end{figure}
By measuring these observables, kaonic atoms offer the unique possibility to determine the s-wave antikaon-nucleon scattering lengths at vanishing energy, in contrast to scattering experiments, where an extrapolation to zero energy is necessary.
With the advance of experimental techniques, both in the accelerator and detector sectors, very high precision measurements are now possible, leading to a deeper and more complete understanding of the many open questions in QCD.
In particular, with the advent of clean kaon beams, such as the one provided by the DA$\Phi$NE collider, and of fast, high-performance X-ray detectors such as the Silicon Drift Detectors, kaonic atom studies have entered the precision era.
\section{SIDDHARTA experiment}
\label{2009}
In 2009, with the SIDDHARTA experiment at DA$\Phi$NE, the strong-interaction-induced shift of the ground state of kaonic hydrogen atoms and the absorption width were measured with the highest accuracy to date \cite{sid2,sid3}.
Measurements of the 2p shift and width of kaonic helium ($^4He$ as well as $^3He$) \cite{sid4,sid5,sid6} completed the SIDDHARTA program.
Aware of the importance of the $K^-d$ measurement, an attempt was also made in this direction but, due to the very low yield of the transition, only an upper limit could be extracted and no values for the 1s level shift and width were obtained \cite{sid2}.
\subsection{Experimental setup}
\label{2009-2}
The SIDDHARTA setup consisted of two main components, the light-weight cryogenic
target cell and a specially developed large area, high resolution X-ray detector system
made of Silicon Drift Detectors (SDDs).
The experiment made use of the $K^+K^-$ pairs coming from the $\Phi$ decays with a 49\% branching
ratio.
The kaons leaving the interaction point through
the SIDDHARTA beam pipe were degraded in energy and entered the cryogenic gaseous
hydrogen (helium) target placed above the beam pipe, forming a kaonic atom and emitting X-rays
during the $2p\rightarrow1s$ (hydrogen) and $3d\rightarrow2p$ (helium) transitions (see fig. \ref{setup-2009}).
\begin{figure}[htbp]
\centering
\includegraphics[width=7cm]{setup-2009}
\caption{A schematic cutaway view of the SIDDHARTA setup at the DA$\Phi$NE interaction point.
The charged kaon pairs are identified with two plastic scintillators, and the $K^-$ induced x-rays
detected by the SDDs are identified from the time correlation to the kaon pair events.}
\label{setup-2009}
\end{figure}
The $K^+K^-$ pairs were emitted in back-to-back configuration and identified by the kaon detectors,
made of two plastic scintillators placed above and below the interaction point as illustrated in
fig. \ref{setup-2009}. The kaons were distinguished from the minimum ionizing particles using the time of
flight information at the kaon detectors, and the coincidence of the two scintillators defined the
kaon trigger, which marked the timing of the incident kaons.
A fraction of the negatively charged kaons activating the kaon trigger were successfully stopped inside the
volume of the gaseous target placed about $20\, cm$ above the interaction point to form kaonic
atoms. A working pressure of 0.3 MPa was achieved, which led to a hydrogen gas
density of 2\% of liquid hydrogen density, at a working temperature of 25 K.
Special Silicon Drift Detectors were developed with excellent energy resolution ($FWHM \simeq 150\, eV$ at 6 keV) and timing capability on the order of $\mu s$ \cite{sdd} when operated at a temperature of $140\,K$.
Using the X-ray signal from the SDDs in coincidence with the $K^+K^-$ pair, the continuous
machine background, as well as unwanted fluorescence X-rays, could be efficiently suppressed.
The SDDs had a total active area of $144\, cm^2$ , covering about 10\% of the solid angle around
the target cell. The energy calibrations of the SDDs were done every few hours,
using the X-ray tube activated $K_{\alpha}$ lines of Ti ($4.5\, keV$), Mn ($5.5\, keV$) and Cu ($8.0\, keV$) to determine the scale of the energy
spectra. The energy resolution at $6\, keV$ was stable at about $150\, eV$ (FWHM) throughout the
measurement. More details about the configuration and the performance of the detectors can
be found in \cite{sid2,sid3,sid4,sid5,sid6}.
Data were accumulated with gaseous targets of hydrogen ($1.3\,g/l$),
deuterium ($2.50\,g/l$), helium-3 ($0.96\, g/l$), and helium-4 ($1.65\, g/l$ and $2.15\, g/l$).
\subsection{Kaonic Helium measurements}
\label{2009-3}
Concerning kaonic helium, its transitions to the 1s level could not be observed, due to the very low yield and to a transition energy outside the SDD dynamic range.
Before the SIDDHARTA experiment only four measurements existed, and the situation was rather ambiguous: three measurements \cite{HEL1,HEL2,HEL3}, performed more than 20 years ago, gave results consistent with each other within a few $\sigma$ but more than an order of magnitude higher than the theoretical predictions, while a more recent one, performed at KEK \cite{HEL4}, was instead compatible with the theoretical predictions (\cite{helpred,helpred2}) but incompatible with the previous experiments.
A conclusive precise measurement of $K^4He$ was therefore needed to solve the "puzzle", together with the first measurement of $K^{3}He$, fundamental to obtain valuable information on the $K^-p$ and $K^-n$ interactions.
The kaonic helium spectra obtained by the SIDDHARTA experiment are shown in fig. \ref{SPECHEL} \cite{sid4,sid5,sid6}.
\begin{figure}[htbp]
\centering
\mbox{\includegraphics[width=11.cm]{spechel2.png}}
\caption{Fitted spectra of the kaonic $^3He$ (right) and $^4He$ (left) X-rays. The $3d\rightarrow2p$
transitions are seen around 6 keV. Together with these peaks, other smaller ones are seen: the kaonic atom X-ray lines produced by kaons stopping in the Kapton target window, and the Ti and Mn lines.}
\label{SPECHEL}
\end{figure}
In the left picture, the peak seen at 6.2 keV is identified as the $K^3He$ $L_{\alpha}$ line (the $3d\rightarrow2p$
transition). In the right one, the peak seen at 6.4 keV is identified as the $K^4He$ $L_{\alpha}$ line.
In addition to these lines, other smaller peaks are clearly visible: the kaonic atom X-ray lines produced by kaons stopping in the Kapton target window, and the Ti and Mn lines.
The strong-interaction shifts of the kaonic helium 2p states
were obtained from the difference between the experimentally determined
values and the QED calculated ones \cite{elcalc1,elcalc2}. The results are:
$$\varepsilon_{2p}(K^3He) = -2 \pm 2 \,(stat) \pm 4 \,(syst) \,eV $$
$$\varepsilon_{2p}(K^4He) = 5 \pm 5 \,(stat) \pm 4 \,(syst) \,eV $$
\noindent Thus a 2p-level shift compatible with 0 eV is established experimentally, in agreement with the theoretical estimates and, within the errors, with the results reported by the E570 collaboration \cite{HEL4}, while it is not in agreement with the earlier measurements (\cite{HEL1,HEL2,HEL3}).
This is probably due to the fact that, in those experiments, the helium target was a liquid one, in which, as also happened in the KEK experiment (\cite{HEL4}), a large Compton background is present and has to be taken into account in the analysis procedure.
\subsection{Kaonic Hydrogen and Kaonic Deuterium measurement}
\label{2009-4}
The lightest kaonic atom is the $K^-p$ atom, in which the principal electromagnetic interaction is accompanied by the strong interaction of the kaon with the proton, which is measurable by X-ray spectroscopy of the radiative transitions from the np states (2p, 3p, ...) to the 1s ground state (K transitions).
The $K^-p$ scattering length $a_{K^-p}$ can be obtained from the equation
$$\varepsilon_{1s}+\frac{i}{2}\Gamma_{1s}=2\alpha^3\mu_c^2a_{K^-p}(1-2\alpha\mu_c(\ln\alpha-1)a_{K^-p})$$
\noindent where $\varepsilon_{1s}$ and $\Gamma_{1s}$ are the shift and width of the 1s level (for more details on the various terms see \cite{deser}); $a_{K^-p}$ is related to the isospin-dependent scattering lengths by
$$a_{K^-p}=\frac{1}{2}(a_0+a_1)$$
\noindent Historically there were several measurements of the strong-interaction shift $\varepsilon_{1s}$ and width $\Gamma_{1s}$ of kaonic hydrogen (\cite{KHPuz1,KHPuz2,KHPuz3,KHPuz4,KHPuz5}).
In the 1970s and the 1980s, three groups (\cite{KHPuz1,KHPuz2,KHPuz3}) reported a measured attractive shift (positive $\varepsilon_{1s}$), while the information extracted from analyses of the low-energy KN data (\cite{KHPuz6,KHPuz7,KHPuz8}) indicated a repulsive shift (negative $\varepsilon_{1s}$). This contradiction became known as the "kaonic hydrogen puzzle".
In 1997, the first distinct peaks of the kaonic-hydrogen X-rays were observed by the KEK-PS
E228 group \cite{KHPuz4} with a significant improvement in the signal-to-background ratio by the use of
a gaseous hydrogen target, where previous experiments had employed liquid hydrogen. It was
crucial to use a low-density target, namely a gaseous target, because the X-ray yields quickly
decrease towards higher density due to the Stark mixing effect. The observed repulsive shift
was consistent in sign with the analysis of the low energy KN scattering data, resolving the
long-standing discrepancy.
More recent values reported by the DEAR group in 2005 \cite{KHPuz5}, with substantially reduced
errors, firmly established the repulsive shift obtained in the previous E228
experiment.
The latest kaonic hydrogen and deuterium X-ray energy spectra, obtained by the SIDDHARTA experiment, are shown in fig. \ref{SPECHYD} \cite{sid2}.
\begin{figure}[htbp]
\centering
\mbox{\includegraphics[width=6.cm]{spechyd.png}}
\caption{A global simultaneous fit result of the X-ray energy spectra of hydrogen
and deuterium data. (a) Residuals of the measured kaonic-hydrogen X-ray spectrum
after subtraction of the fitted background, clearly displaying the kaonic-hydrogen
K -series transitions. The fit components of the K-p transitions are also shown,
where the sum of the function is drawn for the higher transitions (greater than $K_{\beta}$).
(b), (c) Measured energy spectra with the fit lines. Fit components of the background
X-ray lines and a continuous background are also shown. The dot-dashed
vertical line indicates the EM value of the kaonic-hydrogen $K_{\alpha}$ energy. (Note that
the fluorescence $K_{\alpha}$ line consists of $K_{\alpha 1}$ and $K_{\alpha 2}$ lines, both of which are shown.)}
\label{SPECHYD}
\end{figure}
The K-series X-rays of kaonic hydrogen were clearly
observed while those for kaonic deuterium were not visible. This
appears to be consistent with the theoretical expectation of lower
X-ray yield and greater transition width for deuterium (\cite{deuprev}) than for kaonic hydrogen. However, the kaonic deuterium spectrum can be used to characterize the background.
The vertical dot-dashed line in fig.\ref{SPECHYD} indicates the X-ray energy
of kaonic hydrogen $K_{\alpha}$ calculated using only the electromagnetic
interaction (EM). Comparing the kaonic hydrogen $K_{\alpha}$ measured peak and the
EM value, a repulsive shift of the kaonic hydrogen
1s energy level is easily seen.
Many other lines from kaonic atom X-rays were detected in both spectra as indicated with arrows
in the figure. These kaonic atom lines result from high n X-ray
transitions of kaons stopped in the target cell wall made of kapton
($C_{22}H_{10}O_5N_2$) and its support frames made of aluminium. There
are also characteristic X-rays from titanium and copper foils installed
for X-ray energy calibration.
A global simultaneous fit of the hydrogen and
deuterium spectra has been performed, whose results are shown in fig. \ref{SPECHYD} (b) and (c).
The kaonic hydrogen lines were represented
by Lorentz functions convoluted with the detector response function,
where the Lorentz width corresponds to the strong interaction
broadening of the 1s state. The region of interest of Kd X-rays is illustrated in fig. \ref{SPECHYD} (c).
The 1s level shift $\varepsilon_{1s}$ and width $\Gamma_{1s}$ of kaonic hydrogen
were determined to be \cite{sid2}:
$$ \varepsilon_{1s} = -283 \pm 36 \,(stat) \pm 6 \,(syst) \,eV$$
$$ \Gamma_{1s} = 541 \pm 89 \,(stat) \pm 22 \, (syst) \,eV$$
The quoted systematic error is a quadratic summation
of the contributions from the systematic errors on the SDD
gain shift, the SDD response function, the ADC linearity, the low energy
tail of the kaonic hydrogen higher transitions, the energy
resolution, and the procedural dependence shown by independent
analysis.
This measurement represents the best precision available to date, and it has been used to set constraints on the calculated real and imaginary parts of the $K^-p$ amplitude (\cite{constr}).
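As an illustration (not the published analysis), the size of $a_{K^-p}$ implied by these numbers can be obtained by numerically inverting the Deser-type formula quoted above, taken at face value with the measured shift and width:
\begin{verbatim}
# Illustrative inversion of the relation between (eps_1s, Gamma_1s) and a_{K-p}
# using the SIDDHARTA kaonic-hydrogen values.
import numpy as np

HBARC = 197.327                      # MeV fm
ALPHA = 1.0 / 137.036
m_K, m_p = 493.677, 938.272          # MeV
mu = m_K * m_p / (m_K + m_p)         # reduced mass, ~324 MeV

eps, gamma = -283.0, 541.0           # eV, measured 1s shift and width
E = (eps + 0.5j * gamma) * 1e-6      # MeV, as in the formula above

K = 2 * ALPHA**3 * mu**2 / HBARC                          # MeV/fm
beta = 2 * ALPHA * mu * (np.log(ALPHA) - 1.0) / HBARC     # 1/fm

a_Kp = E / K                          # leading-order (Deser) value
for _ in range(20):                   # fixed-point iteration of the full relation
    a_Kp = E / (K * (1.0 - beta * a_Kp))

print(a_Kp)                           # ~ (-0.66 + 0.81j) fm
\end{verbatim}
The negative real part simply reflects the repulsive character of the measured shift.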
\section{SIDDHARTA-2 experiment}
\label{2019}
The case of kaonic deuterium is more challenging than kaonic hydrogen mainly due to the larger
widths of the K lines and the lower X-ray yield expected.
Experimentally the case of kaonic deuterium is still open; the SIDDHARTA experiment measured the X-ray
spectrum with a pure deuterium target but, due to the limited statistics and high background,
the determination of $\varepsilon_{1s}$ and $\Gamma_{1s}$ was impossible. An upper limit for the X-ray yield of the K lines
could be extracted from the data: total yield < 0.0143, $K_{\alpha}$ yield < 0.0039 \cite{sid7}.
Since the kaonic deuterium X-ray measurement represents the most important experimental information missing in the field of low-energy antikaon-nucleon interactions today, a new experiment (SIDDHARTA-2) is planned, based on a much improved apparatus.
\subsection{Experimental setup upgrade}
\label{2019-2}
Thanks to the knowledge acquired in 2009 with the SIDDHARTA experiment, a new version of the experimental apparatus, aiming to increase the signal-over-background ratio (S/B) by a factor of about 20 and thus allow the kaonic deuterium measurement, has been developed by the SIDDHARTA-2 collaboration.
The major upgrades introduced to fulfill the kaonic deuterium measurement goal are shown in fig. \ref{setup-2019} and listed below (\cite{mihail}):
\begin{figure}[htbp]
\centering
\mbox{\includegraphics[width=8.cm]{setup-2019}}
\caption{The SIDDHARTA-2 layout. The elements of the apparatus are indicated in the figure.}
\label{setup-2019}
\end{figure}
\begin{itemize}
\item Larger area and faster SDD detector array; the solution to improve the SDD time resolution consists of a reduction of the single-element size (from $10\times10$ to $8\times8\, mm^2$) and the replacement of the integrated J-FET (thermally limited) by a newly developed amplifier mounted on the ceramics, able to operate at very low temperatures (below 50 K). The shorter drift path and the higher carrier mobility allow a faster charge-packet drift to the anode and therefore a reduced time window (350 ns instead of 900 ns) for each trigger, with a consequent suppression of the asynchronous background. The new SDD detectors are produced by Fondazione Bruno Kessler (FBK) in Trento, Italy.
\item A veto detector to measure the prompt time of the secondaries from $K^-$ absorption on nuclei.
The system consists of scintillators surrounding the
vacuum chamber, read at both ends by PMs coupled to mirrors and light-guides (to cope with the
narrow space between the setup and the shielding against beam background) \cite{lshape}.
\item A new cryogenic target in reinforced kapton (13 cm diameter, 7 cm height), operating a few hundred mK above the liquefaction point ($\sim$25 K) at a pressure of 4 bar (5\% LHD), for more efficient kaon stopping.
\item A veto system, consisting of scintillators read by SiPMs, placed behind each SDD array, to reject
the hadronic background coming from border hits of Minimum Ionizing Particles (MIPs), depositing energy in the X-ray range.
\item A kaon trigger with geometric acceptance optimized to match the kaon gas stopping distribution.
\item An improved X-ray calibration system, providing low-rate in-situ calibration, as well as high rate
calibration between physics runs, to compensate very small fluctuations in each single SDD response.
\item Mechanical and cryogenic improvements of the vacuum chamber, necessary to add more cooling
power to the SDDs and to the cryogenic target.
\end{itemize}
\subsection{Kaonic deuterium expected measurement}
\label{2019-3}
All the items described above were optimized with a GEANT4 simulation, considered reliable after having reproduced, within the same framework, the SIDDHARTA results, both in terms of signal and background, to 7\% accuracy.
Using theoretical values as inputs to the Monte Carlo simulation, the expected spectrum of the $K^-d$ transitions,
for an acquired luminosity of $800\, pb^{-1}$ and assuming a yield of 0.1\%, is shown in fig. \ref{mcdeut}.
\begin{figure}[htbp]
\centering
\mbox{\includegraphics[width=8.cm]{mcdeut}}
\caption{The simulated $K^-d$ spectrum for SIDDHARTA-2 for $800\,pb^{-1}$ integrated luminosity (the $K_{\alpha}$ line is at 7 keV, while the K-complex lies between 8 and 10 keV).}
\label{mcdeut}
\end{figure}
\noindent The fit of the simulated spectrum shows how the $K^-d$ $2p\rightarrow1s$ transition, influenced by the strong interaction,
can be determined with a precision of 30 eV for the shift and 70 eV for the width.
\section{Conclusions}
\label{Conclusions}
The SIDDHARTA experiment at the DA$\Phi$NE electron-positron collider measured the strong interaction induced
shift and width of kaonic hydrogen and helium transitions with unprecedented accuracy.
Important implications for the theory of the strong interaction with strangeness in
the low energy regime were provided by the SIDDHARTA constraints.
In addition, SIDDHARTA measured for the first time the kaonic helium-3 transitions to the 2p level and performed the first such measurement of kaonic helium-4 with a gaseous target.
Still open is the challenging case of kaonic deuterium, which will be addressed by the follow-up experiment SIDDHARTA-2, aiming at the determination of the hadronic shift and width to enable the extraction of the antikaon-nucleon isospin-dependent scattering lengths $a_0$ and $a_1$.
The SIDDHARTA-2 collaboration is preparing a series of modifications and upgrades of the apparatus and experimental configuration, aiming to measure the kaonic deuterium X-rays with a precision comparable to that achieved in the hydrogen measurement, starting in 2018, when the apparatus will be installed on the DA$\Phi$NE collider.
\section*{Acknowledgments}
\label{Acknowledgments}
We thank C. Capoccia, G. Corradi, B. Dulach, and D. Tagnani from LNF-INFN;
and H. Schneider, L. Stohwasser, and D. St\"ukler from Stefan-Meyer-Institut, for their fundamental
contribution in designing and building the SIDDHARTA setup.
We thank as well the DA$\Phi$NE staff for the excellent working conditions and permanent support.
Part of this work was supported by the Austrian Science Fund (FWF) (P24756-N20); Austrian Federal Ministry of Science
and Research BMBWK 650962/0001 VI/2/2009; Croatian Science Foundation under Project
No. 1680; the Grant-in-Aid for Specially Promoted Research (20002003), MEXT, Japan;
Ministero degli Affari Esteri e della Cooperazione Internazionale, Direzione Generale per la Promozione del Sistema Paese (MAECI),
Strange Matter project; the Polish National Science Center (NCN) through grant No. UMO-2016/21/D/ST2/01155.
\section{Introduction} \label{sec:intro}
Massive stars shed mass prodigiously via their
radiation-driven stellar winds \citep{Lucy1970, Castor1975,
Pauldrach1986} and perhaps even more dramatically through
pulsationally driven ejection events
\citep{Glatzel1999,Kraus2015,Yadav2017}. For single massive
stars, the mass-loss rate, $\dot M$, integrated over the
star's lifetime determines its final act (i.e., type of
supernova or $\gamma$-ray burst), end product (i.e.,
neutron star, black hole, none), and its radiant and
nucleosynthetic contribution to the cosmos. For the
$\simeq$50\% of massive stars having a close companion
\citep{Sana2012,Kobulnicky2014}, the effects of mass
exchange and common envelope evolution are expected to be a
more significant evolutionary influence, but wind-driven
mass loss must still play a large role when integrated over
the lifetime of a star. Observational and theoretical work
broadly agree that mass-loss rates range from $<10^{-8}$
$M_\odot$~yr$^{-1}$\ for weak-winded late-O stars to a few$\times$10$^{-5}$
$M_\odot$~yr$^{-1}$\ for the most luminous evolved massive stars, with
rates being factors of several lower at low metallicities
\citep{Vink2001,Martins2005b,Fullerton2006,Mokiem2007,Muijres2012,Massa2017}.
Luminous blue variables and related objects in unstable
phases of evolution may eject shells from several tenths to
several solar masses in discrete eruptive events, resulting
in time-averaged rates of 0.01--few $M_\odot$~yr$^{-1}$\
\citep{SmithOwocki2006}. Such large excretion events
influence not only the evolution of the star but the
appearance of subsequent explosive phenomena, such as when
fast supernova ejecta encounter dense circumstellar
material, creating unusually luminous supernovae
\citep{Smith2007,Miller2009,Chevalier2011}. Given that most
massive stars are also members of (close!) multiple systems
\citep{Kobulnicky2007,Sana2012,Kobulnicky2016}, companion
interactions are certain to play an important (but poorly
characterized, at present) role. Reviews of massive star
winds and mass loss include \citet{Kudritzki2000},
\citet{Puls2008}, and \citet{Smith2014}.
Mass-loss rates for massive stars have been measured using a
variety of techniques. These include observations of
H$\alpha$ recombination lines
\citep{Leitherer1988,Lamers1993,Puls1996,Markova2004,Martins2005b},
radio-continuum and infrared free-free emission
\citep{Abbott1981,Nugis1998,Puls2006,Massa2017}---so-called
$n^2$ diagnostics because the excess flux scales as the
square of the density of material in the wind. Massive
star winds are demonstrably not ideal isotropic structures
with smooth density gradients. Observational signatures of
density inhomogeneities (i.e., ``clumping'') in OB star
winds are abundant, including time-variable line profiles
and discrete absorption components
\citep{Ebbets1982,Fullerton1996,Lepine1999,Prinja2002},
especially in supergiants, and the presence of large-scale
optically-thick clumps \citep{Prinja2010}. Accordingly, the
presence of clumps, which result inevitably from
instabilities in the line-driven wind
\citep{Owocki1988,Dessart2005,Muijres2011}, may skew observationally determined mass
loss estimates upward relative to unclumped calculations.
Although theoretical models for stellar winds include
provisions for clumping in a heuristic way
\citep[e.g.,][]{Hillier1998,Puls2005}, the poorly
constrained clump geometries and kinematics introduce
uncertainties of one or two orders of magnitude in the
mass-loss rates from $n^2$ diagnostics. Furthermore, the
traditional $n^2$ diagnostics become insensitive to mass
loss below rates of about 10$^{-7}$ $M_\odot$~yr$^{-1}$\
\citep{Markova2004,Mokiem2007,Marcolino2009}, corresponding
to luminosities of about 10$^{5.2}$ $L_\odot$ (approximately an
O7.5V), demanding other diagnostics for the weaker winds
expected from less luminous stars.
Ultraviolet spectroscopy of metal resonance lines such as
\ion{C}{3}, \ion{N}{5}, \ion{Si}{4}, and \ion{P}{5} has
provided the other major diagnostic of mass-loss rates
\citep{Garmany1981,Howarth1989,Fullerton2006,Marcolino2009}.
UV-based estimates of mass-loss are less sensitive to
clumping because they depend linearly on density as long as
the optical depths in the line cores are small and the
dominant ionization species can be observed. Mass-loss
rates derived from UV resonance lines are often factors of
several to hundreds lower than $n^2$ diagnostics
\citep{Fullerton2006} in the limited regimes where the two
methods overlap. For mass-loss rates greater than about
10$^{-7}$ $M_\odot$~yr$^{-1}$\ resonance lines begin to become optically
thick and derived $\dot M$ values become less certain,
especially if clumping is optically thick
\citep{Prinja2010}. UV estimates may also be systematically
low if coronal X-rays produce additional photoionization
of the metal ionic species probed by UV spectra and ionization
correction factors are not properly applied
\citep{Waldron1984,Marcolino2009,Huenemoerder2012}.
Theoretical mass-loss rates predicted on the basis of the
``modified wind momentum''\footnote{The modified wind
momentum is $\dot M v_{\infty} (R_*/R_\odot)^{0.5}$, where
$v_\infty$ is the terminal stellar wind velocity, and $R_*$
is the stellar radius, after \citet{Puls1996}. }
\citep{Puls1996,Vink2001} agree with observations in the
limit of strong winds and luminous stars (i.e.,
$\log(L/L_\odot)\gtrsim5.2$, spectral type earlier than
about O7V), but in the limit of weak winds ($\dot
M\lesssim10^{-8}$ $M_\odot$~yr$^{-1}$, $\log(L/L_\odot)\lesssim5.2$),
UV-derived mass-loss rates are lower than theoretical
predictions by up to two orders of magnitude---a discrepancy
known as the ``weak wind problem'', discussed extensively in
the literature
\citep{Martins2005b,Mokiem2007,Marcolino2009,Muijres2012}.
Whether the discrepancy results from the effects of
clumping, unexpected ionization structure, variations in
$\dot M$ as a star evolves, limitations in the theoretical
treatment of the wind \citep{Lucy2010a,Krticka2017}, or some
combination, remains a matter of debate. In the limit of
very massive and luminous supergiants near 50 M$_\odot$ the
\citet{Vink2012} ``transition mass-loss rate'' near
10$^{-5}$ $M_\odot$~yr$^{-1}$\ suggests that the current reductions of 2--3
in model mass-loss rates is appropriate. However, much of
the O-star regime remains uncertain. The recognition that
some late-O stars exhibit much weaker winds than other O
stars of the same spectral type is regarded as a kind of
second-order weak wind problem \citep{Marcolino2009} that
might be solved along with the resolution of the
canonical weak wind problem. \citet{Huenemoerder2012}
presented one possible resolution in their study of the
weak-wind O9.5V runaway star $\mu$ Col which evinces a
massive hot stellar wind visible in X-rays but only
tenuously detectable using UV metal absorption-line
spectroscopy.
Given the lingering order-of-magnitude uncertainties on
mass-loss rates, together with the
sensitivity\footnote{Changing mass-loss rates by factors of
two or less can dramatically alter the sequence of stellar
evolutionary phases, final masses, stellar endpoints, and
nucleosynthetic yields
\citep[e.g.,][]{Meynet1994,Renzo2017,Meynet2015}!} of
stellar and cosmic evolution to these values, alternative
observational diagnostics for $\dot M$ are warranted.
\citet{Kobulnicky2010} proposed using runaway
\citep{Blaauw1961,Gies1986} massive stars and their
interstellar bowshock nebulae
\citep{vanBuren1988,Noriega1997,Gvaramadze2008} as a new
laboratory for measuring mass-loss rates. Following the
reasoning first articulated by \citet{Gull1979} for the
prototypical bowshock runaway star $\zeta$ Oph, we
employ the principle of balancing the momentum flux
between the stellar wind and the impinging interstellar
material,
\begin{equation}
\rho_{w} V_w^2 = \rho_{a} V_a^2 .
\end{equation}
\noindent Here, $\rho_w$ is the density of the stellar wind,
$V_w$ is the velocity of the wind, $\rho_a$ is the ambient
interstellar density, and $V_a$ is the velocity of the
ambient ISM in the rest frame of the star. We make the
assumption that the stellar wind is isotropic, \citep[but
mass loss could be enhanced along the polar axis or reduced
at the equatorial plane for rapidly rotating stars,
][]{Owocki1996,Langer1998,Mueller2014} so that the density
of the stellar wind can be expressed as,
\begin{equation}
\rho_w = {\dot{M} \over {4\pi R_0^2 V_w}} ~ ~ ,
\end{equation}
\noindent where $\dot{M}$ is the stellar mass-loss rate and
$R_0$ is the ``standoff'' radius---the distance between the
star and the point where the momentum fluxes are equal. By
substitution of Equation 2 into Equation 1 and rearranging, the mass loss
rate can be expressed in terms of observable stellar and
interstellar properties,
\begin{equation}
\dot M = { {4\pi R_0^2 V_a^2 \rho_a} \over {V_w}} .
\end{equation}
\noindent $R_0$ is simply $R_{0r} D$, the angular size of
the standoff distance in radians times the distance to the
star. The former is straightforward to measure from
infrared images, modulo the unknown factor for inclination;
arguably $\sin i\simeq 1$ in order that the bowshocks be
identified as arcuate nebulae \citep{Kobulnicky2017}.
Distances, $D$, may be obtained by spectroscopic parallax,
or by geometric parallax measurements
\citep{Perryman1997,Gaia2016a,Gaia2016b}. $V_w$ is taken
to be the terminal stellar wind speed, $V_\infty$,
appropriate to the spectral type and luminosity class, as
tabulated in the literature \citep[e.g.,][]{Mokiem2007},
although individual stars may vary significantly about the
mean. $V_a$ is expected to average 30 {km~s$^{-1}$}\ for runaway
stars, but individual values may again vary significantly.
``In-situ'' bowshocks \citep{Povich2008,Sexton2015}, where the relative motion is caused by
an outflow of interstellar material at 10--15 {km~s$^{-1}$}\
from an \ion{H}{2} region \citep[e.g., the Carina
star-forming region,][]{Walborn2002} rather than by a runaway
star, could be such exceptions. Specific space velocities
for any star may ordinarily be calculated from measurements
of proper motions, radial velocities, and distances. The
ambient ISM density is the most challenging quantity to
measure. \citet{Gvaramadze2012} applied this technique,
using the size of the \ion{H}{2} region surrounding $\zeta$
Oph and its ionizing flux to eliminate the ambient density
in Equation 3 (or equivalently, solve for it numerically).
They derived a mass loss rate 2.2$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$,
comparable to updated theoretical predictions of
\citet{Lucy2010a}. However, they note that \ion{H}{2}
regions are not generally present around bowshock-producing
stars, limiting the utility of this approach to measuring
$\rho_a$. Here, we use the peak infrared surface brightness
of the nebula to estimate $\rho_a$, as described in
subsequent sections.
In this contribution we apply the principle of momentum
balance to derive mass-loss rates for the 20
bowshock-producing stars having well-characterized stellar
parameters from Table~5 of \citet{Kobulnicky2017}. As an
independent technique for estimating mass loss, our method
does not depend on requirements like optically thin atomic
lines, the adopted geometrical parameterization of clumping,
or a detailed treatment of the ionization structure in the
wind. Massive stars are known to exhibit temporal
variability in their line profiles, indicating probable
variation in clumping and mass loss. We expect that our
approach has the added benefit of averaging over short-term
fluctuations in wind structure and mass-loss rate. This
method will undoubtedly entail a different suite of
uncertainties and potential biases than the traditional
ones, including the difficult-to-parameterize effects of a
star moving through a non-uniform ambient ISM, a possibility
that we neglect in this initial treatment.
Nevertheless, most of our targets are late-O dwarf stars,
making this sample especially relevant for addressing the
weak-wind problem. In Section~2 we describe our methods for
determining the requisite stellar and interstellar
parameters. In Section~3 we present new mass-loss rates
for this well-characterized sample of stars and compare them
to existing observational and theoretical determinations
for stars of similar luminosity and evolutionary stage.
Section~4 summarizes implications for these new results and
outlines prospects for future progress.
\section{Measuring $\dot M$}
\subsection{Sample Selection}
Table~1 lists the 20 stars from Table~5 of
\citet{Kobulnicky2017} selected from among the 709 bowshock
candidates of \citet{Kobulnicky2016} as having secure
distances, spectral types, and infrared photometric
measurements\footnote{Photometric data include a measurement
at the {\it Wide-Field Infrared Explorer (WISE)} 22 $\mu$m
band or {\it Spitzer Space Telescope (SST)} 24 $\mu$m band
and the {\it Herschel Space Observatory (HSO)} 70 $\mu$m
band.} at multiple bandpasses covering the adjoining
nebula. Column 1 contains the index number using the
numeration of \citet{Kobulnicky2016}. Column 2 lists common
name of the star, followed by the generic name in galactic
coordinates in column 3. Columns 4 and 5 provide the
adopted spectral type/luminosity class and corresponding
literature references, respectively. Only 3--4 objects have
evidence for being part of a multiple-star system. This is
significant because it means that, in the majority of systems,
one star is the dominant source of stellar wind, thereby
simplifying the ensuing interpretation. Columns 6 and 7
contain the adopted effective temperatures and radii, using
the theoretical O-star temperature scale (Tables 1---3) of
\citet{Martins2005a}. For the few B stars we use the
temperatures and radii of \citet{Pecaut2013}. Column 8
gives the adopted stellar mass. Column 9 provides the
adopted terminal wind speed calculated by averaging values for galactic
O and B stars of the same spectral type from Table A.1 of
\citet{Mokiem2007} and Table 3 of \citet{Marcolino2009}.
These values are uncertain at the level of 50\%, based on
the dispersion among multiple measurements at a given
spectral type. Early-B dwarfs are particularly uncertain
owing to the difficulty in measuring wind lines. Column 10
lists the adopted distance along with the corresponding
reference in column 11. Most of the sources have distances
estimated through their association with a star cluster or
molecular cloud having distance measurements from eclipsing
binaries (such as in Cygnus OB2), main-sequence fitting, or
radio VLBI geometric parallaxes of masers within the
adjoining star-forming complex. The O8V KGK2010 10 deserves
special mention. At $\ell=$77\fdg0, $b$=-0\fdg6, it lies
more than two degrees from the main body of the Cygnus OB2
Association---outside the nominal boundaries of the Cygnus-X
star-forming complex, and possibly at a different distance.
At $V=15$ mag it is also 2--3 magnitudes fainter at V band
than other O8V stars of similar reddening
\citep[c.f.,][Table 5]{MT91} in Cygnus OB2. It may be a
background object at a distance similar to Cygnus X-3
\citep[][3.4 kpc or 9.3 kpc]{Ling2009}. Based on its
similar reddening ($A_{\rm V}=$5.4 mag) to other
V$\approx$12.5 O8V stars in Cygnus OB2 at 1.32 kpc
\citep{Kiminki2015}, we use the factor of 9.8 in flux,
corresponding to a factor of 3.1 in distance, to adopt a
scaled distance of 4.1 kpc, while cautioning of a more
uncertain distance for this object. Column 12 lists the
standoff distance, $R_0$, in arcsec, while column 13 lists
the standoff distance in pc, calculated from the distance
in column 10 and the angular separation.\footnote{We apply a
statistical correction factor of 1/sin(65\degr)$=$1.10 for
geometrical projection effects when computing $R_0$ in pc.
This is a suitable correction because inclinations
substantially smaller than about 50\degr\ would begin to
mask the bowshock morphology and make the object unlikely
to be included in the list of bowshock candidates (e.g., see
\citet{Acreman2016} for numerical simulations of bowshocks
at various inclination angles.)} Column 14 lists the peak
$HSO$ 70 $\mu$m surface brightness above adjacent background
levels in Jy sr$^{-1}$, occurring at a location near the
apex of the nebula. We use the 70 $\mu$m measurement because
the majority of bowshock candidates are detected at this band and
stochastic heating effects are unlikely to affect dust at
this wavelength compared to 24 $\mu$m. Column 15 lists
angular diameter in arcsec of the nebulae along a chord
($l$) intersecting the peak surface brightness.
Figure~\ref{fig:ExampleFig} shows the G078.2889+00.7829
nebula surrounding the central star LSII+39~53 (O7V).
Red/green/blue depict the $HSO$ 160 and 70 $\mu$m and $SST$
24 $\mu$m data, respectively. The white asterisk marks the
location of the central star. White lines represent the
standoff distance, $R_0$, and the chord diameter, $\ell$.
\subsection{Calculation of $\dot M$}
Beginning with Equation~3, we express the standoff distance,
$R_0$ as the product of the distance to the source, $D$,
times the angular distance from star to apex in radians,
$R_{0r}$. Given that very few bowshock stars have
measured space velocities, we adopt for $V_a$ a typical
``runaway'' speed of 30 {km~s$^{-1}$}\ \citep{Gies1986}. For $\zeta$
Oph we adopt the 26.5 {km~s$^{-1}$}\ calculated from its proper
motion and radial velocity \citep{Gvaramadze2012}.
Velocities much smaller than this are unlikely to produce
bowshocks while stars moving much faster than this would
fall on the tail of the Maxwell-Boltzmann velocity
distribution and be quite rare. Thirty kilometers per
second agrees well with the measured space velocity for the
nearest bowshock star, $\zeta$ Oph \citep{Gvaramadze2012},
and is a reasonable average value in the absence of
individual data. The ambient interstellar density,
$\rho_a$, can be expressed as $n_a\bar m$, the ambient
number density and mean particle mass, respectively. We
adopt $\bar m=2.3\times10^{-24}$ g, appropriate to the Milky
Way interstellar medium (ISM). While the ambient number
density $n_a$ preceding the bowshock is challenging to
measure, the density within a bowshock nebula, $n_N$, can be
estimated from knowledge of the infrared surface brightness
(i.e., specific intensity), the dust emissivity, the
line-of-sight path length through the nebula, and a
reasonable assumption for the gas-to-dust ratio. $I_\nu$
is the specific intensity at a selected infrared frequency
in erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ sr$^{-1}$, such as the $HSO$ 70
$\mu$m band, where the optical depth of dust is thought to
be low for essentially all Galactic sightlines. $I_\nu$ is
directly measurable from the $HSO$ images in Jy sr$^{-1}$.
The path length through the dust nebula is $\ell$,
calculated from the projected angular diameter of the chord
in radians, $\ell_{r}$, times the adopted distance, $D$.
This assumes rotational symmetry of the nebula such that the
line-of-sight depth is adequately represented by the
projected diameter. The dust emission coefficient per
nucleon, $j_\nu$ in Jy cm$^2$ sr$^{-1}$ nucleon$^{-1}$ as
tabulated by \citet{DL07}\footnote{Note, \citet{DL07} use
the emission coefficient, $j_\nu$, to express the energy
emitted per second per steradian per Hertz per {\it nucleon}
instead of the conventional definition of energy
emitted per second per steradian per Hertz {\it per volume}.},
is determined by fitting their dust models to the infrared
spectral energy distribution for each object, as performed
in \citet{Kobulnicky2017} for the tabulated list of 20
objects. The \citet{DL07} Milky Way dust models assume a
standard 1:136 dust grain to atomic particle (principally H
+ He) ratio, so that the models provide directly the desired
nucleon number density within the infrared nebulae,
\begin{equation}
n_N = { I_\nu \over {\ell ~j_\nu} }.
\end{equation}
\noindent This is really an {\it average} density along
the chord length intersecting the bowshock apex; the peak
density near the apex of the bowshock could be larger.
However, we desire to measure the {\it ambient} ISM density
preceding the bowshock, $n_a$, not $n_N$. A physically
motivated conversion would be $n_a=0.25 n_N$, given that
the density increases by a factor of four across a strong
shock \citep[e.g.,][]{Landau1987}, which we expect given
the highly supersonic nature of the $>$1000 {km~s$^{-1}$}\ stellar
winds. Therefore,
\begin{equation}
n_a = {{0.25 I_\nu} \over {\ell ~j_\nu} }.
\end{equation}
With these substitutions
Equation 3 then becomes
\begin{equation}
\dot M = { {4\pi {R^2_{0r}} D^2 V_a^2 I_\nu 0.25 {\bar m}} \over {V_w \ell_r D j_\nu}} ,
\end{equation}
\noindent which, after cancelling one factor of distance, simplifies to
\begin{equation}
\dot M = { {\pi {\bar m} {R^2_{0r}} D V_a^2 I_\nu} \over {V_w \ell_{r} j_\nu}} .
\end{equation}
\noindent In convenient astrophysical units this can be expressed as a mass-loss rate in solar masses per year,
\begin{equation}
\dot M (M_\odot ~yr^{-1}) = 1.67\times10^{-28} { { {[R_{0}(arcsec)]^2} D(kpc)~[V_a(km~s^{-1})]^2 ~I_\nu(Jy~sr^{-1})}
\over {V_w(km~s^{-1}) ~\ell(arcsec) ~ j_\nu(Jy~cm^2~sr^{-1} nucleon^{-1})}} .
\end{equation}
\noindent This expression is linear in all of the crucial
quantities except the relative velocity between star and ISM
and the angular standoff distance. The angular quantities
$R_{0r}$ and $\ell_{r}$ are measured to about 10\% from
infrared images. $I_\nu$ is measured to about 20\%, the
dominant source of uncertainty being the definition of a
suitable local background level. Mean stellar wind
velocities are also measured to, perhaps, 50\%. For this
subsample, distances are known to better than 25\% in the
majority of cases. The stellar space velocities are
approximations based on mean values for runaway stars;
these may not be known to better than a factor of two.
This makes the error budget on $\dot M$ nearly 60\%, neglecting
stellar velocity uncertainty, or about a factor of two with it
included. The model-dependent choice of $j_\nu$ is the
most significant remaining variable.
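As an illustrative check of Equation~8 (not a substitute for the tabulated values), the following sketch evaluates the mass-loss rate for $\zeta$ Oph using rounded entries from Tables~\ref{tab:basic} and \ref{tab:derived}; applying the same statistical projection correction to $R_0$ as in the footnote above recovers a value close to the tabulated 5.4$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$.
\begin{verbatim}
def mdot_msun_per_yr(R0_arcsec, D_kpc, Va_kms, Inu_Jy_sr,
                     Vw_kms, ell_arcsec, jnu):
    """Mass-loss rate (Msun/yr) from Equation 8;
    jnu in Jy cm^2 sr^-1 nucleon^-1."""
    return 1.67e-28 * (R0_arcsec**2 * D_kpc * Va_kms**2 * Inu_Jy_sr) / (
        Vw_kms * ell_arcsec * jnu)

# zeta Oph (Tables 1 and 2): R0 = 299", D = 0.11 kpc, Va = 26.5 km/s,
# I_nu = 12.3e7 Jy/sr, Vw = 1300 km/s, ell = 277", j_nu = 8.7e-12
proj = 1.10  # statistical projection correction applied to R0
mdot = mdot_msun_per_yr(299.0 * proj, 0.11, 26.5, 12.3e7,
                        1300.0, 277.0, 8.7e-12)
print(mdot)  # about 5.5e-8 Msun/yr, close to the tabulated 5.4e-8
\end{verbatim}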
\cite{DL07} provide a grid of models for interstellar dust,
yielding $j_\nu$ as a function of wavelength and as a
function of incident radiant energy density, $U$. The
models are parameterized in terms of three variables:
$U_{min}$, the minimum radiant energy density\footnote{$U$
is defined in \citet{DL07} as the ratio of the incident
radiant energy density (in erg~cm$^{-3}$) to the mean
interstellar radiant energy density estimated by
\citet{Mathis1983}.} to which dust is exposed, a maximum
radiant energy density $U_{max}$ to which dust is exposed,
and a fraction of PAH molecules by mass, $q_{PAH}$, within
the material. \citet{Kobulnicky2017} fitted DL07 models to
the infrared spectral energy distributions of objects from
Table~1, concluding that, in the majority of cases, models
with a single radiant energy density (i.e.,
$U_{min}=U_{max}$) provided the best match to the data.
They also tabulated an estimate for $U$ in each nebula,
calculated from the star's luminosity and standoff
distance, assuming that the central star was the dominant
source of illumination. Typical values for $U$ ranged from
a few$\times$10$^2$ to a few$\times$10$^4$, lending credence to
the proposition that the central star dominates the
energetics of each nebula. Furthermore, models with
the minimum PAH mass fraction, $q_{PAH}$=0.47\%, were
preferred, suggesting that PAHs are either
destroyed or not present in the bowshock nebulae.
Accordingly, we adopt for each object the emission
coefficient, $j_\nu$, given for the bandpass-averaged $HSO$
$PACS$\footnote{Photoconductor Array Camera and Spectrometer
\citep[PACS;][]{Poglitsch2010}.} 70 $\mu$m band from the
single-$U$ DL07 model appropriate to the $U$ for each
object. Because DL07 provide models only at discrete
radiant energy densities of $U$=10$^2$, 10$^3$, 10$^4$, 10$^5$,
we employ linear interpolation to obtain a $j_\nu$
appropriate to each object.
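The interpolation step can be sketched as follows. The text does not specify the interpolation variable; given the approximately power-law behavior of $j_\nu(U)$ (see Figure~\ref{fig:jnu} below), the sketch interpolates linearly in $\log U$--$\log j_\nu$, and the grid values are placeholders rather than the actual bandpass-averaged DL07 coefficients.
\begin{verbatim}
import math

# Placeholder DL07 70-micron emission coefficients at U = 1e2..1e5
# (Jy cm^2 sr^-1 nucleon^-1); substitute the actual bandpass-averaged values.
U_grid = [1e2, 1e3, 1e4, 1e5]
jnu_grid = [7.8e-12, 8.5e-12, 1.2e-11, 3.0e-11]

def jnu_interp(U):
    """Interpolate j_nu linearly in log U - log j_nu between grid points."""
    logU = math.log10(U)
    logUg = [math.log10(u) for u in U_grid]
    logjg = [math.log10(j) for j in jnu_grid]
    for k in range(len(logUg) - 1):
        if logUg[k] <= logU <= logUg[k + 1]:
            t = (logU - logUg[k]) / (logUg[k + 1] - logUg[k])
            return 10 ** (logjg[k] + t * (logjg[k + 1] - logjg[k]))
    raise ValueError("U outside tabulated range")

print(jnu_interp(4.2e3))  # e.g. zeta Oph, U ~ 4.2e3
\end{verbatim}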
Figure~\ref{fig:jnu} plots the
DL07 model emission coefficients versus the radiation
density parameter ({\it solid line and crosses}). The
dashed lines show, for reference, power-law descriptions
$j_\nu \propto U^{1.0}$ and $j_\nu \propto U^{0.5}$.
Figure~\ref{fig:jnu} shows that, over the range of $U$
covered by sample objects, $j_\nu$ is approximately
proportional to $\sqrt{U}$. This means that $j_\nu$ is
relatively insensitive to the adopted $U$. $U$ itself is
proportional to $R_*^2~T_{eff}^4/R_0^2$, where $R_*$ is the
stellar radius, $T_{eff}$ is the effective stellar
temperature, and $R_0$ is the standoff distance which we
previously expressed as $R_{0r} D$. This means that our
estimate of $j_\nu$ implicitly contains a dependence on
these quantities,
\begin{equation}
j_\nu \propto \sqrt{U} \propto {\sqrt{{R_*^2~T_{eff}^4} \over {R^2_{0r} D^2}}} = {{R_*~T_{eff}^2} \over {R_{0r} D}}.
\end{equation}
\noindent It can now be seen that Equation~7 goes as,
\begin{equation}
\dot M \propto { {{\bar m} {R^3_{0r}} D^2 V_a^2 I_\nu} \over {V_w \ell_{r} R_* T_{eff}^2 }},
\end{equation}
\noindent so that this expression of the mass-loss rate
ultimately entails something close to a $D^2$ dependence,
via the emission coefficient. Accordingly, our analysis
here is restricted to objects that have well-constrained
distances. If we characterize $j_\nu$ (via the DL07 models
and knowledge of $R_*$ and $T_{eff}$) as uncertain at the
30\% level, the error budget for Equation 7 grows to about
70\%, or a factor of two if stellar velocity uncertainties
are included. Accordingly, we estimate the uncertainties on
$\dot M$ to be 0.3 dex for this sample. Deviation of the
mean stellar space velocities from the adopted $V_a$=30
{km~s$^{-1}$}\ (not included in the above error budget) would
represent a systematic error shifting the mass-loss rates
by a factor $V_{30}^2$, where $V_{30}$ is the relative
star-ISM velocity in units of 30 {km~s$^{-1}$}.
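The quoted error budgets follow from combining the stated fractional uncertainties in quadrature, with quantities that enter $\dot M$ quadratically counted twice. The sketch below (our reading of which terms enter, offered only as an illustration) reproduces the quoted $\approx$60\% and $\approx$70\% figures.
\begin{verbatim}
import math

def quad_sum(terms):
    return math.sqrt(sum(t**2 for t in terms))

# Fractional uncertainties quoted in the text; R0 enters Mdot quadratically,
# so its 10% measurement error contributes 20%.
base = [2 * 0.10,  # R0 (enters as R0^2)
        0.10,      # ell
        0.20,      # I_nu
        0.25,      # distance D
        0.50]      # wind velocity V_w

print(quad_sum(base))            # about 0.63 -- the "nearly 60%" figure
print(quad_sum(base + [0.30]))   # about 0.70 once 30% on j_nu is added
\end{verbatim}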
\section{Calculation of Mass-Loss Rates and Comparison to Prior Estimates}
Table~\ref{tab:derived} lists quantities derived from the
basic data in Table~\ref{tab:basic}. Columns 1--4 contain
the identifying numeral, name, generic name, and spectral
type, as in Table~\ref{tab:basic}. Column~5 contains the
stellar luminosity in units of 10$^4$ solar luminosities.
Column 6 contains the radiation density parameter, $U$,
calculated from the basic data. Column 7 lists the
corresponding emission coefficient interpolated from the
DL07 models. Column 8 is the ambient interstellar number
density, $n_a$, derived from the 70 $\mu$m specific
intensity, as described in the previous section. Densities
range between 1.2 cm$^{-3}$ and 160 cm$^{-3}$, with a median
value of 16 cm$^{-3}$. These are typical of densities within
the cool neutral phase ($\approx$30 cm$^{-3}$) of the
interstellar medium and somewhat higher than the warm
neutral phase ($\approx$0.6 cm$^{-3}$) \citep[c.f.,][Table
1.3]{Draine2011}. Column 9 contains the mass-loss rate
calculated from Equation 7. Values range from
2$\times$10$^{-9}$ $M_\odot$~yr$^{-1}$\ to 1.3$\times$10$^{-6}$ $M_\odot$~yr$^{-1}$, with
a median of 6$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$. These are consistent
with the broad range of mass-loss rates for O stars found in
the literature and obtained using other methods.
It is particularly instructive to compare our results for
the well-studied prototypical bowshock star, $\zeta$ Oph
with other analyses.\footnote{We note here that the angular
standoff distance for $\zeta$ Oph is incorrectly listed in
Table~5 of \citet{Kobulnicky2017} and Table~1 of
\citet{Kobulnicky2016} as 29\arcsec\ instead of
299\arcsec, making the linear distance 0.159 pc for the
distance adopted here of 0.110 kpc.} Our inferred density
of 2.3 cm$^{-3}$ compares favorably with the 3.6 cm$^{-3}$
computed by \citet{Gvaramadze2012} and $\simeq$3 cm$^{-3}$
estimated from the radio free-free and H$\alpha$ surface
brightness of the surrounding \ion{H}{2} region
\citep{Gull1979}. The resulting mass-loss rate is $\dot
M=$5.4$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$. For comparison,
\citet{Gull1979} report $\dot M=$2.2$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$\
using similar physical reasoning. \citet{Gvaramadze2012}
list an identical $\dot M=$2.2$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$. Our
value is almost a factor of 30 larger than the $\dot
M=$1.8$\times$10$^{-9}$ $M_\odot$~yr$^{-1}$\ inferred by
\citet{Marcolino2009} by fitting model atmospheres to UV and
optical wind lines. Our result is a factor of 2.4 smaller
than the $\dot M=$1.3$\times$10$^{-7}$ $M_\odot$~yr$^{-1}$\ predicted from
the prescription of \citet{Vink2001} for the luminosity and
temperature of $\zeta$ Oph. Predictions from the updated
moving reversing layer theory \citep{Lucy1970} by
\citet[][Table 1]{Lucy2010b} for the adopted parameters of
$\zeta$ Oph ($T_{eff}=31,000$~K, $\log$ $g$=3.75) indicate
$\dot M=$5.0$\times$10$^{-8}$ $M_\odot$~yr$^{-1}$, in excellent agreement
with our value. This agreement is all the more impressive
given that $\zeta$ Oph is regarded as a weak-winded O star.
Given the general consistency of the momentum-balance
technique with theoretical expectations and some other
mass-loss measurements for this prototypical bowshock star,
we proceed to use the results from Table~\ref{tab:derived}
to assess its general applicability to mass loss from
massive stars.
Figure~\ref{fig:mdot} plots the calculated mass-loss rates
versus stellar effective temperature.\footnote{Although the
mass-loss rate is expected to scale with {\it luminosity}
rather than {\it temperature}, we choose here to plot the
latter to facilitate direct comparison with the
\citet{Lucy2010a} models and because most of our targets
are of similar V--IV luminosity class.} Black filled
symbols denote the 20 sample objects: a star for $\zeta$
Oph, circles for main-sequence stars, and hexagons for
evolved stars. Blue crosses depict model predictions for
each object using the formulation of \citet[][Equation
24]{Vink2001} computed using the stellar data from Table~1
and assuming $v_{\infty}/v_{esc}$=2.
Hence, each filled data point is paired
vertically with a blue $x$ at the same temperature, although
the symbols sometimes overlap. Red squares connected by lines
depict the model predictions from \citet{Lucy2010b} for
main-sequence ($\log$ $g$=4.0) and giant ($\log$ $g$=3.5)
stars, as labeled. Red triangles and dotted lines show the
predictions for B main-sequence stars from
\citet{Krticka2014}. A cross above the legend depicts the
typical measurement uncertainties on each
axis. The dispersion in $\dot M$ at fixed temperature
is 0.35 dex, roughly consistent with our stated
uncertainties, but doubtless inflated by the inclusion of
six luminosity class IV/III objects among the sample of 20.
Key objects are labeled with spectral type and common
nomenclature.
Figure~\ref{fig:mdot} demonstrates that there is good
agreement between the \citet{Vink2001} and \citet{Lucy2010b}
predictions. Our new data fall $\approx$0.4 dex below the
model predictions, but follow the same trend of increasing
mass-loss rate with effective temperature. The O4If star
BD+43 3654 lies two orders of magnitude below the $\dot M >
10^{-5}$ $M_\odot$~yr$^{-1}$\ levels predicted by the \citet{Lucy2010b}
relation for giants and an order of magnitude below the
\citet{Vink2001} value for its temperature and
luminosity. The O8V CPR2002 A10 and O7.5V/III BD+60 586
lie near the model predictions but at the upper envelope of
the objects in our sample. In the former case the spectral
classification comes from our own yellow--red optical
spectra \citep{Chick2018} which are not especially sensitive
to surface gravity. This object could be of a more evolved
luminosity class which would lead to an expected mass-loss
rate more consistent with its position. The {\it Spitzer
Space Telescope} 4.5/8.0/24 micron image of this object in
\citet{Kobulnicky2016} reveals a very high surface
brightness nebula (indeed, the 70 $\mu$m surface brightness
is the largest in our sample) that appears to be more like a
partial bubble than a bowshock. The inferred ambient
density of 72 cm$^{-3}$ is an outlier and is the second
largest in our sample. If this is a windblown bubble,
meaning that the star's velocity is actually quite low (perhaps
$<$10 {km~s$^{-1}$}\ instead of the assumed 30 {km~s$^{-1}$}), the resulting
mass-loss rate would drop by a factor of ten into the
regime consistent with other stars of the O8V
classification. In the case of BD+60~586, \citet{Conti1974}
designate it as an O8III rather than O7.5V
\citep{Hillwig2006}, which would explain its position at the
high-$\dot M$ side of our sample.
Objects having O9--O9.5 spectral types form a tightly
bunched vertical band in Figure~\ref{fig:mdot} covering the
range $10^{-8}$ $M_\odot$~yr$^{-1}$\ $< \dot M < 10^{-7}$ $M_\odot$~yr$^{-1}$, a factor of two--three
lower than both sets of models, on average. The three
evolved stars (including $\zeta$ Oph) lie toward the upper
end of this distribution. The dispersion in this subsample
is somewhat larger than the 0.3 dex uncertainties,
suggesting some degree of variation in mass-loss rates in
this regime, although uncertainties on distance likely also
play a role given that $\dot M\propto D^2$. We conclude
that the data for late-O dwarfs (where the weak winds are
observed to be common and the weak-wind problem is thought
to be most pronounced), subdwarfs, and one giant show nearly
an order of magnitude of dispersion and lie systematically
below model predictions by a factor of about two, on average.
For the three early-B stars the situation is less clear. The
B0IVe star HD53367 lies in the lower left of
Figure~\ref{fig:mdot} at $\dot M=$2.2$\times$10$^{-9}$ $M_\odot$~yr$^{-1}$,
well below two of the three models, but in excellent
agreement with the \citet{Krticka2014} prediction. The
B0III star FN CMa lies nearly an order of magnitude above
model \citet{Lucy2010b} expectations and above the other
data points but quite near the \citet{Vink2001} prediction
at $\dot M=$2.9$\times$10$^{-7}$ $M_\odot$~yr$^{-1}$. Its prominent
bowshock nebula appears well-defined and
well-characterized. Its distance is somewhat poorly
constrained by parallax at 0.94$^{+1.1}_{-0.47}$ kpc. This
may be a case where the space velocity is significantly
different than the assumed 30 {km~s$^{-1}$}. If its space velocity
were much lower, perhaps 10 {km~s$^{-1}$}, this object would be
consistent with \citet{Lucy2010b} model expectations and with
extrapolation of the trend defined by late-O stars.
Finally, the B1V star KGK2010 2 lies an order of magnitude
above the \citet{Vink2001} prediction, outside of the regime
of the \citet{Lucy2010b} models, and two orders of
magnitude above the \citet{Krticka2017} prediction. This
nebula has the third highest surface brightness in our
sample and has one of the smallest standoff distances,
making it very compact. Our multiple optical
spectra of this star allow a range of spectral types,
B2--B0, but the luminosity class is not well constrained.
Its reddening and broadband magnitudes make it consistent
with an early-B star at the 1.32 kpc distance of Cygnus
OB2. A larger distance would only exacerbate the extremity
of its apparent mass loss rate. The infrared images of its
nebula in \citet{Kobulnicky2016, Kobulnicky2017} show a
strikingly bright and bowshock-like morphology visible at
3.6 $\mu$m through 160 $\mu$m, making it one of the few
objects among the 709-object bowshock catalog
\citep{Kobulnicky2016} detected across all seven $SST$ and
$HSO$ infrared bandpasses. The inferred ambient number
density of 160 cm$^{-3}$ is, by far, the largest in our
sample. This, coupled with the detection at even the
shortest $SST$ bandpasses, suggests an unusual interstellar
environment. This object may be running into a molecular
cloud, for instance. \citet{Kobulnicky2016} note that the
infrared SED is one of the few objects better fit by a dust
model with large PAH fraction, $q_{PAH}$=4.58\%. It has the
coolest 24-to-70 $\mu$m color temperatures among the sample
\citep[$T_{24/70}$=70~K, c.f.,][Table 5]{Kobulnicky2017}.
We attempted using DL07 models having a larger PAH fraction
with similar radiation density, but these yield smaller
emission coefficients, which only serve to increase the
resulting mass-loss rate. We conclude that the DL07 dust
models may not be adequate in this case. Perhaps the PAHs
and dust at the surface of a colder molecular structure are
being fragmented in the wind shock so that the interstellar
dust grain size distribution or grain composition adopted
by DL07 is not appropriate here.
Figure~\ref{fig:mdot2} replicates Figure~\ref{fig:mdot} with
the addition of mass-loss rates measured for the set of
Galactic O3--O9.5 dwarf stars studied separately by
\citet{Martins2005b} ({\it blue open circles}) and
\citet{Howarth1989} ({\it blue open squares}),
respectively. The discrepancy discussed in
\citet{Martins2005b} is obvious here, with the open squares
lying 0.5--1.5 dex above the open circles.\footnote{We have
shifted the effective temperatures assigned by
\citet{Martins2005b} by $-$2000 K for consistency with the
O9.5V objects in our sample.} The \citet{Martins2005b}
mass-loss rates derived from UV spectra appear consistent
with the bowshock sample at the upper end of the temperature
range but lie well below the bowshock sample in the O8--O9
regime. The mass-loss rates given by \citet{Howarth1989}
are consistently higher than the observational results
presented here at the same spectral type ({\it black filled
symbols}), but there is considerable scatter and some
overlap. The \citet{Howarth1989} values are broadly
consistent with the theoretical expectations for dwarfs.
Figure~\ref{fig:mdot2} also shows the late-O dwarf and giant
stars with mass-loss rates determined from ultraviolet
P$^{4+}$ absorption lines \citep[][{\it green x's and +'s,
respectively}]{Fullerton2006} and the same set of stars
determined from the H$\alpha$ line \citep[][{\it cyan x's
and +'s}]{Repolust2004,Markova2004}. The P$^{4+}$
measurements show a large dispersion at any given effective
temperature, but generally lie an order of magnitude below
the bowshock sample. The H$\alpha$ results lie
significantly above the bulk of the data and models,
although most of the points for dwarfs are upper limits and
are thereby formally consistent with the other data without
providing strong constraints. These upper limits underscore
the difficulty in measuring mass-loss rates for late-O
stars using the H$\alpha$ line.
\section{Discussion and Conclusions}
Mass-loss rates derived from the principle of momentum
balance and those predicted by two theoretical frameworks
\citep{Vink2001, Lucy2010b} in Figure~\ref{fig:mdot} display
a similar trend with effective temperature, but the
momentum-balance values lie lower by an average factor of
about two. Knowledge of the star's velocity, stellar wind
velocity, ambient density, and bowshock size yields
mass-loss rates in good agreement
with the \citet{Howarth1989} UV analysis, but substantially
larger than more modern analyses of UV spectra in
conjunction with atmosphere models such as CMFGEN
\citep{Hillier1998}. Our results are factors of several lower than
H$\alpha$-based measurements, uncorrected for clumping, consistent with current
consensus that a correction by factors of several for clumping is required.
That the dispersion in the present results
and the overall slope and zero point of the $\dot M$--$T_{eff}$ relation are
similar to those of other techniques and models suggests promise for
the momentum-balance method, employed here
for the first time using a sizable sample. Concomitantly,
this could be seen as an indirect validation of the DL07
emission coefficients for dust within bowshock nebulae---an
environment where it cannot be taken for granted that the
prescriptions for typical interstellar dust size
distributions, compositions, PAH absorption cross sections,
grain heat capacities, dielectric functions, etc., will
apply. It would not be unreasonable, {\it a priori}, to
expect that within bowshock nebulae shocks act to fragment
grains (as we speculate in the case of KGK2010 2) so that
the DL07 models are inappropriate. This is evidently not
the case for the majority of our sample. Viewed from
another perspective, the general agreement in
Figure~\ref{fig:mdot} could be seen as a validation of the
\citet{Vink2001} and \citet{Lucy2010b} theoretical
predictions using an observational technique that is
unaffected by effects like clumping that plague
density-squared diagnostics or the ionization structure
uncertainties associated with absorption line diagnostics.
It remains unclear whether the 0.2--0.4 dex offset between
the theoretical expectations and the bowshock sample can
best be reconciled by identifying a systematic problem with
the bowshock formalism outlined here or by further
refinement in the theoretical treatment of stellar winds.
Mass-loss rates derived here lie closer to model
expectations than other observational results, ameliorating,
but not fully resolving, the weak wind problem. This
qualified success of the momentum-balance approach may be
used to refine traditional mass loss diagnostics for
application to stars which lack bowshock nebulae (the vast
majority!).
The disparity between various $\dot M$ determinations
evidenced in Figure~\ref{fig:mdot2} reflects the historical
measurement problems discussed extensively in the literature
regarding late-O stars. Resolving disparities with $n^2$
methods such as H$\alpha$ is generally attempted by invoking
{\it ad hoc} wind clumping factors of several
\citep{Fullerton2006,Prinja2010}. Resolving disparities
with UV absorption diagnostics coupled with theoretical model
atmospheres has been attempted by invoking modifications of
the ionization structure \citep{Lucy2010b} or X-ray ionized
winds \citep{Huenemoerder2012}. While the new measurements
here do not help identify specific problems with classical $\dot
M$ measurements, the self-consistency, lack of adjustable
parameters, and better agreement with recent theoretical
developments may help in revising those techniques.
The inferred interstellar ambient densities in
Table~\ref{tab:derived} provide some insight regarding the
conditions where bowshocks form. \citet{Peri2012} and
\citet{Peri2015} concluded that only 10--15\% of
high-proper motion massive stars showed evidence of infrared
bowshocks. \citet{Huthoff2002} argued that
interstellar density likely plays a larger role than space
velocity or stellar spectral type in creating an observable
nebula, a conclusion supported by hydrodynamical simulations
of \citet{Comeron1998} and \citet{Meyer2016}. They found
that slightly supersonic velocities, strong stellar winds,
and larger ambient interstellar densities
$n_a>$0.1~cm$^{-3}$ resulted in visible bowshock
nebulae. Our range of densities runs from $n_a=$1.2 to 160
cm$^{-3}$ with a mean of $n_a=$24 cm$^{-3}$. $\zeta$ Oph
has the second lowest density with $n_a=$2.3 cm$^{-3}$.
Its 70 $\mu$m surface brightness is the lowest in our
sample, leading \citet{Kobulnicky2017} to note that it would
not likely be detectable if it were not located 23\degr\
above the Galactic Plane in a region of low infrared
background. For most of the objects which lie within a
degree of the Plane, we infer that ambient densities of
$n_a\gtrsim$5 cm$^{-3}$ appear to be required.
The uncertainties on our measurements are dominated, at
present, by the lack of data on space velocities for each
star in the frame of the local interstellar medium. Because
mass-loss rates scale as $V_a^2$, our adoption of a single
$V_a$=30 {km~s$^{-1}$}\ likely leads to significant errors in some
cases. True three-dimensional space velocities and accurate
distances, such as will be provided by the GAIA mission
\citep{Gaia2016a}, should be available for many stars of
interest in the near future so that more precise mass loss
rates will be possible. We are presently conducting a
spectroscopic survey of bowshock stars that will provide
needed data (spectral classifications, stellar temperatures,
radii, radial velocities) for a much larger sample. An
enhanced sample of early-B stars will result from this work,
so that mass-loss rate determinations in this low-temperature,
weak-wind regime should finally be possible.
\acknowledgments This work has been supported by the
National Science Foundation through grants AST-1412845 and
AST-1560461 (REU). We thank Nathan Smith and an anonymous
reviewer for suggestions that improved
this manuscript.
\vspace{5mm}
\facilities{SST, WISE, HSO, HIPPARCOS}
\newpage
\begin{deluxetable}{rcrcrrrrrrrrrrr}
\tablecaption{Measured \& adopted parameters for stars and their bowshock nebulae \label{tab:basic}}
\rotate
\tablehead{
\colhead{ID} &\colhead{Name}&\colhead{Alt. name}&\colhead{Sp.T.}&\colhead{Ref.}&\colhead{T$_{eff}$}&\colhead{R$_*$ } &\colhead{Mass}&\colhead{$V_\infty$} &\colhead{D} &\colhead{Ref. }&\colhead{$R_0$} &\colhead{$R_0$}&\colhead{Peak$_{70}$} &\colhead{$\ell$} \\
\colhead{} &\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ } &\colhead{(K)} &\colhead{($R_\odot$)}&\colhead{(M$_\odot$)} &\colhead{({km~s$^{-1}$})} &\colhead{(kpc)}&\colhead{ } &\colhead{(arcsec)} &\colhead{(pc)} &\colhead{(10$^7$ Jy sr$^{-1}$)}&\colhead{(arcsec)} \\
\colhead{(1)}&\colhead{(2)} &\colhead{(3)} &\colhead{(4)} &\colhead{(5)} &\colhead{(6)} &\colhead{(7)} &\colhead{(8)} &\colhead{(9)} &\colhead{(10)} &\colhead{(11)}&\colhead{(12)} &\colhead{(13)} &\colhead{(14)} &\colhead{(15)}
}
\startdata
13 & $\zeta$ Oph & G006.2812+23.5877 & O9.2IV & S1 & 31000 & 10 & 19 & 1300 & 0.11 & D1 &299 & 0.175 & 12.3 & 277 \\
67 & NGC 6611 ESL 45& G017.0826+00.9744 & O9V & S2 & 31500 & 7.7 & 18 & 1300 & 1.99 & D2 &7.5 & 0.080 & 64.0 & 15 \\
329 & KGK 2010 10 & G077.0505$-$00.6094 & O8V & S3 & 33400 & 8.5 & 23 & 2000 & 4.1 & D3\tablenotemark{a}&10 & 0.219 & 15.4 & 27 \\
331 & LS II +39 53 & G078.2869+00.7780 & O7V & S4 & 35500 & 9.3 & 26 & 2500 & 1.32 & D3\tablenotemark{a}&42 & 0.296 & 12.0 & 55 \\
338 & CPR2002A10 & G078.8223+00.0959 & O8V: & S3 & 33400 & 8.5 & 23 & 1200 & 1.32 & D3\tablenotemark{a}&23 & 0.162 & 79.8 & 29 \\
339 & CPR2002A37 & G080.2400+00.1354 & O5V((f)) & S5 & 41500 & 11.1& 37 & 2900 & 1.32 & D3\tablenotemark{a}&70 & 0.493 & 29.9 & 47 \\
341 & KGK2010 1 & G080.8621+00.9749 & O9V & S3 & 31500 & 7.7 & 18 & 1300 & 1.32 & D3\tablenotemark{a}&20 & 0.141 & 5.0 & 31 \\
342 & KGK2010 2 & G080.9020+00.9828 & B1V: & S3 & 26000 & 6 & 10 & 800 & 1.32 & D3\tablenotemark{a}&10 & 0.070 & 57.8 & 14 \\
344 & BD +43 3654 & G082.4100+02.3254 & O4If & S6 & 40700 & 19 & 58 & 3000 & 1.32 & D3\tablenotemark{a}&193 & 1.359 & 58.6 & 170 \\
368 & KM Cas & G134.3552+00.8182 & O9.5V((f)) & S7 & 30500 & 7.4 & 16 & 1200 & 1.95 & D4\tablenotemark{b}&14 & 0.146 & 24.9 & 22 \\
369 & BD +60 586 & G137.4203+01.2792 & O7.5V/O8III & S8 & 34400 & 8.9 & 24 & 2500 & 1.95 & D4\tablenotemark{b}&73 & 0.759 & 7.9 & 39 \\
380 & HD 53367 & G223.7092$-$01.9008 & B0IVe & S9 & 28000 & 7 & 15 & 1200 & 0.26 & D1 &15 & 0.021 & 35.7 & 49 \\
381 & HD 54662 & G224.1685$-$00.7784 & O7Vzvar?\tablenotemark{c} & S1 & 35500 & 9.4 & 26 & 2500 & 0.63 & D1 &220 & 0.739 & 4.6 & 200\\
382 & FN CMa & G224.7096$-$01.7938 & B0III & S9 & 28000 & 15 & 20 & 1200 & 0.94 & D1 &101 & 0.506 & 11.2 & 70 \\
406 & HD 92607 & G287.1148$-$01.0236 & O9IVn & S1 & 31100 & 10 & 20 & 1300 & 2.35 & D5\tablenotemark{d}&16 & 0.201 & 29.1 & 26 \\
407 & HD 93249 & G287.4071$-$00.3593 & O9III+O: & S1 & 30700 & 13.6& 23 & 1300 & 2.35 & D5\tablenotemark{d}&7.8 & 0.098 & 58.2 & 25 \\
409 & HD 93027 & G287.6131$-$01.1302 & O9.5IVvar\tablenotemark{e}& S10 & 30300 & 10 & 16 & 1200 & 2.35 & D5\tablenotemark{d}&7.4 & 0.093 & 20.8 & 17 \\
410 & HD 305536 & G287.6736$-$01.0093 & O9.5V+?\tablenotemark{f} & S1 & 30500 & 7.4 & 15 & 1200 & 2.35 & D5\tablenotemark{d}&3.7 & 0.046 & 91.4 & 14 \\
411 & HD 305599 & G288.1505$-$00.5059 & O9.5V & S11 & 30500 & 7.4 & 15 & 1200 & 2.35 & D5\tablenotemark{d}&4.2 & 0.052 & 41.5 & 16 \\
413 & HD 93683 & G288.3138$-$01.3085 & O9V+B0V\tablenotemark{g} & S11 & 31500 & 7.7 & 18 & 1300 & 2.35 & D5\tablenotemark{d}&15 & 0.188 & 15.4 & 24 \\
\enddata
\tablecomments{(1) Identifier from \citet{Kobulnicky2016}, (2) Common name, (3) generic identifier in galactic coordinates, (4) spectral classification, (5) reference for
spectral classification, (6) effective temperature based on spectral classification using the theoretical scale of \citet{Martins2005a}, (7) stellar radius
based on spectral classification using the theoretical scale of \citet{Martins2005a}, (8) adopted stellar mass, (9) adopted terminal wind velocity from \citet{Mokiem2007}, (10) adopted distance,
(11) reference for distance, (12) standoff distance in arcsec, (13) standoff distance in pc
using the adopted distance and angular size from \citet{Kobulnicky2017} adjusted by a statistical factor of 1.1 for projection effects, (14) peak 70 $\mu$m surface
brightness above adjacent background, (15) angular diameter of the nebula in arcsec defined by a chord intersecting the location of peak surface brightness.
References for spectral types: S1--\citet{Sota2014}; S2--\citet{Evans2005}; S3--\citet{Chick2018}; S4--\citet{Vijapurkar1993}; S5--\citet{Hanson2003};
S6--\citet{Comeron2007}; S7--\citet{Massey1995}; S8--\citet{Hillwig2006} gives O7.5V but \citet{Conti1974} lists O8III; S9--\citet{Tjin2001}; S10--\citet{Sota2011}; S11--\citet{Alexander2016}.
References for distances: D1--\citet{Vanleeuwen2007}; D2--\citet{Hillenbrand1993}; D3--\citet{Kiminki2015}; D4--\citet{Xu2006}; D5--\citet{Smith2006}.
}
\tablenotetext{a}{Assumed to be at a similar distance as Cygnus OB2 \citep{Kiminki2015} based on similar magnitude and reddening, but see notes in text on KGK2010 10.}
\tablenotetext{b}{Assumed to be in the Perseus spiral arm as part of the Cas OB6 Association near the W3/W4/W5 star forming regions
having maser parallax measurements by \citet{Xu2006}. This is consistent with the open cluster photometric distance of 2.2$\pm$0.2 kpc \citep{Lim2014}.}
\tablenotetext{c}{The possible double-lined nature (O6.5V+O7-9V, 2119 d period) of this source was reported by \citet{Boyajian2007} but not confirmed by \cite{Sota2014}.}
\tablenotetext{d}{Understood to be part of the Carina Nebula complex at a distance of 2.35 kpc \citep{Smith2006}, consistent with other contemporary determinations.}
\tablenotetext{e}{A single-lined eclipsing binary according to \citet{Sota2011}, suggesting a significant difference in mass between the primary and secondary star. }
\tablenotetext{f}{A possible single-lined spectroscopic binary according to \citet{Levato1990}. }
\tablenotetext{g}{A double-lined spectroscopic binary according to \citet{Alexander2016}. }
\end{deluxetable}
\newpage
\begin{deluxetable}{rcrcrrrrr}
\tablecaption{Derived parameters for stars and their bowshock nebulae \label{tab:derived}}
\tablehead{
\colhead{ID} &\colhead{Name}&\colhead{Alt. name}&\colhead{Sp.T.}&\colhead{Lum.} &\colhead{$U$}&\colhead{$j_\nu$} &\colhead{$n_a$} &\colhead{$\dot M$} \\
\colhead{} &\colhead{ }&\colhead{ }&\colhead{ }&\colhead{(10$^4$ L$_\odot$)}&\colhead{} &\colhead{(Jy sr$^{-1}$ cm$^2$ nuc$^{-1}$)} &\colhead{(cm$^{-3}$)}&\colhead{(M$_\odot$~yr$^{-1}$)} \\
\colhead{(1)}&\colhead{(2)} &\colhead{(3)} &\colhead{(4)} &\colhead{(5)} &\colhead{(6)}&\colhead{(7)} &\colhead{(8)} &\colhead{(9)}
}
\startdata
13 & $\zeta$ Oph & G006.2812+23.5877 & O9.2IV & 8.1 & 4.2$\times10^3$ & 8.7$\times10^{-12}$ & 2.3& 5.4$\times$10$^{-8}$ \\
67 & NGC 6611 ESL 45& G017.0826+00.9744 & O9V & 5.1 & 1.3$\times10^4$ & 1.1$\times10^{-11}$ & 32 & 6.2$\times$10$^{-8}$ \\
329 & KGK 2010 10 & G077.0505$-$00.6094 & O8V+? & 7.9 & 2.7$\times10^3$ & 8.4$\times10^{-12}$ & 1.3& 2.5$\times$10$^{-8}$ \\
331 & LS II +39 53 & G078.2869+00.7780 & O7V & 12.0& 2.2$\times10^3$ & 8.3$\times10^{-12}$ & 6.6& 4.5$\times$10$^{-8}$ \\
338 & CPR2002A10 & G078.8223+00.0959 & O8V: & 7.9 & 4.9$\times10^3$ & 8.9$\times10^{-12}$ & 72 & 3.3$\times$10$^{-7}$ \\
339 & CPR2002A37 & G080.2400+00.1354 & O5V((f)) & 32.0& 2.1$\times10^3$ & 8.3$\times10^{-12}$ & 21 & 3.1$\times$10$^{-7}$ \\
341 & KGK2010 1 & G080.8621+00.9749 & O9V & 5.1 & 4.2$\times10^3$ & 8.7$\times10^{-12}$ & 15 & 1.4$\times$10$^{-8}$ \\
342 & KGK2010 2 & G080.9020+00.9828 & B1V: & 1.4 & 4.7$\times10^3$ & 8.9$\times10^{-12}$ & 160& 1.4$\times$10$^{-7}$ \\
344 & BD +43 3654 & G082.4100+02.3254 & O4If & 87.0& 7.6$\times10^2$ & 8.0$\times10^{-12}$ & 19 & 1.3$\times$10$^{-6}$ \\
368 & KM Cas & G134.3552+00.8182 & O9.5V((f)) & 4.1 & 3.2$\times10^3$ & 8.5$\times10^{-12}$ & 21 & 7.7$\times$10$^{-8}$ \\
369 & BD +60 586 & G137.4203+01.2792 & O7.5V & 9.7 & 2.7$\times10^2$ & 7.8$\times10^{-12}$ & 18 & 1.9$\times$10$^{-7}$ \\
380 & HD 53367 & G223.7092$-$01.9008 & B0IVe & 2.6 & 9.9$\times10^4$ & 3.0$\times10^{-11}$ & 16 & 2.2$\times$10$^{-9}$ \\
381 & HD 54662 & G224.1685$-$00.7784 & O7Vzvar? & 12.0& 3.7$\times10^2$ & 7.9$\times10^{-12}$ & 1.2& 6.4$\times$10$^{-8}$ \\
382 & FN CMa & G224.7096$-$01.7938 & B0III & 12.0& 7.7$\times10^2$ & 8.0$\times10^{-12}$ & 18 & 2.9$\times$10$^{-7}$ \\
406 & HD 92607 & G287.1148$-$01.0236 & O9IVn & 8.2 & 3.3$\times10^3$ & 8.5$\times10^{-12}$ & 16 & 1.1$\times$10$^{-7}$ \\
407 & HD 93249 & G287.4071$-$00.3593 & O9III+O: & 14.0& 2.5$\times10^4$ & 1.3$\times10^{-11}$ & 12 & 3.5$\times$10$^{-8}$ \\
409 & HD 93027 & G287.6131$-$01.1302 & O9.5IVvar & 7.4 & 1.4$\times10^4$ & 1.1$\times10^{-11}$ & 9 & 2.2$\times$10$^{-8}$ \\
410 & HD 305536 & G287.6736$-$01.0093 & O9.5V+? & 4.1 & 3.1$\times10^4$ & 1.5$\times10^{-11}$ & 29 & 2.1$\times$10$^{-8}$ \\
411 & HD 305599 & G288.1505$-$00.5059 & O9.5V & 4.1 & 2.4$\times10^4$ & 1.3$\times10^{-11}$ & 6 & 1.2$\times$10$^{-8}$ \\
413 & HD 93683 & G288.3138$-$01.3085 & O9V+B0V & 5.1 & 2.4$\times10^3$ & 8.3$\times10^{-12}$ & 11 & 5.7$\times$10$^{-8}$ \\
\enddata
\tablecomments{(1) Identifier from \citet{Kobulnicky2016}, (2) Common name, (3) generic identifier in galactic coordinates, (4) spectral classification, (5)
stellar luminosity computed from effective temperature and radius in Table~\ref{tab:basic}, (6) dimensionless ratio of the radiant energy
density (in erg~cm$^{-3}$) from the star to the mean
interstellar radiant energy density estimated by \citet[][MMP83]{Mathis1983}, as tabulated by \citet{Kobulnicky2017}, (7) dust emission coefficient
expressing the energy emitted per second per steradian per Hertz per {\it nucleon}, (8) ambient interstellar number density, computed from Equation 5,
(9) computed mass-loss rate in solar masses per year.}
\end{deluxetable}
\newpage
|
{
"timestamp": "2018-03-08T02:12:03",
"yymm": "1803",
"arxiv_id": "1803.02794",
"language": "en",
"url": "https://arxiv.org/abs/1803.02794"
}
|
\section{Discussion}
In this paper we have presented an algorithm, {\textsc{Sever}}{}, that has both strong
theoretical robustness properties in the presence of outliers, and performs well
on real datasets. {\textsc{Sever}}{} is based on the idea that learning can often be cast as the
problem of finding an approximate stationary point of the loss, which can in turn
be cast as a robust mean estimation problem, allowing us to leverage existing
techniques for efficient robust mean estimation.
There are a number of directions along which {\textsc{Sever}}{} could be improved: first,
it could be extended to handle more general assumptions on the data; second,
it could be strengthened to achieve better error bounds in terms of the fraction
of outliers; finally, one could imagine \emph{automatically learning} a feature representation
in which {\textsc{Sever}}{} performs well. We discuss each of these ideas in detail below.
\paragraph{More general assumptions.}
The main underlying assumption on which {\textsc{Sever}}{} rests is that the top singular value
of the gradients of the data is small. While this appeared to hold true on the datasets
we considered, a common occurrence in practice is for there to be \emph{a few} large singular
values, together with \emph{many} small singular values. It would therefore be
desirable to design a version of {\textsc{Sever}}{} that can take advantage of such phenomena. In addition,
it would be worthwhile to do a more detailed empirical analysis across a wide variety of
datasets investigating properties
that can enable robust estimation (the notion of \emph{resilience} in~\cite{steinhardt2018resilience} could provide a template for finding
such properties).
\paragraph{Stronger robustness to outliers.}
In theory, {\textsc{Sever}}{} has an $O(\sqrt{\epsilon})$ dependence in error
on the fraction $\epsilon$ of outliers (see Theorem~\ref{thm:stationary-point-inf}).
While without stronger assumptions this is
likely not possible to improve,
in practice we would prefer to have a dependence closer to $O(\epsilon)$.
Therefore, it would also be useful to improve {\textsc{Sever}}{} to have such an $O(\epsilon)$-dependence
under stronger but realistic assumptions. Unfortunately, all existing algorithms for robust mean
estimation that achieve error better than $O(\sqrt{\epsilon})$ either rely on strong
distributional assumptions such as Gaussianity \cite{DKKLMS16,LaiRV16},
or else require expensive computation involving e.g.~sum-of-squares optimization
\cite{HL17,KStein17,KS17}. Improving the robustness of {\textsc{Sever}}{} thus requires improvements
on the robust mean estimation algorithm that {\textsc{Sever}}{} uses as a primitive.
\paragraph{Learning a favorable representation.}
Finally, we note that {\textsc{Sever}}{} performs best when the features have small covariance and
strong predictive power. One situation in particular where this holds is when there are
many approximately independent features that are predictive of the true signal.
It would be interesting to try to
learn a representation with such a property. This could be done, for instance, by training a neural network with some cost function that encourages independent features (some ideas along these general lines are discussed in \cite{bengio2017consciousness}). An issue is how to learn such a representation robustly; one idea is to learn a representation on a dataset that is known to be free of outliers, and hope that the representation is useful on other datasets in the same application domain.\\
\\
Beyond these specific questions, we view the general investigation of robust methods (both empirically and theoretically) as an important step as machine learning moves forwards.
Indeed, as machine learning is applied in increasingly many situations and in
increasingly automated ways, it is important to attend to robustness considerations
so that machine learning systems behave reliably and
avoid costly errors. While the bulk of recent work has highlighted the vulnerabilities
of machine learning (e.g.~\cite{szegedy2014intriguing,li2016data,steinhardt2017certified,evtimov2017robust,chen2017targeted}), we are optimistic that practical algorithms backed by principled
theory can finally patch these vulnerabilities and lead to truly reliable systems.
\section{Additional Experimental Results}
\label{sec:experiments-app}
In this section, we provide additional plots of our experimental results, comparing with all baselines considered.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplottriple,name=linreg_synth_app, align=center, title={Regression: Synthetic data}, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, legend columns= 4, legend style={anchor=north west, xshift=-0.15 \plotwidth, yshift=-0.8\plotheight}]
\addplot[teal] table[x=eps, y=err] {figures/linreg_synth/uncorrupted.txt};
\addplot[red] table[x=eps, y=err] {figures/linreg_synth/corrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_synth/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_synth/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_synth/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_synth/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_synth/sever.txt};
\legend{uncorrupted, {\bf noDefense}{}, {\bf l2}{}, {\bf loss}{}, {\bf gradientCentered}{}, {\bf RANSAC}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplottriple,name=linreg_qsar_app, align=center, at=(linreg_synth_app.north east),anchor=north west, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, xshift=\plotxspacing,ignore legend, title={Regression: Drug discovery data}, ymin = 1, ymax = 2]
\addplot[teal] table[x=eps, y=err] {figures/linreg_qsar/uncorrupted.txt};
\addplot[red] table[x=eps, y=err] {figures/linreg_qsar/corrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_qsar/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_qsar/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_qsar/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_qsar/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_qsar/sever.txt};
\end{axis}
\begin{axis}[errplottriple,name=linreg_qsar_worst_app, align=center, at=(linreg_qsar_app.north east),anchor=north west, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, xshift=\plotxspacing,ignore legend, title={Regression: Drug discovery data, \\ attack targeted against {\textsc{Sever}}{}}, ymin = 1, ymax = 2]
\addplot[teal] table[x=eps, y=err] {figures/linreg_qsar_worst/uncorrupted.txt};
\addplot[red] table[x=eps, y=err] {figures/linreg_qsar_worst/corrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_qsar_worst/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_qsar_worst/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_qsar_worst/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_qsar_worst/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_qsar_worst/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ vs test error for baselines and {\textsc{Sever}}{} on synthetic data and the drug discovery dataset. The left and middle figures show that {\textsc{Sever}}{} continues to maintain statistical accuracy against our attacks, which are able to defeat previous baselines. The right figure shows an attack with parameters chosen to increase the test error of {\textsc{Sever}}{} on the drug discovery dataset as much as possible. Despite this, {\textsc{Sever}}{} still has relatively small test error.}
\label{label:acc-vs-eps-linreg-app}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplot,name=svm_synth_loss_app, legend style={anchor=north west, xshift=-0.7 \plotwidth, yshift=-1.2\plotheight}, legend columns=4, title={SVM: Strongest attacks against {\bf loss}{} on synthetic data}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/uncorrupted.txt};
\addplot[red, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/corrupted.txt};
\addplot[cyan, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/l2.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/sever.txt};
\legend{uncorrupted, {\bf noDefense}{}, {\bf l2}{}, {\bf loss}{}, {\bf gradient}{}, {\bf gradientCentered}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplot,name=svm_synth_sever_app, at=(svm_synth_loss_app.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks against {\textsc{Sever}}{} on synthetic data}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/uncorrupted.txt};
\addplot[red, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/corrupted.txt};
\addplot[cyan, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/l2.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ vs test error for baselines and {\textsc{Sever}}{} on synthetic data. The left figure demonstrates that {\textsc{Sever}}{} is accurate when outliers manage to defeat previous baselines.
The right figure shows the result of attacks which increased the test error the most against {\textsc{Sever}}{}. Even in this case, {\textsc{Sever}}{} performs much better than the baselines.}
\label{fig:svm-synthetic-app}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplottriple,name=svm_enron_gradientCentered-app, align=center, title={SVM: Strongest attacks against \\ {\bf gradientCentered}{} on Enron}, legend columns= 4, legend style={anchor=north west, xshift=-0.2 \plotwidth, yshift=-0.8\plotheight}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/uncorrupted.txt};
\addplot[red, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/corrupted.txt};
\addplot[cyan, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/l2.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/sever.txt};
\legend{uncorrupted, {\bf noDefense}{}, {\bf l2}{}, {\bf loss}{}, {\bf gradient}{}, {\bf gradientCentered}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplottriple,name=svm_enron_loss-app, align=center, at=(svm_enron_gradientCentered-app.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks \\ against {\bf loss}{} on Enron}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/uncorrupted.txt};
\addplot[red, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/corrupted.txt};
\addplot[cyan, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/l2.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/sever.txt};
\end{axis}
\begin{axis}[errplottriple,name=svm_enron_sever-app, align=center, at=(svm_enron_loss-app.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks \\ against {\textsc{Sever}}{} on Enron}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/uncorrupted.txt};
\addplot[red, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/corrupted.txt};
\addplot[cyan, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/l2.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ versus test error for baselines and {\textsc{Sever}}{} on the Enron spam corpus.
The left and middle figures show the attacks which perform best against two baselines, while the right figure shows the attack which performs best against {\textsc{Sever}}{}.
Though other baselines may perform well in certain cases, only {\textsc{Sever}}{} is consistently accurate.
The exception is for certain attacks at $\epsilon = 0.03$, which, as shown in Figure~\ref{fig:spam-histogram}, require three rounds of outlier removal for any method to obtain reasonable test error -- in these plots, our defenses perform only two rounds.}
\label{fig:spam-results-app}
\end{figure}
\section{Experiments}
\label{sec:experiments}
In this section we apply {\textsc{Sever}}{} to regression and classification problems.
Our code is available at~\url{https://github.com/hoonose/sever}.
As our base learners, we used ridge regression and an SVM, respectively. We
implemented the latter as a quadratic program, using Gurobi~\cite{gurobi2016}
as a backend solver and YALMIP~\cite{lofberg2004} as the modeling language.
In both cases, we ran the base learner and then extracted gradients for each data point
at the learned parameters. We then centered the gradients and ran MATLAB's \texttt{svds} method
to compute the top singular vector $v$, and removed the top $p$ fraction of points $i$ with the
largest \emph{outlier score} $\tau_i$,
computed as the squared magnitude of the projection onto $v$
(see Algorithm~\ref{alg:sever}).
We repeated this for $r$ iterations in total. For classification,
we centered the gradients separately (and removed points separately) for each class,
which improved performance.
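A minimal sketch of this filtering step is given below; the actual implementation at the repository above is in MATLAB and uses \texttt{svds}, so the following Python/NumPy version is only illustrative.
\begin{verbatim}
import numpy as np

def sever_filter(grads, p):
    """One filtering round: score each point by the squared projection of its
    centered gradient onto the top singular direction, and drop the top p
    fraction of points with the largest scores."""
    centered = grads - grads.mean(axis=0)           # center per-point gradients
    # top right-singular vector of the n x d matrix of centered gradients
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]
    scores = (centered @ v) ** 2                    # outlier scores tau_i
    n_remove = int(np.ceil(p * len(grads)))
    keep = np.argsort(scores)[: len(grads) - n_remove]
    return keep

# usage: grads[i] = gradient of the loss at point i, evaluated at the fitted
# parameters; rerun the base learner on the kept indices and repeat for r rounds.
\end{verbatim}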
We compared our method to six baseline methods. All but one of these have the same high-level form
as {\textsc{Sever}}{} (run the base learner, then filter out the top $p$ fraction of points with the largest score),
but use a different definition of the score $\tau_i$ for deciding which points to filter:
\begin{itemize}
\item {\bf noDefense}: no points are removed.
\item {\bf l2}: remove points where the covariate $x$ has large $\ell_2$ distance from the mean.
\item {\bf loss}: remove points with large loss (measured at the parameters output by the base learner).
\item {\bf gradient}: remove points with large gradient (in $\ell_2$-norm).
\item {\bf gradientCentered}: remove points whose gradients are far from the mean gradient in $\ell_2$-norm.
\item {\bf RANSAC}: repeatedly subsample points uniformly at random, and find the best fit with the subsample. Then, choose the best fit amongst this set of learners. Note that this method is not ``filter-based''.\footnote{In practice, heuristics must often be applied to choose the best fit. In our experiments, we ``cheat'' slightly by in fact choosing the best fit post-hoc by reporting the best error achieved by any learner in this way. Despite strengthening {\bf RANSAC}{} in this way, we observe that it still has poor performance.}
\end{itemize}
Note that {\bf gradientCentered}{} is similar to our method, except that it removes large gradients in terms of
$\ell_2$-norm, rather than in terms of projection onto the top singular vector.
As before, for classification we compute these metrics separately for each class.
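For concreteness, the per-point scores used by the filter-based baselines can be sketched as follows (again an illustrative Python/NumPy version, not the implementation used in the experiments).
\begin{verbatim}
import numpy as np

def baseline_scores(X, losses, grads, kind):
    """Per-point outlier scores for the filter-based baselines."""
    if kind == "l2":                 # distance of covariates from their mean
        return np.linalg.norm(X - X.mean(axis=0), axis=1)
    if kind == "loss":               # loss at the fitted parameters
        return losses
    if kind == "gradient":           # gradient norm
        return np.linalg.norm(grads, axis=1)
    if kind == "gradientCentered":   # distance of gradients from the mean gradient
        return np.linalg.norm(grads - grads.mean(axis=0), axis=1)
    raise ValueError(kind)
\end{verbatim}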
Both ridge regression and SVM have a single hyperparameter (the regularization coefficient).
We optimized this based on the uncorrupted data and then kept it fixed throughout our
experiments. In addition, since the data do not already have outliers, we added varying
amounts of outliers (ranging from $0.5\%$ to $10\%$ of the clean data); this process is
described in more detail below.
\input{linreg-experiments}
\input{svm-experiments}
\section{An Alternative Algorithm: Robust Filtering in Each Iteration}
\label{sec:general-algo}
In this section, we describe another algorithm for robust stochastic optimization. This algorithm uses standard robust mean estimation techniques to compute approximate gradients pointwise, which it then feeds into a standard projected gradient descent algorithm. In practice this algorithm turns out to be somewhat slower than the one employed in the rest of this paper, because it runs a filtering algorithm at every step of the projected gradient descent and does not remember which points were filtered between iterations. On the other hand, we present this algorithm for two reasons: first, it is a conceptually simpler interpretation of the main ideas of this paper, and second, it works under somewhat more general assumptions. In particular, this algorithm only requires that for each $w\in\mathcal{H}$ there is a corresponding good set of functions, rather than that a single good set works simultaneously for all $w$.
In particular, we can make do with the following somewhat weaker assumption:
\begin{assumption}
\label{ass:many-good-sets}
Fix $0<\eps<1/2$ \new{and parameter $\sigma \in \R_+$.}
For each $w \in \mathcal{H}$, there exists an unknown set $I_{\mathrm{good}} \new{= I_{\mathrm{good}}(w)} \subseteq [n]$
with $|I_{\mathrm{good}}| \geq (1-\eps)n$
of ``good'' functions $\{f_i\}_{i \in I_{\mathrm{good}}}$ such that:
\begin{equation}
\Big\|\E_{I_{\mathrm{good}}}\big[\big(\nabla f_i(w) - \nabla \bar{f}(w)\big)\big(\nabla f_i(w) - \nabla \bar{f}(w)\big)^T\big]\Big\|_2 \leq \sigma^2 \;,
\end{equation}
and
\begin{equation}
\|\nabla \fhat(w) - \nabla \bar{f}(w)\|_2 \leq \sigma\sqrt{\epsilon},
\textrm{ where } \fhat \stackrel{\text{def}}{=} \frac{1}{|I_{\mathrm{good}}|} \sum_{i \in I_{\mathrm{good}}} f_i \;.
\end{equation}
\end{assumption}
We make essential use of the following result, which appears in both~\cite{DKK+17, steinhardt2018resilience}:
\begin{theorem}
\label{thm:mean-estimation}
Let $\mu \in \mathbb{R}^d$ and a collection of points $x_i \in \mathbb{R}^d$, $i \in [n]$ and $\sigma>0$.
Suppose that there exists $I_{\mathrm{good}} \subseteq [n]$ \new{with $|I_{\mathrm{good}}| \geq (1-\eps)n$}
satisfying the following:
\begin{equation}
\label{eq:mean-estimation-assumptions}
\frac{1}{|I_{\mathrm{good}}|} \sum_{i \in I_{\mathrm{good}}} (x_i - \mu)(x_i - \mu)^{\top} \preceq \sigma^2 I \text{ and } \big\|\frac{1}{|I_{\mathrm{good}}|} \sum_{i \in I_{\mathrm{good}}} (x_i - \mu)\big\|_2 \leq \sigma \sqrt{\epsilon}.
\end{equation}
Then, if $\epsilon < \epsilon_0$ for some universal constant $\epsilon_0$,
there is an efficient algorithm, Algorithm ${\cal A}$, which outputs an estimate $\hat{\mu} \in \mathbb{R}^d$ such that
$\|\hat{\mu} - \mu\|_2 = O(\sigma \sqrt{\epsilon})$.
\end{theorem}
Our general robust algorithm for stochastic optimization will make calls to Algorithm ${\cal A}$
in a black-box manner, as well as to the projection operator onto $\mathcal{H}$.
We will measure the cost of our algorithm by the total number of such calls.
\begin{remark}
While it is not needed for the theoretical results established in this subsection,
we note that the robust mean estimation algorithm of~\cite{DKK+17}
relies on an iterative outlier removal method only requiring basic eigenvalue computations
(SVD), while the~\cite{steinhardt2018resilience} algorithm employs semidefinite programming.
In our experiments, we use the algorithm in~\cite{DKK+17} and variants thereof.
\end{remark}
\medskip
Using the above black-box, together with known results on convex optimization with errors, we obtain
the following meta-theorem:
\begin{theorem}\label{thm:opt-theorem}
For functions $f_1, \ldots, f_n: \mathcal{H} \to \R$, bounded below on a closed domain $\mathcal{H}$, suppose that
Assumption \ref{ass:many-good-sets} is satisfied with some parameters $\eps, \sigma>0$. Then there exists an efficient algorithm that finds an $O(\sigma\sqrt{\eps})$-approximate critical point of $\bar{f}$.
\end{theorem}
\begin{proof}
We note that by applying Algorithm $\mathcal{A}$ to $\{\nabla f_i(w)\}$, we can approximate $\nabla \bar{f}(w)$ with error $O(\sigma\sqrt{\eps})$ at any given $w$. Standard projected gradient descent algorithms can be made to run efficiently even when the gradients supplied are only approximate, and this yields the desired $O(\sigma\sqrt{\eps})$-approximate critical point.
\end{proof}
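To illustrate the proof, the following hedged sketch shows the resulting procedure: at every iterate the per-sample gradients are aggregated by a black-box robust mean estimator standing in for Algorithm $\mathcal{A}$, and the result drives projected gradient descent. The step size, iteration count, and the toy objective in the usage example are placeholders; the coordinate-wise median used there is only a crude stand-in for Algorithm $\mathcal{A}$, chosen to keep the example self-contained.
\begin{verbatim}
import numpy as np

def robust_pgd(grads, project, w0, robust_mean, step=0.1, iters=200):
    """Projected gradient descent driven by robustly estimated gradients.

    grads(w)       -> (n, d) array whose i-th row is the gradient of f_i at w.
    project(w)     -> Euclidean projection of w onto the domain H.
    robust_mean(G) -> robust estimate of the mean row of G (Algorithm A).
    """
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        g = robust_mean(grads(w))     # approximates the gradient of f-bar at w
        w = project(w - step * g)
    return w

# Toy usage on f_i(w) = 0.5 * ||w - c_i||^2 with 5% corrupted centers c_i,
# constrained to the unit ball.  The coordinate-wise median is a crude
# stand-in for Algorithm A, used only to keep the example self-contained.
rng = np.random.default_rng(0)
centers = rng.normal(0.0, 0.1, size=(500, 5))
centers[:25] += 10.0
w_hat = robust_pgd(grads=lambda w: w - centers,
                   project=lambda w: w / max(1.0, np.linalg.norm(w)),
                   w0=np.zeros(5),
                   robust_mean=lambda G: np.median(G, axis=0))
print(w_hat)
\end{verbatim}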
\subsection{Comparison with {\textsc{Sever}}{}}
\label{sec:generic-vs-sever}
In this section we give a brief comparison of this algorithm to {\textsc{Sever}}{}.
The algorithm presented in this section is much simpler to state, and also requires weaker conditions on the data.
However, because the algorithms work in somewhat different settings, the comparison is a bit delicate, so we explain in more detail below.
There is a major conceptual difference between these two algorithms: namely, {\textsc{Sever}}{} works with a \emph{black-box non-robust learner}, and requires the filter algorithm for robust mean estimation.
In contrast, the algorithm in this section works with a \emph{black-box robust mean estimation algorithm}, and plugs it into a specific non-robust learning algorithm, specifically (approximate) stochastic gradient descent.
When instantiated with the same primitives, these algorithms have similar theoretical runtime guarantees.
However, in the practical implementation, we prefer {\textsc{Sever}}{} for a couple of reasons.
First, we find that in practice {\textsc{Sever}}{} often requires only a constant number of runs of the base black-box learner, and so incurs only a constant factor overhead.
The algorithm presented in this section, in contrast, requires at least linear time per iteration of SGD, since it needs to run a robust mean estimation algorithm on the entire dataset (and the total number of iterations needed is comparable).
Since SGD typically runs in constant time per iteration, this presents a major bottleneck for scalability.
Second, we find that from a practical point of view it is much more useful to allow for black-box non-robust learners than to allow for black-box robust mean estimation algorithms.
This is simply because the former allows {\textsc{Sever}}{} to be much more problem-specific, and allows for optimizations tailored to each individual learning problem.
For instance, it is what allows us to use optimized libraries in our experiments for linear regression and SVM.
In contrast, there is relatively little reward in allowing for black-box robust mean estimation algorithms: not only are there relatively few options, but this also does not really allow us to specialize the algorithm to the problem at hand.
\section{Applications of the General Algorithm}
\label{sec:app-general}
In this section, we present three concrete applications of our general robust algorithm.
In particular, we describe how to robustly optimize models for linear regression, support vector machines, and logistic regression, in Sections~\ref{sec:app-linreg},~\ref{sec:app-svm},~\ref{sec:app-logreg}, respectively.
\subsection{Linear Regression}
\label{sec:app-linreg}
In this section, we demonstrate how our results apply to linear regression.
We are given pairs $(X_i, Y_i) \in \R^{d} \times \R$ for $i \in [n]$.
The $X_i$'s are drawn i.i.d.\ from a distribution $D_x$, and $Y_i = \langle w^*, X_i \rangle + e_i$,
for some unknown $w^* \in \R^d$ and the noise random variables $e_i$'s are drawn i.i.d.\ from some distribution $D_e$.
Given $(X_i, Y_i) \sim D_{xy}$, the joint distribution induced by this process, let $f_i(w) = (Y_i - \langle w, X_i \rangle)^2$.
The goal is then to find a $\widehat{w}$ approximately minimizing the objective function
\[
\Ef(w) = \E_{(X, Y) \sim D_{xy}} [(Y - \langle w, X \rangle)^2] \;.
\]
We work with the following assumptions:
\begin{assumption}
\label{ass:linreg}
Given the model for linear regression described above, assume the following conditions for $D_e$ and $D_x$:
\begin{itemize}
\item $\E_{e \sim D_e}[e] = 0$;
\item $\Var_{e \sim D_e}[e] \leq \xi$;
\item $\E_{X \sim D_x}[XX^T] \preceq \sigma^2 I $ for some $\sigma > 0$;
\item There is a constant $C > 0$, such that for all unit vectors $v$, $\E_{X \sim D_x} \left[ \langle v, X \rangle^4 \right] \leq C \sigma^4.$
\end{itemize}
\end{assumption}
Our main result for linear regression is the following:
\begin{theorem}\label{thm:linreg}
Let $\eps > 0$, and let $D_{xy}$ be a distribution over pairs $(X,Y)$ which satisfies the conditions of Assumption~\ref{ass:linreg}.
Suppose we are given $O\left(\frac{d^5}{\eps^2}\right)$ $\eps$-noisy samples from $D_{xy}$.
Then in either of the following two cases, there exists an algorithm that, with probability at least $9/10$, produces a $\widehat{w}$ with the following guarantees:
\begin{enumerate}
\item If $\E_{X \sim D_x}[XX^T] \succeq \gamma I$ for $\gamma \geq O(1) \cdot \sigma \sqrt{C \eps}$, then $\Ef(\widehat{w}) \leq \Ef(w^*) + O\left(\frac{(\xi+\eps)\eps}{\gamma}\right)$ and $\|\widehat{w}-w^*\|_2 = O\left(\frac{\sqrt{\xi\eps}+\eps}{\gamma}\right)$.
\item If $\|w^*\|_2 \leq r$, then $\Ef(\widehat{w}) \leq \Ef(w^*) + O(((\sqrt{\xi}+\sqrt{\eps}) r + \sqrt{C}r^2)\sqrt{\epsilon})$.
\end{enumerate}
\end{theorem}
The proof will follow from two lemmas (proved in Section~\ref{sec:linreg-cov-bound} and~\ref{sec:linreg-sample-complexity}, respectively).
First, we will bound the covariance of the gradient, in Lemma~\ref{lem:lin-reg-cov-bound}:
\begin{lemma}
\label{lem:lin-reg-cov-bound}
Suppose $D_{xy}$ satisfies the conditions of Assumption~\ref{ass:linreg}.
Then for all unit vectors $v \in \R^d$, we have
\[
v^\top \Cov_{(X, Y) \sim D_{xy}} \left[ \nabla f_i (w,(X,Y)) \right] v \leq 4\sigma^2 \xi + 4C\sigma^4 \|w^* - w\|_2^2 \;.
\]
\end{lemma}
With this in hand, we can prove Lemma~\ref{lem:linreg-sample-complexity}, giving us a polynomial sample complexity which is sufficient to satisfy the conditions of Assumption~\ref{ass:one-good-set}.
\begin{lemma} \label{lem:linreg-sample-complexity}
Suppose $D_{xy}$ satisfies the conditions of Assumption~\ref{ass:linreg}.
Given $O(d^5/\eps^2)$ $\eps$-noisy samples from $D_{xy}$, with probability at least $9/10$ they satisfy Assumption \ref{ass:one-good-set} with parameters $\sigma_0=30\sqrt{\xi}+\sqrt{\eps}$ and $\sigma_1=18\sqrt{C+1}$.
\end{lemma}
The proof concludes by applying Corollary~\ref{cor:strongly-convex-sever} or case (i) of Corollary~\ref{cor:convex-sever} for the first and second cases respectively.
\subsubsection{Proof of Lemma~\ref{lem:lin-reg-cov-bound}}
\label{sec:linreg-cov-bound}
Note that for this setting we have that $f(w, z) = f(w, x, y) = (y - \langle w, x \rangle)^2$.
We then have that $\nabla_w f(w, z) = -2 (\langle w^{\ast}-w, x \rangle + e) x$.
Our main claim is the following:
\begin{claim} \label{claim:cov-grad-formula}
We have that $\Cov[\nabla_w f(w, z)] = 4 \E_{x \sim D_x} \left[ \langle w^{\ast}-w, x \rangle^2 (xx^T) \right] + 4\Var[e] \Sigma
- 4 \Sigma (w^{\ast}-w)(w^{\ast}-w)^T \Sigma$.
\end{claim}
\begin{proof}
Let us use the notation $A = \nabla_w f(w, z)$ and $\mu = \E[A]$.
By definition, we have that
$\Cov[A] = \E[AA^T] - \mu \mu^T$.
Note that $\mu = \E_z [\nabla_w f(w, z)] = \E_z [ -2 (\langle w^{\ast}-w, x \rangle + e) x] = -2 \Sigma (w^{\ast}-w)$,
where we use the fact that $\E_z[e] = 0$ and $e$ is independent of $x$.
Therefore, $\mu \mu^T = 4 \Sigma (w^{\ast}-w) (w^{\ast}-w)^T \Sigma$.
To calculate $\E[AA^T]$, note that
$A = \nabla_w f(w, z) = -2 (\langle w^{\ast}-w, x \rangle + e)x$, and
$A^T = -2 (\langle w^{\ast}-w, x \rangle + e) x^T$.
Therefore,
$AA^T = 4 (\langle w^{\ast}-w, x \rangle^2 + e^2 + 2 \langle w^{\ast}-w, x \rangle e) (xx^T)$
and
$$\E_z [AA^T] = 4 \E_x [\langle w^{\ast}-w, x \rangle^2 (xx^T)] + 4 \Var[e] \Sigma + 0 \;,$$
where we again used the fact that the noise $e$ is independent of $x$ and its expectation is zero.
By gathering terms, we get that
$$\Cov[\nabla_w f(w, z)] = 4 \E_x [\langle w^{\ast}-w, x \rangle^2 (xx^T)] +
4 \Var[e] \Sigma - 4 \Sigma (w^{\ast}-w) (w^{\ast}-w)^T \Sigma \;.$$
This completes the proof.
\end{proof}
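The identity in Claim~\ref{claim:cov-grad-formula} can also be checked numerically. The hedged snippet below compares the empirical covariance of the gradients with the closed form, using Gaussian covariates and noise as a convenient test case (Gaussianity is not an assumption of the lemma); here $\Sigma$ denotes $\E[XX^T]$, and the sizes and seeds are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 200000
Sigma = np.diag([1.0, 0.5, 2.0, 1.5])     # covariance of x (so E[x x^T] = Sigma)
var_e = 0.3                               # variance of the noise e
w_star = rng.normal(size=d)
w = rng.normal(size=d)
delta = w_star - w

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
e = rng.normal(0.0, np.sqrt(var_e), size=n)

# Gradients of f(w,(x,y)) = (y - <w,x>)^2 with y = <w*,x> + e:
# grad = -2 (<w*-w, x> + e) x.
G = -2.0 * ((X @ delta + e)[:, None]) * X

lhs = np.cov(G, rowvar=False, bias=True)  # empirical Cov[grad f(w,z)]

# Right-hand side of the claim, with the fourth-moment expectation also
# estimated from the same sample.
first = 4.0 * (X * (X @ delta)[:, None] ** 2).T @ X / n
rhs = first + 4.0 * var_e * Sigma - 4.0 * Sigma @ np.outer(delta, delta) @ Sigma

print(np.max(np.abs(lhs - rhs)))          # small, up to sampling error
\end{verbatim}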
Given the above claim, we can bound from above the spectral norm of the covariance matrix
of the gradients as follows: Specifically, for a unit vector $v$,
the quantity $v^T \Cov[\nabla_w f(w, z)] v$ is bounded from above by a constant times the following
quantities:
\begin{itemize}
\item The first term is $v^T \E_x [\langle w^{\ast}-w, x \rangle^2 (xx^T)] v =
\E_x [\langle w^{\ast}-w, x \rangle^2 \cdot \langle v, x \rangle^2]$. By Cauchy--Schwarz and our fourth moment bound,
this is at most $C\sigma^4 \|w^{\ast}-w\|_2^2$.
\item The second term is at most $\xi \sigma^2$, since the variance of the noise is at most $\xi$ and $\Sigma \preceq \sigma^2 I$.
\item The third term is at most $v^T \Sigma (w^{\ast}-w) (w^{\ast}-w)^T \Sigma v$, which by our
bounded covariance assumption is at most $\sigma^4 \|w^{\ast}-w\|_2^2.$
\end{itemize}
Combining these bounds gives the statement of Lemma~\ref{lem:lin-reg-cov-bound}.
\subsubsection{Proof of Lemma~\ref{lem:linreg-sample-complexity}}
\label{sec:linreg-sample-complexity}
Let $S$ be the set of uncorrupted samples and let $I$ be the subset of $S$
with $\|X\|_2 \leq 2\sqrt{d}(C/\eps)^{1/4}$. We will take $I_{\mathrm{good}}$ to be the subset of $I$ consisting of the points that were not corrupted.
Firstly, we show that with probability at least $39/40$, at most an $\eps/2$-fraction of points in
$S$ have $\|X\|_2 > 2\sqrt{d}(C/\eps)^{1/4}$, and so $|I_{\mathrm{good}}| \geq (1-\eps)|S|$.
Note that $\E_D[\|X\|_2^4]= \E_D[(\sum_{j=1}^d X_j^2)^2] \leq \sum_{j=1}^d \sum_{k=1}^d \sqrt{\E_D[X_j^4]\E_D[X_k^4]} \leq C d^2$,
by Cauchy--Schwarz and the fourth moment bound (here we normalize $\sigma = 1$, so that $\E_{D_x}[X X^T] \preceq I$). Thus, by Markov's inequality,
$\Pr_D[\|X\|_2 > 2\sqrt{d}(C/\eps)^{1/4}]=\Pr_D[\|X\|_2^4 > 16Cd^2/\eps] \leq \eps/16$.
By a Chernoff bound, since $n \geq 10/\eps^2$, the fraction of uncorrupted samples exceeding this norm bound is at most $\eps/2$ with probability at least $39/40$.
Next, we show that (\ref{eq:norm-bound}) holds with probability at least $39/40$.
To do this, we will apply Lemma \ref{lem:lin-reg-cov-bound} to $I_{\mathrm{good}}$.
Since $S$ consists of independent samples and $\E_{D_e}[e^2] = \Var_{D_e}[e] \leq \xi$,
the expectation over the randomness of $S$ of $\E_S[e^2]$ is at most $\xi$.
By Markov's inequality, except with probability $1/99$,
we have that $\E_S[e^2] \leq 99\xi$, and since $I_{\mathrm{good}} \subseteq S$,
$\E_{I_{\mathrm{good}}}[e^2] \leq |S|\E_S[e^2]/|I_{\mathrm{good}}| \leq 100 \xi$.
This gives the bound on the noise variance needed to apply Lemma \ref{lem:lin-reg-cov-bound} (with $\xi$ replaced by $100\xi$).
We note that $I$ consists of $\Omega(d^5/\eps^2)$ independent samples from $D$
conditioned on $\|X\|_2 \leq 2\sqrt{d}(C/\eps)^{1/4}$, a distribution that we will call $D'$.
Since the VC-dimension of the class of all halfspaces in $\R^d$ is $d+1$,
the VC inequality implies that, except with probability $1/80$,
for every unit vector $v$ and every $T \in \R$ we have $|\Pr_I[v \cdot X > T] - \Pr_{D'}[v \cdot X > T]| \leq \eps/d^2$.
Note that for a unit vector $v$ and a positive integer $m$, $\E[(v \cdot X)^m]=\int_0^\infty m T^{m-1} \Pr[v \cdot X > T]\, dT$.
Thus we have that
\begin{align*}
\E_I[(v \cdot X)^m] & = \int_0^\infty m T^{m-1} \Pr_I[v \cdot X > T]\, dT \\
& \leq \int_0^{2d^{1/2}(C/\eps)^{1/4}} m T^{m-1} (\Pr_{D'}[v \cdot X > T] + \eps/d^2)\, dT \\
& = \E_{D'}[(v \cdot X)^m] + (2d^{1/2}(C/\eps)^{1/4})^m (\eps/d^2) \\
& \leq (1+\eps)\E_D[(v \cdot X)^m] + 2^m C^{m/4} (\eps/d^2)^{1-m/4} \;.
\end{align*}
Applying this for $m=2$ gives $\E_I[X X^T] \preceq (1 + \eps + 4\sqrt{C\eps}/d) I \preceq 2 I$,
and with $m=4$ gives $\E_I[(v \cdot X)^4] \leq (1+\eps)C + 16C$.
Similar bounds apply to $I_{\mathrm{good}}$, with an additional $1+\eps$ factor.
Thus, with probability at least $39/40$, $I_{\mathrm{good}}$ satisfies the conditions of
Lemma \ref{lem:lin-reg-cov-bound} with $\xi := 100 \xi$, $\sigma^2 := 2$ and $C := 5C$.
Hence, it satisfies (\ref{eq:norm-bound}) with $\sigma_0=20\sqrt{\xi}$ and $\sigma_1=18\sqrt{C+1}$.
For (\ref{eq:mean-everywhere}), note that $\nabla_w f_i(w)=2(w \cdot x_i - y_i)x_i =2\big(((w - w^* ) \cdot x_i)x_i - e_i x_i\big)$; we suppress the overall factor of $2$ in what follows, as it only affects the constants.
We will separately bound $\|\E_{I_{\mathrm{good}}}[((w - w^* ) \cdot X)X] - \E_D[((w - w^* ) \cdot X)X]\|_2$ and $\|\E_{I_{\mathrm{good}}}[eX]-\E_D[eX]\|_2$.
We will repeatedly make use of the following, which bounds how much removing points or probability mass affects an expectation in terms of its variance:
\begin{claim} \label{clm:expect-remove-points}
For a mixture of distributions $P=(1-\delta)Q + \delta R$, where $Q$ and $R$ are distributions and $f$ is a real-valued function, we have that $|\E_{X \sim P}[f(X)]-\E_{X \sim Q}[f(X)]| \leq 2\sqrt{\delta \E_{X \sim P} [f(X)^2]}/(1-\delta)$.\end{claim}
\begin{proof} By Cauchy-Schwarz $|\E_{X \sim R} [f(X)]| \leq \sqrt{\E_{X \sim R} [f(X)^2]} \leq \sqrt{\E_{X \sim P} [f(X)^2]/\delta}$. Since $\E_{X \sim P}[f(X)]=(1-\delta)\E_{X \sim Q}[f(X)] + \delta \E_{X \sim R} [f(X)]$, this implies that $|\E_{X \sim P}[f(X)]/(1-\delta) -\E_{X \sim Q}[f(X)]| \leq \sqrt{\delta \E_{X \sim P} [f(X)^2]}/(1-\delta)$. However $|\E_{X \sim P}[f(X)]/(1-\delta) - \E_{X \sim P}[f(X)]| = (\delta/(1-\delta)) |\E_{X \sim P}[f(X)]| \leq \sqrt{\delta \E_{X \sim P} [f(X)^2]}/(1-\delta)$ and the triangle inequality gives the result.
\end{proof}
We can apply this to $P=I$ and $Q=I_{\mathrm{good}}$ with $\delta=\eps/2$
and also to $P=D$ and $Q=D'$ with $\delta=\eps/16$, with error $2\sqrt{\delta}/(1-\delta) \leq 2\sqrt{\eps}$ in either case.
For the first of the two terms, we have $\|\E_{I_{\mathrm{good}}}[((w - w^* ) \cdot X)X] - \E_D[((w - w^* ) \cdot X)X]\|_2 = \|(w-w^*)^T \left(\E_{I_{\mathrm{good}}}[X X^T] - \E_D[X X^T]\right)\|_2 \leq \| w - w^*\|_2 \|\E_{I_{\mathrm{good}}}[X X^T] - \E_D[X X^T]\|_2$. For any unit vector $v$, the VC dimension argument above gave that
$|\E_I[(v \cdot X)^2] - \E_{D'}[(v \cdot X)^2]| \leq 4\sqrt{C\eps}/d$, and Claim \ref{clm:expect-remove-points} gives both that $|\E_I[(v \cdot X)^2] - \E_{I_{\mathrm{good}}}[(v \cdot X)^2]| \leq 2 \sqrt{\eps \E_I[(v \cdot X)^4]} \leq 10 \sqrt{C \eps}$ and that $|\E_D[(v \cdot X)^2] - \E_{D'}[(v \cdot X)^2]| \leq 2 \sqrt{\eps \E_D[(v \cdot X)^4]} \leq 2 \sqrt{C \eps}$. By the triangle inequality, we have that $|\E_D[(v \cdot X)^2] - \E_{I_{\mathrm{good}}}[(v \cdot X)^2]| \leq 16\sqrt{C\eps}$. Since this holds for all unit $v$ and the matrices involved are symmetric, we have that $ \|\E_{I_{\mathrm{good}}}[X X^T] - \E_D[X X^T]\|_2 \leq 16\sqrt{C\eps}$. The first term is therefore bounded by $\|\E_{I_{\mathrm{good}}}[((w - w^* ) \cdot X)X] - \E_D[((w - w^* ) \cdot X)X]\|_2 \leq 16\sqrt{C\eps} \| w - w^*\|_2$.
Now we want to bound the second term, $\|\E_{I_{\mathrm{good}}}[eX]-\E_D[eX]\|_2$. Note that $\E_D[eX]=\E_D[e]\E_D[X]=0$. So we need to bound $\E_{I_{\mathrm{good}}}[eX]$.
First we bound the expectation and variance on $D'$ using Claim \ref{clm:expect-remove-points}. It yields that, for any unit vector $v$, $|\E_{D'}[e(v \cdot X)]| \leq 2 \sqrt{\eps \E_D[e^2(v \cdot X)^2]} \leq 2\sqrt{\eps \xi}$.
Next we bound the expectation on $I$. Since $I$ consists of independent samples from $D'$, the covariance matrix over the randomness of $I$ of $|I|\E_I[eX-\E_{D'}[eX]]$ is $|I|\E_{D'}[(eX-\E_{D'}[eX])(eX-\E_{D'}[eX])^T] \preceq |I|\E_{D'}[X X^T] \preceq |I|(1+\eps)I$, and its expectation is $0$. Thus the expectation over the randomness of $I$ of $(|I| \|\E_I[eX]-\E_{D'}[eX]\|_2)^2$ is at most $\mathrm{Tr}(|I|\E_{D'}[(eX-\E_{D'}[eX])(eX-\E_{D'}[eX])^T]) \leq |I|(1+\eps+4\xi\eps|I|)d$. By Markov's inequality, $\Pr[\|\E_I[eX]\|_2 \geq 2\sqrt{\xi \eps} + \eps] \leq d/(|I|\eps^2)$, which is at most $1/40$ since $|I| \geq 40 d/\eps^2$.
Next we bound the expectation on $I_{\mathrm{good}}$ which follows by a slight variation of Claim \ref{clm:expect-remove-points}. Let $J=I-I_{\mathrm{good}}$. Then, for any $v$, $\E_J[e(v \cdot X)] \leq \sqrt{\E_J[e^2]\E_J[(v \cdot X)^2]} \leq \sqrt{\E_S[e^2]\E_I[(v \cdot X)^2]}|J|/\sqrt{|S||I|} \leq \sqrt{100\xi (1 + \eps + 4\sqrt{C}\eps/d^2)} |J|/\sqrt{|S||I|} \leq 20 |J|\sqrt{\xi/|S||I|}$ by bounds we obtained earlier. Now $\|\E_{I_{\mathrm{good}}}[eX]\|_2 = \|(|I|/|I_{\mathrm{good}}|)\E_{I}[eX] - (|J|/|I_{\mathrm{good}}|)\E_J[e X] \|_2 \leq 20\sqrt{\xi\eps}+\eps + (1+\eps)\sqrt{\xi\eps/16} \leq 30 \sqrt{\xi\eps} + \eps$.
We can thus take $\sigma_0 = 30\sqrt{\xi}+\sqrt{\eps}$ and $\sigma_1=18\sqrt{C+1} \geq 16\sqrt{C}$ to get (\ref{eq:mean-everywhere}).
Thus both (\ref{eq:mean-everywhere}) and (\ref{eq:norm-bound}) hold with $\sigma_0=30\sqrt{\xi}+\sqrt{\eps}$ and $\sigma_1=18\sqrt{C+1}$.
This happens with probability at least $9/10$ by a union bound on the probabilistic assumptions above.
\subsection{Support Vector Machines}
\label{sec:app-svm}
In this section, we demonstrate how our results apply to learning support vector machines (i.e., halfspaces under hinge loss).
In particular, we describe how SVMs fit into the GLM framework described in Section~\ref{sec:glms}.
We are given pairs $(X_i, Y_i) \in \R^{d} \times \{\pm 1\}$ for $i \in [n]$, which are drawn from some distribution $D_{xy}$.
Let $L(w,(x,y))=\max\{0, 1 - y(w \cdot x) \}$, and $f_i(w) = L(w, (x_i, y_i))$.
The goal is to find a $\widehat{w}$ approximately minimizing the objective function
\[
\Ef(w) = \E_{(X, Y) \sim D_{xy}} [L(w, (X, Y))].
\]
One technical point is that $f_i$ does not have a gradient everywhere -- instead, we will be concerned with the sub-gradients of the $f_i$'s.
All our results which operate on the gradients also work for sub-gradients.
To be precise, we will take the sub-gradient to be $0$ when the gradient is undefined:
\begin{definition}
Let $\nabla f_i$ be the \emph{sub-gradient} of $f_i(w)$ with respect to $w$, where $\nabla f_i = -y_ix_i$ if $y_i (w \cdot x_i) < 1$, and $0$ otherwise.
\end{definition}
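For concreteness, this sub-gradient convention can be computed in batched form as in the following minimal sketch (row $i$ of the output is the chosen sub-gradient of $f_i$; the function name is ours):
\begin{verbatim}
import numpy as np

def hinge_subgradients(w, X, y):
    """Row i is the chosen sub-gradient of f_i(w) = max(0, 1 - y_i <w, x_i>):
    -y_i * x_i when the margin y_i <w, x_i> is below 1, and 0 otherwise
    (in particular 0 at the kink, matching the convention in the text)."""
    margins = y * (X @ w)
    active = (margins < 1.0).astype(float)   # 1 where the hinge is active
    return -(active * y)[:, None] * X
\end{verbatim}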
To get a bound on the error of hinge loss, we will need to assume the marginal distribution $D_x$ is anti-concentrated.
\begin{definition}
A distribution is \emph{$\delta$-anticoncentrated} if at most
an $O(\delta)$-fraction of its probability mass is within Euclidean distance
$\delta$ of any hyperplane.
\end{definition}
We work with the following assumptions:
\begin{assumption}
\label{ass:svm}
Given the model for SVMs as described above, assume the following conditions for the marginal distribution $D_x$:
\begin{itemize}
\item $\E_{X \sim D_x}[XX^T] \preceq I $;
\item $D_x$ is $\eps^{1/4}$-anticoncentrated.
\end{itemize}
\end{assumption}
Our main result on SVMs is the following:
\begin{theorem} \label{thm:result-svm}
Let $\eps > 0$, and let $D_{xy}$ be a distribution over pairs $(X,Y)$, where the marginal distribution $D_x$ satisfies the conditions of Assumption~\ref{ass:svm}.
Then there exists an algorithm that with probability $9/10$, given $O(d\log(d/\eps)/\eps)$ $\eps$-noisy samples from $D_{xy}$, returns a $\widehat{w}$ such that for any $w^{\ast}$,
$$\E_{(X,Y) \sim D_{xy}} [ L(\widehat{w},(X,Y)) ] \leq \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ] + O(\eps^{1/4}).$$
\end{theorem}
Our approach will be to fit this problem into the GLM framework developed in Section~\ref{sec:glms}.
First, we will restrict our search over $w$ to $\mathcal{H}$, a ball of radius $r=\eps^{-1/4}$.
As we argue in Lemma~\ref{lem:antigood-svm}, this restriction comes at a cost of at most $O(\eps^{1/4})$ in our algorithm's loss.
With this restriction, we will argue that the problem satisfies the conditions of Proposition~\ref{prop:sample-GLM}.
This allows us to argue that, with a polynomial number of samples, we can obtain a set of $f_i$'s satisfying the conditions of Assumption~\ref{ass:one-good-set-glm}.
This will allow us to apply Theorem~\ref{thm:glms-sever}, concluding the proof.
We start by showing that, due to anticoncentration of $D$, there is a $w' \in \mathcal{H}$ with loss close to $w^*$:
\begin{lemma} \label{lem:antigood-svm}
Let $w'$ be a rescaling of $w^*$, such that $\|w'\|_2 \leq \eps^{-1/4}$ (i.e. $w' = \min\{1, \eps^{-1/4}/\|w^*\|_2\} w^*$). Then $\E_{(X,Y) \sim D_{xy}} [ L(w',(X,Y)) ] \leq \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ] + O(\eps^{1/4})$.
\end{lemma}
\begin{proof}
If $w'=w^{\ast}$, then $\E_{(X,Y) \sim D_{xy}} [ L(w',(X,Y)) ] = \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ]$.
Otherwise, we break into case analysis, based on the value of $(x,y)$:
\begin{itemize}
\item $|w' \cdot x| > 1$: If $y(w' \cdot x) > 1$, then $L(w',(x,y)) = L(w^*, (x,y))= 0$.
If $y(w' \cdot x) < -1$, then $L(w',(x,y)) = 1 - y(w' \cdot x) \leq 1 -y (w^* \cdot x) = L(w^*, (x,y))$.
Both cases use the fact that $w'$ is a positive rescaling of $w^*$ with $\|w'\|_2 \leq \|w^{\ast}\|_2$.
\item $|w' \cdot x| \leq 1$: In this case, we have that $L(w', (x,y)) \leq 2$.
Since $L(w^*, (x,y)) \geq 0$, we have that $L(w', (x,y)) \leq L(w^*, (x,y)) + 2$.
\end{itemize}
Note that if $|w' \cdot x| \leq 1$, then $x$ is within $1/\|w'\|_2=\eps^{1/4}$ of the hyperplane defined by the normal vector $w'$.
Since $D_x$ is $\eps^{1/4}$-anticoncentrated, we have that $\Pr_{X \sim D_x}[|w' \cdot X| \leq 1] \leq \eps^{1/4}$.
Thus, we have that $\E_{(X,Y) \sim D_{xy}}[L(w',(X,Y))] \leq \E_{(X,Y) \sim D_{xy}}[L(w^*,(X,Y)) + 2 \cdot \mathbbm{1}(|w' \cdot X| \leq 1)] \leq \E_{(X,Y) \sim D_{xy}}[L(w^{\ast},(X,Y))] + O(\eps^{1/4})$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:result-svm}]
We first show that this problem fits into the GLM framework, in particular, satisfying the conditions of Proposition~\ref{prop:sample-GLM}.
The link function is $\sigma_y(t) = \max\{0, 1 - y t \}$, giving us the loss function $L(w,(x,y)) = \sigma_y(w \cdot x)$.
We let $\mathcal{H}$ be the set $\|w\|_2 \leq \eps^{-1/4}$, giving us the parameter $r = \eps^{-1/4}$.
Condition 1 is satisfied by Assumption~\ref{ass:svm}.
For $y \in \{-1,1\}$, $\sigma'_y(t)=0$ for $yt \geq 1$ and $\sigma'_y(t)= -y$ for $yt < 1$.
Thus we have that $|\sigma'_y(t)| \leq 1$ for all $t$ and $y$, satisfying Condition 2.
Finally, one can observe that $\sigma_y(0)=1$ for all $y$, satisfying Condition 3.
Thus we can apply Proposition \ref{prop:sample-GLM}: if we take $O(d\log(dr/\eps)/\eps)$ $\eps$-corrupted samples, then they satisfy Assumption \ref{ass:one-good-set-glm} on $\mathcal{H}$ with $\sigma_0=2$, $\sigma_1=0$ and $\sigma_2=1+\eps^{-1/4}$, with probability $9/10$.
Now we can apply the algorithm of Theorem~\ref{thm:glms-sever}.
Since the loss is convex, we get a vector $\widehat{w}$ with
$\bar{f}(\widehat{w}) - \bar{f}({w^*}') = O((\sigma_0r + \sigma_1r^2+\sigma_2)\sqrt{\epsilon})
=O((2 \eps^{-1/4} +\eps^{-1/4}) \sqrt{\eps}) = O(\eps^{1/4})$ where ${w^*}'$ is the minimizer of $\bar{f}$ on $\mathcal{H}$.
We thus have that $\bar{f}(\hat w) \leq \bar{f}({w^*}') + O(\eps^{1/4}) \leq \bar{f}(w')+O(\eps^{1/4}) \leq \bar{f}(w^*)+O(\eps^{1/4})$.
The second inequality follows because ${w^*}'$ is the minimizer of $\bar{f}$ on $\mathcal{H}$, and the third inequality follows from Lemma~\ref{lem:antigood-svm}.
\end{proof}
\subsection{Logistic Regression}
\label{sec:app-logreg}
In this section, we demonstrate how our results apply to logistic regression.
In particular, we describe how logistic regression fits into the GLM framework described in Section~\ref{sec:glms}.
We are given pairs $(X_i, Y_i) \in \R^{d} \times \{\pm 1\}$ for $i \in [n]$, which are drawn from some distribution $D_{xy}$.
Let $\phi(t)=\frac{1}{1+\exp(-t)}$.
Logistic regression is the model where $y = 1$ with probability $\phi(w \cdot x)$, and $y = -1$ with probability $\phi(-w \cdot x)$.
We define the loss function to be the log-likelihood of $y$ given $x$.
More precisely, we let $f_i(w,(x_i,y_i)) = L(w,(x_i,y_i))$, which is defined as follows:
$$L(w,(x,y))= \frac{1+y}{2}\ln \left(\frac{1}{\phi(w \cdot x)}\right) + \frac{1-y}{2}\ln \left(\frac{1}{\phi(-w \cdot x)}\right)=\frac{1}{2}\left(-\ln\left(\phi(w \cdot x)\phi(-w \cdot x)\right) -y(w \cdot x)\right).$$
The gradient of this function is $\nabla L(w, (x,y)) = \frac12(\phi(w \cdot x)-\phi(-w \cdot x)-y)x$.
The goal is to find a $\widehat{w}$ approximately minimizing the objective function
\[
\Ef(w) = \E_{(X, Y) \sim D_{xy}} [L(w, (X, Y))].
\]
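For concreteness, the loss and gradient above can be evaluated in a numerically stable way using $\phi(t)-\phi(-t)=\tanh(t/2)$ and $-\ln(\phi(t)\phi(-t)) = |t| + 2\ln(1+e^{-|t|})$ (the latter follows from the computation in the proof of Claim~\ref{clm:logreg-clm} below, since $1+2e^{-|t|}+e^{-2|t|}=(1+e^{-|t|})^2$); the following is a minimal sketch with a function name of our choosing.
\begin{verbatim}
import numpy as np

def logistic_loss_and_grad(w, X, y):
    """Per-sample losses L(w,(x_i,y_i)) and gradients, for labels y_i in {-1,+1}.

    Uses -log(phi(t)phi(-t)) = |t| + 2*log(1 + exp(-|t|)) and
    phi(t) - phi(-t) = tanh(t/2) to avoid overflow for large |t|.
    """
    t = X @ w
    loss = 0.5 * (np.abs(t) + 2.0 * np.log1p(np.exp(-np.abs(t))) - y * t)
    grad = 0.5 * (np.tanh(t / 2.0) - y)[:, None] * X   # row i is grad of f_i
    return loss, grad
\end{verbatim}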
We work with the following assumptions:
\begin{assumption}
\label{ass:logreg}
Given the model for logistic regression as described above, assume the following conditions for the marginal distribution $D_x$:
\begin{itemize}
\item $\E_{X \sim D_x}[XX^T] \preceq I $;
\item $D_x$ is $\eps^{1/4}\sqrt{\log(1/\eps)}$-anticoncentrated.
\end{itemize}
\end{assumption}
We can get a similar result to that for hinge loss for logistic regression:
\begin{theorem}\label{thm:logreg}
Let $\eps > 0$, and let $D_{xy}$ be a distribution over pairs $(X,Y)$, where the marginal distribution $D_x$ satisfies the conditions of Assumption~\ref{ass:logreg}.
Then there exists an algorithm that with probability $9/10$, given $O(d\log(d/\eps)/\eps)$ $\eps$-noisy samples from $D_{xy}$, returns a $\widehat{w}$ such that for any $w^{\ast}$,
$$\E_{(X,Y) \sim D_{xy}} [ L(\widehat{w},(X,Y)) ] \leq \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ] + O(\eps^{1/4}\sqrt{\log(1/\eps)}).$$
\end{theorem}
The approach is very similar to that of Theorem~\ref{thm:result-svm}, which we repeat here for clarity.
First, we will restrict our search over $w$ to $\mathcal{H}$, a ball of radius $r=\eps^{-1/4}\sqrt{\log(1/\eps)}$.
As we argue in Lemma~\ref{lem:antigood-logit}, this restriction comes at a cost of at most $O(\eps^{1/4}\sqrt{\log(1/\eps)})$ in our algorithm's loss.
With this restriction, we will argue that the problem satisfies the conditions of Proposition~\ref{prop:sample-GLM}.
This allows us to argue that, with a polynomial number of samples, we can obtain a set of $f_i$'s satisfying the conditions of Assumption~\ref{ass:one-good-set-glm}.
This will allow us to apply Theorem~\ref{thm:glms-sever}, concluding the proof.
We start by showing that, due to anticoncentration of $D$, there is a $w' \in \mathcal{H}$ with loss close to $w^*$:
\begin{lemma} \label{lem:antigood-logit}
Let $w'$ be a rescaling of $w^*$, such that $\|w'\|_2 \leq \eps^{-1/4} \sqrt{\ln(1/\eps)}$ (i.e. $w' = \min\{1, \eps^{-1/4}\sqrt{\ln(1/\eps)}/\|w^*\|_2\} w^*$). Then $\E_{(X,Y) \sim D_{xy}} [ L(w',(X,Y)) ] \leq \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ] + O(\eps^{1/4}\sqrt{\ln(1/\eps)})$.
\end{lemma}
\begin{proof}
We need the following claim:
\begin{claim}
\label{clm:logreg-clm}
$$|t| \leq -\ln(\phi(t)\phi(-t)) \leq |t| + 3 \exp(-|t|)$$
\end{claim}
\begin{proof}
Recalling that $\phi=1/(1+\exp(-t))$, we have that $-\ln(\phi(t)\phi(-t)) = \ln(\exp(t)+\exp(-t)+2)$.
Since $\exp(t)+\exp(-t)+2 \geq \exp(|t|)$, we have $|t| \leq -\ln(\phi(t)\phi(-t))$.
On the other hand,
$\ln(\exp(t)+\exp(-t)+2) = |t|+\ln(1+2\exp(-|t|)+\exp(-2|t|)) \leq |t| + \ln(1+3\exp(-|t|)) \leq |t| + 3 \exp(-|t|)$.
\end{proof}
For any $x \in \R^d$, we have that:
\begin{align*}
-\ln(\phi(w' \cdot x)\phi(-w' \cdot x)) - y(w' \cdot x) -3\exp(-|w' \cdot x|)
&\leq |w' \cdot x| - y(w' \cdot x) \\
&\leq |w^* \cdot x| - y(w^* \cdot x) \\
&\leq -\ln(\phi(w^* \cdot x)\phi(-w^* \cdot x)) - y(w^* \cdot x)
\end{align*}
The first and last inequality hold by Claim~\ref{clm:logreg-clm}.
For the second inequality, we do a case analysis on $y$.
When $y = \mathrm{sign}(w' \cdot x) = \mathrm{sign}(w^* \cdot x)$, then both sides of the inequality are $0$.
When $y = -\mathrm{sign}(w' \cdot x) = -\mathrm{sign}(w^* \cdot x)$, then the inequality becomes $2|w' \cdot x| \leq 2|w^* \cdot x|$, which holds since $\|w'\|_2 \leq \|w^*\|_2$.
We thus have that for any $y \in \{\pm 1\}$, $L(w',(x,y)) \leq L(w^*,(x,y))+ \frac{3}{2}\exp(-|w' \cdot x|)$.
If $|w' \cdot x| \leq \frac{1}{3}\ln(1/\eps)$, then $L(w',(x,y)) \leq L(w^*,(x,y))+ \frac{3}{2}$.
If $|w '\cdot x| \geq \frac{1}{3}\ln(1/\eps)$, then $L(w',(x,y)) \leq L(w^*,(x,y))+ \frac{3}{2}\eps^{1/3}$.
Since $\|w'\|_2 \leq \eps^{-1/4} \sqrt{\ln(1/\eps)}$ and $D_x$ is $\eps^{1/4} \sqrt{\ln(1/\eps)}$-anticoncentrated, we have that $\Pr_{D_x}[|w' \cdot x| \leq \frac{1}{3}\ln(1/\eps)] \leq O(\eps^{1/4} \sqrt{\ln(1/\eps)})$.
Thus, $\E_{(X,Y) \sim D_{xy}} [ L(w',(X,Y)) ] \leq \E_{(X,Y) \sim D_{xy}}[ L(w^{\ast},(X,Y)) ] + O(\eps^{1/4}\sqrt{\ln(1/\eps)})$, as desired.
\end{proof}
With this in hand, we can conclude with the proof of Theorem~\ref{thm:logreg}.
\begin{proof}[Proof of Theorem~\ref{thm:logreg}]
We first show that this problem fits into the GLM framework, in particular, satisfying the conditions of Proposition~\ref{prop:sample-GLM}.
The link function is $\sigma_y(t) = \frac{1}{2}(-\ln(\phi(t)\phi(-t)) -yt)$, giving us the loss function $L(w,(x,y)) = \sigma_y(w \cdot x)$.
We let $\mathcal{H}$ be the set $\|w\|_2 \leq \eps^{-1/4}\sqrt{\ln(1/\eps)}$, giving us the parameter $r = \eps^{-1/4}\sqrt{\ln(1/\eps)}$.
Condition 1 is satisfied by Assumption~\ref{ass:logreg}.
For $y \in \{-1,1\}$, $\sigma'_y(t)=\frac{1}{2}(\phi(t)-\phi(-t)-y)$, which gives that $|\sigma'_y(t)| \leq 1$ for all $t$ and $y$, satisfying Condition 2.
Finally, $\sigma_y(0) = \ln 2 < 1$ for all $y$, satisfying Condition 3.
Thus we can apply Proposition \ref{prop:sample-GLM}: if we take $O(d\log(dr/\eps)/\eps)$ $\eps$-corrupted samples, then they satisfy Assumption \ref{ass:one-good-set-glm} on $\mathcal{H}$ with $\sigma_0=2$, $\sigma_1=0$ and $\sigma_2=1+\eps^{-1/4}\sqrt{\ln(1/\eps)}$, with probability $9/10$.
Now we can apply the algorithm of Theorem~\ref{thm:glms-sever}.
Since the loss is convex, we get a vector $\widehat{w}$ with
$\bar{f}(\widehat{w}) - \bar{f}({w^*}') = O((\sigma_0r + \sigma_1r^2+\sigma_2)\sqrt{\epsilon})
=O((2 \eps^{-1/4}\sqrt{\ln(1/\eps)} +\eps^{-1/4}\sqrt{\ln(1/\eps)}) \sqrt{\eps}) = O(\eps^{1/4}\sqrt{\ln(1/\eps)})$ where ${w^*}'$ is the minimizer of $\bar{f}$ on $\mathcal{H}$.
We thus have that $\bar{f}(\hat w) \leq \bar{f}({w^*}') + O(\eps^{1/4}\sqrt{\ln(1/\eps)}) \leq \bar{f}(w')+O(\eps^{1/4}\sqrt{\ln(1/\eps)}) \leq \bar{f}(w^*)+O(\eps^{1/4}\sqrt{\ln(1/\eps)})$.
The second inequality follows because ${w^*}'$ is the minimizer of $\bar{f}$ on $\mathcal{H}$, and the third inequality follows from Lemma~\ref{lem:antigood-logit}.
\end{proof}
\section{Analysis of {\textsc{Sever}}{} for GLMs} \label{sec:glms}
A case of particular interest is that of Generalized Linear Models (GLMs):
\begin{definition}
\label{def:glm}
Let $\mathcal{H} \subseteq \R^d$ and $\mathcal{Y}$ be an arbitrary set.
Let $D_{xy}$ be a distribution over $\mathcal{H} \times \mathcal{Y}$.
For each $Y \in \mathcal{Y}$, let $\sigma_Y: \R \to \R$ be a convex function.
The \emph{generalized linear model} (GLM) over $\mathcal{H} \times \mathcal{Y}$ with \emph{distribution} $D_{xy}$ and \emph{link functions} $\sigma_Y$ is the function $\bar{f}: \R^d \to \R$ defined by $\bar{f} (w) =\E_{X,Y}[f_{X,Y} (w)]$, where
\[
f_{X,Y}(w) := \sigma_Y(w\cdot X) \; .
\]
A \emph{sample} from this GLM is given by $f_{X, Y} (w)$ where $(X, Y) \sim D_{xy}$.
\end{definition}
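Concretely, a GLM sample is just a convex function of the scalar $w \cdot X$. The hedged sketch below (class and helper names are ours) packages a sample as its covariate together with the link function already specialized to the observed label, with gradient $\sigma_Y'(w\cdot X)\,X$ as used later in the proof of Proposition~\ref{prop:sample-GLM}; the hinge-loss instantiation at the end matches the SVM application of Section~\ref{sec:app-svm}.
\begin{verbatim}
import numpy as np

class GLMSample:
    """One sample f_{X,Y}(w) = sigma_Y(w . X) from a GLM.

    link and dlink are the link function sigma_Y and its derivative,
    already specialized to the observed label Y.
    """
    def __init__(self, x, link, dlink):
        self.x, self.link, self.dlink = np.asarray(x, float), link, dlink

    def value(self, w):
        return self.link(self.x @ w)

    def grad(self, w):
        return self.dlink(self.x @ w) * self.x

# Example: the hinge-loss link of the SVM application, with label y in {-1,+1}.
def hinge_sample(x, y):
    return GLMSample(x,
                     link=lambda t: max(0.0, 1.0 - y * t),
                     dlink=lambda t: -y if y * t < 1.0 else 0.0)
\end{verbatim}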
\noindent
Our goal, as usual, is to approximately minimize $\bar{f}$ given $\eps$-corrupted samples from $D_{xy}$.
Throughout this section we assume that $\mathcal{H}$ is contained in the ball of radius $r$ around $0$, i.e. $\mathcal{H} \subseteq B(0, r)$.
Moreover, we will let $w^* = \argmin_{w \in \mathcal{H}} \bar{f} (w)$ be a minimizer of $\bar{f}$ in $\mathcal{H}$.
This case covers a number of interesting applications, including SVMs and logistic regression.
Unfortunately, the tools developed in Appendix~\ref{sec:sever-general} do not seem to be able to cover this case in a simple manner.
In particular, it is unclear how to demonstrate that Assumption~\ref{ass:one-good-set} holds after taking polynomially many samples from a GLM.
To rectify this, in this section, we demonstrate a different deterministic regularity condition under which we show {\textsc{Sever}}{} succeeds, and we show that this condition holds after polynomially many samples from a GLM.
Specifically, we will show that {\textsc{Sever}}{} succeeds under the following deterministic condition:
\begin{assumption} \label{ass:one-good-set-glm}
Fix $0<\eps<1/2$. There exists an unknown set $I_{\mathrm{good}} \subseteq [n]$ with $|I_{\mathrm{good}}| \geq (1-\eps)n$
of ``good'' functions $\{f_i\}_{i \in I_{\mathrm{good}}}$ and parameters $\sigma_0, \sigma_2 \in \R_+$
such that the following conditions simultaneously hold:
\begin{itemize}
\item
Equation (\ref{eq:norm-bound}) holds with $\sigma_1 = 0$ and the same $\sigma_0$, and
\item
The following equations hold:
\begin{align}
\|\nabla \fhat(w^*) - \nabla \bar{f}(w^*)\|_2 \leq \sigma_0 \sqrt{\epsilon} \; ~\mathrm{, and} \label{eq:mean-at-min} \\
|\fhat(w)-\bar{f}(w)| \leq \sigma_2\sqrt{\eps}, \textrm{ for all } w \in \mathcal{H} \; , \label{eq:loss-close}
\end{align}
where $\fhat \stackrel{\text{def}}{=} \frac{1}{|I_{\mathrm{good}}|} \sum_{i \in I_{\mathrm{good}}} f_i$.
\end{itemize}
\end{assumption}
\noindent In this section, we will show the following two statements.
The first demonstrates that Assumption~\ref{ass:one-good-set-glm} implies that {\textsc{Sever}}{} succeeds, and the second shows that Assumption~\ref{ass:one-good-set-glm} holds after polynomially many samples from a GLM.
Formally:
\begin{theorem}\label{thm:glms-sever}
For functions $f_1, \ldots, f_n: \mathcal{H} \to \R$, suppose that Assumption~\ref{ass:one-good-set-glm}
holds and that $\mathcal{H}$ is convex.
Then, for some universal constant $\epsilon_0$, if $\epsilon < \epsilon_0$,
there is an algorithm which, with probability at least $9/10$, finds a $w \in \mathcal{H}$ such that
\[
\bar{f}(w) - \bar{f}(w^*) = r (\gamma + O(\sigma_0 \sqrt{\eps})) + O(\sigma_2 \sqrt{\eps}) \; .
\]
If the link functions are $\xi$-strongly convex, the algorithm finds a $w \in \mathcal{H}$ such that
\[
\bar{f}(w) - \bar{f}(w^*) = 2 \frac{(\gamma + O(\sigma_0 \sqrt{\eps}))^2}{\xi} + O(\sigma_2 \sqrt{\eps}) \; .
\]
\end{theorem}
\begin{proposition} \label{prop:sample-GLM}
Let $\mathcal{H} \subseteq \R^d$ and let $\mathcal{Y}$ be an arbitrary set.
Let $f_1,\ldots,f_n$ be obtained by picking $f_i$ i.i.d. at random from a GLM $\bar{f}$ over $\mathcal{H} \times \mathcal{Y}$ with distribution $D_{xy}$ and link functions $\sigma_Y$, where
\[
n = \Omega \left( \frac{d\log(dr/\eps)}{\eps} \right) \; .
\]
Suppose moreover that the following conditions all hold:
\begin{enumerate}
\item
$\E_{X \sim D_x} [XX^T] \preceq I$,
\item
$|\sigma_Y' (t)| \leq 1$ for all $Y \in \mathcal{Y}$ and $t \in \R$, and
\item
$|\sigma_Y (0)| \leq 1$ for all $Y \in \mathcal{Y}$.
\end{enumerate}
Then with probability at least $9 / 10$ over the original set of samples, there is a set of $(1-\eps)n$ of the $f_i$ that satisfy Assumption \ref{ass:one-good-set-glm} on $\mathcal{H}$ with $\sigma_0=2$, $\sigma_1=0$ and $\sigma_2=1+r$.
\end{proposition}
\subsection{Proof of Theorem~\ref{thm:glms-sever}}
As before, since {\textsc{Sever}}{} either terminates or throws away at least one sample, clearly it cannot run for more than $n$ iterations.
Thus the runtime bound is simple, and it suffices to show correctness.
We first prove the following lemma:
\begin{lemma}
\label{lem:stationary-point-glm}
Let $f_1, \ldots, f_n$ satisfy Assumption~\ref{ass:one-good-set-glm}.
Then with probability at least $9 / 10$, {\textsc{Sever}}{} applied to $f_1, \ldots, f_n, \sigma_0$ returns a point $w \in \mathcal{H}$ which is a $(\gamma + O(\sigma_0 \sqrt{\eps}))$-approximate critical point of $\hat{f}$.
\end{lemma}
\begin{proof}
We claim that the empirical distribution over $f_1, \ldots, f_n$ satisfies Assumption~\ref{ass:one-good-set} for the function $\hat{f}$ with $\sigma_0$ as stated and $\sigma_1 = 0$, with the $I_{\mathrm{good}}$ in Assumption~\ref{ass:one-good-set} being the same as in the definition of Assumption~\ref{ass:one-good-set-glm}.
Clearly these functions satisfy \eqref{eq:mean-everywhere} (since the LHS is zero), so it suffices to show that they satisfy \eqref{eq:norm-bound}.
Indeed, we have that for all $w \in \mathcal{H}$,
\[
\E_{I_{\mathrm{good}}} [(\nabla f_i (w) - \nabla \hat{f} (w)) (\nabla f_i (w) - \nabla \hat{f} (w))^\top ] \preceq \E_{I_{\mathrm{good}}} [(\nabla f_i (w) - \nabla \Ef (w)) (\nabla f_i (w) - \nabla \Ef (w))^\top ] \; ,
\]
so they satisfy \eqref{eq:norm-bound}, since the RHS is bounded by Assumption~\ref{ass:one-good-set-glm}.
Thus this lemma follows from an application of Theorem~\ref{thm:stationary-point}.
\end{proof}
\noindent With this critical lemma in place, we can now prove Theorem~\ref{thm:glms-sever}:
\begin{proof}[Proof of Theorem~\ref{thm:glms-sever}]
Condition on the event that Lemma~\ref{lem:stationary-point-glm} holds, and let $w \in \mathcal{H}$ be the output of {\textsc{Sever}}.
By Assumption~\ref{ass:one-good-set-glm}, we know that $\hat{f}(w^*) \geq \Ef(w^*) - \sigma_2 \sqrt{\eps}$, and moreover, $w^*$ is a $\gamma +\sigma_0 \sqrt{\eps}$-approximate critical point of $\hat{f}$.
Since each link function is convex, so is $\hat{f}$.
Hence, by Lemma~\ref{lem:derivatives-to-sizes}, since $w$ is a $(\gamma + O(\sigma_0 \sqrt{\eps}))$-approximate critical point of $\hat{f}$, we have $\hat{f}(w) - \hat{f}(w^*) \leq r (\gamma + O(\sigma_0 \sqrt{\eps}))$.
By \eqref{eq:loss-close} of Assumption~\ref{ass:one-good-set-glm}, this immediately implies that $\bar{f}(w) - \bar{f} (w^*) \leq r (\gamma + O(\sigma_0 \sqrt{\eps})) + O(\sigma_2 \sqrt{\eps})$, as claimed.
The bound for strongly convex functions follows from the same argument, except using the statement in Lemma~\ref{lem:derivatives-to-sizes} pertaining to strongly convex functions.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:sample-GLM}}
\begin{proof}
We first note that $\nabla f_{X,Y}(w) = X \sigma_Y'(w\cdot X).$
Thus, using conditions 1 and 2 of the proposition, we have for any unit vector $v$ that
$$
\E_i[(v\cdot(\nabla f_i(w) -\nabla \bar{f}(w)))^2] \ll \E_i[(v \cdot \nabla f_i(w))^2]+1 \ll \E_i[(v\cdot X_i)^2]+1 \;.
$$
In particular, since this last expression is independent of $w$, we only need to check this single matrix bound.
We let our good set be the set of samples with $\|X\|_2\leq 80\sqrt{d/\eps}$ that were not corrupted. We use Lemma A.18 of~\cite{DKK+17}, which shows that with probability $90\%$ the non-good samples make up at most an $(\eps/2+\eps/160)$-fraction of the original samples, and that $\E[XX^T]$ over the good samples is at most $2I$. This proves that the spectral bound holds everywhere. Applying it to the $\nabla f_{X,Y}(w^{\ast})$, we find, also with probability $90\%$, that the expectation over all samples of $\nabla f_{X,Y}(w^{\ast})$ is within $\sqrt{\eps}/3$ of $\nabla \bar{f}(w^{\ast})$. Additionally, throwing away the samples with $\|\nabla f_{X,Y}(w^{\ast})-\nabla \bar{f}(w^{\ast})\|_2 > 80\sqrt{d/\eps}$ changes this by at most $\sqrt{\eps}/2$. Finally, it also implies that the covariance of $\nabla f_{X,Y}(w^{\ast})$ is at most $\frac{3}{2}I$, and therefore, throwing away any other $\eps$-fraction of the samples changes it by at most an additional $\sqrt{3\eps/2}$.
It remains to show that $\left|\E_{i \ \mathrm{good}}[f_i(w)]-\E_X[f_X(w)]\right|\leq (1+r)\sqrt{\eps}$ for all $w\in \mathcal{H}$. For this we note that since the $f_X$ and $f_i$ are all $1$-Lipschitz, it suffices to show that $\left|\E_{i \ \mathrm{good}}[f_i(w)]-\E_X[f_X(w)]\right|\leq (1+|w|)\sqrt{\eps}/2$ on an $\eps/2$-cover of $\mathcal{H}$, and for this it suffices to show that the bound holds pointwise except with probability $\exp(-\Omega(d\log(r/\eps)))$. We would like to establish this via pointwise concentration and union bounds, but this runs into technical problems since very large values of $X\cdot w$ can lead to large values of $f$, so we will need to make use of the condition above that the average of $X_iX_i^T$ over our good samples is bounded by $2I$. In particular, this implies that the contribution to the average of $f_i(w)$ over the good $i$ coming from samples where $|X_i\cdot {w}| \geq 10|w|/\sqrt{\eps}$ is at most $\sqrt{\eps}(1+|w|)/10$. We consider the average of $f_i(w)$ over the remaining $i$. Note that these values are uniform random samples from $f_X(w)$ conditioned on $\|X\|_2\leq 80\sqrt{d/\eps}$ and $|X\cdot {w}| < 10|w|/\sqrt{\eps}$. It suffices to show that the average of $n$ samples from this distribution is within $(1+|w|)\sqrt{\eps}/2$ of its mean with high probability. However, since $|f_X(w)|\leq O(1+|X\cdot w|)$, over this distribution $|f_X(w)|$ is always $O(1+|w|)/\sqrt{\eps}$, and has variance at most $O(1+|w|)^2$.
Therefore, by Bernstein's Inequality, the probability that $n$ random samples from $f_{X}(w)$ (with the above conditions on $X$) differ from their mean by more than $(1+|w|)\sqrt{\eps}/2$ is
$$\exp(-\Omega(n^2(1+|w|)^2\eps/((1+|w|)^2+n(1+|w|)^2)))=\exp(-\Omega(n\eps)).$$
Thus, for $n$ at least a sufficiently large multiple of $d\log(dr/\eps)/\eps$, this holds for all $w$ in our cover of $\mathcal{H}$ with high probability. This completes the proof.
\end{proof}
\section{Introduction}
Learning in the presence of outliers is a ubiquitous challenge in machine learning;
nevertheless, most machine learning methods are very sensitive to outliers in
high dimensions. The focus of this work is on designing algorithms that are
outlier robust while remaining competitive in terms of accuracy and running time.
We highlight two motivating applications. The first is biological data (such as gene expression
data), where mislabeling or measurement errors can create systematic outliers \cite{RP-Gen02,Li-Science08} that require
painstaking manual effort to remove \cite{Pas-MG10}. Detecting outliers in such settings is often important
either because the outlier observations are of interest themselves
or because they might contaminate the downstream statistical analysis.
The second motivation is machine learning security, where outliers can be introduced through
\emph{data poisoning} attacks \cite{barreno2010security} in which an adversary inserts fake data into the training set
(e.g., by creating a fake user account).
Recent work has shown that for high-dimensional datasets, even a small fraction
of outliers can substantially degrade the learned model \cite{biggio2012poisoning,
newell2014practicality,koh2017understanding,steinhardt2017certified,koh2018stronger}.
Crucially, in both the biological and security settings above, the outliers are not ``random''
but are instead highly correlated, and could have a complex internal structure that is difficult
to model. This leads us to the following conceptual question underlying the present work:
\emph{Can we design training algorithms that are robust to the presence of an
$\epsilon$-fraction of arbitrary (and potentially adversarial) outliers?}
Estimation in the presence of outliers is a prototypical goal in robust
statistics and has been systematically studied since the pioneering work of Tukey \cite{tukey1960survey}.
Popular methods include RANSAC \cite{fischler1981random}, minimum covariance determinant \cite{rousseeuw1999fast},
removal based on $k$-nearest neighbors \cite{breunig2000lof}, and Huberizing the loss \cite{owen2007robust}
(see \cite{hodge2004survey} for a comprehensive survey).
However, these classical methods either break down in high dimensions, or only handle ``benign''
outliers that are obviously different from the rest of the data (see Section~\ref{sec:related-work}
for further discussion of these points).
Motivated by this,
recent work in theoretical computer science has developed efficient robust estimators
for classical problems such as linear classification \cite{klivans2009learning,awasthi2014power},
mean and covariance estimation \cite{DKKLMS16, LaiRV16}, clustering \cite{CSV17},
and regression \cite{BhatiaJK15, BhatiaJKK17, BDLS17}.
Nevertheless,
the promise of practical high-dimensional
robust estimation is yet to be realized; indeed, the aforementioned results generally suffer from one of
two shortcomings--either they use sophisticated convex optimization algorithms
that do not scale to large datasets, or they are tailored to specific problems of interest
or specific distributional assumptions on the data, and hence do not have good accuracy
on real data.
In this work, we address these shortcomings. We propose an algorithm, {\textsc{Sever}}, that is:
\begin{itemize}
\item {\bf Robust:} it can handle arbitrary outliers with only a small increase in error, even in high dimensions.
\item {\bf General:} it can be applied to most common learning problems including regression and classification,
and handles non-convex models such as neural networks.
\item {\bf Practical:} the algorithm can be implemented with standard machine learning libraries.
\end{itemize}
\input pipeline
At a high level, our algorithm (depicted in Figure~\ref{fig:pipeline} and
described in detail in Section~\ref{sec:alg}) is a simple ``plug-in'' outlier detector--first,
run whatever learning procedure would be run normally (e.g., least squares in the case of linear regression).
Then, consider the matrix of gradients at the optimal parameters, and compute the top singular
vector of this matrix. Finally, remove any points whose projection onto this singular vector
is too large (and re-train if necessary).
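In code, one round of this outline looks roughly as follows (a hedged sketch rather than our exact experimental implementation: \texttt{fit} stands for an arbitrary base learner, \texttt{grad\_fn} for its per-sample gradients, and the fraction of points removed per round is a tunable parameter).
\begin{verbatim}
import numpy as np

def sever_round(X, y, fit, grad_fn, remove_frac):
    """One plug-in outlier-removal round of the pipeline sketched above.

    fit(X, y)        -> parameters w learned by the (non-robust) base learner.
    grad_fn(w, X, y) -> (n, d) matrix whose i-th row is the gradient of the
                        i-th sample's loss at w.
    Returns the indices of the points kept for the next round.
    """
    w = fit(X, y)                           # 1. run the usual learner
    G = grad_fn(w, X, y)
    G = G - G.mean(axis=0)                  # 2. center the gradients
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    v = Vt[0]                               # top right singular vector
    scores = (G @ v) ** 2                   # 3. score = squared projection
    k = int(np.ceil(remove_frac * len(X)))  # remove the top-scoring points
    keep = np.argsort(scores)[: len(X) - k]
    return keep

def run_sever(X, y, fit, grad_fn, rounds=4, remove_frac=0.01):
    """Iterate the round, re-training on the kept points each time."""
    idx = np.arange(len(X))
    for _ in range(rounds):
        keep = sever_round(X[idx], y[idx], fit, grad_fn, remove_frac)
        idx = idx[keep]
    return fit(X[idx], y[idx])
\end{verbatim}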
Despite its simplicity, our algorithm possesses strong theoretical guarantees:
As long as the real (non-outlying) data is not too heavy-tailed,
{\textsc{Sever}}{} is provably robust to outliers--see Section~\ref{sec:theory} for detailed
statements of the theory. At the same time, we show that our algorithm
works very well in practice and outperforms a number of natural baseline outlier detectors.
In line with our original motivating biological and security applications,
we implement our method on two tasks--a linear regression task
for predicting protein activity levels
\cite{olier2018qsar},
and a spam classification task based on emails from the Enron corporation
\cite{metsis2006spam}. Even with a small fraction of outliers, baseline methods perform
poorly on these datasets; for instance, on the Enron spam dataset with
a $1\%$ fraction of outliers, baseline errors range from $13.4\%$ to $20.5\%$,
while {\textsc{Sever}}{} incurs only $7.3\%$ error (in comparison, the error is $3\%$ in the absence
of outliers).
Similarly, on the drug design dataset, with $10\%$ corruptions, {\textsc{Sever}}{} achieved $1.42$ mean-squared error test error, compared to $1.51$-$2.33$ for the baselines, and $1.23$ error on the uncorrupted dataset.
\subsection{Comparison to Prior Work}
\label{sec:related-work}
As mentioned above, the myriad classical approaches to robust estimation perform
poorly in high dimensions or in the presence of worst-case outliers. For instance,
RANSAC \cite{fischler1981random} works by removing enough points at random that no outliers remain with
decent probability; since we need at least $d$ points to fit a $d$-dimensional model, this requires the
fraction of outliers to be $O(1/d)$.
$k$-nearest neighbors \cite{breunig2000lof} similarly
suffers from the curse of dimensionality when $d$ is large.
The minimum covariance determinant estimator \cite{rousseeuw1999fast}
only applies when the number of data points $n$ exceeds $2d$, which does not hold for
the datasets we consider (it also has other issues such as computational intractability).
A final natural approach is to limit the effect of points with large loss (via e.g.~Huberization
\cite{owen2007robust}), but as \cite{koh2018stronger} show (and we confirm in our experiments),
correlated outliers often have \emph{lower} loss than the real data under the learned model.
These issues have motivated work on high-dimensional robust statistics going back to
Tukey \cite{Tukey75}. However, it was not until much later that efficient algorithms with
favorable properties were first proposed.
\cite{klivans2009learning} gave the first efficient algorithms
for robust classification
under the assumption that the distribution of the good data is isotropic and log-concave.
Subsequently,~\cite{awasthi2014power} obtained an improved and nearly optimal robust algorithm
for this problem.
Two concurrent works~\cite{DKKLMS16, LaiRV16} gave the first efficient robust
estimators for several other tasks including mean and covariance estimation.
There has since been considerable study of
algorithmic robust estimation in high dimensions,
including learning graphical models~\cite{DiakonikolasKS16b},
understanding computation-robustness tradeoffs~\cite{DKS17-sq, DKKLMS17},
establishing connections to PAC learning~\cite{DKS17-nasty},
tolerating more noise by outputting a list of hypotheses~\cite{CSV17, meister2017data, DKS17-mixtures},
robust estimation of discrete structures~\cite{steinhardt2017clique,qiao2017learning,steinhardt2018resilience},
and robust estimation via sum-of-squares~\cite{KS17, HL17, KStein17}.
Despite this progress, these recent theoretical papers typically focus on designing specialized algorithms for
specific settings (such as mean estimation or linear classification for specific families of distributions)
rather than on designing general algorithms.
The only exception is \cite{CSV17}, which provides a robust meta-algorithm for stochastic convex optimization in a
similar setting to ours. However, that algorithm (i) requires solving a large semidefinite program
and (ii) incurs a significant loss in performance relative to standard training {\em even in the absence of outliers}.
On the other hand, \cite{DKK+17} provide a practical implementation of
the robust mean and covariance estimation algorithms of \cite{DKKLMS16},
but do not consider more general learning tasks.
A number of papers~\cite{nasrabadi2011robust, nguyen2013exact, BhatiaJK15, BhatiaJKK17}
have proposed efficient algorithms for a type of robust linear regression.
However, these works consider a restrictive corruption model
that only allows adversarial corruptions to the responses (but not the covariates).
On the other hand, \cite{BDLS17} studies (sparse) linear regression and, more broadly,
generalized linear models (GLMs) under a robustness model very similar
to the one considered here. The main issues with this algorithm are that (i)
it requires running the ellipsoid method (hence does not scale) and (ii) it crucially assumes
Gaussianity of the covariates, which is unlikely to hold in practice.
In a related direction, \cite{steinhardt2017certified} provide a method for analyzing outlier
detectors in the context of linear classification, either certifying robustness or
generating an attack if the learner is not robust.
The outlier detector they analyze is brittle in high dimensions,
motivating the need for the robust algorithms presented in the current work.
Later work by the same authors showed how to bypass a number of common outlier detection methods
\cite{koh2018stronger}. We use these recent strong attacks as part of our evaluation and
show that our algorithm is more robust.
\paragraph{Concurrent Works.} \cite{PSBR18} independently obtained a robust algorithm
for stochastic convex optimization by combining gradient descent with robust mean estimation.
This algorithm is similar to the one we present in Appendix~\ref{sec:general-algo}, and in that section we discuss in more detail the comparison between these two techniques.
For the case of linear
regression,~\cite{DKoS18} provide efficient robust algorithms with near-optimal error guarantees under various distributional
assumptions and establish matching computational-robustness tradeoffs.
\subsection{Ridge Regression}
For ridge regression, we tested our method on a synthetic Gaussian dataset as well as a
drug discovery dataset.
The synthetic dataset consists of observations $(x_i, y_i)$ where
$x_i \in \mathbb{R}^{500}$ has independent standard Gaussian entries,
and $y_i = \langle x_i, w^* \rangle + 0.1 z_i$, where $z_i$ is also Gaussian.
We generated $5000$ training points and $100$ test points.
The drug discovery dataset was obtained from the ChEMBL database and was originally curated by
\cite{olier2018qsar}; it consists of $4084$ data points in $410$ dimensions; we split
this into a training set of $3084$ points and a test set of $1000$ points.
\paragraph{Centering.}
We found that centering the data points decreased error noticeably on the drug discovery
dataset, while scaling each coordinate to have variance $1$ decreased error by a small
amount on the synthetic data.
To center in the presence of outliers, we used the robust mean estimation algorithm
from \cite{DKK+17}.
\paragraph{Adding outliers.}
We devised a method of generating outliers that fools all of the baselines while still
inducing high test error. At a high level, the outliers cause ridge regression to output
$w = 0$ (so the model always predicts $y = 0$).
If $(X, y)$ are the true data points and responses, this can be achieved by
setting each outlier point $(X_{\mathrm{bad}}, y_{\mathrm{bad}})$ as
\[ X_{\mathrm{bad}} = \frac{1}{\alpha \cdot n_{\mathrm{bad}}}y^\top X ~~\mbox{and}~~~y_{\mathrm{bad}} = -\beta \; , \] where $n_{\mathrm{bad}}$ is the number of outliers we add, and $\alpha$ and $\beta$ are hyperparameters.
If $\alpha = \beta$, one can check that $w = 0$ is the
unique minimizer for ridge regression on the perturbed dataset.
By tuning $\alpha$ and $\beta$, we can then obtain attacks that fool all the baselines while
damaging the model (we tune $\alpha$ and $\beta$ separately to give an additional degree of
freedom to the attack).
To increase the error, we also found it useful to perturb each individual
$X_{\mathrm{bad}}$ by a small amount of Gaussian noise.
In our experiments we found that this method generated successful attacks as long as
the fraction of outliers was at least roughly $2\%$ for synthetic data,
and roughly $5\%$ for the drug discovery data.
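A hedged sketch of this construction is given below; the scale of the Gaussian perturbation and the random seed are illustrative choices, and the function name is ours.
\begin{verbatim}
import numpy as np

def make_ridge_attack(X, y, n_bad, alpha, beta, noise=0.01, seed=0):
    """Outliers that push the ridge-regression solution towards w = 0.

    Each outlier is X_bad = (1 / (alpha * n_bad)) * y^T X plus a small
    Gaussian perturbation, with response y_bad = -beta.
    """
    rng = np.random.default_rng(seed)
    x_bad = (y @ X) / (alpha * n_bad)                   # shape (d,)
    X_bad = x_bad + noise * rng.normal(size=(n_bad, X.shape[1]))
    y_bad = -beta * np.ones(n_bad)
    return np.vstack([X, X_bad]), np.concatenate([y, y_bad])
\end{verbatim}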
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplottriple,name=linreg_synth, align=center, title={Regression: Synthetic data}, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, legend columns= 5, legend style={anchor=north west, xshift=-0.6 \plotwidth, yshift=-0.8\plotheight}, legend columns=6]
\addplot[teal] table[x=eps, y=err] {figures/linreg_synth/uncorrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_synth/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_synth/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_synth/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_synth/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_synth/sever.txt};
\legend{uncorrupted, {\bf l2}{}, {\bf loss}{}, {\bf gradientCentered}{},{\bf RANSAC}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplottriple,name=linreg_qsar, align=center, at=(linreg_synth.north east),anchor=north west, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, xshift=\plotxspacing,ignore legend, title={Regression: Drug discovery data}, ymin = 1, ymax = 2]
\addplot[teal] table[x=eps, y=err] {figures/linreg_qsar/uncorrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_qsar/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_qsar/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_qsar/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_qsar/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_qsar/sever.txt};
\end{axis}
\begin{axis}[errplottriple,name=linreg_qsar_worst, align=center, at=(linreg_qsar.north east),anchor=north west, xticklabel style={/pgf/number format/.cd, fixed, fixed zerofill, precision=2,/tikz/.cd}, xshift=\plotxspacing,ignore legend, title={Regression: Drug discovery data, \\ attack targeted against {\textsc{Sever}}{}}, ymin = 1, ymax = 2]
\addplot[teal] table[x=eps, y=err] {figures/linreg_qsar_worst/uncorrupted.txt};
\addplot[cyan] table[x=eps, y=err] {figures/linreg_qsar_worst/l2.txt};
\addplot[violet] table[x=eps, y=err] {figures/linreg_qsar_worst/loss.txt};
\addplot[magenta] table[x=eps, y=err] {figures/linreg_qsar_worst/gradient.txt};
\addplot[gray] table[x=eps, y=err] {figures/linreg_qsar_worst/ransac.txt};
\addplot[black] table[x=eps, y=err] {figures/linreg_qsar_worst/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ vs test error for baselines and {\textsc{Sever}}{} on synthetic data and the drug discovery dataset. The left and middle figures show that {\textsc{Sever}}{} continues to maintain statistical accuracy against our attacks which are able to defeat previous baselines. The right figure shows an attack with parameters chosen to increase the test error of {\textsc{Sever}}{} on the drug discovery dataset as much as possible. Despite this, {\textsc{Sever}}{} still has relatively small test error.}
\label{label:acc-vs-eps-linreg}
\end{figure}
\begin{figure*}[h!]
\includegraphics[width=0.33\textwidth]{figures/hist-qsar-l2.png}
\includegraphics[width=0.33\textwidth]{figures/hist-syn-losses.png}
\includegraphics[width=0.33\textwidth]{figures/hist-qsar-svd.png}
\caption{A representative set of histograms of scores for baselines and {\textsc{Sever}}{} on synthetic data and a drug discovery dataset. From left to right: scores for the {\bf l2}{} defense on the drug discovery dataset, scores for {\bf loss}{} on synthetic data, and scores for {\textsc{Sever}}{} on the drug discovery dataset, all with the addition of 10\% outliers.
The scores for the true dataset are in blue, and the scores for the outliers are in red.
For the baselines, the scores for the outliers are inside the bulk of the distribution and thus hard to detect, whereas the scores for the outliers assigned by {\textsc{Sever}}{} are clearly within the tail of the distribution and easily detectable. }
\label{label:hist-linreg}
\end{figure*}
\paragraph{Results.}
In Figure \ref{label:acc-vs-eps-linreg} we compare the test error of our defense against the baselines
as we increase the fraction $\epsilon$ of added outliers.
To avoid cluttering the figure, we only show the performance of {\bf l2}, {\bf loss}, {\bf gradientCentered}, {\bf RANSAC},
and {\textsc{Sever}}{}; the performance of the remaining baselines is qualitatively similar to the
baselines in Figure \ref{label:acc-vs-eps-linreg}.
For both the baselines and our algorithms, we iterate the defense $r=4$ times,
each time removing the $p=\epsilon/2$ fraction of points with largest score.
For consistency of results, for each defense and each value of $\epsilon$ we ran the defense
3 times on fresh attack points and display the median of the 3 test errors.
When the attack parameters $\alpha$ and $\beta$ are tuned to defeat the baselines
(Figure~\ref{label:acc-vs-eps-linreg} left and center),
our defense substantially outperforms the baselines as soon as we cross $\epsilon \approx 1.5\%$
for synthetic data, and $\epsilon \approx 5.5\%$ for the drug discovery data.
In fact, most of the baselines do worse than not removing any outliers at all
(this is because they end up mostly removing good data points, which causes the
outliers to have a larger effect).
Even when $\alpha$ and $\beta$ are instead tuned to defeat {\textsc{Sever}}{}, its resulting error remains
small (Figure~\ref{label:acc-vs-eps-linreg} right).
To understand why the baselines fail to detect the outliers,
in Figure \ref{label:hist-linreg} we show a representative sample of the histograms of scores of the uncorrupted points overlaid
with the scores of the outliers, for both synthetic data and the drug discovery dataset with $\epsilon = 0.1$,
after one run of the base learner.
The scores of the outliers lie well within the distribution of scores of the uncorrupted points.
Thus, it would be impossible for the baselines to remove them
without also removing a large fraction of uncorrupted points.
Interestingly, for small $\epsilon$ all of the methods improve upon the uncorrupted test
error for the drug discovery data; this appears to be due to the presence of a small number of
natural outliers in the data that all of the methods successfully remove.
\section*{Appendix}
\input{setup-app}
\input{sever-app}
\input{glm}
\input{generic-algo}
\input{generic-apps}
\input{experiments-app}
\end{document}
\section{Preliminaries} \label{ssec:notation}
In this section, we formally introduce our setting for robust stochastic optimization.
\paragraph{Notation.} For $n \in \Z_+$, we will denote $[n] \stackrel{\text{def}}{=} \{1, \ldots, n\}$.
For a vector $v$, we will let $\| v \|_2$ denote its Euclidean norm.
For any $r \geq 0$ and any $x \in \R^d$, let $B(x, r)$ be the $\ell_2$ ball of radius $r$ around $x$.
If $M$ is a matrix, we will let $\| M \|_2$ denote its spectral norm and $\| M \|_F$ denote its Frobenius norm.
We will write $X \sim_u S$ to denote that $X$ is drawn from the empirical distribution defined by $S$.
We will sometimes use the notation $\E_{S}$, instead of $\E_{X\sim S}$, for the corresponding expectation.
We will also use the same convention for the covariance, i.e. we let $\Cov_S$ denote the covariance over the empirical distribution.
\paragraph{Setting.}
We consider a stochastic optimization setting with outliers.
Let $\mathcal{H} \subseteq \mathbb{R}^d$ be a space of parameters.
We observe $n$ functions $f_1, \ldots, f_n: \mathcal{H} \to \mathbb{R}$
and we are interested in (approximately) minimizing some target function
$\bar{f}: \mathcal{H} \to \mathbb{R}$, \new{related to the $f_i$'s.}
We will assume for simplicity that the $f_i$'s are differentiable with gradient $\nabla f_i$.
(Our results can be easily extended for the case that only a sub-gradient is available.)
In most concrete applications we will consider, there is some true underlying distribution $p^{\ast}$
over functions $f : \mathcal{H} \to \mathbb{R}$, and our goal is to find a parameter vector
$w^{\ast} \in \mathcal{H}$ minimizing $\overline{f}(w) \stackrel{\text{def}}{=} \mathbb{E}_{f \sim p^{\ast}}[f(w)]$.
Unlike the classical \new{realizable} setting, where we assume that
$f_1, \ldots, f_n \sim p^{\ast}$, we allow for an $\epsilon$-fraction of the points to be
arbitrary outliers. This is captured in the following definition (Definition~\ref{def:eps-contam})
that we restate for convenience:
\begin{definition}[$\epsilon$-corruption model]
Given $\eps > 0$ and a distribution $p^{\ast}$ over functions $f : \mathcal{H} \to \mathbb{R}$, data is generated as follows:
first, $n$ clean samples $f_1, \ldots, f_{n}$ are drawn from $p^{\ast}$.
Then, an \emph{adversary} is allowed to inspect the samples and replace
any $\epsilon n$ of them with arbitrary samples.
The resulting set of points is then given to the algorithm.
\end{definition}
In addition, some of our bounds will make use of the following quantities:
\begin{itemize}
\item The $\ell_2$-radius $r$ of the domain $\mathcal{H}$: $r = \max_{w \in \mathcal{H}} \|w\|_2$.
\item The strong convexity parameter $\xi$ of $\bar{f}$, if it exists.
This is the maximal $\xi$ such that $\bar{f}(w) \geq \bar{f}(w_0) + \langle w - w_0, \nabla \bar{f}(w_0) \rangle + \frac{\xi}{2} \|w-w_0\|_2^2$ for all $w, w_0 \in \mathcal{H}$.
\item The strong smoothness parameter $\beta$ of $\bar{f}$, if it exists.
This is the minimal $\beta$ such that $\bar{f}(w) \leq \bar{f}(w_0) + \langle w - w_0, \nabla \bar{f}(w_0) \rangle + \frac{\beta}{2} \|w-w_0\|_2^2$ for all $w, w_0 \in \mathcal{H}$.
\item The Lipschitz constant $L$ of $\bar{f}$, if it exists.
This is the minimal $L$ such that $\bar{f}(w) - \bar{f}(w_0) \leq L \|w-w_0\|_2$ for
all $w, w_0 \in \mathcal{H}$.
\end{itemize}
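As a simple illustration of these quantities (an aside, not an assumption used in the analysis), consider the squared loss $f(w) = \frac{1}{2}(w \cdot x - y)^2$ that serves as a running example elsewhere in the paper. Since $\nabla^2 \bar{f}(w) = \E[x x^T]$ for every $w$, one may take
\[
\xi = \lambda_{\min}\left(\E[x x^T]\right), \qquad \beta = \lambda_{\max}\left(\E[x x^T]\right),
\]
so $\bar{f}$ is strongly convex exactly when the second moment matrix of the covariates is non-degenerate.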
\section{General Analysis of {\textsc{Sever}}{}} \label{sec:sever-general}
This section is dedicated to the analysis of Algorithm~\ref{alg:sever}, where
we do not make convexity assumptions about the underlying functions $f_1, \ldots, f_n$.
In this case, we can show that our algorithm finds an approximate critical point of
$\bar{f}$.
When we specialize to convex functions, this immediately implies that we find an approximate minimal point of $\bar{f}$.
Our proof proceeds in two parts.
First, we define a set of deterministic conditions under which our algorithm finds an approximate minimal point of $\bar{f}$.
We then show that, under mild assumptions on our functions, this set of deterministic conditions holds with high probability after polynomially many samples.
For completeness, we recall the definitions of a $\gamma$-approximate critical
point and a $\gamma$-approximate learner:
\approxcrit*
\approxlearner*
\paragraph{Deterministic Regularity Conditions}
We first explicitly demonstrate a set of deterministic conditions on the (uncorrupted) data points.
Our deterministic regularity conditions are as follows:
\begin{assumption}
\label{ass:one-good-set}
Fix $0<\eps<1/2$. There exists an unknown set $I_{\mathrm{good}} \subseteq [n]$ with $|I_{\mathrm{good}}| \geq (1-\eps)n$
of ``good'' functions $\{f_i\}_{i \in I_{\mathrm{good}}}$ and parameters $\sigma_0, \sigma_1 \in \R_+$
such that:
\begin{equation} \label{eq:norm-bound}
\Big\|\E_{I_{\mathrm{good}}}\big[\big(\nabla f_i(w) - \nabla \bar{f}(w)\big)\big(\nabla f_i(w) - \nabla \bar{f}(w)\big)^T\big]\Big\|_2 \leq (\sigma_0 + \sigma_1 \|w^* - w\|_2)^2, \textrm{ for all } w \in \mathcal{H} \;,
\end{equation}
and
\begin{equation} \label{eq:mean-everywhere}
\|\nabla \fhat(w) - \nabla \bar{f}(w)\|_2 \leq (\sigma_0 + \sigma_1 \|w^* - w\|_2)\sqrt{\epsilon}, \textrm{ for all } w \in \mathcal{H},
\textrm{ where } \fhat \stackrel{\text{def}}{=} \frac{1}{|I_{\mathrm{good}}|} \sum_{i \in I_{\mathrm{good}}} f_i \;.
\end{equation}
\end{assumption}
In Section~\ref{sec:stationary-point}, we prove the following theorem,
which shows that under Assumption~\ref{ass:one-good-set} our algorithm succeeds:
\begin{restatable}{theorem}{stationarypoint}
\label{thm:stationary-point}
Suppose that the functions $f_1,\ldots,f_n,\bar{f}:\mathcal{H}\rightarrow \R$ are bounded below, and
that Assumption~\ref{ass:one-good-set} is satisfied, where $\sigma \stackrel{\text{def}}{=} \sigma_0 + \sigma_1 \|w^* - w\|_2.$
Then {\textsc{Sever}}{} applied to $f_1,\ldots,f_n, \sigma$ returns a point $w \in \mathcal{H}$
that, with probability at least $9/10$,
is a $(\gamma+O(\sigma \sqrt{\eps}))$-approximate critical point of $\bar{f}$.
\end{restatable}
Observe that the above theorem holds quite generally; in particular, it holds for non-convex functions.
As a corollary of this theorem, in Section~\ref{sec:sever-convex} we show that this immediately
implies that {\textsc{Sever}}{} robustly minimizes convex functions, if Assumption~\ref{ass:one-good-set} holds:
\begin{corollary}\label{cor:convex-sever}
For functions $f_1, \ldots, f_n: \mathcal{H} \to \R$, suppose that Assumption~\ref{ass:one-good-set}
holds and that $\mathcal{H}$ is convex.
Then, with probability at least $9/10$, for some universal constant $\epsilon_0$, if $\epsilon < \epsilon_0$,
the output of {\textsc{Sever}}{} satisfies the following:
\begin{enumerate}
\item[(i)] If $\Ef$ is convex, the algorithm finds a $w \in \mathcal{H}$ such that
$\bar{f}(w) - \bar{f}(w^*) = O((\sigma_0r + \sigma_1r^2)\sqrt{\epsilon} + \gamma r)$.
\item[(ii)] If $\Ef$ is $\xi$-strongly convex, the algorithm finds a $w \in \mathcal{H}$ such that
\[
\bar{f}(w) - \bar{f}(w^*) = O\left(\frac{\eps}{\xi} (\sigma_0 + \sigma_1 r)^2 + \frac{\gamma^2}{\xi} \right) \; .
\]
\end{enumerate}
\end{corollary}
\new{In the strongly convex case and when $\sigma_1 > 0$, we can remove the dependence on $\sigma_1$ and $r$ in the above by repeatedly applying {\textsc{Sever}}{} with decreasing $r$:
\begin{corollary}\label{cor:strongly-convex-sever}
For functions $f_1, \ldots, f_n: \mathcal{H} \to \R$, suppose that Assumption~\ref{ass:one-good-set}
holds, that $\mathcal{H}$ is convex and that $\Ef$ is $\xi$-strongly convex for $\xi \geq C \sigma_1 \sqrt{\eps}$ for some absolute constant $C$.
Then, with probability at least $9/10$, for some universal constant $\epsilon_0$, if $\epsilon < \epsilon_0$,
we can find a $\widehat{w}$ with
\[
\bar{f}(\widehat{w}) - \bar{f}(w^*) = O\left(\frac{\eps \sigma_0^2 +\gamma^2}{\xi} \right) \; .
\]
and
\[
\|\widehat{w}-w^*\|_2 = O\left(\frac{\sqrt{\eps} \sigma_0 + \gamma}{\xi}\right)
\]
using at most $O(\log(r\xi/(\gamma +\sigma_0\sqrt{\epsilon})))$ calls to {\textsc{Sever}}{}.
\end{corollary}
}
To concretely use Theorem~\ref{thm:stationary-point}, Corollary~\ref{cor:convex-sever}, and Corollary~\ref{cor:strongly-convex-sever}, in Section~\ref{sec:sever-sample-complexity} we
show that Assumption~\ref{ass:one-good-set} is satisfied with high probability under mild conditions on the distribution over the functions, after drawing polynomially many samples:
\begin{proposition} \label{prop:sample-bound}
Let $\mathcal{H} \subset \R^d$ be a closed bounded set with diameter at most $r$.
Let $p^{\ast}$ be a distribution over functions $f:\mathcal{H}\rightarrow \R$ with $\bar{f}=\E_{f \sim p^{\ast}}[f]$
so that $f-\bar{f}$ is $L$-Lipschitz and $\beta$-smooth almost surely.
Assume furthermore that for each $w\in \mathcal{H}$ and unit vector $v$ that
$\E_{f \sim p^{\ast}}[(v\cdot (\nabla f(w)-\nabla\bar{f}(w)))^2] \leq \sigma^2 /2.$
Then for
\[
n = \Omega \left( \frac{d L^2 \log (r \beta L / \sigma^2 \eps)}{\sigma^2 \eps} \right) \; ,
\]
an $\eps$-corrupted set of points $f_1,\ldots,f_n$ with high probability
satisfy Assumption~\ref{ass:one-good-set}.
\end{proposition}
\noindent
The remaining subsections are dedicated to the proofs of Theorem~\ref{thm:stationary-point}, Corollary~\ref{cor:convex-sever}, Corollary~\ref{cor:strongly-convex-sever}, and Proposition~\ref{prop:sample-bound}.
\subsection{Proof of Theorem~\ref{thm:stationary-point}}
\label{sec:stationary-point}
\noindent Throughout this proof we let $I_{\mathrm{good}}$ be as in Assumption~\ref{ass:one-good-set}.
We require the following two lemmata.
Roughly speaking, the first states that, on average, we remove at least as many corrupted points as uncorrupted points, and the second states that, at termination, if we have not removed too many points, then we have reached a point at which the empirical gradient is close to the true gradient.
Formally:
\begin{lemma}\label{lem:bad-elts}
If the samples satisfy \eqref{eq:norm-bound} of Assumption~\ref{ass:one-good-set} and $|S|\geq 2n/3$, then for the output $S' = \textsc{Filter}(S, \tau, \sigma)$ we have that
$$
\E[|I_{\mathrm{good}}\cap (S\backslash S')|] \leq \E[|([n]\backslash I_{\mathrm{good}})\cap(S\backslash S')|].
$$
\end{lemma}
\begin{lemma}\label{lem:final-set}
If the samples satisfy Assumption~\ref{ass:one-good-set}, $\textsc{Filter}(S, \tau, \sigma) = S$, and $n-|S| \leq 11 \eps n$, then
$$
\left\|\nabla \bar{f}({w}) - \frac{1}{|I_{\mathrm{good}}|}\sum_{i\in S} \nabla f_i({w})\right\|_2 \leq O(\sigma \sqrt{\eps})
$$
\end{lemma}
Before we prove these lemmata, we show how together they imply Theorem~\ref{thm:stationary-point}.
\begin{proof}[{\bf Proof of Theorem~\ref{thm:stationary-point} assuming Lemma~\ref{lem:bad-elts} and Lemma~\ref{lem:final-set}}]
First, we note that the algorithm must terminate in at most $n$ iterations.
This is easy to see as each iteration of the main loop except for the last must decrease the size of $S$ by at least $1$.
It thus suffices to prove correctness.
Note that Lemma \ref{lem:bad-elts} says that each iteration will on average throw out at least as many elements not in $I_{\mathrm{good}}$ from $S$ as elements of $I_{\mathrm{good}}$. In particular, this means that $|([n]\backslash I_{\mathrm{good}})\cap S| + |I_{\mathrm{good}}\backslash S|$ is a supermartingale. Since its initial value is at most $\eps n$, by the maximal inequality for non-negative supermartingales, with probability at least $9 / 10$ it never exceeds $10\eps n$, and therefore at the end of the algorithm we must have that $n-|S| \leq \eps n + |I_{\mathrm{good}}\backslash S| \leq 11\eps n$. This allows us to apply Lemma \ref{lem:final-set} to complete the proof, using the fact that $w$ is a $\gamma$-approximate critical point of $\frac{1}{|I_{\mathrm{good}}|}\sum_{i\in S} f_i$.
\end{proof}
\noindent
Thus it suffices to prove these two lemmata.
We first prove Lemma~\ref{lem:bad-elts}:
\begin{proof}[{\bf Proof of Lemma \ref{lem:bad-elts}}]
Let $S_{\mathrm{good}} = S\cap I_{\mathrm{good}}$ and $S_{\mathrm{bad}} =S\backslash I_{\mathrm{good}}$. We wish to show that the expected number of elements thrown out of $S_{\mathrm{bad}}$ is at least the expected number thrown out of $S_{\mathrm{good}}$.
We note that our result holds trivially if $\textsc{Filter}(S, \tau, \sigma) = S$.
Thus, we can assume that $\E_{i\in S}[\tau_i] \geq 12\sigma^2$.
It is easy to see that the expected number of elements thrown out of $S_{\mathrm{bad}}$ is proportional to $\sum_{i\in S_{\mathrm{bad}}}\tau_i$, while the number removed from $S_{\mathrm{good}}$ is proportional to $\sum_{i\in S_{\mathrm{good}}}\tau_i$ (with the same proportionality).
Hence, it suffices to show that $\sum_{i\in S_{\mathrm{bad}}}\tau_i \geq \sum_{i\in S_{\mathrm{good}}}\tau_i$.
We first note that since $\Cov_{i\in I_{\mathrm{good}}} [ \nabla f_i(w) ] \preceq \sigma^2 I$, we have that
\begin{align*}
\Cov_{i\in S_{\mathrm{good}}} [ v\cdot \nabla f_i(w)] &\stackrel{(a)}{\leq} \frac{3}{2} \Cov_{i \in I_{\mathrm{good}}} [v \cdot \nabla f_i (w)] \\
&= \frac{3}{2} \cdot v^\top \Cov_{i \in I_{\mathrm{good}}} [\nabla f_i (w)] v \leq 2 \sigma^2 \; ,
\end{align*}
where (a) follows since $|S_{\mathrm{good}}| \geq \frac{2}{3} |I_{\mathrm{good}}|$.
Let $\mu_{\mathrm{good}} =\E_{i\in S_{\mathrm{good}}}[v\cdot \nabla f_i(w)]$ and $\mu=\E_{i\in S}[v\cdot \nabla f_i(w)]$.
Note that
\[
\E_{i\in S_{\mathrm{good}} } [\tau_i] = \Cov_{i\in S_{\mathrm{good}}}[ v\cdot \nabla f_i(w)] + (\mu-\mu_{\mathrm{good}})^2 \leq 2\sigma^2 + (\mu-\mu_{\mathrm{good}})^2 \; .
\]
\noindent We now split into two cases.
Firstly, if $(\mu-\mu_{\mathrm{good}})^2 \geq 4\sigma^2$, we let $\mu_{\mathrm{bad}}=\E_{i\in S_{\mathrm{bad}}}[v\cdot \nabla f_i(w)]$, and note that $| \mu -\mu_{\mathrm{bad}} | |S_{\mathrm{bad}}| = |\mu-\mu_{\mathrm{good}}||S_{\mathrm{good}}|$. We then have that
\begin{align*}
\E_{i\in S_{\mathrm{bad}}} [\tau_i] &\geq (\mu-\mu_{\mathrm{bad}})^2 \\
&\geq (\mu-\mu_{\mathrm{good}})^2 \left( \frac{|S_{\mathrm{good}}|}{|S_{\mathrm{bad}}|} \right)^2 \\
&\geq 2 \left( \frac{|S_{\mathrm{good}}|}{|S_{\mathrm{bad}}|} \right) (\mu-\mu_{\mathrm{good}})^2 \\
&\geq \left( \frac{|S_{\mathrm{good}}|}{|S_{\mathrm{bad}}|} \right)\E_{i\in S_{\mathrm{good}}} [\tau_i].
\end{align*}
Hence, $\sum_{i\in S_{\mathrm{bad}}}\tau_i \geq \sum_{i\in S_{\mathrm{good}}}\tau_i$.
On the other hand, if $(\mu-\mu_{\mathrm{good}})^2 \leq 4\sigma^2$, then $\E_{i\in S_{\mathrm{good}}} [\tau_i] \leq 6\sigma^2 \leq \E_{i\in S} [\tau_i]/2$. Therefore $\sum_{i\in S_{\mathrm{bad}}}\tau_i \geq \sum_{i\in S_{\mathrm{good}}}\tau_i$ once again.
This completes our proof.
\end{proof}
\noindent
We now prove Lemma \ref{lem:final-set}.
\begin{proof}[{\bf Proof of Lemma \ref{lem:final-set}}]
We need to show that
$$
\delta := \left\| \sum_{i\in S} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2 = O(n \sigma \sqrt{ \eps}).
$$
We note that
\begin{align*}
& \left\| \sum_{i\in S} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2\\
\leq & \left\|\sum_{i\in I_{\mathrm{good}}} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2 + \left\|\sum_{i\in (I_{\mathrm{good}}\backslash S)} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2 + \left\| \sum_{i\in (S\backslash I_{\mathrm{good}})} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2\\
= & \left\|\sum_{i\in (I_{\mathrm{good}}\backslash S)} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2 + \left\|\sum_{i\in (S\backslash I_{\mathrm{good}})} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2 + O(n\sqrt{\sigma^2 \eps}).\\
\end{align*}
First we analyze
$$
\left\|\sum_{i\in (I_{\mathrm{good}}\backslash S)} (\nabla f_i(w) -\nabla \bar{f}(w)) \right\|_2.
$$
This is the supremum over unit vectors $v$ of
$$
\sum_{i\in (I_{\mathrm{good}}\backslash S)} v\cdot (\nabla f_i(w) -\nabla \bar{f}(w)).
$$
However, we note that
$$
\sum_{i\in I_{\mathrm{good}}} (v\cdot (\nabla f_i(w) -\nabla \bar{f}(w)))^2 = O(n \sigma^2).
$$
Since $|I_{\mathrm{good}}\backslash S| = O( n\eps)$, we have by Cauchy-Schwarz that
$$
\sum_{i\in (I_{\mathrm{good}}\backslash S)} v\cdot (\nabla f_i(w) -\nabla \bar{f}(w)) = O(\sqrt{(n\sigma^2)(n\eps)}) = O(n\sqrt{\sigma^2 \eps}),
$$
as desired.
We also note that, for any such $v$,
$$
\sum_{i\in S} (v\cdot (\nabla f_i(w) -\nabla \bar{f}(w)))^2 \leq \sum_{i\in S} (v\cdot (\nabla f_i(w) -\nabla \hat{f}(w)))^2 + \delta^2/|S| = O(n \sigma^2) + \delta^2/|S|
$$
(or otherwise our filter would have removed elements). Since $|S\backslash I_{\mathrm{good}}| = O(n\eps)$, we similarly have that
$$
\left\|\sum_{i\in (S\backslash I_{\mathrm{good}})} (\nabla f_i(w) -\nabla \bar{f}(w))\right\|_2 = O(n \sigma \sqrt{\eps}+\delta\sqrt{\eps}).
$$
Combining with the above we have that
$$
\delta = O(n\sigma\sqrt{\eps}+\delta\sqrt{\eps}),
$$
and therefore, $\delta=O(n\sigma\sqrt{\eps})$ as desired.
\end{proof}
\subsection{Proof of Corollary~\ref{cor:convex-sever}}
\label{sec:sever-convex}
In this section, we show that the {\textsc{Sever}}{} algorithm finds an approximate global optimum
for convex optimization in various settings, under Assumption~\ref{ass:one-good-set}.
We do so by simply applying the guarantees of Theorem~\ref{thm:stationary-point} in a fairly black box manner.
Before we proceed with the proof of Corollary~\ref{cor:convex-sever}, we record
a simple lemma that allows us to translate an approximate critical point guarantee to an approximate
global optimum guarantee:
\begin{lemma}\label{lem:derivatives-to-sizes}
Let $f: \mathcal{H} \to \R$ be a convex function and let $x \neq y \in \mathcal{H}$. Let $v = (y-x) / \|y-x\|_2$
be the unit vector in the direction of $y-x$. Suppose that, for some $\delta$, we have $v\cdot \nabla f(x) \geq -\delta$ and $-v\cdot \nabla f(y) \geq -\delta$. Then we have that:
\begin{enumerate}
\item $|f(x)-f(y)|\leq \|x-y\|_2 \delta$.
\item If $f$ is $\xi$-strongly convex, then $|f(x)-f(y)| \leq 2\delta^2 / \xi$ \new{ and $\|x-y\|_2 \leq 2\delta/\xi$}.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $r=\|x-y\|_2 >0$ and $g(t)=f(x+tv)$.
We have that $g(0)=f(x),g(r)=f(y)$ and that $g$ is convex (or $\xi$-strongly convex)
with $g'(0)\geq -\delta$ and $g'(r)\leq \delta$. By convexity, the derivative of $g$ is increasing on $[0,r]$
and therefore $|g'(t)|\leq \delta$ for all $t\in [0,r]$. This implies that
$$
|f(x)-f(y)| = |g(r)-g(0)| = \left| \int_0^r g'(t)dt \right| \leq r\delta \;.
$$
To show the second part of the lemma, we note that if $g$ is $\xi$-strongly convex that $g''(t)\geq \xi$ for all $t$.
This implies that $g'(r)>g'(0)+\xi r$. Since $g'(r)-g'(0)\leq 2\delta$, we obtain
that $r\leq 2\delta/\xi$, from which the second statement follows.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:convex-sever}]
By applying the algorithm of Theorem~\ref{thm:stationary-point},
we can find a point $w$ that is a $\gamma' \stackrel{\text{def}}{=} (\gamma + O(\sigma \sqrt{\eps}))$-approximate critical
point of $\bar{f}$, where $\sigma \stackrel{\text{def}}{=} \sigma_0+ \sigma_1 \|w^{\ast}-w\|_2$. That is,
for any unit vector $v$ pointing towards the interior of $\mathcal{H}$,
we have that $v\cdot \nabla \bar{f}(w) \geq - \gamma'.$
To prove (i), we apply Lemma~\ref{lem:derivatives-to-sizes} to $\bar{f}$ at $w$ which gives that
$$
|\bar{f}(w)-\bar{f}(w^{\ast})| \leq r \cdot \gamma'.
$$
To prove (ii), we apply Lemma~\ref{lem:derivatives-to-sizes} to $\bar{f}$ at $w$ which gives that
$$
|\bar{f}(w)-\bar{f}(w^{\ast})| \leq 2 {\gamma'}^2/\xi.
$$
Plugging in parameters appropriately then immediately gives the desired bound.
\end{proof}
\subsection{Proof of Corollary~\ref{cor:strongly-convex-sever}}
We apply ${\textsc{Sever}}{}$ iteratively, starting with the domain $\mathcal{H}_1=\mathcal{H}$ and radius $r_1=r$. After each iteration, we know that the resulting point is close to $w^{\ast}$, and so we will be able to reduce the search radius.
At step $i$, we have a domain of radius $r_i$.
As in the proof of Corollary~\ref{cor:convex-sever} above, by applying the algorithm of Theorem~\ref{thm:stationary-point}
we can find a point $w_i$ that is a $\gamma'_i \stackrel{\text{def}}{=} (\gamma + O(\sigma'_i \sqrt{\eps}))$-approximate critical
point of $\bar{f}$, where $\sigma'_i \stackrel{\text{def}}{=} \sigma_0+ \sigma_1 r_i$.
Then using Lemma~\ref{lem:derivatives-to-sizes}, we obtain that $\|w_i-w^{\ast}\|_2 \leq 2\gamma'_i/\xi$.
Now we can define $\mathcal{H}_{i+1}$ as the intersection of $\mathcal{H}$ and the ball of radius $r_{i+1} = 2\gamma'_i/\xi$ around $w_i$ and repeat using this domain.
We have that $r_{i+1} = 2\gamma'_i/\xi= 2\gamma/\xi + O(\sigma_0 \sqrt{\eps}/\xi + \sigma_1 \sqrt{\eps} r_i/\xi)$. Now if we choose the constant $C$ such that the constant in this $O(\cdot)$ is $C/4$, then using our assumption that $\xi \geq C \sigma_1 \sqrt{\eps}$, we obtain that
$$r_{i+1} \leq 2\gamma/\xi + C \sigma_0 \sqrt{\eps}/(4\xi)+ C\sigma_1 \sqrt{\eps} r_i/(4\xi) \leq
2\gamma/\xi + C \sigma_0 \sqrt{\eps}/(4\xi) + r_i/4 \;.$$
Now if $r_i \geq 8\gamma/\xi + 2C \sigma_0 \sqrt{\eps}/\xi$, then we have $r_{i+1} \leq r_i/2$, and if
$r_i \leq 8\gamma/\xi + 2C \sigma_0 \sqrt{\eps}/\xi$, then we also have $r_{i+1} \leq 8\gamma/\xi + 2C \sigma_0 \sqrt{\eps}/\xi$. When $r_i$ is smaller than this threshold we stop and output $w_i$.
Thus we stop in at most $O(\log(r) -\log(8\gamma/\xi + 2C \sigma_0 \sqrt{\eps}/\xi))=O(\log(r\xi/(\gamma + \sigma_0 \sqrt{\eps})))$ iterations and have $r_i=O(\gamma/\xi + \sigma_0 \sqrt{\eps}/\xi)$. But then $\gamma'_i =\gamma + O(\sigma'_i \sqrt{\eps}) = \gamma + O((\sigma_0 + \sigma_1 r_i) \sqrt{\eps}) = O(\gamma + \sigma_0 \sqrt{\eps}).$ Using Lemma~\ref{lem:derivatives-to-sizes} we obtain that
$$
|\bar{f}(w_i)-\bar{f}(w^{\ast})| \leq 2 \gamma'^2_i/\xi = O(\gamma^2/\xi + \sigma_0^2 \eps/\xi),
$$
as required.
The bound on $\|\widehat{w} - w^*\|_2$ follows similarly.
\begin{remark}
While we don't give explicit bounds on the number of calls to the approximate learner needed by {\textsc{Sever}}{},
such bounds can be straightforwardly obtained under appropriate assumptions on the $f_i$ (see, e.g., the following subsection).
Two remarks are in order. First, in this case we cannot take advantage of assumptions that only hold for $\bar{f}$ but might not hold for the corrupted average $f$.
Second, our algorithm can take advantage of a closed form for the minimum.
For example, for the case of linear regression considered in Section~\ref{sec:app-general},
$f_i$ is not Lipschitz with a small constant if $x_i$ is far from the mean,
but there is a simple closed form for the minimum of the least squares loss.
\end{remark}
\subsection{Proof of Proposition~\ref{prop:sample-bound}}
\label{sec:sever-sample-complexity}
We let $I_{\mathrm{good}}$ be the set of uncorrupted functions $f_i$. It is then the case that $|I_{\mathrm{good}}|\geq (1-\eps)n$. We need to show that for each $w\in\mathcal{H}$ that
\begin{equation}\label{eqn:cov-bound}
\Cov_{i\in I_{\mathrm{good}}}[\nabla f_i(w)] \preceq 3\sigma^2 I/4
\end{equation}
and
\begin{equation}\label{eqn:average-grad-error-bound}
\left\|\nabla \bar{f}(w) - \frac{1}{|I_{\mathrm{good}}|}\sum_{i\in I_{\mathrm{good}}} \nabla f_i(w)\right\|_2 \leq O(\sigma \sqrt{\eps}).
\end{equation}
We will proceed by a cover argument. First we claim that, for each fixed $w\in \mathcal{H}$, \eqref{eqn:cov-bound} and \eqref{eqn:average-grad-error-bound} hold with high probability. For Equation \eqref{eqn:cov-bound}, it suffices to show that, for each unit vector $v$ in a cover $\mathcal{N}$ of the sphere of size $2^{O(d)}$,
\begin{equation}\label{eqn:direction-var-bound}
\E_{i\in I_{\mathrm{good}}}[(v\cdot (\nabla f_i(w)-\nabla\bar{f}(w)))^2] \leq 2\sigma^2 /3.
\end{equation}
However, we note that
$$
\E_{p^\ast}[(v\cdot (\nabla f(w)-\nabla\bar{f}(w)))^2] \leq \sigma^2/2.
$$
Since $|v\cdot (\nabla f(w)-\nabla\bar{f}(w))|$ is always bounded by $L$, Equation \eqref{eqn:direction-var-bound} holds for each $v,w$ with probability at least $1-\exp(-\Omega(n\sigma^2 /L^2))$ by a Chernoff bound (noting that the removal of an $\eps$-fraction of points cannot increase this by much). Similarly, to show Equation \eqref{eqn:average-grad-error-bound}, it suffices to show that, for each such $v$,
\begin{equation}\label{eqn:directional-average-grad-error-bound}
\E_{i\in I_{\mathrm{good}}}[v\cdot (\nabla f_i(w)-\nabla\bar{f}(w))] \leq O(\sigma \sqrt{\eps}).
\end{equation}
Noting that
$$
\E_{p^\ast}[v\cdot (\nabla f(w)-\nabla\bar{f}(w))]=0 \; ,
$$
a Chernoff bound implies that, with probability $1-\exp(-\Omega(n\sigma^2 \eps/L^2))$, the average of $v\cdot (\nabla f(w)-\nabla\bar{f}(w))$ over our original set of $f$'s is $O(\sigma \sqrt{\eps})$.
Assuming that Equation \eqref{eqn:direction-var-bound} holds, removing an $\eps$-fraction of these $f$'s cannot change this value by more than $O(\sigma \sqrt{\eps})$.
By union bounding over $\mathcal{N}$ and standard net arguments, this implies that
Equations \eqref{eqn:cov-bound} and \eqref{eqn:average-grad-error-bound} hold with probability $1-\exp(O(d) - \Omega(n\sigma^2 \eps/L^2))$ for any given $w$.
To show that our conditions hold for all $w\in \mathcal{H}$, we note that by $\beta$-smoothness, if Equation \eqref{eqn:average-grad-error-bound} holds for some $w$, it holds for all other $w'$ in a ball of radius $\sqrt{\sigma^2 \eps}/\beta$ (up to a constant multiplicative loss). Similarly, if Equation \eqref{eqn:cov-bound} holds at some $w$, it holds with bound $\sigma^2 I$ for all $w'$ in a ball of radius $\sigma^2 /(2L\beta)$. Therefore, if Equations \eqref{eqn:cov-bound} and \eqref{eqn:average-grad-error-bound} hold for all $w$ in a $\min(\sqrt{\sigma^2 \eps}/\beta,\sigma/(2L\beta))$-cover of $\mathcal{H}$, the assumptions of Theorem \ref{thm:stationary-point} will hold everywhere. Since we have such covers of size $\exp(O(d\log(r\beta L/(\sigma^2 \eps))))$, by a union bound, this holds with high probability if
\[
n = \Omega \left( \frac{d L^2 \log (r \beta L / \sigma^2 \eps)}{\sigma^2 \eps} \right) \; ,
\]
as claimed.
\subsection{Support Vector Machines}
We next describe our experimental results for SVMs; we tested our method on
a synthetic Gaussian dataset as well as a spam classification task.
Similarly to before, the synthetic data consists of observations $(x_i, y_i)$, where $x_i \in \mathbb{R}^{500}$ has independent standard Gaussian entries,
and $y_i = \operatorname{sign}(\langle x_i, w^* \rangle + 0.1z_i)$, where $z_i$ is also Gaussian
and $w^*$ is the true parameters (drawn at random from the unit sphere).
The spam dataset comes from the Enron corpus~\cite{metsis2006spam}, and
consists of $4137$ training points and $1035$ test points in $5116$ dimensions.
To generate attacks, we used the data poisoning algorithm presented in
\cite{koh2018stronger}.
In contrast to ridge regression, we did not perform centering and rescaling for these
datasets as it did not seem to have a large effect on results.
In all experiments for this section, each method removed the top $p=\frac{n_- + n_+}{\min\{n_+,n_-\}} \cdot \frac{\epsilon}{r}$ fraction of highest-scoring
points for each of $r = 2$ iterations, where $n_+$ and $n_-$ are the number of positive and negative training points respectively.
This expression for $p$ is chosen in order to account for class imbalance, which is extreme in the case of the Enron dataset -- if the attacker plants all the outliers in the smaller class, then a smaller value of $p$ would remove too few points, even with a perfect detection method.
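For concreteness, the following minimal sketch (with illustrative values for quantities such as the sample size, which are not specified above) shows how a synthetic SVM instance of the kind described above and the class-balanced removal fraction $p$ could be generated in Python; it is an illustration, not the code used in our experiments.
\begin{verbatim}
import numpy as np

d, n, eps, r = 500, 5000, 0.02, 2       # n is illustrative; eps and r as in the text
rng = np.random.default_rng(0)

w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)        # true parameter, uniform on the unit sphere

X = rng.normal(size=(n, d))             # independent standard Gaussian entries
y = np.sign(X @ w_star + 0.1 * rng.normal(size=n))

n_pos, n_neg = int(np.sum(y > 0)), int(np.sum(y < 0))
p = (n_pos + n_neg) / min(n_pos, n_neg) * eps / r   # removal fraction per iteration
\end{verbatim}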
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplot,name=svm_synth_loss, legend style={anchor=north west, xshift=-0.35 \plotwidth, yshift=-1.2\plotheight}, legend columns=4, title={SVM: Strongest attacks against {\bf loss}{} on synthetic data}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/uncorrupted.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/loss.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_synth_loss/sever.txt};
\legend{uncorrupted, {\bf loss}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplot,name=svm_synth_sever, at=(svm_synth_loss.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks against {\textsc{Sever}}{} on synthetic data}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/uncorrupted.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/loss.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_synth_sever/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ versus test error for {\bf loss}{} baseline and {\textsc{Sever}}{} on synthetic data. The left figure demonstrates that {\textsc{Sever}}{} is accurate when outliers manage to defeat {\bf loss}{}.
The right figure shows the result of attacks which increased the test error the most against {\textsc{Sever}}{}. Even in this case, {\textsc{Sever}}{} performs much better than the baselines.}
\label{fig:svm-synthetic}
\end{figure}
\paragraph{Synthetic results.}
We considered fractions of outliers ranging from $\epsilon = 0.005$ to $\epsilon = 0.03$.
By performing a sweep across hyperparameters of the attack, we generated
$56$ distinct sets of attacks for each value of $\epsilon$.
In Figure~\ref{fig:svm-synthetic}, we show results for the attack where the {\bf loss}{} baselines does the
worst, as well as for the attack where our method does the worst.
When attacks are most effective against {\bf loss}{}, {\textsc{Sever}}{} substantially outperforms it, nearly matching the test error of $5.8\%$ on the uncorrupted data, while {\bf loss}{} performs worse than $30\%$ error at just a $1.5\%$ fraction of injected outliers.
Even when attacks are most effective against {\textsc{Sever}}{}, it still outperforms {\bf loss}{}, achieving a test error of at most $9.05\%$.
We note that other baselines behaved qualitatively similarly to {\bf loss}{}, and the results are displayed in Section~\ref{sec:experiments-app}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[errplottriple,name=svm_enron_gradientCentered, align=center, title={SVM: Strongest attacks against \\ {\bf gradientCentered}{} on Enron}, legend columns= 3, legend style={anchor=north west, xshift=-0.15 \plotwidth, yshift=-0.8\plotheight}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/uncorrupted.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_gradientCentered/sever.txt};
\legend{uncorrupted, {\bf loss}{}, {\bf gradient}{}, {\bf gradientCentered}{}, {\textsc{Sever}}{}}
\end{axis}
\begin{axis}[errplottriple,name=svm_enron_loss, align=center, at=(svm_enron_gradientCentered.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks \\ against {\bf loss}{} on Enron}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/uncorrupted.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_loss/sever.txt};
\end{axis}
\begin{axis}[errplottriple,name=svm_enron_sever, align=center, at=(svm_enron_loss.north east),anchor=north west, xshift=\plotxspacing,ignore legend, title={SVM: Strongest attacks \\ against {\textsc{Sever}}{} on Enron}]
\addplot[teal, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/uncorrupted.txt};
\addplot[violet, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/loss.txt};
\addplot[magenta, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/gradient.txt};
\addplot[orange, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/gradientCentered.txt};
\addplot[black, mark=|] table[x=eps, y=err] {figures/svm_enron_sever/sever.txt};
\end{axis}
\end{tikzpicture}
\caption{$\epsilon$ versus test error for baselines and {\textsc{Sever}}{} on the Enron spam corpus.
The left and middle figures are the attacks which perform best against two baselines, while the right figure performs best against {\textsc{Sever}}{}.
Though other baselines may perform well in certain cases, only {\textsc{Sever}}{} is consistently accurate.
The exception is for certain attacks at $\epsilon = 0.03$, which, as shown in Figure~\ref{fig:spam-histogram}, require three rounds of outlier removal for any method to obtain reasonable test error -- in these plots, our defenses perform only two rounds.}
\label{fig:spam-results}
\end{figure}
\begin{figure*}[h!]
\includegraphics[width=0.33\textwidth]{figures/hist-enron-sever-hidden1.png}
\includegraphics[width=0.33\textwidth]{figures/hist-enron-sever-hidden2.png}
\includegraphics[width=0.33\textwidth]{figures/hist-enron-sever-hidden3.png}
\caption{An illustration of why multiple rounds of filtering are necessary. Histograms of scores assigned by {\textsc{Sever}}{} in three subsequent iterations of outlier removal. Inliers are blue, and outliers are red (scaled up by a factor of 10). In early iterations, a significant fraction of outliers may be ``hidden'' (i.e. have 0 loss) by being correctly classified in one iteration. However, once previous outliers are removed, these points may become incorrectly classified, thus significantly degrading the quality of our solution but simultaneously becoming evident to {\textsc{Sever}}{}.}
\label{fig:spam-histogram}
\end{figure*}
\paragraph{Spam results.}
For results on Enron, we used the same values of $\epsilon$, and considered $96$ distinct
hyperparameters for the attack. There was not a single attack that simultaneously defeated
all of the baselines, so in Figure~\ref{fig:spam-results} we show two attacks that do well
against different sets of baselines, as well as the attack that performs best against our method.
At $\epsilon = 0.01$, the worst performance of our method against all attacks was $7.34\%$,
in contrast to $13.43\%-20.48\%$ for the baselines (note that the error is $3\%$ in the absence of outliers). However, at $\epsilon = 0.03$, while we still outperform the baselines, our error is
relatively large---$13.53\%$.
To investigate this further, we looked at all $48$ attacks and found that while on
$42$ out of $48$ attacks our error never exceeded $7\%$, on $6$ of the attacks (including
the attack in Figure~\ref{fig:spam-results}) the error was substantially higher.
Figure~\ref{fig:spam-histogram} shows what is happening.
The leftmost figure displays the scores assigned by {\textsc{Sever}}{} after the first iteration, where red bars indicate outliers.
While some outliers are assigned extremely large scores and thus detected, several outliers are correctly classified and thus have 0 gradient.
However, once we remove the first set of outliers, some outliers which were previously correctly classified now have large score, as displayed in the middle figure.
Another iteration of this process produces the rightmost figure, where almost all the remaining outliers have large score and will thus be removed by {\textsc{Sever}}{}.
This demonstrates that some outliers may be hidden until other outliers are removed, necessitating multiple iterations.
Motivated by this, we re-ran our method against the $6$ attacks using $r = 3$ iterations instead of
$2$ (and decreasing $p$ as per the expression above). After this
change, all $6$ of the attacks had error at most $7.4\%$.
\section{Framework and Algorithm}
\label{sec:theory}
In this section, we describe our formal framework as well as the {\textsc{Sever}}{} algorithm.
\subsection{Formal Setting}
We will consider stochastic optimization tasks, where there is some true
distribution $p^{\ast}$ over functions $f : \mathcal{H} \to \mathbb{R}$, and our goal is to
find a parameter vector $w^{\ast} \in \mathcal{H}$ minimizing $\overline{f}(w) \stackrel{\text{def}}{=} \mathbb{E}_{f \sim p^{\ast}}[f(w)]$.
Here we assume $\mathcal{H} \subseteq \mathbb{R}^d$ is a space of possible parameters.
As an example, we consider linear regression with squared loss, where $f(w) = \frac{1}{2}(w \cdot x - y)^2$
for $(x,y)$ drawn from the data distribution; or support vector machines with hinge loss, where
$f(w) = \max\{ 0, 1 - y (w \cdot x) \}$.
We will use the former as a running example for the theory part of the body of this paper.
To help us learn the parameter vector $w^{\ast}$, we have access to a \emph{training set}
of $n$ functions $f_{1:n} \stackrel{\text{def}}{=} \{f_1, \ldots, f_n\}$. (For linear regression, we would have
$f_i(w) = \frac{1}{2}(w \cdot x_i - y_i)^2$, where $(x_i, y_i)$ is an observed data point.)
However, unlike the classical (uncorrupted) setting where we assume that
$f_1, \ldots, f_n \sim p^{\ast}$, we allow for an $\epsilon$-fraction of the points to be
arbitrary outliers:
\begin{definition}[$\epsilon$-contamination model] \label{def:eps-contam}
Given $\eps > 0$ and a distribution $p^{\ast}$ over functions $f : \mathcal{H} \to \mathbb{R}$, data is generated as follows:
first, $n$ clean samples $f_1, \ldots, f_{n}$ are drawn from $p^{\ast}$.
Then, an \emph{adversary} is allowed to inspect the samples and replace
any $\epsilon n$ of them with arbitrary samples.
The resulting set of points is then given to the algorithm.
\new{We will call such a set of samples {\em $\eps$-corrupted (with respect to $p^{\ast}$)}.}
\end{definition}
In the $\epsilon$-contamination model, the adversary is allowed to both add and remove points.
Our theoretical results hold in this strong robustness model.
Our experimental evaluation uses corrupted instances
in which the adversary is only allowed to add corrupted points. Additive corruptions
essentially correspond to Huber's contamination model~\cite{Huber64}
in robust statistics.
Finally, we will often assume access to a black-box learner, which we denote
by $\mathcal{L}$, which takes in functions $f_1, \ldots, f_n$ and outputs a
parameter vector $w \in \mathcal{H}$. We want to stipulate that $\mathcal{L}$ approximately
minimizes $\frac{1}{n} \sum_{i=1}^n f_i(w)$. For this purpose, we introduce the
following definition:
\begin{restatable}[$\gamma$-approximate critical point]{definition}{approxcrit}
\label{def:approx-crit}
Given a function $f:\mathcal{H}\rightarrow \R$, a $\gamma$-approximate critical point of $f$,
is a point $w\in \mathcal{H}$ so that for all unit vectors $v$ where $w+\delta v\in \mathcal{H}$
for arbitrarily small positive $\delta$, we have that $v\cdot \nabla f(w) \geq -\gamma$.
\end{restatable}
Essentially, the above definition means that the value of $f$ cannot be decreased
much by changing the input $w$ locally, while staying within the domain.
The condition enforces that moving in any direction $v$ either causes us
to leave $\mathcal{H}$ or causes $f$ to decrease at a rate at most $\gamma$.
\new{It should be noted that when $\mathcal{H} = \R^d$, our above notion of approximate
critical point reduces to the standard notion of approximate stationary point
(i.e., a point where the magnitude of the gradient is small).}
We are now ready to define the notion of a \emph{$\gamma$-approximate} learner:
\begin{restatable}[$\gamma$-approximate learner]{definition}{approxlearner}
\label{def:approx-learner}
A learning algorithm $\mathcal{L}$ is called \emph{$\gamma$-approximate} if, for any
functions $f_1, \ldots, f_n : \mathcal{H} \to \mathbb{R}$ each bounded below on a closed domain $\mathcal{H}$, the output ${w} = \mathcal{L}(f_{1:n})$ of
$\mathcal{L}$ is a $\gamma$-approximate critical point of $f(x):=\frac{1}{n}\sum_{i=1}^n f_i(x)$.
\end{restatable}
In other words, $\mathcal{L}$ always finds an approximate critical point of the empirical
learning objective. We note that most common learning algorithms (such as stochastic gradient
descent) satisfy the $\gamma$-approximate learner property.
For our example of linear regression, gradient descent could be performed using the gradient $\frac1n\sum_{i=1}^n x_i(w \cdot x_i - y_i)$.
However, in some cases, a more efficient and direct method is to set the gradient equal to 0 and solve for $w$.
In our linear regression example, this gives us a closed form solution for the optimal parameter vector.
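As a minimal sketch (assuming the observations are stacked into a design matrix $X$ and response vector $y$; this is an illustration rather than our experimental code), the closed-form step reads:
\begin{verbatim}
import numpy as np

def least_squares_critical_point(X, y):
    # Solve sum_i x_i (w . x_i - y_i) = 0, i.e. the normal equations
    # (X^T X) w = X^T y; assumes X^T X is invertible (otherwise use lstsq).
    return np.linalg.solve(X.T @ X, X.T @ y)
\end{verbatim}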
\subsection{Algorithm and Theory}
\label{sec:alg}
As outlined in Figure~\ref{fig:pipeline}, our algorithm works by post-processing
the gradients of a black-box learning algorithm.
The basic intuition is as follows: we want to ensure that the outliers do not have a
large effect on the learned parameters. Intuitively, for the outliers to have such
an effect, their corresponding gradients should be (i) large in magnitude and (ii)
systematically pointing in a specific direction. We can detect this via singular value
decomposition--if both (i) and (ii) hold then the outliers should be responsible for a
large singular value in the matrix of gradients, which allows us to detect and remove them.
This is shown more formally via the pseudocode in Algorithm~\ref{alg:sever}.
\input pseudocode
\input pseudocode-filter
For concreteness, we describe how the algorithm would work for our running example of linear regression.
First, we would solve for the optimal parameter vector on the dataset, disregarding issues of robustness.
Specifically, we let $\hat w$ be the solution to $\sum_{i=1}^n x_i (\hat w \cdot x_i - y_i) = 0$: setting the gradient equal to $0$ will give us a critical point, as desired.
We compute the average gradient, $\frac1n\sum_{i=1}^n x_i (\hat w \cdot x_i - y_i),$ and use this to compute the matrix of centered gradients $G$.
That is, the $j$th row of $G$, $G_j$, is the vector $x_j (\hat w \cdot x_j - y_j) - \frac1n\sum_{i=1}^n x_i (\hat w \cdot x_i - y_i)$.
We compute the top right singular vector of $G$, project the data into this direction, and square the resulting magnitudes to derive a score for each point: $\tau_j = (G_j\cdot v)^2$.
With these scores in place, we run Algorithm~\ref{alg:filter}, to (randomly) remove some of the points with the largest scores.
We re-run the entire procedure on this subset of points, until Algorithm~\ref{alg:filter} does not remove any points, at which point we terminate.
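The following minimal sketch shows one such scoring step for the linear-regression case (the helper name is hypothetical, and the actual experiments use the practical variant described in Section~\ref{sec:experiments}):
\begin{verbatim}
import numpy as np

def sever_scores_linreg(X, y):
    # Fit on the current (possibly corrupted) set of points.
    w_hat = np.linalg.solve(X.T @ X, X.T @ y)
    # Per-point gradients of the squared loss: x_j (w_hat . x_j - y_j).
    G = X * (X @ w_hat - y)[:, None]
    # Center the gradients at their average.
    G = G - G.mean(axis=0)
    # Top right singular vector of the centered gradient matrix.
    v = np.linalg.svd(G, full_matrices=False)[2][0]
    # Scores are squared projections onto v.
    return (G @ v) ** 2, w_hat
\end{verbatim}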
\paragraph{Theoretical Guarantees.}
Our first theoretical result says that as long as the data is not too heavy-tailed,
{\textsc{Sever}}{} will find an approximate critical point of the true function $\overline{f}$,
even in the presence of outliers.
\begin{theorem} \label{thm:stationary-point-inf}
Suppose that functions $f_1,\ldots,f_n,\bar{f}:\mathcal{H}\rightarrow \R$ are bounded below on a closed domain $\mathcal{H}$,
and suppose that they satisfy the following deterministic regularity conditions: There exists a set $I_{\mathrm{good}} \subseteq [n]$
with $|I_{\mathrm{good}}| \geq (1-\eps)n$ and $\sigma>0$ such that
\begin{itemize}
\item[(i)] $\Cov_{I_{\mathrm{good}}}[\nabla f_i(w)] \preceq \sigma^2 I$, $w \in \mathcal{H}$,
\item[(ii)] $\|\nabla \fhat(w) - \nabla \bar{f}(w)\|_2 \leq \sigma \sqrt{\epsilon}$,
$w \in \mathcal{H}$, where $\fhat \stackrel{\text{def}}{=} (1/|I_{\mathrm{good}}|) \sum_{i \in I_{\mathrm{good}}} f_i$.
\end{itemize}
Then our algorithm {\textsc{Sever}}{} applied to $f_1,\ldots,f_n, \sigma$ returns a point $w \in \mathcal{H}$
that, with probability at least $9/10$, is a $(\gamma+O(\sigma \sqrt{\eps}))$-approximate critical point of $\bar{f}$.
\end{theorem}
The key take-away from Theorem~\ref{thm:stationary-point-inf}
is that the error guarantee has no dependence on
the underlying dimension $d$. In contrast, most natural algorithms incur an
error that grows with $d$, and hence have poor robustness in high dimensions.
\new{We show that under some niceness assumptions on $p^{\ast}$,
the deterministic regularity conditions are satisfied with high probability
with polynomially many samples:
\begin{proposition}[Informal] \label{prop:sample-bound-inf}
Let $\mathcal{H} \subset \R^d$ be a closed bounded set with diameter at most $r$.
Let $p^{\ast}$ be a distribution over functions $f:\mathcal{H}\rightarrow \R$ and $\bar{f}=\E_{f \sim p^{\ast}}[f]$.
Suppose that for each $w\in \mathcal{H}$ and unit vector $v$ we have
$\E_{f \sim p^{\ast}}[(v\cdot (\nabla f(w)-\nabla\bar{f}(w)))^2] \leq \sigma^2.$
Under appropriate Lipschitz and smoothness assumptions,
for $n = \Omega( d\log(r/(\sigma^2\eps))/(\sigma^2 \eps))$,
an $\eps$-corrupted set of functions drawn i.i.d. from $p^\ast$, $f_1,\ldots,f_n$ with high probability
satisfy conditions (i) and (ii).
\end{proposition}
\noindent The reader is referred to Proposition~\ref{prop:sample-bound} in the appendix for a detailed formal statement.
}
While Theorem~\ref{thm:stationary-point-inf} is very general and holds even for non-convex
loss functions, we might in general hope for more than an approximate critical point.
In particular, for convex problems, we can guarantee that we find an approximate global minimum.
This follows as a corollary of Theorem~\ref{thm:stationary-point-inf}:
\begin{corollary}\label{cor:convex-sever-inf}
Suppose that $f_1, \ldots, f_n: \mathcal{H} \to \R$ satisfy the regularity conditions (i) and (ii),
and that $\mathcal{H}$ is convex with $\ell_2$-radius r.
Then, with probability at least $9/10$, the output of {\textsc{Sever}}{} satisfies the following:
\begin{enumerate}
\item[(i)] If $\Ef$ is convex, the algorithm finds a $w \in \mathcal{H}$ such that
$\bar{f}(w) - \bar{f}(w^*) = O((\sigma \sqrt{\epsilon} + \gamma) r)$.
\item[(ii)] If $\Ef$ is $\xi$-strongly convex, the algorithm finds a $w \in \mathcal{H}$ such that
$ \bar{f}(w) - \bar{f}(w^*) = O\left((\eps\sigma^2 + \gamma^2)/{\xi} \right)$.
\end{enumerate}
\end{corollary}
\paragraph{Practical Considerations.}
For our theory to hold, we need to use the randomized filtering algorithm
shown in Algorithm~\ref{alg:filter} (which is essentially the robust mean estimation
algorithm of~\cite{DKK+17}), and filter until the stopping condition
in line \ref{until-step} of Algorithm~\ref{alg:sever} is satisfied. However,
in practice we found that the following simpler algorithm worked well: in
each iteration simply remove the top $p$ fraction of outliers according to the scores
$\tau_i$, and instead of using a specific stopping condition, simply repeat
the filter for $r$ iterations in total. This is the version of {\textsc{Sever}}{} that
we use in our experiments in Section~\ref{sec:experiments}.
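A minimal sketch of this practical variant appears below; \texttt{learner} and \texttt{score} are placeholder callables (not the exact routines used in our experiments), where \texttt{score} could, for example, compute the squared projections onto the top singular direction of the centered gradients as sketched earlier.
\begin{verbatim}
import numpy as np

def sever_practical(X, y, learner, score, p=0.01, r=4):
    # Keep an index set of active points and filter for r rounds.
    idx = np.arange(len(y))
    for _ in range(r):
        w = learner(X[idx], y[idx])          # approximate critical point on current data
        tau = score(X[idx], y[idx], w)       # outlier scores for the active points
        n_keep = int(np.ceil((1 - p) * len(idx)))
        idx = idx[np.argsort(tau)[:n_keep]]  # drop the top-p fraction of scores
    return learner(X[idx], y[idx])
\end{verbatim}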
\paragraph{Concrete Applications}
We also provide several concrete applications of our general theorem,
particularly for optimization problems that arise in learning generalized linear models.
In this setting, we are given a set of pairs $(X,Y)$ where $X\in \R^d$ and $Y$ is in some (usually discrete) set.
One then tries to find some vector $w$ that minimizes some appropriate loss function $L(w,(X,Y)) = \sigma_Y(w\cdot X)$.
For example, the standard hinge-loss has $Y\in \{\pm 1\}$ and $L = \max(0,1-Y(w\cdot X))$.
Similarly, the logistic loss function is $\log(1+\exp(-Y(w\cdot X)))$. In both cases, we show that an approximate
minimizer of the loss function can be found with a near-optimal number of samples,
even under $\eps$-corruptions (for exact theorem statements see Theorems \ref{thm:result-svm} and \ref{thm:logreg}).
\begin{theorem}[Informal Statement]
Let $D_{X,Y}$ be a distribution over $\R^d\times\{\pm 1\}$ so that $\E[XX^T]\preceq I$
and so that not too many $X$ values lie near any hyperplane.
Let $(X_1,Y_1),\ldots,(X_n,Y_n)$ be $n=\tilde O(d/\eps)$ $\eps$-corrupted samples from $D_{X,Y}$.
Let $L$ be either the hinge loss or logistic loss function.
Then there exists a polynomial time algorithm that with probability $9/10$ returns a vector $w$ that minimizes
$
\E_{(X,Y)\sim D_{X,Y}}(L(w,(X,Y)))
$ up to an additive $\tilde O(\eps^{1/4})$ error.
\end{theorem}
Another application allows us to use the least-squares loss function ($L(w,(X,Y))=(Y-w\cdot X)^2$)
to perform linear regression under somewhat more restrictive assumptions (see Theorem \ref{thm:linreg} for the full statement):
\begin{theorem}[Informal Statement]
Let $D_{X,Y}$ be a distribution over $\R^d\times \R$ where $Y=w^{\ast}\cdot X+e$ for some independent $e$
with mean $0$ and variance $1$. Assume furthermore, that $\E[XX^T]\preceq I$ and that $X$ has bounded fourth moments.
Then there exists an algorithm that given $O(d^5/\eps^2)$ $\eps$-corrupted samples from $D$,
computes a value $w\in \R^d$ so that with high probability $\|w-w^{\ast}\|_2 = O(\sqrt{\eps})$.
\end{theorem}
\subsection{Overview of {\textsc{Sever}}{} and its Analysis}
For simplicity of the exposition, we restrict ourselves
to the important special case where the functions involved are convex.
We have a probability distribution $p^{\ast}$ over convex functions
on some convex domain $\mathcal{H} \subseteq \R^d$
and we wish to minimize the function $\bar{f} = \mathbb{E}_{f \sim p^{\ast}}[f]$.
This problem is well-understood in the absence of corruptions:
Under mild assumptions, if we take sufficiently many samples from $p^{\ast}$,
their average $\hat{f}$ approximates $\bar{f}$ pointwise with high probability. Hence,
we can use standard methods from convex optimization
to find an approximate minimizer for $\hat{f}$, which will in turn serve as an approximate
minimizer for $\bar{f}$.
In the robust setting, stochastic optimization becomes quite challenging:
Even for the most basic special cases of this problem (e.g., mean estimation, linear regression)
a {\em single} adversarially corrupted sample can substantially
change the location of the minimum for $\hat{f}$. Moreover, naive outlier removal
methods can only tolerate a negligible fraction $\eps$ of corruptions (corresponding to $\eps = O(d^{-1/2})$).
A first idea to get around this obstacle is the following: We consider the standard
(projected) gradient descent method used to find the minimum of $\hat{f}$.
This algorithm would proceed by repeatedly computing the gradient of $\hat{f}$
at appropriate points and using it to update the current location.
The issue is that adversarial corruptions can completely compromise
this algorithm's behavior, since they can substantially
change the gradient of $\hat{f}$ at the chosen points.
The key observation is that approximating the gradient of $\bar{f}$ at a given point,
given access to an $\eps$-corrupted set of samples,
can be viewed as a robust mean estimation problem.
We can thus use the robust mean estimation algorithm of~\cite{DKK+17},
which succeeds under fairly mild assumptions about the good samples.
Assuming that the covariance matrix of $\nabla f(w)$, $f \sim p^{\ast}$,
is bounded, we can thus ``simulate'' gradient descent and compute an
approximate minimum for $\bar{f}$.
In summary, the first algorithmic idea is to use a robust mean estimation routine as a
black-box in order to robustly estimate the gradient at {\em each} iteration of (projected) gradient descent.
This yields a simple robust method for stochastic optimization
with polynomial sample complexity and running time in a very general setting
(See Appendix~\ref{sec:general-algo} for details.)
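A minimal sketch of this first idea is shown below; \texttt{per\_sample\_grads}, \texttt{robust\_mean}, and \texttt{project} are placeholders standing in, respectively, for the $\eps$-corrupted per-sample gradients, a robust mean estimation routine such as the filtering method of~\cite{DKK+17}, and projection onto $\mathcal{H}$ (the step size and iteration count are illustrative).
\begin{verbatim}
def robust_gradient_descent(per_sample_grads, robust_mean, project, w0,
                            eta=0.1, T=100):
    # per_sample_grads(w): n x d matrix of gradients at w, an eps-fraction
    # of which may be adversarially corrupted.
    w = w0
    for _ in range(T):
        g = robust_mean(per_sample_grads(w))  # robust estimate of the gradient of f-bar
        w = project(w - eta * g)              # projected gradient step on the domain H
    return w
\end{verbatim}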
We are now ready to describe {\textsc{Sever}}{} (Algorithm~\ref{alg:sever}) and the main insight behind it.
Roughly speaking, {\textsc{Sever}}{} only calls our robust mean estimation routine
(which is essentially the filtering method of~\cite{DKK+17} for outlier removal)
each time the algorithm reaches an approximate critical point of $\hat{f}$.
There are two main motivations for this approach:
First, we empirically observed that if we iteratively filter samples, keeping the subset that remains after each filtering step, then only a few iterations of the filter actually remove points.
Second, an iteration of the filter subroutine (Algorithm~\ref{alg:filter})
is more expensive than an iteration of gradient descent. Therefore, it is advantageous
to run many steps of gradient descent on the current set of corrupted samples
between consecutive filtering steps. This idea is further improved by
using stochastic gradient descent, rather than computing the average at each step.
\new{
An important feature of our analysis is that {\textsc{Sever}}{} does not use a robust mean estimation routine
as a black box. In contrast, we take advantage of the performance guarantees of
our filtering algorithm. The main idea for the analysis is as follows:
Suppose that we have reached an approximate critical point $w$ of $\hat{f}$ and
at this step we apply our filtering algorithm. By the performance guarantees of the latter algorithm
we are in one of two cases: either the filtering algorithm removes a set of corrupted functions
or it certifies that the gradient of $\hat{f}$ is ``close'' to the gradient of $\bar{f}$ at $w$.
In the first case, we make progress as we produce a ``cleaner'' set of functions.
In the second case, our certification implies that the point $w$ is also an approximate critical point of $\bar{f}$
and we are done.}
\section{Introduction}
\vspace{-2pt}
In the wake of the severe damage to Olive View Hospital during the 1971 San Fernando, California earthquake, Bertero et al.~\cite{RN77} directed the attention of engineers to coherent acceleration pulses. Such pulses in the earthquake time history result in large displacement demands on structures. Also, during the 1994 Northridge, California and 1995 Kobe, Japan earthquakes, many structures, especially tall moment-resisting frames that had been designed according to the seismic codes of that time, failed because of soft-story failure \cite{RN32, RN120, RN86, RN73}.
To prevent soft-story failure in structures, various studies have been conducted \cite{RN56, RN20, RN18}. One of the early works that introduced the concept of coupling a rocking wall with a moment-resisting frame is \cite{RN114}, and recently the seismic retrofit of an 11-story building at Tokyo University in Japan was carried out using a pinned rocking wall \cite{RN5, RN43}. Following the works of \cite{RN5, RN43}, several publications appeared promoting the seismic protection of a moment-resisting frame structure coupled with a rocking wall \cite{RN1, RN6, RN100, RN67}. Also, with the progress that has been made in the technology of precast shear walls as a seismic resisting system for structures in seismically active areas (PCI Ad Hoc Committee on Precast Walls, \cite{RN112}), several studies have been conducted using precast walls \cite{RN29, RN8, RN7, RN63, RN62}.
Most of the studies mentioned above are based on the seminal paper by Housner \cite{RN24}, which introduced the advantages of the solitary rocking column. These tall, slender columns exhibit remarkable performance and seismic stability. In his 1963 paper, Housner shows that there is a safety margin between uplifting and overturning, and that as the size of the free-standing column increases or the frequency of the excitation pulse increases, this safety margin increases appreciably, to the extent that large free-standing columns enjoy ample seismic stability. More recently, \cite{RN65} explained that as the size of the free-standing rocking column increases, the enhanced seismic stability primarily originates from the difficulty of mobilizing the rotational inertia of the column (wall), which increases with the square of the column (wall) size.
Accordingly, while it is evident that most of the seismic resistance of tall free-standing columns (or walls) essentially originates from the difficulty of mobilizing their large rotational inertia, the main emphasis of the behavior and capacity analysis of the coupled moment-frame-rocking-wall system documented in the above-referenced studies is on the inelastic behavior of the structural system (the inelastic behavior of the rocking wall-foundation interface), without analyzing the true dynamics of the system and the potential significance of the coupled dynamic effects. Clearly, there are cases where the response of the moment-resisting frame dominates the overall response and the rotational inertia effects of the rocking wall are negligible. Nevertheless, given that in principle the dynamics of the rocking wall is not negligible, and in some cases may even be unfavorable since it may drive the structure, the main motivation for this study is to examine to what extent the dynamics of a stepping or a pinned rocking wall influences the dynamic response of the coupled elastic oscillator.
The primary motivation for coupling a moment-resisting frame with a strong rocking wall is to enforce a uniform distribution of interstory drifts; as a result, the first mode of the frame becomes dominant, as was first indicated in the seminal paper by Alavi and Krawinkler \cite{RN86}. Further analytical evidence of the first-mode dominated response is offered in \cite{RN5}. These results, together with additional evidence by other investigators, were critically evaluated in a recent paper by Grigorian \cite{RN1}, who concluded that a moment-resisting frame coupled with a rocking wall can be categorized as a single-degree-of-freedom (SDOF) system. Accordingly, in this study we adopt the SDOF idealization shown in Figure~\ref{fig:1}.
\vspace{-6pt}
\section{DYNAMICS OF AN ELASTIC OSCILLATOR COUPLED WITH A STEPPING ROCKING WALL}
The dynamics of an elastic single-degree-of-freedom oscillator coupled with a stepping rocking wall is investigated in this section. The schematic of the problem is shown in Figure~\ref{fig:1}. An oscillator with stiffness $k$, damping $c$ and mass $m_s$ is coupled with a stepping rocking wall with mass $m_w$, size $R=\sqrt{b^2+h^2}$, slenderness $\tan\alpha=b/h$ and moment of inertia about the pivoting points $O$ and $O'$, $I=\frac{4}{3} m_w R^2$. For the sake of simplicity, it is assumed that the link between the wall and the oscillator connects the center of mass of the wall to the oscillator mass at a height $h$ from the foundation of the stepping rocking wall, as shown in Figure~\ref{fig:1}.
\begin{figure}[t]
\centering
\includegraphics[width=10cm]{Fig01}
\caption{Elastic SDOF oscillator coupled with a stepping rocking wall}
\label{fig:1}
\end{figure}
When the wall starts to uplift, its center of mass moves upward by $v$, so the coupling arm rotates by an angle $\phi$. The horizontal translation of the center of mass of the wall, $x$, is therefore related to the horizontal displacement, $u$, of the oscillator mass, $m_s$, via $\cos\phi=1-(u-x)/L$, in which $\phi=\sin^{-1}(v/L)$. Hence the horizontal displacement, $u$, is related to the horizontal displacement of the center of mass of the wall, $x$, through the following equation:
\begin{equation} \label{eq:1a}
\dfrac{u}{L}=1+\dfrac{x}{L}-\sqrt{1-\dfrac{v^2}{L^2}}
\end{equation}
In this paper, the coupling arm is assumed to be long enough so that $v^2/L^2$ is much smaller than unity $(v^2/L^2 \ll 1)$; in this case $u=x$. Clearly, there are cases where the coupling arm is short and the term $v^2/L^2$ is not negligible. Nevertheless, a recent study by Makris and Aghagholizadeh \cite{RN119} on the response of an elastic oscillator coupled with a rocking wall showed that the effect of a shorter coupling arm is negligible.
The system under consideration is a single-degree-of-freedom system in which the lateral translation of the mass, $u$, is related to the rotation of the stepping rocking wall, $\theta$, via the expressions:
\begin{equation} \label{eq:1}
u=\pm R[\sin\alpha-\sin(\alpha \mp \theta)]
\end{equation}
\begin{equation} \label{eq:2}
\dot{u}= R~\dot{\theta} \cos(\alpha \mp \theta)
\end{equation}
\begin{equation} \label{eq:3}
\ddot{u}=R[\ddot{\theta}\cos(\alpha \mp \theta) \pm \dot{\theta^2}\sin(\alpha \mp \theta) ]
\end{equation}
In equations (\ref{eq:1}) to (\ref{eq:3}) whenever there is a double sign (say $\pm$), the top sign is for $\theta>0$ and the bottom sign is for $\theta<0$.
\noindent Dynamic equilibrium of the mass $m_{s}$ gives:
\begin{equation} \label{eq:4}
m_{s} (\ddot{u}+\ddot{u}_{g})=-ku-c \dot{u}+T
\end{equation}
In equation (\ref{eq:4}), $T$ represents the axial force transferred by the coupling arm to the oscillator mass.\\
\noindent \textit{Case 1: $\theta>0$} :
\noindent For positive rotations $(\theta>0)$, dynamic equilibrium of the rocking stepping wall with mass $m_w$ gives:
\begin{equation} \label{eq:7}
\begin{split}
I\ddot{\theta}=-TR\cos(\alpha-\theta)-m_w g R\sin(\alpha-\theta)-m_w \ddot{u}_g R\cos(\alpha-\theta)
\end{split}
\end{equation}
\noindent The axial force $T$ appearing in equation (\ref{eq:7}) is replaced with the help of equation (\ref{eq:4}), and for a rectangular stepping wall ($I=\dfrac{4}{3}m_w R^2$), equation (\ref{eq:7}) assumes the form:
\begin{equation} \label{eq:8}
\begin{aligned}
&\dfrac{4}{3} m_w R^2 \ddot{\theta}+[m_s (\ddot{u}+\ddot{u}_g)+
ku+c\dot{u} ]R \cos(\alpha-\theta)=-m_w R[\ddot{u}_g\cos(\alpha-\theta)
+g \sin(\alpha-\theta) ]
\end{aligned}
\end{equation}
\noindent Upon dividing by $m_wR^2$ and substituting equations (\ref{eq:1}) to (\ref{eq:3}) for $u$, $\dot{u}$ and $\ddot{u}$, equation (\ref{eq:8}) assumes the form:
\begin{equation} \label{eq:9}
\begin{aligned}
&[\frac{4}{3}+\gamma \cos^2(\alpha-\theta) ]\ddot{\theta}+\gamma \cos(\alpha-\theta) \big[\omega_o^2 (\sin\alpha-\sin(\alpha-\theta) ) \\
&+2\xi\omega_o \dot{\theta} \cos(\alpha-\theta)+\dot{\theta}^2 \sin(\alpha-\theta) \big]=-\frac{g}{R} \big[(\gamma+1)\frac{\ddot{u}_g}{g} \cos(\alpha-\theta)+\sin(\alpha-\theta)\big]
\end{aligned}
\end{equation}
\noindent in which $\gamma=m_s/m_w$ is the mass ratio, $\omega_o=\sqrt{k/m_s}$ is the undamped natural frequency and $\xi$ is the viscous damping ratio of the SDOF oscillator.\\
\noindent \textit{Case 2: $\theta<0$}:\\
\noindent For negative rotations one can follow the same reasoning, and the equation of motion of the coupled system shown in Figure~\ref{fig:1} is:
\begin{equation} \label{eq:12}
\begin{aligned}
&[\frac{4}{3}+\gamma \cos^2(\alpha+\theta) ]\ddot{\theta}-\gamma \cos(\alpha+\theta) \big[\omega_o^2 (\sin\alpha-\sin(\alpha+\theta) ) \\
&-2\xi\omega_o \dot{\theta} \cos(\alpha+\theta)+\dot{\theta}^2 \sin(\alpha+\theta) \big]=\frac{g}{R} \big[-(\gamma+1)\frac{\ddot{u}_g}{g} \cos(\alpha+\theta)+\sin(\alpha+\theta)\big]
\end{aligned}
\end{equation}
In equations (\ref{eq:9}) and (\ref{eq:12}), the terms multiplied by $\gamma=m_s/m_w$ are related to the dynamic response of the elastic oscillator, while the remaining terms are related to the dynamics of the stepping rocking wall. In the absence of the elastic oscillator ($\gamma=\omega_o=\xi=0$), equations (\ref{eq:9}) and (\ref{eq:12}) reduce to the equation of motion of the solitary free-standing column \cite{RN79, RN81}.
\noindent During the oscillatory motion of the coupled system shown in Figure~\ref{fig:1}, aside from the energy that is dissipated from the inelastic behavior of the SDOF oscillator and the idealized viscous damping, additional energy is also lost during impact when the angle of rotation reverses. At this instant it is assumed that the rotation of the rocking wall continues smoothly from points $O$ to $O'$; nevertheless, the angular velocity, $\dot{\theta}_2$, after the impact is smaller than the angular velocity, $\dot{\theta}_1$, before the impact. Given that the energy loss during impact is a function of the wall-foundation interface, the coefficient of restitution, $e=\dot{\theta}_2/\dot{\theta}_1<1$, is introduced as a parameter of the problem. In this study the coefficient of restitution assumes the value of $e= 0.9$.\\
The minimum ground acceleration needed to initiate rocking (uplift) is given by \cite{RN165, RN119, RN166}:
\begin{equation} \label{eq:13}
\ddot{u}_g \geqslant \dfrac{g\tan\alpha}{\gamma+1}
\end{equation}
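For illustration, the following is a minimal numerical sketch (not the MATLAB implementation used in this study) of how equations (\ref{eq:9}) and (\ref{eq:12}) can be integrated with an event-based treatment of impact through the coefficient of restitution $e=0.9$; the one-cosine ground acceleration pulse and all parameter values are assumed purely for demonstration.
\begin{verbatim}
# Minimal sketch (assumed parameters) of integrating eqs. (9) and (12).
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81
R, alpha = 5.0, np.arctan(1/6)           # assumed wall size [m] and slenderness
gamma, wo, xi, e = 5.0, 2*np.pi/0.64, 0.02, 0.9

def ug_dd(t):                            # assumed one-cosine acceleration pulse
    Tp = 0.8
    return 0.5*g*(1 - np.cos(2*np.pi*t/Tp)) if t < Tp else 0.0

def rhs(t, y):
    th, om = y
    a = ug_dd(t)
    if th >= 0:                          # equation (9), theta > 0
        c, s = np.cos(alpha - th), np.sin(alpha - th)
        num = (-(g/R)*((gamma + 1)*a/g*c + s)
               - gamma*c*(wo**2*(np.sin(alpha) - s) + 2*xi*wo*om*c + om**2*s))
    else:                                # equation (12), theta < 0
        c, s = np.cos(alpha + th), np.sin(alpha + th)
        num = ((g/R)*(-(gamma + 1)*a/g*c + s)
               + gamma*c*(wo**2*(np.sin(alpha) - s) - 2*xi*wo*om*c + om**2*s))
    return [om, num/(4.0/3.0 + gamma*c**2)]

def impact(t, y):                        # pivot change when rotation reverses
    return y[0]
impact.terminal, impact.direction = True, 0

t, y, hist = 0.0, [1e-4, 0.0], []        # small initial rotation (uplift assumed)
while t < 5.0:
    sol = solve_ivp(rhs, (t, 5.0), y, events=impact, max_step=1e-3)
    hist.append((sol.t, sol.y[0]))
    t = sol.t[-1]
    if sol.status != 1:                  # no further impact before the end time
        break
    om_new = e*sol.y[1][-1]              # angular velocity reduced at impact
    if abs(om_new) < 1e-12:              # wall has come to rest
        break
    y = [1e-9*np.sign(om_new), om_new]   # continue rotation about the other pivot
\end{verbatim}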
\section{DYNAMICS OF AN ELASTIC OSCILLATOR COUPLED WITH A PINNED ROCKING WALL}
Using the frame retrofitting method introduced by \cite{RN86}, the studies of \cite{RN43} and \cite{RN5} proposed a pinned rocking wall for the seismic protection of an 11-story moment-resisting frame at Tokyo University, Japan. The novelty in these studies is that the rocking wall does not alternate pivot points (it is not a stepping wall), given that it is pinned at mid-width as shown in Figure~\ref{fig:2}.
\begin{figure}[b]
\centering
\includegraphics[width=10cm]{Fig02}
\caption{Elastic SDOF oscillator coupled with a pinned rocking wall}
\label{fig:2}
\end{figure}
The system with the pinned rocking wall shown in Figure~\ref{fig:2} is also a SDOF system (by the same reasoning as in the previous case), and the translation of the oscillator mass can be expressed in terms of the rotation of the pinned wall as follows:
\begin{equation}
u=h\sin\theta
\label{eq:pin_disp}
\end{equation}
Its time derivatives are:
\begin{equation}
\dot{u}=h\dot{\theta}\cos\theta
\label{eq:pin_velo}
\end{equation}
\begin{equation}
\ddot{u}=h\ddot{\theta}\cos\theta-h\dot{\theta}^2\sin\theta
\label{eq:pin_acc}
\end{equation}
The system shown in Figure~\ref{fig:2} is a single-degree-of-freedom oscillator with mass $m_s$, stiffness $k$ and damping $c$, coupled with a pinned wall of size $R=\sqrt{b^2+h^2}$, slenderness $\tan\alpha=b/h$, mass $m_w$ and moment of inertia about the pin $O$, $I=m_w R^2 (1/3+\cos^2\alpha)$.
Dynamic equilibrium of the oscillator mass, $m_s$, is again given by equation (\ref{eq:4}). In this case the equation of motion of the pinned rocking wall is the same for positive and negative rotations:
\begin{equation}
I\ddot{\theta}=-Th \cos\theta+m_w g h \sin\theta-m_w \ddot{u}_g h \cos\theta
\label{eq:15_p}
\end{equation}
Note that, as can be seen in equation (\ref{eq:15_p}), the wall mass $m_w$ in this case works against the stability of the system. Following steps similar to those described for the stepping rocking wall, one can derive the equation of motion of the pinned rocking wall using equations (\ref{eq:pin_disp}) to (\ref{eq:15_p}):
\begin{equation}
\begin{aligned}
&[\frac{1}{3}+(1+\gamma \cos^2\theta)\cos^2\alpha]\ddot{\theta}+\gamma \cos^2\alpha \cos\theta [(\omega_o^2-\dot{\theta}^2 ) \sin\theta+2\xi \omega_o \dot{\theta}\cos\theta]\\
&=-\frac{g}{R} \cos\alpha [(\gamma+1) \frac{\ddot{u}_g}{g} \cos\theta-\sin\theta]
\end{aligned}
\label{eq:EOM_pin}
\end{equation}
Equation (\ref{eq:EOM_pin}) is the equation of motion of the pinned rocking wall for both positive and negative rotations; all parameters are defined as for the stepping rocking wall.
\section{RESPONSE SPECTRA OF AN ELASTIC OSCILLATOR COUPLED WITH A ROCKING-WALL}
\begin{figure}[t]
\centering
\includegraphics[width=11cm]{Kobe}
\caption{Displacement spectra of an elastic SDOF oscillator coupled with a stepping wall (left) and a pinned wall (right) for three values of the mass ratio $\gamma=m_s/m_w=5$,10 and 20 and two values of the wall size, $\omega_o/p=10$ and 15 when subjected to the Takarazuka/000 ground motion recorded during the 1995 Kobe, Japan earthquake (bottom).}
\label{fig:kobe}
\end{figure}
In order to compute the earthquake response spectra of the systems shown in Figures~\ref{fig:1} and~\ref{fig:2}, equations (\ref{eq:9}), (\ref{eq:12}) and (\ref{eq:EOM_pin}) are used. Figure~\ref{fig:kobe} shows the displacement spectra for the stepping rocking wall (left) and the pinned rocking wall (right) when the systems are excited by the Takarazuka/000 ground motion recorded during the 1995 Kobe, Japan earthquake (bottom). The top plots are for $\omega_o/p=10$, whereas the bottom plots are for $\omega_o/p=15$, that is, for a larger wall at any given structural frequency, $\omega_o=2\pi/T_o$.
When reading the earthquake spectra shown in Figures \ref{fig:kobe} and \ref{fig:PCD}, the reader needs to recognize that as the period, $T_o$, of the SDOF oscillator increases, for a given ratio $\omega_o/p$, the size of the coupled wall also increases. For instance, for the top plots, which are for $\omega_o/p=10$, the frequency parameter, $p$, of the wall that is coupled to a structure with $T_o=0.5$ sec is $p=\omega_o/10=\frac{2\pi}{0.5}\frac{1}{10}=1.26$ rad/sec, which corresponds to a value of $R=3g/4p^2=4.66$ m; therefore, the wall with slenderness $\tan\alpha=1/6$ is $9.20$ m tall.
When a structure with $T_o=1.0$ sec is of interest, the frequency parameter, $p$, of the wall is $p=\omega_o/10=\frac{2\pi}{1.0}\frac{1}{10}=0.63$ rad/sec, which corresponds to a value of $R=3g/4p^2=18.6$ m; therefore, the wall with slenderness $\tan\alpha=1/6$ is 36.80 m tall. When observing Figure~\ref{fig:kobe}, what is worth noting is that in the case where the SDOF oscillator is coupled with a stepping wall (left plots), the presence of the stepping wall suppresses the displacement response for flexible structures (large values of $T_o$), with the heavier wall ($\gamma=5$) being most effective.
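As a quick check of this arithmetic, the short script below reproduces the wall sizes quoted above; it is only a sketch, taking the slenderness $\tan\alpha=1/6$ and $g=9.81$ m/s$^2$ from the text and using the relation $R=3g/4p^2$.
\begin{verbatim}
# Reproduces the back-of-the-envelope wall sizing used in the text.
import numpy as np

def wall_size(To, ratio=10.0, slenderness=1/6, g=9.81):
    wo = 2*np.pi/To                 # oscillator frequency [rad/s]
    p = wo/ratio                    # wall frequency parameter [rad/s]
    R = 3*g/(4*p**2)                # size of the rocking wall [m]
    h = R*np.cos(np.arctan(slenderness))
    return p, R, 2*h                # frequency parameter, size, wall height

for To in (0.5, 1.0):
    p, R, H = wall_size(To)
    print(f"To = {To} s: p = {p:.2f} rad/s, R = {R:.2f} m, height = {H:.2f} m")
\end{verbatim}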
\begin{figure}[t]
\centering
\includegraphics[width=11cm]{PCD}
\caption{Displacement spectra of an elastic SDOF oscillator coupled with a stepping wall (left) and a pinned wall (right) for three values of the mass ratio $\gamma=m_s/m_w=5$,10 and 20 and two values of the wall size, $\omega_o/p=10$ and 15 when subjected to the Pacoima Dam/164 ground motion recorded during the 1971 San Fernando, California earthquake (bottom).}
\label{fig:PCD}
\end{figure}
In contrast, in the case where the SDOF oscillator is coupled with a pinned wall (right plots), the presence of the pinned wall amplifies the response over most of the spectrum, with the heavier wall ($\gamma=5$) being most detrimental. This happens mainly because, in the case of the pinned wall, the moment from its weight, $+m_w g h \sin\theta$, works against stability, as shown in equation (\ref{eq:15_p}).
Similar trends appear in Figure~\ref{fig:PCD}, which shows displacement spectra when the coupled elastic SDOF oscillator-rocking wall system is subjected to the Pacoima Dam/164 ground motion recorded during the 1971 San Fernando, California earthquake. (In addition to earthquake spectra, \cite{RN119} analyzed the spectra of these systems under symmetric Ricker wavelet \cite{RN96} pulse accelerations.)
\section{OpenSees MODELING OF AN ELASTIC OSCILLATOR COUPLED WITH A STEPPING ROCKING WALL}
In this section, a simple model representing an elastic oscillator coupled with a stepping rocking wall is presented. The system shown in Figure~\ref{fig:Ops} is a fixed-end column with period $T_o=0.64$ s and a concentrated mass, $m_s$, at the top. The column is modeled using an elastic beam-column element in OpenSees \cite{RN59}. The rocking interface between the ground and the bottom of the stepping rocking wall is modeled using a zero-length fiber cross-section element with a nonlinear elastic, compression-only (no-tension) material. This type of cross section enables simulation of the rocking motion.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{OpS}
\caption{Simple OpenSees model representing an elastic oscillator coupled with a stepping rocking wall}
\label{fig:Ops}
\end{figure}
The only issue with this type of model is that the energy dissipated by the wall when it changes pivot point cannot be captured directly. To simulate the energy dissipation at each impact, a rotational viscous damper is defined at the bottom of the wall. The specification of this damper and its coefficient are selected following the study of \cite{RN22}. The viscous damper constant is defined as follows:
\begin{equation}
c=110~\alpha^2m_wg^{0.5}R^{1.5}
\end{equation}
in which $\alpha$ is the wall slenderness, $m_w$ is the wall mass and $R$ is the wall size (as defined in the equations of motion derived in the previous sections).
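As an illustration of this formula, the snippet below evaluates $c$ for an assumed wall (2 m wide, 12 m tall, with a mass of 50 t); the dimensions are hypothetical and only serve to show the order of magnitude.
\begin{verbatim}
# Example evaluation of the rotational damper constant (assumed wall data).
import numpy as np

b, h = 1.0, 6.0                       # half-width and half-height of the wall [m]
m_w, g = 50.0e3, 9.81                 # wall mass [kg], gravity [m/s^2]
alpha = np.arctan(b/h)                # wall slenderness [rad]
R = np.hypot(b, h)                    # wall size [m]
c = 110*alpha**2*m_w*g**0.5*R**1.5    # rotational damper constant [N.m.s/rad]
print(f"c = {c:.3e} N.m.s/rad")
\end{verbatim}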
\begin{figure}[b]
\centering
\includegraphics[width=12cm]{Comp}
\caption{Response of the system without (left) and with (right) the rotational damper when subjected to the CO2/065 ground motion recorded during the 1966 Parkfield, California earthquake.}
\label{fig:Comp}
\end{figure}
Figure~\ref{fig:Comp} shows the response of the system when subjected to the CO2/065 ground motion recorded during the 1966 Parkfield, California earthquake. The response of the system shown in Figure~\ref{fig:Ops}, computed with the OpenSees framework, is compared with the results for a system with the same parameters (period $T_o=0.64$ s and mass ratio $\gamma=m_s/m_w=5$) obtained by solving the equations of motion (\ref{eq:9}) and (\ref{eq:12}) in MATLAB. Figure~\ref{fig:Comp} (left) shows the response of the system when no viscous damper is added, while the response of the system with the viscous damper is shown in Figure~\ref{fig:Comp} (right).
This comparison clearly shows that using a rotational viscous damper is a practical way to simulate the energy dissipation during wall impact, and the results are in good agreement with the solution of the equations of motion (\ref{eq:9}) and (\ref{eq:12}).
\section{Conclusion}
This paper studied the dynamics of a single-degree-of-freedom elastic oscillator coupled with either a stepping rocking wall or a pinned rocking wall. The full nonlinear equations of motion for both cases have been derived and the responses analyzed under different earthquake time histories. This study reaches the following conclusions.
When the SDOF oscillator is coupled with a stepping rocking wall, the presence of the wall suppresses the displacement of the system, especially for flexible oscillators. On the other hand, a pinned rocking wall amplifies the response of the system, and heavier walls are more detrimental. This happens mainly because the weight of the pinned wall works against the stability of the system.
Also, a simple model of an oscillator coupled with a stepping rocking wall was built and analyzed using the OpenSees framework. This study showed that a rotational viscous damper is a practical and accurate way to capture the energy dissipated by the wall when it changes pivot point. The time history response of this model is in good agreement with the solution of the equations of motion obtained in MATLAB.
The study of the response of a yielding SDOF oscillator coupled with a rocking wall is ongoing and will be presented in a future publication.
\ack The author wishes to express his acknowledgement and gratitude to Dr. Nicos Makris whose guidance and comments helped throughout this research.
\section{Introduction and motivations}
\label{introduction}
A variety of new radio telescopes, precursors (e.g. ASKAP~\citep{johnston2008science}, MeerKAT~\citep{jonas2009meerkat}) and Pathfinders (e.g. LOFAR~\citep{van2013lofar}, NenuFAR~\citep{zarka2015nenufar}) for the Square Kilometre Array~\citep{dewdney2009square} (SKA), \Refcom{are under} development or
are used \Refcom{to image} wide field of view \Refnew{(FoV, i.e. the fractional portion of the primary beam at the full width at half maximum (FWHM))} sky surveys at high sensitivity, wide bandwidth and high spectral and temporal resolution. These radio telescopes produce extremely
large volumes of data, such that data storage and analysis are becoming increasingly challenging for scientific research and engineering requirements, e.g., in
transmitting the data from the \Refcom{receivers} to the \Refcom{correlator}, or in data reduction steps such as calibration and imaging.
A typical example is the LOFAR telescope. Its $uv$-data (visibilities), assuming 24 core stations (excluding the remote and international stations) using 244 sub-bands with 64 channels per sub-band and a 4 hour observation with 1 s temporal resolution, are predicted to
be $\sim$8376 GB using the dual high band antenna (see LOFAR calculator\footnote{{\tt lofar.astron.nl/service/pages/storageCalculator\\/calculate.jsp}}). However, observations with all the LOFAR national and international stations are capable of producing data volumes of the order of petabytes~\citep{sabater2017calibration}.
Survey capabilities with the future SKA (unprecedented sensitivity, resolution and bandwidth) are expected to generate data volumes many orders of magnitude higher than those of any existing radio interferometer.
This data volume will be \Refcom{even larger} for any SKA survey science that integrates multiple beams and/or multiple phase tracking centres, e.g., the African Very Long
Baseline Interferometry (VLBI) Network~\citep{gaylard2014african}, the European VLBI Network (EVN)~\citep{keipema2015sfxc}, etc.
New techniques for data compression and storage systems must be
developed for the transition from the current radio interferometers to the SKA.
Data compression is an advantageous solution for increasing the speed of data transmission and decreasing the computational requirements for post-processing. Data compression also offers an alternative possibility for wide FoV observations because it provides a significant reduction of \Refcom{the data volume} while preserving useful information to improve discovery and analysis accuracy.
Traditionally, radio interferometric correlators compress the visibility data by simply averaging the data, which may be averaged further in post-correlation to speed up processing.
\Refcom{However, the challenge} in compressing the visibilities by simple averaging is that these visibilities decorrelate and the decorrelation is time-frequency dependent and baseline-dependent. The visibility from a baseline $pq$ \Refcom{(with vector $\bmath{u}_{pq}=(u,v,w)$)} of a point source
with brightness $S$ and coordinates $\bmath{l}=(l,m,n-1)$ is given by:
\begin{equation}
V_{pq} = S \exp \big \{-\mathrm{i} \phi \big \}, ~ \phi(\bmath{u}_{pq}) = 2 \pi \bmath{u}_{pq}\cdot\bmath{l}.\label{eq:phase}
\end{equation}
\ATM{For sources with an increasing separation from the phase centre, the phase $\phi$ is increasingly large for a given baseline, and at some distance phase-wrapping within the averaging time-frequency will cause a strong decorrelation of the signal.}
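As a simple illustration of this effect (a sketch, not the MeerKAT simulation described below), one can average the complex fringe of a unit point source over a time bin; the assumed linear drift of $u$, about 2.7 wavelengths per second, is roughly the maximum for an 8 km East-West baseline at 1.4 GHz.
\begin{verbatim}
# Averaging the fringe of equation (1) over a time bin reduces its amplitude.
import numpy as np

S, l = 1.0, np.deg2rad(1.32)             # flux [Jy] and offset from phase centre
u_rate = 2.7                             # assumed drift of u [wavelengths/s]
for dt in (1.0, 5.0, 15.0):              # averaging intervals [s]
    t = np.linspace(-dt/2, dt/2, 1001)
    phi = 2*np.pi*(u_rate*t)*l           # phase relative to the bin centre
    amp = np.abs(np.mean(S*np.exp(-1j*phi)))
    print(f"dt = {dt:4.1f} s -> apparent amplitude = {amp:.3f} Jy")
\end{verbatim}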
\Refcom{Figure~\ref{fig:srcat30arcmin_avg} is a simulated observation with MeerKAT at 1.4 GHz showing the amplitude decorrelation for a 1 Jy point source located at $0.65$ deg, $1.32$ deg and $2.25$ deg away from the phase tracking centre as a function of East-West baseline length.}
At this frequency, a MeerKAT survey must be able to image sources up to an angular distance of 0.65 deg (edge of the FoV at the FWHM of the primary beam (PB)) from the phase tracking centre with little to no smearing effects.
But modern calibration and imaging techniques such as MeqTrees~\citep{noordam2010meqtrees} or DDFacet~\citep{ctasseDDFacet} are able to correct for PB effects
\Refcom{far beyond the second sidelobe of the PB~\citep{mitra2015incorporation}}.
An accurate PB model is necessary for calibrating out the effects of the PB, and for improving image fidelity. A good PB model can significantly reduce artefacts in the image and improve its dynamic range, and an appropriate direction-dependent calibration procedure can further reduce artefacts and increase the \Refcom{dynamic range~\citep{mitra2015incorporation}.}
\Refcom{Throughout this paper, we use the term Field-of-Interest (FoI) to differentiate from the FoV when the region of interest to be imaged exceeds \lq\lq the fractional portion of the PB at the FWHM''.}
The first and the second null of the PB of MeerKAT at 1.4 GHz fall at $\sim$1.32 and $\sim$2.25 deg respectively.
\begin{figure*}
\centering
\includegraphics[width=.4\textwidth]{effect_time_averaging_amplitude_src30-50arcmin.pdf}%
\includegraphics[width=.4\textwidth]{effect_bandwidth_averaging_amplitude_src30-50arcmin.pdf}%
\caption{\ATMR{
Amplitude loss: the apparent intensity of a 1 Jy source at 0.65 deg, 1.32 deg and 2.25 deg as seen by MeerKAT at 1.4 GHz
as a function of East-West baseline components; \Refcom{(Left): data is \Refnew{simple} averaged across 15 s in time and frequency resolution is fixed to 84 kHz;
(Right) data is \Refnew{simple} averaged across 0.84 MHz in frequency and time resolution is fixed to 1 s}.
}}\label{fig:srcat30arcmin_avg}
\end{figure*}
\Refcom{In Figure~\ref{fig:srcat30arcmin_avg} the pre-averaged data are simulated using 1 s and 84 kHz for the time and frequency resolutions respectively. To evaluate the time smearing, the data are \Refnew{simple} averaged across 15 s
while the frequency resolution remains fixed at 84 kHz. Similarly, for the bandwidth smearing, the time resolution is maintained at 1 s and the
data are \Refnew{simple} averaged across 0.84 MHz in frequency. The results show that decorrelation/smearing is more severe on
longer East-West baselines than on shorter East-West baselines, and that smearing is a function of source position in the sky.}
\Refcom{Simple averaging could be used in a way to increase the signal-to-noise ratio (SNR) within the FoI by suppressing the sidelobes from sources out of the FoI, but the drawback is that sources at the edges of the FoI will be smeared~\citep{lonsdale2004efficient,atemkeng2016}.
However, increasing the SNR based on averaging is feasible only if both the FoI and its edges are preserved from smearing, and sources out of the FoI are suppressed. The latter is summarised mathematically as follows:}
\begin{equation}
\mathrm{SNR} \approx \frac{S_\mathrm{smear}}{C_\mathrm{noise}+T_\mathrm{noise}}, \label{eq:snr}
\end{equation}
where $S_\mathrm{smear}$ is the signal of a source in the FoI \Refcom{(including the edges)} that \ATMR{must be preserved from smearing}, $C_\mathrm{noise}$ the signal from sources outside the FoI (i.e. confusion noise) that must be
subtracted from the FoI \ATMR{or must strongly decorrelate} and $T_\mathrm{noise}$ the thermal noise which is usually Gaussian and intrinsic to the visibility measurement process. Ideally, one wants an increase in $S_\mathrm{smear}$ and a decrease in $C_\mathrm{noise}$ within the FoI, so that the overall SNR increases even if there is an increase in $T_\mathrm{noise}$ in the case of weighted averaging.
\Refcom{If the $uv$-coverage of an interferometer is condensed at the centre then most of the data come from the shorter baselines. An example of this type of centrally condensed $uv$-coverage, along with the $uv$-coverage histogram, is illustrated in
Figure~\ref{fig:meerkat}.} The histogram shows the $uv$-coverage data density as a function of effective baseline length.
\EDIT{I}f more samples \EDIT{should be} \Refcom{averaged at the centre} and fewer in the outer region,
decorrelation \EDIT{can be} avoided
on the longer baselines and data compression \EDIT{would be} carried out on the shorter baselines. This method\EDIT{,}
often referred to as baseline-dependent averaging (BDA),
was first proposed by \citet{cotton1989special,cotton1999special} as an approach for \EDIT{dealing with} wide-field imaging with little to no bandwidth and time averaging effects.
The idea of BDA is thus not novel, and has also been the subject of discussion in many radio interferometry conferences, particularly \ATMR{regarding the ability to use BDA for the SKA data processors.}
\Refnew{\citet{atemkeng2016} discussed a baseline-dependent window functions (BDWFs) scheme that has the effect of shaping the FoI. Several other techniques to shape the FoI using window functions have been proposed~\citep{lonsdale2004efficient,parsons2009calibration,parsons2016optimized}.
BDWFs are weighted moving averages of the irregularly sampled visibilities in the $uv$-space. The mathematical derivations for the
BDWFs show that the dirty image is the apparent sky multiplied by the inverse Fourier transform of each of the BDWFs.}
\ATMX{\Refnew{This work removes the restriction of irregular sampling in $uv$-space adopted in \citet{atemkeng2016} and considers regular sampling and averaging in the $uv$-space as a BDA formalism.
To shape the FoI, the BDA formalism is applied to BDWFs, i.e. applying weighted moving averaging to the regularly sampled visibilities in the $uv$-space. Throughout this paper, we will refer to BDA applied to BDWFs as BDAWFs.}
Since an unweighted average represents theoretically maximum sensitivity at the centre of the FoI, a weighted averaging will result in a loss in nominal sensitivity. However, to alleviate the decrease in sensitivity,
BDWFs are further extended by \citet{atemkeng2016}, showing that the use of overlapping BDWFs has the benefit of suppressing the far FoI sources compared to simple averaging, and could even recover some of the lost sensitivity while decreasing the overall far-field confusion noise. Overlapping BDWFs are sets of polyphase finite impulse response filters with order depending on the overlapping bins in the $uv$-space. The overlapping bins compensate for the missing bins windowed with the BDWF.
We refer the reader to \citet{atemkeng2016} for an intensive discussion on BDWFs and properties of overlapping BDWFs. \Refnew{The
mathematical framework derived from the BDAWFs
formalism shows that the dirty image is the apparent sky
multiplied by the inverse Fourier transform of a single BDWF.}}
\begin{figure*}
\centering
\includegraphics[width=.4\textwidth]{uvcov.png}%
\includegraphics[width=.4\textwidth]{uvcov-histo.pdf}%
\caption{MeerKAT $uv$-coverage at 1.4 GHz and histogram depicting the data density as a function of baseline length, for a \Refcom{4 hr} observation and 8 MHz bandwidth, showing clearly that the data \EDIT{are}
condensed at the centre. \Refcom{Most of the data at the centre come from the short baselines.}}\label{fig:meerkat}
\end{figure*}
\section{Mathematical background}
\label{sect:simpleavg}
We use the radio interferometry measurement equation (RIME) formalism, which
provides a model of a generic interferometer.
For details on the RIME formalism
see~\citet{hamaker1996understanding,smirnov2011revisiting2,smirnov2011revisiting}. \Refcom{In a single mathematical equation, the RIME
describes all the direction-dependent and direction-independent effects that may occur when an interferometric measurement is in process.}
\Refcom{The 2-D Fourier transform full sky RIME\EDIT{,} following~\citet{smirnov2011revisiting2,smirnov2011revisiting}\EDIT{,} is given by:}
\begin{equation}
\mathcal{V}_{pq} = \bmath{\mathrm{G}}_{pt\nu} \Big(\iint\limits_{lm}\bmath{\mathrm{D}}_{pt\nu} \mathcal{I}\bmath{\mathrm{D}}_{qt\nu}^\mathrm{H}\mathrm{e}^{-\mathrm{i}\phi}\textup{\textrm{d}} l \textup{\textrm{d}} m\Big)\bmath{\mathrm{G}}^\mathrm{H}_{qt\nu},\label{eq:rime:smirnov}
\end{equation}
where the superscript $(.)^\mathrm{H}$ denotes a Hermitian transpose operator. Here a single visibility value is denoted by $V_{pq}$ or in \Refnew{functional form} by $\mathcal{V}_{pq}\equiv\mathcal{V}(\bmath{u}_{pq})$
and the sky distribution function by $\mathcal{I}\equiv\mathcal{I}(l,m)$.
The formalism groups the product of \EDIT{direction-independent} Jones matrices corresponding to antenna $p$ into the matrix $\bmath{\mathrm{G}}_{pt\nu}$, and all its \EDIT{direction-dependent} effects
into the matrix $\bmath{\mathrm{D}}_{pt\nu}$.
We note that the PB pattern of each antenna, which defines its directional sensitivity and FoV, is part of the direction-dependent effects.
The term $\bmath{\mathrm{D}}_{pt\nu} \mathcal{I}\bmath{\mathrm{D}}_{qt\nu}^\mathrm{H}$ is the apparent
sky seen by baseline $pq$, and varies in time and frequency. \Refcom{For simplicity, throughout this work we assume
that both the sky and the direction-dependent gain are invariant; therefore each of the baselines will see the same apparent sky throughout the measurement process.}
Rotation of the Earth causes the baseline phase to vary in time, and for multi-frequency observations
the phase is constantly changing with time and frequency.
\Refcom{In practical situations an interferometer can only
measure an average visibility over fixed time-frequency intervals, as given by the \textit{sampling bin}:}
\begin{equation}
\mathsf{B}^{[\Delta t\Delta\nu]}_{\RefEq{kr}} = \bigg [ t_k-\frac{\Delta t}{2},t_k+\frac{\Delta t}{2} \bigg ]
\times
\bigg [ \nu_r-\frac{\Delta\nu}{2},\nu_r+\frac{\Delta\nu}{2} \bigg ], \label{eq:samplingbinnormal}
\end{equation}
where $\Delta t$ centered at $t_k$ and $\Delta \nu$ centred at $\nu_r$ are the sampling intervals in time and frequency respectively. \Refcom{The sampling bin has two dimensions: the width and height measured in time and frequency respectively.}
Let us denote by $\mathcal{V}_{pq}(\bmath{u}(t,\nu))\equiv\mathcal{V}(\bmath{u}_{pq}(t,\nu))$ the ideal visibility distribution. After averaging in the correlator, the measured visibility becomes:
\begin{equation}
\textcolor{black}{\widetilde{V}}}%^\mathrm{M}_{pq\RefEq{kr}} = \frac{1}{\Delta t \Delta \nu}
\iint\limits_{\mathsf{B}^{[\Delta t\Delta\nu]}_{\RefEq{kr}}}
\mathcal{V}(\bmath{u}_{pq}(t,\nu))\textup{\textrm{d}} \nu \textup{\textrm{d}} t.
\label{eq2:conti}
\end{equation}
In the time-frequency space the bins are sampled equally on each baseline (assuming baseline-independent sampling), while in contrast in $uv$-space, they are not.
Ideally, all spatial frequencies up to the resolution of the longest baseline are
sampled in a 2-D continuous sky image. This requires Nyquist sampling of
the time-frequency space up to the highest spatial frequencies, corresponding to the longest baselines.
This is rarely possible because of the
unsampled $uv$-space ``holes'' during an observation, the lower spatial frequency cut-off due to
physical element limitations and sampling bias in the low spatial frequency
region of the \Refnew{$uv$-space} compared to higher spatial frequencies due to
baseline distribution. For a fixed time-frequency length, a long baseline will cover a
longer track in $uv$-space compared to a shorter baseline, which results in
the \Refcom{lower Fourier modes} being oversampled compared to \Refcom{higher Fourier modes}. On
shorter baselines, the sampling bin \Refcom{width and height are smaller} compared to longer baselines; assuming baseline-dependent mapping. However, \Refnew{this work considers} two major sub-domains. (1) The correlator domain or the $t\nu$-space where the baselines are sampled equally onto a rectangular grid. (2) The visibility domain or $uv$-space where the baselines are sampled differently and the overall data are mapped onto \Refcom{elliptical arcs/ribbons}.
Let us denote by $\mathsf{B}^{[uv]}_{pq\RefEq{kr}}$ the matched $uv$-space sampling bin, which is baseline-dependent. The relation in Eq.~(\ref{eq2:conti}) can be rewritten as:
\begin{equation}
\textcolor{black}{\widetilde{V}}}%^\mathrm{M}_{pq\RefEq{kr}} = [ \mathcal{V}_{pq} \circ \Pi^{[t\nu]} ](t_k,\nu_r) \text{, in } t\nu\text{-space or}\label{eq:avscontnuconvtv}
\end{equation}
\begin{equation}
\textcolor{black}{\widetilde{V}}}%^\mathrm{M}_{pq\RefEq{kr}} = [ \mathcal{V}_{pq} \circ \Pi^{[uv]}_{pq\RefEq{kr}} ](\bmath{u}_{pq}(t_k,\nu_r)) \text{, in } uv\text{-space.}
\label{eq:avscontnuconvtvuv}
\end{equation}
\Refcom{Here $\circ$ stands for the convolution operator, and} $\Pi^{[t\nu]}$, $\Pi^{[uv]}_{pq\RefEq{kr}}$ are normalised boxcar window functions
defined in $t\nu$-space and $uv$-space respectively. The detailed derivations for these equations are developed in ~\citet{atemkeng2016}.
Eqs.~(\ref{eq:avscontnuconvtv}) and~(\ref{eq:avscontnuconvtvuv}) are of importance because they clearly show that
visibility averaging is equivalent to a convolution of the true visibilities with the boxcar window function, evaluated at the centre of the sampling bin.
We emphasise that the discussion above provides an alternative way to look at decorrelation/smearing. With averaging in effect, a useful mathematical model
may be of the following form:
\begin{equation}
\textcolor{black}{\widetilde{\mathcal{V}}}}%^\mathrm{M}_{pq\RefEq{kr}} = \delta_{pq\RefEq{kr}} ( \mathcal{V}\circ\Pi^{[uv]}_{pq\RefEq{kr}} ),\label{eq:avscontnuconvtvuvdelta}
\end{equation}
where $\delta_{pq\RefEq{kr}}$ \Refcom{denotes the Dirac delta function, i.e. a single \lq\lq nail'' sampling function.}
\subsection{Imaging}
To derive the effect of averaging on the image, we can reformulate Eq.~(\ref{eq:avscontnuconvtvuvdelta}) as:
\begin{equation}
\textcolor{black}{\widetilde{\mathcal{V}}}}%^\mathrm{M}_{pq\RefEq{kr}} = \mathcal{F}\big\{\mathcal{P}_{pq\RefEq{kr}}\big\} \bigg( \mathcal{F}\big\{\mathcal{I}\big\} \circ \Pi^{[uv]}_{pq\RefEq{kr}} \bigg),\label{eq:avscontnuconvtvuvdeltaim}
\end{equation}
where the apparent sky $\mathcal{I}$ is the inverse Fourier transform of the ideal visibility measurement $\mathcal{I}=\mathcal{F}^{-1}\big\{\mathcal{V}\big\}$ and the point spread function $\mathcal{P}_{pq\RefEq{kr}}$ is the inverse Fourier transform of the sampling function
for the baseline $pq$ at the discrete time-frequency bin $kr$, i.e. $\mathcal{P}_{pq\RefEq{kr}}=\mathcal{F}^{-1}\big\{\delta_{pq\RefEq{kr}}\big\}$. Here $\mathcal{F}^{}$ and $\mathcal{F}^{-1}$ represent the Fourier transform and its inverse respectively.
Inverting the Fourier transform of the sum over all baselines of
Eq.~(\ref{eq:avscontnuconvtvuvdeltaim}) and sampling at each $\RefEq{kr}$ results in an estimate
of the sky image i.e. the ``dirty image'':
\begin{equation}
\label{eq:imaging}
\mathcal{I}^\mathrm{D} = \mathcal{F}^{-1}\bigg\{ \sum_{pq\RefEq{kr}}W_{pq\RefEq{kr}} \textcolor{black}{\widetilde{\mathcal{V}}}}%^\mathrm{M}_{pq\RefEq{kr}}\bigg\} ,
\end{equation}
where $W_{pq\RefEq{kr}}$ is the weight at the sampled point $pq\RefEq{kr}$; over the full extent of the $uv$-space,
$\mathcal{W}=\sum_{pq\RefEq{kr}}W_{pq\RefEq{kr}}\delta_{pq\RefEq{kr}}$ in functional form is the weighted-sampling function.
Substituting Eq.~(\ref{eq:avscontnuconvtvuvdeltaim}) into Eq.~(\ref{eq:imaging}) and applying the convolution theorem,
we now have:
\begin{equation}
\mathcal{I}^\mathrm{D} = \sum_{pq\RefEq{kr}} W_{pq\RefEq{kr}} \mathcal{P}_{pq\RefEq{kr}} \circ (\mathcal{I}\cdot\mathcal{T}_{pq\RefEq{kr}}),\label{eq:dirtybdboxcar}
\end{equation}
with the apparent sky $\mathcal{I}$ now tapered by the baseline-dependent \emph{window response function} $\mathcal{T}_{pq\RefEq{kr}}$, the latter being the inverse Fourier transform of the baseline-dependent boxcar window:
\begin{equation}
\mathcal{T}_{pq\RefEq{kr}} = \mathcal{F}^{-1}\Big\{ \Pi^{[uv]}_{pq\RefEq{kr}} \Big\}.
\end{equation}
Interestingly, Eq.~(\ref{eq:dirtybdboxcar}) explicitly enforces conditions on the dirty image, which depends on all the individual image-plane response (IPR) tapers, $\mathcal{T}_{pq\RefEq{kr}}$.
It should be noted that these IPR tapers are not completely arbitrary, in the sense that they depend on each baseline length and orientation.
\Refcom{Longer baselines have narrower IPR and are thus prone more to smearing than shorter baselines.}
In synthesis imaging, we assume that the sky is a constant signal (transient events are ignored), but a time-variable
signal is measured because the projected baselines change in orientation and length as the Earth rotates. Also, the frequency coverage and array layout are used to fill in the synthesised aperture, making the signal depend on frequency and array layout.
The boxcar window functions
are linear but depend on baseline length, which varies with time and frequency: this is why in the entire $uv$-space, simple averaging is \textbf{not} a \textit{true-convolution} \Refcom{as demonstrated in \citet{atemkeng2016}}. We refer to this as a ``\textit{pseudo-convolution}''. However, if one considers only a single East-West baseline, then simple averaging becomes a \textit{true-convolution} because the length of the boxcar window does not change along the $uv$-track. Simple averaging still remains a \textit{pseudo-convolution} for a baseline \Refcom{with a non-zero South-North component}.
When considering the entire $uv$-space then it is not sufficient to simply analyse the boxcar \Refnew{window functions} IPRs.
As opposed to true-convolution the pseudo-convolution is a linear time-frequency variant system, which leads to complexity in the analysis of the signal conditioning.
\textcolor{black}{In practical situations, the boxcar window functions are applied as (unweighted)
moving averages of the measured visibility samples, rather than of the ideal visibilities. Consider that $V^\mathrm{S}_{pqij}$ is the measured visibility sample at $pqij$ with high temporal and spectral resolution. In this sense, we assume that $V^\mathrm{S}_{pqij}\equiv V_{pqij}$ if the noise term across all the visibility samples is ignored. Averaging becomes a discrete convolution:}
\begin{equation}
\textcolor{black}{\widetilde{V}}}%^\mathrm{M}_{pq\RefEq{kr}}= \frac{\displaystyle\sum\limits_{{i,j}\in \mathsf{B}_{\RefEq{kr}}} V^\mathrm{S}_{pqij} \Pi^{[uv]}_{pq\RefEq{kr}}(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}})}
{\displaystyle\sum\limits_{{i,j}\in \mathsf{B}_{\RefEq{kr}}} \Pi^{[uv]}_{pq\RefEq{kr}}(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}})},\label{eq:avscontnuconvtvuvdelta11}
\end{equation}
where the set $\mathsf{B}_{\RefEq{kr}}$ corresponds to the bin indices of the \Refcom{sampling bin}, \Refcom{i.e. $\mathsf{B}_{\RefEq{kr}} = \{ ij:~t_i\nu_j \in \mathsf{B}^{[\Delta t\Delta\nu]}_{\RefEq{kr}} \}$}.
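A minimal sketch of this discrete, window-weighted average for a single sampling bin on one baseline is given below; the toy $uv$ track, the placeholder visibilities and the sinc-shaped window standing in for a BDWF are all assumed for illustration only.
\begin{verbatim}
# Window-weighted average of the samples in one sampling bin (toy data).
import numpy as np

def windowed_average(vis, uvw, uvw_centre, window):
    """Weighted mean of the samples, i.e. the convolution at the bin centre."""
    w = window(np.linalg.norm(uvw - uvw_centre, axis=1))
    return np.sum(w*vis)/np.sum(w)

rng = np.random.default_rng(0)
n = 15                                               # high-res samples in the bin
uvw = np.cumsum(rng.normal(2.0, 0.1, (n, 2)), axis=0)   # toy uv track
vis = np.exp(-1j*0.01*uvw[:, 0])                     # toy visibility samples
centre = uvw[n//2]
sinc_window = lambda r: np.sinc(r/20.0)              # assumed window resolution
print(windowed_average(vis, uvw, centre, sinc_window))
\end{verbatim}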
\Refcom{This work investigates an alternative approach for visibility sampling, which emphasises that in the entire $uv$-space all the baselines should be regularly sampled and a window function should then be applied to shape the FoI. If the window function is a boxcar window or a BDWF, then the regular sampling results in an invariant window length in $uv$-space, which is now a \textit{true-convolution} in the entire $uv$-space, as opposed to the work discussed in \citet{atemkeng2016}. A true-convolution in the entire $uv$-space means that, in the $t\nu$-space, the time-frequency sampling intervals now vary across baselines: longer sampling intervals on short baselines and shorter on long baselines. Using this novel approach, the sampling bin defined in Eq.~(\ref{eq:samplingbinnormal}) becomes baseline-dependent: the width and height of the sampling bin vary as a function of East-West baseline length. Also, with the novel approach the BDWFs in the $t\nu$-space are sampled equally but change in length and resolution across baselines. Each of these properties is shown in Figure~\ref{fig:bda-boxcar-uvleng-directtion}.
Interest in such techniques comes from the fact that:}
\begin{itemize}
\item There are some longer baselines where the data should be averaged more than on some shorter baselines. This can be seen in the histogram of Figure~\ref{fig:meerkat}, where the data are more condensed for baseline lengths between $\sim$3.5 km and $\sim$4.2 km than for some shorter baselines. These longer baselines have smaller East-West components and are less prone to decorrelation/smearing, and so their data should be averaged more.
\item \ATMR{The \Refcom{sampling bin} for a single baseline \Refcom{with a non-zero East-West and South-North components} should vary along the baseline $uv$-track depending on the baseline direction. \Refnew{This variation of the sampling bin should be taken into account for regular sampling in the $uv$-space.}}
\item The IPR taper for all the baselines may result in the same degree of decorrelation/smearing if the \Refcom{visibilities are regularly sampled in the $uv$-space.}
\item One may adapt signal processing methods that assume a \textit{true-convolution} to find the optimal matched IPR. Finding an optimal matched IPR is beyond the scope of this paper, and part of an ongoing study.
\end{itemize}
\section{\Refcom{Baseline-dependent sampling and averaging: BDA}}
\label{sec:bda}
\subsection{ Effect on the image}
\label{sec:bdaeffectImage}
\begin{figure*}
\includegraphics[width=.25\textwidth]{uv-bd-freq_resp_boxgrey.pdf}%
\includegraphics[width=.25\textwidth]{longest-baseline-astron.pdf}%
\includegraphics[width=.25\textwidth]{medium-baseline-astron.pdf}%
\includegraphics[width=.25\textwidth]{shortest-baseline-astron.pdf}\\
\includegraphics[width=.25\textwidth]{time-freq-bd-freq_resp_boxgrey.pdf}%
\includegraphics[width=.25\textwidth]{longest-time-baseline-astron.pdf}%
\includegraphics[width=.25\textwidth]{medium-time-baseline-astron.pdf}%
\includegraphics[width=.25\textwidth]{shortest-time-baseline-astron.pdf}
\caption{
An East-West interferometer array: BDAWF defined in $\Refcom{uv}$-space (top) and in $t\nu$-space (bottom). In $uv$-space, the \Refcom{sampling bin}, \Refcom{the window resolution and length} remain constant across all the baselines, while the sampling rate varies with respect to the baseline length with shorter baselines oversampled and longer baselines downsampled.
In $t\nu$-space, all the baselines are sampled equally but the \Refcom{sampling bin, window resolution and length are now varying.}}
\label{fig:bda-boxcar-uvleng-directtion}
\end{figure*}
An interferometer measures the average visibility over a rectangular time-frequency bin given by $\Delta t$ and $\Delta \nu$\EDIT{:} this is the sampling bin
defined in Eq.~(\ref{eq:samplingbinnormal}). In $t\nu$-space, for a fixed length of \Refcom{time-frequency, the corresponding sampling bins swept by different baselines in $uv$-space are not equal: shorter East-West baselines sweep smaller sampling bins and vice versa.
Similarly, for a fixed sampling bin across all baselines in $uv$-space \Refnew{(baseline-independent sampling bin in $uv$-space)}, the corresponding time-frequency intervals in the $t\nu$-space vary with East-West baseline length: shorter time-frequency intervals on long East-West baselines and longer time-frequency intervals on short East-West baselines.} \Refnew{Let us consider a baseline-independent sampling bin in $uv$-space} and let us denote the varying \Refnew{time and frequency} intervals by $\Delta_{\bmath{u}_{pq}} t$ and $\Delta_{\bmath{u}_{pq}} \nu$ \Refnew{in $t\nu$-space respectively}.
The \Refcom{sampling bin} becomes \Refnew{baseline-dependent in $t\nu$-space} (indicated here by the extra index $\bmath{u}_{pq}$, which is not found in Eq.~(\ref{eq:samplingbinnormal})):
\begin{alignat}{2}
\mathsf{B}^{[\Delta_{\uu_{pq}} t, \Delta_{\uu_{pq}} \nu]}_{\RefEq{kr}} =& \bigg [ t_k-\frac{\Delta_{\bmath{u}_{pq}} t}{2},t_k+\frac{\Delta_{\bmath{u}_{pq}} t}{2} \bigg ]\nonumber\\
&\times
\bigg [ \nu_r-\frac{\Delta_{\bmath{u}_{pq}}\nu}{2},\nu_r+\frac{\Delta_{\bmath{u}_{pq}}\nu}{2} \bigg ].
\end{alignat}
Figure~\ref{fig:bda-boxcar-uvleng-directtion} shows a typical \Refnew{baseline-independent sampling bin in $uv$-space} (top-left) and baseline-dependent \Refcom{sampling bin} \Refnew{in $t\nu$-space (bottom-left).}
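To make this idea concrete, the following sketch computes, under assumed numbers, the longest averaging time each East-West baseline can tolerate if every baseline is to sweep at most the same $uv$-bin width; the bin width, wavelength and baseline lengths are illustrative only, with the Earth rotation rate used as an upper bound on the $uv$ drift.
\begin{verbatim}
# Baseline-dependent averaging times for a fixed uv-bin width (assumed values).
import numpy as np

OMEGA_E = 7.27e-5                    # Earth rotation rate [rad/s]

def averaging_time(bl_east_west_m, wavelength_m, bin_width_wavelengths):
    """Upper bound on dt so that the uv track moves by <= bin_width."""
    du_dt = bl_east_west_m*OMEGA_E/wavelength_m    # max drift [wavelengths/s]
    return bin_width_wavelengths/du_dt

for b in (100.0, 1000.0, 8000.0):    # East-West baseline lengths [m]
    print(f"{b:7.0f} m -> dt <= {averaging_time(b, 0.21, 10.0):7.1f} s")
\end{verbatim}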
\Refnew{If we denote in \Refnew{functional form} by
$\mathcal{D}$ the area of the baseline-independent sampling bin} in $uv$-space, then we have:
\begin{eqnarray*}
\mathcal{D}: \mathsf{B}^{[\Delta_{\uu_{pq}} t, \Delta_{\uu_{pq}} \nu]} &\rightarrow& \mathbf{\mathcal{R}}\\
t,\nu &\mapsto& d_{\bmath{u}_{pq\RefEq{kr}}},
\end{eqnarray*}
where $\mathbf{\mathcal{R}}$ is the set of real numbers. \Refcom{One can decompose $d_{\bmath{u}_{pq\RefEq{kr}}}$ as the product of the width $d_{\bmath{u}_{pqk}}$ and height $d_{\bmath{u}_{\RefEq{pqr}}}$ of the sampling bin}:
\begin{alignat}{2}
d_{\bmath{u}_{pq\RefEq{kr}}}&=d_{\bmath{u}_{pqk}}\times d_{\bmath{u}_{\RefEq{pqr}}}.
\end{alignat}
For
$(t_i,\nu_j)\neq (t_k,\nu_r)$, \Refcom{ $d_{\bmath{u}_{pqk}}$ and $d_{\bmath{u}_{\RefEq{pqr}}}$} are
given by:
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{D}\mathrm{im}}{\mathcal{D}\mathrm{im}}
\begin{equation}
d_{\bmath{u}_{pqk}}=\sum_{t_i\nu_j}\norm{\bmath{u}_{pq}(t_i-t_k, \nu)},
\end{equation}
\begin{equation}
d_{\bmath{u}_{\RefEq{pqr}}}=\sum_{t_i\nu_j}\norm{\bmath{u}_{pq}(t, \nu_j-\nu_r)}\EDIT{,}
\end{equation}
\Refcom{where $t_i\nu_j\in\mathsf{B}^{[\Delta_{\uu_{pq}} t, \Delta_{\uu_{pq}} \nu]}_{\RefEq{kr}}$.}
\Refcom{If the visibilities are regularly sampled along all the baselines in the $uv$-space, then for all East-West baselines $\alpha \beta \neq pq$ with $\norm{\bmath{u}_{\alpha \beta}}\neq \norm{\bmath{u}_{pq}}$
the following constraints must be satisfied:}
\begin{alignat}{2}
d_{\bmath{u}_{\alpha \beta k}}&=d_{\bmath{u}_{pq k}}~ \mathrm{and}~ d_{\bmath{u}_{ \RefEq{\alpha\beta r}}}&=d_{\bmath{u}_{\RefEq{pqr}}}.
\end{alignat}
Let us \EDIT{see} what Eq.~(\ref{eq:dirtybdboxcar}) becomes in the case \Refcom{of regular sampling along all the baselines in $uv$-space.}
The $uv$-space boxcar window\EDIT{,} $\Pi^{[uv]}_{pq\RefEq{kr}}$ is now approximately equal in length across all East-West baselines, i.e. for all East-West baselines $\alpha \beta \neq pq$:
\begin{equation}
\Pi^{[uv]}_{\alpha\beta \RefEq{kr}} \approx \Pi^{[uv]}_{pq\RefEq{kr}}.\label{eq:bdabox}
\end{equation}
\Refcom{Does this mean that $\mathcal{T}_{\alpha\beta \RefEq{kr}} \approx \mathcal{T}_{pq\RefEq{kr}}$?} The latter is always true in theory but not in practice. \Refcom{Note that} while the length of the boxcar window is equal for all baselines in $uv$-space, the boxcar window is sampled differently (the top panel of Figure~\ref{fig:bda-boxcar-uvleng-directtion} illustrates this in the case where the boxcar window is replaced with a sinc-like window).
The boxcar window is \Refnew{downsampled} on the longer East-West baselines and \Refnew{oversampled} on the shorter East-West baselines, which then results in $\mathcal{T}_{\alpha\beta \RefEq{kr}} \neq \mathcal{T}_{pq\RefEq{kr}}$.
However,
if the pre-averaged visibilities are sampled at significantly higher temporal and
spectral resolution (at the cost of computation) then one can
assume that all these boxcar windows at different baselines are sampled equally.
Considering this assumption, we can write:
\begin{equation}
\mathcal{T}_{pq\RefEq{kr}}\approx \mathcal{T}_{\alpha\beta \RefEq{kr}}\EDIT{.}\label{eq:bda2taper}
\end{equation}
Eq.~(\ref{eq:dirtybdboxcar}) becomes:
\begin{equation}
\mathcal{I}^\mathrm{D} \approx \sum_{pq\RefEq{kr}} W_{pq\RefEq{kr}} \mathcal{P}_{pq\RefEq{kr}} \circ \mathcal{I}\cdot\mathcal{T}_{}, \label{eq:bda2}
\end{equation}
where $\mathcal{T}_{}= \mathcal{T}_{pq\RefEq{kr}}\approx \mathcal{T}_{\alpha\beta \RefEq{kr}}$ is the smearing response, which is
now the effect of a single taper on the image. One can summarise Eq.~\Refcom{(}\ref{eq:bda2}\Refcom{)} as:
\begin{equation}
\mathcal{I}^\mathrm{D} \approx \mathcal{I}^\mathrm{A}\cdot\mathcal{T}_{}, \label{eq:bda2sumarrise}
\end{equation}
where $\mathcal{I}^\mathrm{A}$ is the apparent image corrupted by all the effects that affect the signal from the source to the measurement and noise.
\Refnew{The result in Eq.~(\ref{eq:bda2sumarrise}) is one of the main mathematical derivations of this work, which shows that with BDA or BDAWFs in effect, the dirty image is the apparent sky multiplied by a single taper.}
\subsection{Implementation with current \Refcom{storage schemes}}
\label{BDA:impl}
In practice, most existing software implementations assume that the correlation matrix is a regular grid in time and frequency.
Averaging entries in this correlation matrix
over long times for short baselines and short times for long baselines results in an irregular grid.
A better idea is to map this irregular grid onto a correlation matrix (i.e. regular grid) by either
flagging out the supplementary points, or duplicating the averaged values onto these supplementary points.
{\textit{Flagging:}}
Most of the radio interferometric data reduction software has a flagging capability, \EDIT{through which} bad data can be flagged and ignored. For BDA, we exploit this capability to force interferometric data reduction software to ignore some entries of the regularly gridded plane (e.g. the correlation matrix).
In the flagging procedure, one has to make sure that the \Refcom{sampling bin} contains an
odd number of data points in time as well as in frequency.
This condition must be verified on all baselines otherwise the average
baseline vector may not coincide with the mid-time and \Refnew{mid-}frequency vector and \EDIT{this} could lead to a phase shift.
If this condition is satisfied, the average value is assigned to the midpoint of the \Refcom{sampling bin}. The other entries of the \Refcom{sampling} bin are flagged.
This flag will cause missing samples to be ignored during post-processing.
{\textit{Duplication:}}
This method consists of duplicating the average value at all entries of the \Refcom{sampling bin in $t\nu$-space}. While this process is easier to implement than the flagging method, it may not serve the purpose of data compression and/or quick computation for post-processing. It is easier to implement in the sense that one does not need to verify that the
number of visibilit\EDIT{y} points in the \Refcom{sampling bin} is an odd number. Furthermore, the data size of the resulting
data set remains the same as that of the pre-averaged data set, \EDIT{since} all values are duplicated along the pre-averaged data set.
This method may be used in practice for cases where one does not want to estimate the averaged $uv$-coordinates from
the pre-averaged data set.
{\textit{Semi-duplication and flagging:}}
This method consists of combining the flagging and the duplication methods in order to benefit from their full advantages.
\EDIT{In} so doing, we seek both data compression and quick computation\EDIT{,} while making the implementation easier to handle.
The idea is to duplicate the averaged value along the two central entries of the \Refcom{sampling bin} if the total number of entries within this \Refcom{sampling bin} is even\EDIT{,} otherwise, the averaged value is assigned only \EDIT{to} the central \Refnew{point} of the \Refcom{sampling}
bin. Any other entry is then flagged.
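A minimal sketch of how the first two schemes map one averaged bin back onto the regular grid is given below; the array layout and sample values are placeholders, not an implementation for any particular measurement set format.
\begin{verbatim}
# Mapping one averaged sampling bin onto a regular grid: flag vs. duplicate.
import numpy as np

def store_bin(values, scheme="flag"):
    avg = values.mean()
    data = np.full_like(values, avg)          # averaged value in every slot
    flags = np.zeros(values.shape, dtype=bool)
    if scheme == "flag":                      # assumes an odd number of samples
        flags[:] = True
        flags[len(values)//2] = False         # only the mid-point stays unflagged
    return data, flags

vals = np.arange(5, dtype=float)              # toy high-res samples in one bin
print(store_bin(vals, "flag"))
print(store_bin(vals, "duplicate"))
\end{verbatim}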
\subsection{Compression and computation}
The compression factor is defined as the ratio between the sizes of the pre-averaged (high-res) data and the averaged (low-res) data. In terms of the number of visibility samples, the high-res data size
is:
\begin{alignat}{2}
N_\mathrm{vis}^\mathrm{hires} &= N_{\mathrm{bl}}\times N_\mathrm{sub} \times N_\mathrm{pol} \times N_{t}^\mathrm{hires} \times N_{\nu}^\mathrm{hires}
,\label{eq:memorychap4hires}
\end{alignat}
where $N_{\mathrm{bl}}$ is the number of baselines, $N_\mathrm{sub}$ the number of
sub-bands, $N_\mathrm{pol}$ the number of polarisations, and $N_{t}^\mathrm{hires}$ and $N_{\nu}^\mathrm{hires}$ the numbers
of timeslots and channels of the high-res data respectively.
\Refcom{For a given baseline $pq$, let $n_{pq\RefEq{kr}}=n_{pqk}\times n_{\RefEq{pqr}}$ be the number of samples in the \Refcom{sampling} bin, with $n_{pqk}$ and $n_{\RefEq{pqr}}$ the numbers of time and frequency samples on that baseline respectively.}
\Refcom{If one were to adopt a new storage scheme for BDA where there is no flagging or duplicated visibility samples, the data size \textcolor{black}{in terms of number of visibility samples} will be}:
\begin{alignat}{2}
N_\mathrm{vis}^\mathrm{\Refcom{\scalebox{0.5}{BDA}}} &=\sum_{pq\RefEq{kr}} N_\mathrm{sub} \times N_\mathrm{pol} \times \frac{N_{t}^\mathrm{hires}\times N_{\nu}^\mathrm{hires}}{n_{pqk}\times n_{\RefEq{pqr}}}.\label{eq:memorychap4bd}
\end{alignat}
The compression factor after simplifications is then:
\begin{alignat}{2}
\mathrm{CF}_\mathrm{} &=\frac{N_\mathrm{vis}^\mathrm{hires}}{N_\mathrm{vis}^\mathrm{\Refcom{\scalebox{0.5}{BDA}}}}
&=N_\mathrm{{bl}}\times \Bigg(\sum_{pq\RefEq{kr}} \frac{1}{n_{pqk}\times n_{\RefEq{pqr}}}\Bigg)^{-1}\label{compresionbdafactorx}.
\end{alignat}
In the case of simple averaging $n_{pqk}=n_{t}$, $n_{\RefEq{pqr}}=n_{\nu}$
with $n_t$ and $n_{\nu}$ the number of time and frequency samples averaged on each of the baselines. After simplifying Eq.~(\ref{compresionbdafactorx}) we have:
\begin{alignat}{2}
\mathrm{CF}_\mathrm{} &=n_t\times n_{\nu}.
\end{alignat}
In the following sections, we refer to \EDIT{the} compression factor as \CF{$\mathrm{CF}_t$}{$\mathrm{CF}_{\nu}$}, where $\mathrm{CF}_t$ and $\mathrm{CF}_{\nu}$ are the compression factors in time
and frequency for the interferometer array respectively. The notations \CF{$\mathrm{CF}_t$}{$1$} and \CF{$1$}{$\mathrm{CF}_{\nu}$} imply that the data \EDIT{are}
compressed only in time by a factor of $\mathrm{CF}_{t}$ and only in frequency
by a factor of $\mathrm{CF}_{\nu}$ respectively.
\textcolor{black}{\Refcom{In the BDA formalism,} the shorter baselines are compressed by much more than $\mathrm{CF}$ and the longer baselines
by much less\EDIT{, while the overall interferometer compression factor still corresponds} to $\mathrm{CF}$; with simple averaging, the compression factor remains constant for all the baselines.}
\ATMR{The computational cost $C^\mathrm{cost}$ during the compression of the overall data for an individual interferometer remains equivalent for both BDA and simple averaging if their resulting compressed data are of the same size. The compression cost will scale as:}
\begin{alignat}{2}
C^\mathrm{cost}&\sim \mathcal{O} (N_\mathrm{vis}^\mathrm{\Refcom{\scalebox{0.5}{BDA}}}\mathrm{CF})\\
&\sim\mathcal{O} (N_\mathrm{bl}N_v \mathrm{CF})\\
&\sim\mathcal{O} (N_\mathrm{bl}N_v n_tn_{\nu}),
\end{alignat}
where $N_v$ is the number of visibilities and $ \mathcal{O} (N_v n_t n_{\nu})$ the compression cost on each individual baseline after simple averaging. Note, however, that on each individual baseline the cost $C^{\mathrm{cost}}_{pq}$ varies for BDA and scales as:
\begin{alignat}{2}
C_{pq}^\mathrm{cost}&\sim \mathcal{O} (N_{pqv} n_{pqk} n_{\RefEq{\RefEq{pqr}}}),
\end{alignat}
with $N_{pqv}$ the baseline-dependent number of resulting visibilities on $pq$ after BDA.
For shorter baselines $C_{pq}^\mathrm{cost}\ll \mathcal{O} (N_v n_t n_{\nu})$, while on the longer baselines $C_{pq}^\mathrm{cost}\gg \mathcal{O} (N_v n_t n_{\nu})$; the overall computational cost, however, remains:
\begin{alignat}{2}
C^\mathrm{cost}&\sim\mathcal{O}\Big(\sum_{pq\RefEq{kr}} N_{pqv} n_{pqk} n_{\RefEq{pqr}}\Big)\\
&\sim \mathcal{O} \big(N_\mathrm{bl}N_v n_t n_{\nu}\big).
\end{alignat}
\subsection{Noise and noise penalty}
\label{sect:noisepenalty}
Let us look at what the estimated theoretical thermal noise induced by BDA becomes in each of the averaged visibilities.
If, for the high-res data, we assume that the noise term has constant r.m.s $\sigma_\mathrm{s}$ across all the baselines, then the noise induced in each of the \Refcom{BDA} visibilities is given by:
\begin{equation}
\sigma_{pq\RefEq{kr},\Refcom{\scalebox{0.5}{BDA}}}^2 = \frac{1}{n_{pq\RefEq{kr}}^2 } \sum_{i=1}^{n_{pq\RefEq{kr}}} \sigma_\mathrm{s}^2 = \frac{\sigma_\mathrm{s}^2}{n_{pq\RefEq{kr}}}\label{noise:avgbin}.
\end{equation}
Let us assume that the noise is uncorrelated across averaged visibilities.
The average of the squared error norm in each pixel of the dirty image is then:
\begin{equation}
\sigma_{pix,\Refcom{\scalebox{0.5}{BDA}}}^2 = \frac{ (\sum_{pq\RefEq{kr}} W_{pq\RefEq{kr}}^2 \sigma_{pq\RefEq{kr},\Refcom{\scalebox{0.5}{BDA}}}^2) }{ (\sum_{pq\RefEq{kr}} W_{pq\RefEq{kr}})^2 },
\end{equation}
which for natural image weighting $W\equiv1$ simplifies to:
\begin{equation}
\sigma_{pix,\Refcom{\scalebox{0.5}{BDA}}}^2 = \Bigg(\frac{\mathrm{CF}\sigma_\mathrm{s}}{N_\mathrm{vis}^\mathrm{hires}}\Bigg)^2\sum_{pq\RefEq{kr}} \frac{1}{n_{pq\RefEq{kr}}}\label{noise:bda_pixel}.
\end{equation}
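Eq.~(\ref{noise:avgbin}) and Eq.~(\ref{noise:bda_pixel}) can be verified with a short Monte Carlo experiment under natural weighting ($W\equiv1$). The Python/NumPy sketch below is purely illustrative: the bin sizes are toy values and the noise is drawn as real-valued, uncorrelated Gaussian samples of r.m.s $\sigma_\mathrm{s}$.
\begin{verbatim}
# Illustrative Monte Carlo check of Eq. (noise:avgbin) and Eq. (noise:bda_pixel)
# under natural weighting; toy bin sizes, real-valued Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
sigma_s = 1.0
bin_sizes = np.array([50, 50, 20, 20, 8, 8, 2, 2])  # n_pqkr per sampling bin (toy)
n_hires = bin_sizes.sum()
cf = n_hires / bin_sizes.size                        # overall compression factor

n_trials = 20000
pixel = np.empty(n_trials)
edges = np.cumsum(bin_sizes)[:-1]
for t in range(n_trials):
    noise = rng.normal(0.0, sigma_s, n_hires)
    # average the high-res samples within each baseline-dependent bin
    binned = np.array([chunk.mean() for chunk in np.split(noise, edges)])
    pixel[t] = binned.mean()                         # naturally weighted "pixel"

measured = pixel.std()
analytic = (cf * sigma_s / n_hires) * np.sqrt(np.sum(1.0 / bin_sizes))
print(f"measured pixel r.m.s = {measured:.4f}, Eq. (noise:bda_pixel) -> {analytic:.4f}")
\end{verbatim}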
It is clear that the noise induced by BDA differs across baseline visibility samples because the numbers of averaged samples are quite different; this is expected from Eq.~(\ref{noise:avgbin}). In the case of simple averaging, Eq.~(\ref{noise:bda_pixel}) reduces to:
\begin{alignat}{2}
\sigma_{pix,\Refcom{\scalebox{0.5}{AVG}}}^{2} &= \frac{\mathrm{CF}}{N_\mathrm{vis}^\mathrm{hires}n_tn_{\nu}}\sigma_{\mathrm{s}}^2\\
&=\frac{1}{N_\mathrm{vis}^\mathrm{hires}}\sigma_{\mathrm{s}}^2
=\frac{1}{N_\mathrm{vis}^\mathrm{\Refcom{\scalebox{0.5}{AVG}}}n_tn_{\nu}}\sigma_{\mathrm{s}}^2,\label{noise:bda_pixelx1}
\end{alignat}
where $N_\mathrm{vis}^\mathrm{\Refcom{\scalebox{0.5}{AVG}}}$ is the number of visibilities in the simple averaged data\Refnew{; the index AVG stands for simple averaging}. Refer to Appendix~A for a detailed proof of Eq.~(\ref{noise:bda_pixel}) and~(\ref{noise:bda_pixelx1}). The derivation in Eq.~(\ref{noise:bda_pixelx1}) matches the mathematical expectation of the squared error
norm in each pixel of the dirty image in the case of simple averaging, as shown in \citet{atemkeng2016}.
\ATMR{It is clearly shown in Eq.~(\ref{noise:bda_pixelx1}) that $\sigma_{pix,\Refcom{\scalebox{0.5}{BDA}}}=\sigma_{pix,\Refcom{\scalebox{0.5}{AVG}}}$.
Note that this is always true because both compression methods use a boxcar window as a weighting function in the $uv$-space which means
that all the pre-averaged visibilities are equally weighted for both BDA and simple averaging.}
If we \Refcom{compress} the visibilities using a BDWF \Refcom{$X(u,v)$} or a BDAWF $X_{\Refcom{\scalebox{0.5}{BDA}}}(u,v)$, the noise
term still differs for each visibility $pq\RefEq{kr}$:
\begin{equation}
\label{eq:noise:bdwf}
\sigma_{X_{pq\RefEq{kr}} }^2 = \frac{\sum X^2(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}})}
{\big [ \sum X(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}}) \big ]^2 } \, \sigma_\mathrm{s}^2,
\end{equation}
where the sums are taken over the baseline-independent \Refcom{sampling bin indices and}
\begin{equation}
\label{eq:noise:bdabdwf}
\sigma_{X_{pq\RefEq{kr}}, \Refcom{\scalebox{0.5}{BDA}}}^2 = \frac{\sum X_{\Refcom{\scalebox{0.5}{BDA}}}^2(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}})}
{\big [ \sum X_{\Refcom{\scalebox{0.5}{BDA}}}(\bmath{u}_{pqij}-\bmath{u}_{pq\RefEq{kr}}) \big ]^2 } \, \sigma_\mathrm{s}^2,
\end{equation}
where the sums are taken over the baseline-dependent \Refcom{sampling bin indices}.
Eq.~(\ref{eq:noise:bdwf}) and~(\ref{eq:noise:bdabdwf}) are of critical importance to the squared error norm in each pixel of the dirty image and so merit a detailed explanation:
\begin{enumerate}[1)]
\item \Refcom{In $t\nu$-space}, the length of the window $X(u,v)$ \Refnew{(BDWF)} remains constant across all baselines while the window \Refcom{resolution varies} on different baselines: in this sense, $X(u,v)$ is baseline-dependent. Because the length of $X(u,v)$ is constant along all the baselines, the compression factor also remains constant across all the baselines, as when applying a simple averaging \Refcom{(see \citet{atemkeng2016})}.
\item In $t\nu$-space, the window $X_{\Refcom{\scalebox{0.5}{BDA}}}(u,v)$ (BDAWF) varies in length (hence the extra index ${\Refcom{\scalebox{0.5}{BDA}}}$) and \Refcom{resolution} across all baselines. Because the length of $X_{\Refcom{\scalebox{0.5}{BDA}}}(u,v)$ varies along baselines,
the compression factor thus varies on different baselines (looking back to Figure~\ref{fig:bda-boxcar-uvleng-directtion}).
\item If one were to constrain the compression factor $\mathrm{CF}$ to be equal for both ``BDWF'' and ``BDAWF'', the squared error norm in each pixel of the dirty image will change radically. This can be understood by looking at
steps 1) and 2): $X(u,v)$ and $X_{\Refcom{\scalebox{0.5}{BDA}}}(u,v)$ produce completely different weights for each $(u,v)$ point. In other words, $X(u,v)\neq X_{\Refcom{\scalebox{0.5}{BDA}}}(u,v)$ for a given $(u,v)$ point.
\end{enumerate}
The visibility noise penalty induced by BDA or BDAWF is the relative increase in noise over simple averaging:
\begin{alignat}{2}
\Xi_{X_{\mu}} &= \frac{\sigma_{\mathrm{X_{\mu}}}}{\sigma_{\Refcom{\scalebox{0.5}{AVG}}}}.
\end{alignat}
Here, $\sigma_{\Refcom{\scalebox{0.5}{AVG}}}^2=\sigma_{\mathrm{s}}^2/(n_tn_{\nu})$ is the noise variance on the
simple averaged visibility and $\sigma_{\mathrm{X_{\mu}}}$ is the noise induced by either BDA or a BDAWF.
The centre pixel noise penalty in the image with imaging weights $W$ is:
\begin{equation}
\Xi^W_{\mu} =
\frac{\sigma_{{pix},X}^2}{\sigma_{{pix}}^2} =
\frac{ (\sum_{\mu} W_{\mu}^2 \Xi_{X\mu}^2) }{ (\sum_{\mu} W_{\mu})^2}. \label{eq:noisepenalty:natural}
\end{equation}
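The weighted-bin noise of Eq.~(\ref{eq:noise:bdwf}) and the visibility noise penalty $\Xi$ can be evaluated directly from the window coefficients. The Python sketch below is only an illustration: the sampled sinc taper is a generic stand-in for the actual BDWF/BDAWF coefficients used in this work, and the bin dimensions are toy values.
\begin{verbatim}
# Illustrative evaluation of the weighted-bin noise, Eq. (eq:noise:bdwf),
# and of the visibility noise penalty Xi relative to simple averaging.
import numpy as np

def weighted_bin_sigma(weights, sigma_s=1.0):
    """r.m.s of a weighted average: sigma_s * sqrt(sum X^2) / sum X."""
    w = np.asarray(weights, dtype=float)
    return sigma_s * np.sqrt(np.sum(w**2)) / np.sum(w)

n_t, n_nu, sigma_s = 15, 10, 1.0                 # samples per bin (toy values)

# Simple averaging: boxcar weights -> sigma_s / sqrt(n_t * n_nu)
sigma_avg = weighted_bin_sigma(np.ones(n_t * n_nu), sigma_s)

# A sampled sinc-like taper over the same bin (stand-in window only)
taper = np.outer(np.sinc(np.linspace(-1, 1, n_t)),
                 np.sinc(np.linspace(-1, 1, n_nu))).ravel()
sigma_x = weighted_bin_sigma(taper, sigma_s)

xi = sigma_x / sigma_avg                          # visibility noise penalty
print(f"sigma_AVG={sigma_avg:.4f}  sigma_X={sigma_x:.4f}  noise penalty Xi={xi:.2f}")
\end{verbatim}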
Note that the noise penalty properties induced by overlapping BDWFs defined in \citet{atemkeng2016} remain valid for BDA and BDAWFs.
\ATMR{Simulations confirm the theoretical noise penalty estimate discussed above.
The simulation consists of two datasets, the high-res and the low-res, using the MeerKAT \Refnew{telescope}.
The high-res dataset is simulated with $\sigma_s=1$ Jy thermal noise during a total period of 4 hr with 1 s integration time and 84 MHz bandwidth divided into channels of 84 kHz. We then \Refcom{compress} the high-res using simple averaging, then BDA and BDAWFs, and save the resulting visibilities to the low-res dataset. For both \Refcom{compression} schemes, we fixed the compression factors to \CF{15}{10} and \CF{30}{20}, which then correspond to simple averaging across \BIN{15}{0.84} and \BIN{30}{1.68} respectively.
We use the $\textcolor{black}{\text{sinc}}$ tuned to a FoI of $1.3^\circ$ with overlap factors of $6\times5$ of the baseline-dependent \Refcom{sampling} bins. For each case of compression, we then consider the r.m.s pixel noise as an estimator of $\sigma_{pix}$ (simple averaging) and $\sigma_{{pix},\scalebox{0.5}{X}}$ (BDA or BDAWFs).
The analytically estimated and simulated noise penalties are compared in Table~\ref{tab:noise-comparison}.
The results confirm that
the analytical estimates and the simulations agree.}
\section{Simulations and results}
\label{results:simulation}
Having explored the mathematics and implementation of BDA,
we now turn to the simulation aspects. The simulations are performed with the MeerKAT and the EVN \Refnew{telescopes}. The simulated images are neither calibrated nor deconvolved, to avoid introducing additional effects from calibration and/or deconvolution algorithms.
Two test scenarios are considered and both of them
are simulated using MeqTrees~\citep{noordam2010meqtrees}:
\begin{itemize}
\item We consider a 1 Jy point source at various sky positions, with no noise or other corruptions included.
We evaluate the efficiency of a BDA correlator using
two different procedures. Firstly, we simulate the source at a fixed sky position\EDIT{,} apply BDA and BDAWFs\EDIT{,} and measure
the \Refnew{compression} effects separately on each baseline. Secondly, we simulate the point source at various angular distance\EDIT{s}
from the phase
centre and apply BDA and BDAWFs, thereby evaluating the interferometer
cumulative decorrelation effects on all baselines.
We measure the source
peak amplitude in each dirty image after \Refcom{compression}. Since each dirty image corresponds to a single source, the peak gives
us the degree of smearing associated with a given \Refcom{compression} method and compression factor.
\item \Refcom{The PB on its own could be used for source suppression: the higher the frequency, the fewer sources out of the FoI contaminate the image. Tests are performed with the PB included during the simulations; BDA and BDWFs are then applied
to evaluate the combined degree of suppression for sources out of the FoI.}
\end{itemize}
\subsection{Application to MeerKAT data}
\subsubsection{Source amplitude and East-West baselines}
\label{subsection:meerkaysupression}
\begin{table}
\begin{tabular}{lll}
\hline
{\bf Filters} & {\bf $\Xi$ theo} & {\bf $\Xi$ sim}\\
\hline\hline
\Refcom{BDA} \BIN{15}{0.84}&1.00&1.03\\
\Refcom{BDA} \BIN{30}{1.68}&1.00 &1.004\\
\hline
\WF{\Refcom{BDA}-\textcolor{black}{\text{sinc}}-}{6}{5}-1.3deg \BIN{15}{0.84} &1.19 &1.23\\
\WF{\Refcom{BDA}-\textcolor{black}{\text{sinc}}-}{6}{5}-1.3deg \BIN{30}{1.68} &1.51&1.56\\
\hline
\end{tabular}
\caption{A comparison of image noise penalties associated with different BDA and BDAWFs, computed analytically ($\Xi$ theo)
vs. simulations ($\Xi$ sim). The analytical noise penalty for BDA is equal to 1; this is
\Refcom{straightforward} to see from Eq.~(\ref{noise:bda_pixel}) and~(\ref{noise:bda_pixelx1}).
}
\label{tab:noise-comparison}
\end{table}
\newcommand{\bdaWF}[3]{{#1}$#2${}$\times${}$#3$}
\begin{figure*}
\centering
\includegraphics[width=.4\textwidth]{effect_time_averaging_amplitude_src135-bda.pdf}
\includegraphics[width=.4\textwidth]{effect_bandwidth_averaging_amplitude_src135-bda.pdf}
\includegraphics[width=.4\textwidth]{dtimecf30x20.pdf}%
\includegraphics[width=.4\textwidth]{dfreqcf30x20.pdf}\\
\caption{(Top) Amplitude loss: the apparent intensity of a 1 Jy source at 2.25 deg as seen by MeerKAT at 1.4 GHz,
as a function of East-West baseline components; (left) compression carried out only in time with compression factor fixed
to 15 time-bins; (right) compression is carried out only in frequency with compression factor fixed to 10 frequency-bins. (Bottom) Baseline-dependent compression factors in time (left) and frequency (right),
both on a logarithmic scale, as a function of East-West baseline length. }\label{fig:srcat30arcminx}
\end{figure*}
The experiment in Figure~\ref{fig:srcat30arcmin_avg} is repeated. The simulation consists of two high-res measurement sets (MSs), each with a source at 2.25 deg relative to the observation phase centre.
Two low-res MSs are generated to receive the \Refcom{compressed} visibilities.
The results of the decorrelation when applying simple averaging and BDA are compared in the top panel of Figure~\ref{fig:srcat30arcminx} and the BDA compression factors \Refcom{achieved with the simulation} are plotted in the bottom panel of Figure~\ref{fig:srcat30arcminx}.
\begin{itemize}
\item Time decorrelation and compression factors, Figure~\ref{fig:srcat30arcminx} (left): the MS consists of 64 frequency channels of 84 kHz width each, and
7200 s timeslots of 1 s integration time. The compression factor is fixed to \CF{15}{1} \EDIT{for both} simple averaging and BDA.
For BDA, the shorter baselines are compressed by a lot more than 15 and the longer baselines by a lot less, while for simple averaging this corresponds to a compression factor of 15 along all the baselines.
\item Bandwidth decorrelation and compression factors, Figure~\ref{fig:srcat30arcminx} (right): The MS consists of 100 timeslots of 1 s integration, and
1000 frequency channels of 84 kHz (total bandwidth of 84 MHz). The compression factor is fixed to \CF{1}{10} both for simple averaging and BDA.
For BDA, the shorter baselines are compressed by a lot more than 10 and the longer baselines by a lot less, and for simple averaging this corresponds to a compression factor of 10.
\end{itemize}
It is \EDIT{clearly} noticeable in the top panels of Figure~\ref{fig:srcat30arcminx} that on shorter baselines, the smearing rates of simple averaging and BDA are approximately equivalent despite the small percentage of signal lost with BDA in the region between 0.2 km and 0.8 km.
This can be understood by looking at the MeerKAT histogram depicted in Figure~\ref{fig:meerkat}: this is the region where one wants to compress the data as much as possible.
However, for a source at 2.25 deg and at these BDA compression factors the degree of the decorrelation remains approximately equal across all the baselines. This result confirms our mathematical prediction in Eq.~(\ref{eq:bda2taper}).
It appears from the \Refcom{simulated} time and frequency BDA compression factors depicted in the bottom of
Figure~\ref{fig:srcat30arcminx} that the data are compressed more in frequency than in time. This is because, for MeerKAT, the $uv$-track along 0.84 MHz is smaller than the $uv$-track along 15 s.
We can still constrain the compression factors to be equal in both time and frequency; in principle, the shape of the 2-D $uv$-track should then be square-like. To achieve this, we note that the averaged bandwidth must be equal to $w_e \nu_r \Delta t$, where the constant $w_e$ is the Earth rotation velocity~\citep{thompson2001interferometry}.
\subsubsection{Source amplitude and distance from the phase centre}
\label{subsect:meerkat}
We simulate data at high time-frequency resolution of 1 s integration during 4 hr and 84 kHz channels width
for a total bandwidth of 84 MHz centred at 1.4 GHz. The sky model is a single 1 Jy point source at
a given distance from the phase centre. Three MSs are generated to store the
\Refcom{compressed} visibilities:
\begin{itemize}
\item \textcolor{black}{Two MSs contain the \Refcom{compressed} visibilities for \BIN{15}{0.84} and \BIN{30}{1.68}, resulting in compression factors of \CF{15}{10} and \CF{30}{20} respectively.}
\item A third MS receives the \Refcom{compressed} visibilities for BDA and BDAWFs. This MS is a copy of the high-res MS where \Refnew{the flagging implementation for BDA}
\EDIT{described} in Sect.~\ref{BDA:impl} is applied. Two compression factors \EDIT{are} adopted for the BDA and BDAWFs: \CF{15}{10} and \CF{30}{20}.
\end{itemize}
Figure \ref{fig:bda-sn-bessel-2ge1} shows the performance of the different \Refnew{compression schemes} and compression factors associated with their noise penalty. BDA applied to a sinc-like BDWF is considered in this test and is tuned
to three different FoI settings, as indicated by the plot: 0.65 deg, 1.32 deg and 2.25 deg.
The results can be alternatively appreciated by regarding the performance of BDAWFs:
BDA with \CF{15}{10} provides good results in flux recovery\EDIT{,} i.e. for $6\%$ smearing we can image up to \Refnew{4.5 deg FoI}, while simple averaging at the same compression factor can only recover this FoI at
$10\%$ smearing.
The BDA with compression factor \CF{30}{20} still provides better source recovery compared to simple averaging at the same compression factor.
We can also \EDIT{note} that at the same compression factor, the source suppression performance of BDA is worse than that of simple averaging.
\Refnew{At the different compression factors, we see that all the BDAWF filters provide excellent performance in source recovery and far-field suppression} compared to simple averaging or BDA: smearing across the FoI is less than 2\% (horizontal grey dashed-line), and out-of-FoI suppression is almost two orders of magnitude higher \EDIT{than} with simple averaging or BDA. Note the tapering \EDIT{behaviour} of the BDAWFs at the different compression factors. As the compression factor increases, the response of the BDAWFs becomes flatter: this clearly illustrates their excellent performance. The reason for this is that a unique sinc-like window function is applied on all the baselines (\Refcom{recall from Figure~\ref{fig:bda-boxcar-uvleng-directtion}}). For larger compression factors the sinc-like window function becomes closer to the ``sinc''\EDIT{,} which results in a more optimal ``boxcar-like'' taper in the image domain. In general, the noise penalty does depend on the compression scheme and its parameters; this is the case for BDAWFs, where all the parameters, i.e. compression factors, overlapping bins and FoI, affect the noise penalty.
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{meerkat-effect_time_sinc-15x10.pdf}
\caption{Amplitude loss: the apparent intensity of a 1 Jy source as seen by the MeerKAT telescope at 1.4 GHz
as a function of distance from phase centre, for simple averaging with \BIN{15}{0.84} and \BIN{30}{1.68} bins, and for BDA and BDAWFs.
The compression factor is fixed to \CF{15}{10} and \CF{30}{20} for all the \Refcom{compression} methods.}\label{fig:bda-sn-bessel-2ge1}
\end{figure*}
\subsubsection{Relative SNRs using MeerKAT data }
\label{sect:relativeSNR}
\begin{table*}
\begin{tabular}{ |p{5cm}||p{2cm}|p{2cm}|p{2cm}|p{2cm}| }
\hline
\bf FILTERS& \bf 0.65 deg&\bf 1.32 deg&\bf 2.25 deg\\
\hline
\Refcom{AVG} \BIN{15}{0.84} &16.827& 16.437 & 15.672 \\
\Refcom{BDA} \BIN{15}{0.84} &14.767 & 14.544 & 14.072 \\
\WF{\Refcom{BDA}-\textcolor{black}{\text{sinc}}-}{6}{5}-{1.3deg} \BIN{15}{0.84} &64.354 &11.590&1.144 \\
\WF{\Refcom{BDA}-\textcolor{black}{\text{sinc}}-}{6}{5}-{2.6deg} \BIN{15}{0.84} &40.256 & 64.538& 9.554 \\
\WF{\Refcom{BDA}-\textcolor{black}{\text{sinc}}-}{6}{5}-{4.5deg} \BIN{15}{0.84} & 32.576&32.569& 60.249 \\
\hline
\end{tabular}
\caption{Simulated SNR as described in Eq.~\Refcom{(}\ref{eq:snr}\Refcom{)}, i.e. $\mathrm{SNR}\approx S_\mathrm{smear}/(C_\mathrm{noise}+T_\mathrm{noise})$, where $S_\mathrm{smear}$, $C_\mathrm{noise}$ and $T_\mathrm{noise}$ are defined
as the signal of the source of interest, the contamination signal that affects the signal of interest, and the thermal noise, respectively.
Here $T_\mathrm{noise}=\sigma_{{pix}, X}$ is defined in Sect.~\ref{sect:noisepenalty}.}
\label{tab:relativeSNR}
\end{table*}
Simulations are used to separate the variables $S_\mathrm{smear}$, $C_\mathrm{noise}$ and $T_\mathrm{noise}$
in Eq.~\Refcom{(}\ref{eq:snr}\Refcom{)}. The simulated MS in Sect.~\ref{subsect:meerkat} is reused. We evaluate the SNR of an image of $\sim$0.5 square degrees centred at 0.65 deg, 1.32 deg and 2.25 deg. For each case, we know $S_\mathrm{smear}$ from Figure~\ref{fig:bda-sn-bessel-2ge1}.
To evaluate the contamination, for each case, we simulate two sources: a nearby source of 1 Jy (1 deg away from each case) and a distant source of 10 Jy (20 degrees away from each case),
and make an image. The image will be empty, except for the contribution from these two sources. For the thermal noise, an empty sky is simulated with 1 Jy thermal noise for each of the cases listed above.
The different compression methods are applied and their resulting SNRs are listed in Table~\ref{tab:relativeSNR}. \ATMR{Results show that our compression technique demonstrates
better performance in SNR when compared to simple averaging. Comparatively, using BDAWFs provides the best performance in SNR, up to a factor of $\sim$4 higher than simple averaging or BDA.
Note that in regions where the source suppression response of BDAWFs kicks in, the SNR quickly drops, since BDAWFs are suppressing the source signal itself at this point.}
\subsubsection{ BDAWFs combined with the primary beam and source suppression}
\label{sect:beamandBDAWFs}
The additional degree of source suppression provided by BDAWFs augments the source suppression provided by the PB, as investigated by e.g.~\citep{mort2016analysing}. Note that BDA by itself (without window functions) actually provides ``less'' source suppression than simple averaging, at the same compression factor.
In this section, we investigate and compare the combined suppression factor achieved by the PB and averaging, BDA and BDAWFs.
\ATMR{
A PB model for MeerKAT at 1.4 GHz along with a \Refnew{nearby} 20 Jy source located at the second sidelobe of the PB is simulated using the MS described in Sect.~\ref{subsect:meerkat}.
We suppose imaging up to the FWHM of the MeerKAT PB at 1.4 GHz (i.e. 0.65 deg away from the field centre). Three filters are considered and compared, \Refcom{AVG \BIN{15}{0.84}}, \Refcom{BDA \BIN{15}{0.84}} and \Refcom{BDA}-sinc-6$\times$5-1.3deg \Refcom{\BIN{15}{0.84}}, all having a compression factor of CF=15$\times$10.
Figure~\ref{fig:srcat30arcmin_avg-x} shows dirty images of size $40\times40$ arcmin \Refcom{at different pixel scales}. These images should be empty except the contamination from the \Refcom{nearby} source.
The top-left and top-right images \Refnew{of Figure~\ref{fig:srcat30arcmin_avg-x} show} the high-res (i.e. image produced with the pre-averaged MS) and the simple averaged images respectively.
The bottom-left and bottom-right images are produced after applying \Refcom{BDA} \Refcom{\BIN{15}{0.84}} and \Refcom{BDA}-sinc-6$\times$5-1.3deg \Refcom{\BIN{15}{0.84}} respectively.
In both cases, the high-res image is dominated by confusion noise across the FoI. The compressed images are considerably more free of confusion noise.
Unlike BDA, which considers only
flux recovery in the image domain, BDAWFs consider both flux recovery in the given FoI and source suppression out of this FoI. This is clearly seen in Figure~\ref{fig:srcat30arcmin_avg-x}: BDA
on its own does not remove the contamination any better than simple averaging, but the BDAWF does remarkably well. }
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{compression_dirty_images.pdf}
\caption{Contamination in the FoI from a 20 Jy source located at the second null of the MeerKAT primary beam. Initially, the data is imaged without any data compression being carried out (top-left panel). Data compression is then
applied using \Refcom{AVG} \Refcom{\BIN{15}{0.84}} (top-right), \Refcom{BDA} \Refcom{\BIN{15}{0.84}} (bottom-left) and
\Refcom{BDA}-sinc-6$\times$5-1.3deg \Refcom{\BIN{15}{0.84}} (bottom-right). \Refcom{The colourbars of the images are in Jansky and are in different scales.} BDAWFs offer better
reduction in source contamination compared to \Refcom{AVG} \Refcom{\BIN{15}{0.84}} and \Refcom{BDA} \Refcom{\BIN{15}{0.84}}.}
\label{fig:srcat30arcmin_avg-x}
\end{figure*}
\subsection{BDAWFs and the EVN}
In VLBI the baselines are so long (up to $\sim$10000 km) that the FoV is always limited, and normally it is only a tiny fraction of the PB at the FWHM because of decorrelation due to time and \Refnew{bandwidth} averaging. To keep decorrelation/smearing at acceptable level one may apply wide-FoV correlation, but handling the resulting data volumes has been challenging (e.g. \citet{chi2013deep}).
Another solution is to $uv$-shift wide-field correlated data to various phase centres and then apply averaging to obtain a number of smaller FoVs within the PB~\citep{morgan2011vlbi, ruiz2017faint}. This has been fully
implemented in the EVN Software Correlator (SFXC;~\citet{keipema2015sfxc}). Multi-phase centre correlation makes milliarcsecond-resolution imaging of a-priori known sources spread
over a wide \Refnew{FoV} possible; this is now applied
routinely at the EVN.
But some applications (e.g. a transient search within the full PB in
VLBI data, or building up a wide-FoV EVN archive) would require storing the raw data
from all telescopes; this, however, results in very large data volumes unless
alternative approaches are found.
We investigate the possibility of using BDA and BDAWFs in VLBI to
preserve a significant
fraction of the PB while significantly reducing the data volume. We repeated the
simulation scenarios described in Sect.~\ref{subsect:meerkat} using the full EVN (i.e.
Badary, Effelsberg, Hartebeesthoek, Jodrell Bank, Medicina, Noto,
Onsala, Shanghai, Svetloe, Torun, Westerbork, Zelenchukskaya) at 1.6
GHz. The results are given in Figure~\ref{fig:bda-sn-bessel-2ge3}. It can be seen
that for a certain compression rate with simple averaging that would
result in a FoI of
6 arcmin, an equivalent compression rate using BDA or BDAWFs would
result in a FoI
of 18 arcmin. We also note that, if one aims at imaging a FoI of
18 arcmin with
simple averaging, then this is possible with BDA reducing data by a factor of 9.38,
and the factor can be even higher with BDAWFs. While these initial
tests are very
promising, in VLBI there is a significant trade-off in sensitivity and
resolution,
therefore the best approach should be investigated in detail
independently for each
science application.
\begin{figure*}
\centering
\includegraphics[width=.9\textwidth]{evn-effect_time_averaging.pdf}
\caption{Amplitude loss: the apparent intensity of a 1 Jy source as seen by EVN at 1.6 GHz as
a function of distance from phase centre. \ATMR{Results show that the data can be compressed a lot more than a factor of 9.38 using BDAWFs.}
}\label{fig:bda-sn-bessel-2ge3}
\end{figure*}
\section{Conclusion and perspectives}
\label{conclusion}
As discussed above, compression of visibilities by simple averaging shows that
decorrelation/smearing is more significant on longer baselines than on shorter ones, and that decorrelation can only be avoided if the correlator performs the averaging procedure over \Refcom{smaller} bins, which however results in high
data rates.
We now make predictions pertaining to \Refcom{sampling the visibilities regularly across all the baselines in the entire $uv$-space and applying BDA and BDAWFs.} Intuitively, in the time-frequency space (or the correlator domain) this corresponds to averaging
within sufficiently large \Refcom{sampling bins} for the shorter baselines, while the longer baselines are
averaged within smaller \Refcom{sampling bins}.
The question is then whether such an averaging technique
will not only decrease smearing within the observation \EDIT{FoI}, but also reduce the data size.
The second \EDIT{question} pertains to calibration issues for this method, given that calibration
is a complex-visibility correction process.
BDA could introduce complexity further down the line: it could, for example, mean that a dynamic calibration solution interval would become necessary.
This implies that the calibration solution interval would
vary across baselines and across the frequency and/or time intervals.
We have established that BDA by itself
can only achieve data compression but not FoI shaping: BDA does
decrease smearing over the FoI, while on the other hand, sources out-of-FoI are not suppressed compared to simple averaging.
We have found that \Refnew{BDAWFs} result \EDIT{in} excellent tapering behaviour,
which can decrease smearing to about $2\%$ or less over a selected FoI, with out-of-FoI source suppression
almost two orders of magnitude higher \EDIT{than} simple averaging, while the data \EDIT{are} compressed at the same rate.
We should note that, like simple averaging, BDA and BDAWFs also distort the point spread function (PSF), \Refnew{which becomes position dependent} and reacts differently compared to simple averaging. However, for an efficient use of BDA \Refnew{and} BDAWFs, one is required to \EDIT{predict} this PSF at different sky positions during deconvolution. \Refnew{There exist\Refcom{s} a faceting imaging framework that accounts for this PSF variation during deconvolution when applying BDA (see DDFacet~\citep{ctasseDDFacet}). DDFacet uses the brute-force approach to compute the PSF at the centre of each facet, and this PSF is used to deconvolve the facet. However, the brute-force computing load is only tolerable for small facets. For large facets, and for any non-faceting deconvolution algorithm, an approximation-based method to derive all these PSFs must be implemented with the aim of reducing the computing cost~\citep[in prep.]{atem2}.}
This paper opens up several possibilities for future work. Firstly,
designing an optimally matched filter for a BDAWF is an interesting avenue of further
research. In practical situations, the IPR of a sinc-like lowpass filter
is far from ideal in the sense that a sinc-like filter is band-limited (zero outside some
intervals) and sampled. Filter design theory for lowpass filters could, therefore, be used
to explore an ideal IPR, by using an approximation to define the ideal
\Refnew{filter} coefficients and parameters, such as the passband, the transition band and the stopband.
The second avenue involves evaluating the degree of source suppression as a function of
array layout and \Refnew{BDAWFs} parameters, i.e. the passband, transition band, stopband and
the size of the filter.
The third avenue of exploration consists of investigating and exploring calibration with
BDA and BDAWFs. Currently, BDA and BDAWFs can
only be used post-calibration. Exploring the calibration parameters for BDA and BDAWFs
could open a new research avenue in radio interferometry, in view of the effective use of
BDA and BDAWFs.
Another possible line of work on BDA would be to explore a new \Refnew{storage scheme} to take full advantage of the compression capabilities of BDA. In this work, we have considered and used only data structures that the MS and the other software packages we used can support. The \Refcom{MS has} a lot of flagging entries that still reside in memory.
Finally, this document was restricted to simulations. The next step will be to implement each
of the techniques presented in this work in practical research scenarios, e.g. applying the
filters to real interferometric data.
\section*{Acknowledgements}
This work is based upon research supported by the South African Research Chairs Initiative of the Department of
Science and Technology and National Research Foundation. The European VLBI Network is a joint facility of independent European,
African, Asian, and North American radio astronomy institutes. M. Atemkeng is grateful to Tammo Jan
Djimeka
for valuable discussions on BDA and its applicability to LOFAR real data during his visit at ASTRON. The visit to ASTRON was
made possible by the FP7 MIDPREP program.
We
thank our colleagues Kshitij Thorat, Modhurita Mitra, Etienne Bonnassieux, Diana G. Klutse and Sphesihle Makhathini for their insights and comments on early drafts of this paper.
We would also like to thank Khan Asad for making available the MeerKAT primary beam model used in Sect.~\ref{sect:beamandBDAWFs}.
\Refnew{The authors would like to thank the reviewer for his valuable comments and suggestions that strongly improved the quality of the paper.}
\bibliographystyle{mn2e}
\section{Introduction}
MacMahon's classical theorem \cite{Mac} on plane partitions fitting in a given box (see \cite{Mac}, and \cite{Stanley}, \cite{Andrews}, \cite{Kup}, \cite{Stem}, \cite{Zeil},\cite{CK}, \cite{Ciu1}, \cite{Tri1}, \cite{Tri2}, \cite{CL} for more recent developments) is equivalent to the fact that the number of lozenge tilings of a centrally symmetric hexagon of side-lengths $a,b,c,a,b,c$ (in clockwise order, starting from the north side) on the triangular lattice is equal to
\begin{equation}\label{MacMahoneq}
\mathbf{P}(a,b,c):=\frac{\operatorname{H}(a)\operatorname{H}(b)\operatorname{H}(c)\operatorname{H}(a+b+c)}{\operatorname{H}(a+b)\operatorname{H}(b+c)\operatorname{H}(c+a)},
\end{equation}
where the \emph{hyperfactorial} function $\operatorname{H}(n)$ is defined as $\operatorname{H}(n):=0!\cdot 1!\cdot 2!\cdots(n-1)!$. Here a \emph{lozenge} is the union of any two equilateral triangles sharing an edge; a \emph{lozenge tiling} of a region on the triangular lattice is a covering of the region by lozenges without gaps or overlaps.
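For concreteness, Eq.~(\ref{MacMahoneq}) is easy to evaluate exactly with integer arithmetic; the short Python sketch below is purely illustrative (the function names are ours) and only evaluates the stated formula. For instance, $\mathbf{P}(2,2,2)=20$, the number of plane partitions fitting in a $2\times2\times2$ box.
\begin{verbatim}
# Illustrative exact evaluation of MacMahon's box formula P(a,b,c), Eq. (MacMahoneq).
from math import factorial

def hyperfactorial(n):
    """H(n) = 0! * 1! * ... * (n-1)!"""
    result = 1
    for k in range(n):
        result *= factorial(k)
    return result

def macmahon(a, b, c):
    H = hyperfactorial
    return H(a) * H(b) * H(c) * H(a + b + c) // (H(a + b) * H(b + c) * H(c + a))

print(macmahon(2, 2, 2))   # 20 lozenge tilings of the 2,2,2,2,2,2 hexagon
print(macmahon(3, 3, 3))   # 980
\end{verbatim}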
The striking formula of MacMahon motivates us to find similar ones. In particular, we would like to investigate lozenge tilings of hexagons with certain `defects', and the most popular defect is a removal of a collection of one or more equilateral triangles. Strictly speaking, one would like to classify this defect based on the position where the triangle has been removed as follows. If a triangle is removed inside the hexagon, we call it a \emph{(triangular) hole}; if the triangle is removed along the boundary of the hexagon, we call it a \emph{(triangular) dent}.
The tale of tiling enumerations of hexagons with holes (also known as `\emph{holey hexagons}') originally came from an (ex-)open problem posed by James Propp. In 1999, James Propp published an article \cite{Propp} tracking the progress on a list of 20 open problems in the field of exact enumeration of tilings, which he presented
in a lecture in 1996, as part of the special program on algebraic combinatorics
organized at MSRI. The article also presented
a list of 12 new open problems. Problem 2 on the list asks for a tiling formula for a hexagon of side-lengths\footnote{From now on, we always list the side-lengths of a hexagon in the clockwise order, starting from the north side.} $n,
n+ 1, n, n+ 1, n, n+ 1$ with the central unit triangle removed (see Figure \ref{fig:centralhole}(a)). Ciucu \cite{Ciu2} solved and generalized this problem to $(a, b+ 1, b, a+ 1, b, b+ 1)$-hexagons with the central unit triangle
removed (shown in Figure \ref{fig:centralhole}(b)). Gessel and Helfgott later obtained this result independently by a different method \cite{HG}. S. Okada and
C. Krattenthaler \cite{OK} have solved an even more general problem for a 3-parameter family of holey hexagons, $(a, b+ 1, c, a+ 1, b, c+ 1)$-hexagons with the central unit
triangle removed (illustrated in Figure \ref{fig:centralhole}(c)).
One readily sees that, in the above results, the central triangular holes have all size $1$. A milestone in this line of work is when Ciucu, Eisenk\"{o}lbl, Krattenthaler and Zare \cite{CEKZ} showed that the tilings of a hexagon are still enumerated by a simple product formula if we remove a triangle of an arbitrary side-length in the `center'\footnote{Strictly speaking, the triangular hole is only located exactly at the center when all sides of the hexagon have the same parity; in the other cases, it is $1/2$ unit off the center.}, called a `\emph{cored hexagon}' (see Figure \ref{fig:centralhole}(d) for an example). In 2013, Ciucu and Krattenthaler \cite{CK} extended the structure of the central triangular hole in the cored hexagons to a cluster of four triangular holes, called a `\emph{shamrock hole}'. The explicit enumeration of a hexagon with a shamrock hole in the center (called a `\emph{$S$-cored hexagon}' or a `\emph{shamrock-cored hexagon}') also yields a nice asymptotic result that they mentioned as a `dual' of MacMahon's theorem (see Figure \ref{fig:centralhole}(e) for a $S$-cored hexagon). Ciucu \cite{Ciu1} later considered a new structure, called a `\emph{fern}', a string of triangles with alternating orientations, and a hexagon with a fern removed in the center is called a `\emph{$F$-cored hexagon}' or a `\emph{fern-cored hexagon}' (illustrated in Figure \ref{fig:centralhole}(f)). This new structure also yields a nice tiling formula and another dual of MacMahon's theorem. We refer the reader to \cite{CL, Halfhex1, Halfhex2, Halfhex3} for more recent work on the fern structure.
\begin{figure}\centering
\includegraphics[width=13cm]{centralhole.eps}
\caption{Several hexagons with central holes whose tilings are enumerated by simple product formulas (ordered in chronological order). The black triangles indicate the triangular holes.} \label{fig:centralhole}
\end{figure}
For a sequence $\textbf{a}:=(a_i)_{i=1}^{m}$, we denote $o_a:=\sum_{\text{$i$ odd}} a_i$ and $e_a:=\sum_{\text{$i$ even}}a_i$. Let $S(a_1,a_2,\dotsc,a_m)$ denote the upper half of a hexagon of side-lengths $e_a,o_a,o_a,e_a,o_a,o_a$ in which $k:=\lfloor\frac{m}{2}\rfloor$ triangles of side-lengths $a_1,a_3,a_5,\dots,a_{2k+1}$ are removed from the base, such that the distance between the $i$-th and the $(i+1)$-th removed triangles is $a_{2i}$ (see Figure \ref{fig:semihex} for an example). We call the region $S(a_1,a_2,\dotsc,a_m)$ a \emph{dented semihexagon}. Cohn, Larsen and Propp \cite{CLP} interpreted semi-strict Gelfand--Tsetlin patterns as lozenge tilings of the dented semihexagon $S(a_1,a_2,\dotsc,a_m)$, and obtained the following tiling formula
\begin{align}\label{semieq}
s(a_1,a_2,\dots,a_{2l-1})&=s(a_1,a_2,\dots,a_{2l})\\
&=\dfrac{1}{\operatorname{H}(a_1+a_{3}+a_{5}+\dotsc+a_{2l-1})}\dfrac{\prod_{\substack{1\leq i<j\leq 2l-1\\
\text{$j-i$ odd}}}\operatorname{H}(a_i+a_{i+1}+\dotsc+a_{j})}{\prod_{\substack{1\leq i<j\leq 2l-1\\
\text{$j-i$ even}}}\operatorname{H}(a_i+a_{i+1}+\dotsc+a_{j})},
\end{align}
where $s(a_1,a_2,\dotsc,a_m)$ denotes the number of tilings\footnote{We include here the original formula of Cohn--Larsen--Propp for convenience. Let $T_{m,n}(x_1,\dotsc,x_n)$ be the region obtained from the semihexagon of side-lengths $m$, $n$, $m+n$, $n$ (clockwise from the top) by removing the $n$ up-pointing unit triangles from its bottom that are in the positions $x_1,x_2,\dotsc,x_n$ as counted from left to right. Then the number of tilings of the resulting region is given by
\begin{equation*}
\prod_{1\leq i<j\leq n}\frac{x_j-x_i}{j-i}.
\end{equation*}} of $S(a_1,a_2,\dotsc,a_m)$.
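The Cohn--Larsen--Propp product in the footnote above is likewise straightforward to evaluate exactly; the Python sketch below is only an illustration (the dent positions are arbitrary toy data) and uses rational arithmetic so the result is an exact integer.
\begin{verbatim}
# Illustrative exact evaluation of the Cohn--Larsen--Propp product
# prod_{1 <= i < j <= n} (x_j - x_i)/(j - i) for dents at positions x_1 < ... < x_n.
from fractions import Fraction

def clp_count(positions):
    positions = sorted(positions)
    total = Fraction(1)
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            total *= Fraction(positions[j] - positions[i], j - i)
    return int(total)  # the product is always an integer

print(clp_count([1, 2, 3]))   # consecutive dents leave no freedom: 1 tiling
print(clp_count([1, 3, 6]))   # 15 tilings
\end{verbatim}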
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{8cm}{!}{
\begin{picture}(0,0)%
\includegraphics{semihexmultiple.eps}%
\end{picture}%
\begin{picture}(6994,4016)(1923,-5559)
\put(4502,-1877){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2+a_4+a_6$}%
}}}}
\put(2178,-4122){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1+a_3+a_5+a_7$}%
}}}}}
\put(2296,-5544){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1$}%
}}}}
\put(3136,-5544){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2$}%
}}}}
\put(3916,-5544){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_3$}%
}}}}
\put(4771,-5544){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_4$}%
}}}}
\put(5589,-5544){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_5$}%
}}}}
\put(6219,-5529){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_6$}%
}}}}
\put(7186,-5536){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_7$}%
}}}}
\put(7263,-2603){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1+a_3+a_5+a_7$}%
}}}}}
\end{picture}}
\caption{The dented semihexagon $S(2,2,2,3,1,2,4)$.}
\label{fig:semihex}
\end{figure}
Even though there are a number of elegant enumerations for hexagons with holes and for hexagons with dents, to the best of the author's knowledge, there are \emph{not} any known results for hexagons in which \emph{both} holes and dents are apparent. In this paper, we consider a number of such `rare' families of regions. In particular, our region can be considered as a \emph{multi-parameter} generalization of Ciucu's $F$-cored hexagons in \cite{Ciu1}: besides a fern removed in the center of the hexagon, we remove two more ferns at the same level from the two sides of the hexagon. See Figure \ref{fig:threefern} for two examples. The precise definition of our regions will be given in the next section. We would like to emphasize that the side-lengths and the number of triangles in each of the three ferns are all \emph{arbitrary}. Our main theorems (Theorems \ref{main1}--\ref{mainQ4}) show that the numbers of tilings of our new regions are always given by a certain product of the tiling number of a cored hexagon, the tiling numbers of two dented semihexagons determined by the ferns, and a simple multiplicative factor. When the two side ferns vanish, our work implies exactly Ciucu's main result in \cite[Theorem 2.1]{Ciu1}. As a consequence, our results generalize almost all known enumerations of hexagons with central holes listed above (the only exception is the enumeration of the $S$-cored hexagons in \cite{CK}). Notably, our theorems also imply a new `dual' of MacMahon's classical theorem on plane partitions, which generalizes the dual of Ciucu in \cite[Theorem 1.1]{Ciu1}.
\begin{figure}\centering
\includegraphics[width=14cm]{threefern.eps}
\caption{Two hexagons with three ferns removed. The black triangles indicate the ones removed.} \label{fig:threefern}
\end{figure}
The rest of this paper is organized as follows. In Section \ref{Statement1}, we give exact tiling enumerations for eight families of regions corresponding to the `central case'. The new dual of MacMahon's theorem is also presented in the same section. The proofs of these theorems are provided in Section \ref{sec:proof1}. We wrap up the paper by several remarks and open questions in Section \ref{sec:remark}.
\section{Precise statements of the main results}\label{Statement1}
\subsection{Cored hexagons and Ciucu--Eisenk\"{o}lbl--Krattenthaler--Zare's Theorems}
Continuing the line of work about hexagons with a unit triangle removed in the center in \cite{Ciu2}, \cite{HG} and \cite{OK}, Ciucu, Eisenk\"{o}lbl, Krattenthaler and Zare enumerated the \emph{cored hexagon} (or \emph{punctured hexagon}) $C_{x,y,z}(m)$, which is obtained by removing the central equilateral triangle of side-length $m$ from the hexagon $H$ of side-lengths $x, y+m, z,x+m,y,z+m$ (see Figure \ref{fig:core} for examples). We define this region in detail in the next paragraph.
\begin{figure}\centering
\includegraphics[width=12cm]{corehexagon.eps}
\caption{(a) The cored hexagon $C_{2,6,4}(2)$. (b) The cored hexagon $C_{3,6,4}(2)$. (c) The cored hexagon $C_{2,5,4}(2)$. (d) The cored hexagon $C_{2,6,3}(2)$. }\label{fig:core}
\end{figure}
We start with an \emph{auxiliary hexagon} $H_0$ with side-lengths $x,y,z,x,y,z$ (indicated by the hexagons with the dashed boundary in Figure \ref{fig:core}). Next, we push the north, the northeast, and the southeast sides of the hexagon $m$ units outward, and keep other sides staying put. This way we get a larger hexagon $H$, called the \emph{base hexagon}, of side-lengths $x, y+m, z,x+m,y,z+m$. Finally, we remove an up-pointing $m$-triangle such that its left vertex is located at the closest lattice point to the center of the auxiliary hexagon $H_0$. Precisely, there are four cases to distinguish based on the parities of $x,y,z$. When $x$, $y$ and $z$ have the same parity, the center of the hexagon is a lattice vertex and our removed triangle has the left vertex at the center. One readily sees that, in this case, the triangular hole stays evenly between each pair of parallel sides of the hexagon $H$. In particular, the distance between the north side of the hexagon and the top of the triangular hole and the distance between the base of the triangular hole and the south side of the hexagon are both $\frac{y+z}{2}$; the distances corresponding to the northeast and southwest sides of the hexagon are both $\frac{x+z}{2}$; the distances corresponding to the northwest and southeast sides of the hexagon are both $\frac{x+y}{2}$ (see Figure \ref{fig:core}(a); the hexagon wit the dashed boundary indicates the auxiliary hexagon). Next, we consider the case when $x$ has parity different from that of $y$ and $z$. In this case, the center of the auxiliary hexagon $H_0$ is \emph{not} a lattice vertex anymore. It is the middle point of a horizontal unit lattice interval. We now place the triangular hole such that its leftmost is $1/2$ unit to the left of the center of the auxiliary hexagon (illustrated in Figure \ref{fig:core}(b); the larger shaded dot indicates the center of the auxiliary hexagon). Similarly, if $y$ has the opposite parity to $x$ and $z$, we place the triangular hole $1/2$ unit to the northwest of the center of the auxiliary hexagon $H_0$ (shown in Figure \ref{fig:core}(c)). Finally, if $z$ has parity different from that of $x$ and $y$, the hole is located $1/2$ unit to the southwest of the center of $H_0$ (see Figure \ref{fig:core}(d)).
Next, we extend the definition of the hyperfactorial function to the case of half-integers:
\begin{equation}\label{hyper2}
\operatorname{H}(n)=\begin{cases}
\prod_{k=0}^{n-1}\Gamma(k+1) & \text{for $n$ a positive integer;}\\
\prod_{k=0}^{n-\frac{1}{2}}\Gamma(k+\frac{1}{2}) & \text{for $n$ a positive half-integer.}
\end{cases}
\end{equation}
where $\Gamma$ denotes the classical gamma function. Recall that $\Gamma(n+1)=n!$ and $\Gamma(n+\frac{1}{2})=\frac{(2n)!}{4^nn!}\sqrt{\pi}$, for a nonnegative integer $n$.
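The extended hyperfactorial of Eq.~(\ref{hyper2}) can be evaluated numerically in a few lines; the Python sketch below is purely illustrative (floating-point evaluation via the gamma function, with a function name of our own choosing), which suffices for checking small cases of product formulas such as Eq.~(\ref{coreeqx}).
\begin{verbatim}
# Illustrative evaluation of the extended hyperfactorial H(n) of Eq. (hyper2),
# for n a positive integer or half-integer, via the gamma function.
from math import gamma

def hyperfactorial(n):
    """H(n) = prod_{k=0}^{n-1} Gamma(k+1) for integer n,
       H(n) = prod_{k=0}^{n-1/2} Gamma(k+1/2) for half-integer n."""
    if 2 * n != int(2 * n) or n <= 0:
        raise ValueError("n must be a positive integer or half-integer")
    if n == int(n):
        terms = [gamma(k + 1) for k in range(int(n))]
    else:
        terms = [gamma(k + 0.5) for k in range(int(n + 0.5))]
    result = 1.0
    for t in terms:
        result *= t
    return result

print(hyperfactorial(4))     # 0!*1!*2!*3! = 12
print(hyperfactorial(2.5))   # Gamma(1/2)*Gamma(3/2)*Gamma(5/2) ~ 2.088
\end{verbatim}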
We can combine Theorems 1 and 2 in \cite{CEKZ} as follows:
\begin{thm}[Ciucu-Eisenk\"{o}lbl-Krattenthaler-Zare \cite{CEKZ}]\label{corethm}
Assume that $m,x,y,z$ are nonnegative integers, such that $y$ and $z$ have the same parity. Then the number of lozenge tilings of the cored hexagon $C_{x,y,z}(m)$ is given by
\begin{align}\label{coreeqx}
\operatorname{M}(C_{x,y,z}(m))=&\frac{\operatorname{H}(m+x)\operatorname{H}(m+y)\operatorname{H}(m+z)\operatorname{H}(m+x+y+z)}{\operatorname{H}(m+x+y)\operatorname{H}(m+y+z)\operatorname{H}(m+z+x)}\notag\\
&\times \frac{\operatorname{H}(m+\left\lfloor\frac{x+y+z}{2}\right\rfloor )\operatorname{H}(m+\left\lceil\frac{x+y+z}{2}\right\rceil)}{\operatorname{H}(m+\left\lceil\frac{x+y}{2}\right\rceil)\operatorname{H}(m+\frac{y+z}{2})\operatorname{H}(m+\left\lfloor\frac{z+x}{2}\right\rfloor)}\notag\\
&\times \frac{\operatorname{H}(\frac{m}{2})^2\operatorname{H}(\left\lfloor\frac{x}{2}\right\rfloor)\operatorname{H}(\left\lceil\frac{x}{2}\right\rceil)\operatorname{H}(\left\lfloor\frac{y}{2}\right\rfloor)\operatorname{H}(\left\lceil\frac{y}{2}\right\rceil)\operatorname{H}(\left\lfloor\frac{z}{2}\right\rfloor)\operatorname{H}(\left\lceil\frac{z}{2}\right\rceil)}{\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{x}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{x}{2}\right\rceil)\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{y}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{y}{2}\right\rceil)\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{z}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{z}{2}\right\rceil)}\notag\\
&\times \frac{\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{x+y}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{x+y}{2}\right\rceil)\operatorname{H}(\frac{m}{2}+\frac{y+z}{2})^2\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{z+x}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{z+x}{2}\right\rceil)}{\operatorname{H}(\frac{m}{2}+\left\lfloor\frac{x+y+z}{2}\right\rfloor)\operatorname{H}(\frac{m}{2}+\left\lceil\frac{x+y+z}{2}\right\rceil)\operatorname{H}(\left\lfloor\frac{x+y}{2}\right\rfloor)\operatorname{H}(\frac{y+z}{2})\operatorname{H}(\left\lceil\frac{z+x}{2}\right\rceil)}.
\end{align}
Here we use the notation $\operatorname{M}(R)$ for the number of lozenge tilings of the region $R$.
\end{thm}
By the symmetry, if $x$ and $y$ have the same parity, then the number of tilings of $C_{x,y,z}(m)$ is exactly the expression on the right-hand side of (\ref{coreeqx}) above with $x$ replaced by $z$, $y$ replaced by $x$, and $z$ replaced by $y$. Similarly, if $x$ and $z$ have the same parity, then the number of tilings of $C_{x,y,z}(m)$ is exactly the expression on the right-hand side of (\ref{coreeqx}) with $x$ replaced by $y$, $y$ replaced by $z$, and $z$ replaced by $x$.
Inspired by the cored hexagons above, we will define our eight families of hexagons with three collinear ferns removed in the next subsection. Depending on the height of the common horizontal lattice line $\ell$ along which our three ferns are lined up, there are two cases to distinguish: $\ell$ leaves the west and east vertices of the hexagon on opposite sides (see Figure \ref{fig:threefern}(a) for an example) or $\ell$ leaves the two vertices on the same side (see Figure \ref{fig:threefern}(b)). By the symmetry, we can assume from now on that the east vertex of the hexagon is always below the line $\ell$.
\subsection{The case $\ell$ separates the west and east vertices of the hexagon}
We now define our four families of hexagons with three collinear ferns removed, in the case when the horizontal lattice line $\ell$ separates the east and west vertices of the hexagon, as follows. Our definitions are illustrated by Figures \ref{fig:construct1}--\ref{fig:construct4}. However, we ignore the inner hexagons and the arrows in these figures at the moment (these details will be used later in the alternative definitions of our regions in Subsection 2.4). We call them \emph{$R$-families}. Assume that $x,y,z$ are nonnegative integers and that $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, $\textbf{b}:=(b_1,b_2,\dotsc,b_n)$, $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ are three (may be empty) sequences of nonnegative integers. The three sequences $\textbf{a}, \textbf{b}, \textbf{c}$ determine the side-lengths of triangles in the left, the right, and the central ferns, respectively. Set
\begin{align}
e_a:=\sum_{i\ even}a_i, &\quad o_a:=\sum_{i \ odd} a_i,\notag\\
e_b:=\sum_{j\ even}b_j, &\quad o_b:=\sum_{j \ odd} b_j,\notag\\
e_c:=\sum_{t\ even}c_t, &\quad o_c:=\sum_{t \ odd} c_t,
\end{align}
and $a:=a_1+a_2+\cdots+a_m$, $b:=b_1+b_2+\cdots+b_n$, $c:=c_1+c_2+\cdots+c_k$.
\medskip
Assuming that $x$ and $z$ have the same parity, we define our first $R$-family of regions $R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$ in the next paragraph.
We start with the base hexagon $H$ of side-lengths $x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+e_c+ |a-b|$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+e_c$, $2y+z+o_a+e_b+e_c+ |a-b|,$ $z+e_a+o_b+e_c$, in which $x$ and $z$ have the same parity (see the outermost hexagon in Figure \ref{fig:construct1}). Suppose first that the total length $a$ of the left fern is not greater than the total length $b$ of the right fern. Next, we remove at the level $y$ above the east vertex of the hexagon $H$ three ferns as follows. The left fern consists of $m$ triangles of alternating orientations with side-lengths $a_1,a_2,\dotsc,a_m$ as they appear from left to right, and starts with a down-pointing triangle. The right fern consists of $n$ alternating-oriented triangles of side-lengths $b_1, b_2,\dotsc,b_n$ as they appear from \emph{right to left}, and starts with an \emph{up-pointing} triangle. It is easy to see that the distance between the rightmost of the left fern and the leftmost of the right ferns is $c+x+z$. The middle fern of length $c$ consists of alternating-oriented triangles of side-lengths $c_1,c_2,\dots,c_k$ and starts with an up-pointing triangle. We next put this fern equally between the left and the right ferns as indicated by three strings of black triangles in Figure \ref{fig:construct1} (i.e. the distances between two consecutive ferns are both $\frac{x+z}{2}$, which is an integer as $x+z$ is even in this case). If $a>b$, we define the region similarly, the only difference is that we now remove the three ferns at the level $y+(a-b)$ above the east vertex of the hexagon (as opposed to removing at the level $y$ as in the previous case).
Next, we define the second $R$-family consisting of regions $R^{\leftarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$, in the case when $x$ has different parity from that of $z$. We follow the same process as in the case of the $R^{\odot}$-type regions above, the only difference is that, since $x+z$ is now odd, we place the middle fern $1$ unit closer to the left fern than the right one, that is the distance between the left and the middle ferns is $\left\lfloor\frac{x+z}{2}\right\rfloor$ and the distance between the middle and the right ferns is $\left\lceil\frac{x+z}{2}\right\rceil$ (see Figure \ref{fig:construct2} for an example in the case $a>b$).
\medskip
The third and the fourth $R$-families are defined a little differently, as we allow $y$ to take the value $-1$ in certain situations.
\medskip
Our third $R$-family is for the case when $x$ and $z$ have the same parity and is defined as follows. We now start with a slightly different base hexagon, that has side-lengths $x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+e_c+ |a-b|+1$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+e_c$, $2y+z+o_a+e_b+e_c+ |a-b|+1,$ $z+e_a+o_b+e_c$ (indicated by the outermost hexagon in Figure \ref{fig:construct3}). Next, we repeat the process in the definition of the first $R$-family, the only difference is that we are now removing the three ferns at the level $y+1$ above the east vertex of the hexagon if $a< b$, and at the level $y+(a-b)+1$ if $a\geq b$. Moreover, in the case $a<b$, the parameter $y$ may take the value $-1$. Denote by $R^{\nwarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$ the resulting region.
Our fourth $R$-family consists of the regions $R^{\swarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$ when $x$ has different parity from that of $z$. In this case, our base hexagon $H$ is the same as that in the third $R$-family; however, we now remove the ferns in the same way as in the second family. In particular, we remove the three ferns at the level $y$ or $y+a-b$, depending on whether $a\leq b$ or $a>b$, such that the distance between the left and middle ferns is $\left\lfloor\frac{x+z}{2}\right\rfloor$ (illustrated in Figure \ref{fig:construct4}). Similar to the third $R$-family, we allow $y$ to take the value $-1$ when $a>b$.
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{construct3fernb.eps}%
\end{picture}%
\begin{picture}(10318,10458)(1347,-10639)
\put(6673,-9981){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y$}%
}}}}
\put(9946,-8091){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y$}%
}}}}
\put(5638,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_1$}%
}}}}
\put(6252,-5700){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_2$}%
}}}}
\put(6763,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_3$}%
}}}}
\put(7274,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_4$}%
}}}}
\put(10548,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_1$}%
}}}}
\put(9832,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_2$}%
}}}}
\put(9116,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_3$}%
}}}}
\put(2589,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_1$}%
}}}}
\put(2978,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_2$}%
}}}}
\put(3388,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_3$}%
}}}}
\put(3797,-6018){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_4$}%
}}}}
\put(4410,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\frac{x+z}{2}$}%
}}}}
\put(8093,-5663){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\frac{x+z}{2}$}%
}}}}
\put(7134,-3624){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(5320,-3885){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$e_a+o_b+o_c$}%
}}}}}
\put(3759,-4246){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(3663,-6910){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(5378,-9437){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$o_a+e_b+e_c$}%
}}}}}
\put(7904,-7580){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(6445,-1169){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y+b-a$}%
}}}}
\put(3809,-2822){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y+b-a$}%
}}}}}
\put(5740,-467){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(6149,-10624){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(10241,-9207){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(9729,-2534){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$y+z+e_a+o_b+o_c$}%
}}}}}
\put(2496,-3419){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(2350,-6799){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(4297,-10145){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y$}%
}}}}
\put(11481,-5769){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,1}$y$}%
}}}}
\end{picture}}
\caption{Construction of the hexagon with 3 ferns removed $R^{\odot}_{2,1,4}(1,1,1,1;\ 2,2,1; \ 2,1,1,2)$.}\label{fig:construct1}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{construct3fernc.eps}%
\end{picture}%
\begin{picture}(10369,10132)(2543,-10209)
\put(6706,-2889){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$e_a+o_b+o_c$}%
}}}}}
\put(9341,-3657){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(9211,-6217){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(6817,-8138){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$o_a+e_b+e_c$}%
}}}}}
\put(4946,-5718){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(5320,-2702){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(3691,-4524){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_1$}%
}}}}
\put(4301,-4902){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_2$}%
}}}}
\put(5133,-4520){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_3$}%
}}}}
\put(7262,-4922){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_1$}%
}}}}
\put(7876,-4568){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_2$}%
}}}}
\put(8223,-4797){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_3$}%
}}}}
\put(11454,-4891){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_1$}%
}}}}
\put(11149,-4568){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_2$}%
}}}}
\put(10740,-4922){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_3$}%
}}}}
\put(10436,-4486){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_4$}%
}}}}
\put(6037,-4486){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\lfloor\frac{x+z}{2}\rfloor$}%
}}}}
\put(9106,-4486){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\lceil\frac{x+z}{2}\rceil$}%
}}}}
\put(6369,-369){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(10511,-1430){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(11326,-8874){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(7276,-10194){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(3446,-6112){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(3379,-2923){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Construction of the hexagon with 3 ferns removed $R^{\leftarrow}_{3,1,4}(2,2,2;\ 1,1,1,1; \ 2,1,1)$.}\label{fig:construct2}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{construct3fernd.eps}%
\end{picture}%
\begin{picture}(9784,9637)(2880,-10235)
\put(6616,-890){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(3786,-3503){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(4059,-6811){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c+1$}%
}}}}}
\put(7237,-10220){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(11282,-9200){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(10507,-2651){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c+1$}%
}}}}}
\put(8604,-4599){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(6445,-4358){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$e_a+o_b+o_c$}%
}}}}}
\put(8926,-7387){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(6817,-8846){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$o_a+e_b+e_c$}%
}}}}}
\put(5279,-6720){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(5218,-4946){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(5971,-5686){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\frac{x+z}{2}$}%
}}}}
\put(8746,-5611){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$\frac{x+z}{2}$}%
}}}}
\put(4297,-5691){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_1$}%
}}}}
\put(4576,-6046){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_2$}%
}}}}
\put(5011,-5686){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_3$}%
}}}}
\put(5416,-6046){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_4$}%
}}}}
\put(6886,-6031){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_1$}%
}}}}
\put(7696,-5701){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_2$}%
}}}}
\put(8184,-6011){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_3$}%
}}}}
\put(11431,-6091){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_1$}%
}}}}
\put(10546,-5641){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_2$}%
}}}}
\put(9961,-6031){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_3$}%
}}}}
\put(9571,-5716){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_4$}%
}}}}
\end{picture}%
}
\caption{Construction of the region $R^{\nwarrow}_{2,1,2}(1,1,1,1;\ 2,2,1;\ 2,2,1,1)$.}\label{fig:construct3}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{construct3ferne.eps}%
\end{picture}%
\begin{picture}(10857,11232)(15557,-11825)
\put(22197,-7899){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(20259,-9961){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$o_a+e_b+e_c$}%
}}}}}
\put(25021,-6495){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_1$}%
}}}}
\put(24509,-6081){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_2$}%
}}}}
\put(24100,-6672){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_3$}%
}}}}
\put(23486,-6200){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b_4$}%
}}}}
\put(20228,-6554){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_1$}%
}}}}
\put(21047,-6200){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_2$}%
}}}}
\put(21763,-6613){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$c_3$}%
}}}}
\put(16541,-6200){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_1$}%
}}}}
\put(17359,-6554){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_2$}%
}}}}
\put(18075,-6140){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_3$}%
}}}}
\put(18587,-6554){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a_4$}%
}}}}
\put(21699,-4224){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b+c$}%
}}}}}
\put(20017,-3791){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$e_a+o_b+o_c$}%
}}}}}
\put(17904,-4681){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(18324,-7784){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$a$}%
}}}}}
\put(16580,-7819){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c+1$}%
}}}}}
\put(20284,-11810){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(24679,-10770){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(23942,-2854){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c+1$}%
}}}}}
\put(19164,-885){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(16289,-3925){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(19396,-6031){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\lfloor\frac{x+z}{2}\rfloor$}%
}}}}
\put(22351,-6084){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\lceil\frac{x+z}{2}\rceil$}%
}}}}
\end{picture}%
}
\caption{The region $R^{\swarrow}_{2,1,3}(2,2,1,2; \ 1,1,1,2; 2,2,1)$.}\label{fig:construct4}
\end{figure}
The very special case of our regions when $\textbf{a}=\textbf{b}=\emptyset$ gives exactly the $F$-cored hexagons in \cite{Ciu1}, and if we specialize further with $\textbf{c}=(m)$, we get the cored hexagons in \cite{CEKZ}. This is visually apparent when the $y$-parameter of the $F$-cored hexagon (or cored hexagon) is greater than or equal to the $z$-parameter. In the other case, when the $y$-parameter is less than the $z$-parameter, we get back the $F$-cored hexagons $F^{\odot}_{x,z,y+2z}(\textbf{c}), F^{\leftarrow}_{x,z,y+2z}(\textbf{c})$, $F^{\nwarrow}_{x,z,2y+z+1}(\textbf{c})$ and $F^{\swarrow}_{x,z,2y+1}(\textbf{c})$ (as denoted in \cite{Ciu1}) by reflecting the regions $R^{\odot}_{x,y,z}(\emptyset;{}^{0}\textbf{c};\emptyset), R^{\leftarrow}_{x,y,z}(\emptyset;{}^{0}\textbf{c};\emptyset)$, $R^{\swarrow}_{x,y,z}(\emptyset;{}^{0}\textbf{c};\emptyset)$ and $R^{\nwarrow}_{x,y,z}(\emptyset;{}^{0}\textbf{c};\emptyset)$ over a horizontal line, respectively. Here we denote by ${}^{0}\textbf{s}$ the sequence obtained by prepending a $0$ term to the sequence $\textbf{s}$, i.e.\ ${}^{0}\textbf{s}=(0,s_1,s_2,\dots,s_n)$ if $\textbf{s}=(s_1,s_2,\dots,s_n)$.
\begin{thm}\label{main1}
Assume that $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, $\textbf{b}:=(b_1,b_2,\dotsc,b_n)$, $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ are three sequences of nonnegative integers and that $x,y,z$ are three nonnegative integers, such that $x$ and $z$ have the same parity.
(a) If $a\geq b$, then
\begin{align}\label{maineq1a}
\operatorname{M}&(R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2a,z}(c))\notag\\
&\times s\left(y,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y+a-b\right)\notag\\
&\times \frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(a+y+\frac{x+z}{2})}{\operatorname{H}(a+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(a+y+z)\operatorname{H}(a+c+y+z)}{\operatorname{H}(e_a+o_b+o_c+y+z)\operatorname{H}(a+o_a-o_b+e_c+y+z)}\notag\\
&\times \frac{\operatorname{H}(e_a+o_b+o_c+y)\operatorname{H}(a+o_a-o_b+e_c+y)}{\operatorname{H}(a+y)^2}
\end{align}
if $m,n,k$ are even. The other cases, when one or more numbers among $m,n,k$ are odd, can be reduced to the even case by including an empty triangle at the end of the corresponding ferns.
(b) If $a< b$, then
\begin{align}\label{maineq1b}
\operatorname{M}&(R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2b,z}(c))\notag\\
&\times s\left(y+b-a,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y\right)\notag\\
&\times \frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(b+y+\frac{x+z}{2})}{\operatorname{H}(b+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(b+y+z)\operatorname{H}(b+c+y+z)}{\operatorname{H}(b+o_b-o_a+o_c+y+z)\operatorname{H}(o_a+e_b+e_c+y+z)}\notag\\
&\times \frac{\operatorname{H}(b+o_b-o_a+o_c+y)\operatorname{H}(o_a+e_b+e_c+y)}{\operatorname{H}(b+y)^2},
\end{align}
for even $m,n,k$. The other cases follow the even case in the same way as in part (a).
\end{thm}
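To spell out the reduction to the even case used in parts (a) and (b): appending a triangle of side-length $0$ at the end of a fern removes nothing and changes none of the associated statistics. For instance, if $m$ is odd we may replace $\textbf{a}=(a_1,\dotsc,a_m)$ by $(a_1,\dotsc,a_m,0)$, so that
\[
R^{\odot}_{x,y,z}(a_1,\dotsc,a_m;\textbf{c};\textbf{b})=R^{\odot}_{x,y,z}(a_1,\dotsc,a_m,0;\textbf{c};\textbf{b}),
\]
while $a$, $o_a$ and $e_a$ are unchanged; formula (\ref{maineq1a}) (or (\ref{maineq1b})) can then be applied to the even-length sequences.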
The formulas in Theorem \ref{main1} can be combined into a \emph{single} formula as follows:
\begin{align}\label{maineq1c}
\operatorname{M}&(R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b),z}(c))\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y+a-\min(a,b)\right)\notag\\
&\times \frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(\max(a,b)+y+\frac{x+z}{2})}{\operatorname{H}(\max(a,b)+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y+z)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y+z)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y)}{\operatorname{H}(\max(a,b)+y)^2}.
\end{align}
For the sake of brevity, we use similar combined formulas in our next main theorems.
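As a quick consistency check (using $a=o_a+e_a$ and $b=o_b+e_b$), formula (\ref{maineq1c}) does specialize to the two formulas of Theorem \ref{main1}. If $a\geq b$, then $\max(a,b)=a$ and $\min(a,b)=b$, so the arguments $y+b-\min(a,b)$ and $y+a-\min(a,b)$ become $y$ and $y+a-b$, and
\[
\max(a,b)-o_a+o_b+o_c=e_a+o_b+o_c,\qquad \max(a,b)+o_a-o_b+e_c=a+o_a-o_b+e_c,
\]
matching the hyperfactorial arguments in (\ref{maineq1a}). If $a<b$, the same two expressions become $b+o_b-o_a+o_c$ and $o_a+e_b+e_c$, matching (\ref{maineq1b}).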
\begin{thm}\label{main2}
Assume that $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, $\textbf{b}:=(b_1,b_2,\dotsc,b_n)$, $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ are three sequences of nonnegative integers and that $x,y,z$ are three nonnegative integers, such that $x$ has parity opposite to $z$. Then
\begin{align}\label{maineq2a}
\operatorname{M}&(R^{\leftarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b),z}(c))\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc, a_{m},\left\lfloor\frac{x+z}{2}\right\rfloor,c_1,\dotsc,c_{k}+\left\lceil\frac{x+z}{2}\right\rceil+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\left\lfloor\frac{x+z}{2}\right\rfloor+c_1,\dotsc,c_{k},\left\lceil\frac{x+z}{2}\right\rceil,b_n,\dotsc,b_1,y+a-\min(a,b)\right)\notag\\
&\times\frac{\operatorname{H}(c+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(c)\operatorname{H}(\left\lfloor\frac{x+z}{2}\right\rfloor)}\frac{\operatorname{H}(\max(a,b)+y+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(\max(a,b)+c+y+\left\lfloor\frac{x+z}{2}\right\rfloor)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y+z)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y+z)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y)}{\operatorname{H}(\max(a,b)+y)^2}
\end{align}
if $m,n,k$ are even. The other cases, when one or more numbers among $m,n,k$ are odd, can be reduced to the even case as in Theorem \ref{main1}.
\end{thm}
\begin{thm}\label{main3}
Assume that $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, $\textbf{b}:=(b_1,b_2,\dotsc,b_n)$, $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ are three sequences of nonnegative integers and that $x,z$ are two nonnegative integers, such that $x$ and $z$ have the same parity. Assume in addition that $y$ is an integer, such that $y\geq 0$ when $b\leq a$ and $y\geq -1$ when $b>a$. Then
\begin{align}\label{maineq3a}
\operatorname{M}&(R^{\nwarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b)+1,z}(c))\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y+a+1-\min(a,b)\right)\notag\\
&\times\frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(\max(a,b)+y+\frac{x+z}{2})}{\operatorname{H}(\max(a,b)+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z+1)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y+z)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y+z+1)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y+1)}{\operatorname{H}(\max(a,b)+y)\operatorname{H}(\max(a,b)+y+1)},
\end{align}
for even $m,n,k$. The other cases, when one or more numbers among $m,n,k$ are odd, can be reduced to the even case as in Theorem \ref{main1}.
\end{thm}
\begin{thm}\label{main4}
Assume that $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, $\textbf{b}:=(b_1,b_2,\dotsc,b_n)$, $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ are three sequences of nonnegative integers and that $x,z$ are two nonnegative integers, such that $x$ and $z$ have different parities. Assume in addition that $y$ is an integer, such that $y\geq 0$ when $a\leq b$ and $y\geq -1$ when $a>b$, and that $m,n,k$ are all even (the cases when at least one of $m,n,k$ is odd follow by appending a $0$-triangle at the end of the corresponding ferns if needed). Then
\begin{align}\label{maineq4a}
\operatorname{M}&(R^{\swarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b)+1,z}(c))\notag\\
&\times s\left(y+1+b-\min(a,b),a_1,\dotsc, a_{m},\left\lfloor\frac{x+z}{2}\right\rfloor,c_1,\dotsc,c_{k}+\left\lceil\frac{x+z}{2}\right\rceil+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(a_1,\dotsc, a_{m-1},a_{m}+\left\lfloor\frac{x+z}{2}\right\rfloor+c_1,\dotsc,c_{k},\left\lceil\frac{x+z}{2}\right\rceil,b_n,\dotsc,b_1,y+a-\min(a,b)\right)\notag\\
&\times\frac{\operatorname{H}(c+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(c)\operatorname{H}(\left\lfloor\frac{x+z}{2}\right\rfloor)}\frac{\operatorname{H}(\max(a,b)+y+\left\lceil\frac{x+z}{2}\right\rceil)}{\operatorname{H}(\max(a,b)+c+y+\left\lceil\frac{x+z}{2}\right\rceil)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z+1)}{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y+z+1)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y+z)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)-o_a+o_b+o_c+y+1)\operatorname{H}(\max(a,b)+o_a-o_b+e_c+y)}{\operatorname{H}(\max(a,b)+y)\operatorname{H}(\max(a,b)+y+1)}.
\end{align}
\end{thm}
\subsection{The case when the west and east vertices of the hexagon are both below $\ell$}
Besides the above four `$R$-families', we have four more `$Q$-families' of regions, in which the line $\ell$ containing the three ferns stays above the west and the east vertices of the hexagon (as opposed to separating these two vertices, as in the case of the $R^{\odot}$-, $R^{\leftarrow}$-, $R^{\nwarrow}$-, $R^{\swarrow}$-type regions).
The definitions of our $Q$-families are illustrated in Figures \ref{fig:constructQ1}--\ref{fig:constructQ4}. For the purpose of our definitions, we ignore all the inner hexagons and the arrows in these figures for the moment. These details will be used later, in the alternative definitions of the regions in Subsection 2.4.
Assume that $x,y,z$ are nonnegative integers and that $\textbf{a}=(a_1,\dotsc,a_m)$, $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are three sequences of nonnegative integers as usual.
Our first $Q$-family is obtained from the base hexagon $H$ of side-lengths $x+e_a+e_b+e_c, y+z+o_a+o_b+o_c+\max(a-b,0), y+z+e_a+e_b+e_c+\max(b-a,0),x+o_a+o_b+o_c,y+z+e_a+e_b+e_c+\max(a-b,0),y+z+o_a+o_b+o_c+\max(b-a,0)$, in which $x$ and $z$ have the same parity (see the outermost hexagon in Figure \ref{fig:constructQ1}). We remove three ferns, with sequences of side-lengths $\textbf{a}, \textbf{c}, \textbf{b}$, at the level $y+\max(a-b,0)$ above the east vertex of the hexagon $H$, as in the case of the $R^{\odot}$-type regions. The only difference here is that \emph{all} three ferns now have their first triangle up-pointing (note that the right fern still runs in the opposite direction to the left and the middle ferns, i.e. from right to left). We still arrange the three ferns so that the left and the right ferns touch the northwest and the northeast sides of the hexagon, respectively, and the middle fern is located evenly between the latter two. Denote this region by $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$.
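To illustrate, for the region $Q^{\odot}_{2,2,4}(2,2;\ 2,2,1;\ 2,2,2)$ of Figure \ref{fig:constructQ1} we have (with the same convention for odd- and even-indexed sums as before)
\[
a=4,\quad o_a=e_a=2,\qquad b=6,\quad o_b=4,\ e_b=2,\qquad c=5,\quad o_c=3,\ e_c=2,
\]
so the three ferns are removed at the level $y+\max(a-b,0)=2$ above the east vertex, and both gaps between consecutive ferns equal $\frac{x+z}{2}=3$, in agreement with the labels in the figure.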
The second $Q$-family, consisting of the regions $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, is similar to the first one; the only differences are that $x$ and $z$ now have different parities and that the middle fern is $1$ unit closer to the left fern (see Figure \ref{fig:constructQ2}).
\begin{figure}
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{13cm}{!}{
\begin{picture}(0,0)%
\includegraphics{constructQ1.eps}%
\end{picture}%
\begin{picture}(10619,11535)(2904,-18631)
\put(12170,-12708){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_1$}%
}}}}
\put(11764,-16464){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+b-a+e_a+e_b+e_c$}%
}}}}}
\put(7201,-18616){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(3376,-14929){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(4424,-10972){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(4382,-12708){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1$}%
}}}}
\put(5206,-12316){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2$}%
}}}}
\put(7246,-12646){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_1$}%
}}}}
\put(8181,-12294){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_2$}%
}}}}
\put(8693,-12589){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_3$}%
}}}}
\put(11352,-12294){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_2$}%
}}}}
\put(10534,-12708){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_3$}%
}}}}
\put(9380,-13884){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(8747,-10270){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(7468,-7370){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7366,-10145){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$o_a+o_b+o_c$}%
}}}}}
\put(6956,-15460){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$e_a+e_b+e_c$}%
}}}}}
\put(5836,-10854){\rotatebox{1.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}}
\put(5529,-13688){\rotatebox{1.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}}
\put(11355,-9332){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(6136,-12226){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\frac{x+z}{2}$}%
}}}}
\put(9286,-12316){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\frac{x+z}{2}$}%
}}}}
\end{picture}%
}
\caption{How to construct the region $Q^{\odot}_{2,2,4}(2,2;\ 2,2,1; \ 2,2,2)$.}\label{fig:constructQ1}
\end{figure}
\begin{figure}
\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{13cm}{!}{
\begin{picture}(0,0)%
\includegraphics{constructQ2.eps}%
\end{picture}%
\begin{picture}(10178,10924)(15840,-17691)
\put(20746,-17676){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(24489,-16482){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(16143,-14021){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+a-b+e_a+e_b+e_c$}%
}}}}}
\put(19821,-7056){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(23444,-8737){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(17114,-10180){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(22391,-11961){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\lceil\frac{x+z}{2}\rceil$}%
}}}}
\put(19081,-11931){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\lfloor\frac{x+z}{2}\rfloor$}%
}}}}
\put(19936,-9766){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$o_a+o_b+o_c$}%
}}}}}
\put(19916,-14701){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$e_a+e_b+e_c$}%
}}}}}
\put(16921,-12321){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1$}%
}}}}
\put(17691,-11976){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2$}%
}}}}
\put(18351,-12271){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_3$}%
}}}}
\put(20186,-12336){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_1$}%
}}}}
\put(20766,-11966){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_2$}%
}}}}
\put(21546,-12316){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_3$}%
}}}}
\put(24246,-12346){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_1$}%
}}}}
\put(23666,-11956){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_2$}%
}}}}
\end{picture}}
\caption{How to construct the region $Q^{\leftarrow}_{3,2,4}(2,2,1;\ 2,1,2;\ 2,1)$.}\label{fig:constructQ2}
\end{figure}
\begin{figure}
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{13cm}{!}{
\begin{picture}(0,0)%
\includegraphics{constructQ3.eps}%
\end{picture}%
\begin{picture}(10391,11528)(3908,-19507)
\put(11951,-13101){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_2$}%
}}}}
\put(12761,-13521){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_1$}%
}}}}
\put(11141,-13531){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_3$}%
}}}}
\put(7850,-16533){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$e_a+e_b+e_c$}%
}}}}}
\put(10167,-14389){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(9322,-11111){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(8184,-10736){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$o_a+o_b+o_c$}%
}}}}}
\put(5628,-11535){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(11839,-9776){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+1+o_a+o_b+o_c$}%
}}}}}
\put(8362,-8271){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(12509,-17822){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+b-a+e_a+e_b+e_c$}%
}}}}}
\put(4195,-15832){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+1+e_a+e_b+e_c$}%
}}}}}
\put(7945,-19492){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(5934,-12541){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}
\put(6160,-14059){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}
\put(6752,-13098){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\frac{x+z}{2}$}%
}}}}
\put(9921,-13114){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\frac{x+z}{2}$}%
}}}}
\put(5291,-13501){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1$}%
}}}}
\put(5811,-13091){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2$}%
}}}}
\put(7971,-13461){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_1$}%
}}}}
\put(8481,-13161){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_2$}%
}}}}
\put(9161,-13461){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_3$}%
}}}}
\end{picture}}
\caption{How to construct the region $Q^{\nwarrow}_{2,2,4}(1,2;\ 2,1,2;\ 2,2,2)$.}\label{fig:constructQ3}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{13cm}{!}{
\begin{picture}(0,0)%
\includegraphics{constructQ4.eps}%
\end{picture}%
\begin{picture}(10630,11528)(3908,-19507)
\put(13169,-13521){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_1$}%
}}}}
\put(12359,-13101){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_2$}%
}}}}
\put(11549,-13531){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b_3$}%
}}}}
\put(7850,-16533){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$e_a+e_b+e_c$}%
}}}}}
\put(10127,-14513){\rotatebox{330.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(9322,-11111){\rotatebox{30.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$b+c$}%
}}}}}
\put(8184,-10736){\rotatebox{90.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$o_a+o_b+o_c$}%
}}}}}
\put(5628,-11535){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+o_b+o_c$}%
}}}}}
\put(12071,-9808){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$z+1+o_a+o_b+o_c$}%
}}}}}
\put(8362,-8271){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(12714,-18058){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+b-a+e_a+e_b+e_c$}%
}}}}}
\put(4195,-15832){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$y+z+1+e_a+e_b+e_c$}%
}}}}}
\put(7945,-19492){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(6036,-12541){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}
\put(6138,-14161){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a$}%
}}}}
\put(6952,-13098){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\lceil\frac{x+z}{2}\rceil$}%
}}}}
\put(10321,-13114){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$\lfloor\frac{x+z}{2}\rfloor$}%
}}}}
\put(5291,-13501){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_1$}%
}}}}
\put(5811,-13091){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$a_2$}%
}}}}
\put(8371,-13461){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_1$}%
}}}}
\put(8881,-13161){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_2$}%
}}}}
\put(9402,-13452){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$c_3$}%
}}}}
\end{picture}}
\centering
\caption{How to construct the region $Q^{\nearrow}_{3,2,4}(1,2;\ 2,1,2;\ 2,2,2)$.}\label{fig:constructQ4}
\end{figure}
We next define the third and the fourth $Q$-families, in which the parameter $y$ is allowed to take the value $-1$ when $b>a$.
To define the third $Q$-family, we start with a slightly different base hexagon of side-lengths $x+e_a+e_b+e_c, y+z+o_a+o_b+o_c+\max(a-b,0)+1,y+z+e_a+e_b+e_c+\max(b-a,0),x+o_a+o_b+o_c,y+z+e_a+e_b+e_c+\max(a-b,0)+1,y+z+o_a+o_b+o_c+\max(b-a,0)$, in which $x$ and $z$ have the same parity (see the outermost hexagon in Figure \ref{fig:constructQ3}). We now remove our three ferns in the same way as in the first $Q$-family, at the level $y+\max(b-a,0)$ above the east vertex of the hexagon. Denote the newly defined region by $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$. When $x$ and $z$ have different parities, the fourth $Q$-family is obtained similarly, by removing the three ferns from the same base hexagon as in the definition of the $Q^{\nwarrow}$-type regions; however, the middle fern is now placed $1$ unit closer to the \emph{right} fern. Denote by $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ the resulting region (illustrated in Figure \ref{fig:constructQ4}).
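A minimal sanity check on the boundary case $y=-1$ (reading off the side-lengths listed above, with $\max(a-b,0)=0$ and $\max(b-a,0)=b-a$ since $b>a$): the two side-lengths carrying the `$+1$' reduce to
\[
z+o_a+o_b+o_c\geq 0 \qquad\text{and}\qquad z+e_a+e_b+e_c\geq 0,
\]
while the remaining two slanted side-lengths reduce to $z+e_a+e_b+e_c+(b-a)-1$ and $z+o_a+o_b+o_c+(b-a)-1$, which are nonnegative precisely because $b>a$ forces $b-a\geq 1$. This is why $y=-1$ is allowed only in this case.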
\begin{thm}\label{mainQ1}
Assume that $x,y,z$ are nonnegative integers and that $\textbf{a}=(a_1,\dotsc,a_m),$ $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are sequences of nonnegative integers. Assume in addition that $x$ and $z$ have the same parity and that $m,n,k$ are all even\footnote{Similar to Theorems \ref{main1}--\ref{main4}, for the next enumerations in this paper, we can assume that each of our ferns consists of an even number of triangles (as other cases can be reduced to this case by appending a $0$-triangle to the ferns if needed).}. Then
\begin{align}\label{maineqQ1}
\operatorname{M}&(Q^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b),z}(c))\notag\\
&\times s\left(a_1,\dotsc, a_{m}+\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(y+b-\min(a,b), a_1,\dotsc, a_{m-1},a_{m},\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y+a-\min(a,b)\right)\notag\\
&\times \frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(\max(a,b)+y+\frac{x+z}{2})}{\operatorname{H}(\max(a,b)+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(o_a+o_b+o_c+z)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+z)}\notag\\
&\times \frac{\operatorname{H}(o_a+o_b+o_c)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y)}{\operatorname{H}(\max(a,b)+y)^2}.
\end{align}
\end{thm}
\begin{thm}\label{mainQ2}
Assume that $x,y,z$ are nonnegative integers and that $\textbf{a}=(a_1,\dotsc,a_m),$ $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are sequences of nonnegative integers. We also assume that $x$ and $z$ have different parities, and that $m,n,k$ are all even. Then
\begin{align}\label{maineqQ2}
\operatorname{M}&(Q^{\leftarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b),z}(c))\notag\\
&\times s\left(a_1,\dotsc, a_{m}+\left\lfloor\frac{x+z}{2}\right\rfloor,c_1,\dotsc,c_{k}+\left\lceil\frac{x+z}{2}\right\rceil+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc, a_{m},\left\lfloor\frac{x+z}{2}\right\rfloor+c_1,\dotsc,c_{k},\left\lceil\frac{x+z}{2}\right\rceil,b_n,\dotsc,b_1,y+a-\min(a,b)\right)\notag\\
&\times\frac{\operatorname{H}(c+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(c)\operatorname{H}(\left\lfloor\frac{x+z}{2}\right\rfloor)}\frac{\operatorname{H}(\max(a,b)+y+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(\max(a,b)+c+y+\left\lfloor\frac{x+z}{2}\right\rfloor)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(o_a+o_b+o_c+z)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+z)}\notag\\
&\times \frac{\operatorname{H}(o_a+o_b+o_c)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y)}{\operatorname{H}(\max(a,b)+y)^2}.
\end{align}
\end{thm}
\begin{thm}\label{mainQ3}
Assume that $x,z$ are nonnegative integers of the same parity, $y$ is an integer at least $-1$, and $y$ can only take the value $-1$ when $a<b$. Assume in addition that $\textbf{a}=(a_1,\dotsc,a_m),$ $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are sequences of an even number of nonnegative integers. Then
\begin{align}\label{maineqQ3}
\operatorname{M}&(Q^{\nwarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b)+1,z}(c))\notag\\
&\times s\left(a_1,\dotsc, a_{m}+\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc,a_{m},\frac{x+z}{2}+c_1,\dotsc,c_{k},\frac{x+z}{2},b_n,\dotsc,b_1,y+a+1-\min(a,b)\right)\notag\\
&\times\frac{\operatorname{H}(c+\frac{x+z}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+z}{2})}\frac{\operatorname{H}(\max(a,b)+y+\frac{x+z}{2})}{\operatorname{H}(\max(a,b)+c+y+\frac{x+z}{2})}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z+1)\operatorname{H}(\max(a,b)+c+y+z)}{\operatorname{H}(o_a+o_b+o_c+z)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+z+1)}\notag\\
&\times \frac{\operatorname{H}(o_a+o_b+o_c)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+1)}{\operatorname{H}(\max(a,b)+y)\operatorname{H}(\max(a,b)+y+1)}.
\end{align}
\end{thm}
\begin{thm}\label{mainQ4}
Assume that $x,z$ are nonnegative integers of different parities, $y$ is an integer at least $-1$, and $y$ can only take the value $-1$ when $a<b$. Assume in addition that $\textbf{a}=(a_1,\dotsc,a_m),$ $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are sequences of an even number of nonnegative integers. Then
\begin{align}\label{maineqQ4}
\operatorname{M}&(Q^{\nearrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,2y+z+2\max(a,b)+1,z}(c))\notag\\
&\times s\left(a_1,\dotsc, a_{m}+\left\lceil\frac{x+z}{2}\right\rceil,c_1,\dotsc,c_{k}+\left\lfloor\frac{x+z}{2}\right\rfloor+b_n,b_{n-1},\dotsc,b_1\right)\notag\\
&\times s\left(y+b-\min(a,b),a_1,\dotsc, a_{m},\left\lceil\frac{x+z}{2}\right\rceil+c_1,\dotsc,c_{k},\left\lfloor\frac{x+z}{2}\right\rfloor,b_n,\dotsc,b_1,y+a-\min(a,b)+1\right)\notag\\
&\times\frac{\operatorname{H}(c+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(c)\operatorname{H}(\left\lfloor\frac{x+z}{2}\right\rfloor)}\frac{\operatorname{H}(\max(a,b)+y+\left\lceil\frac{x+z}{2}\right\rceil)}{\operatorname{H}(\max(a,b)+c+y+\left\lceil\frac{x+z}{2}\right\rceil)}\notag\\
&\times \frac{\operatorname{H}(\max(a,b)+y+z)\operatorname{H}(\max(a,b)+c+y+z+1)}{\operatorname{H}(o_a+o_b+o_c+z)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+z+1)}\notag\\
&\times \frac{\operatorname{H}(o_a+o_b+o_c)\operatorname{H}(|a-b|+e_a+e_b+e_c+2y+1)}{\operatorname{H}(\max(a,b)+y)\operatorname{H}(\max(a,b)+y+1)}.
\end{align}
\end{thm}
\medskip
One readily sees that, when the middle fern is empty, our eight regions (four $R$-regions and four $Q$-regions) become special cases of the `\emph{doubly-intruded hexagons}' in \cite{CL}. More precisely, the regions in \cite{CL} depend on four parameters $x,y,z,t$, besides the two ferns, while our regions here depend on only three parameters $x,y,z$. Moreover, the $q$-enumeration in \cite{CL} has no counterpart for our regions.
\subsection{Alternative definitions of the $R$- and $Q$-families}
The above direct definitions of the $R$- and $Q$-families are straightforward; however, to see more clearly that our regions are common generalizations of the cored hexagons in \cite{CEKZ} and the $F$-cored hexagons in \cite{Ciu1}, we give an equivalent, constructive definition as follows.
We start with an auxiliary hexagon $H_0$ of side-lengths $x,z,z,x,z,z$ (see the inner hexagon with the dashed contour in Figures \ref{fig:construct1} and \ref{fig:construct2}). Next, we push out all six sides of $H_0$ as follows. We push the north, northeast, southeast, south, southwest, and northwest sides of $H_0$ outward $e_a+o_b+o_c$, $b+c$, $b+c$, $o_a+e_b+e_c$, $a,$ $a$ units, respectively. We obtain the hexagon $H_1$ with side-lengths $x+o_a+e_b+e_c,$ $z+e_a+o_b+e_c$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+e_c$, $z+o_a+e_b+e_c,$ $z+e_a+o_b+e_c$ (indicated by the hexagon with the solid bold contour in the above figures).
If the total length of the left fern is greater than the total length of the right fern, i.e. $a> b$, we push in addition the south, southeast, north, and northwest sides of the hexagon $H_1$ outward by $y+a-b$, $y+a-b$, $y$, and $y$ units, respectively; otherwise, if $a\leq b$, we push out these sides by $y$, $y$, $y+b-a$, and $y+b-a$ units, respectively. This way the hexagon $H_1$ is extended to the base hexagon $H$ of side-lengths $x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+e_c+ |a-b|$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+e_c$, $2y+z+o_a+e_b+e_c+ |a-b|,$ $z+e_a+o_b+e_c$ as in the direct definition of the regions above (the extension of $H_1$ is indicated by the portion with the dashed boundary in Figures \ref{fig:construct1} and \ref{fig:construct2}).
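As a sketch of the bookkeeping behind these side-lengths (using only the fact that pushing one side of the hexagon outward by $k$ units lengthens each of its two neighboring sides by $k$ units), consider the northeast side of $H$: it starts out as the side $z+e_a+o_b+e_c$ of $H_1$, gains $y+a-b$ from the push of the southeast side and $y$ from the push of the north side when $a>b$ (and gains $y$ and $y+b-a$, respectively, when $a\leq b$), so that in either case its length is
\[
(z+e_a+o_b+e_c)+2y+|a-b|=2y+z+e_a+o_b+e_c+|a-b|,
\]
in agreement with the direct definition of the first two $R$-families.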
Finally, we remove the middle fern, consisting of triangles of side-lengths $c_i$'s, such that the leftmost point of the fern is exactly at the center of the auxiliary hexagon $H_0$ if $x$ and $z$ have the same parity, or is $1/2$ unit to the left of the center of $H_0$ in the case when $x$ and $z$ have opposite parities. The left fern and the right fern are removed on the same level as the middle fern, such that the leftmost point of the left fern and the rightmost point of the right fern touch the boundary of the hexagon. This gives us the regions $R_{x,y,z}^{\odot}(\textbf{a};\textbf{c};\textbf{b})$ and $R_{x,y,z}^{\leftarrow}(\textbf{a};\textbf{c};\textbf{b})$, respectively.
To define the third and the fourth $R$-families, we start instead with an auxiliary hexagon of side-lengths $x,z+1,z,x,z+1,z$
(see the inner hexagons in Figures \ref{fig:construct3} and \ref{fig:construct4}). We still perform the above 2-stage pushing process to obtain the base hexagon of side-lengths
$x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+e_c+ |a-b|+1$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+e_c$, $2y+z+o_a+e_b+e_c+ |a-b|+1,$ $z+e_a+o_b+e_c$. As mentioned in the direct definitions in Subsection 2.2, in the case when the region $R_{x,y,z}^{\nwarrow}(\textbf{a};\textbf{c};\textbf{b})$ has $b>a$
and in the case when the region $R_{x,y,z}^{\swarrow}(\textbf{a};\textbf{c};\textbf{b})$ has $b<a$, we allow $y$ to take the \emph{negative} value $-1$. Here, we understand that pushing outward `$-1$ unit' is equivalent to pushing inward $1$ unit.
We obtain the region $R_{x,y,z}^{\nwarrow}(\textbf{a};\textbf{c};\textbf{b})$ or the region $R_{x,y,z}^{\swarrow}(\textbf{a};\textbf{c};\textbf{b})$ if the middle fern is placed $1/2$ unit to the northwest or $1/2$ unit to the southwest of the center of the auxiliary hexagon $H_0$
(corresponding to the case when $x$ and $z$ have the same parity or the case when they have different parities).
We note that this constructive definition of our regions also explains the use of the superscripts $\odot, \leftarrow, \nwarrow, \swarrow$ in our notations. These superscripts indicate the relative position of the leftmost point of the middle fern with respect to the center of the auxiliary hexagon $H_0$. We have adopted these notations from \cite{Ciu1}.
\begin{rmk}\label{rmk1}
In the above constructive definition of the second, the third and the fourth $R$-families, there are actually three more families of regions, corresponding to the case when the leftmost point of the middle fern is located $1/2$ unit to the east, the southeast, or the northeast of the center of the auxiliary hexagon $H_0$. However, we do \emph{not} consider these regions in detail here, as they can be viewed as $180^{\circ}$ rotations of our three $R$-families.
\end{rmk}
\medskip
Next, we provide the constructive definitions for the four $Q$-families.
The constructions of the regions $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ are shown in Figures \ref{fig:constructQ1} and \ref{fig:constructQ2}, respectively. We start with an auxiliary hexagon $H_0$ of side-lengths $x,z,z,x,z,z$ (illustrated by the inner hexagons with the dashed boundary), and we push out all the sides (in clockwise order from the north side) of this hexagon by $o_a+o_b+o_c, b+c,b+c,e_a+e_b+e_c,a,a$ units, respectively. This way, we get a larger hexagon $H_1$ of side-lengths $x+e_a+e_b+e_c, z+o_a+o_b+o_c, z+e_a+e_b+e_c, x+o_a+o_b+o_c,z+e_a+e_b+e_c, z+o_a+o_b+o_c$ (shown as the hexagon with the bold solid boundary). The second pushing depends on whether $a\leq b$ or $a>b$. If $a\leq b$, we push out the southeast, the south and the southwest sides of the hexagon $H_1$ by $y, y+b-a,y+b-a$ units, respectively; otherwise we push out these sides by $y+a-b,y+a-b,y$ units, respectively (these pushes are indicated by the portion with the dashed boundary outside the solid contour). If $x$ and $z$ have the same parity, i.e. the center of the auxiliary hexagon $H_0$ is a lattice vertex, we arrange the middle fern so that its leftmost point is exactly at the center; the left and the right ferns are located at the same level and touch the northwest and the northeast sides of the hexagon, respectively (see Figure \ref{fig:constructQ1}). The resulting region is exactly the region
$Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ defined above. In the case when $x$ has parity opposite to $z$, we arrange the middle fern $1/2$ unit to the left of the center of the auxiliary hexagon $H_0$ (the left and right ferns are still lined up in the same way as in the definition of the $R$-type regions), and we obtain the region $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ (see Figure \ref{fig:constructQ2}).
The constructions of the regions $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ are shown in Figures \ref{fig:constructQ3} and \ref{fig:constructQ4}, respectively. We allow $y$ to take the value $-1$ when $b>a$, with the convention that pushing outward `$-1$ unit' means pushing inward $1$ unit. We now start with a different auxiliary hexagon $H_0$, of side-lengths $x,z+1,z,x,z+1,z$. We still perform the same 2-stage pushing process as above to obtain the base hexagon $H$. We now place the middle fern so that its leftmost point is $1/2$ unit to the northwest of the center of the auxiliary hexagon if $x$ and $z$ have the same parity; otherwise, we put the middle fern $1/2$ unit to the northeast of the center of the auxiliary hexagon $H_0$ (the other two ferns are still chosen in the same way as in the $Q^{\odot}$- and $Q^{\leftarrow}$-type regions above). This gives the regions $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, respectively.
\subsection{Dual of MacMahon's theorem on plane partitions}
\begin{figure}\centering
\includegraphics[width=10cm]{ContourDual.eps}
\caption{(a) The boundary of the hexagon in MacMahon's theorem \cite{Mac}. (b) The boundary of the concave hexagon in the dual of MacMahon's theorem \cite{CK}. (c) The boundary of the concave polygon in the dual of MacMahon's theorem \cite{Ciu1}.}\label{ContourDual}
\end{figure}
MacMahon's classical theorem on boxed plane partitions \cite{Mac} yields the beautiful product formula (\ref{MacMahoneq}) for the number of lozenge tilings of the interior of a convex hexagon on the triangular lattice obtained by turning $60^{\circ}$ after drawing each side (see Figure \ref{ContourDual}(a)). In \cite{CK}, Ciucu and Krattenthaler considered a counterpart of MacMahon's theorem, corresponding to the \emph{exterior} of a concave hexagon obtained by turning $120^{\circ}$ after drawing each side. In particular, they considered the asymptotic behavior of the ratio between the tiling number of a regular $S$-cored hexagon and the tiling number of a normalized version of the $S$-cored hexagon (see Figure \ref{ContourDual}(b)). Based on their explicit tiling formula for an $S$-cored hexagon, Ciucu and Krattenthaler showed that the latter ratio tends to a product of two instances of MacMahon's product (\ref{MacMahoneq}) (see Theorem 1.1 in \cite{CK}). They called this striking asymptotic result a `\emph{dual}' of MacMahon's theorem. Ciucu later obtained another dual of MacMahon's theorem (see Theorem 1.1 in \cite{Ciu1}), corresponding to the exterior of a concave polygon obtained by turning $120^{\circ}$ after drawing each side (see Figure \ref{ContourDual}(c)). More precisely, using his explicit tiling enumeration for $F$-cored hexagons, Ciucu showed that the ratio between the tiling numbers of an $F$-cored hexagon and of a normalized version of this $F$-cored hexagon tends to a nice product formula. Interestingly, this formula is a product of two instances of Cohn--Larsen--Propp's product formula (\ref{semieq}), which in turn can be considered as a generalization of MacMahon's formula (\ref{MacMahoneq}).
In this subsection, we use our tiling formulas for the $R^{\odot}$- and $R^{\leftarrow}$-type regions above to obtain a new dual of MacMahon's theorem. Our dual corresponds to the exterior of the union of three concave polygons, each similar to the polygon appearing in Ciucu's dual in \cite{Ciu1}.
Let $x,z$ be fixed positive real numbers, and let $\textbf{a}=(a_1,\dotsc,a_m)$, $\textbf{c}=(c_1,\dotsc,c_k)$, $\textbf{b}=(b_1,\dotsc,b_n)$ be three fixed sequences of nonnegative integers, such that $a=\sum_{i}a_i=\sum_{j}b_j=b$. We consider the behavior of the ratio between the numbers of tilings of the two $R$-regions $R_{\lfloor xN\rfloor ,0,\lfloor zN\rfloor}(\textbf{a}; \textbf{c}; \textbf{b})$ and $R_{\lfloor xN\rfloor,0,\lfloor zN\rfloor}(e_a,o_a;\ e_c, o_c;\ e_b,o_b)$,
where
\begin{equation}
R_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}):=
\begin{cases}
R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}) &\text{if $x$ and $z$ have the same parity}\\
R^{\leftarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b}) &\text{if $x$ has parity opposite to $z$.}
\end{cases}
\end{equation}
We show that this ratio tends to a product of \emph{six} instances of Cohn--Larsen--Propp's product formula, as $N$ gets large.
\begin{thm}\label{asymthm}
For three given sequences of nonnegative integers $\textbf{a}=(a_1,\dotsc,a_m)$, $\textbf{c}=(c_1,\dotsc,c_k)$, $\textbf{b}=(b_1,\dotsc,b_n)$, such that $a=b$, and for two positive numbers $x,z$, we have
\begin{align}\label{dualeq}
\lim_{N\to \infty}\frac{\operatorname{M}(R_{\lfloor xN\rfloor ,0,\lfloor zN\rfloor}(\textbf{a}; \textbf{c}; \textbf{b}))}{\operatorname{M}(R_{\lfloor xN\rfloor,0,\lfloor zN\rfloor}(e_a,o_a;\ e_c, o_c;\ e_b,o_b))}=&s(a_1,\dotsc,a_{m-1})s(a_2,\dotsc,a_m)s(b_1,\dotsc,b_{n-1})s(b_2,\dotsc,b_{n})\notag\\
&\times s(c_1,\dotsc,c_{k-1})s(c_2,\dotsc,c_{k}).
\end{align}
Recall that $s(a_1,\dotsc,a_{n})$ denotes the tiling number of the dented semihexagon $S(a_1,\dotsc,a_{n})$ defined in (\ref{semieq}). The above theorem can be visualized as in Figure \ref{fig:geointer}.
\end{thm}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{8cm}{!}{
\begin{picture}(0,0)%
\includegraphics{semihexmultiple2.eps}%
\end{picture}%
\begin{picture}(6833,4101)(1915,-5731)
\put(4741,-1911){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_2+a_4+a_6$}%
}}}}
\put(7303,-2601){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_1+a_3+a_5+a_7$}%
}}}}}
\put(2161,-4601){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_1+a_3+a_5+a_7$}%
}}}}}
\put(2660,-5716){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_1$}%
}}}}
\put(3351,-5711){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_2$}%
}}}}
\put(4101,-5681){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_3$}%
}}}}
\put(5061,-5681){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_4$}%
}}}}
\put(5971,-5691){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_5$}%
}}}}
\put(6621,-5681){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_6$}%
}}}}
\put(7521,-5681){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a_7$}%
}}}}
\end{picture}%
}
\caption{Obtaining the region $S(2,2,2,3,1,2,4)$ (the shaded region with the bold contour) from the region $T_{7,8}(1,2,5,6,10,13,14,15)$ by removing several forced vertical lozenges; the black triangles indicate the unit triangles removed in the region $T_{7,8}(1,2,5,6,10,13,14,15)$.}\label{semihexmultiple2}
\end{figure}
\begin{proof}
First, by Theorems \ref{main1} and \ref{main2}, we have
\begin{align}\label{dualeq2}
\lim_{N\to \infty}\frac{\operatorname{M}(R_{\lfloor xN\rfloor ,0,\lfloor zN\rfloor}(\textbf{a}; \textbf{c}; \textbf{b}))}
{\operatorname{M}(R_{\lfloor xN\rfloor,0,\lfloor zN\rfloor}(e_a,o_a;\ e_c, o_c;\ e_b,o_b))}=\lim_{N\to \infty}\frac{\operatorname{M}(S^+)\operatorname{M}(S^-)}{\operatorname{M}(\overline{S}^+)\operatorname{M}(\overline{S}^-)}.
\end{align}
Here $S^+$ and $S^-$ are the two dented semihexagons obtained by dividing the region $R_{\lfloor xN\rfloor ,0,\lfloor zN\rfloor}(\textbf{a}; \textbf{c}; \textbf{b})$
along the lattice line on which our three ferns rest ($S^+$ denotes the upper semihexagon, and $S^-$ denotes the lower semihexagon); the dents of $S^+$ and $S^-$ are determined by the triangles of the three ferns.
Similarly, $\overline{S}^+$ and $\overline{S}^-$ denote the two dented
semihexagons corresponding to the region $R_{\lfloor xN\rfloor,0,\lfloor zN\rfloor}(e_a,o_a;\ e_c, o_c;\ e_b,o_b)$.
For two ordered sets $E=(s_1,\dotsc, s_m)$ and $E'=(s'_1,\dotsc, s'_n)$, we define the operator $\Delta$ as follows
$\Delta(E)=\prod_{i<j} (s_j-s_i)$ and $\Delta(E,E')=\prod_{i,j}(s'_j-s_i)$. We also use the notation $[a,b]$ for the set of all integers $t$ with $a\leq t\leq b$, and set $y+[a,b]:=[a+y,b+y]$. Finally, we use the notation $\tau_i(\textbf{a})$ for the $i$-th partial sum of the sequence $\textbf{a}=(a_1,a_2,\dotsc,a_m)$, i.e., $\tau_i(\textbf{a})=\sum_{j=1}^{i}a_j$.
We only need
to show that the ratio on the right-hand side of (\ref{dualeq2}) tends to the product of the tiling numbers of the six dented semihexagons on the right-hand side of (\ref{dualeq}), as $N$ gets large.
To do so, we use Cohn--Larsen--Propp's original formula for the number of tilings of a semihexagon with dents as mentioned
in the footnote on page 3. In particular, each semihexagon $S(a_1,a_2,\dots,a_m)$ is obtained from the region $T_{o_a,e_a}\left( \bigcup_{i\geq 1} [\tau_{2i-1}(\textbf{a})+1,\tau_{2i}(\textbf{a})] \right)$ by removing several forced vertical lozenges (see Figure \ref{semihexmultiple2}). Therefore, the two regions have the same number of tilings. Recall that $T_{m,n}(x_1,x_2,\dots,x_n)$ is the region obtained from the semihexagon of side-lengths $m,n,m+n,n$ (clockwise from the top) by removing $n$ up-pointing unit triangles from its bottom that are in the positions $x_1,x_2,\dots,x_n$ as counted from left to right, and that the number of tilings of $T_{m,n}(x_1,x_2,\dots,x_n)$ is given by the product $\prod_{1\leq i<j \leq n}\frac{x_j-x_i}{j-i}$.
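For the reader who wishes to experiment with these numbers, the following short sketch (ours, written in Python; the helper names are our own and not standard) evaluates the product formula above and, through the partial sums $\tau_i$, the tiling number $s(a_1,\dotsc,a_k)$ of a dented semihexagon.
\begin{verbatim}
# A sketch (ours) of the Cohn--Larsen--Propp product formula and of s(a_1,...,a_k)
# computed through the partial sums tau_i(a).
from fractions import Fraction
from itertools import combinations
from math import prod

def tilings_T(positions):
    """prod_{i<j} (x_j - x_i)/(j - i) over the sorted dent positions."""
    x = sorted(positions)
    return prod(Fraction(x[j] - x[i], j - i)
                for i, j in combinations(range(len(x)), 2))

def s(a):
    """Tiling number of the dented semihexagon S(a_1,...,a_k)."""
    tau = [sum(a[:i]) for i in range(len(a) + 1)]         # tau[i] = tau_i(a)
    dents = [p for i in range(1, len(a), 2)               # ranges [tau_{2j-1}+1, tau_{2j}]
               for p in range(tau[i] + 1, tau[i + 1] + 1)]
    return tilings_T(dents)

print(s([1, 1, 1, 1]), s([1, 2, 1, 2]))   # 2 and 6, as given by the formula
\end{verbatim}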
We first consider $S^+$. It has the same number of tilings as the semihexagon $T_{x+e_a+o_b+o_c,z+o_a+e_b+e_c}(A\cup B\cup C)$, where
\[A=\bigcup_{i\geq 1}[\tau_{2i-1}(\textbf{a})+1,\tau_{2i}(\textbf{a})],\]
\[C=\bigcup_{i\geq 1}\left(a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor\right)+[\tau_{2i-1}(\textbf{c})+1,\tau_{2i}(\textbf{c})],\]
\[B=\bigcup_{i\geq 1}\left(a+c+\lfloor xN\rfloor+\lfloor zN\rfloor\right)+[\tau_{2i-1}(\textbf{b})+1,\tau_{2i}(\textbf{b})].\]
This means that $A$, $B$, and $C$ are the position sets corresponding to the up-pointing triangles in the left, the right, and the middle ferns, respectively. For convenience, assume that
$\alpha_1,\dotsc, \alpha_{e_a}$ are the positions in set $A$, $(a+c+\lfloor xN\rfloor+\lfloor zN\rfloor)+\beta_{1},\dotsc, (a+c+\lfloor xN\rfloor+\lfloor zN\rfloor)+\beta_{o_b}$ are the positions in $B$, and $(a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor)+\gamma_{1},\dotsc, (a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor)+\gamma_{o_c}$ are the positions in $C$. Similarly, we denote by
\[A'=[1,e_a],\]
\[C'=\left(a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor\right)+[1,o_c],\]
\[B'=\left(a+c+\lfloor xN\rfloor+\lfloor zN\rfloor\right)+[1,o_b]\]
the position sets corresponding to the semihexagon $\overline{S}^+$, and we write $\alpha'_i=i$, $\gamma'_j=j$, and $\beta'_j=j$ for the corresponding relative positions within $A'$, $C'$, and $B'$.
By Cohn--Larsen--Propp's original formula, the ratio of the tiling numbers of the above two dented semihexagons can be written as
\begin{align}
\frac{\operatorname{M}(S^+)}{\operatorname{M}(\overline{S}^+)}&=\frac{\Delta(A\cup B \cup C)}{\Delta(A'\cup B'\cup C')}\notag\\
&=\frac{\Delta(A)}{\Delta(A')}\frac{\Delta(B)}{\Delta(B')}\frac{\Delta(C)}{\Delta(C')}\frac{\Delta(A,B)}{\Delta(A',B')}\frac{\Delta(A,C)}{\Delta(A',C')}\frac{\Delta(B,C)}{\Delta(B',C')}.
\end{align}
The first three ratios give us the first, the third and the fifth $s$-functions on the right-hand side of (\ref{dualeq}).
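This factorization is elementary but easy to mis-handle, so we record a small numerical check (ours, in Python, with toy position sets): the ratio $\Delta(A\cup B\cup C)/\Delta(A'\cup B'\cup C')$ splits into the six pairwise ratios whenever every element of $A$ is smaller than every element of $C$, which in turn is smaller than every element of $B$, and likewise for the primed sets, with matching cardinalities.
\begin{verbatim}
# Toy check (ours) of the factorization of Delta(A u B u C)/Delta(A' u B' u C').
from itertools import combinations
from fractions import Fraction
from math import prod

def Delta(E):
    E = sorted(E)
    return prod(E[j] - E[i] for i, j in combinations(range(len(E)), 2))

def Delta2(E, Ep):
    return prod(sp - s for s in E for sp in Ep)

A,  C,  B  = [1, 3], [10, 11, 14], [30, 33]
Ap, Cp, Bp = [1, 2], [9, 10, 12],  [31, 32]

lhs = Fraction(Delta(A + C + B), Delta(Ap + Cp + Bp))
rhs = (Fraction(Delta(A), Delta(Ap)) * Fraction(Delta(B), Delta(Bp))
       * Fraction(Delta(C), Delta(Cp)) * Fraction(Delta2(A, B), Delta2(Ap, Bp))
       * Fraction(Delta2(A, C), Delta2(Ap, Cp)) * Fraction(Delta2(B, C), Delta2(Bp, Cp)))
print(lhs == rhs)   # True
\end{verbatim}
Note that $\Delta(B,C)$ and $\Delta(B',C')$ may individually be negative; their signs cancel in the ratio because the corresponding sets have the same cardinalities.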
We can write the ratio $\frac{\Delta(A,C)}{\Delta(A',C')}$ as
\begin{equation}
\frac{\Delta(A,C)}{\Delta(A',C')}=\prod_{i,j}\frac{\left(a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor\right)+\gamma_j-\alpha_i}{\left(a+\left\lfloor\frac{\lfloor xN\rfloor+\lfloor zN\rfloor}{2}\right\rfloor\right)+\gamma'_j-\alpha'_i}.
\end{equation}
For fixed $i$ and $j$, each factor in the above product tends to $1$ as $N$ gets large (because $|\gamma_j-\alpha_i|,\,|\gamma'_j-\alpha'_i|\leq a+c$ for all $i,j$, while the common offset grows linearly in $N$). Since the number of factors does not depend on $N$, this means that
\begin{equation}
\lim_{N \to \infty} \frac{\Delta(A,C)}{\Delta(A',C')}=1.
\end{equation}
Similarly, we have
\begin{equation}
\lim_{N \to \infty} \frac{\Delta(A,B)}{\Delta(A',B')}=\lim_{N \to \infty} \frac{\Delta(B,C)}{\Delta(B',C')}=1.
\end{equation}
This implies that
\begin{align}
\lim_{N \to \infty}\frac{\operatorname{M}(S^+)}{\operatorname{M}(\overline{S}^+)}=s(a_1,\dotsc,a_{m-1})s(b_1,\dotsc,b_{n-1})s(c_1,\dotsc,c_{k-1}).
\end{align}
Similarly, we get
\begin{align}
\lim_{N \to \infty}\frac{\operatorname{M}(S^-)}{\operatorname{M}(\overline{S}^-)}=s(a_2,\dotsc,a_m)s(b_2,\dotsc,b_{n})s(c_2,\dotsc,c_{k}).
\end{align}
This finishes our proof.
\end{proof}
This theorem implies the dual of MacMahon's theorem introduced by Ciucu (Theorem 1.1 in \cite{Ciu1}) by specializing $\textbf{a}=\textbf{b}=\emptyset$ and $x=z=1$.
\begin{figure}\centering
\includegraphics[width=15cm]{geointer.eps}
\caption{The dual of MacMahon's theorem for three ferns.}\label{fig:geointer}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{8cm}{!}{
\begin{picture}(0,0)%
\includegraphics{notsymmetry.eps}%
\end{picture}%
\begin{picture}(6658,7899)(3876,-9834)
\put(6777,-9819){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_c$}%
}}}}
\put(5949,-5341){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$c_1$}%
}}}}
\put(6721,-4989){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$c_2$}%
}}}}
\put(7419,-5349){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$c_3$}%
}}}}
\put(7989,-4966){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$c_4$}%
}}}}
\put(10350,-5206){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z$}%
}}}}
\put(3928,-5206){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z$}%
}}}}
\put(9197,-3292){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+o_c$}%
}}}}}
\put(4629,-4083){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+o_c$}%
}}}}}
\put(6702,-2221){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_c$}%
}}}}
\put(9220,-8538){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_c$}%
}}}}}
\put(4361,-7286){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_c$}%
}}}}}
\end{picture}}
\caption{A symmetric hexagon with a (not necessarily symmetric) fern removed perpendicularly to the symmetry axis.}\label{fig:nosymmetic}
\end{figure}
\medskip
\subsection{Combined theorems and symmetric $F$-cored hexagons}
We start this subsection by noticing that one can combine Theorems \ref{main1}, \ref{main2}, \ref{mainQ1} and \ref{mainQ2} into a single theorem as follows.
\medskip
Let $x,z$ be nonnegative integers, and let \textbf{a}, \textbf{b}, \textbf{c} be three sequences of nonnegative integers,
such that $a=\sum_i a_i=\sum_j b_j=b$. Consider three ferns whose triangles have side-lengths given by the terms of the sequences \textbf{a}, \textbf{b}, \textbf{c}. We now place \emph{no} requirement on the orientations of the first triangles of the three ferns, unlike in the definitions of the $R$- and $Q$-families above.
Consider a symmetric hexagon of side-lengths $x+d_a+d_b+d_c,z+u_a+u_b+u_c,z+d_a+d_b+d_c,x+u_a+u_b+u_c,z+d_a+d_b+d_c,z+u_a+u_b+u_c$, where $u_a$ and $d_a$ denote the sums of the side-lengths of all up-pointing triangles and down-pointing triangles in the $a$-fern, and $u_b,d_b,u_c,d_c$ are defined similarly.
On the lattice line containing the west and the east vertices of the hexagon, we remove three ferns such that the sequences of side-lengths of the left, right, and middle ferns are \textbf{a}, \textbf{b}, \textbf{c}, respectively, the leftmost point of the left fern
touches the west vertex of the hexagon, and the rightmost point of the right fern touches the east vertex of the hexagon. For the middle fern, we place it evenly between the left and the right ferns, in the sense that the distance between the left fern and the middle fern is $\lfloor \frac{x+z}{2} \rfloor$ and the distance between the middle fern and the
right fern is $\lceil \frac{x+z}{2} \rceil$. Denote by $H_{x,z}(\textbf{a};\textbf{c};\textbf{b})$ the resulting region.
\begin{thm}[Combination of Theorems \ref{main1}, \ref{main2}, \ref{mainQ1} and \ref{mainQ2}]\label{combinethm1}
Assume that $x,z$ are nonnegative integers and that $\textbf{a}=(a_1,\dotsc,a_m),$ $\textbf{b}=(b_1,\dotsc,b_n)$, $\textbf{c}=(c_1,\dotsc,c_k)$ are sequences of nonnegative integers, such that $a=b$.
Assume in addition that $m,n,k$ are all even (the case when at least one of them is odd can be reduced to the even case by appending a $0$-triangle to the corresponding fern). Then
\begin{align}\label{combineeq1}
\operatorname{M}&(H_{x,z}(\textbf{a};\textbf{c};\textbf{b}))=\operatorname{M}(C_{x,z+2a,z}(c))\operatorname{M}(S^+)\operatorname{M}(S^-)\notag\\
&\times\frac{\operatorname{H}(c+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(c)\operatorname{H}(\left\lfloor\frac{x+z}{2}\right\rfloor)}\frac{\operatorname{H}(a+\left\lfloor\frac{x+z}{2}\right\rfloor)}{\operatorname{H}(a+c+\left\lfloor\frac{x+z}{2}\right\rfloor)}\notag\\
&\times \frac{\operatorname{H}(a+z)\operatorname{H}(a+c+z)}{\operatorname{H}(u_a+u_b+u_c+z)\operatorname{H}(d_a+d_b+d_c+z)}\notag\\
&\times \frac{\operatorname{H}(u_a+u_b+u_c)\operatorname{H}(d_a+d_b+d_c)}{\operatorname{H}(a)^2},
\end{align}
where $S^+$ and $S^-$ are the two semihexagons with dents obtained by dividing the region along the line $\ell$ (the lattice line containing all bases of triangles of the three ferns); the dents of $S^+$ and $S^-$ are defined by the configurations of the three ferns.
\end{thm}
One readily sees that, after removing the forced lozenges in $H_{x,z}(\textbf{a};\textbf{c};\textbf{b})$, the remaining region is a $Q^{\odot}$- or $Q^{\leftarrow}$-type region if the left and right ferns both have up-pointing first triangles, and it is an $R^{\odot}$- or $R^{\leftarrow}$-type region when the left fern starts with a down-pointing triangle and the right fern starts with an up-pointing triangle. This means that Theorem \ref{combinethm1} implies all four Theorems \ref{main1}, \ref{main2}, \ref{mainQ1} and \ref{mainQ2}, after appropriate changes of variables.
One can obtain similarly a combination of Theorems \ref{main3}, \ref{main4}, \ref{mainQ3} and \ref{mainQ4}.
\medskip
Next, we consider an interesting special case of the $Q^{\odot}$-type region when $\textbf{a}=\textbf{b}=\emptyset$.
\begin{thm}\label{symmetricthm}
Let $x,y,z$ be nonnegative integers and let $\textbf{c}=(c_1,c_2,\dotsc,c_k)$ be a sequence of nonnegative integers. Assume in addition that $x$ and $y$ have the same parity. Let $B_{x,y,z}(c_1,c_2,\dotsc,c_k)$ be the region obtained from the symmetric hexagon of side-lengths $x+e_c,y+z+o_c,y+z+e_c,x+o_c,y+z+e_c,y+z+o_c$ by removing a fern consisting of triangles of side-lengths $c_1,c_2,\dotsc,c_k$ at level $z$ above the west vertex of the hexagon, so that the distances from the two endpoints of the fern to the northwest and northeast sides of the hexagon are equal. The number of tilings of $B_{x,y,z}(c_1,c_2,\dotsc,c_k)$ is given by
\begin{align}
\operatorname{M}(B_{x,y,z}(c_1,c_2,\dotsc,c_k))=&\operatorname{M}(C_{x,y+2z,y}(c))\notag\\
&\times s\left(c_1,\dotsc,c_{k-1}\right)\cdot s\left(z,c_1+\frac{x+y}{2},\dotsc,c_{k},\frac{x+y}{2},z\right)\notag\\
&\times \frac{\operatorname{H}(c+\frac{x+y}{2})}{\operatorname{H}(c)\operatorname{H}(\frac{x+y}{2})}\frac{\operatorname{H}(z+\frac{x+y}{2})}{\operatorname{H}(z+c+\frac{x+y}{2})}\frac{\operatorname{H}(y+z)\operatorname{H}(c+y+z)}{\operatorname{H}(o_c+y)\operatorname{H}(e_c+y+2z)}\frac{\operatorname{H}(o_c)\operatorname{H}(e_c+2z)}{\operatorname{H}(z)^2}.
\end{align}
\end{thm}
Recall that, in general, if we move the removed fern in an $F$-cored hexagon away from the center, the tiling number is no longer given by a simple product. However, this theorem says that, in the case of symmetric hexagons, we can remove a fern at \emph{any} position perpendicular to the symmetry axis and still get a simple product formula. Interestingly, the fern does \emph{not} need to be symmetric\footnote{This phenomenon was first observed by Ciucu (private communication).}. This theorem generalizes the author's previous work in \cite{Halfhex2}, where we additionally required the fern to be symmetric.
\section{Combined proof of Theorems \ref{main1}--\ref{mainQ4}}\label{sec:proof1}
\subsection{Organization of the proof}\label{subsec:organize}
Recall that our 8 regions, $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, are all obtained from a certain base hexagon $H$ by removing three ferns along a common lattice line $\ell$. The base hexagons of the regions $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ are both of side-lengths $x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+o_c+ |a-b|$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+o_c$, $2y+z+o_a+e_b+e_c+ |a-b|,$ $z+e_a+o_b+o_c$; while the base hexagons of the regions $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ are of side-lengths $x+o_a+e_b+e_c,$ $2y+z+e_a+o_b+o_c+ |a-b|+1$, $z+o_a+e_b+e_c,$ $x+e_a+o_b+o_c$, $2y+z+o_a+e_b+e_c+ |a-b|+1,$ $z+e_a+o_b+o_c$ (see the labels in Figure \ref{fig:kuocenter1}). The perimeter of the base hexagon is then $2x+4y+4z+3a+3b+3c+2|a-b|$ or $2x+4y+4z+3a+3b+3c+2|a-b|+2$, respectively. Similarly, one readily sees that the base hexagons of the regions $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ always have perimeter $2x+4y+4z+3a+3b+3c+2|a-b|$, and the base hexagons of the regions $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ both have perimeter $2x+4y+4z+3a+3b+3c+2|a-b|+2$. We call the perimeter of the base hexagon the \emph{quasi-perimeter} of our regions, denoted by $p$ in the rest of the proof.
One readily sees that
\begin{claim}\label{claimp}
\[p \geq 2x+4z.\]
\end{claim}
\begin{proof}
If $y\geq 0$, then by the explicit formula of the quasi-perimeter above, we have $p\geq 2x+4z$. We only need to consider the case $y=-1$. However, $y=-1$ only happens in the $R^{\nwarrow}$-, $R^{\swarrow}$-, $Q^{\nwarrow}$-, and $Q^{\nearrow}$-type regions with $|a-b|\geq 1$. In these cases, we have
\begin{equation}
p=2x+4y+4z+3a+3b+3c+2|a-b|+2\geq 2x-4+4z+2|a-b|+2\geq 2x+4z.
\end{equation}
\end{proof}
We aim to prove \emph{all} eight Theorems \ref{main1}--\ref{mainQ4} at once by induction on $h:=p+x+z$, where $p$ is the quasi-perimeter of the region. Our proof is organized as follows. In Subsection 3.2, we quote the particular versions of Kuo condensation that will be employed in our proofs. Next, in Subsections 3.3--3.10, we carefully present 18 recurrences for our 8 families of regions, obtained by applying Kuo condensation. Each family of regions has two or three different recurrences, depending on whether $a<b$, $a=b$, or $a>b$. We would like to emphasize that, due to the differences in the structures of our regions, a single universal recurrence seems \emph{not} to exist. Subsection 3.11 is devoted to the main arguments of the inductive proof. Finally, in Subsection 3.12, we handle the algebraic verification that completes our main proof.
\subsection{Kuo condensation and other preliminary results}\label{subsec:kuo}
In general, the tilings of a region $R$ can carry `weights'. In the weighted case, the notation $\operatorname{M}(R)$ stands for the sum of the weights of all tilings of the region $R$, where the \emph{weight} of a tiling is the product of weights of its lozenges.
A \emph{forced lozenge} in a region $R$ on the triangular lattice is a lozenge contained in every tiling of $R$. Assume that we remove several forced lozenges $l_1,l_2,\dotsc,l_n$ from the region $R$ and obtain a new region $R'$. Then
\begin{equation}\label{forcedeq}
\operatorname{M}(R)=\operatorname{M}(R')\prod_{i=1}^{n}wt(l_i),
\end{equation}
where $wt(l_i)$ denotes the weight of the lozenge $l_i$.
A region on the triangular lattice is said to be \emph{balanced} if it has the same number of up- and down-pointing unit triangles. The following useful lemma allows us to decompose a large region into several smaller ones.
\begin{lem}[Region-splitting Lemma \cite{Tri1, Tri2}]\label{RS}
Let $R$ be a balanced region on the triangular lattice. Assume that a sub-region $Q$ of $R$ satisfies the following two conditions:
\begin{enumerate}
\item[(i)] \text{\rm{(Separating Condition)}} There is only one type of unit triangles (up-pointing or down-pointing) running along each side of the border between $Q$ and $R-Q$.
\item[(ii)] \text{\rm{(Balancing Condition)}} $Q$ is balanced.
\end{enumerate}
Then
\begin{equation}
\operatorname{M}(R)=\operatorname{M}(Q)\, \operatorname{M}(R-Q).
\end{equation}
\end{lem}
Let $G$ be a finite simple graph. A \emph{perfect matching} of $G$ is a collection of disjoint edges covering all vertices of $G$. The \emph{(planar) dual graph} of a region $R$ on the triangular lattice is the graph whose vertices are the unit triangles in $R$ and whose edges connect precisely those pairs of unit triangles sharing an edge. In the weighted case, the edges of the dual graph carry the same weights as the corresponding lozenges. We can identify the tilings of a region with the perfect matchings of its dual graph. From this point of view, we use the notation $\operatorname{M}(G)$ for the sum of the weights of all perfect matchings in $G$, where the weight of a perfect matching is the product of the weights of its constituent edges. In the unweighted case, i.e.\ when all edges of the graph have weight $1$, $\operatorname{M}(G)$ is exactly the number of perfect matchings of the graph $G$.
The following two theorems of Kuo are the keys of our proofs in this paper.
\begin{thm}[Theorem 5.1 \cite{Kuo}]\label{kuothm1}
Let $G=(V_1,V_2,E)$ be a (weighted) bipartite planar graph in which $|V_1|=|V_2|$. Assume that $u, v, w, s$ are four vertices appearing in a cyclic order on a face of $G$ so that $u,w \in V_1$ and $v,s \in V_2$. Then
\begin{equation}\label{kuoeq1}
\operatorname{M}(G)\operatorname{M}(G-\{u, v, w, s\})=\operatorname{M}(G-\{u, v\})\operatorname{M}(G-\{ w, s\})+\operatorname{M}(G-\{u, s\})\operatorname{M}(G-\{v, w\}).
\end{equation}
\end{thm}
\begin{thm}[Theorem 5.2 \cite{Kuo}]\label{kuothm2}
Let $G=(V_1,V_2,E)$ be a (weighted) bipartite planar graph in which $|V_1|=|V_2|$. Assume that $u, v, w, s$ are four vertices appearing in a cyclic order on a face of $G$ so that $u,v \in V_1$ and $w,s \in V_2$. Then
\begin{equation}\label{kuoeq2}
\operatorname{M}(G-\{u, s\})\operatorname{M}(G-\{v, w\})=\operatorname{M}(G)\operatorname{M}(G-\{u, v, w, s\})+\operatorname{M}(G-\{u,w\})\operatorname{M}(G-\{v, s\}).
\end{equation}
\end{thm}
Theorems \ref{kuothm1} and \ref{kuothm2} are usually referred to as two variants of \emph{Kuo condensation}. Kuo condensation (or \emph{graphical condensation}, as it is called in \cite{Kuo}) can be considered as a combinatorial interpretation of the well-known \emph{Dodgson condensation} in linear algebra (which is based on the Jacobi--Desnanot identity; see e.g. \cite{Abeles}, \cite{Dod}, \cite{Mui} pp. 136--148, and \cite{Zeil} for a bijective proof). Dodgson condensation is named after Charles Lutwidge Dodgson (1832--1898), better known by his pen name Lewis Carroll, an English writer, mathematician, and photographer.
A preliminary version of Kuo condensation (when the four vertices $u,v,w,s$ in Theorem \ref{kuothm1} form a $4$-cycle in the graph $G$) was originally conjectured by Alexandru Ionescu in the context of Aztec diamond graphs, and was proved by Propp in 1993 (see e.g. \cite{Propp2}). Eric H. Kuo introduced Kuo condensation in his 2004 paper \cite{Kuo} with four different versions, two of which are Theorems \ref{kuothm1} and \ref{kuothm2} stated above. Kuo condensation has become a powerful tool in the enumeration of tilings, with a number of applications. We refer the reader to \cite{Ciucu, Ful, Knuth, Kuo06, speyer, YYZ, YZ} for various aspects and generalizations of Kuo condensation, and to e.g. \cite{CF, CK, CL, KW, LMNT, Lai15a, Tri1, Tri2, Halfhex1, Halfhex2, LM, LR, Ranjan1, Ranjan2} for recent applications of the method.
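Before applying these theorems to our regions, the reader may find it reassuring to verify identity (\ref{kuoeq1}) on a tiny example. The sketch below (ours, in Python) does so by brute force on a $2\times 4$ grid graph, which is bipartite and planar but is, of course, not the dual graph of any region in this paper; the four chosen vertices lie on its outer face in the required cyclic order and color classes.
\begin{verbatim}
# Brute-force sanity check (ours) of the Kuo condensation identity above.
def count_matchings(vertices, edges):
    """Number of perfect matchings of the graph (vertices, edges)."""
    vertices = sorted(vertices)
    if not vertices:
        return 1
    v, rest = vertices[0], vertices[1:]
    return sum(count_matchings([w for w in rest if w != u], edges)
               for u in rest if frozenset((v, u)) in edges)

V = [(i, j) for i in range(2) for j in range(4)]              # 2x4 grid graph
E = {frozenset((p, q)) for p in V for q in V
     if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1}

def M(removed=()):
    return count_matchings([p for p in V if p not in removed], E)

# u, v, w, s in cyclic order on the outer face; u, w in one color class
# (parity of i + j), v, s in the other.
u, v, w, s = (0, 0), (0, 3), (1, 3), (1, 0)
lhs = M() * M((u, v, w, s))
rhs = M((u, v)) * M((w, s)) + M((u, s)) * M((v, w))
print(lhs, rhs)   # both sides equal 10 for this example
\end{verbatim}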
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter1.eps}%
\end{picture}%
\begin{picture}(14027,22390)(1794,-22426)
\put(14383,-8474){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11436,-7754){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(6216,-8406){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$u$}%
}}}}
\put(2158,-9584){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2235,-11309){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4738,-14226){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(7338,-13635){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14718,-13635){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(12118,-14226){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(7003,-8474){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(4056,-7754){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(10972,-18484){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$w$}%
}}}}
\put(9594,-17197){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(9671,-18922){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(12174,-21839){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(14774,-21248){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14439,-16087){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11492,-15367){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(6400,-15945){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$u$}%
}}}}
\put(2342,-17130){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2419,-18855){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4922,-21772){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(9615,-11309){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(9538,-9584){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(7522,-21181){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(10951,-8311){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$s$}%
}}}}
\put(7187,-16020){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(4240,-15300){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(9357,-2083){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(9434,-3808){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11937,-6725){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(14537,-6134){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6646,-13606){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$v$}%
}}}}
\put(10916,-10863){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$w$}%
}}}}
\put(14202,-973){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11255,-253){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(11236,-7531){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$s$}%
}}}}
\put(10737,-3373){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$w$}%
}}}}
\put(13861,-6106){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$v$}%
}}}}
\put(13415,-898){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$u$}%
}}}}
\put(1981,-2079){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2058,-3804){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4561,-6721){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(10786,-766){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$s$}%
}}}}
\put(3765,-15833){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$s$}%
}}}}
\put(7161,-6130){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6826,-969){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(3879,-249){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(14093,-21263){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{$v$}%
}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the regions $R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$, when $a<b$. Kuo condensation is applied to the region $R^{\odot}_{2,1,2}(1,1 ;\ 1,2,1 ;\ 1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter1}
\end{figure}
\subsection{Recurrences for $R^{\odot}$-type regions}\label{subsec:recurR1}
Recall that we are assuming that $x$ and $z$ have the same parity and that the leftmost vertex of the middle fern is exactly at the center of the auxiliary hexagon $H_0$ of side-lengths $x,z,z,x,z,z$.
If $a< b$ (i.e., the total length of the left fern is strictly smaller than that of the right fern), we apply Kuo condensation (Theorem \ref{kuothm1}) to the dual graph $G$ of $R^{\odot}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b})$ with the four vertices $u,v,w,s$ corresponding to the shaded unit triangles with the same labels in Figure \ref{fig:kuocenter1}(b). In particular, the $u$-triangle is the up-pointing shaded unit triangle on the northeast corner of the region, the $v$-triangle is the down-pointing shaded unit triangle on the southeast corner, the $w$-triangle is the up-pointing shaded unit triangle attached to the rightmost point of the left fern, and the $s$-triangle is the down-pointing shaded unit triangle on the northwest corner. The six regions in Figure \ref{fig:kuocenter1} correspond to the six terms in identity (\ref{kuoeq1}). Strictly speaking, Figure \ref{fig:kuocenter1} shows the regions corresponding to the graphs in this identity.
Let us consider the region corresponding to the graph $G-\{u,v,w,s\}$ shown in picture (b). The removal of the four unit triangles with labels $u,v,w,s$ gives forced lozenges along the north, the northwest and the south sides of the hexagon, as well as along the side of the last triangle of the left fern. By removing these forced lozenges, we get a new region with the same number of tilings (see the region restricted by the bold contour). This new region is exactly an $R^{\leftarrow}$-type region with the $z$-parameter reduced by $1$ unit and the side-length of the last triangle in the left fern extended by $1$ unit (precisely, if the left fern ends with an up-pointing triangle, then the removal of the forced lozenges extends its side-length by $1$; if the left fern ends with a down-pointing triangle, then the removal of the $w$-triangle creates a new up-pointing triangle of side-length $1$ at the end of the left fern; in this latter case we regard the original fern as having $m+1$ triangles, the last of which has side-length $0$, so that again the last triangle is extended by $1$ unit). Moreover, the center of the new auxiliary hexagon (now with side-lengths $x,z-1,z-1,x,z-1,z-1$) is $1/2$ unit to the right of the center of the original auxiliary hexagon. This means that the leftmost point of the middle fern is now $1/2$ unit to the left of the center of the auxiliary hexagon. This explains why the type of the region changes.
For convenience, we denote from now on by $\textbf{a}^{+1}$ the sequence obtained from the sequence $\textbf{a}$ by adding $1$ to its last term if $\textbf{a}$ has an even number of terms, and by appending a new term equal to $1$ at the end of $\textbf{a}$ otherwise. We have just established the identity:
\begin{equation}
\operatorname{M}(G-\{ u,v,w,s\})=\operatorname{M}(R^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})).
\end{equation}
Working throughout the next four regions in the Figure \ref{fig:kuocenter1}(c)--(f), we get respectively:
\begin{equation}
\operatorname{M}(G-\{ u,v\})=\operatorname{M}(R^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{ w,s\})=\operatorname{M}(R^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{ u,s\})=\operatorname{M}(R^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{ v,w\})=\operatorname{M}(R^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})).
\end{equation}
One should note that the changes of the parameters $x, y, z$ and of the sequence $\textbf{a}$ lead to a change in the position of the center of the auxiliary hexagon; as a consequence, the types of our regions also change.
Plugging the above identities into identity (\ref{kuoeq1}) of the Kuo condensation, we get the recurrence:
\begin{align}\label{centerrecur1a}
\operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))&=
\operatorname{M}(R^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}) )\operatorname{M}(R^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+\operatorname{M}(R^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b}))\operatorname{M}(R^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})),
\end{align}
when $a< b$.
We also note that the above recurrence works in the case $a=0$ as well, by regarding the sequence $\textbf{a}$ as consisting of a single triangle of side-length $0$.
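For the reader's convenience, here is a two-line sketch (ours, in Python) of the operation $\textbf{a}\mapsto\textbf{a}^{+1}$ defined above.
\begin{verbatim}
# Sketch (ours) of the operation a -> a^{+1}.
def plus_one(a):
    a = list(a)
    if len(a) % 2 == 0 and a:   # evenly many terms: extend the last one by 1
        a[-1] += 1
    else:                       # otherwise: append a new term equal to 1
        a.append(1)
    return a

print(plus_one([1, 1]), plus_one([2, 1, 1]))   # [1, 2] and [2, 1, 1, 1]
\end{verbatim}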
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter2.eps}%
\end{picture}%
\begin{picture}(14268,22409)(1785,-22427)
\put(14602,-16082){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11655,-15362){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(2189,-17320){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2266,-19045){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4769,-21962){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(7369,-21371){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(7034,-16210){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(4087,-15490){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(9772,-9640){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(9849,-11365){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(12352,-14282){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(14952,-13691){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14617,-8530){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11670,-7810){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(2197,-9760){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2274,-11485){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4777,-14402){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(7377,-13811){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(7042,-8650){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(4095,-7930){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(9757,-2072){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(9834,-3797){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(12337,-6714){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(14937,-6123){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14602,-962){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(11655,-242){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(1981,-2079){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2058,-3804){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(4561,-6721){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(7161,-6130){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6826,-969){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(3879,-249){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(14937,-21243){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(12337,-21834){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(9834,-18917){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(9757,-17192){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\itdefault}{$z+e_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\odot}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$, when $a\geq b$. Kuo condensation is applied to the region $R^{\odot}_{2,1,2}(1,2 ;\ 2,1,1 ;\ 1,1)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter2}
\end{figure}
By applying Kuo condensation with the same choice of the vertices $u,v,w,s$ in the case $a\geq b$, we get a slightly different recurrence (see Figure \ref{fig:kuocenter2}):
\begin{align}\label{centerrecur1b}
\operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\leftarrow}_{x,y-1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))&=
\operatorname{M}(R^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\leftarrow}_{x-1,y-1,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+\operatorname{M}(R^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))\operatorname{M}(R^{\swarrow}_{x,y-1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})),
\end{align}
when $a\geq b$.
The only differences between the above two recurrences (\ref{centerrecur1a}) and (\ref{centerrecur1b}) are the $y$-parameters in the second, the fourth, and the sixth regions.
It is not hard to verify that the $h$-parameters (the sum of the quasi-perimeter and the $x$- and $z$-parameters) of all the five regions, that are different from $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ in the recurrences (\ref{centerrecur1a}) and (\ref{centerrecur1b}), are strictly less than $h$.
Indeed, let $p$ denote the quasi-perimeter of the region $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$. The quasi-perimeters of the other five regions in each of the above recurrences are respectively $p-3$, $p-2$, $p-1$, $p-2$, and $p-1$. Moreover, the sums of their $x$- and $z$-parameters are respectively $x+z-1$, $x+z$, $x+z-1$, $x+z$, and $x+z-1$.
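These values are immediate from the perimeter formulas in Subsection \ref{subsec:organize}; the following small sketch (ours, in Python, with our own shorthand labels for the region types) evaluates the quasi-perimeter drops in recurrence (\ref{centerrecur1a}) for sample values with $a<b$.
\begin{verbatim}
# Quasi-perimeter drops in the first recurrence (our own shorthand labels:
# "center" = R-odot, "left" = R-leftarrow, "sw" = R-swarrow, "nw" = R-nwarrow).
def quasi_perimeter(x, y, z, a, b, c, region):
    p = 2*x + 4*y + 4*z + 3*a + 3*b + 3*c + 2*abs(a - b)
    return p + (2 if region in ("sw", "nw") else 0)   # larger base hexagon

x, y, z, a, b, c = 5, 2, 4, 3, 7, 2                    # sample values with a < b
p = quasi_perimeter(x, y, z, a, b, c, "center")
others = [
    quasi_perimeter(x,     y,     z - 1, a + 1, b, c, "left"),
    quasi_perimeter(x + 1, y,     z - 1, a,     b, c, "center"),
    quasi_perimeter(x - 1, y,     z,     a + 1, b, c, "left"),
    quasi_perimeter(x,     y - 1, z,     a,     b, c, "nw"),
    quasi_perimeter(x,     y,     z - 1, a + 1, b, c, "sw"),
]
print([p - q for q in others])   # [3, 2, 1, 2, 1]
\end{verbatim}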
\subsection{Recurrences for $R^{\leftarrow}$-type regions}\label{subsec:recurR2}
We now obtain recurrences for the $R^{\leftarrow}$-type regions. We note that the same application of Kuo condensation in Theorem \ref{kuothm1} as in the case of the $R^{\odot}$-type regions does \emph{not} work here. The reason is that the removal of the unit triangles $u,v,w,s$ as in Figures \ref{fig:kuocenter1} and \ref{fig:kuocenter2} may `push' the center of the auxiliary hexagon too far away from the leftmost point of the middle fern, so that the removal of forced lozenges yields a new region that is not of one of the eight types: $R^{\odot}$-, $R^{\leftarrow}$-, $R^{\swarrow}$-, $R^{\nwarrow}$-, $Q^{\odot}$-, $Q^{\leftarrow}$-, $Q^{\nearrow}$-, and $Q^{\nwarrow}$-type.
We instead apply Kuo condensation as in Figure \ref{fig:kuocenter3}. The $u$-triangle is still on the northeast corner of the region; however, the positions of the other three unit triangles are changed, as shown in Figure \ref{fig:kuocenter3}(b).
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter3.eps}%
\end{picture}%
\begin{picture}(19001,29131)(1407,-28730)
\put(11715,-22046){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(11841,-24353){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(15577,-28188){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(19359,-27254){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(18299,-20844){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(14350,-19477){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(1897,-22054){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(2023,-24361){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(5759,-28196){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(9541,-27262){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(4297,101){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8246,-1266){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(9306,-7676){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5524,-8610){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(1788,-4775){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(1662,-2468){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14115,109){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(18064,-1258){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(19124,-7668){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15342,-8602){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11606,-4767){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11480,-2460){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(4307,-9585){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8256,-10952){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(9316,-17362){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5534,-18296){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(1798,-14461){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(1672,-12154){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14125,-9577){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(18074,-10944){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(19134,-17354){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15352,-18288){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11616,-14453){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11490,-12146){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(4532,-19485){\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8481,-20852){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{16}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\leftarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$, when $a\leq b$. Kuo condensation is applied to the region $R^{\leftarrow}_{3,2,2}(2,1,1 ;\ 2,2;\ 2,1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter3}
\end{figure}
Figure \ref{fig:kuocenter3} tells us that the product of the numbers of tilings of the two regions in the top row is equal to the product of the tiling numbers of the two regions in the middle row, plus the product of the tiling numbers of the two regions in the bottom row. The figure shows the case when $\textbf{b}$ has an odd number of terms; in this case, the removal of the $v$-triangle gives a new triangle of side-length $1$ at the end of the right fern. In the case when $\textbf{b}$ has an even number of terms, this removal increases the side-length of the last triangle of the right fern by $1$ unit. By considering forced lozenges as shown in the figure, we get
\begin{align}\label{centerrecur2a}
\operatorname{M}(R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x,y-1,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\nwarrow}_{x,y-1,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(R^{\swarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\leftarrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})),
\end{align}
for the case $a\leq b$.
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter4.eps}%
\end{picture}%
\begin{picture}(19035,29106)(1421,-28726)
\put(8256,-10952){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(9316,-17362){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5534,-18296){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(1798,-14461){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(1672,-12154){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14125,-9577){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(18074,-10944){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(19134,-17354){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15352,-18288){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11616,-14453){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11490,-12146){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(4532,-19485){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8481,-20852){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(9541,-27262){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5759,-28196){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(2023,-24361){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(1897,-22054){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14350,-19477){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(18299,-20844){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(19359,-27254){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15577,-28188){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11841,-24353){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11715,-22046){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(4297,101){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8246,-1266){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(9306,-7676){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5524,-8610){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(1788,-4775){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(1662,-2468){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14115,109){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(18064,-1258){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(19124,-7668){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15342,-8602){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11606,-4767){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11480,-2460){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(4307,-9585){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\end{picture}}
\caption{Obtaining the recurrence for the region $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a> b$. Kuo condensation is applied to the region $R^{\leftarrow}_{3,2,2}(2,2,1;\ 2,2;\ 2,1,1)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter4}
\end{figure}
Similarly, Figure \ref{fig:kuocenter4} tells us that
\begin{align}\label{centerrecur2b}
\operatorname{M}(R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) )\operatorname{M}(R^{\odot}_{x,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\nwarrow}_{x,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(R^{\swarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\leftarrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})),
\end{align}
for the case $a> b$.
As in the case of the $R^{\odot}$-type regions, the $h$-parameters of all regions appearing in the above two recurrences, except the first one, are strictly less than the $h$-parameter of the region $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$.
\subsection{Recurrences for $R^{\swarrow}$-type regions}\label{subsec:recurR3}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter5.eps}%
\end{picture}%
\begin{picture}(18668,27745)(1421,-28847)
\put(12508,-24684){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(15985,-28363){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(19050,-27412){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(18435,-21794){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(14640,-20129){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(3176,-22130){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(3489,-24690){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(6966,-28369){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(10031,-27418){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(9416,-21800){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(5621,-20135){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(11595,-12764){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(11908,-15324){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(15385,-19003){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(18450,-18052){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(17835,-12434){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(4179,-1321){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(7974,-2986){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(8589,-8604){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5524,-9555){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(2047,-5876){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(1734,-3316){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(13198,-1315){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16993,-2980){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17608,-8598){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14543,-9549){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11066,-5870){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(10753,-3310){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5021,-10775){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8816,-12440){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(9431,-18058){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6366,-19009){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(2889,-15330){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(2576,-12770){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14040,-10769){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(12195,-22124){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a\leq b$. Kuo condensation is applied to the region $R^{\swarrow}_{3,2,2}(2,1;\ 2,2;\ 2,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter5}
\end{figure}
We apply Kuo condensation in Theorem \ref{kuothm1} to the dual graph $G$ of the region $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ with the four vertices $u,v,w,s$ chosen as shown in Figure \ref{fig:kuocenter5}(b) in the case $a\leq b$. In particular, the $u$-triangle corresponding to the vertex $u$ is now at the northwest corner of the region, while the $v$-, $w$-, $s$-triangles are the shaded ones appearing on the boundary of the region as we go in clockwise order from the $u$-triangle. By removing forced lozenges in the regions corresponding to the graphs $G-\{u,v,w,s\}$, $G-\{u,v\}$, $G-\{w,s\}$, and $G-\{u,s\}$ (as shown in Figures \ref{fig:kuocenter5}(b)--(e), respectively), we get the regions $R^{\odot}_{x-1,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})$, $R^{\odot}_{x,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})$, $R^{\swarrow}_{x-1,y-1,z+1}(\textbf{a};\ \textbf{c};\ \textbf{b})$, and $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, respectively.
Unlike the situations in the $R^{\odot}$- and $R^{\leftarrow}$-type regions considered above, after removing forced lozenges from the region corresponding to the graph $G-\{v,w\}$, we do \emph{not} get a region of any of the four $R$-types (see Figure \ref{fig:kuocenter5}(f)). Rotating this resulting region by $180^{\circ}$, we get the region $R^{\nwarrow}_{x-1,y-1,z}(\textbf{b}^{+1};\ \overline{\textbf{c}};\ \textbf{a})$. Here $\overline{\textbf{c}}$ denotes the sequence obtained from the sequence $\textbf{c}$ by reversing the order of the terms if $\textbf{c}$ has an even number of terms; otherwise we reverse the sequence and include a new $0$ term in front of it. This way, we get the following recurrence for the $R^{\swarrow}$-type regions, when $a\leq b$:
\begin{align}\label{centerrecur3a}
\operatorname{M}(R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\odot}_{x,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(R^{\swarrow}_{x-1,y-1,z+1}(\textbf{a};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\nwarrow}_{x-1,y-1,z}(\textbf{b}^{+1};\ \overline{\textbf{c}};\ \textbf{a})).
\end{align}
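Spelled out, the definition of $\overline{\textbf{c}}$ above reads
\begin{equation*}
\overline{(c_1,c_2,\dots,c_{2k})}=(c_{2k},\dots,c_2,c_1)
\qquad\text{and}\qquad
\overline{(c_1,c_2,\dots,c_{2k+1})}=(0,c_{2k+1},\dots,c_2,c_1);
\end{equation*}
in particular, for the sequence $\textbf{c}=(2,2)$ of Figure \ref{fig:kuocenter5}, we simply have $\overline{\textbf{c}}=(2,2)$.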
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter6.eps}%
\end{picture}%
\begin{picture}(18668,27751)(1421,-28847)
\put(12508,-24684){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(15985,-28363){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(19050,-27412){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(18435,-21794){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(14640,-20129){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(3176,-22130){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(3489,-24690){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(6966,-28369){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(10031,-27418){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(9416,-21800){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(5621,-20135){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(11595,-12764){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(11908,-15324){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(15385,-19003){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(18450,-18052){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(17835,-12434){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(4179,-1321){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(7974,-2986){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(8589,-8604){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(5524,-9555){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(2047,-5876){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(1734,-3316){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(13198,-1315){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16993,-2980){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17608,-8598){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14543,-9549){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11066,-5870){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(10753,-3310){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5021,-10775){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8816,-12440){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(9431,-18058){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6366,-19009){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(2889,-15330){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(2576,-12770){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14040,-10769){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(12195,-22124){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a> b$. Kuo condensation is applied to the region $R^{\swarrow}_{3,2,2}(2,2;\ 2,2;\ 1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter6}
\end{figure}
Similarly, when $a> b$, we apply Kuo condensation to the dual graph $G$ of the region $R^{\swarrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$ in the same way as shown in Figure \ref{fig:kuocenter6}. The removal of forced lozenges yields a slightly different recurrence from that in the case $a\leq b$ above:
\begin{align}\label{centerrecur3b}
\operatorname{M}(R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y,z}(\textbf{a};\ \textbf{c}; \ \textbf{b}^{+1}))&=\operatorname{M}(R^{\odot}_{x,y+1,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(R^{\swarrow}_{x-1,y-1,z+1}(\textbf{a};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) )\operatorname{M}(R^{\nwarrow}_{x-1,y,z}(\textbf{b}^{+1};\ \overline{\textbf{c}}; \ \textbf{a})).
\end{align}
Here the second factor of the second term on the right-hand side is again obtained by rotating the region enclosed by the bold contour in Figure \ref{fig:kuocenter6}(f) by $180^{\circ}$.
\subsection{Recurrences for $R^{\nwarrow}$-type regions}\label{subsec:recurR4}
We now consider the recurrences for the last $R$-type region, $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$. We apply Kuo's Theorem \ref{kuothm1} to the dual graph $G$ of the region in the case $a<b$. The four vertices $u,v,w,s$ correspond to the four shaded unit triangles with the same labels, as illustrated in Figure \ref{fig:kuocenter7}(b). The difference from the cases treated above is that only two of these four unit triangles lie on the boundary of the base hexagon; the other two are at the ends of the left and right ferns. By considering the forced lozenges arising from the removal of the four shaded triangles, we get
\begin{equation}
\operatorname{M}(G-\{u,v,w,s\})= \operatorname{M}(R^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{w,s\})= \operatorname{M}(R^{\leftarrow}_{x,y+1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{u,s\})=\operatorname{M}(R^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})),
\end{equation}
\begin{equation}
\operatorname{M}(G-\{v,w\})=\operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{equation}
(see Figures \ref{fig:kuocenter7}(b), (d), (e), and (f), respectively).
For the region corresponding to $G-\{u,v\}$, we rotate the leftover region, after removing the forced lozenges, by $180^{\circ}$ to obtain the region $R^{\swarrow}_{x-1,y-1,z}(\textbf{b}^{+1};\ \overline{\textbf{c}};\ \textbf{a})$ (see Figure \ref{fig:kuocenter7}(c)). This means that we get the following recurrence for $a<b$:
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter7.eps}%
\end{picture}%
\begin{picture}(16191,25832)(3056,-26739)
\put(9479,-8143){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6757,-8970){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3279,-5073){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3240,-3065){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14123,-9753){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16987,-10970){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17868,-16780){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15146,-17607){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11668,-13710){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(11629,-11702){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5941,-9752){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8805,-10969){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(9686,-16779){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6964,-17606){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3486,-13709){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3447,-11701){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14530,-18245){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(17394,-19462){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(18275,-25272){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15553,-26099){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(12075,-22202){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(12036,-20194){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(6348,-18244){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(9212,-19461){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(10093,-25271){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(7371,-26098){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3893,-22201){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3854,-20193){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(13916,-1117){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16780,-2334){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17661,-8144){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14939,-8971){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11461,-5074){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(11422,-3066){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5734,-1116){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8598,-2333){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a< b$. Kuo condensation is applied to the region $R^{\nwarrow}_{2,2,2}(2,1;\ 2,1;\ 2,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter7}
\end{figure}
\begin{align}\label{centerrecur4a}
\operatorname{M}(R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\swarrow}_{x-1,y-1,z}(\textbf{b}^{+1};\ \overline{\textbf{c}}; \ \textbf{a}))\operatorname{M}(R^{\leftarrow}_{x,y+1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})) \operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})).
\end{align}
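As in the previous cases, substituting the identities above for $G-\{u,v,w,s\}$, $G-\{w,s\}$, $G-\{u,s\}$, $G-\{v,w\}$, and $G-\{u,v\}$ into Kuo's condensation identity from Theorem \ref{kuothm1}, namely
\begin{equation*}
\operatorname{M}(G)\operatorname{M}(G-\{u,v,w,s\})=\operatorname{M}(G-\{u,v\})\operatorname{M}(G-\{w,s\})+\operatorname{M}(G-\{u,s\})\operatorname{M}(G-\{v,w\}),
\end{equation*}
with $\operatorname{M}(G)=\operatorname{M}(R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))$, yields exactly the recurrence (\ref{centerrecur4a}).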
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter8.eps}%
\end{picture}%
\begin{picture}(16194,25834)(3053,-26737)
\put(9479,-8143){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6757,-8970){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3279,-5073){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3240,-3065){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14123,-9753){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16987,-10970){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17868,-16780){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15146,-17607){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11668,-13710){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(11629,-11702){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5941,-9752){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8805,-10969){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(9686,-16779){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(6964,-17606){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3486,-13709){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3447,-11701){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(14530,-18245){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(17394,-19462){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(18275,-25272){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(15553,-26099){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(12075,-22202){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(12036,-20194){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(6348,-18244){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(9212,-19461){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(10093,-25271){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(7371,-26098){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(3893,-22201){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(3854,-20193){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(13916,-1117){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(16780,-2334){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\put(17661,-8144){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+o_a+e_b+e_c$}%
}}}}}
\put(14939,-8971){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+o_b+o_c$}%
}}}}
\put(11461,-5074){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+o_a+e_b+e_c+|a-b|+1$}%
}}}}}
\put(11422,-3066){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$z+e_a+o_b+o_c$}%
}}}}}
\put(5734,-1116){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+e_b+e_c$}%
}}}}
\put(8598,-2333){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{$2y+z+e_a+o_b+o_c+|a-b|+1$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a> b$. Kuo condensation is applied to the region $R^{\nwarrow}_{2,2,2}(2,2;\ 2,1;\ 1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter8}
\end{figure}
Working similarly, as shown in Figure \ref{fig:kuocenter8}, in the case when $a> b$, we get
\begin{align}\label{centerrecur4b}
\operatorname{M}(R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\swarrow}_{x-1,y,z}(\textbf{b}^{+1};\ \overline{\textbf{c}}; \ \textbf{a}))\operatorname{M}(R^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})) \operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})).
\end{align}
Finally, our choice of the four vertices $u,v,w,s$ gives rise to an additional case when $a=b$ (the corresponding picture for Kuo condensation is not shown here). Proceeding similarly to the two cases treated above gives us:
\begin{align}\label{centerrecur4c}
\operatorname{M}(R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(R^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1}))&=\operatorname{M}(R^{\swarrow}_{x-1,y-1,z}(\textbf{b}^{+1};\ \overline{\textbf{c}}; \ \textbf{a}))\operatorname{M}(R^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(R^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})) \operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))
\end{align}
when $a=b$.
\subsection{Recurrences for $Q^{\odot}$-type regions}\label{subsec:recurQ1}
We now setup recurrences for the $Q^{\odot}$-type regions.
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter9.eps}%
\end{picture}%
\begin{picture}(16984,26942)(1630,-28821)
\put(16354,-18240){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(13294,-19150){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(10954,-15910){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(10864,-14490){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(4930,-20392){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(8130,-20852){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(7770,-27222){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(4710,-28132){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(2370,-24892){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(2280,-23472){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(13718,-20515){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(16918,-20975){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(16558,-27345){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(13498,-28255){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(11158,-25015){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(11068,-23595){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(4501,-2201){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7701,-2661){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(7341,-9031){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(4281,-9941){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(1941,-6701){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(1851,-5281){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(13289,-2324){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(16489,-2784){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(16129,-9154){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(13069,-10064){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(10729,-6824){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(10639,-5404){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(4726,-11287){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7926,-11747){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(7566,-18117){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(4506,-19027){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(2166,-15787){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(2076,-14367){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+b-a$}%
}}}}}
\put(13514,-11410){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(16714,-11870){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a< b$. Kuo condensation is applied to the region $Q^{\odot}_{2,2,2}(1,2;\ 1,2;\ 2,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter9}
\end{figure}
We again apply Kuo's Theorem \ref{kuothm1} to the dual graph $G$ of the region $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ as in Figure \ref{fig:kuocenter9}, with the four vertices $u,v,w,s$ chosen as in picture (b). The six regions in Figure \ref{fig:kuocenter9} correspond to the six terms in the equation of Theorem \ref{kuothm1}. Again, the figure shows that the product of the tiling numbers of the two regions in the top row equals the product of the tiling numbers of the two regions in the middle row, plus the product of the tiling numbers of the two regions in the bottom row. By considering forced lozenges as shown in the figure, the above identity is converted into the following recurrence for $Q^{\odot}$-regions:
\begin{align}\label{centerrecur5a}
\operatorname{M}(Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))&=\operatorname{M}(Q^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(Q^{\nearrow}_{x,y,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow}; \ \textbf{a}^{+1}))\operatorname{M}(Q^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{align}
for the case $a< b$. Strictly speaking, the region obtained by removing forced lozenges from the region corresponding to the graph $G-\{u,s\}$ is \emph{not} a $Q^{\odot}$-, $Q^{\leftarrow}$-, $Q^{\nwarrow}$-, or $Q^{\nearrow}$-type region. We need to reflect this region over a vertical line to obtain the region $Q^{\nearrow}_{x,y,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1})$, where $\textbf{c}^{\leftrightarrow}$ denotes the sequence obtained by reversing the sequence $\textbf{c}$ if $\textbf{c}$ has an odd number of terms, and by reversing $\textbf{c}$ and adding a $0$ term at the beginning in the case of an even number of terms. The reader should distinguish the sequence $\textbf{c}^{\leftrightarrow}$ from its `dual', $\overline{\textbf{c}}$, appearing in the recurrences for the regions $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ above.
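Explicitly, $\textbf{c}^{\leftrightarrow}$ uses the opposite parity convention to $\overline{\textbf{c}}$:
\begin{equation*}
(c_1,\dots,c_{2k+1})^{\leftrightarrow}=(c_{2k+1},\dots,c_1)
\qquad\text{and}\qquad
(c_1,\dots,c_{2k})^{\leftrightarrow}=(0,c_{2k},\dots,c_1),
\end{equation*}
while $\overline{\textbf{c}}$ is the plain reversal for an even number of terms and the reversal preceded by a $0$ term for an odd number of terms.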
Working in the same way as in the case $a< b$, we obtain:
\begin{align}\label{centerrecur5b}
\operatorname{M}(Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\leftarrow}_{x,y-1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))&=\operatorname{M}(Q^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\leftarrow}_{x-1,y-1,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(Q^{\nearrow}_{x,y-1,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1}))\operatorname{M}(Q^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}))
\end{align}
for $a\geq b$.
One may notice that the application of Kuo condensation to the $Q^{\odot}$-type regions is similar to that in the case of the $R^{\odot}$-type regions treated before. However, the resulting recurrences in the two cases are \emph{not} the same.
\subsection{Recurrences for $Q^{\leftarrow}$-type regions}\label{subsec:recurQ2}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter10.eps}%
\end{picture}%
\begin{picture}(17318,26173)(2183,-27696)
\put(5015,-9464){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(4938,-1845){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(2404,-4949){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(2354,-6040){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(8580,-9158){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(8184,-2260){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\put(13938,-1834){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(11404,-4938){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(11354,-6029){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(14015,-9453){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(17580,-9147){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(17184,-2249){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\put(5148,-10701){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(2614,-13805){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(2564,-14896){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(5225,-18320){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(8790,-18014){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(8394,-11116){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\put(14148,-10690){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(11614,-13794){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(11564,-14885){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(14225,-18309){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(17790,-18003){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(17394,-11105){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\put(5132,-19455){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(2598,-22559){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(2548,-23650){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(5209,-27074){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(8774,-26768){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(8378,-19870){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\put(14132,-19444){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(11598,-22548){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(11548,-23639){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c+a-b$}%
}}}}}
\put(14209,-27063){\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(17774,-26757){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(17378,-19859){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{17}{20.4}{\rmdefault}{\mddefault}{\itdefault}{$y+z+o_a+o_b+o_c+a-b$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a> b$.
Kuo condensation is applied to the region $Q^{\leftarrow}_{3,2,2}(2,2;\ 2,1;\ 1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter10}
\end{figure}
We now apply Kuo condensation (Theorem \ref{kuothm1}) to the dual graph $G$ of the region $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, with the four vertices $u,v,w,s$ chosen similarly to the case of the $R^{\leftarrow}$-type regions (illustrated in Figure \ref{fig:kuocenter10}(b)). Again, we do not show these vertices directly; instead we show the unit triangles corresponding to them. The removal of the $u$-, $v$-, $w$-, $s$-triangles yields several forced lozenges along the boundary of the region and at the end of the left fern (see Figure \ref{fig:kuocenter10} for the case $a>b$; the case $a\leq b$ can be treated in the same manner). In all cases, after removing the forced lozenges, we recover a new region of the $Q^{\odot}$-, $Q^{\leftarrow}$-, or $Q^{\nwarrow}$-type, except for the case of $G-\{w,s\}$. After removing the forced lozenges from the region corresponding to $G-\{w,s\}$, we need to reflect the resulting region over a vertical line to get the region $Q^{\nearrow}_{x,y-1,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a})$. In particular, we obtain the following recurrences:
\begin{align}\label{centerrecur6a}
\operatorname{M}(Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\odot}_{x,y-1,z-1}(\textbf{a};\ \textbf{c}; \textbf{b}^{+1}))&=\operatorname{M}(Q^{\nwarrow}_{x,y-1,z-1}(\textbf{a};\ \textbf{c}; \textbf{b}^{+1}))\operatorname{M}(Q^{\nearrow}_{x,y-1,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow}; \textbf{a}))\notag\\
&+
\operatorname{M}(Q^{\leftarrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c}; \textbf{b})) \operatorname{M}(Q^{\odot}_{x-1,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})),
\end{align}
for the case $a\leq b$, and
\begin{align}\label{centerrecur6b}
\operatorname{M}(Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\odot}_{x,y,z-1}(\textbf{a};\ \textbf{c}; \textbf{b}^{+1}))&=\operatorname{M}(Q^{\nwarrow}_{x,y,z-1}(\textbf{a};\ \textbf{c}; \textbf{b}^{+1}))\operatorname{M}(Q^{\nearrow}_{x,y-1,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow}; \textbf{a}))\notag\\
&+
\operatorname{M}(Q^{\leftarrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c}; \textbf{b})) \operatorname{M}(Q^{\odot}_{x-1,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1})),
\end{align}
for the case $a> b$.
\subsection{Recurrences for $Q^{\nwarrow}$-type regions}\label{subsec:recurQ3}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter11.eps}%
\end{picture}%
\begin{picture}(16325,26809)(1830,-28106)
\put(7780,-20359){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(4838,-19759){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(8094,-26619){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(5288,-27719){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(2248,-23939){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(2418,-21899){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(13216,-19654){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(16158,-20254){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(16472,-26514){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(13666,-27614){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(10626,-23834){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(10796,-12809){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(10626,-14849){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(13666,-18629){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(16472,-17529){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(16158,-11269){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(4621,-1691){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7563,-2291){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(7877,-8551){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(5071,-9651){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(2031,-5871){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(2201,-3831){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(12999,-1586){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(15941,-2186){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(16255,-8446){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(13449,-9546){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(10409,-5766){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(10579,-3726){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(4838,-10774){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7780,-11374){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(a-b)+1$}%
}}}}}
\put(8094,-17634){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c$}%
}}}}}
\put(5288,-18734){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(2248,-14954){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(a-b)+1$}%
}}}}}
\put(2418,-12914){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\put(13216,-10669){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(10796,-21794){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, when $a> b$. Kuo condensation is applied to the region $Q^{\nwarrow}_{2,2,2}(2,2;\ 1,2;\ 1,2)$ (picture (a)) as shown in picture (b).}\label{fig:kuocenter11}
\end{figure}
Like the cases of the $Q^{\odot}$- and $Q^{\leftarrow}$-type regions treated above, the application of Kuo condensation to the $Q^{\nwarrow}$-type regions is similar to that for their $R$-counterparts, the $R^{\nwarrow}$-type regions. In particular, we apply Kuo's Theorem \ref{kuothm1} to the dual graph $G$ of the region $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ as shown in Figure \ref{fig:kuocenter11} for $a>b$ (the cases $a< b$ and $a=b$ are similar). By considering forced lozenges, we get
\begin{align}\label{centerrecur8a}
\operatorname{M}(Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c}; \ \textbf{b}^{+1}))&=\operatorname{M}(Q^{\nearrow}_{x-1,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b}^{+1}))\operatorname{M}(Q^{\leftarrow}_{x,y+1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(Q^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c}; \ \textbf{b}^{+1})) \operatorname{M}(Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{align}
for the case $a< b$,
\begin{align}\label{centerrecur8b}
\operatorname{M}(Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c}; \ \textbf{b}^{+1}))&=\operatorname{M}(Q^{\nearrow}_{x-1,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(Q^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(Q^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})) \operatorname{M}(Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{align}
for the case $a> b$, and
\begin{align}\label{centerrecur8c}
\operatorname{M}(Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\odot}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c}; \ \textbf{b}^{+1}))&=\operatorname{M}(Q^{\nearrow}_{x-1,y-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b}^{+1}))\operatorname{M}(Q^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\notag\\
&+
\operatorname{M}(Q^{\nwarrow}_{x-1,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}^{+1})) \operatorname{M}(Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})),
\end{align}
when $a=b$.
\subsection{Recurrences for $Q^{\nearrow}$-type regions}\label{subsec:recurQ4}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Kuocenter12.eps}%
\end{picture}%
\begin{picture}(16901,26307)(554,-28957)
\put(12141,-3076){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(4075,-28407){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(1185,-25037){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(1135,-23207){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(15854,-21295){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(15554,-27655){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\put(12474,-28525){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(9584,-25155){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(9534,-23325){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(12611,-20596){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(12321,-11846){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7155,-27537){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\put(7455,-21177){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(4055,-20437){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(9189,-14595){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(9239,-16425){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(12129,-19795){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(3681,-2941){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7081,-3681){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(6781,-10041){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\put(3701,-10911){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(811,-7541){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(761,-5711){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(15480,-3799){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(15180,-10159){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\put(12100,-11029){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(9210,-7659){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(9160,-5829){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(3710,-11707){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+e_a+e_b+e_c$}%
}}}}
\put(7110,-12447){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(6810,-18807){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\put(3730,-19677){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$x+o_a+o_b+o_c$}%
}}}}
\put(840,-16307){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+1$}%
}}}}}
\put(790,-14477){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+(b-a)$}%
}}}}}
\put(15509,-12565){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+o_a+o_b+o_c+1$}%
}}}}}
\put(15209,-18925){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{$y+z+e_a+e_b+e_c+(b-a)$}%
}}}}}
\end{picture}%
}
\caption{Obtaining the recurrence for the region $Q^{\nearrow}_{x,y,z}(\textbf{a};\textbf{c};\textbf{b})$ when $a< b$. Kuo condensation is applied to the region $Q^{\nearrow}_{3,2,2}(1,2 ;\ 1,2;\ 2,2)$ (picture (c)) as shown in picture (d).}\label{fig:kuocenter12}
\end{figure}
We now need to use a different Kuo condensation from that in the previous cases. In particular, we apply here Theorem \ref{kuothm2} (as opposed to Theorem \ref{kuothm1} as in the previous cases) with the four vertices selected as in Figure \ref{fig:kuocenter12}(d). The regions in Figures \ref{fig:kuocenter12}(a)--(f) correspond to the terms in the equation of Theorem \ref{kuothm2}.
We first consider the case $a< b$. The removal of forced lozenges in the regions corresponding to $G-\{u,s\}$ and $G-\{u,w\}$ gives us respectively the regions $Q^{\odot}_{x,y+1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})$. However, for the regions corresponding to $G-\{v,w\}$, $G-\{u,v,w,s\}$, and $G-\{v,s\}$, we do not end up with a $Q$-region after removing forced lozenges. We need to use the symmetry of the $Q$-regions: reflecting the leftover regions over a vertical line, we get the regions $Q^{\leftarrow}_{x,y,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a})$, $Q^{\nwarrow}_{x,y,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1})$, and $Q^{\nwarrow}_{x-1,y,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1})$, respectively (see Figure \ref{fig:kuocenter12}). This way, we get the recurrence
\begin{align}\label{centerrecur7a}
\operatorname{M}(Q^{\odot}_{x,y+1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\operatorname{M}(Q^{\leftarrow}_{x,y,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}))&=\operatorname{M}(Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\nwarrow}_{x,y,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1}))\notag\\
&+
\operatorname{M}(Q^{\nearrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\nwarrow}_{x-1,y,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1})),
\end{align}
for the case $a< b$.
Working similarly for the case $a\geq b$, we have
\begin{align}\label{centerrecur7b}
\operatorname{M}(Q^{\odot}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}))\operatorname{M}(Q^{\leftarrow}_{x,y,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow}; \ \textbf{a}))&=\operatorname{M}(Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\nwarrow}_{x,y-1,z-1}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1}))\notag\\
&+
\operatorname{M}(Q^{\nearrow}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})) \operatorname{M}(Q^{\nwarrow}_{x-1,y-1,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow};\ \textbf{a}^{+1})).
\end{align}
\subsection{Two extremal cases}
In this subsection, we deal with two extremal cases when certain parameters of our 8 families of regions achieve their minimal values.
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Special4.eps}%
\end{picture}%
\begin{picture}(18044,9252)(1830,-10258)
\put(16141,-5821){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_4$}%
}}}}
\put(17896,-5821){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(18618,-5414){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(4604,-1311){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(7861,-2031){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(8911,-8781){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(5320,-9673){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(1944,-5905){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(2091,-3911){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(14167,-1287){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(15601,-5501){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_3$}%
}}}}
\put(15195,-9673){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(11633,-5735){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2y+z+e_a+o_b+o_c+|a-b|$}%
}}}}}
\put(17775,-2081){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2y+z+o_a+e_b+e_c+|a-b|$}%
}}}}}
\put(11906,-3829){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(18757,-8852){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(2746,-5371){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(3491,-5871){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_3$}%
}}}}
\put(5251,-5371){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(6361,-5811){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(8851,-5391){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(7981,-5821){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(12646,-5791){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(13351,-5326){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(14881,-5431){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\end{picture}%
}
\caption{Eliminating triangles of side-length $0$ from the ferns.}\label{fig:Special4}
\end{figure}
We first consider the case when one or more triangles in one of the three ferns have side-length $0$. The following lemma intuitively says that we can simply skip this case when working on our inductive proof on $h:=p+x+z$.
\begin{lem}\label{lem1}
For any region of one of the eight types $R^{\odot},$ $ R^{\leftarrow},$ $ R^{\nwarrow},$ $ R^{\swarrow},$ $Q^{\odot},$ $Q^{\leftarrow},$ $ Q^{\nwarrow}$, and $Q^{\nearrow}$, we can find a new region of the same type (1) whose number of tilings is the same,
(2) whose $h$-parameter is not greater, (3) whose left and right ferns consist only of triangles with positive side-lengths, and (4) whose middle fern contains
no triangle of side-length $0$, except possibly for the first one.
\end{lem}
\begin{proof}
We only consider the region $R:=R^{\odot}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b})$; the other seven types of regions can be treated similarly.
We will show how to eliminate $0$-triangles in the three ferns without changing the tiling number or increasing the $h$-parameter. We consider the following three $0$-eliminating procedures for the left fern:
(1) If $a_1=a_2=\dotsc=a_{2i}=0$, for some $i\geq 1$, we can simply truncate the first $2i$ zero terms in the sequence $\textbf{a}$. The new
region is `exactly' the old one; however, strictly speaking, it has fewer $0$-triangles in the left fern.
(2) If $a_1=0$ and $a_2>0$, then we can remove forced lozenges along the southwest side of the region $R$ and obtain the region
$R^{\odot}_{x,y,z}(a_3,\dots,a_m;\ \textbf{c};\ \textbf{b})$ (see Figure \ref{fig:Special4}(a)). The new region has the same number of tilings as the original one, an
$h$-parameter not exceeding $h$, and fewer $0$-triangles in the left fern.
(3) If $a_i=0$, for some $i>1$, then we can eliminate this $0$-triangle by combining the $(i-1)$-th and the $(i+1)$-th triangles in the fern (as shown
in Figure \ref{fig:Special4}(b)).
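To illustrate the bookkeeping on a small, hypothetical left fern (reading `combining' in (3) as replacing the subsequence $(a_{i-1},0,a_{i+1})$ by the single entry $a_{i-1}+a_{i+1}$), the sequence $(0,0,2,0,3,1)$ would be processed as
\begin{align*}
(0,0,2,0,3,1)\ \xrightarrow{\ (1)\ }\ (2,0,3,1)\ \xrightarrow{\ (3)\ }\ (2+3,1)=(5,1),
\end{align*}
which has no $0$-triangles left.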
Repeating these three procedures if needed, one can eliminate all $0$-triangles from the left fern. Working similarly for the right fern, we obtain a region with no $0$-triangles in the left and right ferns. For the middle fern, we apply procedure (3) to eliminate all $0$-triangles, except possibly for the first one. This finishes our proof.
\end{proof}
The next lemma helps us handle the extremal case with respect to the $y$-parameter of our main proof provided in the next section.
\begin{lem}\label{lem2}
For any region of one of the eight types with the minimal $y$-parameter (i.e., $y=0$ for the $R^{\odot}$-, $R^{\leftarrow}$-, $Q^{\odot}$-, and $Q^{\leftarrow}$-type regions; $y=0$ or $-1$ for the other four types of regions), we can find another region of one of the eight types whose number of tilings is the same and whose $h$-parameter is strictly smaller.
\end{lem}
\begin{figure}
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{SpecialR1.eps}%
\end{picture}%
\begin{picture}(18272,9633)(1376,-9602)
\put(2558,-5927){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(3479,-5539){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(5013,-5480){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(5831,-5927){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(9411,-5303){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(8388,-5893){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(7468,-5480){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(4881,-401){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(8271,-1901){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c+b-a$}%
}}}}}
\put(9251,-8121){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(5451,-9041){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(2041,-6441){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(1391,-5261){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$b-a$}%
}}}}}
\put(1991,-3241){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(14906,-3608){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(15711,-4055){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(14292,-258){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(17551,-1040){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(18547,-7351){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(14491,-8998){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(11257,-5610){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c+a-b$}%
}}}}}
\put(19189,-4038){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a-b$}%
}}}}}
\put(10873,-2807){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(11195,-4155){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(12233,-3583){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(13358,-4021){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_3$}%
}}}}
\put(18268,-3667){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(17348,-3962){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\end{picture}%
}
\caption{Obtaining a $Q^{\odot}$-type (resp., $Q^{\leftarrow}$-type) region from an $R^{\odot}$-type (resp., $R^{\leftarrow}$-type) region by removing forced lozenges.}\label{SpecialR1}
\end{figure}
\begin{proof}
We first recall that the $y$-parameter can only obtain the value $-1$ in the following four cases: (1) the case of $R^{\nwarrow}$-type regions with $a<b$, (2) the case of $R^{\swarrow}$-type regions with $a>b$, (3) the case of $Q^{\nwarrow}$-type regions with $a<b$, and (4) the case of $Q^{\nearrow}$-type regions with $a<b$.
By Lemma \ref{lem1}, we can assume, without loss of generality, that the $a_i$'s, $b_j$'s, and $c_t$'s are all positive for $i\geq 1$, $j\geq1$, $t\geq 2$.
If our region is of type $R^{\odot}$ or of type $R^{\leftarrow}$, has its left fern not longer than the right fern (i.e., $a\leq b$), and has $y=0$, then there are several forced lozenges along the southeast side.
By removing these lozenges, we get an upside-down $Q^{\odot}$- or $Q^{\leftarrow}$-type region whose $h$-parameter is one unit less than $h$. In particular, we have:
\begin{align}\label{specialeq1}
\operatorname{M}(R^{\odot}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))&=\operatorname{M}(Q^{\odot}_{x, \min(b_1,b-a),z}(\textbf{a};\ {}^0\textbf{c};\ b_2,\dotsc,b_n));\\
\operatorname{M}(R^{\leftarrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))&=\operatorname{M}(Q^{\leftarrow}_{x, \min(b_1,b-a),z}(\textbf{a};\ {}^0\textbf{c};\ b_2,\dotsc,b_n))
\end{align}
(see Figure \ref{SpecialR1}(a) for the case of $R^{\odot}$-type regions; the case of $R^{\leftarrow}$-type regions is analogous). Recall that ${}^0\textbf{c}$ denotes the sequence obtained by including a new $0$ term in front of the sequence $\textbf{c}$.
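For instance, if $\textbf{c}=(1,2)$, then ${}^0\textbf{c}=(0,1,2)$.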
Similarly, if $a\geq b$ and $y=0$, then the removal of forced lozenges along the northwest side of the region $R^{\odot}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b})$
(resp., $R^{\leftarrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b})$) gives the region $Q^{\odot}_{x, \min(a_1,a-b),z}(a_2,\dotsc,a_m;\textbf{c}; \textbf{b})$ (resp., $Q^{\leftarrow}_{x, \min(a_1,a-b),z}(a_2,\dotsc,a_m;\textbf{c}; \textbf{b})$).
See Figure \ref{SpecialR1}(b) for the case of $R^{\odot}$-type regions; the case of $R^{\leftarrow}$-type regions is similar.
Next, let us consider the case of the $R^{\swarrow}$-type regions. If $a\leq b$ and $y=0$, then after removing forced lozenges as in the cases of the $R^{\odot}$- and $R^{\leftarrow}$-type regions above, we obtain
\begin{align}\label{specialeq2}
\operatorname{M}(R^{\swarrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))=\operatorname{M}(Q^{\nearrow}_{x, \min(b_1,b-a),z}(\textbf{a};\ {}^0\textbf{c};\ b_2,\dotsc,b_n)).
\end{align}
If $a>b$ and $y=-1$, then we have forced lozenges along the northwest side of the region $R^{\swarrow}_{x,-1,z}(\textbf{a};\textbf{c};\textbf{b})$. By removing these forced lozenges, we get the region $Q^{\nearrow}_{x,\min(a_1, a-b)-1,z}(\textbf{b};\ \textbf{c}^{\leftrightarrow}; \ a_2,\dotsc,a_m)$ (see Figure \ref{fig:SpecialR4}(a) for an example). Recall that $\textbf{c}^{\leftrightarrow}$ is the sequence obtained from $\textbf{c}$ by reversing its order if the number of terms is odd; otherwise, it is obtained by reversing the order and including a $0$ term in front of the resulting sequence.
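For instance, if $\textbf{c}=(1,2,3)$, then $\textbf{c}^{\leftrightarrow}=(3,2,1)$; if $\textbf{c}=(1,2)$, then $\textbf{c}^{\leftrightarrow}=(0,2,1)$.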
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{SpecialR4.eps}%
\end{picture}%
\begin{picture}(18070,8619)(1404,-10647)
\put(4808,-2374){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(8695,-3189){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(9923,-5433){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$a-b-1$}%
}}}}}
\put(8695,-8905){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(1638,-4712){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(1842,-6602){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c+b-a-1$}%
}}}}}
\put(4808,-10027){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(14012,-2295){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(11437,-5135){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(17658,-3519){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c+b-a-1$}%
}}}}}
\put(11400,-7427){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(10843,-6471){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$b-a-1$}%
}}}}}
\put(14136,-10027){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(18101,-9295){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(3729,-5933){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_3$}%
}}}}
\put(5360,-5569){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(6234,-5962){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(9147,-5540){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(8258,-5977){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(11899,-6865){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(12715,-6443){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(14360,-6501){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(15161,-6894){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(18351,-6501){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(17608,-6909){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(16792,-6472){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(5579,-10491){\makebox(0,0)[lb]{\smash{{\SetFigFont{20}{24.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}(a)}%
}}}}
\put(15001,-10535){\makebox(0,0)[lb]{\smash{{\SetFigFont{20}{24.0}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}(b)}%
}}}}
\put(2113,-5940){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(2943,-5518){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\end{picture}
}
\caption{(a) Obtaining a $Q^{\nearrow}$-type region from the region $R^{\swarrow}_{x,-1,z}(\textbf{a};\textbf{c};\textbf{b})$ when $a>b$. (b) Obtaining a $Q^{\nwarrow}$-type region from the region
$R^{\nwarrow}_{x,-1,z}(\textbf{a};\textbf{c};\textbf{b})$ when $a<b$.}\label{fig:SpecialR4}
\end{figure}
The case of the $R^{\nwarrow}$-type regions can be treated similarly to the case of the $R^{\swarrow}$-type regions above. If $a\geq b$ and $y=0$, then, by removing forced lozenges along the northwest side as in the case of $R^{\odot}$-
and $R^{\leftarrow}$-type regions, we get
\begin{align}\label{specialeq2b}
\operatorname{M}(R^{\nwarrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))=\operatorname{M}(Q^{\nwarrow}_{x, \min(a_1,a-b),z}(a_2,\dotsc,a_m;\ \textbf{c};\ \textbf{b})).
\end{align}
If $a< b$ and $y=-1$, after removing forced lozenges along the southeast side of the region, we get the region $Q^{\nwarrow}_{x,\min (b_1,b-a)-1, z}(\textbf{a};\ {}^0\textbf{c};\ b_2,\dotsc,b_n)$ (shown in Figure \ref{fig:SpecialR4}(b)).
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{15cm}{!}{
\begin{picture}(0,0)%
\includegraphics{SpecialQ2.eps}%
\end{picture}%
\begin{picture}(19531,9611)(1207,-9736)
\put(5422,-401){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+e_b+e_c$}%
}}}}
\put(9002,-1405){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+o_b+o_c$}%
}}}}}
\put(9227,-7901){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+e_b+e_c+b-a$}%
}}}}}
\put(4297,-9082){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+o_b+o_c$}%
}}}}
\put(1529,-6699){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+e_b+e_c$}%
}}}}}
\put(2165,-4121){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+o_b+o_c$}%
}}}}}
\put(2456,-4598){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(3321,-4981){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(4941,-4571){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(5841,-5051){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(8081,-4481){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(9081,-5021){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(9821,-4598){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(13094,-5071){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(13959,-5454){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(15612,-5044){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(16469,-5539){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(18004,-5071){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(18822,-5439){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(19641,-5070){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(15512,-1241){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+e_a+e_b+e_c$}%
}}}}
\put(18936,-1981){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+o_b+o_c$}%
}}}}}
\put(19096,-8397){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+e_b+e_c+b-a-1$}%
}}}}}
\put(12948,-4164){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+o_a+o_b+o_c$}%
}}}}}
\put(1449,-5421){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b-a$}%
}}}}}
\put(12294,-5478){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$b-a-1$}%
}}}}}
\put(12493,-6366){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$z+e_a+e_b+e_c$}%
}}}}}
\put(15388,-8925){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\itdefault}{\color[rgb]{0,0,0}$x+o_a+o_b+o_c$}%
}}}}
\end{picture}%
}
\caption{(a) Obtaining an $R^{\leftarrow}$-type region from the region $Q^{\leftarrow}_{x,0,z}(\textbf{a}; \textbf{c}; \textbf{b})$ when $a\leq b$. (b) Obtaining an $R^{\nwarrow}$-type region from the region $Q^{\nwarrow}_{x,-1,z}(\textbf{a}; \textbf{c}; \textbf{b})$ when $a<b$.}\label{fig:SpecialQ2}
\end{figure}
Next, we consider the four $Q$-regions. The region $Q^{\odot}_{x,0,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ has forced lozenges along its southeast (resp., northwest) side when $a\leq b$ (resp., $a\geq b$).
By removing these forced lozenges, we get the region $R^{\odot}_{x,\min (b_1, b-a), z} (\textbf{a};\ {}^{0}\textbf{c};\ b_2,\dotsc,b_n)$
(resp., $R^{\odot}_{x,\min (a_1, a-b), z} (a_2,\dotsc,a_m;\ \textbf{c}; \ \textbf{b})$). Similarly, the removal of forced lozenges in the region $Q^{\leftarrow}_{x,0,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$
gives us the region $R^{\leftarrow}_{x,\min(b_1, b-a), z}(\textbf{a};\ {}^{0}\textbf{c};\ b_2,\dotsc,b_n)$ (up to a reflection) if $a\leq b$ (see Figure \ref{fig:SpecialQ2}(a)), or
$R^{\leftarrow}_{x,\min(a_1, a-b), z}(a_2,\dotsc,a_m;\ \textbf{c};\ \textbf{b})$ if $a\geq b$. Moreover, this lozenge-removal always reduces the $h$-parameter of the region.
If $a\geq b$, then the same thing happens for the regions $Q^{\nwarrow}_{x,0,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,0,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$. In particular, we have
\begin{align}\label{specialeq3}
\operatorname{M}(Q^{\nwarrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))&=\operatorname{M}(R^{\nwarrow}_{x, \min(a_1,a-b),z}(a_2,\dotsc,a_m;\ \textbf{c};\ \textbf{b}));\\
\operatorname{M}(Q^{\nearrow}_{x,y,z}(\textbf{a}; \textbf{c}; \textbf{b}))&=\operatorname{M}(R^{\swarrow}_{x, \min(a_1,a-b),z}(a_2,\dotsc,a_m;\ \textbf{c};\ \textbf{b})).
\end{align}
Finally, if $a<b$ and $y=-1$, we get the regions $R^{\nwarrow}_{x,\min(b_1,b-a)-1,z}(\textbf{a};\ {}^{0}\textbf{c};\ b_2,\dotsc,b_n)$ and \\ $R^{\swarrow}_{x,\min(b_1,b-a)-1,z}(b_2,\dotsc,b_n;\ \textbf{c}^{\leftrightarrow};\ \textbf{a})$ from the regions $Q^{\nwarrow}_{x,-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ and $Q^{\nearrow}_{x,-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, respectively, by removing forced lozenges (see Figure \ref{fig:SpecialQ2}(b) for an example involving a $Q^{\nwarrow}_{x,-1,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ region).
\end{proof}
\subsection{The main proof of Theorems \ref{main1}--\ref{mainQ4}}
We are now ready to prove our theorems by induction on $h:=p+x+z$. Recall that $p$ denotes the quasi-perimeter of the region
(the perimeter of the base hexagon from which our region is obtained by removing the three ferns).
By Lemma \ref{lem1}, we can assume, without loss of generality, that the $a_i$'s, $b_j$'s, and $c_t$'s are all positive (for $i\geq 1$, $j\geq 1$, $t\geq 2$) in the rest of the proof.
The base cases are the situations when at least one of the parameters $x,z$ is equal to $0$, and the case $p<6$.
We consider the first base case when $z=0$. We can divide the region $R^{\odot}_{x,y,0}(\textbf{a}; \textbf{c}; \textbf{b})$ into two balanced subregions satisfying the conditions of the Region-Splitting Lemma \ref{RS} by cutting along the lattice line $\ell$ that the three ferns are resting on (see Figure \ref{Basecasethreefern}). The upper and lower halves are respectively the dented semihexagons corresponding to the two $s$-terms (with $z=0$) in the formula of Theorem \ref{main1}. This means that Theorem \ref{main1} follows from Cohn--Larsen--Propp's formula (\ref{semieq}). Similarly, we can verify the tiling formulas (in the case $z=0$) for all other $7$ regions in Theorems \ref{main2}--\ref{mainQ4}.
If $x=0$, we also apply the Region-Splitting Lemma \ref{RS} to our eight regions by cutting along the lattice line $\ell$ containing the bases of the triangles in the three ferns (see Figure \ref{Basecasethreefern2}). The only difference is that we now add two `bumps' of lengths $\left\lfloor \frac{z}{2} \right\rfloor$ and $\left\lceil \frac{z}{2} \right\rceil$ to the cut at the positions of the `gaps' between two consecutive ferns. The upper part is a dented semihexagon, while the lower part is also isomorphic to a dented semihexagon after removing several vertical forced lozenges at the places corresponding to the bumps above. Again, by Cohn--Larsen--Propp's formula (\ref{semieq}), we can verify our theorems for the case $x=0$.
If $p<6$, then, by Claim \ref{claimp}, we have $2x+4z<6$. This means that at least one of $x$ and $z$ is $0$, so this case is reduced to the base cases treated above.
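Indeed, if we had both $x\geq 1$ and $z\geq 1$, then $2x+4z\geq 6$ would hold, contradicting the inequality above.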
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{10cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Basecasethreefern.eps}%
\end{picture}%
\begin{picture}(10593,8716)(1466,-10616)
\put(2661,-6731){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(3561,-6331){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(4361,-6731){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_3$}%
}}}}
\put(6171,-6221){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\put(7281,-6791){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(10961,-6321){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(9921,-6891){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(8931,-6331){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(5011,-6781){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{x}{2}$}%
}}}}
\put(7856,-6801){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{x}{2}$}%
}}}}
\put(5601,-2181){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(10071,-3331){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+z+e_a+o_b+o_c$}%
}}}}}
\put(11891,-6471){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y$}%
}}}}
\put(10741,-9531){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(5901,-10601){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(2371,-7411){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+z+o_a+e_b+e_c$}%
}}}}}
\put(1551,-6041){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+b-a$}%
}}}}}
\put(2151,-4851){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\end{picture}%
}
\caption{Partitioning the region $R^{\odot}_{x,y,0}(\textbf{a};\textbf{c}; \textbf{b})$ into two dented semihexagons.}\label{Basecasethreefern}
\end{figure}
\begin{figure}\centering
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\resizebox{10cm}{!}{
\begin{picture}(0,0)%
\includegraphics{Basecasethreefern2.eps}%
\end{picture}%
\begin{picture}(11516,12159)(980,-15096)
\put(5291,-3211){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+o_a+e_b+e_c$}%
}}}}
\put(9371,-4801){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+z+e_a+o_b+o_c$}%
}}}}}
\put(11871,-9441){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+a-b$}%
}}}}}
\put(10421,-13781){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+o_a+e_b+e_c$}%
}}}}}
\put(6281,-15081){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x+e_a+o_b+o_c$}%
}}}}
\put(2803,-11211){\rotatebox{300.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y+z+o_a+e_b+e_c$}%
}}}}}
\put(1509,-8971){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$y$}%
}}}}
\put(2501,-6611){\rotatebox{60.0}{\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$z+e_a+o_b+o_c$}%
}}}}}
\put(4891,-8821){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{z}{2}$}%
}}}}
\put(7756,-8814){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$\frac{z}{2}$}%
}}}}
\put(7456,-9526){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_2$}%
}}}}
\put(9922,-9555){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_2$}%
}}}}
\put(10639,-9086){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_1$}%
}}}}
\put(9095,-9082){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$b_3$}%
}}}}
\put(2599,-9437){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_1$}%
}}}}
\put(3544,-8846){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_2$}%
}}}}
\put(4607,-9437){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$a_3$}%
}}}}
\put(6497,-8964){\makebox(0,0)[lb]{\smash{{\SetFigFont{14}{16.8}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{1,1,1}$c_1$}%
}}}}
\end{picture}%
}
\caption{Dividing the region $R^{\odot}_{0,y,z}(\textbf{a}; \textbf{c}; \textbf{b})$ into two regions.}\label{Basecasethreefern2}
\end{figure}
\medskip
For the induction step, we assume that $x$ and $z$ are positive, that $p\geq 6$, and that Theorems \ref{main1}--\ref{mainQ4}
all hold for any $R^{\odot}$-, $R^{\leftarrow}$-, $R^{\swarrow}$-, $R^{\nwarrow}$-, $Q^{\odot}$-, $Q^{\leftarrow}$-, $Q^{\nearrow}$-, and $Q^{\nwarrow}$-type regions whose $h$-parameter is strictly less than $h=p+x+z$.
If $y$ achieves its minimal value (which is $0$ or $-1$), then by Lemma \ref{lem2} our region has the same tiling number as another region whose $h$-parameter is strictly less than $h$. Then our theorem follows from the induction hypothesis.
We now assume that $y$ does not achieve its minimal value. In this case, all of our 18 recurrences in Sections 3.3--3.10 apply.
Let $\mathcal{R}$ be either the region $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\swarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $R^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\leftarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, $Q^{\nearrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$, or $Q^{\nwarrow}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ in the 18 recurrences. We also denote by $\mathcal{R}_1, \mathcal{R}_2,\dotsc, \mathcal{R}_5$ the \emph{other} five regions appearing in the recurrences corresponding to $\mathcal{R}$, from left to right. In the next two paragraphs, we show that the $\mathcal{R}_i$'s have $h$-parameters strictly smaller than $h=p+x+z$.
In particular, in each of the recurrences, the sum of the $x$- and $z$-parameters of each $\mathcal{R}_i$
is always less than or equal to $x+z$. Moreover, the quasi-perimeters of the $\mathcal{R}_i$'s are $p$,
$p-1$, $p-2$, or $p-3$ (depending on how many of the four triangles $u,v,w,s$ are on the boundary of the base hexagon).
Indeed, if the lozenge-removal pattern along one side of the base hexagon does not overlap with any other, the portion of the old boundary that is adjacent to the forced lozenges is replaced by a portion that is one unit shorter, which reduces the length of the boundary of the base hexagon by $1$
(see the pictures in the first row of Figure \ref{Boundaryreduce}).
In the case when two lozenge-removal patterns along two consecutive sides of the base hexagon overlap,
the portion of the old boundary corresponding to the combined pattern is replaced by a portion that is two units shorter, indicated
by the dotted line (see the two examples in the lower row of Figure \ref{Boundaryreduce}). This means that the quasi-perimeter of $\mathcal{R}_i$ is
$p-k$, where $k$ is the number of triangles among $u,v,w,s$ which are on the boundary of the base hexagon.
This means that, if at least one of the removed unit triangles $u,v,w,s$ lies on the boundary, then the corresponding region $\mathcal{R}_i$ has an
$h$-parameter strictly less than $h=p+x+z$. The only other case happens when the region $\mathcal{R}_i$ corresponds to the graph $G$ with two removed unit triangles appended to the ends of the left and right ferns
(as in the second region in the recurrences (\ref{centerrecur4a}), (\ref{centerrecur4b}) and (\ref{centerrecur4c}) of the $R^{\nwarrow}$-type regions,
or the recurrences (\ref{centerrecur8a}), (\ref{centerrecur8b}) and (\ref{centerrecur8c}) of the $Q^{\nwarrow}$-type regions); in that case
the quasi-perimeter of $\mathcal{R}_i$ is exactly $p$. However, the sum of the $x$- and $z$-parameters of $\mathcal{R}_i$
is then always $x+z-2$, so its $h$-parameter is $p+x+z-2=h-2$, which is still less than $h$.
\begin{figure}\centering
\includegraphics[width=13cm]{Boundaryreduce.eps}
\caption{Reduction of the length of the quasi-boundary after the removal of forced lozenges.}\label{Boundaryreduce}
\end{figure}
\medskip
In summary, we can always write the number of tilings of our region in terms of the tiling numbers of other regions
whose $h$-parameters are strictly less than $h$, and the latter regions have tiling numbers given by explicit product formulas
by the induction hypothesis. It remains to verify that the tiling formulas in Theorems \ref{main1}--\ref{mainQ4}
satisfy the same recurrences. This verification is carried out in the next subsection.
\subsection{Verifying that the formulas in Theorems \ref{main1}--\ref{mainQ4} satisfy the recurrences (\ref{centerrecur1a})--(\ref{centerrecur8b})}
We only show here the verification of the recurrences for the $R^{\odot}$-type regions, as the other 16 recurrences can be treated in the same manner. Without loss of generality, we can assume that each of the three ferns consists of an even number of triangles, i.e., that $m,n,k$ are all even.
Let us denote by $g_{x,y,z}^{\odot}(\textbf{a}; \textbf{c}; \textbf{b}),$ $g_{x,y,z}^{\leftarrow}(\textbf{a}; \textbf{c}; \textbf{b}),$ $g_{x,y,z}^{\nwarrow}(\textbf{a}; \textbf{c}; \textbf{b}),$ and $g_{x,y,z}^{\swarrow}(\textbf{a}; \textbf{c}; \textbf{b}),$ the tiling formulas in Theorems \ref{main1}--\ref{main4} (for Theorem \ref{main1}, we consider here the combined formula (\ref{maineq1c})).
We first work on the case when $a<b$. We need to verify that
\begin{align}\label{verify1a}
g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})&=
g^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}) g^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})\notag\\
&+g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})g^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}).
\end{align}
Equivalently, we need to show that
\begin{align}\label{verify1b}
\frac{g^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})} \frac{g^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}
+\frac{g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) }\frac{g^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}=1.
\end{align}
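(Note that (\ref{verify1b}) is obtained from (\ref{verify1a}) simply by dividing both sides by the product $g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})\, g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})$.)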
We first simplify the first fraction of the first term on the left-hand side of (\ref{verify1b}) as
\begin{align}\label{verify1c}
\frac{g^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})} &=\frac{\operatorname{M}(C_{x+1,2y+z+2b-1,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\notag\\
&\times \frac{\operatorname{H}(b+y+z-1)}{\operatorname{H}(b+y+z)}\frac{\operatorname{H}(b+c+y+z-1)}{\operatorname{H}(b+c+y+z)}\notag\\
&\times \frac{\operatorname{H}(b-o_a+o_b+o_c+y+z)}{\operatorname{H}(b-o_a+o_b+o_c+y+z-1)}\frac{\operatorname{H}(b+o_a-o_b+e_c+y+z)}{\operatorname{H}(b+o_a-o_b+e_c+y+z-1)}.
\end{align}
Similarly, we get
\begin{align}\label{verify1d}
\frac{g^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})} &=\frac{\operatorname{M}(C_{x-1,2y+z+2b,z}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}\notag\\
&\times \frac{\operatorname{H}(b+y+z)}{\operatorname{H}(b+y+z-1)}\frac{\operatorname{H}(b+c+y+z)}{\operatorname{H}(b+c+y+z-1)}\notag\\
&\times \frac{\operatorname{H}(b-o_a+o_b+o_c+y+z-1)}{\operatorname{H}(b-o_a+o_b+o_c+y+z)}\frac{\operatorname{H}(b+o_a-o_b+e_c+y+z-1)}{\operatorname{H}(b+o_a-o_b+e_c+y+z)}.
\end{align}
By (\ref{verify1c}) and (\ref{verify1d}), the hyperfactorial factors cancel in pairs, and the first term on the left-hand side of (\ref{verify1b}) simplifies to
\begin{align}\label{verify1e}
\frac{g^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})} \frac{g^{\leftarrow}_{x-1,y,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}=\frac{\operatorname{M}(C_{x+1,2y+z+2b-1,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\frac{\operatorname{M}(C_{x-1,2y+z+2b,z}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}.
\end{align}
We now work on the second term on the left-hand side of (\ref{verify1b}). By definition, the first fraction here can be written as
\begin{align}\label{verify1f}
\frac{g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) } &=\frac{\operatorname{M}(C_{x,2y+z+2b-1,z}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\notag\\
&\times \frac{s\left(y+b-a-1,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}{s\left(y+b-a,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}\notag\\
&\times \frac{(b+y-1)!}{(b+c+y+z-1)!}\frac{(b+c+y+\frac{x+z}{2}-1)!}{(b+y+\frac{x+z}{2}-1)!} \frac{(b-o_a+o_b+o_c+y+z-1)!}{(b-o_a+o_b+o_c+y-1)!}.
\end{align}
Similarly, we have
\begin{align}\label{verify1g}
\frac{g^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}&=\frac{\operatorname{M}(C_{x,2y+z+2b,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}\notag\\
&\times \frac{s\left(y+b-\min(a,b),a_1,\dotsc, a_{m}+1,\frac{x+z}{2}-1,c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}{s\left(y+b-\min(a,b)-1,a_1,\dotsc, a_{m}+1,\frac{x+z}{2}-1,c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}\notag\\
&\times \frac{(b+c+y+z-1)!}{(b+y)!} \frac{(b+y+\frac{x+z}{2}-1)!}{(b+c+y+\frac{x+z}{2}-1)!} \frac{(b-o_a+o_b+o_c+y)!}{(b-o_a+o_b+o_c+y+z-1)!}.
\end{align}
By (\ref{verify1f}) and (\ref{verify1g}), most of the factorial factors cancel (the surviving ones combine into the factor $\frac{b-o_a+o_b+o_c+y}{b+y}$), and the second term on the left-hand side of (\ref{verify1b}) simplifies to
\begin{align}\label{verify1h}
&\frac{g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) } \frac{g^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})} =\frac{\operatorname{M}(C_{x,2y+z+2b-1,z}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\frac{\operatorname{M}(C_{x,2y+z+2b,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}\notag\\
&\times \frac{s\left(y+b-a-1,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}{s\left(y+b-a,a_1,\dotsc, a_{m},\frac{x+z}{2},c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}\notag\\
&\times \frac{s\left(y+b-a,a_1,\dotsc, a_{m}+1,\frac{x+z}{2}-1,c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}{s\left(y+b-a-1,a_1,\dotsc, a_{m}+1,\frac{x+z}{2}-1,c_1,\dotsc,c_{k}+\frac{x+z}{2}+b_n,b_{n-1},\dotsc,b_1\right)}\notag\\
&\times \frac{b-o_a+o_b+o_c+y}{b+y}.
\end{align}
We have the following claim as a direct consequence of Cohn--Larsen--Propp's formula (\ref{semieq}):
\begin{claim}\label{claimS}
Let $t_1,t_2,\dotsc,t_{2l}$ be non-negative integers. Then
\begin{align}
\dfrac{\dfrac{s(t_1,t_2,\dotsc,t_{2n-1},t_{2n},t_{2n+1},t_{2n+2},\dotsc, t_{2l})}{s(t_1,t_2,\dotsc,t_{2n-1},t_{2n}+1,t_{2n+1}-1,t_{2n+2},\dotsc, t_{2l})}}{ \dfrac{s(t_1-1,t_2,\dotsc,t_{2n-1},t_{2n},t_{2n+1},t_{2n+2},\dotsc, t_{2l})}{s(t_1-1,t_2,\dotsc,t_{2n-1}, t_{2n}+1,t_{2n+1}-1,t_{2n+2},\dotsc, t_{2l})} }=\dfrac{t_1+t_2+\dotsc+t_{2n}}{o_t-1}.
\end{align}
\end{claim}
Applying the claim to the $s$-terms on the right-hand side of (\ref{verify1h}), we get
\begin{align}\label{verify1i}
\frac{g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})}{g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) } \frac{g^{\swarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})}{g^{\leftarrow}_{x,y,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})} &=\frac{\operatorname{M}(C_{x,2y+z+2b-1,z}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\frac{\operatorname{M}(C_{x,2y+z+2b,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}.
\end{align}
This means that we now only need to show that
\begin{align}\label{verify1k}
\frac{\operatorname{M}(C_{x+1,2y+z+2b-1,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}&\frac{\operatorname{M}(C_{x-1,2y+z+2b,z}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}\notag\\
&+\frac{\operatorname{M}(C_{x,2y+z+2b-1,z}(c))}{\operatorname{M}(C_{x,2y+z+2b,z}(c))}\frac{\operatorname{M}(C_{x,2y+z+2b,z-1}(c))}{\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))}=1,
\end{align}
or equivalently
\begin{align}\label{verify1l}
\operatorname{M}(C_{x,2y+z+2b,z}(c))&\operatorname{M}(C_{x,2y+z+2b-1,z-1}(c))=\notag\\
&\operatorname{M}(C_{x+1,2y+z+2b-1,z-1}(c))\operatorname{M}(C_{x-1,2y+z+2b,z}(c))\notag\\
&+\operatorname{M}(C_{x,2y+z+2b-1,z}(c))\operatorname{M}(C_{x,2y+z+2b,z-1}(c)).
\end{align}
This is straightforward from the tiling formulas of the cored hexagons in \cite{CEKZ}.
However, one can verify (\ref{verify1l}) \emph{without} using tiling formulas of cored hexagons by observing that it is actually a consequence of the recurrence (\ref{centerrecur1a}) as follows.
Applying recurrence (\ref{centerrecur1a}) to the region $R^{\odot}_{x+1,y+b-1,z}((0,0); (c); (0,1))$, we get
\begin{align}
\operatorname{M}(R^{\odot}_{x+1,y+b-1,z}&((0,0);\ (c);\ (0,1))) \operatorname{M}(R^{\leftarrow}_{x+1,y+b-1,z-1}((0,1);\ (c);\ (0,1)))=\notag\\
&\operatorname{M}(R^{\odot}_{x+2,y+b-1,z-1}((0,0);\ (c);\ (0,1))) \operatorname{M}(R^{\leftarrow}_{x,y+b-1,z}((0,1);\ (c);\ (0,1)))\notag\\
&+\operatorname{M}(R^{\nwarrow}_{x+1,y+b-2,z}((0,0);\ (c);\ (0,1)))\operatorname{M}(R^{\swarrow}_{x+1,y+b-1,z-1}((0,1);\ (c);\ (0,1))).
\end{align}
After removing forced lozenges along the northeast side from each region in the above recurrence, and removing forced lozenges along the southwest side of the regions whose left fern corresponds to the sequence $\textbf{a}=(0,1)$, we get back the cored hexagons in (\ref{verify1l}). This finishes our verification for (\ref{verify1a}).
Similarly, we can verify that our tiling formulas satisfy the recurrence (\ref{centerrecur1b}) for $R^{\odot}$-type regions when $a\geq b$. This means that we need to show
\begin{align}\label{verify1m}
g^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b}) g^{\leftarrow}_{x,y-1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})&=
g^{\odot}_{x+1,y,z-1}(\textbf{a};\ \textbf{c};\ \textbf{b}) g^{\leftarrow}_{x-1,y-1,z}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b})\notag\\
&+g^{\nwarrow}_{x,y-1,z}(\textbf{a};\ \textbf{c}; \ \textbf{b})g^{\swarrow}_{x,y-1,z-1}(\textbf{a}^{+1};\ \textbf{c};\ \textbf{b}),
\end{align}
for $a\geq b$. However, this verification is essentially the same as that of the case $a<b$ treated above and is therefore omitted.
\section{Concluding Remarks}\label{sec:remark}
As pointed out by Fulmek in \cite{Ful}, Kuo's graphical condensation is simply a special case of the determinant-permanent method. However, this paper shows a particular advantage of Kuo's method compared with the classical determinant-permanent method when dealing with regions with complicated structures.
The main results in the series of papers \cite{Halfhex1, Halfhex2} imply an explicit enumeration of the reflectively symmetric tilings of the $Q^{\odot}$-type regions (i.e. tilings which are invariant under a reflection over a vertical symmetry axis). This result also extends Proctor's enumeration of transpose-complementary plane partitions \cite{Proc} and the related work of Ciucu \cite{Ciucu1} and Rohatgi \cite{Ranjan1}. We are also interested in the centrally symmetric tilings of the $R^{\odot}$-type regions (i.e. tilings which are invariant under $180^{\circ}$ rotations). Our data suggest that the number of these symmetric tilings always has a nice prime factorization.
\begin{con}
The number of centrally symmetric tilings of the region $R^{\odot}_{x,y,z}(\textbf{a};\ \textbf{c};\ \textbf{b})$ is always given by a simple product formula.
\end{con}
This (if verified) gives a generalization of Stanley's enumeration of self-complementary plane partitions \cite[Eq. (3a)--(3c)]{Stanley}.
In \cite{CEKZ}, Ciucu, Eisenk\"{o}lbl, Krattenthaler and Zare posed two striking conjectures for the tiling formulas of a hexagon when the triangular hole is $1$ or $3/2$ units off the center. The conjectures were recently proved by Rosengren \cite{Rosen} using lattice path combinatorics and the Selberg integral. In the sequel to this paper, we will enumerate extensively $30$ different hexagons with three ferns removed, in which the middle fern is slightly off the center. Two of our enumerations have Ciucu--Eisenk\"{o}lbl--Krattenthaler--Zare's conjectures as two very special cases (when the two side ferns are empty and the middle fern consists of a single triangle). This provides new proofs of the conjectures.
Intuitively, our main theorems (Theorems 2.3--2.9) say that the number of tilings of a hexagon with three ferns removed can always be factorized into the number of tilings of a cored hexagon and a simple multiplicative factor. One may ask for a similar factorization in the general case of an arbitrary number of collinear ferns removed. Such a factorization seems to exist and will be investigated in a separate paper.
It would be interesting to investigate whether Rosengren's weighted formula in \cite{Rosen} can be generalized to our hexagons with three ferns removed. This and several new duals and $q$-duals of MacMahon's theorem will also be considered in a separate paper.
\section*{Acknowledgement}
The author would like to thank Dennis Stanton and Hjalmar Rosengren for pointing out the paper \cite{Rosen} to him.
\allowdisplaybreaks
\begin{document}
\title{
\vspace*{-3cm}
\phantom{h} \hfill\mbox{\small KA-TP-06-2018
\\[1cm]
\textbf{BSMPT\\
Beyond the Standard Model Phase Transitions \\ A Tool for the
Electroweak Phase Transition in Extended Higgs Sectors\\[4mm]}}
\date{}
\author{
Philipp Basler$^{1\,}$\footnote{E-mail:
\texttt{philipp.basler@kit.edu}} ,
Margarete M\"{u}hlleitner$^{1\,}$\footnote{E-mail:
\texttt{margarete.muehlleitner@kit.edu}}
\\[9mm]
{\small\it
$^1$Institute for Theoretical Physics, Karlsruhe Institute of Technology,} \\
{\small\it 76128 Karlsruhe, Germany}}
\maketitle
\begin{abstract}
We provide the {\tt C++} tool {\tt BSMPT} for calculating the strength
of the electroweak phase transition in extended Higgs sectors. This
relies on the loop-corrected effective potential at finite temperature
including daisy resummation of the bosonic masses. The program
allows the user to compute the vacuum expectation value (VEV) $v$ of the potential
as a function of the temperature, and in particular the critical VEV
$v_c$ at the temperature $T_c$ where the phase transition takes
place. In addition, the loop-corrected trilinear Higgs self-couplings are
provided. We apply an 'on-shell' renormalization scheme in the sense
that the loop-corrected masses and mixing angles are required to be
equal to their tree-level input values. This allows for efficient
scans in the parameter space of the models. The models implemented so far
are the CP-conserving and the CP-violating 2-Higgs-Doublet Models (2HDM) and the
Next-to-Minimal 2HDM (N2HDM). The program structure is such that the
user can easily implement further models. Our tool can be used for the
investigation of electroweak baryogenesis in models with extended
Higgs sectors and the related Higgs self-couplings. The combination
with parameter scans in the respective models allows the user to study the
impact on collider phenomenology and to establish a link between collider
phenomenology and cosmology. The program package can be downloaded at:
\url{https://github.com/phbasler/BSMPT}.
\end{abstract}
\thispagestyle{empty}
\vfill
\newpage
\setcounter{page}{1}
\section{Introduction}
The observed baryon asymmetry of the Universe (BAU)
\cite{Bennett:2012zja} is one of the unsolved puzzles within the
Standard Model (SM). Electroweak (EW) baryogenesis provides a
mechanism to generate the BAU dynamically in the early
Universe during a first order EW phase transition (EWPT)
\cite{Kuzmin:1985mm,Cohen:1990it,Cohen:1993nk,Quiros:1994dr,Rubakov:1996vz,Funakubo:1996dw,Trodden:1998ym,Bernreuther:2002uj,Morrissey:2012db}
provided all three Sakharov conditions \cite{sakharov} are
fulfilled. Although in the SM all three conditions can in
principle be fulfilled, the phase transition (PT) is not of strong first order
\cite{Morrissey:2012db,smnot,notsm2}, so that new
physics extensions are required that provide additional sources of
CP violation as well as further scalar states triggering a first order
EWPT. The investigation of the PT requires the computation of the
loop-corrected Higgs potential at finite temperature, in order to find
the vacuum expectation value (VEV) $v_c$ at the critical temperature
$T_c$. The latter is defined as the temperature where two degenerate
global minima exist. A value of $\xi_c = v_c / T_c >1$ indicates a strong
first order PT \cite{Quiros:1994dr,Moore:1998swa}. \newline \vspace*{-3.5mm}
In this paper we present the program package
{\tt BSMPT} - 'Beyond the Standard Model Phase Transitions':
\begin{itemize}
\item[] A {\tt C++} tool for the calculation of the loop-corrected effective potential at
finite temperature \cite{ColemanWeinberg,Quiros:1999jp,Dolan:1973qd}
including the daisy resummation for the bosonic masses
\cite{Carrington:1991hz}. The latter is included in two different
approximations for the treatment of the thermal masses, the Parwani
\cite{Parwani:1991gq} and the Arnold-Espinosa method
\cite{Arnold:1992rz}, where the Arnold-Espinosa method is set as the
default one. The renormalization of the potential is based on
physical conditions. These are 'on-shell' conditions in the sense that the
loop-corrected masses and mixing angles extracted from the effective
potential are forced to be equal to their tree-level input
values.
\end{itemize}
The package can be used for:
\begin{itemize}
\item[-] The calculation of the EWPT: For a given point in the
parameter space, it calculates the global minimum of the potential
at a given temperature and determines the critical
temperature $T_c$ where the phase transition takes place together with the
corresponding VEV, $v_c$.\footnote{Note that we do not consider the possibility of a
2-state PT \cite{} in our models.} These two values are then used to compute
the strength of the PT, parametrized by $\xi_c= v_c/T_c$.
\item[-] The calculation of the evolution of the
VEV(s)\footnote{In extended Higgs sectors we have several VEVs, which,
at zero temperature, combine to the total VEV $v \approx 246.22$~GeV.}
with the temperature.
\item[-] The calculation of the global minimum of the 1-loop corrected
potential at zero temperature.
\item[-] The calculation of the loop-corrected trilinear Higgs
self-couplings in the on-shell scheme.
\end{itemize}
For the combined investigation of the PT through EW baryogenesis
together with collider phenomenology it is recommended to use input
parameter points that already fulfill all relevant experimental and
theoretical constraints in order to pin down the viable parameter
space as much as possible. Our chosen on-shell renormalization has the
advantage of allowing for efficient scans in the parameter space of the
investigated models while simultaneously taking into account all relevant
theoretical and up-to-date experimental constraints. For sample
applications, see Refs.~\cite{Basler:2016obg} and \cite{Basler:2017uxn} in
the CP-conserving and CP-violating 2-Higgs Doublet Model
(2HDM), respectively. \newline \vspace*{-3.5mm}
The program was developed and tested on OpenSuse 42.2, Ubuntu
14.04, Ubuntu 16.04 and Mac 10.13 systems with {\tt g++
v6.2.1} and {\tt g++ v7.2.1}. The package can be downloaded at:
\begin{center}
\url{https://github.com/phbasler/BSMPT}
\end{center}
The outline of the paper is as follows. In Section~\ref{sec:calc} we
present our calculation which also serves to set our notation. The
models that are already implemented in the package are introduced in
Section~\ref{sec:implmodels}.
In Section~\ref{sec:install} we explain how to install
and run the program.
Section~\ref{sec:executables} describes the available executables and
their corresponding output files.
Section~\ref{sec:newmodels} explains
with the help of a toy model how a new model can be added to the
program package. The summary is given in Section~\ref{sec:concl}. \newline \vspace*{-3.5mm}
\section{Calculation \label{sec:calc}}
In order to investigate the properties of the EWPT, the loop-corrected
effective potential $V$ at finite temperature $T$ has to be
computed. In terms of the static field configuration $\omega$ and the
temperature $T$ the potential
\begin{eqnarray}
V = V(\omega,T)
\end{eqnarray}
develops a minimum for the ground state $\omega = v(T)$.
In case $v= 0$ we are in the symmetric phase of the
model; for $v \ne 0$, we are in the broken
phase. Starting with the symmetric vacuum in the early universe, the
EWPT is defined as the point in the evolution of the
potential where a second minimum with non-zero VEV $v_c$
develops at the critical temperature $T_c$, for which
\begin{eqnarray}
V (v=0, T_c) = V (v=v_c, T_c) \;.
\end{eqnarray}
The thermal evolution of the ground state of the potential is an
important criterion to judge the fulfillment of the Sakharov
criteria. In order to be a possible candidate for electroweak
baryogenesis the EWPT has to be of strong first order, defined as
\cite{Quiros:1994dr,Moore:1998swa}
\begin{eqnarray}
\xi_c \equiv \frac{v_c}{T_c} > 1 \;.
\end{eqnarray}
Because of the rich structure of the electroweak potential, the
calculation of $v_c$ and $T_c$ is not possible analytically;
we therefore present this program, which calculates $v_c$ and $T_c$
numerically. \newline \vspace*{-3.5mm}
The loop-corrected effective potential at finite temperature $T$ as
function of the classical constant field configuration, generically denoted by
$\omega$, reads
\begin{eqnarray}
V (\omega,T) = V (\omega) + V^T (\omega,T) \equiv V^{(0)} (\omega) +
V^{\text{CW}} (\omega) + V^{\text{CT}} (\omega) + V^T (\omega,T) \;.
\label{eq:fielconfig}
\end{eqnarray}
In $V(\omega)$ we summarize the contributions that do not depend
explicitly on the temperature $T$. These are the tree-level potential
$V^{(0)}$, the Coleman-Weinberg potential $V^{\text{CW}}$ and the
coun\-ter\-term potential $V^{\text{CT}}$. The thermal corrections at finite
temperature $T$ are given by $V^T (\omega,T)$.
\subsection{Notation \label{sec:notation}}
We use the notation of Ref.~\cite{Camargo-Molina:2016moz}\footnote{The
additional terms appearing in \cite{Camargo-Molina:2016moz} do not
exist in our models and are therefore omitted here.} in which
the tree-level Lagrangian, relevant for the effective potential, can be cast into the form
\begin{align}
-\mathcal{L}_S &= L^i \Phi_i + \frac{1}{2!} L^{ij} \Phi_i \Phi_j+\frac{1}{3!} L^{ijk} \Phi_i
\Phi_j\Phi_k + \frac{1}{4!} L^{ijkl} \Phi_i \Phi_j \Phi_k \Phi_l \label{Eq:LS_Classical}\\
-\mathcal{L}_F &= \frac{1}{2} Y^{IJk} \Psi_I \Psi_J \Phi_k + c.c. \label{Eq:LF_Classical}\\
\mathcal{L}_{G} &= \frac{1}{4} G^{abij}
A_{a\mu}A_b^\mu\Phi_i\Phi_j \;, \label{Eq:LG_Classical}
\end{align}
for every model applied in the code. Here and in the
following we adopt the Einstein convention and sum over repeated
indices when one is an upper and the other a lower index, but not otherwise. In this description the
scalar multiplets are decomposed into $n_{\text{Higgs}}$ real scalar
fields $\Phi_i$, with $i= 1,\dots , n_{\text{Higgs}}$. The
fermion multiplets are represented through $n_{\text{fermion}}$ Weyl
spinors $\Psi_I$, with $I=1,\dots , n_{\text{fermion}}$. The gauge bosons are given by
the four-vectors $A_\mu^a$. The gauge group index $a$ runs over
$n_{\text{gauge}}$ gauge bosons in the adjoint representation of the
gauge group. The extended Higgs potential is
given by $-\mathcal{L}_S$ and is described through the tensors
$L^i,L^{ij},L^{ijk},L^{ijkl}$ and the real scalar fields $\Phi_i$
($i,j,k,l= 1, \dots , n_{\text{Higgs}}$). The
interactions between the scalar fields and the fermions $\Psi_I$ are
described by the tensor $Y^{IJk}$ ($I,J = 1\dots n_{\text{fermion}}$). The
interactions between the scalars and the gauge
bosons $A_\mu^a$ are given by $G^{abij}$ ($a,b = 1\dots
n_{\text{gauge}}$). After symmetry breaking the scalar fields $\Phi_i$
are expanded around a classical constant field configuration $\omega_i$ as
\begin{align}
\Phi_i(x) &= \omega_i + \phi_i(x) \;, \label{eq:dev}
\end{align}
where the $\phi_i(x)$ describe the quantum scalar field
fluctuations. After inserting Eq.~(\ref{eq:dev}) in
Eqs.~(\ref{Eq:LS_Classical})-(\ref{Eq:LG_Classical}), they can be
rewritten as
\begin{align}
-\mathcal{L}_S &= \Lambda + \Lambda^i_{(S)} \phi_i + \frac{1}{2} \Lambda_{(S)}^{ij} \phi_i\phi_j + \frac{1}{3!} \Lambda^{ijk}_{(S)} \phi_i\phi_j\phi_k + \frac{1}{4!} \Lambda_{(S)}^{ijkl}\phi_i\phi_j\phi_k\phi_l \\
-\mathcal{L}_F &= \frac{1}{2} M^{IJ} \Psi_I\Psi_J + \frac{1}{2}
Y^{IJk}\Psi_I\Psi_J\phi_k + c.c. \\
\mathcal{L}_G &= \frac{1}{2} \Lambda^{ab}_{(G)} A_{a\mu}A_b^{\mu}
+\frac{1}{2} \Lambda^{abi}_{(G)} A_{a\mu}A_b^{\mu}\phi_i + \frac{1}{4} \Lambda^{abij}_{(G)}
A_{a\mu}A_{b}^\mu\phi_i\phi_j \;,
\end{align}
where
\begin{align}
\Lambda &= V^{(0)}(\omega_i) = L^i \omega_i + \frac{1}{2!} L^{ij}\omega_i \omega_j +
\frac{1}{3!} L^{ijk} \omega_i \omega_j \omega_k + \frac{1}{4!} L^{ijkl} \omega_i
\omega_j \omega_k \omega_l \label{Eq:Tree-Level} \\
\Lambda_{(S)}^i &= L^i + L^{ij} \omega_j + \frac{1}{2} L^{ijk}\omega_j \omega_k
+ \frac{1}{6} L^{ijkl}\omega_j\omega_k\omega_l \\
\Lambda_{(S)}^{ij} &= L^{ij} + L^{ijk}\omega_k + \frac{1}{2}
L^{ijkl}\omega_k\omega_l \label{eq:scalarten}\\
\Lambda_{(S)}^{ijk} &= L^{ijk}+L^{ijkl}\omega_l \\
\Lambda_{(S)}^{ijkl} &= L^{ijkl} \\
\Lambda_{(G)}^{ab} &= \frac{1}{2} G^{abij}\omega_i\omega_j
\label{eq:gaugeten} \\
\Lambda_{(G)}^{abi} &= G^{abij}\omega_j \\
\Lambda_{(G)}^{abij} &= G^{abij} \label{eq:gabijdef}\\
\Lambda_{(F)}^{IJ} &= M^{\ast IL} M_{L}^{\; J} = Y^{\ast ILk}Y_L^{\; Jm} \omega_k\omega_m \;, \quad
\mbox{with} \label{eq:fermten}\\
M^{IJ} &= Y^{IJk}\omega_k \,.
\end{align}
Using this notation\footnote{For further details, we refer to
\cite{Camargo-Molina:2016moz}.} one only needs to provide
$\omega_i,L^i,L^{ij},L^{ijk},L^{ijkl},G^{abij}$ and $Y^{IJk}$ to the program.
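To illustrate the index structure of these tensors, the following {\tt C++} sketch evaluates the tree-level potential of Eq.~(\ref{Eq:Tree-Level}) for a given field configuration $\omega_i$ from nested {\tt std::vector} containers. It is only meant as an illustration of the contractions; the data layout and the function name are ours and do not correspond to the actual interface of the code.
\begin{verbatim}
#include <vector>

// Minimal sketch: V^(0)(w) = L^i w_i + 1/2! L^ij w_i w_j
//                 + 1/3! L^ijk w_i w_j w_k + 1/4! L^ijkl w_i w_j w_k w_l.
// The tensor containers are hypothetical; BSMPT stores them differently.
using Vec1 = std::vector<double>;
using Vec2 = std::vector<Vec1>;
using Vec3 = std::vector<Vec2>;
using Vec4 = std::vector<Vec3>;

double TreeLevelPotential(const Vec1& L1, const Vec2& L2, const Vec3& L3,
                          const Vec4& L4, const Vec1& w) {
  const std::size_t n = w.size();   // n_Higgs real scalar fields
  double V = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    V += L1[i] * w[i];
    for (std::size_t j = 0; j < n; ++j) {
      V += L2[i][j] * w[i] * w[j] / 2.0;
      for (std::size_t k = 0; k < n; ++k) {
        V += L3[i][j][k] * w[i] * w[j] * w[k] / 6.0;
        for (std::size_t l = 0; l < n; ++l)
          V += L4[i][j][k][l] * w[i] * w[j] * w[k] * w[l] / 24.0;
      }
    }
  }
  return V;
}
\end{verbatim}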
\subsection{The Coleman-Weinberg Potential}
The temperature-independent one-loop corrected effective potential
in the Landau gauge is given by the Coleman-Weinberg
\cite{ColemanWeinberg} contribution as
\begin{align}
V^{\text{CW}}(\omega) &= \frac{\varepsilon}{4} \summe{X={S,G,F}}{}
\left(-1\right)^{2s_X} \left(1+2s_X\right) \mathrm{Tr}\left[
\left(\Lambda^{xy}_{(X)}\right)^2 \left( \log\left(
\frac{1}{\mu^2} \Lambda^{xy}_{(X)} \right) - k_X
\right) \right] \label{Eq:CWPot} \;,
\end{align}
where $s_X$ denotes the spin of the particle described by the field $X$ and
\begin{align}
\varepsilon &\equiv \frac{1}{\left(4\pi\right)^2} \,.
\end{align}
The indices $xy$ relate to the scalar indices $ij$, the gauge indices
$ab$ and the fermion indices $IJ$ for $X=S,G$ and $F$, respectively.
Note that the sum over $X$ has to be performed over all degrees of
freedom including the color degrees of freedom for the quarks.
The scalar tensor $\Lambda_{(S)}^{ij}$, the gauge tensor
$\Lambda_{(G)}^{ab}$ and the fermion tensor $\Lambda_{(F)}^{IJ}$ are
given by Eq.~(\ref{eq:scalarten}), Eq.~(\ref{eq:gaugeten}) and
Eq.~(\ref{eq:fermten}), respectively. The potential is renormalized in
the $\overline{\mbox{MS}}$ scheme, {\it i.e.}~the default values for
the renormalization constants are
\begin{eqnarray}
k_X = \left\{ \begin{array}{ll} \frac{5}{6} \;, & \quad \mbox{for
gauge bosons}
\\[0.1cm] \frac{3}{2} \;, & \quad \mbox{otherwise}
\end{array} \right.
\end{eqnarray}
In the program they are set in the file {\tt
ClassPotentialOrigin.h} and named {\it C\_CWcbFermion},
{\it C\_CWcbGB} and {\it C\_CWcbHiggs} for the fermions, gauge bosons
and scalars, respectively. The renormalization scale $\mu$ is by default set to
the VEV at $T=0$, $\mu = v(T=0) \approx 246.22$~GeV.
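For illustration, the following {\tt C++} sketch evaluates the Coleman-Weinberg contribution of Eq.~(\ref{Eq:CWPot}) for a single species $X$ from the eigenvalues of its mass-squared matrix $\Lambda^{xy}_{(X)}$, over which the trace reduces to a sum. It is a simplified stand-alone version and not the routine used in the code; in particular, each eigenvalue has to appear in the list as often as its degeneracy (including color factors), and for negative eigenvalues only the real part, i.e.~$\log|m^2|$, is kept, which is an assumption of this sketch. The constant {\tt kX} corresponds to the renormalization constants {\it C\_CWcbHiggs}, {\it C\_CWcbGB} and {\it C\_CWcbFermion} mentioned above.
\begin{verbatim}
#include <cmath>
#include <vector>

// Coleman-Weinberg contribution of one species X (spin s_X, constant k_X):
//   V_X^CW = eps/4 * (-1)^{2 s_X} (1 + 2 s_X)
//            * sum_i m_i^4 [ log(m_i^2 / mu^2) - k_X ],
// with eps = 1/(4 pi)^2 and mu2 the squared renormalization scale.
double VCWSpecies(const std::vector<double>& m2, double spin,
                  double kX, double mu2) {
  const double pi  = 4.0 * std::atan(1.0);
  const double eps = 1.0 / (16.0 * pi * pi);
  const double pre = std::pow(-1.0, 2.0 * spin) * (1.0 + 2.0 * spin);
  double sum = 0.0;
  for (double m2i : m2) {
    if (m2i == 0.0) continue;                       // massless modes drop out
    sum += m2i * m2i * (std::log(std::fabs(m2i) / mu2) - kX);  // real part for m^2 < 0
  }
  return 0.25 * eps * pre * sum;
}
\end{verbatim}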
\subsection{The Counterterm Potential \label{sec:counterpot}}
The masses and mixing angles of the various involved particles are
derived from the loop-corrected potential and differ from the values
extracted from the tree-level potential. The tests of the
compatibility of the investigated model with the experimental
constraints have to take these corrections into account. For an efficient scan
over the - often large - parameter space of the models it is
therefore more convenient to directly use loop-corrected masses and
angles as input. This is achieved by modifying the
$\overline{\mbox{MS}}$ renormalization of the Coleman-Weinberg
potential and applying the
renormalization prescription by which the one-loop masses and mixing
angles are enforced to be equal to their values at tree-level. In practice,
we add the counterterm potential $V_{\text{CT}}$ implementing the
corresponding renormalization conditions. After
replacing the bare parameters $p^{(0)}$ of the tree-level
potential $V^{(0)}$ by the renormalized ones, $p$, and the
counterterms $\delta p$, it is given by
\begin{align}
V^{\text{CT}} &= \sum_{i=1}^{n_p} \frac{\partial V^{(0)}}{\partial p_i} \delta p_i +
\sum_{k=1}^{n_v} \delta T_k \left(\phi_k + \omega_k \right) \label{Eq:CTPot} \,,
\end{align}
where $n_p$ is the number of parameters of the potential. The $\delta
T_k$ denote the counterterms of the tadpoles $T_k$ obtained from the
minimum conditions of the potential for the $n_v$ directions in field space
in which we allow for the development of a non-zero vacuum expectation
value. Note, that $n_v \le n_{\text{Higgs}}$. In Sec.~\ref{sec:implmodels},
we give some explicit examples for counterterm potentials. The
explicit forms of the finite counterterms are obtained from the renormalization
conditions. Applying our renormalization prescription to the one-loop
contribution of the effective potential at $T=0$,
{\it i.e.}~to $V^{\text{CW}}+V^{\text{CT}}$, yields the equations
($i,j=1,\dots ,n_v$)
\begin{align}
0 &= \left.\partial_{\phi_i} \left( V^{\text{CW}} + V^{\text{CT}}
\right)\right|_{\omega=\omega_{\text{tree}}} \label{Eq:Renorm1}
\\
0 &= \left.\partial_{\phi_i} \partial_{\phi_j} \left( V^{\text{CW}} + V^{\text{CT}} \right)
\right|_{\omega =\omega_{\text{tree}}} \label{Eq:Renorm2} \,,
\end{align}
where $\omega_{\text{tree}}$ is the minimum of the tree-level
potential and $\omega$ stands generically for the
$n_v$ values $\omega_i$. The solution of the renormalization
conditions Eqs.~(\ref{Eq:Renorm1}) and
(\ref{Eq:Renorm2}) requires the first and second derivatives of the
Coleman-Weinberg potential. The corresponding formulae have been derived in
\cite{Camargo-Molina:2016moz} and have been implemented in the
code. When a new model is added they can be obtained by calling
the functions {\tt WeinbergFirstDerivative} and {\tt WeinbergSecondDerivative}.
If no shifts to the finite parts are needed, {\it
i.e.}~if the $\overline{\mbox{MS}}$ scheme is applied, the
program will treat the finite parts of the counterterms as zero in the
new class corresponding to the new model.
\subsection{The Thermal Corrections}
The temperature-dependent potential $V^{T}(\omega,T)$ is given by
\cite{Dolan:1973qd,Quiros:1999jp}
\begin{align}
V^T (\omega,T) &= \summe{X={S,G,F}}{} (-1)^{2 s_X}
(1 + 2 s_X)\frac{T^4}{2\pi^2} J_{\pm}\left(\Lambda^{xy}_{(X)}/T^2
\right) \;, \label{Eq:TempPot}
\end{align}
with the functions $J_-$ for bosons and $J_+$ for fermions, respectively, reading
\begin{align}
J_{\pm}\left(\Lambda_{(X)}^{xy}/T^2\right) &= \mathrm{Tr}\left[ \bint{0}{\infty} \,\mathrm{dk}\,
k^2 \log\left[ 1 \pm \exp\left( -\sqrt{k^2 + \Lambda^{xy}_{(X)}/T^2}\right) \right] \right]
\,.
\end{align}
Furthermore we have to calculate the daisy corrections \cite{Carrington:1991hz}
$\Pi_{(S)}^{ij}$ and $\Pi_{(G)}^{ab}$ to the masses of the scalars and
gauge bosons, respectively. They are given by
\begin{align}
\Pi_{(S)}^{ij} =& \frac{T^2}{12} \left[ \left(-1\right)^{2s_S} \left(1+2s_S\right)
\summe{k=1}{n_{\text{Higgs}}} L^{ijkk} + \left(-1\right)^{2s_G} \left(1+2s_G\right)
\summe{a=1}{n_{\text{gauge}}} G^{aaij} \right. \notag \\
& \left. + \left(-1\right)^{2s_F} \left(1+2s_F\right) \frac{1}{2}
\summe{I,J=1}{n_{\text{fermion}}} \left(Y^{\ast
IJi}Y_{IJ}^{\; j} + Y^{\ast IJj}Y_{IJ}^{\; i} \right) \right] \label{eq:pis}\\
\Pi_{(G)}^{ab} =& T^2 \frac{2}{3} \left(\frac{\tilde{n}_H}{8} + 5 \right) \frac{1}{\tilde{n}_H}
\summe{m=1}{n_{\text{Higgs}}} \Lambda^{aamm}_{(G)}
\delta_{ab} \,, \label{eq:pigauge}
\end{align}
where only the longitudinal modes of the gauge bosons get the daisy
corrections and $\tilde{n}_H \le n_{\text{Higgs}}$ is the number of
Higgs fields coupling to the gauge bosons.
The tensors $L^{ijkk}$, $Y^{IJi}$ and $G^{aaij}$ have
been introduced in Eq.~(\ref{Eq:LS_Classical}),
Eq.~(\ref{Eq:LF_Classical}) and Eq.~(\ref{Eq:LG_Classical}), respectively. The
tensor $\Lambda^{aamm}_{(G)}$ has been defined in Eq.~(\ref{eq:gabijdef}).
There are two methods to evaluate these corrections.
\begin{itemize}
\item According to the Arnold-Espinosa method \cite{Arnold:1992rz}
one makes the replacement
\begin{align}
V^T (\omega,T) &\to V^T (\omega,T) + V_{\text{daisy}}(\omega,T) \label{eq:AE1}\,,\\
V_{\text{daisy}}(\omega,T) &= -\frac{T}{12\pi} \left[ \summe{i=1}{n_{\text{Higgs}}}
\left((\overline{m}^2_i)^{3/2} - (m_i^2)^{3/2}\right) +\summe{a=1}{n_{\text{gauge}}}
\left((\overline{m}^2_a)^{3/2} - (m_a^2)^{3/2}
\right) \right] \label{eq:AE2}
\end{align}
where
$m_i^2,\overline{m}_i^2, m_a^2, \overline{m}_a^2$ are the eigenvalues
of $\Lambda_{(S)}^{ij}, \Lambda_{(S)}^{ij} + \Pi^{ij}_{(S)} ,
\Lambda_{(G)}^{ab}, \Lambda_{(G)}^{ab} +
\Pi^{ab}_{(G)}$. Note that only the longitudinal modes of the gauge
bosons receive the thermal corrections $\Pi^{ab}_{(G)}$.
Note also that $V^T (\omega,T)$ itself only depends
on the masses without the thermal corrections; a short numerical sketch of the daisy term $V_{\text{daisy}}$ is given below this list.
\item In the Parwani method \cite{Parwani:1991gq}, on the other hand, one replaces
\begin{align}
\Lambda_{(S)}^{ij} &\to \Lambda_{(S)}^{ij} + \Pi^{ij}_{(S)} \label{eq:Parwani1}
\end{align}
in \eqref{Eq:CWPot} and \eqref{Eq:TempPot} and also
\begin{align}
\Lambda_{(G)}^{ab} &\to \Lambda_{(G)}^{ab} + \Pi^{ab}_{(G)} \label{eq:Parwani2}
\end{align}
for the longitudinal modes. The Debye corrected masses are hence also
used in $V^{\text{CW}}$.
\end{itemize}
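As referred to above, a minimal numerical sketch of the daisy term of Eq.~(\ref{eq:AE2}) is the following. It assumes that the eigenvalues with and without the thermal corrections have already been computed, and it sets negative eigenvalues to zero, which is an assumption of this sketch rather than the prescription of the code.
\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <vector>

// Daisy contribution of the Arnold-Espinosa method, Eq. (eq:AE2):
//   V_daisy = -T/(12 pi) * sum_i [ (mbar_i^2)^{3/2} - (m_i^2)^{3/2} ],
// where m2 holds the eigenvalues of Lambda and m2Debye those of Lambda + Pi
// (scalars and longitudinal gauge bosons). Negative eigenvalues are clamped
// to zero here (an assumption of this sketch).
double VDaisyArnoldEspinosa(const std::vector<double>& m2,
                            const std::vector<double>& m2Debye, double T) {
  const double pi = 4.0 * std::atan(1.0);
  double sum = 0.0;
  for (std::size_t i = 0; i < m2.size(); ++i)
    sum += std::pow(std::max(m2Debye[i], 0.0), 1.5)
         - std::pow(std::max(m2[i], 0.0), 1.5);
  return -T / (12.0 * pi) * sum;
}
\end{verbatim}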
\subsection{Treatment of $J_\pm$ \label{Sec:Jpm}}
The numerical evaluation of\footnote{For a recent {\tt C++} library
for the computation of these functions, see \cite{Fowlie:2018eiu}.}
\begin{align}
J_\pm(x^2) &= \bint{0}{\infty} \mathrm{dk}\, k^2 \log\left[ 1 \pm
\exp\left(-\sqrt{k^2+x^2}\right)\right]
\end{align}
is very time consuming and
therefore we use the series expansions in small $x^2 = m^2/T^2$,
\begin{align}
J_{+,s}(x^2,n) =& - \frac{7\pi^4}{360} + \frac{\pi^2}{24} x^2 +\frac{1}{32} x^4
\left(\log x^2 - c_+\right) \notag
\\ &- \pi^2 x^2 \summe{l=2}{n} \left( - \frac{1}{4\pi^2} x^2\right)^l
\frac{\left(2l-3\right)!!\zeta\left(2l-1\right)}{\left(2l\right)!!\left(l+1\right)}
\left(2^{2l-1}-1\right) \\
J_{-,s}(x^2,n) =& - \frac{\pi^4}{45} + \frac{\pi^2}{12} x^2 - \frac{\pi}{6} \left(x^2
\right)^{3/2} - \frac{1}{32} x^4 \left(\log x^2 - c_-\right) \notag \\ &+
\pi^2x^2 \summe{l=2}{n} \left(-\frac{1}{4\pi^2} x^2\right)^l\frac{\left(2l-3\right)!!
\zeta\left(2l-1\right)}{\left(2l\right)!!\left(l+1\right)} \,,
\end{align}
with
\begin{align}
c_+ &= \frac{3}{2} + 2\log \pi - 2\gamma_E \\
c_- &= c_+ + 2\log 4 \,,
\end{align}
where $\gamma_E$ denotes the Euler-Mascheroni constant, $\zeta(x)$ the
Riemann $\zeta$-function and $(x)!!$ the double factorial. For large $x^2$ we use
\begin{align}
J_{\pm,l}(x^2,n) &= -\exp\left(-\left(x^2\right)^{1/2}\right)\left(\frac{\pi}{2} \left(x^2
\right)^{3/2}\right)^{1/2} \summe{l=0}{n} \frac{1}{2^l l!}
\frac{\Gamma\left(5/2+l\right)}{\Gamma\left(5/2-l\right)} \left(x^2\right)^{-l/2} \,.
\end{align}
With
\begin{align}
x_+^2 &= 2.2161\,, \qquad \delta_+ = -0.015603 \,, \\
x_-^2 &= 9.4692\,, \qquad \delta_- = 0.0063109 \,,
\end{align}
we then calculate $J_\pm$ as
\begin{align}
J_+(x^2) &= \begin{cases} - J_{\pm,l}(x^2,3) & x^2 \geq x_+^2 \\
-\left(J_{+,s}(x^2,4) + \delta_+ \right) & x^2 < x_+^2 \end{cases} \\
J_-(x^2) &= \begin{cases} J_{\pm,l}(x^2,3) & x^2 \geq x_-^2 \\ J_{-,s}(x^2,3) + \delta_-
& x^2 < x_-^2 \end{cases} \;.
\end{align}
The shifts $\delta_\pm$ arise because we choose the intersection point of
$J_s$ and $J_l$ to be such that the derivatives are continuous, and
these shifts then enforce the functions themselves to be continuous.
In the course of the scan over the parameter space it can happen that
the bosonic masses become negative, so that $J_-(x^2)$ will be called
for $x^2 < 0$.
In this case, only the real part of the function is taken \cite{Weinberg:1987vp}. In
practice, the integral is evaluated numerically from
$x^2=0$ down to $x^2 = -3000$ in steps of 1.
In the minimization procedure the result obtained from the linear
interpolation between these points is then used. \newline \vspace*{-3.5mm}
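The following {\tt C++} sketch implements the fermionic expansions $J_{+,s}$ and $J_{\pm,l}$ and the piecewise definition of $J_+$ quoted above (the bosonic case is completely analogous); it uses the C++17 special function {\tt std::riemann\_zeta}. The function names are ours, and the tabulated numerical integration used in the code for $x^2<0$ is not reproduced here.
\begin{verbatim}
#include <cmath>

// Small-x^2 expansion J_{+,s}(x^2, n) for fermions.
double JpSmall(double x2, int n) {
  const double pi = 4.0 * std::atan(1.0);
  const double gammaE = 0.57721566490153286;
  const double cp = 1.5 + 2.0 * std::log(pi) - 2.0 * gammaE;
  double J = -7.0 * std::pow(pi, 4) / 360.0 + pi * pi / 24.0 * x2
             + x2 * x2 / 32.0 * (std::log(x2) - cp);
  double sum = 0.0;
  for (int l = 2; l <= n; ++l) {
    double dfOdd = 1.0, dfEven = 1.0;                 // (2l-3)!! and (2l)!!
    for (int k = 2 * l - 3; k > 1; k -= 2) dfOdd *= k;
    for (int k = 2 * l;     k > 1; k -= 2) dfEven *= k;
    sum += std::pow(-x2 / (4.0 * pi * pi), l) * dfOdd
           * std::riemann_zeta(2.0 * l - 1.0)
           / (dfEven * (l + 1.0)) * (std::pow(2.0, 2 * l - 1) - 1.0);
  }
  return J - pi * pi * x2 * sum;
}

// Large-x^2 expansion J_{pm,l}(x^2, n), identical for bosons and fermions.
double JLarge(double x2, int n) {
  const double pi = 4.0 * std::atan(1.0);
  double sum = 0.0;
  for (int l = 0; l <= n; ++l)
    sum += 1.0 / (std::pow(2.0, l) * std::tgamma(l + 1.0))
           * std::tgamma(2.5 + l) / std::tgamma(2.5 - l)
           * std::pow(x2, -0.5 * l);
  return -std::exp(-std::sqrt(x2)) * std::sqrt(pi / 2.0 * std::pow(x2, 1.5)) * sum;
}

// Piecewise definition of J_+ with matching point x_+^2 and shift delta_+.
double Jp(double x2) {
  const double x2p = 2.2161, deltap = -0.015603;
  return (x2 >= x2p) ? -JLarge(x2, 3) : -(JpSmall(x2, 4) + deltap);
}
\end{verbatim}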
\begin{figure}[t]
\centering
\subfigure[The integral $J_-$ for the bosons and $x^2\geq
0$.]{\includegraphics[scale=0.5]{bosoncompare.pdf}} \qquad
\subfigure[The integral $J_+$ for the
fermions.]{\includegraphics[scale=0.5]{fermioncompare.pdf}} \\
\subfigure[The relative difference in $J_-$ for the series expansion
($S$) and the numerical
evaluation ($I$).]{\includegraphics[scale=0.5]{bosoncompare_Diff.pdf}}
\qquad
\subfigure[The relative difference in
$J_+$.]{\includegraphics[scale=0.5]{fermioncompare_Diff.pdf}}
\caption{
Comparison between the numerical integration and the series
expansion of $J_\pm(x^2)$ around $x_\pm^2$. \label{fig:CompareJpm}}
\end{figure}
In Fig.~\ref{fig:CompareJpm} (upper) we show the series expansion
around the transition point $x_-^2$ ($x_+^2$) for $J_-$ ($J_+$) on the
left (right) side compared to the numerical evaluation of $J_-$
($J_+$). The plots in the lower row show the relative difference
between the series expansion $S$ and the numerical evaluation $I$,
$(S-I)/I$ in per cent. As can be inferred from
Fig.~\ref{fig:CompareJpm} (c), it does not exceed
1\% in case of bosons, and for fermions it exceeds $1\%$ only around
the transition point, {\it cf.}~Fig.~\ref{fig:CompareJpm} (d).
\subsection{The Minimization of the Effective
Potential \label{sec:minimize}}
For the EWPT to be considered of strong first order, the ratio of the VEV $v_c$ at
the critical temperature $T_c$ to $T_c$ itself has to fulfill $v_c/T_c > 1$. The
value $v$ of the VEV at a given temperature $T$ is obtained as
\begin{eqnarray}
v(T) = \left(\sum_{k=1}^{\tilde{n}_H} \bar{\omega}_k^2
\right)^{\frac{1}{2}} \;.
\label{eq:ewvev}
\end{eqnarray}
Here, $\tilde{n}_H$ means that the sum is performed over all
directions in field space in which we allow for the development of a
non-zero {\it electroweak} VEV, {\it i.e.}~the VEV for fields that couple to the EW gauge
bosons. We hence do not include here the VEV that a gauge
singlet field (as it appears for example in the Next-to-2HDM)
develops. The $\bar{\omega}_k$ denote the field configurations that minimize
the loop-corrected effective potential. Therefore,
Eq.~(\ref{eq:ewvev}) is the VEV that coincides at $T=0$ with $v=246.22$~GeV.
The critical temperature $T_c$
is the temperature where two degenerate minima of the potential
exist. In order to determine $T_c$ the effective potential including
the counterterm potential, {\it cf}.~Eq.~(\ref{eq:fielconfig}), is
minimized numerically at a given temperature $T$. In case of a first
order EWPT the VEV jumps from $v= v_c$ at the temperature $T_c$ to
$v=0$ at $T > T_c$. For the minimization we use the algorithm {\tt
CMAES} as implemented in {\tt libcmaes} \cite{CMAES}, which
finds the global minimum of a given function. As termination criterion
we require the relative tolerance of the value of the effective
potential between two iterations to be below $10^{-5}$.
For the determination of $T_c$ we employ a bisection method in the
interval $T \in [0,300] \,\mathrm{GeV}$ until the interval containing
$T_c$ is smaller than $10^{-2}$ GeV. The temperature $T_c$ is then set
to the lower bound of the final interval. We exclude parameter points
for which the individual VEVs obtained from the next-to-leading order
(NLO) potential at $T=0$ deviate by more than 1~GeV from their input
values as well as parameter points where no PT is found for $T\le 300$~GeV.
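A minimal sketch of the bisection in the temperature described above is given below. The callback {\tt ewVevAtT}, assumed to return the electroweak VEV of Eq.~(\ref{eq:ewvev}) at the global minimum of the effective potential for a given temperature, is a placeholder for the actual {\tt CMAES} minimization and not part of the real interface.
\begin{verbatim}
#include <functional>

// Bisection for the critical temperature T_c in [0, 300] GeV: the interval is
// shrunk until it is smaller than 10^-2 GeV and its lower bound is returned.
// 'ewVevAtT' is a hypothetical callback returning v(T) at the global minimum;
// in practice one would compare against a small threshold instead of zero.
double CriticalTemperature(const std::function<double(double)>& ewVevAtT) {
  double Tlow = 0.0, Thigh = 300.0;
  if (ewVevAtT(Thigh) > 0.0) return -1.0;  // no phase transition found below 300 GeV
  while (Thigh - Tlow > 1e-2) {
    const double Tmid = 0.5 * (Tlow + Thigh);
    if (ewVevAtT(Tmid) > 0.0)
      Tlow = Tmid;    // broken phase at Tmid: T_c lies above
    else
      Thigh = Tmid;   // symmetric phase at Tmid: T_c lies below
  }
  return Tlow;        // T_c is set to the lower bound of the final interval
}
\end{verbatim}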
\section{Implemented Models \label{sec:implmodels}}
In this section we provide the tree-level potentials as well as the
counterterms for the already implemented models. For all implemented
models the code expects an input file that presents the input
parameters in the same way the program code {\tt ScannerS}
\cite{Coimbra:2013qq,scanners2} writes them into its default output files.
{\tt ScannerS} is a program that allows to perform extensive scans in
the parameter space of multi-Higgs models and checks for compatibility
with theoretical and experimental constraints. The viable parameter
points can then be fed into our program to investigate, {\it e.g.}, the compatibility
with a strong first order EWPT. So far, we have applied our code
for such an analysis in the CP-conserving or real 2HDM (R2HDM)
\cite{Basler:2016obg}, the CP-violating or complex 2HDM (C2HDM)
\cite{Basler:2017uxn} and the Next-to-2HDM (N2HDM)
\cite{jonasmueller}.
\subsection{The CP-Conserving 2HDM}
The tree-level Higgs potential of the CP-conserving 2HDM
\cite{Lee:1973iz,Branco:2011iw} with a softly broken $\mathbb{Z}_2$
symmetry, under which the two $SU(2)_L$ Higgs doublets
$\Phi_1$ and $\Phi_2$,
\begin{eqnarray}
\Phi_1 = \begin{pmatrix} \phi_1^+ \\ \phi_1^0 \end{pmatrix}
\quad \mbox{and} \quad
\Phi_2 = \begin{pmatrix} \phi_2^+ \\ \phi_2^0 \end{pmatrix} \;,
\end{eqnarray}
transform as $\Phi_1 \to \Phi_1$, $\Phi_2 \to - \Phi_2$, reads
\begin{eqnarray}
V^{(0)} &=& m_{11}^2 \Phi_1^\dagger \Phi_1 + m_{22}^2 \Phi_2^\dagger
\Phi_2 - m_{12}^2\left(\Phi_1^\dagger\Phi_2 + h.c. \right) + \frac{\lambda_1}{2}
\left(\Phi_1^\dagger\Phi_1\right)^ 2 + \frac{\lambda_2}{2}
\left(\Phi_2^\dagger\Phi_2\right)^2 \notag \\
&&+ \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right) +
\lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right) +
\frac{\lambda_5}{2} \left[\left(\Phi_1^\dagger\Phi_2\right)^2 + h.c. \right] \;.
\end{eqnarray}
The mass parameters $m_{11}^2$, $m_{22}^2$ and $m_{12}^2$ as well as the
quartic couplings $\lambda_1, \dots , \lambda_5$ are real. The parameters
$m_{12}^2$ and $\lambda_5$ can be complex in the CP-violating
2HDM. After EWSB the two Higgs doublets acquire VEVs $\bar{\omega}
\in \mathbb{R}$
about which the Higgs fields can be
expanded in terms of the charged field components $\rho_i$ and
$\eta_i$ and the neutral CP-even and CP-odd fields $\zeta_i$ and
$\psi_i$ ($i=1,2$),
\begin{eqnarray}
\Phi_1 &=& \frac{1}{\sqrt{2}} \begin{pmatrix} \rho_1+\mathrm{i}\eta_1 \\
\zeta_1 + \bar{\omega}_1 + \mathrm{i} \psi_1 \end{pmatrix} \\
\Phi_2 &=& \frac{1}{\sqrt{2}} \begin{pmatrix} \rho_2 + \bar{\omega}_{\text{CB}} +\mathrm{i}\eta_2
\\ \zeta_2+\bar{\omega}_2 +
\mathrm{i}\left(\psi_2+\bar{\omega}_{\text{CP}}\right) \end{pmatrix} \;.
\end{eqnarray}
In order to be as general as possible we also allow for
CP-violating ($\bar{\omega}_{\text{CP}}$) and charge-breaking
($\bar{\omega}_{\text{CB}}$) VEVs although the latter obviously is unphysical.
Note that without loss of generality we have rotated the complex part
of the VEVs to the second doublet exclusively. We denote the VEVs of
our present vacuum by ($i=1,2,\mbox{CP},\mbox{CB}$)
\begin{eqnarray}
v_i \equiv \bar{\omega}_i|_{T=0} \;,
\end{eqnarray}
with
\begin{eqnarray}
v_{\text{CP}} = v_{\text{CB}} = 0 \;,
\end{eqnarray}
while the remaining two VEVs are related to the SM VEV $v \approx
246.22$~GeV through
\begin{eqnarray}
v_1^2 + v_2^2 = v^2 \;.
\end{eqnarray}
By introducing the angle $\beta$ as
\begin{eqnarray}
\tan \beta = \frac{v_2}{v_1} \;
\end{eqnarray}
we have
\begin{eqnarray}
v_1 = v \cos\beta \quad \mbox{and} \quad v_2 = v \sin \beta \;.
\end{eqnarray}
The mixing angle $\beta$ is the rotation angle from the gauge to the
mass eigenstates in the charged and in the CP-odd sector,
respectively, while we call the mixing angle in the CP-even sector
$\alpha$,
\begin{eqnarray}
\left( \begin{array}{c} G^\pm \\ H^\pm \end{array} \right) = R(\beta)
\left( \begin{array}{c} \phi_1^\pm \\ \phi_2^\pm \end{array} \right)
\;, \quad
\left( \begin{array}{c} G^0 \\ A \end{array} \right) = R(\beta)
\left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right) \;,
\quad
\left( \begin{array}{c} H \\ h \end{array} \right) = R(\alpha)
\left( \begin{array}{c} \zeta_1 \\ \zeta_2 \end{array} \right) \;,
\end{eqnarray}
with
\begin{eqnarray}
R(x) = \left( \begin{array}{cc} \cos x & \sin x \\ - \sin x & \cos x \end{array}\right) \;.
\end{eqnarray}
We have five physical mass eigenstates, the light and heavy CP-even
Higgs bosons $h$ and $H$, the pseudoscalar $A$ and a charged Higgs
pair $H^\pm$, while $G^0$ and $G^\pm$ represent the neutral and
charged massless Goldstone bosons. The counterterm potential is given as
\begin{align}
V^{\text{CT}} =& \delta m_{11}^2 \Phi_1^\dagger \Phi_1 + \delta m_{22}^2
\Phi_2^\dagger \Phi_2 - \delta m_{12}^2\left(\Phi_1^\dagger\Phi_2 + \Phi_2^\dagger
\Phi_1\right) + \frac{\delta \lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^ 2 +
\frac{\delta \lambda_2}{2} \left(\Phi_2^\dagger\Phi_2\right)^2 \notag \\
&+ \delta\lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
+ \delta \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
+ \frac{\delta \lambda_5}{2} \left[ \left(\Phi_1^\dagger\Phi_2\right)^2 +
\left(\Phi_2^\dagger\Phi_1\right)^2\right] \notag \\
&+ \delta T_1\left(\zeta_1 + \omega_1 \right) + \delta
T_2\left(\zeta_2+\omega_2\right) + \delta
T_{\text{CP}} \left( \psi_2+\omega_{\text{CP}}\right) + \delta T_{\text{CB}}
\left(\rho_2 + \omega_{\text{CB}} \right) \,.
\end{align}
In order to avoid flavour-changing neutral currents (FCNC) at tree level, the
$\mathbb{Z}_2$ symmetry can also be extended to the Yukawa
sector \cite{Glashow:1976nt,Paschos:1976ay}. With four possible
$\mathbb{Z}_2$ charge assignments there are four different types of
2HDMs as summarized in Table~\ref{tab:chargassign}. \newline \vspace*{-3.5mm}
\begin{table}
\begin{center}
\begin{tabular}{rccc} \toprule
& $u$-type & $d$-type & leptons \\ \midrule
Type I & $\Phi_2$ & $\Phi_2$ & $\Phi_2$ \\
Type II & $\Phi_2$ & $\Phi_1$ & $\Phi_1$ \\
Lepton-Specific & $\Phi_2$ & $\Phi_2$ & $\Phi_1$ \\
Flipped & $\Phi_2$ & $\Phi_1$ & $\Phi_2$ \\ \bottomrule
\end{tabular}
\caption{The four Yukawa types of the softly broken
$\mathbb{Z}_2$-symmetric 2HDM. \label{tab:chargassign}}
\end{center}
\end{table}
The on-shell renormalization that we apply leads to the conditions
\begin{eqnarray}
\left.\partial_{\phi_i} V^{\text{CT}}\right|_{\phi =\langle \phi^c\rangle_{T=0}} &=&
-\left.\partial_{\phi_i} V^{\text{CW}}\right|_{\phi =\langle \phi^c \rangle_{T=0}} \\
\left.\partial_{\phi_i}\partial_{\phi_j} V^{\text{CT}}
\right|_{\phi=\langle \phi^c \rangle_{T=0}} &=& -\left.\partial_{\phi_i}
\partial_{\phi_j} V^{\text{CW}} \right|_{\phi=\langle\phi^c \rangle_{T=0}} \,,
\end{eqnarray}
where
\begin{eqnarray}
\phi_i \equiv \{ \rho_1, \eta_1, \rho_2, \eta_2, \zeta_1, \psi_1,
\zeta_2, \psi_2 \} \label{eq:vec2hdm}
\end{eqnarray}
and $\langle \phi^c \rangle_{T=0}$ denotes the field configuration in
the minimum at $T=0$,
\begin{eqnarray}
\langle \phi^c \rangle_{T=0} = (0,0,0,0,v_1,0,v_2,0) \;.
\end{eqnarray}
These conditions yield the counterterms
\begin{align}
\delta m_{11}^2 &= \frac{1}{2} H^{\text{CW}}_{\zeta_1,\zeta_1} +H^{\text{CW}}_{\psi_1,\psi_1} - \frac{5}{2}
H^{\text{CW}}_{\rho_1,\rho_1} + \frac{1}{2} \frac{v_2}{v_1}\left( H^{\text{CW}}_{\zeta_1,\zeta_2} -
H^{\text{CW}}_{\eta_1,\eta_2} \right) + t v_2^2 \\
\delta m_{22}^2 &= \frac{1}{2} \left( H^{\text{CW}}_{\zeta_2,\zeta_2} -3H^{\text{CW}}_{\eta_2,\eta_2} \right) +
\frac{1}{2} \frac{v_1}{v_2} \left( H^{\text{CW}}_{\zeta_1,\zeta_2} -
H^{\text{CW}}_{\eta_1,\eta_2}\right) + \frac{v_1^2}{v_2^2}\left( H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\rho_1,\rho_1}\right) + v_1^2 t \\
\delta m_{12}^2 &= H^{\text{CW}}_{\eta_1,\eta_2} + \frac{v_1}{v_2} \left(
H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\rho_1,\rho_1}\right) + v_1v_2 t \\
\delta \lambda_1 &= \frac{1}{v_1^2} \left[ 2 H^{\text{CW}}_{\rho_1,\rho_1} -
H^{\text{CW}}_{\zeta_1,\zeta_1} - H^{\text{CW}}_{\psi_1,\psi_1}\right] - \frac{v_2^2}{v_1^2} t \\
\delta \lambda_2 &= \frac{1}{v_2^2} \left[ H^{\text{CW}}_{\eta_2,\eta_2} -
H^{\text{CW}}_{\zeta_2,\zeta_2}\right] +\frac{v_1^2}{v_2^4} \left( H^{\text{CW}}_{\rho_1,\rho_1} -
H^{\text{CW}}_{\psi_1,\psi_1}\right) - \frac{v_1^2}{v_2^2} t \\
\delta \lambda_3 &= \frac{1}{v_1v_2} \left( H^{\text{CW}}_{\eta_1,\eta_2}-H^{\text{CW}}_{\zeta_1,\zeta_2}
\right) + \frac{1}{v_2^2}\left(H^{\text{CW}}_{\rho_1,\rho_1} -H^{\text{CW}}_{\psi_1,\psi_1}\right) - t \\
\delta \lambda_4 &= t \\
\delta \lambda_5 &= \frac{2}{v_2^2}
\left(H^{\text{CW}}_{\psi_1,\psi_1}-H^{\text{CW}}_{\rho_1,\rho_1} \right) + t \\
\delta T_1 &= H^{\text{CW}}_{\eta_1,\eta_2} v_2 + H^{\text{CW}}_{\rho_1,\rho_1} v_1 - N^{\text{CW}}_{\zeta_1} \\
\delta T_2 &= H^{\text{CW}}_{\eta_1,\eta_2} v_1 + H^{\text{CW}}_{\eta_2,\eta_2} v_2 - N^{\text{CW}}_{\zeta_2} \\
\delta T_{\text{CP}} &= \frac{v_1^2}{v_2} H^{\text{CW}}_{\zeta_1,\psi_1} + H^{\text{CW}}_{\zeta_1,\psi_2} v_1 -
N^{\text{CW}}_{\psi_2} \\
\delta T_{\text{CB}} &= -N^{\text{CW}}_{\rho_2} \,,
\end{align}
where we used
\begin{align}
N^{\text{CW}}_{\phi} &= \left.\partial_{\phi} V^{\text{CW}}\right|_{\phi=\langle
\phi^c\rangle_{T=0}} \label{eq:nabbr} \\
H^{\text{CW}}_{\phi_1,\phi_2} &=\left.\partial_{\phi_1}\partial_{\phi_2} V^{\text{CW}}
\right|_{\phi_=\langle \phi^c \rangle_{T=0}} \,. \label{eq:habbr}
\end{align}
Having fewer renormalization constants than renormalization conditions,
the system of equations is overconstrained. Its consistent solution
is given by the following identities
\begin{align}
0 &= H^{\text{CW}}_{\eta_1,\eta_1} - H^{\text{CW}}_{\rho_1,\rho_1} \\
0 &= H^{\text{CW}}_{\eta_1,\eta_2} - H^{\text{CW}}_{\rho_1,\rho_2} \\
0 &= H^{\text{CW}}_{\eta_2,\eta_2} - H^{\text{CW}}_{\rho_2,\rho_2} \\
0 &= H^{\text{CW}}_{\psi_1,\psi_2} - H^{\text{CW}}_{\eta_1,\eta_2} + \frac{v_1}{v_2}
\left( H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\rho_1,\rho_1} \right) \\
0 &= H^{\text{CW}}_{\psi_2,\psi_2} - H^{\text{CW}}_{\eta_2,\eta_2} +\frac{v_1^2}{v_2^2}
\left( H^{\text{CW}}_{\rho_1,\rho_1} - H^{\text{CW}}_{\psi_1,\psi_1}\right) \,,
\end{align}
leading to a one-dimensional solution space parametrized by the
parameter $t \in \mathbb{R}$. In the code $t$ is chosen such that
\begin{align}
\delta \lambda_4 &= 0 \,.
\end{align}
Note that the renormalization constants $\delta T_{\text{CP}}$
and $\delta T_{\text{CB}}$ always turn out to be zero as we do not have
CP violation\footnote{We set the CKM matrix to unity and
hence do not have explicit CP violation in the model.} nor
charge breaking. \newline \vspace*{-3.5mm}
For the eight parameters of the Higgs potential we can either
choose a more 'physics' inspired set involving the masses of the physical
Higgs bosons or a pure 'parametric' input set. The code requires the
'parametric' input based on $\lambda_{1...5}, m_{12}^2$ and
$\tan\beta$\footnote{The eighth parameter is the SM VEV $v$ that is
hard-coded in the program.}, which has to be given in the order
\begin{eqnarray}
\mbox{type} \;, \; \lambda_1 \;, \; \lambda_2 \;, \; \lambda_3 \;, \;
\lambda_4 \;, \; \lambda_5 \;, \; m_{12}^2 \;, \; \tan\beta \;.
\end{eqnarray}
The user furthermore has to specify through $\mbox{type} = 1,...,4$
the type of the 2HDM to be applied,
as given in Table~\ref{tab:chargassign} where $\mbox{type} = 1,...,4$
corresponds to type I, type II, lepton-specific and flipped.
Note that the minimum conditions of the potential lead to the following
relations among the parameters
\begin{align}
m_{11}^2 &= m_{12}^2 \frac{v_2}{v_1} - \frac{v_1^2}{2} \lambda_1 -
\frac{v_2^2}{2} \left(\lambda_3+\lambda_4+\lambda_5\right) \\
m_{22}^2 &= m_{12}^2 \frac{v_1}{v_2} - \frac{v_2^2}{2} \lambda_2 -
\frac{v_1^2}{2} \left(\lambda_3+\lambda_4+ \lambda_5\right) \,.
\end{align}
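For illustration, the following short {\tt C++} sketch evaluates these two relations from the 'parametric' input of the code; the struct and function names are ours and not part of the program.
\begin{verbatim}
#include <cmath>

// Dependent mass parameters of the CP-conserving 2HDM obtained from the
// 'parametric' input (lambda_1..lambda_5, m12^2, tan(beta)) and the SM VEV,
// using the minimum conditions quoted above.
struct MassParameters { double m11sq; double m22sq; };

MassParameters DependentMassParameters(double l1, double l2, double l3,
                                       double l4, double l5, double m12sq,
                                       double tanbeta, double v = 246.22) {
  const double beta = std::atan(tanbeta);
  const double v1 = v * std::cos(beta);
  const double v2 = v * std::sin(beta);
  const double l345 = l3 + l4 + l5;
  MassParameters p;
  p.m11sq = m12sq * v2 / v1 - 0.5 * v1 * v1 * l1 - 0.5 * v2 * v2 * l345;
  p.m22sq = m12sq * v1 / v2 - 0.5 * v2 * v2 * l2 - 0.5 * v1 * v1 * l345;
  return p;
}
\end{verbatim}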
\subsection{The CP-violating 2HDM \label{sec:C2HDM}}
Incorporating the softly broken ${\mathbb Z}_2$ symmetry to avoid
FCNC at tree-level (implying the same four different types of 2HDM as
in the CP-conserving case, {\it cf.}~Table~\ref{tab:chargassign}), the
tree-level Higgs potential of the C2HDM
\cite{Ginzburg:2002wt}\footnote{For recent phenomenological analyses,
see \cite{Fontes:2014xva,c2hdmpheno,phenocomp}.} reads
\begin{align}
\begin{split}
V^{(0)} &= m_{11}^2 \Phi_1^\dagger \Phi_1 + m_{22}^2
\Phi_2^\dagger \Phi_2 - \left[m_{12}^2 \Phi_1^\dagger \Phi_2 +
\mathrm{h.c.} \right] + \frac{1}{2} \lambda_1 ( \Phi_1^\dagger
\Phi_1)^2 +\frac{1}{2} \lambda_2 (\Phi_2^\dagger \Phi_2)^2 \\
&\quad + \lambda_3 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger\Phi_2) +
\lambda_4 (\Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1)
+ \left[ \frac{1}{2} \lambda_5 (\Phi_1^\dagger\Phi_2)^2 +
\mathrm{h.c.} \right] \; .
\end{split}\label{eq:treec2hdmpot}
\end{align}
In contrast to the CP-conserving 2HDM, the two parameters $m_{12}^2$ and $\lambda_5$
can now be complex. If $\mbox{arg}(m_{12}^2)=\mbox{arg}(\lambda_5)$
the complex phases of these two parameters can be absorbed by a basis
transformation. If additionally the VEVs of the doublets are assumed
to be real, we have the real 2HDM. Otherwise, we are in the C2HDM. In
the following, we will adopt the conventions of \cite{Fontes:2014xva}.
After EWSB the two Higgs doublets develop VEVs and allowing for the
most general vacuum configuration, the expansion about the minimum
reads
\begin{eqnarray}
\Phi_1 &=& \frac{1}{\sqrt{2}} \begin{pmatrix} \rho_1+\mathrm{i}\eta_1 \\
\zeta_1 + \bar{\omega}_1 + \mathrm{i} \psi_1 \end{pmatrix} \\
\Phi_2 &=& \frac{1}{\sqrt{2}} \begin{pmatrix} \rho_2 + \bar{\omega}_{\text{CB}} +\mathrm{i}\eta_2
\\ \zeta_2+\bar{\omega}_2 +
\mathrm{i}\left(\psi_2+\bar{\omega}_{\text{CP}}\right) \end{pmatrix} \;.
\end{eqnarray}
After introducing
\begin{eqnarray}
\zeta_3 = - \psi_1 \sin\beta + \psi_2 \cos \beta
\end{eqnarray}
the neutral mass eigenstates $H_i$ ($i=1,2,3$) are obtained from the
C2HDM basis $\zeta_1,$ $\zeta_2$ and $\zeta_3$ through the rotation
\begin{eqnarray}
\left( \begin{array}{c} H_1 \\ H_2 \\ H_3 \end{array} \right) =
R \left( \begin{array}{c} \zeta_1 \\ \zeta_2 \\ \zeta_3 \end{array} \right) \;.
\end{eqnarray}
The rotation matrix $R$ can be parametrized in terms of three mixing
angles $\alpha_i$ ($i=1,2,3$) with $-\pi/2 \le \alpha_i < \pi/2$ as
\begin{eqnarray}
R = \left( \begin{array}{ccc} c_1 c_2 & s_1 c_2 & s_2 \\ - (c_1 s_2
s_3 + s_1 c_3) & c_1 c_3 - s_1 s_2 s_3 & c_2 s_3 \\ -c_1 s_2 c_3 +
s_1 s_3 & - (c_1 s_3 + s_1 s_2 c_3) & c_2 c_3 \end{array} \right) \;.
\end{eqnarray}
All neutral Higgs bosons mix and have no definite CP quantum
number. The masses are obtained from the diagonalization of the mass
matrix, derived from the Higgs potential, and the conventions are such
that $m_{H_1} \le m_{H_2} \le m_{H_3}$.
The charged sector does not change with respect to
the CP-conserving 2HDM, and the mixing angle diagonalizing the charged
mixing matrix is given by $\beta$. \newline \vspace*{-3.5mm}
The counterterm potential reads
\begin{align}
V^{\text{CT}} =& \delta m_{11}^2 \Phi_1^\dagger \Phi_1 + \delta m_{22}^2
\Phi_2^\dagger \Phi_2 - \left(\delta \mbox{Re}(m_{12}^2) + \mathrm{i}
\delta \mbox{Im}(m_{12}^2)\right) \Phi_1^\dagger\Phi_2 -
\left(\delta \mbox{Re}(m_{12}^2) - \mathrm{i} \delta \mbox{Im}(m_{12}^2)\right)
\Phi_2^\dagger \Phi_1 \notag \\
&+ \frac{\delta \lambda_1}{2} \left(\Phi_1^\dagger\Phi_1\right)^ 2 +
\frac{\delta \lambda_2}{2} \left(\Phi_2^\dagger\Phi_2\right)^2 +
\delta\lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
+ \delta \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
\notag \\
&+ \frac{1}{2} \left( \delta \mbox{Re}(\lambda_5) + \mathrm{i}\delta
\mbox{Im}(\lambda_5) \right) \left(\Phi_1^\dagger\Phi_2\right)^2 +
\frac{1}{2} \left(\delta \mbox{Re}(\lambda_5) -\mathrm{i}\delta
\mbox{Im}(\lambda_5) \right)
\left(\Phi_2^\dagger\Phi_1\right)^2 \notag \\
&+ \delta T_1\left(\zeta_1+\omega_1\right)+\delta
T_2\left(\zeta_2+\omega_2\right) + \delta
T_{\text{CP}} \left(\psi_2+\omega_{\text{CP}}\right) + \delta
T_{\text{CB}}\left(\rho_2+\omega_{\text{CB}}\right) \,.
\end{align}
Using
\begin{eqnarray}
\phi_i \equiv \{ \rho_1, \eta_1, \rho_2, \eta_2, \zeta_1, \psi_1,
\zeta_2, \psi_2 \} \label{eq:vecc2hdm}
\end{eqnarray}
the 'on-shell' renormalization conditions yield
\begin{eqnarray}
\left.\partial_{\phi_i} V^{\text{CT}}\right|_{\phi =\langle \phi^c\rangle_{T=0}} &=&
-\left.\partial_{\phi_i} V^{\text{CW}}\right|_{\phi =\langle \phi^c \rangle_{T=0}} \\
\left.\partial_{\phi_i}\partial_{\phi_j} V^{\text{CT}}
\right|_{\phi=\langle \phi^c \rangle_{T=0}} &=& -\left.\partial_{\phi_i}
\partial_{\phi_j} V^{\text{CW}} \right|_{\phi=\langle\phi^c \rangle_{T=0}} \,,
\end{eqnarray}
with
\begin{eqnarray}
\langle \phi^c \rangle_{T=0} = (0,0,0,0,v_1,0,v_2,0) \;,
\end{eqnarray}
and lead to the counterterms
\begin{align}
\delta m_{11}^2 &= \frac{1}{2} \left[ H^{\text{CW}}_{\zeta_1,\zeta_1} -2H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\eta_1,\eta_2}\frac{v_2}{v_1} +H^{\text{CW}}_{\zeta_1,\zeta_2}\frac{v_2}{v_1} -
H^{\text{CW}}_{\rho_1,\rho_1} \right] +v_2^2 t \label{eq:delbeg} \\
\delta m_{22}^2 &= \left[ - \frac{1}{2} \frac{v_1}{v_2} \left( H^{\text{CW}}_{\eta_1,\eta_2} -
H^{\text{CW}}_{\zeta_1,\zeta_2} \right) + \frac{v_1^2}{v_2^2} \left(H^{\text{CW}}_{\rho_1,\rho_1} -
H^{\text{CW}}_{\psi_1,\psi_1}\right) - \frac{3}{2}H^{\text{CW}}_{\eta_2,\eta_2} + \frac{1}{2}
H^{\text{CW}}_{\zeta_2,\zeta_2} \right] + v_1^2 t \\
\delta\Re \,(m_{12}^2) &= \left[ H^{\text{CW}}_{\eta_1,\eta_2} -
\frac{v_1}{v_2} H^{\text{CW}}_{\psi_1,\psi_1} +
\frac{v_1}{v_2} H^{\text{CW}}_{\rho_1,\rho_1}\right] + v_1v_2 t \\
\delta \lambda_1 &= \frac{1}{v_1^2} \left[ H^{\text{CW}}_{\psi_1,\psi_1} -H^{\text{CW}}_{\zeta_1,\zeta_1}
\right] -\frac{v_2^2}{v_1^2} t\\
\delta \lambda_2 &= \frac{1}{v_2^2} \left[ H^{\text{CW}}_{\eta_2,\eta_2} - H^{\text{CW}}_{\zeta_2,\zeta_2}
+ \frac{v_1^2}{v_2^2} \left( H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\rho_1,\rho_1}\right)\right] -
\frac{v_1^2}{v_2^2} t \\
\delta\lambda_3 &= \frac{1}{v_1v_2} \left[ H^{\text{CW}}_{\eta_1,\eta_2} -H^{\text{CW}}_{\zeta_1,\zeta_2}
+ \frac{v_1}{v_2} \left(H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\rho_1,\rho_1}\right)\right] -t\\
\delta\lambda_4 &= \frac{2}{v_2^2} \left[ H^{\text{CW}}_{\rho_1,\rho_1} - H^{\text{CW}}_{\psi_1,\psi_1}
\right] +t \\
\delta\Re \,(\lambda_5) &= t \label{eq:delend} \\
\delta\Im \,(\lambda_5) &= -\frac{2}{v_2^2}H^{\text{CW}}_{\zeta_1,\psi_1} \\
\delta\Im \,(m_{12}^2) &= -\left[ H^{\text{CW}}_{\zeta_1,\psi_2} + 2\frac{v_1}{v_2}
H^{\text{CW}}_{\zeta_1,\psi_1} \right] \\
\delta T_1 &= H^{\text{CW}}_{\eta_1,\eta_2} v_2 + H^{\text{CW}}_{\rho_1,\rho_1} v_1- N^{\text{CW}}_{\zeta_1} \\
\delta T_2 &= H^{\text{CW}}_{\eta_1,\eta_2} v_1 + H^{\text{CW}}_{\eta_2,\eta_2} v_2 - N^{\text{CW}}_{\zeta_2} \\
\delta T_{\text{CP}} &= \frac{v_1^2}{v_2} H^{\text{CW}}_{\zeta_1,\psi_1} + H^{\text{CW}}_{\zeta_1,\psi_2} v_1
- N^{\text{CW}}_{\psi_2} \\
\delta T_{\text{CB}} &= -N^{\text{CW}}_{\rho_2} \;,
\end{align}
where we used the abbreviations Eqs.~(\ref{eq:nabbr}) and (\ref{eq:habbr}).
Again, the system of equations is overconstrained. Its one-dimensional
solution space is parametrized by $t \in \mathbb{R}$ which we
have chosen such that
\begin{align}
\delta \lambda_4 &= 0 \,.
\end{align}
With this choice Eqs.~(\ref{eq:delbeg})--(\ref{eq:delend}) simplify to
\begin{align}
\delta m_{11}^2 &= \frac{1}{2} \left[ H_{\zeta_1,\zeta_1}^{\text{CW}} +2H_{\psi_1,\psi_1}^{\text{CW}}
- \frac{v_2}{v_1} \left(H^{\text{CW}}_{\eta_1,\eta_2}-H^{\text{CW}}_{\zeta_1,\zeta_2}\right)
- 5H^{\text{CW}}_{\rho_1,\rho_1} \right]\\
\delta m_{22}^2 &= \frac{1}{2} \left[ \frac{v_1}{v_2} \left(H^{\text{CW}}_{\zeta_1,\zeta_2} -
H^{\text{CW}}_{\eta_1,\eta_2}\right) - \frac{v_1^2}{v_2^2}\left(H^{\text{CW}}_{\rho_1,\rho_1}-
H^{\text{CW}}_{\psi_1,\psi_1}\right)- 3H^{\text{CW}}_{\eta_2,\eta_2} + H^{\text{CW}}_{\zeta_2,\zeta_2}\right] \\
\delta\Re \,(m_{12}^2) &= \frac{v_1}{v_2} \left(H^{\text{CW}}_{\psi_1,\psi_1} -H^{\text{CW}}_{\rho_1,\rho_1}
\right) +H^{\text{CW}}_{\eta_1,\eta_2} \\
\delta \lambda_1 &= \frac{1}{v_1^2} \left( 2H^{\text{CW}}_{\rho_1,\rho_1} -H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\zeta_1,\zeta_1}\right) \\
\delta \lambda_2 &= \frac{1}{v_2^2} \left[ \frac{v_1^2}{v_2^2}\left(H^{\text{CW}}_{\rho_1,\rho_1}
-H^{\text{CW}}_{\psi_1,\psi_1}\right)+ H^{\text{CW}}_{\eta_2,\eta_2} -H^{\text{CW}}_{\zeta_2,\zeta_2}\right] \\
\delta \lambda_3 &= \frac{1}{v_1v_2^2} \left[ \left(H^{\text{CW}}_{\rho_1,\rho_1} -
H^{\text{CW}}_{\psi_1,\psi_1}\right) v_1 + \left(H^{\text{CW}}_{\eta_1,\eta_2} -H^{\text{CW}}_{\zeta_1,\zeta_2}
\right)v_2 \right]
\\
\delta \lambda_4 &= 0 \\
\delta \Re \,(\lambda_5) &= \frac{2}{v_2^2} \left(H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\rho_1,\rho_1}\right) \;,
\end{align}
where we have applied the identities needed for the consistent solution,
\begin{align}
0 &= H^{\text{CW}}_{\eta_1,\eta_1} - H^{\text{CW}}_{\rho_1,\rho_1} \\
0 &= H^{\text{CW}}_{\eta_1,\eta_2} - H^{\text{CW}}_{\rho_1,\rho_2} \\
0 &= H^{\text{CW}}_{\eta_2,\eta_2} - H^{\text{CW}}_{\rho_2,\rho_2} \\
0 &= H^{\text{CW}}_{\psi_1,\psi_2} - H^{\text{CW}}_{\eta_1,\eta_2} +\frac{v_1}{v_2} \left(
H^{\text{CW}}_{\psi_1,\psi_1} -H^{\text{CW}}_{\rho_1,\rho_1} \right) \\
0 &= H^{\text{CW}}_{\psi_2,\psi_2} - H^{\text{CW}}_{\eta_2,\eta_2} +\frac{v_1^2}{v_2^2} \left(
H^{\text{CW}}_{\rho_1,\rho_1} - H^{\text{CW}}_{\psi_1,\psi_1}\right) \\
0 &= H^{\text{CW}}_{\zeta_1,\psi_2} v_2 + v_1 H^{\text{CW}}_{\zeta_1,\psi_1} + N^{\text{CW}}_{\psi_1} \\
0 &= H^{\text{CW}}_{\rho_1,\eta_2} - H^{\text{CW}}_{\zeta_1,\psi_2} -\frac{v_1}{v_2} H^{\text{CW}}_{\zeta_1,\psi_1} \\
0 &= H^{\text{CW}}_{\zeta_1,\psi_2} + H^{\text{CW}}_{\eta_1,\rho_2} + \frac{v_1}{v_2}
H^{\text{CW}}_{\zeta_1,\psi_1} \\
0 &= H^{\text{CW}}_{\psi_1,\zeta_2} + H^{\text{CW}}_{\zeta_1,\psi_2} \\
0 &= \frac{v_1^2}{v_2^2} H^{\text{CW}}_{\zeta_1,\psi_1} + H^{\text{CW}}_{\zeta_2,\psi_2} \;.
\end{align}
Note that $\delta T_{\text{CB}}$ related to the charge breaking VEV turns out to
be zero as we do not have a charge-breaking vacuum. \newline \vspace*{-3.5mm}
The C2HDM is parametrized by nine independent parameters. In a
physics-inspired basis the masses are part of the input; in the 'parametric'
basis used in the code, the input parameters, in addition to the SM
VEV hard-coded in the program, are, in the order required by the
program,
\begin{eqnarray}
\mbox{type} \;, \; \lambda_1 \;, \;\lambda_2 \;, \;\lambda_3 \;, \;\lambda_4 \;, \;
\mbox{Re} (\lambda_5) \;, \;\mbox{Im} (\lambda_5) \;, \;\mbox{Re}
(m_{12}^2) \;, \; \tan\beta \;.
\end{eqnarray}
\noindent
By setting $\mbox{type}=1,2,3$ or 4, the user chooses the C2HDM type.
The parameters $m_{11}^2,m_{22}^2$ and $\mbox{Im}(m_{12}^2)$ are
obtained from the minimum conditions
\begin{align}
m_{11}^2 &= \mbox{Re} (m_{12}^2) \frac{v_2}{v_1} -\frac{v_1^2}{2}\lambda_1
- \frac{v_2^2}{2} \left(\lambda_3+\lambda_4+\mbox{Re} (\lambda_5) \right) \\
m_{22}^2 &= \mbox{Re} (m_{12}^2) \frac{v_1}{v_2} - \frac{v_2^2}{2}\lambda_2
- \frac{v_1^2}{2} \left(\lambda_3+\lambda_4+\mbox{Re} (\lambda_5) \right) \\
\mbox{Im} (m_{12}^2) &= \frac{v_1v_2}{2} \mbox{Im} (\lambda_5) \,.
\end{align}
\subsection{The N2HDM}
The N2HDM is built from the CP-conserving 2HDM with a softly broken
$\mathbb{Z}_2$ symmetry upon extension by a singlet field
$\Phi_S$. If the latter does not acquire a VEV, we have a dark matter candidate
\cite{He:2008qm}. Here, we let the singlet field have a non-vanishing VEV. (For
the phenomenology of the N2HDM with a singlet VEV, see \cite{Chen:2013jvg} with
and \cite{phenocomp, Muhlleitner:2016mzt} without
approximations. The NLO electroweak-corrected
N2HDM, and in particular its renormalization, has been presented in
\cite{Krause:2017mal}.) The tree-level potential of the N2HDM is given
by
\begin{eqnarray}
V^{(0)} &=& m_{11}^2 \Phi_1^\dagger \Phi_1 + m_{22}^2\Phi_2^\dagger \Phi_2 -
m_{12}^2\left(\Phi_1^\dagger\Phi_2 +\Phi_2^\dagger \Phi_1\right) +
\frac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^ 2 +
\frac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2 \nonumber \\
&&+\lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
+\lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
+ \frac{\lambda_5}{2} \left[\left(\Phi_1^\dagger\Phi_2\right)^2 +
\left(\Phi_2^\dagger\Phi_1\right)^2\right] \nonumber \\
&&+ \frac{1}{2} m_S^2 \Phi_S^2 + \frac{\lambda_6}{8} \Phi_S^4 +
\frac{\lambda_7}{2} \left(\Phi_1^\dagger\Phi_1\right)\Phi_S^2 +
\frac{\lambda_8}{2} \left(\Phi_2^\dagger\Phi_2\right)\Phi_S^2 \;,
\end{eqnarray}
where the first two lines describe the 2HDM part of the N2HDM and the
last line is the contribution of the singlet field $\Phi_S$. The
potential obeys two $\mathbb{Z}_2$ symmetries. The first one, named
$\mathbb{Z}_2$, is the trivial generalization of the usual 2HDM
$\mathbb{Z}_2$ symmetry to the N2HDM,
\begin{eqnarray}
\Phi_1 \to \Phi_1 \;, \quad \Phi_2 \to - \Phi_2 \;, \quad \Phi_S \to
\Phi_S \;,
\end{eqnarray}
and is softly broken by the term proportional to $m_{12}^2$. Its
extension to the Yukawa sector ensures the absence of FCNC and implies
different types of N2HDM that are the same as in the 2HDM, summarized
in Table \ref{tab:chargassign}. The second one, named
$\mathbb{Z}_2^\prime$, is given by
\begin{eqnarray}
\Phi_1 \to \Phi_1 \;, \quad \Phi_2 \to \Phi_2 \;, \quad \Phi_S \to
-\Phi_S \;,
\end{eqnarray}
and is not explicitly broken. For a non-vanishing VEV of $\Phi_S$ as
allowed here, there is mixing among all CP-even neutral scalars. This
is also the case if $m_{12}^2=0$, which will not be considered here,
however. After EWSB, the doublets and the singlet field acquire VEVs
about which they can be expanded as (allowing for the most general
vacuum configuration with CP- and (unphysical) CB-violating VEVs),
\begin{eqnarray}
\Phi_1 &=& \frac{1}{\sqrt{2}} \begin{pmatrix}
\rho_1 + \mathrm{i} \eta_1 \\ \zeta_1 + \bar{\omega}_1 + \mathrm{i} \psi_1
\end{pmatrix} \\
\Phi_2 &=& \frac{1}{\sqrt{2}} \begin{pmatrix}
\rho_2 + \bar{\omega}_{\text{CB}} + \mathrm{i} \eta_2 \\ \zeta_2 + \bar{\omega}_2 + \mathrm{i} \left(\psi_2 + \bar{\omega}_{\text{CP}} \right)
\end{pmatrix} \\
\Phi_S &=& \bar{\omega}_S + \rho_S \;.
\end{eqnarray}
The diagonalization of the mass matrix of the neutral scalar fields,
obtained after EWSB from the second derivative of the potential with respect to
these fields, leads to three neutral physical Higgs states, $H_{1}$,
$H_2$ and $H_3$ that are ordered by ascending mass, {\it
i.e.}~$m_{H_1} \le m_{H_2} \le m_{H_3}$. The CP-odd and the charged
sector do not change with respect to the real 2HDM, and we have a
pseudoscalar Higgs $A$ and two charged Higgs states $H^\pm$. The N2HDM
counterterm potential reads
\begin{eqnarray}
V^{\text{CT}} &=& \delta m_{11}^2 \Phi_1^\dagger \Phi_1 + \delta
m_{22}^2\Phi_2^\dagger \Phi_2 - \delta
m_{12}^2\left(\Phi_1^\dagger\Phi_2 +\Phi_2^\dagger \Phi_1\right) +
\frac{\delta\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^ 2 +
\frac{\delta\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2 \nonumber \\
&&+\delta\lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
+\delta\lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
+ \frac{\delta\lambda_5}{2} \left[\left(\Phi_1^\dagger\Phi_2\right)^2 +
\left(\Phi_2^\dagger\Phi_1\right)^2\right] \nonumber \\
&&+ \frac{1}{2} \delta m_S^2 \Phi_S^2 + \frac{\delta \lambda_6}{8} \Phi_S^4 +
\frac{\delta \lambda_7}{2} \left(\Phi_1^\dagger\Phi_1\right)\Phi_S^2 +
\frac{\delta \lambda_8}{2} \left(\Phi_2^\dagger\Phi_2\right)\Phi_S^2 \nonumber \\
&&+ \delta T_1 (\zeta_1 + \omega_1) + \delta T_2 (\zeta_2 + \omega_2) + \delta T_{\text{CP}} (\psi_2 + \omega_{\text{CP}}) \nonumber \\
&& + \delta T_{\text{CB}} (\rho_2 + \omega_{\text{CB}}) + \delta T_S ( \rho_S + \omega_S ) \;.
\end{eqnarray}
Using
\begin{eqnarray}
\phi_i \equiv \{ \rho_1, \eta_1, \rho_2, \eta_2, \zeta_1, \psi_1,
\zeta_2, \psi_2, \rho_S \}
\end{eqnarray}
the 'on-shell' renormalization conditions yield
\begin{eqnarray}
\left.\partial_{\phi_i} V^{\text{CT}}\right|_{\phi =\langle \phi^c\rangle_{T=0}} &=&
-\left.\partial_{\phi_i} V^{\text{CW}}\right|_{\phi =\langle \phi^c \rangle_{T=0}} \\
\left.\partial_{\phi_i}\partial_{\phi_j} V^{\text{CT}}
\right|_{\phi=\langle \phi^c \rangle_{T=0}} &=& -\left.\partial_{\phi_i}
\partial_{\phi_j} V^{\text{CW}} \right|_{\phi=\langle\phi^c \rangle_{T=0}} \,,
\end{eqnarray}
with
\begin{eqnarray}
\langle \phi^c \rangle_{T=0} = (0,0,0,0,v_1,0,v_2,0,v_S) \;,
\end{eqnarray}
and lead to the counterterms
\begin{align}
\delta m_{11}^2 =& \frac{1}{2} \left[ \frac{v_s}{v_1} H^{\text{CW}}_{\rho_1,\rho_S} + \frac{v_2}{v_1}
\left(H^{\text{CW}}_{\rho_1,\rho_2} - H^{\text{CW}}_{\psi_1,\psi_2} \right) + 2H^{\text{CW}}_{\psi_1,\psi_1} -
5H^{\text{CW}}_{\psi_1,\psi_1} + H^{\text{CW}}_{\rho_1,\rho_1} \right] + t_H v_2^2 \\
\delta m_{22}^2 =& \frac{1}{2} \left[ \frac{v_s}{v_2} H^{\text{CW}}_{\rho_2,\rho_S} +
H^{\text{CW}}_{\rho_2,\rho_2} - 3H^{\text{CW}}_{\psi_2,\psi_2} + \frac{v_1}{v_2} \left(
H^{\text{CW}}_{\rho_1,\rho_2} - H^{\text{CW}}_{\psi_1,\psi_2} \right) + 5\frac{v_1^2}{v_2^2}
\left(H^{\text{CW}}_{\psi_1,\psi_1}-H^{\text{CW}}_{\psi_1,\psi_1}\right) \right] + t_H v_1^2 \\
\delta m_{12}^2 &= H^{\text{CW}}_{\psi_1,\psi_2} + \frac{v_1}{v_2}\left(H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\psi_1,\psi_1} \right) + t_H v_1v_2 \\
\delta \lambda_1 &= \frac{1}{v_1^2} \left( 2H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\psi_1,\psi_1} -
H^{\text{CW}}_{\rho_1,\rho_1}\right) - t_H \frac{v_2^2}{v_1^2} \\
\delta \lambda_2 &= \frac{1}{v_2^2}\left(H^{\text{CW}}_{\psi_2,\psi_2}-H^{\text{CW}}_{\rho_2,\rho_2}\right)
+ 2\frac{v_1^2}{v_2^4} \left(H^{\text{CW}}_{\psi_1,\psi_1}-H^{\text{CW}}_{\psi_1,\psi_1}\right)
- t_H \frac{v_1^2}{v_2^2} \\
\delta \lambda_3 &= \frac{1}{v_2^2} \left( H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\psi_1,\psi_1} \right)
+ \frac{1}{v_1v_2} \left( H^{\text{CW}}_{\psi_1,\psi_2} - H^{\text{CW}}_{\rho_1,\rho_2} \right) -t_H \\
\delta \lambda_4 &= t_H \\
\delta \lambda_5 &= \frac{2}{v_2^2} \left(H^{\text{CW}}_{\psi_1,\psi_1} - 2H^{\text{CW}}_{\psi_1,\psi_1}
\right) + t_H \\
\delta m_S^2 &= \frac{1}{2} \left( H^{\text{CW}}_{\rho_S,\rho_S} + \frac{v_2}{v_s} H^{\text{CW}}_{\rho_2,\rho_S} +
\frac{v_1}{v_s} H^{\text{CW}}_{\rho_1,\rho_S} - \frac{3}{v_S} N^{\text{CW}}_{\rho_S} \right) - t_S \frac{3}{2v_s} \\
\delta \lambda_6 &= \frac{1}{v_s^3} \left(N^{\text{CW}}_{\rho_S} - v_s H^{\text{CW}}_{\rho_S,\rho_S} \right)
- t_S \frac{1}{v_s^3}\\
\delta \lambda_7 &= - \frac{1}{v_sv_1} H^{\text{CW}}_{\rho_1,\rho_S} \\
\delta \lambda_8 &= - \frac{1}{v_sv_2} H^{\text{CW}}_{\rho_2,\rho_S} \\
\delta T_1 &= H^{\text{CW}}_{\psi_1,\psi_1}v_1 + H^{\text{CW}}_{\psi_1,\psi_2} v_2 - N^{\text{CW}}_{\rho_1} \\
\delta T_2 &= \frac{v_1^2}{v_2} \left(H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\psi_1,\psi_1}\right)
+ H^{\text{CW}}_{\psi_1,\psi_2} v_1 + H^{\text{CW}}_{\psi_2,\psi_2} v_2 - N^{\text{CW}}_{\zeta_2} \\
\delta T_S &= t_S \\
\delta T_{\text{CP}} &= \frac{v_1^2}{v_2} H^{\text{CW}}_{\rho_1,\psi_1} + H^{\text{CW}}_{\rho_1,\psi_2} v_1
- N^{\text{CW}}_{\psi_2} \\
\delta T_{\text{CB}} &= -N^{\text{CW}}_{\zeta_2} \;,
\end{align}
where we used the abbreviations Eqs.~(\ref{eq:nabbr}) and (\ref{eq:habbr}).
The overconstrained system of equations leads to a two-dimensional
solution space parametrized by $t_H, t_S \in \mathbb{R}$ that we set
in the code to
\begin{eqnarray}
t_H = t_S = 0 \;.
\end{eqnarray}
The identities to be applied to solve the system of equations are the
same as in the R2HDM and given by
\begin{align}
0 &= H^{\text{CW}}_{\eta_1,\eta_1} - H^{\text{CW}}_{\rho_1,\rho_1} \\
0 &= H^{\text{CW}}_{\eta_1,\eta_2} - H^{\text{CW}}_{\rho_1,\rho_2} \\
0 &=H^{\text{CW}}_{\eta_2,\eta_2} - H^{\text{CW}}_{\rho_2,\rho_2} \\
0 &= H^{\text{CW}}_{\psi_1,\psi_2} - H^{\text{CW}}_{\eta_1,\eta_2} + \frac{v_1}{v_2}
\left( H^{\text{CW}}_{\psi_1,\psi_1} - H^{\text{CW}}_{\rho_1,\rho_1} \right)
\\
0 &= H^{\text{CW}}_{\psi_2,\psi_2} - H^{\text{CW}}_{\eta_2,\eta_2} +
\frac{v_1^2}{v_2^2} \left( H^{\text{CW}}_{\rho_1,\rho_1} -
H^{\text{CW}}_{\psi_1,\psi_1}\right) \,.
\end{align}
As a charge-breaking vacuum is unphysical, $\delta
T_{\text{CB}}$ always turns out to be zero as it should.
The program code requires (in addition to the SM VEV that is
hard-coded) the 'parametric' input parameters for the
N2HDM to be given in the order
\begin{eqnarray}
\mbox{type} \; , \; \lambda_1 \; , \; \lambda_2 \; , \; \lambda_3 \; , \; \lambda_4 \; ,
\; \lambda_5 \; , \; \lambda_6 \; , \;
\lambda_7 \; , \; \lambda_8 \; , \; v_S \; , \; \tan\beta \; ,
\; m_{12}^2 \;.
\end{eqnarray}
In the first entry, the user has to specify the N2HDM type.
Note that the minimum conditions lead to the
following relations among the parameters
\begin{eqnarray}
\frac{v_2}{v_1} m_{12}^2 - m_{11}^2 &=& \frac{1}{2} (v_1^2 \lambda_1 +
v_2^2 \lambda_{345} + v_S^2 \lambda_7) \label{eq:n2hdmmin1} \\
\frac{v_1}{v_2} m_{12}^2 - m_{22}^2 &=& \frac{1}{2} (v_1^2 \lambda_{345} +
v_2^2 \lambda_2 + v_S^2 \lambda_8) \label{eq:n2hdmmin2} \\
- m_S^2 &=& \frac{1}{2} (v_1^2 \lambda_7 + v_2^2 \lambda_8 + v_S^2
\lambda_6) \;, \label{eq:n2hdmmin3}
\end{eqnarray}
with
\begin{eqnarray}
\lambda_{345} \equiv \lambda_3 + \lambda_4 + \lambda_5 \;.
\label{eq:l345}
\end{eqnarray}
\section{Installation \label{sec:install}}
\paragraph*{Download}
The program can be downloaded from \url{https://github.com/phbasler/BSMPT}\ . After extracting the
zip archive in the directory chosen by the user, to which we will from now
on refer as {\tt \$BSMPT}, there will be several subfolders. These are:
\vspace*{0.2cm}
\begin{tabular}{ll}
{\tt docs} & The {\tt docs} folder contains the documentation as html.\\[0.1cm]
{\tt example} & Here we put sample input files as well as the corresponding results
produced \\ & by the different executables (see
below). \\[0.1cm]
{\tt manual} & This subfolder contains a copy of this paper which is
kept up to date \\ & with changes in the code.
Additionally, we include the {\tt changelog} \\ & file
documenting corrected bugs and modifications of the
program. \\[0.1cm]
{\tt sh} & Here we put the script to install the libraries and to create the
makefile. In \\ & addition, we provide the python files
prepareData\_XXX.py (XXX= R2HDM, \\
& C2HDM, N2HDM) that can be used to order the
data sample according to \\
& the input requirements. \\[0.1cm]
{\tt src} & This subfolder contains the source files of the code and
is structured in three
\\ & subfolders.
\end{tabular}
\noindent
The subfolders of {\tt src} contain the following files:
\vspace*{0.2cm}
\begin{tabular}{ll}
\hspace*{-0.32cm} {\tt src/minimizer} & Here the source files for the
minimization routines are stored. \\
\hspace*{-0.32cm} {\tt src/models} & This directory contains the implemented
models. \\ & If a new model is added it must
be placed in this folder. \\ & There is also a template
class with instructions on how \\ & to add a new
model. Furthermore, there is the file \\ & {\tt SMparam.h}
with the Standard Model parameters. \\
\hspace*{-0.32cm} {\tt src/prog} & This directory contains the source code for
the executables.
\end{tabular}
\paragraph*{Required libraries}
For {\tt BSMPT} to work the following three libraries are needed:
\begin{itemize}
\item[$\ast$] The GNU Scientific Library ({\tt GSL}) \cite{GSL_Manual} is
assumed to be installed in PATH. {\tt GSL} is
required for the calculation of the Riemann-$\zeta$ functions, the
double factorial and for the minimization.
\item[$\ast$] The {\tt Eigen3} library \cite{eigenweb} is downloaded
during the installation process of {\tt BSMPT}. {\tt Eigen3} is used for all
the matrix calculations.
\item[$\ast$] The {\tt libcmaes} library \cite{CMAES} is required
for the minimization and is installed during the installation process.
\end{itemize}
\paragraph*{Compilation}
The compilation requires a {\tt C++} and a {\tt C} compiler
that support the {\tt C++11} standard. For the {\tt C++} compiler we
recommend {\tt g++-7}\footnote{Although
earlier compiler versions can also be used, we strongly recommend
using {\tt g++-7} as it significantly reduces the computation time.} and
for the {\tt C} compiler we recommend {\tt gcc-7}.
After that, the following steps have to be performed:
\begin{enumerate}
\item Go to the folder {\tt \$BSMPT/sh} and call \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
./InstallLibraries.sh {-}\dash lib=PathToYourLib{} {-}\dash CXX=C++Compiler {-}\dash CC=CCompiler
\end{kasten*} \\
where 'PathToYourLib' is the absolute path in which
{\tt Eigen3} and {\tt libcmaes} will be installed by this script.
\item To generate the Makefile in the {\tt \$BSMPT}\, folder call \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
./autogen.sh {-}\dash lib=PathToYourLib{} {-}\dash CXX=C++Compiler
\end{kasten*}
\item Go back to {\tt \$BSMPT}\, and call \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
make
\end{kasten*} \\
which will generate the executables {\tt BSMPT}, {\tt
CalcCT}, {\tt NLOVEV}, {\tt TripleHiggsNLO} and {\tt VEVEVO}.
\end{enumerate}
After that go to the folder {\tt PathToYourLib/libcmaes} and check if there
is either the folder {\tt lib} or {\tt lib64}. Then \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
export LD\_LIBRARY\_PATH=\$LD\_LIBRARY\_PATH:PathToYourLib/libcmaes/LIB
\end{kasten*} \\
has to be executed, where 'LIB' is either {\tt lib64} or {\tt lib}, depending
on which folder exists in {\tt PathToYourLib/libcmaes}. This line can
also be added to the {\tt .bashrc} so that it is loaded
automatically with every new terminal that is opened.
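To summarize the installation, an exemplary complete sequence of commands reads
(the library path {\tt /home/user/BSMPT-libs} and the compiler choices are
purely illustrative and have to be adapted by the user; {\tt \$BSMPT} again
denotes the directory into which the archive was extracted) \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
cd \$BSMPT/sh \\
./InstallLibraries.sh {-}\dash lib=/home/user/BSMPT-libs {-}\dash CXX=g++-7 {-}\dash CC=gcc-7 \\
./autogen.sh {-}\dash lib=/home/user/BSMPT-libs {-}\dash CXX=g++-7 \\
cd \$BSMPT \\
make \\
export LD\_LIBRARY\_PATH=\$LD\_LIBRARY\_PATH:/home/user/BSMPT-libs/libcmaes/lib
\end{kasten*} \\
where the last line assumes that {\tt libcmaes} created the folder {\tt lib}
rather than {\tt lib64}.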
\section{Executables \label{sec:executables}}
In this section we will briefly describe the executables that are
generated by the makefile. We begin with the definition of the input
parameters that are used by all executables:
\begin{itemize}
\item {\it Model} is the parameter by which the model is selected. The
CP-violating 2HDM (0), the
CP-conserving 2HDM (1) and the
CP-conserving N2HDM (2), as introduced in
Section~\ref{sec:implmodels}, are already implemented.
\item {\it Inputfile} sets the path and the name of the input file.
In the input file, the programs expect the first line
to be a header with the column names. Every
following line then corresponds to the input of one particular
parameter point. The parameters are required to be those of
the Lagrangian in the interaction basis. If a different format
for the input parameters is desired one needs to adapt the function
{\tt ReadAndSet} in the corresponding model file in
{\tt \$BSMPT/src/models}. For the format of the input files of the
already implemented models, we refer to the corresponding
subsections in Sec.~\ref{sec:implmodels}. Note that the program
expects the input parameters to be separated by a tabulator. In the
folder {\tt \$BSMPT/sh/}, we provide {\tt python} scripts that prepare
the data accordingly.
\item {\it Outputfile} sets the path and the name of
the generated output file. Note that the
program does not create new folders, so the user has to make sure that the
folder for the output file already exists.
\end{itemize}
If the Parwani method,
Eqs.~(\ref{eq:Parwani1}) and (\ref{eq:Parwani2}), should be used in the thermal
corrections instead of the Arnold-Espinosa method, Eqs.~(\ref{eq:AE1}) and (\ref{eq:AE2}), the variable
'C\_UseParwani' in line 132 of the file
{\tt \$BSMPT/src/models/ClassPotentialOrigin.h} has to be changed to
'true'. Afterwards, in {\tt \$BSMPT}\, the commands
'make clean' and subsequently 'make' have to be executed.
\subsection{BSMPT}
{\tt BSMPT} is the executable of the main program. It
calculates the EWPT for the parameter point(s) given in the input
file. It is executed through the command line \\[0.1cm]
\begin{kasten*}{BSMPT}
./bin/BSMPT Model Inputfile Outputfile LineStart LineEnd
\end{kasten*} \\
The user has to specify the model, the name and path of the input file
and the name and path of the output file through {\it Model}, {\it Inputfile} and {\it
Outputfile}, respectively. {\it LineStart} and {\it LineEnd} specify the
line numbers in the
input file at which the set of parameter points, for which the program
performs the calculations, starts and ends. Each line
corresponds to one parameter point. Note that the
first line of your data (the line with the legend) has the number
1. The code reads in a line from the input file, calculates the EWPT for
this parameter point and then writes out the line in the output file,
{\it i.e.} the information on the parameter point, and appends the
results of the calculations. These are $v_c$, $T_c$, $v_c/T_c$ and
the individual VEVs at $T_c$, {\it i.e.}, $\bar{\omega}_k (T_c)$
($k=1,...,n_v$). It also extends the legend from the input file by adding the entries
for the output. \newline \vspace*{-3.5mm}
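As an illustration, assuming the C2HDM parameter points to be processed are
stored in lines 2 to 11 of a tab-separated input file
{\tt example/C2HDM\_Input.dat} (the file names are only meant as an example),
the corresponding call reads \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
./bin/BSMPT 0 example/C2HDM\_Input.dat example/C2HDM\_Output.dat 2 11
\end{kasten*} \\
where '0' selects the C2HDM. \newline \vspace*{-3.5mm}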
Only results for those
points are written out for which $v_c/T_c > 1$. If the
check should not be against 1 but against a different value, the
constant 'C\_PT' in line 154
of the file \\ {\tt \$BSMPT/src/models/ClassPotentialOrigin.h} has to be
changed to the desired value. Afterwards, in {\tt \$BSMPT}\, the commands
'make clean' and subsequently 'make' have to be executed.
\subsection{CalcCT}
{\tt CalcCT} is the executable for the calculation of the
counterterms for a given parameter point. It is executed through the
command line \\[0.1cm]
\begin{kasten*}{CalcCT}
./bin/CalcCT Model Inputfile Outputfile LineStart LineEnd
\end{kasten*}\\
in which the user first has to specify the model, the name and path of the
input file and the name and path of the output file through {\it
Model}, {\it Inputfile} and {\it Outputfile},
respectively. Furthermore, the line numbers of
the start and end parameter point have to be specified. For each line,
{\it i.e.}~each parameter point, the various counterterms of the model
are calculated. They are written out in the output file which contains
a copy of the parameter point and appended to it in the same line the
results for the counterterms. The first line of the output file
contains the legend describing the entries of the various columns.
\subsection{NLOVeV}
{\tt NLOVeV} is the executable calculating the
global minimum of the loop-corrected effective potential at $T=0$~GeV
for every point between the lines {\it LineStart} and {\it LineEnd} to
be specified in the command line for the execution of the program: \\[0.1cm]
\begin{kasten*}{NLOVeV}
./bin/NLOVeV Model Inputfile Outputfile LineStart LineEnd
\end{kasten*}\\
The model, the name and path of the input file and the name and path
of the output file are set through {\it Model}, {\it Inputfile} and {\it
Outputfile}, respectively. The output file contains the information
on the parameter point to which the computed values at zero
temperature of the NLO VEVs (in GeV) are appended in the same line, namely $v(T=0)$
and the individual VEVs $\bar{\omega}_k (T=0) \equiv v_k$
($k=1,...,n_v$). The first line of the output file again details the
entries of the various columns. Note that it can happen that
the global minimum $v(T=0)$ obtained from the NLO effective potential
is no longer equal to $v=246.22$~GeV. By also writing out
$v(T=0)$, the user can check this phenomenological constraint.
\subsection{TripleHiggsCouplingsNLO}
{\tt TripleHiggsCouplingsNLO} is the executable of the program that
calculates the triple Higgs couplings, derived from the third
derivative of the potential with respect to the Higgs fields, for every point between
the lines {\it LineStart} and {\it LineEnd} to be specified in the
command line: \\[0.1cm]
\begin{kasten*}{TripleHiggsCouplingsNLO}
./bin/TripleHiggsNLO Model Inputfile Outputfile LineStart LineEnd
\end{kasten*}\\
The model, the name and path of the input file and the name and path
of the output file are set through {\it Model}, {\it Inputfile} and {\it
Outputfile}, respectively. The output file contains the trilinear
Higgs self-couplings derived from the tree-level potential, the
counterterm potential and the Coleman-Weinberg potential at $T=0$ for
all possible Higgs field combinations. The total NLO trilinear Higgs self-couplings are
then given by the sum of these three contributions. The first line
of the output file describes the entries of the various columns.
\subsection{VEVEVO}
{\tt VEVEVO} is the executable of the program that calculates the
temperature evolution of the VEVs for a given parameter point. It is
performed through the command line \\[0.1cm]
\begin{kasten*}{VEVEVO}
./bin/VEVEVO Model Inputfile Outputfile Line Tempstart Tempstep Tempend
\end{kasten*} \\
Again, the model, the name and path of the input file and the name and
path of the output file have to be specified through {\it Model}, {\it Inputfile} and {\it
Outputfile}, respectively. Furthermore,
\begin{itemize}
\item {\it Line} is the line number of the parameter point for which
the evolution shall be calculated.
\item {\it Tempstart} is the starting value of the temperature in GeV.
\item {\it Tempstep} is the step size of the temperature evolution for
which the VEVs are to be calculated.
\item {\it Tempend} is the end value of the temperature interval, in
which the potential should be minimized.
\end{itemize}
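For illustration, with the same exemplary C2HDM input file as above, the
temperature evolution of the parameter point in line 2 could be computed
between $0$ and $300$~GeV in steps of $5$~GeV via \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
./bin/VEVEVO 0 example/C2HDM\_Input.dat example/C2HDM\_vevevo.dat 2 0 5 300
\end{kasten*} \\
where the file names and temperature values are again only meant as an example.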
The output file contains the data for $T$ and the corresponding values of
$v$ and of the individual VEVs, {\it i.e.}~$\bar{\omega}_k (T)$
($k=1,...,n_v$). The first line of the output file is devoted to the
legend that specifies the entries of the various columns.
Note that the program does not check whether the individual VEVs at the various
temperatures are positive or not, but just writes out the results of the
numerical minimizer; therefore, the signs of the individual VEVs can
flip. \newline \vspace*{-3.5mm}
An example for the temperature evolution of a specific parameter point in the C2HDM,
described in section \ref{sec:C2HDM}, is depicted in
Fig.~\ref{fig:PlotGen}. The parameter point is given by the input
values
\begin{eqnarray}
\begin{array}{lcllcl}
\mbox{type} & = & 1 & \; \tan\beta & = & 6.94743 \\
\lambda_1 & = & 1.2248193823 & \;
\lambda_2 & = & 0.299419454432 \\
\lambda_3 & = & -0.514319430337 & \; \lambda_4 & = & 4.07718269395
\\
\mbox{Re} (\lambda_5) & = & -3.84704455054 & \; \mbox{Im} (\lambda_5) & = & -1.0875150879
\\
\mbox{Re} (m_{12}^2) & = & 8044.09 \,\mathrm{GeV}^2 \;.
\end{array} \label{eq:c2hdmex}
\end{eqnarray}
This implies the Higgs boson masses
\begin{eqnarray}
\begin{array}{lcllcl}
m_{H_1} &=& 125.09 \,\mathrm{GeV} & \;
m_{H_2} &=& 236.989 \,\mathrm{GeV} \\
m_{H_3} &=& 542.946 \,\mathrm{GeV} & \;
m_{H^\pm} &=& 223.758 \,\mathrm{GeV} \;.
\end{array}
\end{eqnarray}
For the critical temperature $T_c$, the VEV $v_c$ at $T_c$ and $\xi_c$
we find for this parameter point
\begin{eqnarray}
T_c &= 138.913\,\mathrm{GeV} \, , \; v_c = 139.274 \,\mathrm{GeV} \, ,
\; \xi_c = 1.0026 \;.
\end{eqnarray}
The individual doublet VEVs $\bar{\omega}_1$ and $\bar{\omega}_2$ and the CP- and
charge-breaking VEVs $\bar{\omega}_{\text{CP}}$ and
$\bar{\omega}_{\text{CB}}$ at $T_c$ are
\begin{eqnarray}
\begin{array}{lcllcl}
\bar{\omega}_1(T_c) &=& 16.9487\,\mathrm{GeV} & \;
\bar{\omega}_2(T_c) &=& 135.556\,\mathrm{GeV} \\
\bar{\omega}_{\text{CP}}(T_c) &=& 27.1021 \,\mathrm{GeV} & \;
\bar{\omega}_{\text{CB}}(T_c) &=& 0 \,\mathrm{GeV} \;.
\end{array}
\end{eqnarray}
We observe in Fig.~\ref{fig:PlotGen} (a) the jump from the symmetric
phase to a non-zero VEV with $v_c = 139.274$~GeV at $T_c=138.913$~GeV,
corresponding to a strong first-order EWPT with $\xi_c$ just above 1,
$\xi_c=1.0026$. For the chosen parameter point with $\tan\beta \approx
7$, the non-zero doublet VEV $\bar{\omega}_2$ is much larger than
$\bar{\omega}_1$, {\it
cf.}~Fig.~\ref{fig:PlotGen} (b). Their quadratic sum approaches $v(T=0)=
\sqrt{\bar{\omega}_1^2+ \bar{\omega}_2^2} = 246.22$~GeV at zero
temperature. As can be inferred from
Fig.~\ref{fig:PlotGen} (c) and (d), at $T_c$, a
CP-violating VEV $\bar{\omega}_{\text{CP}} \ne 0$ is generated
spontaneously at the EWPT. The non-physical charge-breaking VEV
$\bar{\omega}_{\text{CB}}$ on
the other hand remains zero throughout the whole scanned temperature
interval, as it should, {\it cf}~Fig.~\ref{fig:PlotGen} (d).
\begin{figure}[t]
\centering
\subfigure[Evolution of $v$.]{\includegraphics[scale=0.5]{PlotVev_v.pdf}} \qquad
\subfigure[Evolution of the doublet VEVs
$\bar{\omega}_k$
($k=1,2$).]{\includegraphics[scale=0.5]{PlotVev_v12.pdf}} \\
\subfigure[Evolution of the CP-violating phase $\tan(\varphi) =
\bar{\omega}_{\text{CP}}/\bar{\omega}_2$ of the second
doublet.]{\includegraphics[scale=0.5]{PlotVev_vphase.pdf}} \qquad
\subfigure[Evolution of the CP-violating VEV
$\bar{\omega}_{\text{CP}}$ and the charge-breaking VEV
$\bar{\omega}_{\text{CB}}$.]{\includegraphics[scale=0.5]{PlotVev_vCB.pdf}}
\caption{The temperature evolution obtained by {\tt VEVEVO} for the
C2HDM parameter point Eq.~(\ref{eq:c2hdmex}).}
\label{fig:PlotGen}
\end{figure}
\section{How to add a New Model \label{sec:newmodels}}
In this section we describe how a new model can be added to the
program. To illustrate this, we have generated the template class
{\tt ClassTemplate.cpp}, located in the directory {\tt
BSMPT/src/models/}, in which the functions (according to the given comments)
have to be edited. The functions to be modified are
\begin{itemize}
\item[] {\tt Class\_Template}
\item[] {\tt ReadAndSet}
\item[] {\tt addLegendCT}
\item[] {\tt addLegendTemp}
\item[] {\tt addLegendTripleCouplings}
\item[] {\tt addLegendVEV}
\item[] {\tt set\_gen}
\item[] {\tt set\_CT\_Pot\_Par}
\item[] {\tt write}
\item[] {\tt TripleHiggsCouplings}
\item[] {\tt calc\_CT}
\item[] {\tt MinimizeOrderVEV}
\item[] {\tt SetCurvatureArrays}
\item[] {\tt CalculateDebyeSimplified}
\item[] {\tt VTreeSimplified}
\item[] {\tt VCounterSimplified}
\item[] {\tt Debugging}\footnote{In fact,
the function {\tt Debugging} is not used by any of the programs and
is provided only for the user to perform some checks.}
\end{itemize}
Furthermore, the constant of the new
model with which it is selected by the program (through {\it
Model}) has to be defined in the file {\tt
IncludeAllModels.h}. After doing so, in the file \\ {\tt
IncludeAllModels.cpp} the corresponding entry in the
function {\tt Fchoose} has to be added, and the file
needs to be extended to include the new model.
Additionally, in {\tt ClassTemplate.h} the parameters of the model
have to be declared. All these files are also located in {\tt BSMPT/src/models/}.
\subsection{Example}
As example we take a model with one scalar particle $\phi$ which
develops a VEV $v$, couples to one fermion $t$ with the Yukawa
coupling $y_t$, and to one gauge boson $A$ with the gauge coupling
$g$. The relevant pieces of the Lagrangian are given by ($\Phi = \phi
+ v$)
\begin{align}
-\mathcal{L}_S &= \frac{m^2}{2} \left(\phi+v\right)^2 +
\frac{\lambda}{4!}
\left(\phi+v\right)^4 \label{eq:templatetree} \\
-\mathcal{L}_F &= y_t t_L t_R \left(\phi + v\right) \\
\mathcal{L}_G &= g^2A^2\left(\phi+v\right)^2 \,.
\end{align}
We therefore have $i,j,k,l = 1 , I,J = 1,2, a,b = 1$ for the tensors
defined in Eqs.~(\ref{Eq:LS_Classical}), (\ref{Eq:LF_Classical}) and
(\ref{Eq:LG_Classical}). Here $I,J=1,2$ corresponds to $t_L$ and
$t_R$, the left- and right-handed projections of the fermion $t$.
The tensors are given by
\begin{align}
L^i &= \left.\partial_{v} \left(-\mathcal{L}_S \right)\right|_{\phi=0,v=0} = 0 \\
L^{ij} &= \left.\partial_{v}^2 \left(-\mathcal{L}_S \right)\right|_{\phi=0,v=0} = m^2 \\
L^{ijk} &= \left.\partial_{v}^3 \left(-\mathcal{L}_S \right)\right|_{\phi=0,v=0} = 0 \\
L^{ijkl} &= \left.\partial_{v}^4 \left(-\mathcal{L}_S
\right)\right|_{\phi=0,v=0} = \lambda \\
Y^{IJk} &= \begin{cases} 0 & I = J \; (I,J=t_L,t_R)\\
y_t & I \ne J \; (I,J=t_L,t_R) \end{cases} \\
G^{abij} &= \partial_A^2 \partial_v^2 \left( \mathcal{L}_G \right) =
4g^2 \;.
\end{align}
The counterterm potential, given by Eq.~(\ref{Eq:CTPot}), reads
\begin{align}
V^{\text{CT}} &= \frac{\delta m^2}{2} \left(\phi+v\right)^2 + \frac{\delta
\lambda}{4!} \left(\phi+v\right)^4 + \delta T \left(\phi+
v\right) \;. \label{eq:templatecounterpot}
\end{align}
Application of Eqs.~(\ref{Eq:Renorm1}) and (\ref{Eq:Renorm2}) yields
\begin{align}
\delta T + v \delta m^2 + \frac{1}{6} v^3 \delta \lambda
&= -\left.\partial_{\phi} V^{CW}\right|_{\phi=0} \\
\delta m^2 + \frac{v^2}{2}\delta \lambda &= -\left. \partial^2_{\phi}
V^{CW}\right|_{\phi=0} \;.
\end{align}
The system of equations is overconstrained. Choosing
\begin{eqnarray}
\delta T = t \;, \quad \mbox{with} \quad t \in \mathbb{R} \;,
\label{eq:cttad}
\end{eqnarray}
we get
\begin{eqnarray}
\delta \lambda &=& \frac{3t}{v^3} + \frac{3}{v^3} \left(\left.\partial_{\phi}
V^{\text{CW}}\right|_{\phi=0}\right) - \frac{3}{v^2} \left(\left. \partial^2_{\phi}
V^{\text{CW}}\right|_{\phi=0} \right) \label{eq:ctlambda} \\
\delta m^2 &=& -\frac{3}{2v} \left( \left.\partial_{\phi}
V^{\text{CW}}\right|_{\phi=0}\right) + \frac{1}{2} \left(\left. \partial^2_{\phi}
V^{\text{CW}}\right|_{\phi=0}\right) - \frac{3t}{2v} \;. \label{eq:ctms}
\end{eqnarray}
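For transparency, the explicit solution is obtained by solving the second
renormalization condition for $\delta m^2$ and inserting it, together with
Eq.~(\ref{eq:cttad}), into the first one,
\begin{eqnarray}
\delta m^2 = -\left. \partial^2_{\phi} V^{\text{CW}}\right|_{\phi=0}
- \frac{v^2}{2} \delta \lambda
\quad \Rightarrow \quad
t - v \left. \partial^2_{\phi} V^{\text{CW}}\right|_{\phi=0}
- \frac{v^3}{3} \delta \lambda
= -\left.\partial_{\phi} V^{\text{CW}}\right|_{\phi=0} \;,
\end{eqnarray}
which directly yields Eq.~(\ref{eq:ctlambda}) and, upon re-insertion into the
second condition, Eq.~(\ref{eq:ctms}).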
To implement this model, several files need to be changed, as
described in the following.
\subsubsection{IncludeAllModels.h, IncludeAllModels.cpp,
ClassTemplate.h}
In {\tt IncludeAllModels.h} the constant with which the program
selects the new model has to be set. Please make sure that the new
model number is not already used by an implemented model, as the program
would otherwise not know which model to select. Choosing {\it e.g.}~5 for the template
model, this results in adding the line \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
const int C\_ModelTemplate=5;
\end{kasten*} \\
This model selection then has to be entered in {\tt
IncludeAllModels.cpp} by adding to the function {\tt Fchoose} the
line
\\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
else if(choice == C\_ModelTemplate) \\
\{ \\
return std::unique\_ptr$<$Class\_Potential\_Origin$>$ \{ new Class\_Template \}; \\
\} \\
\end{kasten*} \\
In {\tt IncludeAllModels.cpp} the new model is included by adding the
line \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
\#include ``ClassTemplate.h''
\end{kasten*} \\
In {\tt ClassTemplate.h} the variables for the potential
and for the remaining Higgs coupling parameters as well as for the
counterterm constants have to
be added, \\[0.1cm]
\begin{kasten*}{}
double ms, lambda, dms, dlambda, dT, yt, g;
\end{kasten*} \\
Here 'ms' denotes the mass parameter squared, $m^2$, and 'dms',
'dlambda' are the counterterms $\delta m^2$, $\delta \lambda$.
\subsubsection{ClassTemplate.cpp}
We will not describe here in detail every function in {\tt ClassTemplate.cpp}
that can be modified, as the functions are commented in the code. Instead, we
briefly describe the most essential parts. \newline \vspace*{-3.5mm}
\paragraph*{Class\_Template()}
The numbers of Higgs particles, potential parameters,
counterterms and VEVs have to be specified in the constructor
{\tt Class\_Template}() and the variable 'Model' has to be set to
the selected model. In our simple example, this is \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align}
\text{Model} &= \text{C\_ModelTemplate}; \\
\text{NNeutralHiggs} &= 1; \\
\text{NChargedHiggs} &= 0; \\
\text{nPar} &= 2; \\
\text{nParCT} &= 3; \\
\text{nVEV} &= 1; \\
\text{NHiggs} &= \text{NNeutralHiggs} + \text{NChargedHiggs};
\end{align}
\vspace*{-0.6cm}
\end{kasten*} \\
When you implement a new model that is not called {\tt Template} but
{\it e.g.}~{\tt NewModel}, please make sure to replace in the corresponding {\tt .h} and
{\tt .cpp} files the name {\tt Class\_Template} by the name of the
newly implemented class. This means that {\tt
Class\_Template} has to be replaced by {\tt Class\_NewModel} wherever it appears.
\paragraph*{ReadAndSet(const std::string\& linestr, std::vector$<$double$>$\& par)}
In this function the input parameters of the model are read into the
vector
'par'. Each line in the input file corresponds to a new parameter
point. The line to be read in is given by the string
'linestr'. Via 'std::stringstream' the parameters of each line are
read into double variables. In our template model the input file would
contain the parameters 'ms' and 'lambda' so that in the
program it would look like this: \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
&\text{std::stringstream ss(linestr);} \\
&\text{double tmp;} \\
&\text{double lms,llambda;}\\
&\text{for(int k=1;k}<=\text{2;k++)} \\
&\text{\{}\\
&\hspace*{2em}\text{ss}>>\text{tmp;}\\
&\hspace*{2em}\text{if(k==1) lms = tmp; }\\
&\hspace*{2em}\text{else if(k==2) llambda = tmp;}\\
&\text{\}}\\
&\text{par[0] = lms;}\\
&\text{par[1] = llambda;}
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}
\paragraph*{set\_gen(const std::vector$<$double$>$\& par)}
Here, the potential parameters as well as the coupling parameters are
set from the vector 'par' read in by the function
{\tt ReadAndSet}. In our sample model, the gauge coupling $g$ is
given by the SM gauge coupling, and the Yukawa coupling $y_t$ is given in
terms of the SM VEV and the top quark mass. The SM gauge coupling, the
SM VEV and the top quark mass are defined in {\tt \$BSMPT/src/models/SMparam.h}.
With the parameters 'ms' and 'lambda' this would then look like:
\\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{ms} &= \text{par}[0]; \\
\text{lambda} &= \text{par}[1];\\
\text{g} &= \text{C\_g}; \\
\text{yt} &= \text{std::sqrt(2)/C\_vev0 * C\_MassTop};
\end{align*}
\vspace*{-0.6cm}
\end{kasten*} \\
More complicated Higgs sectors require
additional parameters. Furthermore, you can set here the potential
parameters that are not read in from the input parameters but are
calculated through the tree-level minimum conditions, such as $m_{11}^2,
m_{22}^2$ and $\mbox{Im}(m_{12}^2)$ in the C2HDM.
This function is also used to define the vectors {\tt vevTree}
and {\tt vevTreeMin}. The former vector refers to the complete field
configuration appearing in the effective potential. The size of the
vector is hence given by $n_{\text{Higgs}}$ ({\it
cf.}~Sec.~\ref{sec:notation}). For the (C)2HDM {\it
e.g.}, we would have $n_{\text{Higgs}}=8$ corresponding to the eight
real fields $\phi_i$ in
Eq.~(\ref{eq:vec2hdm}) (Eq.~(\ref{eq:vecc2hdm})). The vector {\tt
vevTreeMin} corresponds to the VEVs at $T=0$. Its size is given by
the field configurations
that develop a VEV, {\it i.e.}~$n_v$ ({\it
cf.}~Sec.~\ref{sec:counterpot}). This would be $n_v =4$
in the (C)2HDM, corresponding to the four VEVs $v_1,v_2,v_{\text{CP}}$
and $v_{\text{CB}}$. In our
simple template model $n_{\text{Higgs}}$ and $n_v$ coincide, resulting
in two vectors {\tt vevTree} and {\tt vevTreeMin} of dimension 1
each. The value of {\tt vevTreeMin} is given by the SM VEV 'C\_vev0' that is
hard-coded in the program. In our sample model it would look like this: \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
&\text{vevTreeMin.resize(nVEV)} \,;\\
&\text{vevTreeMin}[0]= \text{C\_vev0} \,;\\
&\text{vevTree.resize(NHiggs)}\,;\\
&\text{MinimizeOrderVEV(vevTreeMin,vevTree)}\,;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}
Additionally, the $\overline{\mbox{MS}}$ renormalization scale can be
changed here through the command \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{scale} &= \text{mu};
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}\\
Here 'mu' is the chosen value in GeV for the renormalization
scale. The default value is 'mu = C\_vev0', {\it i.e.}~the EW VEV.
\paragraph*{MinimizeOrderVEV(const std::vector$<$double$>$\&
vevminimizer, \\
std::vector$<$double$>$\& vevFunction)}
Whenever we deal with the Higgs potential in the calculation, the
dimension of the vector describing the fields is $n_{\text{Higgs}}$. Not
all of these fields develop VEVs, however, so that the vector used in
the minimizer only has dimension $n_v$.
The function {\tt MinimizeOrderVEV} is used to convert the resulting
vector from the minimizer to the vector with the $n_{\text{Higgs}}$
entries. In order to do so the field(s) that develop(s) VEV(s) have to
be selected. In the template model we have only one field and it develops a
VEV so that we simply have to set \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{VevOrder[0]} &= 0\,;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*} \\
In a more complex model with {\it e.g.}~two fields where only one of
them develops a VEV, one would have to set '$\text{VevOrder}[0]=0$' if
the field developing the VEV is in the first entry of the vector
describing the fields, and '$\text{VevOrder}[0]=1$' if it is the field
in the second entry.
\paragraph*{SetCurvatureArrays()}
The tensors of the Lagrangian of the new model have to be implemented
in the function {\tt SetCurvatureArrays()}. The
notation is \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{Curvature\_Higgs\_L1}[i] &= L^i \\
\text{Curvature\_Higgs\_L2}[i][j] &= L^{ij} \\
\text{Curvature\_Higgs\_L3}[i][j][k] &= L^{ijk} \\
\text{Curvature\_Higgs\_L4}[i][j][k][l] &= L^{ijkl} \\
\text{Curvature\_Gauge\_G2H2}[a][b][i][j] &= G^{abij} \\
\text{Curvature\_Quark\_F2H1}[I][J][k] &= Y^{IJk}\,.
\end{align*}
\vspace*{-0.6cm}
\end{kasten*} \\
Technically, one could use 'Curvature\_Quark\_F2H1' to store all quarks
and leptons, but since quarks and leptons do not mix, the program provides,
besides 'Curvature\_Quark\_F2H1' in which $I,J$ run over all quarks, also the
structure 'Curvature\_Lepton\_F2H1[I][J][k]' in which $I,J$ run over all
leptons. For our example this would look like \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{Curvature\_Higgs\_L1}[0] &= 0 ;\\
\text{Curvature\_Higgs\_L2}[0][0] &= \text{ms} ;\\
\text{Curvature\_Higgs\_L3}[0][0][0] &= 0 ;\\
\text{Curvature\_Higgs\_L4}[0][0][0][0] &= \text{lambda} ;\\
\text{Curvature\_Gauge\_G2H2}[0][0][0][0] &= \text{4*std::pow(g,2)} ;\\
\text{Curvature\_Quark\_F2H1}[0][0][0] &= 0 ;\\
\text{Curvature\_Quark\_F2H1}[1][0][0] &= \text{yt} ;\\
\text{Curvature\_Quark\_F2H1}[0][1][0] &= \text{yt} ;\\
\text{Curvature\_Quark\_F2H1}[1][1][0] &= 0 ;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}{}
\paragraph*{set\_CT\_Pot\_Par(const std::vector$<$double$>$\& par)}
For the use of the counterterms, the corresponding vectors for the
counterterm potential have to be set. They are named
'Curvature\_Higgs\_CT\_L1', 'Curvature\_Higgs\_CT\_L2',
'Curvature\_Higgs\_CT\_L3' and \\ 'Curvature\_Higgs\_CT\_L4' and
defined analogously to 'Curvature\_Higgs\_L1' to \\ 'Curvature\_Higgs\_L4'.
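For the template model this amounts to setting, in analogy to the tree-level
tensors given above and following the counterterm potential
Eq.~(\ref{eq:templatecounterpot}) (only the non-vanishing entries are shown in
this sketch), \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{Curvature\_Higgs\_CT\_L1}[0] &= \text{dT} ;\\
\text{Curvature\_Higgs\_CT\_L2}[0][0] &= \text{dms} ;\\
\text{Curvature\_Higgs\_CT\_L4}[0][0][0][0] &= \text{dlambda} ;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}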
\paragraph*{calc\_CT( std::vector$<$double$>$\& par)}
The counterterms are computed numerically in the function {\tt
calc\_CT( std::vector$<$double$>$\& par)}. To do so, the user has to implement the
formulae for the counterterms that were derived beforehand
analytically in terms of the derivatives of the Coleman-Weinberg
potential, {\it cf.}~Eqs.~(\ref{eq:cttad}), (\ref{eq:ctlambda}) and
(\ref{eq:ctms}) for our template
model. The derivatives of $V^{\text{CW}}$ are provided by the program
through the function calls {\tt WeinbergFirstDerivative} and {\tt
WeinbergSecondDerivative}. In detail, to calculate the counterterms $\delta m^2$,
$\delta \lambda$ and $\delta T$ of the template model, the following
steps have to be performed:
\begin{itemize}
\item To calculate the first and second derivative of the
Coleman-Weinberg potential call \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
std::vector$\langle$double$\rangle$ WeinbergNabla,WeinbergHesse; \\
WeinbergFirstDerivative(WeinbergNabla); \\
WeinbergSecondDerivative(WeinbergHesse);
\end{kasten*} \\
and to save it in a vector and matrix class use \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.2cm}
VectorXd NablaWeinberg(NHiggs); \\
MatrixXd HesseWeinberg(NHiggs,NHiggs); \\
for(int i=0;i$<$NHiggs;i++) \\
\{ \\
\hspace*{2em} NablaWeinberg[i] = WeinbergNabla[i]; \\
\hspace*{2em} for(int j=0;j$<$NHiggs;j++) \\
\hspace*{2em}\{\\
\hspace*{4em} HesseWeinberg(i,j) = WeinbergHesse.at(j*NHiggs+i); \\
\hspace*{2em} \} \\
\}
\end{kasten*}
\item Implement the previously derived formulae for the
counterterms. In our example, these are
Eqs.~(\ref{eq:cttad}), (\ref{eq:ctlambda}) and (\ref{eq:ctms}),
where we set $t =0$, \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{dT} =& 0 ; \\
\text{dlambda} =& \text{3.0/std::pow(C\_vev0,3) *
\text{NablaWeinberg[0]}} \\ &- \text{
3.0/std::pow(C\_vev0,2) *
\text{HesseWeinberg(0,0)}} ;\\
\text{dms} =& \text{-3.0/(2*C\_vev0) * \text{NablaWeinberg[0]} + 1.0/2.0
*\text{HesseWeinberg(0,0)}} ;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}
\item Insert the parameters in the vector 'par', \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{par}[0] &= \text{dT} ; \\
\text{par}[1] &= \text{dms} ; \\
\text{par}[2] &= \text{dlambda} ;
\end{align*}
\vspace*{-0.6cm}
\end{kasten*}
\item Finally call \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.6cm}
\begin{align*}
\text{set\_CT\_Pot\_Par(par);}
\end{align*}
\vspace*{-0.6cm}
\end{kasten*} \\
so that everything is set correctly.
\end{itemize}
Afterwards, the values for dT, dmS and dlambda are set from the vector
'par' by the function {\tt set\_CT\_Pot\_Par(const std::vector$<$double$>$\& par)}.
\paragraph*{TripleHiggsCouplings()}
This function provides the trilinear loop-corrected
Higgs self-coup\-lings as obtained from the effective potential. They
are calculated from the third derivative of the Higgs potential
with respect to the Higgs fields in the gauge basis and then rotated
to the mass basis. Since the Higgs fields are ordered by mass,
{\it i.e.}~we have ascending indices with ascending mass, and the mass
order can change with each parameter point, this implies that for each
parameter point the indices of the vector
containing the trilinear Higgs coupling would refer to different Higgs
bosons. Therefore, it is necessary to order the Higgs bosons in the
mass basis irrespective of the mass order. This order is defined
through the vector {\tt HiggsOrder(NHiggs)}. \\[0.1cm]
\begin{kasten*}{}
\vspace*{-0.3cm}
for(int i=0;i$<$NHiggs;i++) \\
\{ \\
\hspace*{2em} HiggsOrder[i]= value; \\
\}
\end{kasten*} \\
The number 'value' is defined by the user according
to the ordering that is desired in the mass basis. Thus {\tt
HiggsOrder}$[0]=5$ {\it e.g.}~would assign the 6th lightest particle to the first
position. The particles can be selected through the mixing matrix elements.
\paragraph*{addLegendTripleCouplings()}
All the following functions {\tt addLegend...} extend the legends of
the output files by certain variables. The function {\tt
addLegendTripleCouplings} extends the legend by the column names for
the trilinear Higgs couplings derived from the tree-level, the
counterterm and the Coleman-Weinberg potential. In order to do so,
the user first has to make sure to define the names of the Higgs
particles of the model in the vector 'particles'. In our
model we only have one Higgs
particle that we call $H$ and hence set 'particles[0]="H";'.
\paragraph*{addLegendTemp()}
Here the column names for $T_c$, $v_c$ and the VEVs are added to the
legend. The order should be $T_c$, $v_c$ and then the names of the individual
VEVs. These VEVs have to be added in the same order as given
in the function {\tt MinimizeOrderVEV}.
\paragraph*{addLegendVEV()}
This function adds the column names for the VEVs
that are given out. The order has to be the same as given in the
function {\tt MinimizeOrderVEV}.
\paragraph*{addLegendCT()}
In this function, the legend for the counterterms is added. The order of
the counterterms has to be same as the one set in the function {\tt
set\_CT\_Pot\_Par(par)}.
\paragraph*{VTreeSimplified, VCounterSimplified}
The functions \\ {\tt VTreeSimplified(const std::vector$<$double$>$\&
v)} and \\
{\tt VCounterSimplified(const std::vector$<$double$>$\& v)} can be used to
explicitly implement the formulae for the tree-level and counterterm potential in
terms of the classical fields $\omega$, in our
example these are Eqs.~(\ref{eq:templatetree}) and
Eq.~(\ref{eq:templatecounterpot}), respectively, with
$\phi=0$ and $v \equalhat \omega$. Implementing these
may improve the runtime of the programs. An example is given in the template class.
\paragraph*{CalculateDebyeSimplified(), CalculateDebyeGaugeSimplified()}
The functions \\ {\tt CalculateDebyeSimplified()} and {\tt
CalculateDebyeGaugeSimplified()} can be used to implement
explicit formulae for the daisy corrections to the masses of the scalars,
{\it cf.}~Eq.~(\ref{eq:pis}), and gauge bosons,
Eq.~(\ref{eq:pigauge}), respectively. This is done by setting the vectors
'DebyeHiggs' and 'DebyeGauge' and finishing the function with a return true statement.
\paragraph*{write()}
The function {\tt write()} can be used to give a terminal output of
the potential parameters. For our example this would be \\[0.1cm]
\begin{kasten*}{}
\begin{align*}
\text{std::cout} &<< \text{"The parameters are : " $<<$ std::endl;}\\
\text{std::cout} &<< \text{"lambda = " $<<$ lambda $<<$ std::endl}\\
&<< \text{"\textbackslash tm\^{}2 = " $<<$ ms $<<$ std::endl;}\\
\text{std::cout} &<< \text{"The counterterm parameters are : " $<<$ std::endl;}\\
\text{std::cout} & << \text{ "dT = "$<<$ dT $<<$ std::endl}\\
&<< \text{"dlambda = " $<<$ dlambda $<<$ std::endl}\\
&<< \text{"dm\^{}2 = "$<<$ dms $<<$ std::endl;}\\
\text{std::cout} &<< \text{"The scale is given by mu = " $<<$ scale $<<$ " GeV " $<<$ std::endl;}
\end{align*}
\end{kasten*}
\section{Summary \label{sec:concl}}
We have presented the {\tt C++} package {\tt BSMPT} for the
investigation of electroweak baryogenesis in extended Higgs sectors
beyond the SM. The package calculates the loop-corrected effective
potential at finite temperature including daisy resummations of the
bosonic masses. It can be used for the computation of the VEV as a
function of the temperature and in particular for the determination of
$\xi_c = v_c/T_c$ which is related to the strength of the phase
transition. Furthermore, the loop-corrected trilinear Higgs self-couplings
are given out, allowing the user to investigate the interplay between
successful baryogenesis and the required size of the Higgs
self-interactions. The chosen 'on-shell' renormalization scheme
enables efficient scans in the parameter space of the models and
allows for the analysis of the connection between collider
phenomenology and successful baryogenesis, so that a link between
collider phenomenology and cosmology can be made. The already
implemented models are the CP-conserving and CP-violating 2HDMs and
the N2HDM. The program
structure supports the implementation of new models, and we have
illustrated with the help of a toy model how this can be done. With
our new tool at hand, it is easy to further investigate the possibility
of baryogenesis in new physics models and the possible spontaneous
generation of CP-violating phases, and to make further links between
collider observables and phenomena such as gravitational waves. The
program is constantly updated to include new phenomenologically
interesting models. We are grateful for suggestions.
\section*{Acknowledgements}
The authors thank Jonas M\"uller, Jonas Wittbrodt and Alexander
Wlotzka for many useful discussions and assistance during the
debugging process. They furthermore thank Jonas Wittbrodt for the careful reading of
the manuscript. PB acknowledges financial support by the “Karlsruhe
School of Elementary Particle and Astroparticle Physics: Science and
Technology (KSETA)”.
\vspace*{1cm}
|
{
"timestamp": "2018-03-09T02:00:35",
"yymm": "1803",
"arxiv_id": "1803.02846",
"language": "en",
"url": "https://arxiv.org/abs/1803.02846"
}
|
\section{Introduction}
Network embedding, or network representation learning, is the task of learning latent representation that captures the internal relations of rich and complex network-structured data.
Inspired by the recent success of deep neural networks in computer vision and natural language processing, several recent studies~\cite{perozzi2014deepwalk,tang2015line,dong2017metapath} propose to employ deep neural networks to learn network embeddings. For example, DeepWalk~\citep{perozzi2014deepwalk} adopts Skip-gram~\citep{mikolov2013efficient} to randomly generate walking paths in a network; and LINE~\citep{tang2015line} tries to preserve two orders of proximity for nodes: first-order proximity (local) and second-order proximity (global).
Most existing studies focus on learning the representation of a homogeneous network that consists of a single type of nodes and relationships (links). However, in practice, many networks are heterogeneous~\cite{DBLP:conf/ssdbm/QuLYJ14,dong2017metapath}, i.e., they involve multiple types of nodes and relationships. The methods designed for homogeneous networks can hardly learn the representations of such networks because they cannot distinguish the different types of objects and relationships contained in the networks. Therefore, the learned representations fail to capture the heterogeneity behind the structural information.
To alleviate the aforementioned limitation, we propose a \textbf{G}raph \textbf{P}artition and \textbf{S}pace \textbf{P}rojection based approach (\emph{GPSP}) to learn the representation of a heterogeneous network. First, an edge-based graph partition method is used to partition the heterogeneous network into two types of atomic subnetworks: i) homogeneous networks that contain singular type of nodes and relationships; ii) bipartite networks that contain two types of vertices and one type of relationship. Second, we apply classic network embedding models~\cite{perozzi2014deepwalk,tang2015line} to learn the representations of homogeneous subnetworks. Third, for each bipartite subnetwork, the hidden projective relations are extracted by learning the projective embedding vectors for the related types of nodes. Finally, \textit{GPSP} concatenates the projective node vectors from bipartite subnetworks with the node vectors learned from homogeneous subnetworks to form the final representation of the heterogeneous network.
The main contribution of our approach is threefold:
i) we formalize the problem of bipartite network representation learning; ii) edge-type based graph partition and space projection are used to learn the representations of different types of nodes in different latent spaces; and iii) the experimental results demonstrate the effectiveness of \textit{GPSP} in network mining tasks.
\section{Our Model}
The definitions of homogeneous network~\cite{perozzi2014deepwalk} and heterogeneous network~\cite{dong2017metapath} are adopted. A bipartite network is defined:
\begin{definition}{\textbf{A Bipartite Network}}
is defined as a graph $G=(V,E)$ where $V=V_1\cup V_2$ and $E=E_{V_1V_2}$. $V_1$ and $V_2$ are two vertex sets of different types. In a bipartite network, each edge $e_{v_1v_2}\in E_{V_1V_2}$ connects two nodes of different types, $v_1\in V_1$ and $v_2\in V_2$.
\end{definition}
\paragraph{Edge-type based graph partition}
For a heterogeneous network $G$, we first build a type-table to record all types of relationships in the network. The network is then partitioned into a minimum number of subnetworks, where each subnetwork is either a homogeneous network or a bipartite network.
\paragraph{Homogeneous network embedding}
For homogeneous subnetworks, we employ conventional embedding algorithms such as LINE and DeepWalk to learn \emph{homogeneous embeddings}. The \emph{GPSP} framework with LINE and DeepWalk algorithms are recorded as GPSP-LINE and GPSP-DeepWalk, respectively.
\paragraph{Bipartite network embedding}
Unlike in homogeneous networks, each edge in a bipartite network connects two nodes of different types.
After learning the representations of two types of objects $O$ and $P$ in two different homogeneous networks (i.e., in two different low-dimensional spaces), we can treat the relationship between $O$ and $P$ in the bipartite network as an implicit projection between the two low-dimensional spaces. Based upon the projective relation between the two types of nodes, space projection is performed to learn the \emph{projective representations} of nodes.
Equation 1 formulates the projective representation learning process. In a bipartite network that contains projective information from homogeneous network $A$ to homogeneous network $B$, each node $A_i$ in network $A$ could learn a projective representation in network $B$, denoted as $Embd_{A_i\to B}$:
\begin{equation}Embd_{A_i\to B}= \frac{1}{N} \sum_{j=1}^N Embd_{B_{j}}\cdot w_{A_i B_j} \,, \end{equation}
where $\to$ denotes the projection relation between the two spaces, $\{B_{N}\}$ is the set of the $N$ objects $B_j$ in network $B$ with $A_i\to B_j$, $Embd_{B_{j}}$ is the learned homogeneous representation of $B_j$, and $w_{A_i B_j}$ is the projective weight between nodes $A_i$ and $B_j$.
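As an illustration (this is only a sketch and not code released with \textit{GPSP}; all names are ours), the projective representation of Eq.~(1) is simply a weighted average of the homogeneous embeddings of the nodes linked to $A_i$ in the bipartite subnetwork:
\begin{verbatim}
// Sketch of Eq. (1): projective embedding of a node A_i in the space of B.
// embB[j] : homogeneous embedding of node B_j (all of dimension d)
// nbrs[k] : indices j of the nodes B_j with a bipartite edge A_i -> B_j
// w[k]    : projective weight w_{A_i B_j} for the k-th linked node
#include <cstddef>
#include <vector>

std::vector<double> ProjectiveEmbedding(
    const std::vector<std::vector<double>>& embB,
    const std::vector<std::size_t>& nbrs,
    const std::vector<double>& w) {
  if (nbrs.empty()) return {};
  std::vector<double> out(embB[nbrs[0]].size(), 0.0);
  for (std::size_t k = 0; k < nbrs.size(); ++k)   // sum over linked nodes
    for (std::size_t d = 0; d < out.size(); ++d)
      out[d] += w[k] * embB[nbrs[k]][d];
  for (double& x : out) x /= static_cast<double>(nbrs.size());  // 1/N factor
  return out;
}
\end{verbatim}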
\paragraph{Final heterogeneous network embedding}
Finally, the learned homogeneous network embeddings and the bipartite network embeddings are concatenated to form the final heterogeneous network embeddings, in which each node contains one homogeneous embedding and potentially several projective embeddings from bipartite subnetworks. The final heterogeneous embedding combines information from different latent spaces and can thus be regarded as an ensemble embedding that improves the robustness and generalization performance of a set of embeddings.
\section{Experiments}
\subsection{Dataset}
We construct an academic heterogeneous network based on the dataset from AMiner Computer Science \cite{tang2008arnetminer}. The constructed network consists of two types of nodes, authors and papers, and three types of edges representing that (i) authors coauthor with each other; (ii) authors write papers; (iii) papers cite other papers. After performing edge-based graph partition, two homogeneous subnetworks, the coauthor network (Author-Author) and the citation network (Paper-Paper), and one bipartite network, the writing network (Author-Paper), are generated.
\subsection{Baseline methods}
We compare our approach with several strong baseline methods including Line \citep{tang2015line}, DeepWalk \cite{perozzi2014deepwalk}, and Metapath2vec \cite{dong2017metapath}.
The dimensions of the LINE-based embeddings and of the remaining embeddings are 256 and 128, respectively. We set the number of negative samples to 5.
The number of random walks to start at each node in DeepWalk and Metapath2vec is 10, and the walk length is 40.
\subsection{Multi-label node classification}
We first evaluate the performance of GPSP on the multi-label classification task. We adopt the labeled dataset generated by the study~\cite{dong2017metapath}, which groups authors into 8 categories based on authors' research fields. Following the strategy in \cite{dong2017metapath}, we try to match this label set with the author embeddings, and get 103,024 successfully matched author embeddings with their labels.
An SVM classifier is used to classify these embeddings. To evaluate the robustness of our model, we compare the performance of GPSP with the competitors by varying the percentage of labeled data from 10\% to 90\%.
The Micro-F1 and Macro-F1 scores are summarized in Table 1.
GPSP-LINE and GPSP-DeepWalk consistently outperform the baseline methods by a noticeable margin in all experiments.
Note that the metapath-based method~\cite{dong2017metapath} has a poor performance in the experiments, probably because metapath2vec heavily relies on well-structured paths that are difficult to obtain in many applications.
\begin{table}[ht]
\begin{scriptsize}
\begin{center}
\begin{tabular}{ c| c| c c c c c}
\hline
Metric & Model & 10\% & 30\% & 50\% & 70\% & 90\%\\
\hline
\multirow{7}{5em}{Micro-F1}
& LINE & 0.7062 & 0.7067& 0.7074& 0.7062& 0.7075 \\
& DeepWalk & 0.6992 & 0.7010& 0.6992& 0.6986& 0.6988 \\
& metapath2vec & 0.6546 & 0.6549& 0.6547& 0.6552& 0.6529 \\
& metapath2vec++ & 0.6692 & 0.6681& 0.6676& 0.6677& 0.6651 \\
& GPSP-LINE & \textbf{0.7512}&\textbf{0.7557}&\textbf{0.7564}&\textbf{0.7554}&\textbf{0.7552}\\
& GPSP-DeepWalk &\textbf{0.7275}&\textbf{0.7318}&\textbf{0.7324}&\textbf{0.7320}&\textbf{0.7318}\\
\hline
\multirow{7}{5em}{Macro-F1}
& LINE & 0.7032 & 0.7036& 0.7043& 0.7035&0.7036 \\
& DeepWalk & 0.6964 &0.6982&0.6965&0.6963&0.6961 \\
& metapath2vec & 0.6307 & 0.6313& 0.6322& 0.6328& 0.6301 \\
& metapath2vec++ & 0.6478 & 0.6473& 0.6478& 0.6473& 0.6445 \\
& GPSP-LINE &\textbf{0.7482}&\textbf{0.7527}&\textbf{0.7534}&\textbf{0.7526}&\textbf{0.7522}\\
& GPSP-DeepWalk &\textbf{0.7253}&\textbf{0.7290}&\textbf{0.7298}&\textbf{0.7295}&\textbf{0.7289}\\
\hline
\end{tabular}
\end{center}
\end{scriptsize}
\caption{Multi-label node classification results}
\vspace{-0.3cm}
\end{table}
\subsection{Node clustering}
To further evaluate the quality of the latent representations learned by GPSP, we also perform a node clustering task. We adopt K-means as the clustering algorithm on the learned latent representations, with $K$ set to 8 (the number of author categories). The evaluation metric is normalized mutual information (NMI), which measures the agreement between the generated clusters and the ground-truth labels.
The results are reported in Table 2. GPSP-DeepWalk achieves the best result, improving NMI by 24\% over the original DeepWalk.
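A minimal sketch of this clustering evaluation is given below; it uses default K-means settings and placeholder data, and assumes the learned representations and ground-truth labels are available as NumPy arrays, rather than reproducing the exact setup behind Table 2.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Placeholder data; replace with the learned embeddings and author labels.
X = np.random.rand(1000, 256)
y = np.random.randint(0, 8, size=1000)

clusters = KMeans(n_clusters=8, random_state=0).fit_predict(X)
print("NMI:", normalized_mutual_info_score(y, clusters))
\end{verbatim}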
\begin{table}[ht]
\begin{scriptsize}
\begin{center}
\begin{tabular}{cc c c c c}
\hline
LINE &DeepWalk &metapath2v&metapath2v++& GPSP-LINE&GPSP-DeepWalk \\
\hline
0.2516 &0.2873&0.2403& 0.2470&\textbf{0.3118}&\textbf{0.3555}\\
\hline
\end{tabular}
\end{center}
\end{scriptsize}
\caption{Node clustering results (NMI scores)}
\vspace{-0.5cm}
\end{table}
\iffalse
Table 3 shows the result of LINE and GPSP-LINE with only 2nd order of proximity. Our GPSP-LINE's result surpasses the result of LINE by $6\%$. Overall, our models gain around $6\%$ improvement over the benchmarks in node clustering task with considering high order if proximity.
\begin{table}[ht]
\begin{scriptsize}
\begin{center}
\begin{tabular}{ c c c}
\hline
Model & LINE-2nd & GPSP-LINE-2nd \\
\hline
NMI &0.2529& \textbf{0.3118}\\
\hline
\end{tabular}
\end{center}
\end{scriptsize}
\caption{Node clustering results}
\end{table}
\fi
\section{Conclusion}
A novel heterogeneous network embedding model, \textit{GPSP}, is proposed, which supports representation learning for multiple types of nodes and edges. Extensive experiments on two network mining tasks, node classification and node clustering, show that GPSP outperforms the benchmark methods.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction \label{intro}}
Solar flares are powerful outburst phenomena observed in the solar atmosphere, recorded to release energies of up to $10^{33}$~erg over short time intervals ($10^2-10^3$ seconds) (see e.g. \cite{Hudson1983, Kopp2005}). Their energy source is believed to be the magnetic energy stored in the solar corona, released through magnetic reconnection. During these events, plasma heating up to keV temperatures and ion (electron) acceleration up to energies of tens of GeV (hundreds of MeV) are observed. The largest solar flares are also associated with coronal mass ejections (CMEs); for a review see e.g. \cite{Aschwanden2002, Benz2008, Fletcher2011}.
A significant fraction of the energy released during these events is inferred to be passed into non-thermal particles, with electrons dominating over protons at low ($<$~MeV) energies \cite{Aschwanden2017}. Despite their low-energy dominance, hard X-ray and $\gamma$-ray emission studies indicate that the spectra of electrons are soft above MeV energies (see e.g. \cite{Lin1982,Lin1985}). Furthermore, the $\gamma$-ray spectra of the brightest flares above 100~MeV would require an extremely hard electron power-law spectrum ($J_e\sim E_e^{-\alpha}$ for $\alpha<2$)~\cite{Ajello2014} in order to be explained by an electron emission scenario. It is therefore natural to assume that hadrons dominate significantly over electrons at high energies, such that the $\gamma$-ray emission above 100~MeV is dominated by hadronic $\gamma$-ray production.
Solar flare $\gamma$-ray emission up to 100~MeV was first detected with the GRS instrument on board the \textit{Solar Maximum Mission} (SMM) \cite{Rieger1983}. Following this, the GAMMA-1 telescope \cite{Akimov1991} and EGRET on board the \textit{Compton Gamma-Ray Observatory} (CGRO) \cite{Kanbach1993} detected $\gamma$-ray emission above 100~MeV, reaching energies up to 2~GeV. The launch of the \textit{Fermi} mission in 2008 started a new precision era in the study of high energy $\gamma$-rays from the Sun.
The \textit{Fermi} satellite has two detectors on board: the \textit{Fermi} Gamma-ray Burst Monitor (\textit{Fermi}-GBM), sensitive between 10~keV and 30~MeV \cite{FermiGBM}, and the \textit{Fermi} Large Area Telescope (\textit{Fermi}-LAT), sensitive between 20~MeV and 300~GeV \cite{FermiLAT}. The high statistics and energy resolution of the \textit{Fermi}-LAT have allowed an accurate determination of the $\gamma$-ray spectra from solar flares. Moreover, the recent release of the new PASS8 data has significantly increased the $\gamma$-ray sensitivity of the \textit{Fermi}-LAT below 1~GeV, which is of particular importance for the study of the ion distribution above 100~MeV/nuc in solar flare events.
Motivated by these recent improvements in solar flare observations, we here implement several improvements to the hadronic $\gamma$-ray production descriptions above 30~MeV. We firstly explore the application of new $p+p\to\pi$ cross sections, known to provide a particularly accurate description of the process for kinetic energies close to threshold. We also implement and explore the additional consideration of subthreshold pion and $\gamma$-ray continuum production above 30~MeV, produced via nuclear interactions, both of which have previously been neglected.
The layout of this paper is the following. In Sec.~\ref{sec:Fermi} we consider the \textit{Fermi}-LAT data for four major solar flares and investigate the impact that the new PASS8 data has on two of these flares. In Sec.~\ref{sec:GammaChannels} we revise the $\gamma$-ray production cross sections and demonstrate explicitly the contributions of the subthreshold pions and the so-called \textit{hard photon} channels, indicating the further impact that the consideration of different energetic particle abundances has on the final $\gamma$-ray spectrum. In Sec.~\ref{sec:ResultDiscuss} we discuss the primary spectra parameters, and conclude with a summary of the main results.
\vspace{-0.3cm}
\section{\textit{Fermi}-LAT solar flare data \label{sec:Fermi}}
\subsection{\textit{Fermi}-LAT data}
We focus on four major solar flares, detected by \textit{Fermi}-LAT during solar cycle 24 between 2011 and 2013, for which $\gamma$-ray emission above an energy of 100~MeV was detected. They are the 2011 March 7 and June 7 flares \cite{Ackermann2014}, the 2012 March 7 flare \cite{Ajello2014}, and the 2013 October 11 flare \cite{Pesce-Rollins2015}, all analysed using the PASS7 \emph{Instrument Response Functions} (IRFs). For the last two flares, data are provided at different instances of their time evolution. These flares share a common feature: their impulsive phase is followed by a long and slowly varying $\gamma$-ray emission phase with $E_\gamma>100$~MeV. The data cover a wide energy interval from about 60~MeV to several GeV and carry similar spectral features: they peak around $E_\gamma\approx200$~MeV, and most of the data points above 1~GeV are upper limits.
\begin{table*}
\caption{Final fit values for the two solar flares that have been reanalyzed with PASS8 data. The parameter $\Phi_{100}$ indicates the flux above 100~MeV.}
\label{tab:2}
\begin{tabular}{ccccccc}
\toprule
Dataset & \multicolumn{2}{c}{Power Law}&~~~~ & \multicolumn{3}{c}{Power Law+cut-off} \\
\cline{2-3} \cline{5-7}
& $\Phi_{100}$ [$10^{-5}$ ph/cm$^2$/s] & $\Gamma$ & ~~~~& $\Phi_{100}$ [$10^{-5}$ ph/cm$^2$/s] & $\Gamma$ & $E_c$ [MeV] \\
\hline
2011 June 7 & $2.62 \pm 0.17$ & $2.45 \pm 0.07$ &~~~~& $3.22 \pm 0.21$ & $0.00 \pm 0.04$ & $103.6 \pm 6.6$ \\
2013 October 11 (07:16:40UT) & $14.9 \pm 0.4$ & $2.35 \pm 0.03$ &~~~~& $18.4 \pm 0.5$ & $0.13 \pm 0.17$ & $125 \pm 11$\\
2013 October 11 (07:35:00UT) & $22.7 \pm 0.7$ & $2.37 \pm 0.03$ &~~~~& $27.8 \pm 0.8$ & $0.22 \pm 0.17$ & $129 \pm 12$\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Reanalysis of the \textit{Fermi}-LAT data using the new PASS8 IRFs}
In 2015\footnote{https://fermi.gsfc.nasa.gov/ssc/data/access/} the \textit{Fermi}-LAT Collaboration released a new version of the IRFs called PASS8. In comparison with the previously released software, it provides a significant improvement in effective area, especially at energies below 1~GeV\footnote{https://www.slac.stanford.edu/exp/glast/groups/canda/lat\_Performance.htm}, which are particularly relevant for this study. For this reason, we assess the gain obtained by reanalysing the data, focusing on two flares, namely the 2011 June 7 and the 2013 October 11 events. These two flaring events were chosen because they are short and easy to analyse without the need for particular techniques such as tracking \cite{Ackermann2014} or the use of tailored IRFs \cite{Ajello2014}.
The \textit{Fermi}-LAT data were analyzed with the standard binned likelihood method in an energy range from 60 MeV to 50 GeV. The region of interest (RoI) analyzed was a square region of 24 degrees in size, centred on the position of the Sun during the day of the flare. The localization of the centroid of the emission made use of the data above 100 MeV to ensure a better point spread function, and was obtained using the standard tool {\tt gtfindsrc}. For the 2011 June 7 flare, the Sun was close to the projected position of the Crab pulsar, so the centroid was extracted from a circle with a radius of 5 degrees to avoid contamination.
The emission of the Sun was modeled as a point-like source centred at the centroid found previously, and the model file used in the {\tt gtlike} routine also included the diffuse models for the Galactic and isotropic backgrounds. In the case of the 2011 flare, we also added the Crab pulsar. Because of the vicinity of this flare to the Galactic plane, the normalizations of these extra sources were left free. For the 2013 flare the background models were instead fixed to the 3FGL catalogue values \cite{2015ApJS..218...23A}.
The PASS8 data also allow the energy dispersion matrix to be taken into account, which reduces the systematic uncertainties. This step is particularly important when analysing energies below 100~MeV, as in this case.
The SED points were computed following the procedure illustrated in \cite{Ackermann2014}, fixing the power-law index at 2 and leaving the normalization free in each energy bin. The SED of the 2011 flare, shown in Figure~\ref{fig:comparisonP8}, illustrates the improvement in the determination of the spectrum with the new software.
Table~\ref{tab:2} shows the results of the likelihood fits to the reanalyzed data using power-law and power-law with exponential cut-off functions. The comparison with the previously published results shows significant differences only for the power law with exponential cut-off, for which the cut-off energy is reconstructed at slightly lower values.
\begin{figure}
\includegraphics[scale=0.42]{SF20110607_pass8_publish.pdf}
\caption{Comparison between the \emph{Fermi}-LAT data of the solar flare 2011 June 7 reported in \cite{Ackermann2014} (red squares) and the reanalysis made using the PASS8 IRFs (black circles). Besides the reduction of the error bars, the upper limits above 10 GeV are more constraining.\label{fig:comparisonP8}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.42]{NucContin_a=2.pdf}
\includegraphics[scale=0.42]{NucContin_a=4.pdf}
\caption{Nuclear $\gamma$-ray spectrum for $E_\gamma>30$~MeV. The primary ion flux is a power-law function of kinetic energy per nucleon with $J_i= (f_i\times\upsilon) \sim T_i^{-\alpha}$ for $\alpha=2$ (left panel) and $\alpha=4$ (right panel), see Eq.~(\ref{eq:qs}). The mass compositions of the projectiles and of the target material are set to SEP \cite{Reames2014Abund} and to a solar composition \cite{Lodders2009}, respectively. The full gray line shows the $\pi^0\to2\gamma$ contribution, while the gray dash line shows only the contribution from the subthreshold pions. The red long dash line shows the contribution from \textit{hard photons}. The black long dash dot line is the sum of the \textit{hard photon} and $\pi^0\to2\gamma$ channels. For comparison the $p+p\to\pi^0\to2\gamma$ contribution is also shown (brown short dash dot line), multiplied by 1.8 (the nuclear enhancement factor) to compare with the nuclear spectrum. In the left panel the contribution from the subthreshold pions and \textit{hard photons} is small and has been multiplied by a factor of 10 in the figure. \label{fig:Continuum}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{Newpprad2.pdf}
\includegraphics[scale=0.5]{NewSolarrad2.pdf}\\
\includegraphics[scale=0.5]{Newpprad4.pdf}
\includegraphics[scale=0.5]{NewSolarrad4.pdf}
\caption{Gamma-ray production spectrum from different leptonic and hadronic channels for $p+p$ interactions (left) and for SEP nuclei interacting with a solar composition target (right). The primary ion fluxes considered are power-laws in kinetic energy per nucleon, $J_i \sim T_i^{-\alpha}$, for $\alpha=2$ (top panels) and $\alpha=4$ (bottom panels). The considered channels are: $\pi^0\to2\gamma$ (gray line), \textit{hard photons} (gray long-short dash line), $e^+$/$e^-$ bremsstrahlung (red/black line), $e^+$ annihilation in flight (red dash-dot line), primary electrons (black dash-dot line). The primary electron energy spectrum is assumed to be similar to the proton one but with 1~\% of the proton flux. The black dash-line is the sum of all channels. \label{fig:1}}
\end{figure*}
\section{Gamma-ray production \label{sec:GammaChannels}}
\subsection{Interaction model}
We first consider a region in the solar atmosphere where accelerated primary particles interact with the ambient medium, producing secondary particles. Let $q_s(E_s,t)=d\dot{N}_s/dE_s$ be the secondary particle production rate per unit energy interval centered at energy $E_s$ at time $t$. The value of $q_s$ is computed as follows:
\begin{equation}\label{eq:qs}
q_s(E_s,t)= n_t \int\limits_{E^{\rm th}}^\infty dE\;f(E,t)\;\upsilon\; \frac{d\sigma}{dE_s}(E,E_s),
\end{equation}
\noindent where $n_t$ is the target medium number density, $E$ is the projectile energy, $E^{\rm th}$ is the threshold energy for the given reaction, $f(E,t)$ is the instantaneous energy distribution of the projectile particles in the interaction region, $\upsilon$ is the projectile speed, and $d\sigma/dE_s$ is the secondary particle production differential cross section for the specific process. It is clear from Eq.~(\ref{eq:qs}) that the computation of $q_s$ for a given target medium density $n_t$ requires the primary particle energy distribution $f$, together with the differential cross section and threshold energy of the specific process.
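For illustration, Eq.~(\ref{eq:qs}) can be evaluated by direct numerical quadrature as sketched below; the function \texttt{dsigma\_dEs} is only a placeholder for a specific parametrization (e.g. those discussed in Sec.~\ref{sec:GammaChannels}), and the units and grid settings are schematic assumptions.
\begin{verbatim}
import numpy as np

M_P = 0.938   # proton rest mass [GeV]
C   = 3.0e10  # speed of light [cm/s]

def q_s(E_s, f, dsigma_dEs, n_t, E_th, E_max=1.0e3, num=500):
    """Secondary production rate q_s(E_s) of Eq. (1), per unit E_s."""
    E = np.logspace(np.log10(E_th), np.log10(E_max), num)  # projectile kinetic energy [GeV]
    gamma = 1.0 + E / M_P
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    integrand = f(E) * beta * C * dsigma_dEs(E, E_s)        # f * v * dsigma/dE_s
    return n_t * np.trapz(integrand, E)

# Example: power-law projectile distribution and a toy cross section
# (the toy form is a placeholder, not a real parametrization).
f = lambda E: E**-4.0
dsigma_dEs = lambda E, E_s: 1.0e-27 * np.exp(-E_s / E)
print(q_s(E_s=0.1, f=f, dsigma_dEs=dsigma_dEs, n_t=1.0e13, E_th=0.28))
\end{verbatim}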
Let us suppose that the energetic primary particles are injected in the interaction region with a rate per unit energy $Q$. Assuming that after being injected, these particles can only lose energy or escape from the interaction region, their $f$ evolves with time (see e.g. \cite{Ginzburg1964}):
\begin{equation}\label{eq:EqfpEvolv}
\frac{\partial f}{\partial t} + \frac{\partial}{\partial E} \left(\frac{E\, f}{\tau_{\rm Eloss}}\right) + \frac{f}{\tau_{\rm esc}} = Q.
\end{equation}
\noindent Here $\tau_{\rm Eloss}$ is the energy loss time and $\tau_{\rm esc}$ is the particle residence time in the interaction region.
Note that the solution of Eq.~(\ref{eq:EqfpEvolv}) can be simplified for the two extreme limiting cases. In the first, the escape of particles from the region dominates over energy losses (i.e. $\tau_{\rm esc}<\tau_{\rm Eloss}$), and the solution of Eq.~(\ref{eq:EqfpEvolv}) is $f=Q \times \tau_{\rm esc}$. In this case, the system is said to be in the \textit{thin target regime}. In the second case, if one can neglect particle escape (i.e. $\tau_{\rm Eloss}<\tau_{\rm esc}$), the system is said to be in the \textit{thick target regime}. In this regime, $f$ evolves with time until the rate of injected particles in the region balances the rate of particles removed from it via energy losses. At this equilibrium point the evolution of $f$ reaches saturation.
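For later reference, the saturated (thick-target) distribution can be written in closed form: neglecting escape, it is obtained by balancing the injection rate above an energy $E$ against the flux of particles cooling through $E$ (a standard result, see e.g. \cite{Ginzburg1964}),
\begin{equation*}
\frac{E\,f_{\rm sat}(E)}{\tau_{\rm Eloss}(E)}=\int_E^\infty Q(E')\,dE',
\end{equation*}
while in the opposite limit of fast escape one recovers $f=Q\,\tau_{\rm esc}$ quoted above.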
\begin{figure*}
\centering
\includegraphics[scale=0.45]{Abundance.pdf}
\includegraphics[scale=0.45]{NucLinesRKL.pdf}
\caption{The left panel shows the elemental abundances for a solar composition (gray squares) \cite{Lodders2009} and solar energetic particles (SEP) for gradual events (cyan stars) and impulsive events (red triangles) \cite{Reames2014Abund,Reames2014}. The blue dash-line shows the threshold we use in our calculations to select the elements. The right-hand side panel shows the MeV and the GeV $\gamma$-ray spectra for two different compositions of energetic particles: impulsive SEP (red line) and solar composition (gray line). The energetic particle fluxes are power-laws in kinetic energy per nucleon with indices $\alpha=2$ and 4. The final $\gamma$-ray spectra are normalized to have the same value at high energies. The elements that produce the strongest nuclear $\gamma$-ray lines are identified. The region between the vertical dash gray lines is dominated by the compound and preequilibrium nuclear $\gamma$-ray continuum that has not been taken into account and that smoothly connects the nuclear lines with the higher energy emission. \label{fig:CRSolar}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{fpEvolva4.pdf}
\includegraphics[scale=0.5]{SpecppEvola4.pdf}
\caption{Evolution of the proton energy distribution $f_p$ and the resulting $\gamma$-ray spectra from $p+p$ interactions. The proton injection rate is considered as a power-law function of the form $Q_p \sim T_p^{-\alpha}$ with $\alpha=4$. The number density and the magnetic field strength are set to $n_H=10^{13}\,{\rm cm^{-3}}$ and $B=100$~G, respectively. The left panel shows the proton energy distribution evolution for different values of the parameter $z=n_H\times t$ that are set to $z=10^{10}$ (long dash line), $10^{12}$ (dash dot line), $10^{14}$ (short dash line) and $10^{16}\,{\rm cm^{-3}\,s}$ (full line). For comparison, the saturated proton energy distribution is shown in red dash line which is reached for $z\gtrsim 5\times 10^{15}\,{\rm cm^{-3}\,s}$ ($t \gtrsim 5\times 10^{2}\,{\rm s}$). Their corresponding $\gamma$-ray spectra are shown on the right panel. \label{fig:evolution}}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.42]{ppThinThicka4.pdf}
\caption{ Gamma-ray spectrum from $p+p$ interactions for a thin target regime (dash line) and a saturated spectrum for a thick target regime (full line). The proton injection rate is assumed to be $Q_p \sim T_p^{-\alpha}$ for $\alpha=4$ and the number density and the magnetic field strength are set to $n_H=10^{13}\,{\rm cm^{-3}}$ and $B=100$~G. The black lines correspond to $\pi^0\to2\gamma$ decay and the red lines correspond to electron and positron bremsstrahlung and annihilation in flight assuming saturated $e^\pm$ spectra. \label{fig:thinthick}}
\end{figure}
\subsection{Gamma-ray production cross sections}
Solar flares can accelerate ions up to mildly relativistic energies. Accurate description of the $\gamma$-rays they produce thus requires accurate low energy $\gamma$-ray production cross sections. Hydrogen is the most abundant element in the solar atmosphere. Energetic protons with energies above the threshold energy of 0.28~GeV, colliding with the ambient hydrogen, can produce $\gamma$-rays through pion production interactions. Nuclei on the other hand, although less abundant than hydrogen, can also significantly contribute to the final $\gamma$-ray spectrum. Unlike proton interactions, nuclei have the advantage of being able to produce pions via the so-called \textit{subthreshold pion} production. Furthermore, they can also produce (below 0.28~GeV/nuc) direct continuum emission at energies $E_\gamma>30$~MeV via the so-called \textit{hard photon} production. These low energy $\gamma$-ray production channels can be especially important for solar flares, as these produce steep primary particle spectra with low energy cut-offs. These processes can significantly change the nuclear $\gamma$-ray spectral shape, compared to that produced for the simple proton-only case, and therefore should not be ignored.
The production cross-section for $p+p\to\pi^0\to2\gamma$ has recently been parametrized in \cite{Kafexhiu2014}. This new parametrization is particularly useful for solar flare modelling, achieving high accuracies down to the kinematic threshold. It uses recent pion production experimental data for kinetic energies $T_p<2$~GeV, and at higher collision energies utilizes a Monte Carlo description. We adopt this parametrization here to compute the $\gamma$-ray spectra from $p+p$ collisions.
The production cross sections for $p+p\to\pi^\pm$ at low collision energies near the kinematic threshold have also recently been parametrized in \cite{PionBump}. In this work the charged pion energy distribution in the laboratory frame has been parametrized as a function of proton collision energy using the Geant4 toolkit \citep{Geant42003, Geant42006}. The total cross sections of the charged pion production, on the other hand, are parametrized using publicly available experimental data, see \cite{PionBump}. We adopt this parametrization to compute the electron/positron spectra from $p+p$ collisions.
The $\gamma$-ray production cross sections for low energy nuclei interactions, including the production of subthreshold pions and \textit{hard photons}, have recently been parameterized in \cite{Subthresh2016}. These parametrizations are based on publicly available experimental data and give simple and accurate analytical formulae that are valid for ion kinetic energies $T_i\leq100$~GeV/nuc. We adopt here these formulae to compute the $\gamma$-ray and the $e^\pm$ spectra from all possible nuclear interactions.
To compute the electron and positron spectra from the decay of charged pions, produced via $p+p$ and nuclear interactions, we convolve the $\pi^\pm$ spectra with the $e^\pm$ energy distribution function for the monoenergetic pions \citep[see e.g.][]{Scanlon1965,Dermer1986}. The electron and positron spectra are then used to compute the $\gamma$-ray spectrum from the bremsstrahlung \cite{Blumenthal1970} and annihilation in flight \cite{Aharonian1981, Aharonian2000}. We note that in the case of nuclei, bremsstrahlung emission has a $Z^2$--dependence and the annihilation in flight has a $Z$--dependence on the nuclear charge number $Z$.
Although not the primary focus of this paper, nuclear interactions can also produce $\gamma$-ray emission below 30~MeV. The most prominent contributors of this emission are the nuclear $\gamma$-ray lines produced within the $0.1-10$~MeV energy interval, see e.g. \cite{Ramaty1979}. Their spectra have a strong dependence on the chemical composition of the target and projectile nuclei and the shape of the projectile particle spectrum below several hundreds of MeV/nuc. For the calculation of the nuclear $\gamma$-ray line spectra, we adopt the Monte Carlo method described in \cite{Ramaty1979, Kozlovsky2002}, describing the nuclear $\gamma$-ray lines below 8~MeV. Note that recent developments in both the experimental data and numerical descriptions have increased the accuracy of the nuclear $\gamma$-ray line spectra (see e.g. Refs.~\cite{Belhout2007, Murphy2009, Benhabiles-Mezhoud2011, Kiener2012}).
\begin{figure*}
\centering
\includegraphics[scale=0.45]{a4LowCut100MeVJp.pdf}
\includegraphics[scale=0.45]{RKL_LowCutLinesTpC100MeV.pdf}\\
\includegraphics[scale=0.45]{a4LowCut500MeVJp.pdf}
\includegraphics[scale=0.45]{RKL_LowCutLinesTpC500MeV.pdf}
\caption{Energy distribution of ions (left panels) and their corresponding broad-band $\gamma$-ray spectra (right panels). {\it Left panels}: Energy distribution of ions with power-law on the projectile kinetic energy per nucleon $T_i$ with index $\alpha=4$ that has a break at lower energies at $T_i^c=0.1$ (top) and 0.5~GeV/nuc (bottom). Curve (1) corresponds to the continuation of the power-law with $\alpha=4$ , curve (2) corresponds to $\alpha=2$ after the break and curve (3) implies a sharp cut-off at the break energy. {\it Right panels}: Corresponding $\gamma$-ray spectra due to: $\pi^0$ production; \textit{hard photons}; and nuclear $\gamma$-ray lines. The thin dash lines show the $p+p$ contribution. The region between the vertical dash gray lines is dominated by the compound and preequilibrium nuclear $\gamma$-ray continuum that has not been taken into account and that smoothly connect the nuclear lines with the higher energy emission. \label{fig:cuts2}}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.45]{March2012.pdf}
\caption{Time evolution of the power-law index $\alpha$ for the solar flare 2012 March 7, see Table~\ref{tab:1}. The blue error bars show the results for nuclei (SEP interaction with solar composition target material), the black error bars show the results for pure hydrogen composition (using our updated $p+p$ cross sections \cite{Kafexhiu2014,PionBump}) and the red error bars show the results from \cite{Ajello2014}, see Table~\ref{tab:1}. \label{fig:tab1}}
\end{figure}
\section{Gamma-ray emission \label{sec:illustrate}}
\subsection{Leptonic and hadronic channels}
We next apply the above-discussed cross sections to the emission zone. As a first example we show the ensemble $\gamma$-ray emission from various channels following the interaction of energetic ions with the target gas. The flux of energetic ions is assumed to be a power-law function in kinetic energy per nucleon, $J_i=f_i\times \upsilon \sim T_i^{-\alpha}$. Indices of $\alpha=2$ and $4$ are considered. The projectiles have a gradual solar energetic particle composition (gradual SEP) \cite{Reames2014Abund} and the target material has a solar composition \cite{Lodders2009}. We refer here to this abundance combination as ``Nuclei''. The resulting $\gamma$-ray spectra from both hadronic and leptonic emission are shown in Figs.~\ref{fig:Continuum} and \ref{fig:1}. In Fig.~\ref{fig:Continuum} we explicitly show the $\gamma$-ray emission from neutral pion decay including the subthreshold production, and the \textit{hard photon} component of the hadronic emission. We note that the $\gamma$-ray spectrum for the proton-only interaction scenario is multiplied by 1.8 (the nuclear enhancement factor) to facilitate its comparison with the total nuclear spectrum.
Figure~\ref{fig:1} includes the contribution from the leptonic $\gamma$-ray channels such as: $e^\pm$ bremsstrahlung, annihilation in flight and, primary electron bremsstrahlung. For this result we consider the maximum possible contribution from electrons, obtained for the case of a saturated (steady-state) $e^\pm$ spectra in the thick target regime. The $e^\pm$ injection rate is computed from the $\pi^\pm$ decays, whereas, for the primary electrons, we assume that their injection rate is similar to that of protons, but normalized to only 1~\% of the proton flux.
It is clear from Fig.~\ref{fig:1} that the presence of nuclei can have significant effects on the total $\gamma$-ray spectrum at $E_\gamma\lesssim200$~MeV. These effects are larger for the soft energetic particle spectral case considered ($\alpha=4$), for which proton-only interactions are unable to reproduce the $\gamma$-ray spectral shape. The additional inclusion of the leptonic channels further enhances such differences, see Fig.~\ref{fig:1}. For the example case shown here, the total differences between the proton and nuclear spectral shapes are less than $60$~\% for $\alpha=2$ and a factor of 2 or more for the $\alpha=4$ case. It is therefore apparent that nuclear interactions produce notably different spectra compared to the proton-only case. Furthermore, the contribution of the nuclear leptonic channels is amplified by subthreshold $\pi^\pm$ production, an $e^-/e^+$ ratio close to unity, and the $Z^2$ dependence of the bremsstrahlung on the nuclear charge number $Z$. Note that unlike the low energy proton-only interactions, where the ratio $e^-/e^+$ is close to zero, the low energy nuclear interactions produce comparable amounts of $e^\pm$ due to isospin symmetry and the roughly equal numbers of protons and neutrons, see e.g. \cite{Subthresh2016}.
\subsection{Chemical composition of energetic particles}
We next explore the effect that different energetic particle chemical compositions have on the final $\gamma$-ray spectra. We adopt the same parameters as in the previous section, changing only the chemical composition of the energetic particles. Three compositions are considered: a solar composition (Solar), a gradual SEP and an impulsive SEP composition \cite{Reames2014Abund, Reames2014}. These abundances are plotted in the left panel of Fig.~\ref{fig:CRSolar}. As seen from the figure, the difference between a solar composition and a gradual SEP is not large. Therefore, when calculating the resulting $\gamma$-ray spectrum we consider only energetic particles with a solar or impulsive SEP type composition. The $\gamma$-ray spectra for these cases are shown on the right-hand panel of Fig.~\ref{fig:CRSolar}.
In addition to the $\gamma$-ray continuum above 30~MeV we have also computed the spectrum of nuclear $\gamma$-ray lines below 8~MeV. It is clear from Fig.~\ref{fig:CRSolar} that changes in the mass composition from solar to impulsive SEP do not notably affect the $\gamma$-ray spectrum above 30~MeV, the differences in the range 30-200~MeV being less than 40~\%. However, the same changes in mass composition do have dramatic effects in the nuclear $\gamma$-ray line region. These differences originate from the fact that nuclei heavier than helium are more abundant in the impulsive SEP composition, and the de-excitation emission of these energetic heavy projectiles suffers strong Doppler broadening. This results in the production of broad nuclear lines, which blend together to form a quasi-continuum for the SEP composition scenario.
\subsection{Proton thick target emission}
Due to energy losses, the proton energy distribution evolves with time in the interaction region, until reaching saturation (steady state). Here we compute this evolution and the resulting $\gamma$-ray spectra. We adopt the thick target regime for protons, with a power-law injection spectrum of the form $Q_p=\mathcal{N}\times T_p^{-\alpha}$ with $\alpha=4$ and $\mathcal{N}$ a normalization constant. The electrons and positrons produced via $\pi^\pm$ production are also injected into the interaction region, and are also assumed to be in the thick target regime. Proton energy losses are dominated by ionization losses and inelastic collisions, whereas the energy losses for electrons are dominated by ionization, bremsstrahlung and synchrotron losses, see e.g. Refs.~\cite{Blumenthal1970, PDG2016}. Since the proton energy losses are proportional to the target number density $n_H$, their energy distribution is better described by the quantity $z=n_H\times t$. Here we assume $n_H=10^{13}\,{\rm cm^{-3}}$ and a magnetic field strength $B=100$~G as fiducial values for the solar atmosphere.
The left panel of Fig.~\ref{fig:evolution} shows the proton energy distribution evolution at four distinguishable epochs with $z=n_H\times t=10^{10}$, $10^{12}$, $10^{14}$ and $10^{16}\,{\rm cm^{-3}\,s}$, corresponding to evolution times of $t=10^{-3}$, $10^{-1}$, $10^{1}$ and $10^{3}\,{\rm s}$, respectively. The proton energy distribution evolution can be understood in simple terms. When the evolution time is much smaller than the cooling timescale, the effect of losses is negligible. Therefore, the proton energy distribution has the same energy dependence as the injection rate $Q_p$, with the population of particles increasing linearly with time, $f_p\sim Q_p\,t$; see, e.g., the $z=10^{10}\,{\rm cm^{-3}\,s}$ curve. However, when the evolution time $t$ becomes comparable with the cooling timescale, energy losses become important, shifting the high energy population of particles towards lower energies. Consequently, $f_p$ starts to deviate from $Q_p$, with $f_p$ eventually reaching its saturation shape, after which its evolution ceases. For a steady injection rate, the proton energy distribution saturates for $z\gtrsim 5\times 10^{15}\,{\rm cm^{-3}\,s}$, corresponding for our example to $t \gtrsim 5\times 10^{2}\,{\rm s}$. Note that for an injection rate of the form $Q_p\sim T_p^{-\alpha}$, and energy losses of the form $\mathcal{P}\sim T_p ^\delta$, the saturated energy distribution is also a power-law $f_p\sim T_p^{-\beta}$ with $\beta=\alpha +\delta - 1$. This explains the broken power-law shape of the proton energy distribution in the non-relativistic region. The proton energy losses for $T_p>0.5$~GeV are dominated by inelastic collisions, which have $\delta=1$; thus $f_p$ has a similar energy dependence to $Q_p$ because $\beta = \alpha$. At lower energies, however, where ionisation dominates the energy losses and $\delta \approx -1$, $f_p$ is a harder power-law with $\beta \approx \alpha -2$. The maximum value of the break energy, $T_p\sim0.4$~GeV, is reached for the saturated spectrum; see Fig.~\ref{fig:evolution}.
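This broken power-law behaviour can be checked schematically from the saturated balance quoted after Eq.~(\ref{eq:EqfpEvolv}): for $Q_p\propto T_p^{-\alpha}$ and an energy-loss rate $\mathcal{P}=E/\tau_{\rm Eloss}\propto T_p^{\delta}$,
\begin{equation*}
f_{\rm sat}(T_p)\;\propto\;\frac{1}{\mathcal{P}(T_p)}\int_{T_p}^{\infty}T'^{-\alpha}\,dT'\;\propto\;T_p^{-\delta}\,T_p^{1-\alpha}\;=\;T_p^{-(\alpha+\delta-1)},
\end{equation*}
recovering $\beta=\alpha+\delta-1$.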
The right-hand panel of Fig.~\ref{fig:evolution} shows the resulting $\gamma$-ray spectra evolution produced for the above described set-up, via proton-only interactions. We note that similar computations for nuclei are not straightforward. Due to nuclear reactions, the nucleus number of a given species changes in the interaction region. The presence of nuclear spallation processes cause evolution of the nuclear abundances that must also be taken into account when calculating the nuclear $\gamma$-ray spectrum. Such considerations, however, are beyond the scope of this paper.
Figure~\ref{fig:thinthick} shows the contribution of the leptonic and hadronic channels to the final $\gamma$-ray spectra for the two extreme cases, namely the thin target regime and the thick target regime, labelled ``thin'' and ``thick'' in the figure, respectively. Note that the radiation from $e^\pm$ bremsstrahlung and annihilation in flight is calculated for the saturated $e^\pm$ spectra, corresponding to their maximal potential contribution. We recall that the bremsstrahlung and annihilation in flight for a thin target $e^\pm$ regime can be negligible.
\begin{figure}
\includegraphics[scale=0.45]{Figc_data.pdf}
\caption{The MeV and GeV $\gamma$-ray spectra from the 2013 October 11 solar flare observed by the \textit{Fermi}-GBM and the \textit{Fermi}-LAT \cite{Pesce-Rollins2015}. The red line is the calculation using a power-law ion flux with index $\alpha=3.7$ derived by fitting the \textit{Fermi}-LAT data. The $\gamma$-ray flux below 10 MeV is calculated assuming a continuation of the power-law function toward lower energies. The flux below 10 MeV is also smoothed to take into account the 10~\% energy resolution of the \textit{Fermi}-GBM detector. The energetic ion abundances are set to SEP and the target material to a solar composition. \label{fig:lines_pi}}
\end{figure}
\subsection{Low energy spectra of energetic particles}
Here we explore the effect that the low energy spectral shape of the energetic particle spectra has on the $\gamma$-ray emission. We assume a thin target regime for simplicity, with the ion flux being described by a broken power-law. We consider break energies of $T_i^c=100$ and 500~MeV/nuc. The high energy part of the power-law has a fixed index of $\alpha=4$. The shape below the break energy is described by: 1) a continuation of the $\alpha=4$ power-law, 2) an $\alpha=2$ power-law and 3) a sharp low energy cut-off.
The left-hand panel of Fig.~\ref{fig:cuts2} shows the energetic particle spectra, whereas, the right-hand side shows the respective $\gamma$-ray spectra. We do not include here the $\gamma$-ray production from secondary $e^\pm$ channels. It is clear from the figure that the low energy primary spectral shape has a dramatic effect on the $\gamma$-ray spectrum below 200~MeV, especially in the energy region of the nuclear $\gamma$-ray lines, where the emissivities can change by orders of magnitude. These effects will be magnified if the solar composition is replaced by a heavier one. Unlike the nuclear interactions, the resulting radiation spectrum from proton-only interactions remains practically unchanged.
\begin{table}
\caption{The MCMC results for the primary spectrum power-law index $\alpha$ for the considered \textit{Fermi}-LAT solar flare data. The ``Hydrogen'' column gives the results for a pure hydrogen composition, whereas ``Nuclei'' gives the results for SEP interacting with a solar composition target material. The ``\textit{Fermi}'' column quotes the index $\alpha$ values that are published in \textit{Fermi}-LAT publications \cite{Ackermann2014, Ajello2014, Pesce-Rollins2015}. \label{tab:1}}
\begin{tabular}{lccc}
\toprule
Flare & Hydrogen & Nuclei & \textit{Fermi} \\
\hline
2011 March 7 & $4.27^{+0.22}_{-0.20}$ & $3.80^{+0.11}_{-0.09}$ & $4.5^{+0.2}_{-0.2}$ \\
2011 June 7 & $4.12^{+0.54}_{-0.43}$ & $3.48^{+0.19}_{-0.14}$ & $4.3^{+0.3}_{-0.3}$ \\
2012 March 7 (02:27:00UT) & $3.46^{+0.13}_{-0.11}$ & $3.33^{+0.09}_{-0.07}$ & $3.8^{+0.1}_{-0.1}$ \\
2012 March 7 (03:52:00UT) & $3.71^{+0.04}_{-0.04}$ & $3.53^{+0.02}_{-0.01}$ & $4.0^{+0.1}_{-0.1}$ \\
2012 March 7 (05:38:32UT) & $4.26^{+0.10}_{-0.06}$ & $4.09^{+0.04}_{-0.03}$ & $4.6^{+0.2}_{-0.2}$ \\
2012 March 7 (07:03:00UT) & $4.47^{+0.07}_{-0.07}$ & $4.22^{+0.01}_{-0.01}$ & $4.8^{+0.1}_{-0.1}$ \\
2012 March 7 (08:50:00UT) & $4.59^{+0.31}_{-0.27}$ & $3.97^{+0.14}_{-0.13}$ & $5.1^{+0.3}_{-0.3}$ \\
2012 March 7 (10:14:32UT) & $5.09^{+0.21}_{-0.13}$ & $4.56^{+0.03}_{-0.03}$ & $5.5^{+0.2}_{-0.2}$ \\
2013 October 11 (07:16:40UT) & $3.98^{+0.30}_{-0.24}$ & $3.71^{+0.21}_{-0.20}$ & $3.8^{+0.2}_{-0.2}$ \\
2013 October 11 (07:35:00UT) & $3.88^{+0.26}_{-0.22}$ & $3.62^{+0.19}_{-0.18}$ & $3.7^{+0.2}_{-0.2}$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Results and Discussion \label{sec:ResultDiscuss}}
In this section we show the energetic particle spectral parameters obtained by fitting the \textit{Fermi}-LAT solar flare data described in section~\ref{sec:Fermi}. For this analysis we assume a thin target regime for ions and a thick target regime for the secondary electrons (i.e. adopting their saturated spectra). We also consider two chemical compositions, namely a pure proton (Hydrogen) and an SEP composition (Nuclei), interacting with solar abundance target material. We note that changing the chemical composition of energetic particles from gradual to impulsive SEP or to a solar composition has negligible effects in the energy range $E_\gamma\geq60$~MeV relevant for the \textit{Fermi}-LAT solar flare data, see Fig.~\ref{fig:CRSolar}. We consider a primary ion flux described by a power-law function of the form $J_i=N\times T_i^{-\alpha}$, where the normalization constant $N$ and the power-law index $\alpha$ are free parameters. For exploring this spectral parameter space, we adopt Goodman and Weare's affine-invariant Markov chain Monte Carlo ensemble sampler (MCMC) as implemented in \cite{emcee} and adopt the revised $\gamma$-ray production cross sections described in section~\ref{sec:GammaChannels}. The results of the analysis are summarized in Table~\ref{tab:1}.
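A minimal sketch of this fitting procedure is given below; it is purely illustrative: \texttt{model\_flux} stands in for the $\gamma$-ray spectrum computed with the cross sections of Sec.~\ref{sec:GammaChannels}, the Gaussian likelihood ignores the treatment of upper limits, and the data arrays, priors and sampler settings are placeholder assumptions rather than the configuration actually used here.
\begin{verbatim}
import numpy as np
import emcee

# Placeholder SED data [GeV, arbitrary flux units]; replace with Fermi-LAT points.
E_data    = np.array([0.06, 0.1, 0.2, 0.5, 1.0])
flux_data = np.array([3.0, 4.0, 3.0, 1.0, 0.3])
flux_err  = 0.2 * flux_data

def model_flux(E, alpha):
    # Stand-in spectral template; the real one follows from Eq. (1).
    return E**(1.0 - alpha) * np.exp(-0.3 / E)

def log_prob(theta):
    logN, alpha = theta
    if not (1.0 < alpha < 7.0):      # flat prior on the power-law index
        return -np.inf
    model = 10.0**logN * model_flux(E_data, alpha)
    return -0.5 * np.sum(((flux_data - model) / flux_err) ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([0.0, 4.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000)
chain = sampler.get_chain(discard=500, flat=True)
print("alpha (16/50/84 percentiles):", np.percentile(chain[:, 1], [16, 50, 84]))
\end{verbatim}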
We next compare the results obtained here for the hydrogen case with those quoted in the \textit{Fermi}-LAT publications. As seen in Table~\ref{tab:1}, the index $\alpha$ obtained in this work shows significant deviations from the values quoted in the literature. These changes can be predominantly explained by the differences between our revised $p+p$ cross-section parametrizations and the ones used in the \textit{Fermi}-LAT publications \cite{Dermer1986a,Murphy1987}.
Further significant differences are also seen when nuclei are considered. The index $\alpha$ required to fit the $\gamma$-ray data is systematically smaller for nuclei than for the hydrogen case, see Table~\ref{tab:1}. Thus, for nuclei, the same $\gamma$-ray data require a harder primary spectrum than the corresponding proton-only values. These contrasts in the primary particle parameter space are a reflection of their different $\gamma$-ray spectral shape for $E_\gamma<200$~MeV.
Observations of the 2012 March 7 and 2013 October 11 solar flares by \textit{Fermi}-LAT have provided $\gamma$-ray data at different instances during the evolution of the flares. Specifically, the analysis of the 2012 March 7 flare data suggests that the power-law index $\alpha$ increases with time, see Fig.~\ref{fig:tab1}.
For the 2013 October 11 flare, the \textit{Fermi}-GBM data below 10~MeV are also provided \cite{Pesce-Rollins2015}. Figure~\ref{fig:lines_pi} shows the subsequent best-fit $\gamma$-ray spectrum to the \textit{Fermi}-LAT data, with a low energy comparison to the \textit{Fermi}-GBM data for the Nuclei composition case. We assume here that the same functional form of the primary spectra fit to the \textit{Fermi}-LAT data continues down to the lower energies relevant for nuclear $\gamma$-ray line production. As we can see in Fig.~\ref{fig:lines_pi}, the $\gamma$-ray flux predicted from the soft pure power-law primary flux fits well the high energy data, but over-predicts the MeV $\gamma$-ray flux. Note, however, that in the thick target regime, ionization losses will harden the non-relativistic part of the ion spectrum. The MeV $\gamma$-ray fluxes predicted here may therefore be reduced; e.g. see Fig.~\ref{fig:cuts2}. Furthermore, for the Nuclei composition case, with the energetic particles interacting in the thick target regime, the evolution of the nuclear states due to spallation will further complicate this picture. Interestingly, such an evolution may offer the future opportunity to probe the nuclear residence times using the nuclear $\gamma$-ray line information.
Lastly, we recall that our reanalysis of the 2011 June 7 and the 2013 October 11 data using the new PASS8 data shows improvements in the quality of the data, reducing the error bars and adding one more data point around 1~GeV, see Fig.~\ref{fig:comparisonP8}. Despite this, the final primary spectra parameters required to fit the $\gamma$-ray data do not show significant changes.
\section{Conclusions}
The high quality $\gamma$-ray solar flare observations carried out by \textit{Fermi}-LAT demand accurate modelling of the $\gamma$-ray emission above 30~MeV. In this work we have revised the hadronic $\gamma$-ray emission calculations for both protons and nuclei, taking into account the secondary electrons produced. Utilizing our recent updates to the hadronic $\gamma$-ray production cross sections for both protons and nuclei, we have highlighted the importance of an accurate description of pion production close to threshold, of nuclear subthreshold pion production, and of \textit{hard photon} emission. Neglecting these processes is found to be considerably detrimental to the recovery of the underlying projectile particle spectrum from the \textit{Fermi}-LAT $\gamma$-ray data.
\bibliographystyle{aasjournal}
\section*{Introduction}
The Menger curve is a $1$-dimensional Peano continuum that is classically extracted from the cube in the same way that the Cantor space is extracted from the interval: subdivide $C_0=[0,1]^3$ into $3^3$ congruent subcubes; let $C_1$ be the union of these subcubes which intersect the one-skeleton of $[0,1]^3$; repeat this process on each subcube again and again to define $C_n$; the Menger curve is defined to be the intersection $\bigcap_n{}C_n$.
With this construction Menger found the first example of a universal space for the class of $1$-dimensional continua, that is,
a $1$-dimensional continuum in which every other $1$-dimensional continuum embeds \cite{Mg}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.3]{cube1} $\quad$ $\quad$ \includegraphics[scale=0.3]{cube2} $\quad$ $\quad$ \includegraphics[scale=0.6]{cube3}
\caption{From $C_0$ to $C_1$.}
\end{figure}
The Menger curve is a canonical continuum whose topological properties do not depend on the various geometric parameters of the
above iterative process. In fact, many other constructions of universal $1$-dimensional continua (e.g., \cite{Lf,No}) which appeared soon after \cite{Mg}
were later shown to produce the same space; see \cite{An}.
In this paper, we develop a combinatorial model for the Menger curve using an analogue of projective Fra{\"i}ss{\'e}{} theory from \cite{IS}. The \emph{Menger prespace}
$\mathbb{M}$ is a compact graph-structure on the Cantor space. In a sense, $\mathbb{M}$ is the \emph{generic inverse limit} in the category $\mathcal{C}$ of all \emph{connected epimorphisms} between \emph{finite connected graphs}. The edge relation on $\mathbb{M}$ turns out to be an equivalence relation and the Menger curve is then defined to be the quotient $|\mathbb{M}|=\mathbb{M}/R$ of $\mathbb{M}$ with respect to this relation.
This definition of the Menger curve as the \emph{topological realization} $|\mathbb{M}|$ of the combinatorial object $\mathbb{M}$ has certain technical and foundational advantages.
On the foundational side, the definition of $\mathbb{M}$ is canonical since it is constructed through $\mathcal{C}$ without making any ad-hoc choices for the bonding maps. Moreover, the
definition of $|\mathbb{M}|$ is intrinsic, in that it makes no reference to external spaces such as $[0,1]^3$. On the technical side, when proving results about the Menger curve, we can often replace various complications coming from the $1$-dimensional topology of $|\mathbb{M}|$ with combinatorial problems about graphs in
$\mathcal{C}$.
Moreover, like any other projective Fra{\"i}ss{\'e}{} limit, $\mathbb{M}$ has the following \emph{projective extension property} built in by the construction: for every $g\in\mathcal{C}$ and any connected epimorphism $f$ as in the diagram, there is a connected epimorphism $h$ with $g \circ h=f$.
\begin{center}
\medskip{}
\begin{tikzcd}[column sep=small]
\mathbb{M} \arrow[drrrr, dotted, "h", swap] \arrow[rrrrrr, "f" ] & & & & & & A\\
& & & & B \arrow[urr, "g", swap] & &
\end{tikzcd}
\medskip{}
\end{center}
Having this universal property of $\mathbb{M}$ as our point of departure, and expanding on it using combinatorial properties of $\mathcal{C}$,
we can integrate various aspects of the Menger curve into a unified theory as follows.
\begin{itemize}
\item Anderson's homogeneity theorem \cite{An} states that any bijection between finite subsets of $|\mathbb{M}|$ extends to a global homeomorphism of $|\mathbb{M}|$. This theorem was later generalized in \cite{MOT} to the strongest possible homogeneity result for $|\mathbb{M}|$, namely, that every homeomorphism between locally non-separating, closed subsets of $|\mathbb{M}|$ extends to a global homeomorphism of $|\mathbb{M}|$. In Theorem \ref{T:2}, we prove a homogeneity result for ${\mathbb M}$ analogous to the homogeneity result for the Menger curve in \cite{MOT}. From that we recover Anderson's homogeneity result for $|\mathbb{M}|$.
Our proof of Theorem \ref{T:2} relies on $\mathcal{C}$ being closed under a certain mapping cylinder construction.
\item Anderson--Wilson's projective universality theorem states that $|\mathbb{M}|$ admits an open, continuous, and connected map onto any Peano continuum\footnote{In this paper we use the newer term \emph{connected map} in place of the synonymous term \emph{monotone map} used in \cite{An2,Wi}.}.
Moreover, all preimages of points under this map can be taken to be homeomorphic to the Menger curve \cite{An2,Wi}. In Theorem \ref{T:preMengerProjUniversality}, we prove a combinatorial analogue of the Anderson--Wilson theorem for $\mathbb{M}$. In the process, we isolate a combinatorial property of $\mathcal{C}$
that is responsible for this strong form of projective universality. In Corollary \ref{C:MengerProjUniversality},
we establish a variant of Anderson--Wilson's theorem for $|\mathbb{M}|$ where the map produced is weakly locally-connected instead of open.
\item
In Theorem \ref{T:ApproximateHomogeneityProperty}, we prove that $|\mathbb{M}|$ satisfies an approximate projective homogeneity property
that is analogous to the property satisfied by many other continua presented as topological realizations of projective Fra{\"i}ss{\'e} limits;
see \cite{BK} and \cite{IS} for examples. Namely, we show that
if $\gamma_0,\gamma_1\colon |\mathbb{M}|\to X$ are continuous connected maps onto a Peano continuum $X$,
then there is a sequence $(h_n)$ of homeomorphisms of $|\mathbb{M}|$ so that $(\gamma_0\circ h_n)$ converges uniformly to $\gamma_1$.
\end{itemize}
It is worth mentioning that throughout Section \ref{S: Homogeneity} one can find analogies with the abstract homotopy theory in the spirit of model categories.
Finally, pursuing an extension of this approach to higher dimensional Menger compacta, we define higher dimensional analogues of
$\mathcal{C}$, $\mathbb{M}$, and
$|\mathbb{M}|$. For every $n\in\{0,1,\ldots\}\cup\{\infty\}$, we define $\mathcal{C}_n$ to be the class of all $(n-1)$-connected epimorphisms between finite, $n$-dimensional,
$(n-1)$-connected simplicial complexes. We show that $\mathcal{C}_n$ is a projective Fra{\"i}ss{\'e}{} class. Interestingly, the same is shown to hold for the class
$\widetilde{\mathcal{C}}_n$, which is defined by replacing ``$(n-1)$-connected'' with ``$(n-1)$-acyclic'' in the definition of $\mathcal{C}_n$. As far as we are aware,
these ``homology $n$-Menger spaces'' introduced here---and for $n=\infty$ this ``homology Hilbert cube''---have not been considered before.
\tableofcontents
\section{The class $\mathcal{C}$ of finite connected graphs}\label{S:graphs}
Let $A$ be a set and let $R$ be any subset of $A^2$. We say that $R$ is {\bf reflexive} if $R(a,a)$ holds for all $a\in A$. We say that $R$ is {\bf symmetric} if for every $a,b\in A$ we have that $R(a,b)$ implies $R(b,a)$. We finally say that $R$ is {\bf transitive} if the conjunction of $R(a,b)$ and $R(b,c)$ implies $R(a,c)$. By a {\bf graph} $(A,R^A)$, simply denoted by $A$, we mean a set $A$ together with a specified subset $R^{A}$ of $A^2$ that is both reflexive and symmetric.
Notice that reflexivity makes our definition of a graph non-standard but it allows us to treat graphs as $1$-dimensional simplicial complexes. A {\bf clique} of a graph $(A,R^A)$ is any subset $C$ of $A$ with the property that for all $a,b\in C$ we have that $R^A(a,b)$.
A map $f\colon B\to A$ is a {\bf homomorphism} between graphs if it maps edges to edges, that is, if $R^{B}(b,b')$ implies $R^{B}(f(b),f(b'))$, for every $b,b'\in B$. A homomorphism $f$ is an {\bf epimorphism} if it is moreover surjective on both vertices and edges. An isomorphism is an injective epimorphism. By a {\bf subgraph} of a graph we understand an induced subgraph.
We isolate a collection $\mathcal{C}$ of finite graphs together with special epimorphisms between them, the point being, that various topological and dynamical properties of the Menger curve are reflections of combinatorial properties of $\mathcal{C}$. A subset $X$ of a finite graph $A$ is {\bf connected} if, for all non-empty $U_1, U_2\subseteq X$ with $X = U_1\cup U_2$, there exist $x_1\in U_1$ and $x_2\in U_2$ such that $R^A(x_1, x_2)$. A graph $A$ is {\bf connected} if the domain of $A$ is a connected subset. An epimorphism $f\colon B\to A$ is {\bf connected} if the preimage of each connected subset of $A$ is a connected subset of $B$.
\begin{definition}
Let $\mathcal{C}$ be the category of all finite connected graphs with morphisms in ${\mathcal C}$
being connected epimorphisms.
\end{definition}
Our first task is to establish that $\mathcal{C}$ is a projective Fra{\"i}ss{\'e}{} class. Projective Fra{\"i}ss{\'e}{} theory was developed in \cite{IS} in the more general setting of
$\mathcal{L}$-structures. For the sake of perspective, we recall from \cite{IS} the Fra{\"i}ss{\'e}{} class axioms in this more general setup. For the unfamiliar reader, we point out that a graph is just an example of an $\mathcal{L}$-structure where the language $\mathcal{L}$ consists of a binary relation symbol $R$. An important difference between the definition below and the one in \cite{IS} is that here, a Fra{\"i}ss{\'e}{} class is allowed to consist of a strict subcollection of epimorphisms, e.g. only the epimorphisms which are connected.
Let $\mathcal{F}$ be a class of finite $\mathcal{L}$-structures with a fixed family of morphisms among the structures in $\mathcal{F}$. We assume that each morphism is an epimorphism with respect to $\mathcal L$.
We say that $\mathcal F$ is a {\bf projective Fra{\"i}ss{\'e} class} if
\begin{enumerate}
\item $\mathcal F$ is countable up to isomorphism, that is, any sub-collection of pairwise non-isomorphic structures of $\mathcal F$ is countable;
\item morphisms are closed under composition and each identity map is a morphism;
\item for $B, C\in {\mathcal F}$;
there exist $D\in {\mathcal F}$ and morphisms $f\colon D\to B$ and $g\colon D\to C$; and
\item for every two morphisms $f\colon B\to A$
and $g\colon C\to A$,
there exist morphisms $f'\colon D\to B$ and $g'\colon D\to C$ such that $f\circ f' = g\circ g'$.
\end{enumerate}
We will refer to the last property above as the {\bf projective amalgamation property}.
We have the following theorem.
\begin{theorem}\label{T:1}
$\mathcal{C}$ is a projective Fra{\"i}ss{\'e} class.
\end{theorem}
\begin{proof}
We check here that $\mathcal{C}$ satisfies the projective amalgamation property. The rest of the properties follow easily. Let $f \colon B\to A$ and $g \colon C\to A$ be connected epimorphisms and let $D$ be the subgraph of the product graph $B\times C$, induced on domain
\[
\{ (b,c) \in B\times C \colon f(b) = g(c)\}.
\]
Recall that in the product graph $B\times C$ there is an edge between $(b,c)$ and $(b',c')$ if and only if $R^{B}(b,b')$ and $R^{C}(c,c')$. Let also $f'=p_B\colon D\to B$, $g'=p_C\colon D\to C$ be the canonical projections. By the definition of $B\times C$ it is immediate that $p_B,p_C$ are homomorphisms.
We show that $p_B$ is a connected epimorphism. By symmetry, the same argument applies for $p_C$.
The fact that $g$ is surjective on vertices implies that $p_B$ is surjective on vertices since for every $b$ there is a $c_b$ with $f(b)=g(c_b)$, and hence there is $d=(b,c_b)$ with $p_B(d)=b$. By the same argument, and since $g$ is surjective on edges, we have that $p_B$ is surjective on edges as well. So $p_B$ is an epimorphism. Moreover, since $g$ is connected, $g^{-1}(f(b))$ is connected for every $b\in B$. Hence the point fibers \[p_B^{-1}(b)=\{b\}\times g^{-1}(f(b))\]
of $p_B$ are connected for every $b\in B$. The following general lemma therefore implies that $p_B$ is connected.
\end{proof}
\begin{lemma}\label{L:T1}
A function between two graphs of ${\mathcal C}$ is a connected epimorphism if and only if it is an
epimorphism and preimages of points are connected.
\end{lemma}
\begin{proof} Only the direction $\Leftarrow$ needs to be checked. Let $f\colon B\to A$ be an
epimorphism such that preimages of points are connected. It suffices to show that preimages of edges
are connected. Let $b_1, b_2\in B$ be such that $R^A(f(b_1), f(b_2))$. Since $f$ is an epimorphism, there
are $b_1', b_2'\in B$ that form an edge and are such that $f(b_1')=f(b_1)$ and $f(b_2')=f(b_2)$. Since the preimages
of $f(b_1)$ and $f(b_2)$ are connected, there is a path connecting $b_1$ with $b_1'$ and $b_2$ with $b_2'$. Since
$b_1'$ and $b_2'$ are connected by an edge, $b_1$ and $b_2$ are connected by a path, as required.
\end{proof}
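For concreteness, here is a minimal instance of the amalgamation construction used in the proof of Theorem \ref{T:1}. Let $A$ be the graph on $\{a_0,a_1\}$ whose only non-trivial edge is $R^A(a_0,a_1)$, let $B$ be the path $b_0,b_1,b_2$ (with edges $R^B(b_0,b_1)$ and $R^B(b_1,b_2)$), and let $C$ be the single edge $c_0,c_1$. Define $f\colon B\to A$ by $f(b_0)=a_0$, $f(b_1)=f(b_2)=a_1$, and $g\colon C\to A$ by $g(c_i)=a_i$; both are connected epimorphisms. The amalgam is the graph on
\[
D=\{(b_0,c_0),\,(b_1,c_1),\,(b_2,c_1)\},
\]
which is again a path: $(b_0,c_0)$ and $(b_1,c_1)$ form an edge since $R^B(b_0,b_1)$ and $R^C(c_0,c_1)$, while $(b_1,c_1)$ and $(b_2,c_1)$ form an edge since $R^B(b_1,b_2)$ and, by reflexivity, $R^C(c_1,c_1)$. The projections $p_B$ and $p_C$ are connected epimorphisms with $f\circ p_B=g\circ p_C$, as required.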
\section{Topological graphs and Peano continua}\label{S: Topological graphs and Peano continua}
We import some notions from \cite{IS} and we apply them here in the special case of graphs. A {\bf topological graph} $K$ is a graph $(K,R^{K})$, whose domain $K$ is a $0$-dimensional, compact, metrizable topological space and $R^{K}$ is a closed subset of $K^2$. All types of morphisms we consider between topological graphs are assumed to be continuous. Moreover, we automatically view all finite graphs as topological structures endowed with the discrete topology.
We extend $\mathcal{C}$ to the class $\mathcal{C}^{\omega}$ of all topological graphs and epimorphisms which are ``approximable'' within $\mathcal{C}$. A concrete description of $\mathcal{C}^{\omega}$ is given in Proposition \ref{P:char}. The rest of the paragraph defines $\mathcal{C}^{\omega}$ in abstract terms. Let $(K_n,f^n_m,\mathbb{N})$ be an inverse system of finite connected graphs with bonding maps $f^n_m\colon K_n\to K_m$ from $\mathcal{C}$. It is easy to check that the inverse limit $K=\varprojlim (K_n,f^{n}_m)\in {\mathcal C}^\omega$ is a topological graph, where $(x_0,x_1,\ldots)$ is $R$-connected with $(y_0,y_1,\ldots)$ in $K$ if for every $n$, $x_n$ is $R$-connected with $y_n$ in $K_n$; see for example the proof of Proposition \ref{P:char}. We collect in ${\mathcal C}^\omega$ all topological graphs $K$ which are inverse limits of sequences with bonding maps from $\mathcal{C}$.
Notice that every finite connected graph is in $\mathcal{C}^{\omega}$. If $A\in {\mathcal C}$ and $K=\varprojlim (K_n,f^{n}_m)\in {\mathcal C}^\omega$, then an epimorphism $h\colon K\to A$ is in ${\mathcal C}^\omega$ if and only if there exists $m$, and a morphism $h'\colon K_m\to A$ in ${\mathcal C}$, such that $h$ is the composition of $h'$ with the canonical projection $f_m$ from $K$ to $K_m$. For two topological graphs $K,L\in {\mathcal C}^\omega$, an epimorphism $h\colon L\to K$ is said to be in ${\mathcal C}^\omega$
if for each $A\in {\mathcal C}$ and each $g\colon K\to A$ in ${\mathcal C}^\omega$, the composition $g\circ h$ is in ${\mathcal C}^\omega$. Finally, an epimorphism $h\colon L \to K$ is an isomorphism if it is injective and both $h,h^{-1}$ are in ${\mathcal C}^\omega$. Notice that $h$ is an isomorphism
between $K=\varprojlim (K_n,f^{n}_m)\in {\mathcal C}^\omega$ and $L=\varprojlim (L_n,g^{n}_m)\in {\mathcal C}^\omega$ if and only if there is a sequence $(h_i)$ of morphisms in $\mathcal{C}$ and
two strictly increasing sequences $(k_i)$ and $(l_i)$ of natural numbers such that for each $i$
\[
h_{2i}\circ h_{2i+1} = f^{k_{i+1}}_{k_{i}}\;\hbox{ and }\; h_{2i+1}\circ h_{2i+2} = g^{l_{i+1}}_{l_{i}}.
\]
We now give a more concrete description of the graphs and morphisms of ${\mathcal C}^\omega$. Let $K$ be a topological graph. We say a subset $X$ of $K$ is {\bf connected} if, for all open $U_1, U_2\subseteq K$ with $X\cap U_1\not=\emptyset \not= X\cap U_2$
and $X\subseteq U_1\cup U_2$, there exist $x_1\in X\cap U_1$ and $x_2\in X\cap U_2$ such that $R^K(x_1, x_2)$. We say that a topological graph $(K,R^{K})$ is {\bf connected} if $K$ is connected as a subset of the graph. We say that it is {\bf locally-connected} if it admits a basis of its topology consisting of connected sets in the above sense. Let $K,L$ be topological graphs and let $f\colon L \to K$ be an epimorphism. We say that $f$ is a {\bf connected epimorphism} if the preimage of each closed connected subset of $K$ is connected. Note that the above notions coincide with the analogous notions introduced for finite graphs.
\begin{proposition}\label{P:char}
$\mathcal{C}^\omega$ is the class of all connected epimorphisms between connected, locally-connected, topological graphs.
\end{proposition}
\begin{proof}
Let $K=\varprojlim (K_n,f^{n}_m)\in {\mathcal C}^\omega$ with $f^n_m\in\mathcal{C}$.
The underlying space of the graph $K$ is $0$-dimensional, compact, and metrizable, since it is a countable inverse limit of discrete finite spaces.
The set $R^{K}$ is closed and contains the diagonal, being an intersection of closed relations containing the diagonal. This proves that $K$ is a topological graph. We see now that $K$ is also connected. Let also $f_n\colon K\to K_n$ be the projection induced by the inverse system. Assume that $U_1, U_2$ are non-empty open subsets of $K$ with $K\subseteq U_1\cup U_2$. Since $K_0$ is connected, we can pick $x_0\in f_0(U_1)$ and $y_0\in f_0(U_2)$ with $R^{K_0}(x_0, y_0)$. Assume by induction that we picked $x_n \in f_n(U_1)$ and $y_n \in f_n(U_2)$, with $R^{K_n}(x_n, y_n)$, so that $f^n_{n-1}(x_n)=x_{n-1}$ and $f^n_{n-1}(y_n)=y_{n-1}$. Using the fact that $f^{-1}_n(\{x_n,y_n\})$ is connected and that $f^{n+1}_n$ is an epimorphism we can pick $x_{n+1}\in f_{n+1}(U_1)$ and $y_{n+1}\in f_{n+1}(U_2)$ with $R^{K_{n+1}}(x_{n+1}, y_{n+1})$, and so that $f^{n+1}_n(x_{n+1})=x_{n}$ and $f^{n+1}_n(y_{n+1})=y_{n}$. Hence, $(x_0,x_1,\ldots)\in U_1$ and $(y_0,y_1,\ldots)\in U_2$ are such that $R^K((x_0,x_1,\ldots), (y_0,y_1,\ldots))$. The exact same argument can be applied to show that every clopen set of $K$ of the form $f^{-1}_n(x)$, where $x\in K_n$, is connected. Hence $K$ is locally-connected as well.
Let now $L=\varprojlim (L_n,g^{n}_m)\in {\mathcal C}^\omega$ as well and let $h\colon L\to K$ be a morphism in $\mathcal{C}^\omega$. By definition, for every $m$ there is an $n$ and a connected epimorphism $h'\colon L_n \to K_m$, so that $h'\circ g_n= f_m\circ h$, where $g_n\colon L\to L_n$ is the canonical projection. Since every connected clopen subset $\Delta$ of $K$ is of the form $f_m^{-1}(X)$ for large enough $m$ and some connected subset $X$ of $K_m$, we have that $h^{-1}(\Delta)= (h' \circ g_n)^{-1}(X)$ is a connected clopen subset of $L$. The rest follows from the fact that every closed connected subset of $K$ or $L$ is the intersection of a decreasing sequence of connected clopen subsets of the respective space.
We turn to the converse statements first for graphs and then for morphisms. Let $K$ be a connected, locally-connected, topological graph. It is not difficult to see that $K$ admits a basis $\mathcal{U}$ of connected clopen sets. Using compactness of $K$ as well as of every element of $\mathcal{U}$, we can find a sequence $\mathcal{U}_n$ of finite covers of $K$ so that $\mathcal{U}_n\subset \mathcal{U}$, $\mathcal{U}_n$ refines $\mathcal{U}_{n-1}$, if $U,V\in\mathcal{U}_n$ are distinct then $U\cap V=\emptyset$, and $\bigcup_n\mathcal{U}_n$ separates points of $K$. One can define a graph structure on $\mathcal{U}_n$ by putting an $R$-edge between $U$ and $V$ if there is $x\in U$ and $y\in V$ with $R^K(x,y)$. It is easy to see now that the refinement map $f^{n}_m\colon \mathcal{U}_n\to \mathcal{U}_m$, sending each element of $\mathcal{U}_n$ to the unique element of $\mathcal{U}_m$ containing it, is a connected epimorphism between finite connected graphs and that $K=\varprojlim (\mathcal{U}_n,f^{n}_m)$.
Let now $h\colon L\to K$ be a connected epimorphism between connected, locally-connected, topological graphs. By the previous paragraph $K=\varprojlim (K_n,f^{n}_m)$ and $L=\varprojlim (L_n,g^{n}_m)$, where $f^n_m,g^n_m\in\mathcal{C}$. It suffices to show that for every $m$ there is $n$, and a map $h'\colon L_n\to K_m$ with $h'\in\mathcal{C}$ and $f_m \circ h= h' \circ g_n$. Fix some $m$ and let $n$ be large enough so that $\{g^{-1}_n(y)\colon y\in L_n\}$ refines $\{(f_m\circ h)^{-1}(x)\colon x\in K_m\}$. Let also $h'\colon L_n\to K_m$ be the unique map that witnesses this refinement. Using that $f_m\circ h$ and $g_n$ are connected epimorphisms it is easy to see that $h'$ is in $\mathcal{C}$ as well.
\end{proof}
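At each finite stage, the passage from the cover $\mathcal{U}_n$ to a graph in the proof above is simply the quotient of a finite graph by a partition into connected pieces. A minimal illustrative sketch of this single step, under the same ad hoc adjacency-dictionary conventions as the sketch following Lemma \ref{L:T1}:
\begin{verbatim}
def partition_quotient(adj, parts):
    # Quotient of a finite reflexive graph by a partition of its vertexes
    # into connected pieces, together with the map sending each vertex to
    # the index of its piece.  When the pieces are connected this map is a
    # connected epimorphism, by Lemma L:T1.
    block = {v: i for i, piece in enumerate(parts) for v in piece}
    adjQ = {i: {block[w] for v in piece for w in adj[v]}
            for i, piece in enumerate(parts)}
    return adjQ, {v: block[v] for v in adj}

# A 4-cycle collapsed onto a single edge by pairing up adjacent vertexes.
adjK = {0: {0, 1, 3}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {0, 2, 3}}
adjQ, q = partition_quotient(adjK, [[0, 1], [2, 3]])
\end{verbatim}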
Next we illustrate the relationship between topological graphs and Peano continua. Recall that a {\bf continuum} is a connected, compact, metrizable space. A {\bf Peano continuum} is a continuum that is locally-connected. A map $\phi\colon Y\to X$ between topological spaces is {\bf connected} if $\phi^{-1}(Z)$ is connected for every closed connected subset $Z$ of $X$.
Here connected and locally-connected refer to the standard topological notion. We also adopt the convention that the empty space is not connected.
We will always accompany ambiguous terminology such as ``connected'' with further specification such as ``graph'' or ``space'' to distinguish between our combinatorial and the standard topological notion of connectedness.
A topological graph $K\in\mathcal{C}^{\omega}$ is a {\bf prespace} if the edge relation $R$ is also transitive. In other words, if $K$ is a collection of cliques. This makes $R$ an equivalence relation and we denote by $[x]$ the equivalence class of $x\in K$. Similarly, for every subset $F$ of $K$ we denote by $[F]$ the set of all $y\in K$ which lie in some equivalence class $[x]$ with $x\in F$. The {\bf topological realization} $|K|$ of a prespace $K$ is defined to be the quotient
\[ K/R^K=\{[x]\colon x\in K\},\]
endowed with the quotient topology. We denote by $\pi$ the quotient map $K\mapsto |K|$.
Since $R^K$ is compact, $|K|$ is compact and metrizable. In fact, we have the following theorem.
\begin{theorem}\label{T: Peano <--> prespaces}
For a topological space $X$ the following are equivalent:
\begin{enumerate}
\item $X$ is a Peano continuum;
\item $X$ is homeomorphic to $|K|$ for some prespace $K\in\mathcal{C}^{\omega}$.
\end{enumerate}
\end{theorem}
We start with a lemma.
\begin{lemma}\label{L: basis}
Let $K=\varprojlim(K_n,g^n_m)\in\mathcal{C}^{\omega}$ be a prespace, let $x\in K$ and let $g_n\colon K\to K_n$ be the natural canonical projections. Consider the following families:
\begin{itemize}
\item $\mathcal{P}^x_1=\{g^{-1}(a)\colon \; g\in\mathcal{C}^{\omega}, \; g([x])=a\}$, where $g$ ranges over all maps $g\colon K\to A$ in $\mathcal{C}^{\omega}$, with $A\in \mathcal{C},$ and $a\in A$;
\item $\mathcal{P}^x_2=\{g^{-1}(Q)\colon \; g\in\mathcal{C}^{\omega}, \; g([x])=Q\}$, where $g$ ranges over all maps $g\colon K\to A$ in $\mathcal{C}^{\omega}$, with $A\in \mathcal{C},$ and $Q\subseteq A$;
\item $\mathcal{P}^x_3=\{g^{-1}(Q)\colon \; g\in\mathcal{C}^{\omega}, \; g([x])=Q\}$, where everything is as in $\mathcal{P}^x_2$, but $g$ ranges only over $\{g_n\colon n\in\mathbb{N}\}$.
\end{itemize}
If $\mathcal{P}^x$ is any of the above families, then $\mathcal{P}^x_{\pi}=\{\pi(P)\colon P\in \mathcal{P}^x\}$ is a neighborhood basis of $[x]$ in $|K|$ consisting of closed sets.
\end{lemma}
\begin{proof}
Let $P\in\mathcal{P}^{x}_i$ and set $U= [P^c]^c \subseteq K$. Notice that $[P^c]$ is the projection of the closed set $R^K\cap (K\times P^c)$ along the compact second coordinate, and therefore $U$ is open. Since $R^K$ is an equivalence relation, the set $[P^c]$ is $R^K$-invariant, and hence so is its complement $U$. Therefore, $\pi(U)$ is an open subset of $|K|$, and it clearly holds that $[x]\in \pi(U)\subseteq \pi(P)$.
Since $\pi\colon K\to |K|$ is continuous and $P$ clopen we have that $\pi(P)$ is a closed neighborhood of $[x]$. Compactness of $K$ implies that any open cover of $K$ can be refined by a partition of the form $\{g^{-1}_n(b)\colon b\in K_n\}$, for large enough $n$. Hence $\mathcal{P}^x_3$ projects through $\pi$ to a neighborhood basis of $[x]$. It is not difficult now to see that $\mathcal{P}^x_1=\mathcal{P}^x_2\supseteq \mathcal{P}^x_3$.
\end{proof}
We turn now back to the proof of Theorem \ref{T: Peano <--> prespaces}.
\begin{proof}[Proof of Theorem \ref{T: Peano <--> prespaces}]
First we show that $(2)\implies (1)$. Let $\mathcal{P}$ be the collection of clopen subsets of $K$ of the form $f^{-1}(a)$, where $f$ ranges over all $f\colon K\to A$ in $\mathcal{C}^{\omega}$ and $a\in A$. By Lemma \ref{L: basis}, $\mathcal{P}$ projects via $\pi$ to a neighborhood basis of $|K|$. It suffices to show that $\pi(P)$ is connected for every $P\in\mathcal{P}$; see Theorem 2.5 of \cite{GM}, for example. Since every $P\in \mathcal{P}$ is itself an element of $\mathcal{C}^{\omega}$, it suffices to show that $|K|$ is a connected space for every prespace $K\in {\mathcal C}^\omega$. But any clopen partition of $|K|$ pulls back through $\pi$ to a clopen partition $\{U_1,U_2\}$ of $K$ which is invariant, that is, $[U_1]=U_1$ and $[U_2]=U_2$. By Proposition \ref{P:char}, $U_1$ is either empty or the whole space.
For $(1)\implies (2)$, let $X$ be a Peano continuum. By Bing's Partition theorem (see \cite{Bi}) there is a sequence $(\mathcal{O}_n)$ of finite collections of disjoint open subsets of $X$, so that for all $n\in\mathbb{N}$ we have that:
\begin{enumerate}
\item $\bigcup\mathcal{O}_n$ is dense in $X$;
\item $O$ is connected, for all $O\in\mathcal{O}_n$;
\item $\mathcal{O}_{n+1}$ refines $\mathcal{O}_{n}$;
\item any open cover of $X$ is refined by $\mathcal{O}_{m}$, for large enough $m$.
\end{enumerate}
We turn each finite set $\mathcal{O}_n$ into a graph by putting an edge between $O$ and $O'$ if and only if $\overline{O}\cap \overline{O'}\neq\emptyset$. Let $f^n_m\colon \mathcal{O}_n\to\mathcal{O}_m$ be the uniquely defined refinement map. Since every $O\in\bigcup_{n} \mathcal{O}_n$ is connected, it follows that $f^n_m\in\mathcal{C}$. Let $K=\varprojlim (\mathcal{O}_n,f^{n}_m)$ and let $\rho\colon K\to X$ be the map sending each point
$x=(O_1,O_2,\ldots)\in K$ to the unique---by property $(4)$ above---point $\rho(x)$ with $\{\rho(x)\}=\bigcap_{n}\overline{O_n}$. It is easy to see that $R^K$ is the pullback of equality on $X$ under $\rho$, and hence, $K$ is a prespace with $X\simeq |K|$.
\end{proof}
\section{The Menger curve}\label{S: Menger curve}
The next theorem is proved using the methods of \cite{IS}. For completeness we summarize the construction of ${\mathbb F}$ below.
\begin{theorem}\label{T: characterization}
If $\mathcal F$ is a projective Fra{\"i}ss{\'e} family, then there exists a topological structure
${\mathbb F}\in {\mathcal F}^\omega$, unique up to isomorphism, such that:
\begin{enumerate}
\item for each $A\in {\mathcal F}$, there exists a morphism in ${\mathcal F}^\omega$ from $\mathbb F$ to $A$;
\item for $A,B\in {\mathcal F}$ and morphisms $f\colon {\mathbb F}\to A$ and $g\colon B\to A$ in ${\mathcal F}^\omega$
there exists a morphism $h\colon {\mathbb F}\to B$ in ${\mathcal F}^\omega$ such that $f=g\circ h$.
\end{enumerate}
\end{theorem}
We say that $\mathbb F$ is the {\bf projective Fra{\"i}ss{\'e} limit of ${\mathcal F}$}. The second property in the above statement is called the {\bf projective extension property}.
We briefly sketch here the construction of $\mathbb F$ out of ${\mathcal F}$.
For more details, see \cite{IS}.
\begin{construction}
We build $\mathbb F$ as an inverse limit of a \emph{generic sequence} $(L_n,t^n_m)$ of morphisms $t^n_m\in\mathcal{F}$. By property (1) in the definition of a Fra{\"i}ss{\'e}{} class we can make two countable lists $(A_n\colon n\geq 0)$, $(e_n\colon C_n\to B_n\colon n\geq 1)$ containing all isomorphism types of structures and morphisms of $\mathcal{F}$. Moreover, we make sure that every morphism type contained in $\mathcal{F}$ appears infinitely often in $(e_n)$ above. Let $L_0=A_0$. Assume that $L_n$ has been defined together with all maps $t^n_i$, for all $i<n$. Using property (3) in the definition of a Fra{\"i}ss{\'e}{} class we get $H\in\mathcal{F}$ together with maps $f\colon H\to L_n$, $g\colon H\to A_{n+1}$. Notice now that since $H$ is finite, there is a finite list $s_1,\ldots,s_k$ of morphism types from $H$ to $B_{n+1}$ in $\mathcal{F}$. Using $k$-many times projective amalgamation we get $f'\colon H'\to H$ and $d_j\colon H'\to C_{n+1}$ in $\mathcal{F}$ with $s_j\circ f'=e_{n+1}\circ d_j$ for all $j\leq k$. Set $L_{n+1}=H'$, $t^{n+1}_n=f\circ f'$, and $t^{n+1}_i = t^{n}_i \circ f \circ f'$ for $i<n$. It is not difficult to see that the sequence $(L_n,t^n_m)$, having been ``saturated'' in this way with respect to $(A_n)$ and $(e_n)$, endows $\mathbb{F}=\varprojlim(L_n,t^n_m)$ with properties (1) and (2) of Theorem \ref{T: characterization} above.
\end{construction}
As a consequence of Theorems~\ref{T:1} and~\ref{T: characterization}, we can now consider the projective Fra{\"i}ss{\'e} limit ${\mathbb M}$ of $\mathcal{C}$. We call ${\mathbb M}$ the {\bf Menger prespace}.
\begin{theorem}\label{T: Menger is Menger}
The Menger prespace $\mathbb{M}$ is a prespace containing cliques of size at most $2$. Its topological realization $|\mathbb{M}|$ is the Menger curve.
\end{theorem}
\begin{proof}
The Menger curve is the unique $1$-dimensional Peano continuum
with the disjoint arcs property (\cite{Be}, see also \cite{An,MOT}). Recall that a space $X$ has the disjoint arcs property if every continuous map $\{0,1\}\times [0,1] \to X$ can be uniformly approximated by maps which send $\{0\}\times[0,1]$ and $\{1\}\times[0,1]$ to disjoint sets.
By Theorem \ref{T: Peano <--> prespaces}, we know that $|{\mathbb M}|$ is a Peano continuum. To show that $|\mathbb{M}|$ is $1$-dimensional we find for every open cover a refinement whose nerve is one-dimensional. Let $\mathcal{V}$ be any open cover of $|\mathbb{M}|$ and let $f\colon \mathbb{M}\to A$ be any $f\in\mathcal{C}^{\omega}$ with $A\in\mathcal{C}$, so that $\mathcal{V}_f=\{\pi(f^{-1}(a))\colon a\in A\}$ refines $\mathcal{V}$. Let $g\colon B \to A$ be in $\mathcal{C}$, so that $B$ has no cliques of size $3$. For example, one can barycentrically subdivide $A$ and map each new vertex to either of its two neighbors. The projective extension property of $\mathbb{M}$ provides us with a map $h\colon\mathbb{M}\to B$ in $\mathcal{C}^{\omega}$ such that $g\circ h=f$, and hence with a further refinement $\mathcal{V}_{h}=\{\pi(h^{-1}(b))\colon b\in B\}$ of $\mathcal{V}_f$. Notice that since $B$ has no cliques of size $3$, the nerve of $\mathcal{V}_{h}$ is isomorphic to $B$. Since $|\mathbb{M}|$ is a regular topological space and $\mathcal{V}_{h}$ is finite, we can find for every $W\in\mathcal{V}_{h}$ an open $U_W\supseteq W$, so that $\{U_W\colon W\in\mathcal{V}_{h}\}$ has the same nerve as $\mathcal{V}_{h}$ and still refines $\mathcal{V}$.
For the disjoint arcs property, let $\gamma_0,\gamma_1\colon [0,1]\to |\mathbb{M}|$ be two maps and let $\mathcal{V}$ be an open cover of $|\mathbb{M}|$. We will find disjoint $\gamma'_0,\gamma'_1\colon [0,1]\to |\mathbb{M}|$ which are $\mathcal{V}$-close to $\gamma_0$ and $\gamma_1$, that is, for every $x\in[0,1]$, there is $V\in\mathcal{V}$, so that both $\gamma_i(x),\gamma'_i(x)$ lie in $V$. As in the previous paragraph, let $\mathcal{V}_f=\{\pi(f^{-1}(a))\colon a\in A\}$ be a refinement of $\mathcal{V}$ and consider an open cover $\mathcal{U}_f=\{U_a\colon a\in A\}$ refining $\mathcal{V}$, with $U_a\supseteq\pi(f^{-1}(a))$ and having the same nerve as $\mathcal{V}_f$. Notice that for every $i\in\{0,1\}$ there is a finite cover $\mathcal{V}^i$ of $[0,1]$ with connected open intervals, and an assignment $\alpha_i\colon\mathcal{V}^i\to A$, so that $\gamma_i(V)\subseteq U_a$, for every $V\in\mathcal{V}^i$ with $\alpha_i(V)=a$. Let $J$ be the unique graph on domain $\{0,\frac{1}{2},1\}$ so that $R^J(j,j')$ if and only if $|j-j'|\leq \frac{1}{2}$, and notice that the canonical projection $\rho\colon J\times A\to A$ is in $\mathcal{C}$. Hence by the projective extension property of $\mathbb{M}$ we have a connected epimorphism $h\colon\mathbb{M}\to J\times A$ so that $f=\rho\circ h$. Using the fact that $\pi(h^{-1}(X))$ is path-connected for every connected subset $X$ of $J\times A$, it is easy now to construct
paths $\gamma'_0$ and $\gamma'_1$ which are $\mathcal{V}$-close to the original paths and such that, moreover, $\gamma'_i([0,1])\subset \pi( h^{-1}(\{i\}\times A))$. Since no clique of $J\times A$ meets both $\{0\}\times A$ and $\{1\}\times A$, the sets $\pi(h^{-1}(\{0\}\times A))$ and $\pi(h^{-1}(\{1\}\times A))$ are disjoint, and hence so are the paths $\gamma'_0$ and $\gamma'_1$.
\end{proof}
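The subdivision trick used above to eliminate cliques of size $3$ is likewise purely combinatorial. The sketch below is only an illustration (same ad hoc adjacency-dictionary conventions as before, integer vertex labels assumed): it produces the subdivided graph together with a map back to $A$ sending each new vertex to one of its two neighbors, and this map can be verified to be a connected epimorphism with the check from the sketch following Lemma \ref{L:T1}.
\begin{verbatim}
def subdivide(adjA):
    # Replace every non-trivial edge {u, v} of A by a path u - m - v,
    # where m = ('m', u, v) is a new midpoint vertex.
    adjB = {v: {v} for v in adjA}
    for u in adjA:
        for v in adjA[u]:
            if u < v:
                m = ('m', u, v)
                adjB[m] = {m, u, v}
                adjB[u].add(m)
                adjB[v].add(m)
    return adjB

def collapse(adjB):
    # Send each midpoint ('m', u, v) to its neighbor u; old vertexes stay put.
    return {w: (w[1] if isinstance(w, tuple) else w) for w in adjB}

adjA = {0: {0, 1, 2}, 1: {0, 1, 2}, 2: {0, 1, 2}}   # a triangle: one 3-clique
adjB = subdivide(adjA)                              # a 6-cycle: no 3-cliques
g = collapse(adjB)
\end{verbatim}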
\section{The combinatorics of homogeneity} \label{S: Homogeneity}
In Theorem~\ref{T:2} below, we prove an injective homogeneity result for $\mathbb{M}$ analogous to the main result for $|\mathbb{M}|$ in \cite{MOT}.
In Corollary \ref{C:hmg}, we recover Anderson's homogeneity result for the Menger curve $|\mathbb{M}|$. We note that, as in Section \ref{S:ApproximateProjectiveHomogeneity}, an appropriate version of {\em projective} homogeneity can always be obtained naturally and without much difficulty for any continuum which has been presented as a topological realization of some projective Fra{\"i}ss{\'e} limit; see \cite{BK} and \cite{IS} for example.
Here we provide the first example of an {\em injective} homogeneity property that is obtained using projective Fra{\"i}ss{\'e} theoretic methods.
Let $K$ be a closed subgraph of $\mathbb{M}$. We say that $K$ is {\bf locally non-separating} if for each clopen connected subgraph $W$ of $\mathbb{M}$, the set $W\setminus K$ is connected.
\begin{theorem}\label{T:2}
If $K=[K]$ and $L=[L]$ are locally non-separating subgraphs of ${\mathbb M}$, then each isomorphism from $K$ to $L$ extends to an automorphism of $\mathbb{M}$.
\end{theorem}
For the proof of Theorem~\ref{T:2} we run a standard ``back and forth'' argument based on a lifting property for inclusions $K\hookrightarrow\mathbb{M}$ of locally non-separating sets; see page \pageref{Lifting property}.
This lifting property strengthens the projective extension property of $\mathbb{M}$.
Viewed from an abstract homotopy theoretic standpoint, the lifting property
suggests that maps in $\mathcal{C}$ relate to the above inclusion $K\hookrightarrow\mathbb{M}$ in the same way that trivial fibrations relate to cofibrations within a model category. The analogy with model categories is also reflected in the way we prove the lifting property: we define a combinatorial analogue of the \emph{mapping cylinder construction} for homomorphisms between graphs and we show that for any $f\colon\mathbb{M}\to A$ in $\mathcal{C}^{\omega}$, the induced map from $K$ to $A$ factors through a map of the form $r\circ i$, where $i$ is an inclusion and $r$ a mapping cylinder retraction. Before we describe the mapping cylinder construction we start with two general lemmas. The next result is probably known, but we could not find a reference for it.
\begin{lemma}\label{L:0} A closed subset $K$ of ${\mathbb M}$ is locally non-separating if and only if for each clopen connected set $W\supseteq K$ and
each clopen set $V$ with $K\subseteq V\subseteq W$ there exists a clopen set $U$ such that $K\subseteq U\subseteq V$ and $W\setminus U$ is
connected.
\end{lemma}
\begin{proof} Only the direction from left to right needs a proof.
Fix a connected clopen set $W$. Since $W\setminus K$ is open, we have $W\setminus K = \bigcup_{k\in {\mathbb N}} V_k$ for
some $V_k$ clopen and connected. Let $k(0)=0$ and define $U_0=V_0$. Given $U_n$, let $k(n+1)$ be the smallest natural number such that
$V_{k(n+1)}\not\subseteq U_n$ and $U_n\cup V_{k(n+1)}$ is connected, if such $k(n+1)$ exists. Otherwise, let $k(n+1)=k(n)$. Let $U_{n+1} = U_n\cup V_{k(n+1)}$.
Since $U_n\subseteq U_{n+1}$ for each $n$ and $W\setminus V$ is compact, once (\ref{E:ooo}) below is established we get $N$ with $W\setminus V\subseteq U_N$, and then $U=W\setminus U_N$ is a clopen set with $K\subseteq U\subseteq V$ for which $W\setminus U=U_N$ is connected. It therefore suffices to show that
\begin{equation}\label{E:ooo}
W\setminus K = \bigcup_{n\in {\mathbb N}}U_n.
\end{equation}
This follows as in the last part of the proof of Lemma \ref{L:ApproximateHomogeneityProperty}: assume that
$x\in W\setminus K$ and $x\not\in \bigcup_{n\in {\mathbb N}} U_n$; let $k(x)$ be such that $x\in V_{k(x)}$; check that
$[V_{k(x)}]\cap \bigcup_{n\in {\mathbb N}} U_n =\emptyset$; and derive a contradiction from the fact that $W\setminus K$ is connected.
\end{proof}
\begin{lemma}\label{L: separation}
If $K\in\mathcal{C}^{\omega}$ is a prespace, $Z\subseteq V \subseteq K$, $Z=[Z]$, and $V$ is open, then there is $W\subseteq K$ open with $Z\subseteq W$ and $[W]\subseteq V$. If moreover $Z$ is closed, then $W$ can be additionally chosen to be clopen.
\end{lemma}
\begin{proof}
It suffices to show that for every $z\in Z$ we can find $W_z$ clopen with $z\in W_z$ and $[W_z]\subseteq V$. If such $W_z$ does not exist then one can find sequences $(x_n)$ and $(y_n)$ so that $y_n\in[x_n]$, $x_n$ converging to $z$, and $y_n\in V^c$. By compactness of $V^c$ we can assume that $(y_n)$ converges to some $y\in V^c$. But since $R^K$ is closed this implies that $y\in [z]$, contradicting that $Z=[Z]\subseteq V$.
\end{proof}
Let $X$ be any finite (reflexive) graph and let $\alpha\colon X\to A$ be a graph homomorphism with $A\in \mathcal{C}$. We assume that $\mathrm{dom}(A)\cap\mathrm{dom}(X)=\emptyset$.
The {\bf mapping cylinder} $C_{\alpha}$ of $\alpha$ is the unique graph on domain $\mathrm{dom}(A)\cup \mathrm{dom}(X)$ with:
\begin{enumerate}
\item $C_{\alpha}\upharpoonright \mathrm{dom}(A)=A$ and $C_{\alpha}\upharpoonright \mathrm{dom}(X)=X$;
\item for each $x\in X$ and $a\in A$, there is an edge in $C_{\alpha}$ between $x$ and $a$ if and only if $a=\alpha(x')$ for some $x'\in X$ with $R^X(x,x')$.
\end{enumerate}
The mapping cylinder $C_{\alpha}$ comes together with two natural graph inclusions $A,X\hookrightarrow C_{\alpha}$ and a {\bf canonical retraction} $r_{\alpha}\colon C_{\alpha}\to A$ given by: $r_{\alpha}(x)=\alpha(x)$, if $x\in X$; and $r_{\alpha}(x)=x$, otherwise. It is easy to check that both $C_{\alpha},r_{\alpha}$ are in $\mathcal{C}$.
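Since $C_{\alpha}$ and $r_{\alpha}$ are given by a completely explicit finite recipe, we include a short illustrative sketch (same ad hoc adjacency-dictionary conventions as in the earlier sketches; as in the text, $\mathrm{dom}(A)$ and $\mathrm{dom}(X)$ are assumed disjoint).
\begin{verbatim}
def mapping_cylinder(adjX, adjA, alpha):
    # The mapping cylinder C_alpha of alpha: X -> A together with the
    # canonical retraction r_alpha: C_alpha -> A.
    adjC = {v: set(nb) for v, nb in adjA.items()}
    adjC.update({x: set(nb) for x, nb in adjX.items()})
    for x in adjX:
        for xp in adjX[x]:          # x' ranges over the R^X-neighbors of x
            a = alpha[xp]
            adjC[x].add(a)          # edge between x and a = alpha(x')
            adjC[a].add(x)
    r = {v: (alpha[v] if v in adjX else v) for v in adjC}
    return adjC, r

# Example: X is a single edge {p, q} collapsed by alpha onto the vertex 0
# of the edge A = {0, 1}.
adjA = {0: {0, 1}, 1: {0, 1}}
adjX = {'p': {'p', 'q'}, 'q': {'p', 'q'}}
alpha = {'p': 0, 'q': 0}
adjC, r = mapping_cylinder(adjX, adjA, alpha)
\end{verbatim}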
\begin{lemma}\label{L: Mapping cylinder}
Let $K=[K]$ be a locally non-separating subgraph of $\mathbb{M}$; let $X$ be a finite graph; let $\alpha\colon X\to A$ be a graph homomorphism, with $A\in\mathcal{C}$. For every $f\colon\mathbb{M}\to A$ in $\mathcal{C}^{\omega}$ and every graph homomorphism $q\colon K\to X$ with $\alpha\circ q=f\upharpoonright K$, there is $\tilde{f}\colon\mathbb{M}\to C_{\alpha}$ in $\mathcal{C}^{\omega}$, with $r_{\alpha}\circ \tilde{f}=f$ and $\tilde{f}\upharpoonright K= q$.
\begin{center}
\begin{tikzcd}[column sep=small]
K \arrow[dd,hook']\arrow[rrrr, "q"] & & & & X \arrow[dd,"\alpha"] \arrow[dll, hook'] \\
& & C_{\alpha}\arrow[drr, "r_{\alpha}"] & & \\
\mathbb{M} \arrow[rrrr, "f"]\arrow[urr, "\tilde{f}",dotted] & & & & A
\end{tikzcd}
\end{center}
\end{lemma}
\begin{proof}
Let $K$, $X$, $A$, $\alpha$, $f$, $q$ be the data provided in the statement of Lemma \ref{L: Mapping cylinder} and set $K_x=q^{-1}(x)$, for every $x\in X$.
\begin{claim}
There is $g\colon B\to A$ in $\mathcal{C}$ and $h\colon \mathbb{M} \to B$ in $\mathcal{C}^{\omega}$, with $g\circ h=f$, together with collections $\{D_x\colon x\in X\}$ and $\{D_a\colon a\in A\}$ of subgraphs of $B$, so that if we set $B_a=g^{-1}(a)$ for all $a\in A$, we have:
\begin{enumerate}
\item $\{\mathrm{dom}(D_a)\}\cup\{\mathrm{dom}(D_x)\colon x\in X, \; \alpha(x)=a \}$ is a partition of $\mathrm{dom}(B_a)$;
\item the image of $K_x$ under $h$ is contained in $D_x$;
\item $R^{X}(x,x')$ if and only if there is $b\in D_x$ and $b'\in D_{x'}$ with $R^B(b,b')$;
\item $R^{X}(x,x')$ for some $x'$ with $\alpha(x')=a$ if and only if there is $b\in D_x$ and $b'\in B_a$ with $R^B(b,b')$;
\item for every connected component $C$ of $D_x$ there is $c\in C$ and $b\in D_{\alpha(x)}$ with $R^B(b,c)$;
\item if $\widetilde{D}$ is the subgraph of $B$ on domain $\bigcup_{a\in A}\mathrm{dom}(D_a)$, then $g\upharpoonright\widetilde{D}\colon \widetilde{D}\to A$ is in $\mathcal{C}$ and as a consequence $D_a$ is connected.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of Claim.]
Since $\{K_x\colon x\in X\}$ is a finite collection of pairwise disjoint closed subsets of a $0$-dimensional, metrizable topological space we can find for each $x$ a clopen subset $W^0_x$ of $\mathbb{M}$ containing $K_x$ so that $W^{0}_x\cap W^{0}_{x'}\neq\emptyset$ if and only if $x=x'$. By Lemma \ref{L: separation} we can find for each $x$ a clopen subset $W^1_x$ of $\mathbb{M}$ containing $[K_x]$ so that $[W^1_x]\cap[W^1_{x'}]\neq\emptyset$ if and only if $R^{X}(x,x')$ and $[W^1_x]\cap f^{-1}(a)\neq\emptyset$ if and only if there is $x'\in X$ with $R^{X}(x,x')$ and $\alpha(x')=a$. Finally, since $K$ is locally non-separating, we can choose for every---possibly trivial---edge $e=\{a,a'\}$ of $A$, a clopen subset $W_e$ of $\mathbb{M}\setminus K$ so that $f(W_e)=e$.
Let now $h'\colon \mathbb{M}\to B'$ be any map in $\mathcal{C}^{\omega}$ which refines $f$ as well as the partition generated by all the sets $W^{0}_x$, $W^{1}_x$, $W_e$ collected above. Let also $g'\colon B'\to A$ be the unique map with $g'\circ h'= f$, set $B'_a=(g')^{-1}(a)$, let $D'_x$ be the subgraph of $B'_{\alpha(x)}$ generated on domain $h'(K_x)$, and let $D'_a$ be the subgraph of $B'_a$ generated on $\mathrm{dom}(B'_a)\setminus(\bigcup_x \mathrm{dom}(D'_x))$. It is easy to see that $g'\in \mathcal{C}$ and the resulting $h',g',\{D'_a\}, \{D'_x\}$ satisfy properties (1), (2), (3), (4) above. Moreover if $\widetilde{D}'$ is the subgraph of $B'$ on domain $\bigcup_{a\in A}\mathrm{dom}(D'_a)$, then $g'\upharpoonright\widetilde{D}'\colon \widetilde{D}'\to A$ is an epimorphism.
Since locally non-separating sets are nowhere dense, for every $a\in A$ we can choose a clopen subset $V'_a$ of $f^{-1}(a)\setminus K$ so that $h'(V'_a)$ intersects every connected component of every graph $D'_x$ with $a=\alpha(x)$. For every $a\in A$, set $W_a=f^{-1}(a)$, $K_a=W_a\cap K$, $V_a=\big((h')^{-1}(\bigcup_{x\colon \alpha(x)=a}D'_x)\big) \setminus V'_a$. By Lemma \ref{L:0} we get a clopen subset $U_a$ of $\mathbb{M}$ with $K_a\subseteq U_a\subseteq V_a$ so that $W_a\setminus U_a$ is connected. As above we can find $h\colon \mathbb{M}\to B$ in $\mathcal{C}^{\omega}$ and $g''\colon B\to B'$ in $\mathcal{C}$ with $g''\circ h= h'$, and so that $h$ refines the partition generated by $\{U_a\colon a\in A\}$. Set $g=g'\circ g''$ and $B_a=g^{-1}(a)$. Let also $D_a$ be the subgraph of $B_a$ on domain $h(W_a\setminus U_a)$ and let $D_x$ be the subgraph of $B_a$ on domain $(g'')^{-1}(D'_x)\cap h(U_a)$. Notice that all properties we established for $g'$ are preserved under refinements and that $g$ additionally satisfies properties (5) and (6).
\end{proof}
Given the configuration of the above claim, let $E_x$ be the subgraph of $B$ generated on $\mathrm{dom}(D_x)\cup \mathrm{dom}(D_{\alpha(x)})$. Properties (5) and (6) above imply that $E_x$ is connected. Let $E'_x$ be an isomorphic copy of $E_x$ and let $i_x\colon E'_x\to B$ be an embedding witnessing this isomorphism. Let $G$ be the mapping cylinder with respect to the map $i\colon \sqcup_{x\in X} E'_x\to B$, where $i=\sqcup_{x\in X}i_x$, and let $r\colon G\to B$ be the associated retraction. By the projective extension property we get $f_0\colon \mathbb{M}\to G$ with $(g \circ r) \circ f_0=f$. By properties (3), (4), (5), (6) above, the map $f_1\colon G\to C_{\alpha}$ that maps $E'_x\cup D_x$ to $x$ and $D_a$ to $a$ is well defined and, since $f_1^{-1}(x)$ is connected for all $x\in X$, it is in $\mathcal{C}$. It is also immediate that $r_{\alpha}\circ f_1= g\circ r $. To finish the proof we set $\tilde{f}=f_1\circ f_0$. As a consequence we have $ r_{\alpha}\circ\tilde{f}=r_{\alpha}\circ f_1\circ f_0= g\circ r \circ f_0=f$ and $\tilde{f}(K_x)=f_1\circ f_0(K_x)\subseteq f_1(\mathrm{dom}(E'_x)\cup \mathrm{dom}(D_x))=\{x\}$.
\end{proof}
We can turn now to the proof of the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{T:2}]
The proof of Theorem \ref{T:2} is a standard ``back and forth'' argument based on the following lifting property. Notice that the content of the lower commuting triangle is our usual projective extension property.
\begin{property}\label{Lifting property}
Let $K=[K]$ be a locally non-separating subgraph of $\mathbb{M}$. Let also $g\colon B\to A$ in $\mathcal{C}$ and $f\colon \mathbb{M}\to A$ in $\mathcal{C}^{\omega}$. Then for every graph homomorphism $p\colon K\to B$, with $g\circ p=f\upharpoonright K$, there is $h\colon\mathbb{M}\to B$ in $\mathcal{C}^{\omega}$ with $g\circ h=f$ and $h\upharpoonright K = p$.
\begin{center}
\begin{tikzcd}[column sep=large, row sep=large]
K \arrow[r, "p"] \arrow[d, hook] & B \arrow[d, "g" ]\\
\mathbb{M} \arrow[ur, "h", dotted ,swap] \arrow[r, "f",swap] & A
\end{tikzcd}
\end{center}
\end{property}
We are left to show that the above lifting property holds. Notice first that if $g\colon B\to A$ is in $\mathcal{C}$ and $\beta\colon X\to B$, $\alpha\colon X\to A$, are graph homomorphisms with $g\circ\beta=\alpha$, then there is a unique extension $g^{*}\colon C_{\beta}\to C_{\alpha}$ of $g$ which makes the right diagram below commute. It is easy to check that $g^{*}$ is in $\mathcal{C}$.
\begin{center}
\begin{tikzcd}[column sep=small]
& & X \arrow[dll, "\beta",swap] \arrow[drr, "\alpha"] & & \\
B \arrow[rrrr, "g"] & & & & A
\end{tikzcd}
\quad\quad $\rightsquigarrow$ \quad\quad
\begin{tikzcd}[column sep=small]
& & X \arrow[dll, hook'] \arrow[drr, hook] & & \\
C_{\beta} \arrow[d, "r_{\beta}",swap] \arrow[rrrr, "g^*"] & & & & C_{\alpha} \arrow[d,"r_{\alpha}",swap] \\
B \arrow[rrrr, "g"] & & & & A
\end{tikzcd}
\end{center}
Let now $f,g,p$ be as in the statement of the Lifting Property for $\mathbb{M}$ and let $X$ be a graph isomorphic to the graph that is the image of $K$ in $B$ under $p$. Let also $\beta\colon X\to B$ be this isomorphism and let $q\colon K\to X$ be the unique map with $\beta\circ q=p$. Notice that $\beta$ is not an embedding---in general---but it is always an injective homomorphism. Set $\alpha\colon X\to A$ be the homomorphism $g\circ \beta$.
By Lemma \ref{L: Mapping cylinder} we have $\tilde{f}\colon\mathbb{M}\to C_{\alpha}$ in $\mathcal{C}^{\omega}$, with $r_{\alpha}\circ \tilde{f}=f$ and $\tilde{f}\upharpoonright K=q$. Let $g^{*}\colon C_{\beta}\to C_{\alpha}$ be the extension of $g$ to $C_{\beta}$ described above. By the projective extension property of $\mathbb{M}$ we get a map $\tilde{h}\colon\mathbb{M}\to C_{\beta}$ with $g^{*}\circ\tilde{h}=\tilde{f}$. It follows that the map $h\colon\mathbb{M}\to B$ defined by $r_{\beta}\circ\tilde{h}$ is the desired map. To see this notice that $f=r_{\alpha}\circ \tilde{f}=r_{\alpha}\circ g^{*}\circ \tilde{h}=g\circ r_{\beta}\circ\tilde{h}=g\circ h$. By a similar diagram chasing, using that $g^{*}\upharpoonright X=\mathrm{id}_X$ and $(g^{*})^{-1}(X)=X$ we get that $p=h\upharpoonright K$.
\end{proof}
We finish this section by showing how one can derive Anderson's homogeneity for the Menger curve \cite{An} from Theorem~\ref{T:2}.
\begin{corollary}[Anderson~\cite{An}]\label{C:hmg}
Any bijection between finite subsets of $|\mathbb{M}|$ extends to a homeomorphism of $|\mathbb{M}|$.
\end{corollary}
\begin{proof}
Let $\phi_0\colon F\to F'$ be a bijection between finite subsets of $|\mathbb{M}|$. If $\phi_0$ lifts through $\pi\colon \mathbb{M}\to |\mathbb{M}|$ to a bijection $\phi^{\pi}_0$ between $\cup F$ and $\cup F'$ then by Theorem \ref{T:2}, $\phi^{\pi}_0$ extends to a global automorphism $\phi^{\pi}\colon\mathbb{M}\to\mathbb{M}$, and $\phi=\pi\circ\phi^{\pi}\circ\pi^{-1}$ is the required homeomorphism extending $\phi_0$.
Here we used that finite subsets of $\mathbb{M}$ are locally non-separating,
which easily follows from Lemma \ref{L:0} and the projective extension property of $\mathbb{M}$. Hence the proof reduces to the following claim.
\begin{claim}
For every finite subset $F$ of $|\mathbb{M}|$, there exists a homeomorphism $\psi\colon|\mathbb{M}|\to|\mathbb{M}|$ so that every element $[y]$ in $\psi(F)=\{\psi([x])\colon [x]\in F\}$ is a singleton (as a subset of $\mathbb{M}$).
\end{claim}
\noindent{\em Proof of Claim.}
Let $E$ be the equivalence relation on $\mathbb{M}$ defined by $x E x'$ if either $x=x'$; or if $x'\in [x]$ and $[x]\in F$. Let $\mathbb{M}'=\mathbb{M}/E$, let $\rho\colon \mathbb{M}\to \mathbb{M}'$ be the quotient map, and let $R'$ be the equivalence relation on $\mathbb{M}'$, that is the push-forward of $R$ under $\rho$. Since $\rho$ is $R$-invariant, $R'$ is well defined. Notice that $\rho$ is continuous since $E$ is compact and hence the induced map $|\rho|\colon |\mathbb{M}|\to \mathbb{M}'/R'$ on the quotients is a homeomorphism. It suffices to show that there exists an isomorphism $\phi\colon\mathbb{M}\to \mathbb{M}'$ in $\mathcal{C}^{\omega}$. If so, the map $ \pi\circ\phi^{-1}\circ (\pi')^{-1} \circ |\rho|$, where $\pi'\colon\mathbb{M}'\to \mathbb{M}'/R'$ is the quotient map, is the desired homeomorphism $\psi$. Hence, by Theorem \ref{T: characterization}, we have to check that $\mathbb{M}'$ (with the relation $R'$) is in $\mathcal{C}^{\omega}$ and that it satisfies properties (1) and (2) therein.
To see that $\mathbb{M}'$ is in $\mathcal{C}^{\omega}$ notice first that the union of any two $R$-connected clopen subsets of $\mathbb{M}$ is clopen and $R$-connected. Since $F$ is finite one can easily generate a basis for the topology of $\mathbb{M}'$ consisting of clopen $R'$-connected sets. The rest follows from Proposition~\ref{P:char}.
We now check that $\mathbb{M}'$ satisfies property (1) from Theorem \ref{T: characterization}. Let $A\in \mathcal{C}$ and let $n$ be a number strictly larger than the cardinality of $F$. Consider the graph $\delta^nA$ which is attained by subdividing every edge of $A$ $n$-times, that is, each non-trivial edge $(v,v')$ of $A$ is replaced by a chain $(v,v_1), (v_1,v_2), \ldots, (v_n,v')$ of $n+1$ edges. Notice that for every map $\gamma$ which assigns to each non-trivial edge $(v,v')$ of $A$ a number $\gamma((v,v'))\in\{0,\ldots, n\}$ we can define a map $d_{\gamma}\colon \delta^nA\to A $ collapsing every vertex $v_m$ with $m>\gamma((v,v'))$ to $v'$ and every vertex $v_m$ with $m\leq\gamma((v,v'))$ to $v$. Let $f\colon \mathbb{M}\to \delta^nA $ be any $\mathcal{C}^{\omega}$ map. By the choice of $n$, there is an assignment $\gamma$ as above so that for every edge $(v,v')$ there is no $[x]\in F$ with $f([x])=(v_k,v_{k+1})$, where $k=\gamma((v,v'))$. The map $g\colon \mathbb{M}\to A$ with $g=d_{\gamma}\circ f$ is easily shown to push forward through $\rho$ to a $\mathcal{C}^{\omega}$ map $g^{\rho}\colon \mathbb{M}'\to A$.
Property (2) from Theorem \ref{T: characterization} is proved for $\mathbb{M}'$ in a similar fashion.
Let $f\colon \mathbb{M}'\to A$ in $\mathcal{C}^{\omega}$ and $g\colon B\to A$ in $\mathcal{C}$. Notice that $f\circ \rho \colon \mathbb{M}\to A$ is in $\mathcal{C}^{\omega}$. We can now construct the desired map $h\colon \mathbb{M}'\to B$ by relativizing the argument of the previous paragraph with respect to the constraints $f$ and $g$.
The claim and, therefore, also the corollary follow. \end{proof}
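The subdivision-and-collapse device used in the proof of the claim above is again finite and combinatorial. The following illustrative sketch (ad hoc adjacency-dictionary conventions as before, integer vertex labels assumed) builds $\delta^n A$ together with, for a choice of cut positions $\gamma$, the collapse $d_{\gamma}$.
\begin{verbatim}
def n_subdivide(adjA, n):
    # Replace each non-trivial edge {u, v} (with u < v) of A by a chain
    # u - (u,v,1) - ... - (u,v,n) - v of n + 1 edges.
    adjB = {v: {v} for v in adjA}
    chains = {}
    for u in adjA:
        for v in adjA[u]:
            if u < v:
                path = [u] + [(u, v, i) for i in range(1, n + 1)] + [v]
                chains[(u, v)] = path
                for w in path:
                    adjB.setdefault(w, {w})
                for w, wp in zip(path, path[1:]):
                    adjB[w].add(wp)
                    adjB[wp].add(w)
    return adjB, chains

def d_gamma(adjA, chains, gamma):
    # Vertices of the chain over {u, v} up to position gamma[(u, v)] are
    # collapsed to u, the remaining ones to v.
    d = {v: v for v in adjA}
    for (u, v), path in chains.items():
        for i, w in enumerate(path):
            d[w] = u if i <= gamma[(u, v)] else v
    return d

# Example: a single edge {0, 1}, subdivided 3 times and cut at position 1.
adjB, chains = n_subdivide({0: {0, 1}, 1: {0, 1}}, 3)
d = d_gamma({0: {0, 1}, 1: {0, 1}}, chains, {(0, 1): 1})
\end{verbatim}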
\section{The combinatorics of universality} \label{S: Universality}
In Theorem \ref{T:preMengerProjUniversality} we prove for $\mathbb{M}$ a combinatorial analogue of a strengthened version of Anderson--Wilson's theorem.
We use this to establish a variant of Anderson--Wilson's theorem for the Menger curve $|\mathbb{M}|$; see Corollary \ref{C:MengerProjUniversality}.
Notice that the following weak version of Corollary \ref{C:MengerProjUniversality} already follows from the projective extension property of $\mathbb{M}$ and
Theorem \ref{T: Peano <--> prespaces}.
\begin{proposition}\label{P: universality}
Every Peano curve $X$ is the continuous surjective image of the Menger curve $|\mathbb{M}|$ under a continuous and connected map $|h|\colon |\mathbb{M}|\to X$.
\end{proposition}
\begin{proof}
By Theorem \ref{T: Peano <--> prespaces}, the space $X$ is homeomorphic to $|K|$ for some prespace $K=\varprojlim (K_n,g^{n}_m)\in\mathcal{C}^{\omega}$. By the first property of Theorem \ref{T: characterization} we get a connected epimorphism $h_0\colon \mathbb{M}\to K_0$. We lift $h_0$ to a connected epimorphism $h\colon \mathbb{M}\to K$ by repeated application of the second property of Theorem \ref{T: characterization}. Since $h$ is a graph homomorphism, cliques in $\mathbb{M}$ map to cliques in $K$. As a consequence $h$ induces a map $|h|\colon |\mathbb{M}|\to|K|$ between the quotients, which is easily seen to be continuous and connected.
\end{proof}
To strengthen the features of the map $h$ in Proposition \ref{P: universality} we will isolate certain combinatorial properties of $\mathcal{C}$ and incorporate them in
the construction of the map $h$ above. Our arguments can be adapted to other Fra{\"i}ss{\'e}{} classes $\mathcal{F}$ which satisfy the analogous properties.
\begin{definition}\label{Def 1-exact}
Let $\mathcal{F}$ be a projective Fra{\"i}ss{\'e}{} class. The projective amalgam $f',g'$ of $f,g$ below is called {\bf structurally exact} (with respect to $f$) if for every $B_0\subseteq B$ such that
$f\upharpoonright B_0$, viewed as a map onto its image, is in $\mathcal{F}$, the graph $D_0= (f')^{-1}(B_0)$ is in $\mathcal{F}$ and $g'\upharpoonright D_0$, again viewed as a map onto its image, is in $\mathcal{F}$.
\begin{center}
\medskip
\begin{tikzcd}
D \arrow[d, "g'", swap]\arrow[r, "f'"] & B \arrow[d,"f"]\\
C \arrow[r, "g"] & A
\end{tikzcd}
\medskip
\end{center}
We say that $\mathcal{F}$ has {\bf structurally exact amalgamation} if every $f,g$ as above admit a structurally exact amalgam. We say that $\mathcal{F}$ has {\bf two--sided structurally exact amalgamation} if every $f,g$ as above admit an amalgam that is structurally exact with respect to both $f$ and $g$.
\end{definition}
Structural exactness is a natural generalization of the well studied notion of \emph{exactness}. Recall that an amalgamation diagram, as in Definition \ref{Def 1-exact}, is exact if for every $b\in B, c\in C$ with $f(b)=g(c)$ there is $d\in D$ so that $f'(d)=b$ and $g'(d)=c$; see \cite{Ge}.
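Plain exactness of a commuting square of finite maps is a directly checkable condition; a minimal illustrative sketch (maps as Python dictionaries, vertex sets as the dictionaries' key sets):
\begin{verbatim}
def is_exact(B, C, D, f, g, fp, gp):
    # Exactness of the square f: B -> A, g: C -> A, f': D -> B, g': D -> C:
    # whenever f(b) = g(c) there is d in D with f'(d) = b and g'(d) = c.
    return all(any(fp[d] == b and gp[d] == c for d in D)
               for b in B for c in C if f[b] == g[c])
\end{verbatim}
In particular, the fiber-product amalgam from the proof of Theorem \ref{T:1} (see the sketch following Lemma \ref{L:T1}) always passes this check, since it contains every such pair $(b,c)$ by construction.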
In the context of Proposition \ref{P: universality}, structural exactness of $\mathcal{C}$ will allow us to strengthen the connectedness properties of the map $h$. Two--sided structural exactness together with the next property will additionally allow us to control the isomorphism type of the fibers of $h$.
\begin{definition}
Let $\mathcal{F}$ be a projective Fra{\"i}ss{\'e}{} class. We say that $\mathcal{F}$ admits {\bf local refinements} if for every $f\colon B_0\to A_0$ in $\mathcal{F}$ and every embedding $i\colon A_0\to A$, there is $g\colon B\to A$ in $\mathcal{F}$ and an embedding $j\colon B_0\to B$ so that $g\circ j= i\circ f$.
\end{definition}
\begin{lemma}
The class $\mathcal{C}$ has two--sided structurally exact amalgams and local refinements.
\end{lemma}
\begin{proof}
The amalgam provided in the proof of Theorem \ref{T:1} is structurally exact with respect to both $f$ and $g$ as well. It is also easy to check that $\mathcal{C}$ admits local refinements.
\end{proof}
We can now prove the main theorem of this section.
\begin{theorem}\label{T:preMengerProjUniversality}
For every $K\in\mathcal{C}^{\omega}$ there exists a connected epimorphism $h\colon \mathbb{M}\to K$ which is open and satisfies the following properties:
\begin{enumerate}
\item for every $x\in \mathbb{M}$ there exists a collection $\mathcal{N}$ of clopen subsets of $\mathbb{M}$, with $\bigcap\mathcal{N}=[x]$, so that for every $N\in\mathcal{N}$ and for every closed connected subgraph $F$ of $h(N)\subset K$ the subgraph $h^{-1}(F)\cap N$ of $\mathbb{M}$ is connected;
\item for every closed subgraph $Q$ of $K$ that is a clique, the subgraph $h^{-1}(Q)$ of $\mathbb{M}$ is isomorphic to $\mathbb{M}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Fix sequences $(M_n,f^{n}_m)$ and $(K_n,g^{n}_m)$ in $\mathcal{C}$ with $\mathbb{M}=\varprojlim(M_n,f^{n}_m)$ and $K=\varprojlim(K_n,g^{n}_m)$. We denote by $f_n\colon \mathbb{M}\to M_n$ and $g_n\colon K\to K_n$ the induced maps.
We will first use the fact that $\mathcal{C}$ has structurally exact amalgams to produce a map $h\colon \mathbb{M}\to K$ in $\mathcal{C}^{\omega}$ which is open and satisfies property (1) in the statement of the theorem. Then, we will illustrate how to adjust the construction to additionally fulfill property (2) of the statement.
We point out that part of the argument below---deriving from exactness that the map $h$ is open---can also be found in \cite{Ge}.
We build $h$ as an inverse limit of a coherent sequence of maps $h_i\colon M_{n(i)}\to K_i$ from $\mathcal{C}$ where $(n(i): i\in\mathbb{N})$ is some increasing sequence of natural numbers. By the first property of Theorem \ref{T: characterization} we get $n(0)$ and a connected epimorphism $h_0\colon M_{n(0)}\to K_0$. Assume now that we have defined $n(i)$ and $h_i$. Setting $f=h_i$ and $g=g^{i+1}_{i}$ in the diagram of Definition \ref{Def 1-exact} we get a structurally exact amalgam $D$, $f'\colon D\to M_{n(i)}$, $g'\colon D \to K_{i+1}$. Using the extension property of
Theorem \ref{T: characterization} we find $n(i+1)$ and a map $p\colon M_{n(i+1)}\to D$ such that $f' \circ p=f^{n(i+1)}_{n(i)}$. Set $h_{i+1}=g'\circ p$.
This finishes the induction and we therefore get a map $h=\varprojlim(h_i)$ in $\mathcal{C}^{\omega}$ from $\mathbb{M}$ to $K$.
\begin{claim}
For every $i\in\mathbb{N}$ and $a\in M_{n(i)}$ we have that $h(f_{n(i)}^{-1}(a))=g^{-1}_i(h_i(a))$.
\end{claim}
\begin{proof}[Proof of Claim.]
The non-trivial direction, $h(f_{n(i)}^{-1}(a)) \supseteq g^{-1}_i(h_i(a))$, follows from exactness of $D$ in the inductive step above. In particular, let $x=(x_0,x_1,\ldots)\in K$ with $x_i=h_i(a)$ and let $y_i=a$. Then, since $D$ is exact, there is $d\in D$ with $f'(d)=y_i$ and $g'(d)=x_{i+1}$. Let $y_{i+1}=d'$ for any $d'\in p^{-1}(d)$. Continuing this way we build inductively $y=(y_0,y_1,\ldots)\in \mathbb{M}$ with $h(y)=x$.
\end{proof}
By the above claim, the fact that $h_i$ is open, and since the family of all sets of the form $f_{n(i)}^{-1}(a)$ forms a basis for the topology of $\mathbb{M}$, it follows that $h$ is open.
Next we show that $h$ satisfies Property (1) in the statement of the Theorem. Let $x\in\mathbb{M}$ and notice that for every $n$, the subgraph $Q_{x,n}=f_n([x])$ of $M_n$ is a clique (of size at most $2$).
Set $\mathcal{N}=\{ f_n^{-1}(Q_{x,n}) : n\in\mathbb{N}\}$ and notice that by Lemma \ref{L: basis} it follows that $\mathcal{N}$ is indeed a collection of clopen subsets of $\mathbb{M}$ with $\bigcap\mathcal{N}=[x]$. Let $N\in\mathcal{N}$ and let $F$ be a closed connected subgraph of $h(N)\subset K$. By reparametrizing the sequences $(M_n)$ and $(K_n)$ above we can assume that $N=f^{-1}_0(Q)$ for some clique $Q$ in $M_0$ and that $n(i)=i$ for every $i$ in the definition of the sequence $h_i$ above. Set $Q_n=(f^n_0)^{-1}(Q)$ and let $F_n=g_n(F)$. It is immediate that $F_n$ is a connected subgraph of $K_n$ included in $h_n(Q_n)$, for every $n\in\mathbb{N}$. Let $E_n=h^{-1}_n(F_n)\cap Q_n$. While $f^n_m\upharpoonright E_n$ could fail to be a connected epimorphism, the following claim is true:
\begin{claim}
$E_n$ is a connected subgraph of $Q_n$.
\end{claim}
\begin{proof}
We prove this inductively. To run the induction we will actually need the stronger statement that $h_n\upharpoonright E_n\colon E_n\to F_n$ is in $\mathcal{C}$. Let $E_0=h_0^{-1}(F_0)\cap Q_0$. Since $Q_0$ is a clique, $h_0\upharpoonright E_0$ is a connected epimorphism from $E_0$ onto $F_0$.
Assume now that $h_n\upharpoonright E_n\colon E_n\to F_n$ is in $\mathcal{C}$. Since the structural exactness of $D,f',g'$ at the stage $n$ of the construction above is stable under precomposing with $p\colon M_{n+1}\to D$, we have that $h_{n+1}\upharpoonright (f^{n+1}_n)^{-1}(E_n)$ is a connected epimorphism from $(f^{n+1}_n)^{-1}(E_n)$ to $(g^{n+1}_n)^{-1}(F_n)$. Notice now that $E_{n+1}$, that was defined as $h_{n+1}^{-1}(F_{n+1})\cap Q_{n+1}$, equals $h_{n+1}^{-1}(F_{n+1})\cap (f^{n+1}_n)^{-1}(E_n)$. Since $E_{n+1}$ is the preimage of the connected set $F_{n+1}$ under the connected epimorphism $h_{n+1}\upharpoonright (f^{n+1}_n)^{-1}(E_n)$, the map $h_{n+1}\upharpoonright E_{n+1}\colon E_{n+1}\to F_{n+1}$ is connected as well.
\end{proof}
Since $f^n_m(E_n)=E_m$, the above claim implies that the inverse limit
\[
E=\varprojlim (E_n,f^{n}_m\upharpoonright E_n)
\]
is a closed and connected subgraph of $\mathbb{M}$ (although not in general locally-connected), with
$h^{-1}(F)\cap N= E$. Hence indeed $h$ satisfies the property (1) above.
We finish by describing how the above construction can be modified so that $h$ additionally satisfies property $(2)$ of the statement. Recall from Section \ref{S: Menger curve} that any topological graph is isomorphic to $\mathbb{M}$ if it can be expressed as an inverse limit of a generic sequence $(L_n,t^n_m)$. Recall also that a sequence $(L_n,t^n_m)$ is generic if it is ``saturated'' with respect to $(A_n)$ and $(e_n)$; see the construction in Section \ref{S: Menger curve}.
Let $\mathcal{Q}$ be the collection of all closed subgraphs of $K$ which are cliques.
Fix $Q\in\mathcal{Q}$ and for each $i$ set $Q_i=g_i(Q)$ and $L^{Q}_i=h_i^{-1}(Q_i)$. Notice that $Q_i$ is a clique in $K_i$ and as a consequence $g^{i+1}_i\upharpoonright Q_{i+1}$ is a connected epimorphism from $Q_{i+1}$ to $Q_i$.
Hence, by assuming during the construction of $h_{i+1}$ above that the amalgam $f'\colon D\to M_{n(i)}$, $g'\colon D \to K_{i+1}$ is two--sided structurally exact, we have that $f'\upharpoonright (g')^{-1}(Q_{i+1})$ is in $\mathcal{C}$, and therefore $f^{n(i+1)}_{n(i)}\upharpoonright L^Q_{i+1} \colon L^Q_{i+1}\to L^Q_i$ is in $\mathcal{C}$. Consequently, for every $Q\in\mathcal{Q}$ we already have that $h^{-1}(Q)=\varprojlim(L^{Q}_n,f^{n}_m\upharpoonright L^{Q}_n)\in\mathcal{C}^{\omega}$.
In order to arrange for $h$ to have property (2) we need to make sure that for every $Q\in\mathcal{Q}$ the sequence $(L^{Q}_n,f^{n}_m\upharpoonright L^{Q}_n)$ is generic. This is done by modifying slightly the definition of $h_i$ above. In particular, let $(A_n)$ and $(e_n\colon C_n\to B_n)$ be as in the construction described in Section \ref{S: Menger curve} and assume that for every $Q\in\mathcal{Q}$ the finite sequence $(L^{Q}_n,f^{n}_m\upharpoonright L^{Q}_n; n,m\leq i)$ has been saturated with respect to $(A_n; n \leq i)$ and $(e_n; n \leq i)$.
In the process of defining $h_{i+1}$, after we construct $D,f',g'$ as the two--sided structurally exact amalgam of $h_i$ and $g^{i+1}_i$, we further refine it via a map $r\colon D'\to D$ in $\mathcal{C}$ which makes sure that if
$r'=g'\circ r$ is the map from $D'$ to $K_{i+1}$, then for every $Q\in \mathcal{Q}$ we have that:
\begin{enumerate}
\item[(i)] there exists a map in $\mathcal{C}$ from $(r')^{-1}(Q_{i+1})$ to $A_{i+1}$;
\item[(ii)] if $s\in\mathcal{C}$ is any map from $(g')^{-1}(Q_{i+1})$ to $B_{i+1}$ then there exists $d\in\mathcal{C}$ from $(r')^{-1}(Q_{i+1})$ to $C_{i+1}$ so that $ s \circ \big(r\upharpoonright (r')^{-1}(Q_{i+1})\big) = e_{i+1} \circ d$.
\end{enumerate}
This is easily done since the ``local problems'' (i) and (ii) can be turned into ``global problems'' given that $\mathcal{C}$ has the local refinement property, and then solved using finitely many applications of the amalgamation property of $\mathcal{C}$.
Going back to the construction of $h_{i+1}$ above, we can now use the extension property of $\mathbb{M}$ to find $n(i+1)$ and a map $p\colon M_{n(i+1)}\to D'$ such that $f' \circ r \circ p=f^{n(i+1)}_{n(i)}$ and set $h_{i+1}=g'\circ r \circ p$.
\end{proof}
As a corollary we get the following variant of Anderson--Wilson's projective universality theorem \cite{An2,Wi}. Notice that the corresponding map in \cite{An2,Wi} is shown to be \emph{monotone}, that is, preimages of points are connected. Since we are working with compact spaces, a map is monotone if and only if
it is connected \cite[p.131]{Ku}.
Moreover, as pointed out by Gianluca Basso, the map we construct is not open. Instead we get that
it is \emph{weakly locally-connected}: a continuous $\phi\colon Y\to X$ between topological spaces is called {\bf weakly locally-connected} if $Y$ admits a collection $\mathcal{N}$ of neighborhoods so that $\{\mathrm{int}(N)\colon N\in \mathcal{N}\}$ generates the topology of $Y$ and for every $N\in\mathcal{N}$ and every closed connected subset $Z$ of $\phi(\mathrm{int}(N))$ we have that $\phi^{-1}(Z)\cap N$ is connected. This property seems rather technical but is very useful for constructing nice sections for the map $\phi$; see \cite{Mi}.
\begin{corollary}[see also Anderson \cite{An2}, Wilson \cite{Wi}] \label{C:MengerProjUniversality}
If $X$ is a Peano continuum, then there exists a continuous surjective map $|h|\colon |\mathbb{M}|\to X$ which is connected, weakly locally-connected, and $|h|^{-1}(x)$ is homeomorphic to $|\mathbb{M}|$, for every $x\in X$.
\end{corollary}
\begin{proof}
By Theorem \ref{T: Peano <--> prespaces}, the space $X$ is homeomorphic to $|K|$ for some prespace $K=\varprojlim(K_n,g^{n}_m)$ in $\mathcal{C}^{\omega}$. Let $h\colon \mathbb{M}\to K$ be the map provided by Theorem \ref{T:preMengerProjUniversality}. Since $h$ is an $R$-homomorphism, the map $h$ induces a map $|h|\colon |\mathbb{M}|\to |K|$ between the quotients. It is easy to check that $|h|$ is continuous, surjective, and connected. The rest follows from properties (1) and (2) of Theorem \ref{T:preMengerProjUniversality}.
\end{proof}
\section{The approximate projective homogeneity property}\label{S:ApproximateProjectiveHomogeneity}
The Menger prespace $\mathbb{M}$, being the projective Fra{\"i}ss{\'e}{} limit of $\mathcal{C}$,
automatically enjoys the \emph{projective homogeneity property}: for every $f,g\colon\mathbb{M}\to A$, with $A\in\mathcal{C}$ and $f,g\in\mathcal{C}^{\omega}$,
there is $\phi\in\mathrm{Aut}(\mathbb{M})$ with $f\circ\phi=g$. From this property we can naturally derive the following {\bf approximate projective homogeneity property} for the Menger curve $|\mathbb{M}|$.
\begin{theorem}\label{T:ApproximateHomogeneityProperty}
If $\gamma_0,\gamma_1\colon |\mathbb{M}|\to X$ are continuous and connected maps from the Menger curve onto some Peano continuum $X$, then for every open cover $\mathcal{V}$ of $X$ there is
$h\in \mathrm{Homeo}(|\mathbb{M}|)$ so that $(\gamma_0\circ h)$ and $\gamma_1$ are $\mathcal{V}$-close, that is,
\[\forall y\in |\mathbb{M}| \; \exists V\in\mathcal{V} \; (\gamma_0\circ h)(y),\gamma_1(y)\in V.\]
\end{theorem}
In other words, if we endow the space $\mathrm{Maps}_0(|\mathbb{M}|,X)$ of all continuous and connected maps from $|\mathbb{M}|$ onto the Peano continuum $X$ with the compact-open topology, then the orbit of each $\gamma\in \mathrm{Maps}_0(|\mathbb{M}|,X)$ under the natural action of $\mathrm{Homeo}(|\mathbb{M}|)$ on $\mathrm{Maps}_0(|\mathbb{M}|,X)$ is dense in $\mathrm{Maps}_0(|\mathbb{M}|,X)$. We start with a lemma.
\begin{lemma}\label{L:ApproximateHomogeneityProperty}
Let $A\in\mathcal{C}$ and let $\mathcal{U}=\{U_a\colon a\in \mathrm{dom}(A)\}$ be an open cover of $\mathbb{M}$ consisting of connected subgraphs. If $U_a\cap U_b\neq\emptyset\iff R^{A}(a,b)$, then there is $u\colon \mathbb{M}\to A$ in $\mathcal{C}^{\omega}$ so that $u^{-1}(a)\subseteq U_a$, for all $a\in\mathrm{dom}(A)$.
\end{lemma}
\begin{proof}
First we pick for each $a\in\mathrm{dom}(A)$ a clopen connected subgraph $W_a$ of $\mathbb{M}$, with $\mathrm{dom}(W_a)\subseteq \mathrm{dom}(U_a)$, so that
\begin{equation}\label{Equatio2}
W_a\cap W_b\neq \emptyset \text{ if and only if } R^A(a,b).
\end{equation}
This can always be arranged as follows. Let $f_0\colon \mathbb{M}\to B$ be any map in $\mathcal{C}^{\omega}$, with $B\in\mathcal{C}$, so that $\{f_0^{-1}(b): b\in\mathrm{dom}(B)\}$ refines $\mathcal{U}$. Let $B\times Q_A\in\mathcal{C}$ be the product---see proof of Theorem \ref{T:1}---of $B$ with the clique $Q_A$ on domain $\mathrm{dom}(A)$, and let $p\colon B\times Q_A\to B$ be the natural projection. Let $C\in\mathcal{C}$ be the graph attained by subdividing every non-trivial edge of the graph $B\times Q_A$, and let $r\colon C\to B\times Q_A$ be any map which maps every vertex that came from a subdivision to either of its two neighbors; and every vertex already in $\mathrm{dom}(B\times Q_A)$ to itself. Clearly the map $s\colon C\to B$ with $s = p \circ r$ is in $\mathcal{C}$. By the projective extension property of $\mathbb{M}$---see Theorem \ref{T: characterization}---we can replace $f_0$ with a map $f\colon \mathbb{M}\to C$ from $\mathcal{C}^{\omega}$ so that $s\circ f=f_0$. Notice that for the map $f$ we can choose: for every $a\in\mathrm{dom}(A)$, a vertex $v_a\in \mathrm{dom}(C)$ with $f^{-1}(v_a)\subseteq U_a$, so that $v_a\neq v_{b}$ if $a\neq b$; and for every $a,b\in \mathrm{dom}(A)$ with $R^A(a,b)$, a path $P:=P(a,b)$ in $C$ from $v_a$ to $v_b$, with $f^{-1}(P)\subseteq U_a\bigcup U_{b}$, so that the collection of all these paths forms a ``strongly pairwise disjoint'' system, i.e., if the paths $P,P'$ are distinct and $v\in P$, $v'\in P'$, with $R^C(v,v')$, then either $v$ is an endpoint of $P$ or $v'$ is an endpoint of $P'$. Using this ``strongly pairwise disjoint'' system of paths it is easy to define the collection $\{W_a: a\in\mathrm{dom}(A)\}$.
Next we find clopen, connected subgraphs $\widetilde{W}_a$ of $\mathbb{M}$ with
\begin{equation}\label{Equatio3}
\mathrm{dom}(W_a)\subseteq \mathrm{dom}(\widetilde{W}_a)\subseteq U_a, \; \widetilde{W}_a\cap\widetilde{W}_{a'}=\emptyset\text{ if } a\neq a', \; \mathrm{dom}(\mathbb{M})=\bigcup_{a}\mathrm{dom}(\widetilde{W}_a),
\end{equation}
and define the map $u\colon \mathbb{M}\to A$ with $u^{-1}(a)=\widetilde{W}_a$. Properties (\ref{Equatio2}), (\ref{Equatio3}), and the fact that $U_a\cap U_b\neq\emptyset\iff R^{A}(a,b)$ will then imply that this is indeed the desired map.
We define $\widetilde{W}_a$ as the union $\bigcup_{n}W^n_a$ of an increasing sequence of clopen subgraphs of $U_a$. Let $(O_k)$ be an enumeration of a basis for the topology of $\mathrm{dom}(\mathbb{M})$ consisting of clopen connected graphs with the property that $O_k\cap U_a\neq\emptyset$ implies $O_k\subseteq U_a$ for all $a\in\mathrm{dom}(A)$ and $k\in\mathbb{N}$. We set $W^0_a=W_a$, for every $a\in\mathrm{dom}(A)$. Assume that $W^n_a$ has been defined for all $a\in\mathrm{dom}(A)$, and let $k(n+1)$ be the smallest natural number so that $O_{k(n+1)}\cup (\bigcup_a W^n_a)$ is a connected graph strictly expanding $\bigcup_a W^n_a$, if such a number exists; otherwise, let $k(n+1)=\infty$. If $k(n+1)\in\mathbb{N}$ then $O_{k(n+1)}$ is compact and locally-connected. Hence, $O_{k(n+1)}\setminus (\bigcup_a W^n_a)$ is the union of finitely many clopen connected subgraphs $R_1,\ldots,R_m$ of $\mathbb{M}$. It is easy to see that for each $i\leq m$ there is some $a(i)$ so that $W^n_{a(i)}\cup R_i$ is connected. Let $W^{n+1}_a$ be the union of $W^n_a$ together with all $R_i$ with $a(i)=a$, if $k(n+1)\in\mathbb{N}$; and let $W^{n+1}_a=W^n_a$, if $k(n+1)=\infty$. This finishes the definition of $\{W^n_a: a\in \mathrm{dom}(A)\}$ for each $n\in\mathbb{N}$ and an easy induction shows that $\{W^n_a: a\in \mathrm{dom}(A)\}$ is a disjoint collection of clopen connected graphs with $W^n_a\subseteq U_a$. We are left to show that
\[\mathbb{M}=\bigcup_n \bigcup_{a} W^n_a,\]
since then, by compactness of $\mathrm{dom}(\mathbb{M})$, the union along $\mathbb{N}$
will stabilize at some finite $n$, and $\widetilde{W}_a=\bigcup_nW^n_a$ will therefore be clopen. Assume towards contradiction that some $x\in\mathrm{dom}(\mathbb{M})$ is not in the domain of the above union and let $k(x)$ be such that $x\in O_{k(x)}$. It follows that
\begin{equation}\label{Equatio4}
[O_{k(x)}]\cap \bigcup_n \bigcup_{a} W^n_a =\emptyset,
\end{equation}
since otherwise $[O_{k(x)}]\cap \bigcup_{n\leq l} \bigcup_{a} W^n_a\not=\emptyset$ for some $l$, implying that for each $n>l$, $k(n)\not= k(n+1)$ and
$k(n)<k(x)$, which is contradictory. But then, setting $X=\mathrm{dom}(\mathbb{M})\setminus \bigcup_n \bigcup_{a} \mathrm{dom}(W^n_a)$, we have by (\ref{Equatio4}) that:
\[
\mathbb{M}= [\bigcup_{x\in X}O_{k(x)}] \bigcup \big( \bigcup_n \bigcup_{a} W^n_a \big), \text{ with } [\bigcup_{x\in X}O_{k(x)}] \bigcap \big( \bigcup_n \bigcup_{a} W^n_a \big) =\emptyset,
\]
contradicting that $\mathbb{M}$ is a connected graph.
\end{proof}
We can now finish the proof of Theorem \ref{T:ApproximateHomogeneityProperty}.
\begin{proof}[Proof of Theorem \ref{T:ApproximateHomogeneityProperty}]
By Theorem \ref{T: Peano <--> prespaces}, $X$ is homeomorphic to $|K|=\pi_K(K)$ for some prespace $K\in\mathcal{C}^{\omega}$. Let $g\colon K\to A$ be a map in $\mathcal{C}^{\omega}$ with $A\in\mathcal{C}$ so that $\{\pi_K(g^{-1}(a))\mid a\in \mathrm{dom}(A)\}$ refines $\mathcal{V}$. Since each $\pi_K(g^{-1}(a))$ is a compact and connected subset of a locally-connected space we can find connected open subsets $V_a\supseteq \pi_K(g^{-1}(a))$ of $|K|$, with $V_a\cap V_b\neq \emptyset$ if and only if $R^A(a,b)$, so that $\{V_a\mid a\in\mathrm{dom}(A)\}$ refines $\mathcal{V}$. Let $U^0_a:= (\gamma_0\circ \pi_{\mathbb{M}})^{-1}(V_a), U^1_a:= (\gamma_1\circ \pi_{\mathbb{M}})^{-1}(V_a)$, and set $\mathcal{U}^0:=\{U^0_a\mid a\in\mathrm{dom}(A)\}, \mathcal{U}^1:=\{U^1_a\mid a\in\mathrm{dom}(A)\}$. Then $\mathcal{U}^0$ and $\mathcal{U}^1$ are open covers of $\mathbb{M}$ consisting of connected graphs of $\mathbb{M}$ so that:
\begin{equation}\label{Equatio1}
U^0_a\cap U^0_b\neq \emptyset \iff R^A(a,b) \iff U^1_a\cap U^1_b\neq \emptyset
\end{equation}
To see that $U^0_a$ and $U^1_a$ are connected graphs, notice that, since $X$ is a Peano continuum, $V_a$ is the increasing union of compact connected sets, and since each $\gamma_i$ is a connected map, $\gamma_i^{-1}(V_a)$ is also the increasing union of compact connected sets.
Let $u_0$ and $u_1$ be the maps given by applying Lemma \ref{L:ApproximateHomogeneityProperty} to the covers $\mathcal{U}^0$ and $\mathcal{U}^1$, respectively. By the projective homogeneity property of $\mathbb{M}$ there is $\varphi\in\mathrm{Aut}(\mathbb{M})$ so that $u_0\circ\varphi=u_1$. Let $h\colon |\mathbb{M}|\to |\mathbb{M}|$ with $h([x])=(\pi_{\mathbb{M}}\circ \varphi)(x)$. Since $\varphi\in\mathrm{Aut}(\mathbb{M})$, it follows that $h$ is a well-defined homeomorphism of $|\mathbb{M}|$. To check that this is the desired homeomorphism, let $y\in|\mathbb{M}|$ and fix $x\in \mathrm{dom}(\mathbb{M})$ with $\pi_{\mathbb{M}}(x)=y$. Set $a:=u_1(x)$ and notice that since $x\in u_1^{-1}(a)\subseteq (\gamma_1\circ \pi_{\mathbb{M}})^{-1}(V_a)$, we have that $\gamma_1(y)=\gamma_1 \circ \pi_{\mathbb{M}}(x)\in V_a$. On the other hand, $h(y)=\pi_{\mathbb{M}}(\varphi(x))$, and since $u_0^{-1}(a)\subseteq (\gamma_0\circ \pi_{\mathbb{M}})^{-1}(V_a)$, we have that $\varphi(x)\in u^{-1}_0(a)\subseteq (\gamma_0\circ \pi_{\mathbb{M}})^{-1}(V_a)$. Hence, $\gamma_0(h(y))=\gamma_0\circ \pi_{\mathbb{M}}\circ \varphi(x)\in V_a$.
Since $\{V_a: a\in\mathrm{dom}(A)\}$ refines $\mathcal{V}$, we are done.
\end{proof}
\section{The $n$-dimensional case}\label{S: n-dim}
In this section, we consider simplicial complexes that are more general than graphs. A {\bf simplicial complex} $C$ is a family of finite sets that is closed downwards, that is, if $\sigma\in C$ and $\tau\subset\sigma$ then $\tau\in C$. The elements $\sigma$ of $C$ are called {\bf faces} of the simplicial complex.
We set $\mathrm{dom}(C)=\cup C$ to be the {\bf domain} of the simplicial complex. A {\bf subcomplex} $D$ of $C$ is a simplicial complex with $D\subseteq C$. A {\bf simplicial map} $f\colon B\to A$ is a map from $\mathrm{dom}(B)$ to $\mathrm{dom}(A)$ with $f\sigma\in A$ whenever $\sigma\in B$, where $f\sigma$ stands for the set $\{f(v) \colon v\in\sigma\}$.
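To fix ideas, these definitions translate directly into a few lines of Python; the sketch below is only illustrative (the representation of faces by frozensets and the helper names are ours), but it is convenient for experimenting with the small complexes considered in this section.
\begin{verbatim}
from itertools import combinations

# Faces are frozensets of vertices; a complex is a set of faces.
def is_complex(C):
    """Downward closure: every subset of a face is again a face."""
    return all(frozenset(t) in C
               for s in C
               for k in range(len(s) + 1)
               for t in combinations(s, k))

def domain(C):
    """dom(C): the union of all faces."""
    return set().union(*C) if C else set()

def is_simplicial(f, B, A):
    """f maps dom(B) to dom(A); it is simplicial if every f(sigma) is a face of A."""
    return all(frozenset(f[v] for v in s) in A for s in B)

# Example: the full triangle on {0,1,2} collapsed onto the edge {0,1}.
triangle = {frozenset(s) for k in range(4) for s in combinations(range(3), k)}
edge     = {frozenset(s) for k in range(3) for s in combinations(range(2), k)}
f = {0: 0, 1: 1, 2: 1}
assert is_complex(triangle) and is_complex(edge)
assert is_simplicial(f, triangle, edge)
\end{verbatim}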
Let $C$ be a simplicial complex and let $\sigma\in C$. The {\bf dimension} $\mathrm{dim}(\sigma)$ of $\sigma$ is $n\geq -1$ if $\sigma$ has cardinality $n+1$. We say that $C$ is {\bf $n$-dimensional} if $\mathrm{dim}(\sigma)\leq n$ for every $\sigma\in C$. We briefly recall some definitions from algebraic topology. For more details see Definition \ref{D:Acyclic} and the discussion after the proof of Theorem \ref{T:n-dim}.
We say that $C$ is {\bf $n$-connected} if all homotopy groups $\pi_k(C)$ of $C$, with $k\leq n$, vanish. We say that it is {\bf $n$-acyclic} if all (reduced) homology groups $\widetilde{H}_k(C)$ of $C$, with $k\leq n$, vanish.
Similarly, a simplicial map $f\colon B\to A$ is called {\bf $n$-connected} if the preimage of every $n$-connected subcomplex of $A$ under $f$ is $n$-connected, and it is called {\bf $n$-acyclic} if the preimage of every $n$-acyclic subcomplex of $A$ under $f$ is $n$-acyclic. Since a simplicial complex $A$ is $(-1)$-connected if and only if $\mathrm{dom}(A)\neq \emptyset$, a simplicial map is $(-1)$-connected if it is a surjection on the domains of the simplicial complexes.
\begin{definition}
For every $n\in\{0,1,\ldots\}\cup\{\infty\}$, let $\mathcal{C}_n$ be the class of all $(n-1)$-connected simplicial maps between finite, $n$-dimensional, $(n-1)$-connected simplicial complexes.
Similarly let $\widetilde{\mathcal{C}}_n$ be the class of all $(n-1)$-acyclic simplicial maps between finite, $n$-dimensional, $(n-1)$-acyclic simplicial complexes.
\end{definition}
\begin{theorem}\label{T:n-dim}
For all $n$ as above, both $\mathcal{C}_n$ and $\widetilde{\mathcal{C}}_n$ are projective Fra{\"i}ss{\'e}{} classes.
\end{theorem}
For the proof of Theorem \ref{T:n-dim} we will need the next lemma. Let $\rho$ be a finite set. The {\bf simplex $\Delta(\rho)$ on $\rho$} is the simplicial complex $\{\sigma \colon \sigma\subseteq\rho\}$. If $C$ is a simplicial complex and $\rho\in C$ then $\Delta(\rho)$ is a subcomplex of $C$.
\begin{lemma}\label{L:n-dim}
If $f\colon B\to A$ is a simplicial map between two finite simplicial complexes, then we have that:
\begin{enumerate}
\item $f$ is $(n-1)$-connected if and only if $f^{-1}(\Delta(\rho))$ is $(n-1)$-connected for every $\rho\in A$ with $\mathrm{dim}(\rho)\leq n$.
\item $f$ is $(n-1)$-acyclic if and only if $f^{-1}(\Delta(\rho))$ is $(n-1)$-acyclic for every $\rho\in A$ with $\mathrm{dim}(\rho)\leq n$.
\end{enumerate}
\end{lemma}
Before we discuss the proof of Lemma \ref{L:n-dim} we show how it implies Theorem \ref{T:n-dim}.
\begin{proof}[Proof of Theorem \ref{T:n-dim}.]
We just check here the projective amalgamation property. Fix $n$ and let $f\colon B\to A$ and $g\colon C\to A$ be maps in $\mathcal{C}_n$. We will define the projective amalgam $D,f',g'$ as the $n$--skeleton $\mathrm{Sk}^n(B\times_A C)$ of the simplicial pullback $B\times_A C$, together with the canonical projection maps $\pi_B,\pi_C$. Recall that the simplicial pullback $B\times_A C$ is defined on domain $\mathrm{dom}(B)\times_{\mathrm{dom}(A)}\mathrm{dom}(C)$ as the simplicial complex whose faces are precisely all sets of the form
\[\sigma\times_A\tau=\{(b,c)\colon b\in\sigma, \; c\in \tau, \; f \sigma=g \tau \}, \]
where $\sigma\in B$ and $\tau\in C$. We let $D$ be the simplicial complex obtained from $B\times_A C$ after we omit all faces of dimension strictly larger than $n$. Let $f'=\pi_B$ and $g'= \pi_C$ be the projection maps $(b,c)\mapsto b$ and $(b,c)\mapsto c$ from $D$ to $B$ and $C$, respectively. It is easy to check that both $f',g'$ are simplicial epimorphisms (surjective on faces). We now check that $f'\colon D\to B$ is $(n-1)$-connected. The fact that $D$ itself is $(n-1)$-connected is a special case of this, since $B$ is $(n-1)$-connected. The same argument applies symmetrically to $g'$.
Let $B_0$ be an $(n-1)$-connected subcomplex of $B$. To show that $D_0=(f')^{-1}(B_0)$ is $(n-1)$-connected it suffices by Lemma \ref{L:n-dim}(1) to show that $(f')^{-1}(\Delta(\rho))$ is $(n-1)$-connected for every $\rho\in B$. Let $\tau$ be the image of $\rho$ under $f$ and let $\Delta(\tau)$ be the corresponding simplex, which is a subcomplex of $A$. Let also $C_0=g^{-1}(\Delta(\tau))$ and notice that, since $g\in\mathcal{C}_n$, $C_0$ is an $(n-1)$-connected subcomplex of $C$. Notice that $C_0$ is isomorphic to the subcomplex $K$ of $\mathrm{Sk}^n(\Delta(\tau) \times_{\Delta(\tau)} C_0)$ spanned by the vertices in $\mathrm{graph}^{*}(g \upharpoonright \mathrm{dom} (C_0) ):=\{(w,v)\in \tau \times\mathrm{dom}(C_0) \colon g(v)=w\}$, where $\Delta(\tau)\times_{\Delta(\tau)} C_0$ is formed with respect to $\mathrm{id}\colon \Delta(\tau)\to \Delta(\tau)$ and $g\upharpoonright \mathrm{dom} (C_0) \colon C_0\to \Delta(\tau)$. Now, again by Lemma \ref{L:n-dim}(1), it is easy to see that the function $(f\upharpoonright \rho) \times \mathrm{id}$ from $\mathrm{Sk}^n(\Delta(\rho) \times_{\Delta(\tau)} C_0)$ to $\mathrm{Sk}^n(\Delta(\tau) \times_{\Delta(\tau)} C_0)$ is $(n-1)$-connected. But $D_0$ is simply the preimage of $K$ under this map, and $K$ is isomorphic to $C_0$, which is $(n-1)$-connected.
A similar argument, using Lemma \ref{L:n-dim}(2) instead of Lemma \ref{L:n-dim}(1), shows that $\widetilde{\mathcal{C}}_n$ satisfies the projective amalgamation property.
\end{proof}
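The amalgam used above is easy to compute explicitly for small inputs. The following Python sketch builds a naive reading of the simplicial pullback $B\times_A C$ and its $n$-skeleton: it restricts each face $\sigma\times_A\tau$ to pairs lying in the fibered vertex set and closes the result downwards so that the output is again a complex; the function names are ours.
\begin{verbatim}
from itertools import combinations

def skeleton(C, n):
    """n-skeleton: keep only the faces of dimension at most n (cardinality <= n+1)."""
    return {s for s in C if len(s) <= n + 1}

def pullback(B, C, f, g):
    """Naive simplicial pullback B x_A C of simplicial maps f: B -> A and g: C -> A.

    For every pair of faces with f(sigma) = g(tau) we add the face of all
    pairs (b, c) in sigma x tau with f(b) = g(c), and then close downwards."""
    D = set()
    for sigma in B:
        for tau in C:
            if frozenset(f[b] for b in sigma) == frozenset(g[c] for c in tau):
                face = frozenset((b, c) for b in sigma for c in tau if f[b] == g[c])
                D.update(frozenset(t) for k in range(len(face) + 1)
                         for t in combinations(face, k))
    return D

# The amalgam D of the proof is then skeleton(pullback(B, C, f, g), n),
# with f' and g' the coordinate projections (b, c) -> b and (b, c) -> c.
\end{verbatim}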
Lemma \ref{L:n-dim} (1) and (2) are special cases of \cite[Proposition 7.6]{Qu} and \cite[Corollary 4.3]{Bj}, respectively. However, since we are dealing with finite combinatorial objects, one can provide a direct proof of Lemma \ref{L:n-dim}. In the rest of this section we sketch the steps for a hands-on proof of Lemma \ref{L:n-dim} (2). The interested reader can fill in the missing details. For Lemma \ref{L:n-dim} (1) recall that, by the Hurewicz Theorem, a simplicial complex is $(n-1)$-connected for $n\geq 2$ if it is $(n-1)$-acyclic and has a trivial fundamental group. A combinatorial proof of Lemma \ref{L:n-dim} (1) is now possible using the notions of \emph{combinatorial paths} and \emph{combinatorial homotopy} from \cite{HW}.
We now recall from \cite{Po} basic notions from homology and then proceed to sketch a direct proof of Lemma \ref{L:n-dim} (2). Let $C$ be a simplicial complex and let $\sigma\in C$. An {\bf orientation} for $\sigma$ is an equivalence class of expressions $\epsilon(v_0,\ldots,v_n)$, where $\sigma=\{v_0,\ldots,v_n\}$ and $\epsilon\in\{-1,1\}$. For $n=-1$ we have the empty listing. Two such expressions $\epsilon(v_0,\ldots,v_n)$ and $\epsilon'(v'_0,\ldots,v'_n)$ are equivalent if for the unique permutation $\pi$ with $v_i=v'_{\pi(i)}$, we have that $\mathrm{sgn}(\pi)=\epsilon\epsilon'$. There are precisely two orientations associated with each face. An {\bf oriented face} $\orient{\sigma}$ in $C$ is just an orientation for $\sigma$ with $\sigma\in C$.
The {\bf chain group} $\mathbb{C}(C)$ of a complex $C$ is the abelian group generated by oriented faces of $C$, with the relations $\orient{\sigma}+\orient{\tau}=0$, for any two distinct oriented faces $\orient{\sigma}$ and $\orient{\tau}$ with $\sigma=\tau$. Elements of $\mathbb{C}(C)$ are called {\bf chains}.
Each chain is uniquely represented as a finite sum $\sum_i \orient{\sigma}_i$, where each $\orient{\sigma}_i$ is
an oriented face and, for all $i,j$, if $\sigma_i = \sigma_j$, then $\orient{\sigma}_i=\orient{\sigma}_j$. We say that $\orient{\sigma}_i$ {\bf is in} the chain $\sum_i \orient{\sigma}_i$. The empty sum represents the identity element $0\in\mathbb{C}(C)$. An {\bf $n$-chain} is a chain consisting entirely of $n$-dimensional oriented faces. A {\bf $(\leq n)$-chain} consists of oriented faces whose dimension is less than or equal to $n$.
The chain group is equipped with an endomorphism $\partial$ which is defined on the generators of
$\mathbb{C}(C)$ by the following procedure. If $\orient{\sigma}$ is one of the two $(-1)$-dimensional oriented faces, let $\partial \orient{\sigma} = 0$.
If $\orient{\sigma}$ is the equivalence class of $\epsilon (v_0, \dots, v_n)$ with $n\geq 0$, let
\begin{equation}\label{E:boun}
\partial \orient{\sigma} = \sum_{i=0}^n \orient{\sigma}_i,
\end{equation}
where $\orient{\sigma}_i$ is the equivalence class of $(-1)^i\epsilon (v_0, \dots, v_{i-1}, v_{i+1}, \dots, v_n)$. Let finally $f\colon B\to A$ be a simplicial map. This map induces a function
\[
f_{\#}\colon \mathbb{C}(B)\to \mathbb{C}(A)
\]
given by the following rules. Let $\orient{\sigma}$ be an oriented face
in $B$. If $f \sigma$ has dimension strictly smaller than that of $\sigma$, let $f_\#(\orient{\sigma})=0$. If the dimensions of
$f\sigma$ and $\sigma$ are equal and $\orient{\sigma}$ is the equivalence class of $\epsilon (v_0, \dots, v_n)$, define $f_\#(\orient{\sigma})$ to be the equivalence class of
$\epsilon (f(v_0), \dots, f(v_n))$. One checks that
\[
f_\#\circ \partial = \partial\circ f_\#.
\]
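The boundary operator and the induced chain map are also easy to experiment with. In the Python sketch below a chain is a dictionary sending a sorted vertex tuple (our chosen orientation) to an integer coefficient; the helper names are ours, and the two assertions at the end check $\partial\circ\partial=0$ and $f_\#\circ\partial=\partial\circ f_\#$ on a toy example.
\begin{verbatim}
def add(chain, face, coeff):
    """Add coeff times the oriented face (a sorted vertex tuple) to a chain."""
    chain[face] = chain.get(face, 0) + coeff
    if chain[face] == 0:
        del chain[face]

def boundary(chain):
    """The boundary operator defined above, extended linearly to chains."""
    out = {}
    for face, coeff in chain.items():
        for i in range(len(face)):        # the empty ((-1)-dimensional) face gives 0
            add(out, face[:i] + face[i + 1:], (-1) ** i * coeff)
    return out

def sign(seq):
    """Sign of the permutation that sorts seq (no repeated entries)."""
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def push(f, chain):
    """The induced map f_# for a simplicial map f given as a dict on vertices."""
    out = {}
    for face, coeff in chain.items():
        image = tuple(f[v] for v in face)
        if len(set(image)) < len(image):  # the dimension drops: map to 0
            continue
        add(out, tuple(sorted(image)), sign(image) * coeff)
    return out

# Toy example: the oriented triangle (0,1,2) and the collapse 2 -> 1.
tri = {(0, 1, 2): 1}
f = {0: 0, 1: 1, 2: 1}
assert boundary(boundary(tri)) == {}
assert push(f, boundary(tri)) == boundary(push(f, tri))
\end{verbatim}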
We have now developed all homological prerequisites for the main definition.
\begin{definition} \label{D:Acyclic}
Let $n\geq (-1)$. A complex $C$ will be called {\bf $n$-acyclic} if for each $(\leq n)$-chain $\zeta$ with $\partial \zeta=0$ there is a chain $\eta$ with $\zeta = \partial \eta$.
\end{definition}
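For small complexes this definition can be tested mechanically: over the rationals (so ignoring torsion, which is enough for toy examples), every $(\leq n)$-cycle bounds precisely when, in every degree $k\leq n$, the kernel of the boundary map has the same dimension as the image of the next one. A hedged Python sketch, with helper names of our own choosing:
\begin{verbatim}
import numpy as np
from itertools import combinations

def faces(C, k):
    """All k-dimensional faces of C (cardinality k + 1) as sorted vertex tuples."""
    return sorted(tuple(sorted(s)) for s in C if len(s) == k + 1)

def boundary_matrix(C, k):
    """Matrix of the boundary map from k-chains to (k-1)-chains (k >= 0)."""
    rows, cols = faces(C, k - 1), faces(C, k)
    idx = {f: i for i, f in enumerate(rows)}
    D = np.zeros((len(rows), len(cols)))
    for j, f in enumerate(cols):
        for i in range(len(f)):
            D[idx[f[:i] + f[i + 1:]], j] = (-1) ** i
    return D

def rank(M):
    return np.linalg.matrix_rank(M) if M.size else 0

def is_acyclic(C, n):
    """Rational-rank test of n-acyclicity (torsion is ignored); C must be
    nonempty and must contain the empty face, as downward closure requires."""
    for k in range(0, n + 1):
        cycles = len(faces(C, k)) - rank(boundary_matrix(C, k))
        if cycles != rank(boundary_matrix(C, k + 1)):
            return False
    return True

# A filled triangle is 1-acyclic; the hollow triangle (its 1-skeleton) is not.
filled = {frozenset(s) for k in range(4) for s in combinations(range(3), k)}
hollow = {s for s in filled if len(s) <= 2}
assert is_acyclic(filled, 1) and is_acyclic(hollow, 0) and not is_acyclic(hollow, 1)
\end{verbatim}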
The non-trivial direction of Lemma \ref{L:n-dim} (2) reduces to the following more general statement, whose proof relies on Lemma \ref{L:aux1} and Lemma \ref{L:aux2}.
\begin{lemma}\label{L:l-n-acyclicity}
If $f\colon B\to A$ is a simplicial map between finite simplicial complexes and for some $l,n\in\mathbb{N}$ we have that:
\begin{enumerate}
\item $f$ is still simplicial when viewed as a map from $\mathrm{Sk}^n(B)$ to $\mathrm{Sk}^l(A)$;
\item $f^{-1}(\Delta(\sigma))$ is $(n-1)$-acyclic for every $\sigma\in \mathrm{Sk}^l(A)$;
\end{enumerate}
then $B$ is $(n-1)$-acyclic if $A$ is $(l-1)$-acyclic.
\end{lemma}
For any simplex $\Delta(\rho)$ on a set $\rho$ we define the {\bf boundary} $\mathrm{Bd}(\Delta(\rho))$ of $\Delta(\rho)$ to be the simplicial complex $\Delta(\rho)\setminus\{\rho\}$.
\begin{lemma}\label{L:aux1}
Let $f\colon B\to A$ be a simplicial map such that $f^{-1}(\Delta(\sigma))$ is $n$-acyclic, for every $\sigma \in A$. Let $\zeta$ be an $(\leq n)$-chain in $B$ such that for
each $\orient{\sigma}$ in $\zeta$ we have that $\mathrm{dim}(\sigma)>\mathrm{dim}(f\sigma)$. If $\partial\zeta=0$, then there is a chain $\eta$ such that $\zeta=\partial\eta$.
\end{lemma}
\begin{proof}[Sketch of Proof.]
The proof is by induction on $l=\max\{\mathrm{dim}(f\sigma)\colon \orient{\sigma} \;\text{in}\; \zeta \}$. Let $\zeta=\sum_{\rho}\zeta_\rho+\zeta^{-}$, where $\rho$ varies over all $l$-dimensional faces of $A$ for which there is a $\orient{\sigma}$ in $\zeta$ with $f\sigma=\rho$, and with $\zeta_{\rho}$ collecting all such $\orient{\sigma}$. Since
\[0=\partial \zeta =\sum_{\rho}\partial \zeta_{\rho}+\partial\zeta^{-}\]
and each $\partial\zeta_{\rho}$ is a chain in $f^{-1}(\Delta(\rho))$, it follows actually that each $\partial\zeta_{\rho}$ is a chain in $f^{-1}(\mathrm{Bd}(\Delta(\rho)))$. By inductive hypothesis, and since $\partial\partial\zeta_{\rho}=0$, there exists a chain $\xi_{\rho}$ in $f^{-1}(\mathrm{Bd}(\Delta(\rho)))$ with $\partial\xi_{\rho}=\partial\zeta_{\rho}$. Since $f^{-1}(\Delta(\rho))$ is $n$-acyclic we get a chain $\eta_{\rho}$ in $f^{-1}(\Delta(\rho))$ with $\zeta_{\rho}-\xi_{\rho}=\partial\eta_{\rho}$. We have that
\[\zeta-\partial(\sum_{\rho}\eta_{\rho})=\sum_{\rho}\xi_{\rho}+\zeta^-.\]
Since $\sum_{\rho}\xi_{\rho}+\zeta^{-}$ is a $(\leq n)$-chain in $f^{-1}(\mathrm{Sk}^{l-1}(A))$ with $\partial(\sum_{\rho}\xi_{\rho}+\zeta^{-})=\partial \zeta -\partial\partial(\sum_{\rho}\eta_{\rho})=0$ we have, by inductive hypothesis, a chain $\eta^{-}$ with $\partial\eta^{-}= \sum_{\rho}\xi_{\rho}+\zeta^{-}$. Set $\eta=\sum_{\rho}\eta_{\rho}+\eta^{-}$.
\end{proof}
\begin{lemma}\label{L:aux2}
Let $f\colon B\to A$ be simplicial such that $f^{-1}(\Delta(\sigma))$ is $l$-acyclic for every $\sigma \in A$. Let $\orient{\sigma}$ and $\orient{\tau}$ be oriented faces of $B$ with $f\sigma=f\tau=\rho$. If $\sigma$, $\tau$, $\rho$ have dimension $l$ and
$f_\#(\orient{\sigma}) + f_\#(\orient{\tau})=0$, then there is an $(l+1)$-chain $\epsilon$ in $f^{-1}(\Delta(\rho))$ and an $l$-chain $\gamma$ in $f^{-1}(\mathrm{Bd}(\Delta(\rho)))$ so that
\[
\orient{\sigma} + \orient{\tau} = \partial \epsilon + \gamma.
\]
\end{lemma}
\begin{proof}[Sketch of Proof.]
The proof is by induction on $l$. By \eqref{E:boun}, we have that
\[
\partial \orient{\sigma} = \sum_{\nu} \orient{\sigma}_{\nu}\;\hbox{ and }\;\partial \orient{\tau} = \sum_{\nu} \orient{\tau}_{\nu},
\]
where $\nu$ varies over all $(l-1)$-dimensional faces with $\nu\subseteq\rho$ and $f(\sigma_{\nu}) = f(\tau_{\nu})=\nu$. It follows that $
f_\#(\orient{\sigma}_{\nu}) + f_\#(\orient{\tau}_{\nu}) =0$ and therefore, by inductive assumption, we have that
$\orient{\sigma}_{\nu}+\orient{\tau}_{\nu}=\partial\epsilon_{\nu}+ \gamma_{\nu}$, for an $l$-chain $\epsilon_{\nu}$ in $f^{-1}(\Delta(\nu))$ and an $(l-1)$-chain $\gamma_{\nu}$ in $f^{-1}(\mathrm{Bd}(\Delta(\nu)))$. One can check now that Lemma~\ref{L:aux1} applies to the chain $\sum_{\nu} \gamma_{\nu}$, producing an $l$-chain $\gamma$ in $f^{-1}(\mathrm{Bd}(\Delta(\rho)))$ with $\sum_{\nu} \gamma_{\nu} =\partial\gamma$. Since
$\partial (\orient{\sigma}+\orient{\tau}- (\sum_{\nu}\epsilon_{\nu} +\gamma))=0$ and
$f^{-1}(\Delta(\rho))$ is $l$-acyclic, there exists an $l+1$-chain $\epsilon$ in $f^{-1}(\Delta(\rho))$ such that
\[
\orient{\sigma}+\orient{\tau}- (\sum_{\nu}\epsilon_{\nu} +\gamma) = \partial\epsilon.
\]
It follows that $\orient{\sigma}+\orient{\tau} = \partial\epsilon + (\sum_{\nu}\epsilon_{\nu} +\gamma)$, where $\sum_{\nu}\epsilon_{\nu} +\gamma$ is an $l$-chain in $f^{-1}(\mathrm{Bd}(\Delta(\rho)))$, as required.
\end{proof}
\begin{proof}[Proof Sketch of Lemma \ref{L:l-n-acyclicity}.]
Assume without loss of generality that $l\leq n$ and notice that (2) implies that for every $\tau\in \mathrm{Sk}^l(A)$ there is $\sigma\in A$ with $f\sigma=\tau$.
Let $\zeta_B$ be a $(\leq n-1)$-chain in $B$ with $\partial \zeta_B=0$. We will find a chain $\eta_B$ with $\partial \eta_B=\zeta_B$.
\begin{claim}
We can assume without loss of generality that $f_{\#}(\zeta_B)=0$.
\end{claim}
\begin{proof}[Proof of claim]
Set $\zeta=f_{\#}(\zeta_B)$. Since $A$ is $(l-1)$-acyclic, we find a $(\leq l)$-chain $\eta$ in $A$ with $\partial \eta=\zeta$. Write $\eta=\sum_i\orient{\tau}_i$. Since $\tau_i\in\mathrm{Sk}^l(A)$, we can find a chain $\eta'=\sum_i\orient{\sigma}_i$ in $B$, with $\mathrm{dim}(\sigma_i)=\mathrm{dim}(\tau_i)$ and $f_{\#}(\orient{\sigma}_i)=\orient{\tau}_i$. One can now replace $\zeta_B$ with $\zeta_B-\partial\eta'$, which satisfies all the desired properties. Moreover, if $\zeta_B-\partial\eta'=\partial \eta''$ for some chain $\eta''$, then $\zeta_B=\partial(\eta'+\eta'')$.
\end{proof}
By Lemma~\ref{L:aux1} we can further assume that $\zeta_B$ is in fact a $(\leq l-1)$-chain. As a consequence,
\[
\zeta_B = \sum_i (\orient{\sigma}_i+\orient{\tau}_i) + \zeta',
\]
where $f_\#(\orient{\sigma}_i)+f_\#(\orient{\tau}_i)=0$, $\mathrm{dim}(f\sigma_i)=\mathrm{dim}(\sigma_i)=\mathrm{dim}(\tau_i)=\mathrm{dim}(f\tau_i)=l-1$, and
$\zeta'$ is an $(\leq l-2)$-chain. Let $\rho_i=f\sigma_i=f\tau_i$. By
Lemma~\ref{L:aux2}, for each $i$, there is a chain $\epsilon_i$ and a chain $\gamma_i$ in $ f^{-1}(\mathrm{Bd}(\Delta(\rho_i)))$ such that $\orient{\sigma}_i+\orient{\tau}_i = \partial\epsilon_i + \gamma_i$.
Thus,
\[\zeta_B = \partial (\sum_i \epsilon_i) + (\sum_i \gamma_i + \zeta').\]
One checks now that $f_{\#}(\sum_i \gamma_i + \zeta')$ is a chain in $\mathrm{Sk}^{l-1}(A)$ and the above equation implies that $\partial(\sum_i \gamma_i + \zeta')=0$. By inductive assumption we can find $\eta$ with $\partial\eta =(\sum_i \gamma_i + \zeta')$ and set $\eta_B=(\sum_i \epsilon_i)+\eta$ to be the required chain.
\end{proof}
As in Section \ref{S: Menger curve}, we can now construct generic sequences for $\mathcal{C}_n$ and $\widetilde{\mathcal{C}}_n$ whose inverse limits we denote by $\mathbb{M}^n$ and $\widetilde{\mathbb{M}}^n$ respectively. Both $\mathbb{M}^n$ and $\widetilde{\mathbb{M}}^n$ are compact $n$-dimensional simplicial complexes and as in Theorem \ref{T: Menger is Menger} it is easy to see that the relation $R$, where $xRy$ iff there is a face $\sigma$ with $x,y\in\sigma$, is an equivalence relation. We let $|\mathbb{M}^n|=\mathbb{M}^n/R$ and $|\widetilde{\mathbb{M}}^n|=\widetilde{\mathbb{M}}^n/R$. It follows that $|\mathbb{M}^0|$ and $|\widetilde{\mathbb{M}}^0|$ are both homeomorphic to the Cantor space $2^{\mathbb{N}}$; both $|\mathbb{M}^1|$ and $|\widetilde{\mathbb{M}}^1|$ are homeomorphic to the Menger curve $|\mathbb{M}|$; and as in Theorem \ref{T: Menger is Menger} one can see that both $|\mathbb{M}^n|$ and $|\widetilde{\mathbb{M}}^n|$ are Peano continua. While one expects $|\mathbb{M}^n|$ to be the usual Menger compactum of dimension $n$ (see \cite{Be}), we observe that for $n>1$, the complex $\widetilde{\mathbb{M}}^n$ admits quotients $A\in \widetilde{\mathcal{C}}_n$ which are $(n-1)$-acyclic but not $(n-1)$-connected. To the best of our knowledge these ``homology Menger spaces'', and for $n=\infty$ this ``homology Hilbert cube'', have not appeared elsewhere in the literature.
\section{Introduction}\label{intro}
Tracing the multi-wavelength evolution of supernovae (SNe) over many years, and even decades, can provide important clues about the shock physics, circumstellar environment, and dust production. The current ground-based transient surveys ensure the optical follow-up of hundreds of SNe per year, but these observations are typically at early times during the photospheric phase. Late-time optical spectra and/or non-optical observations are more rare because they require large apertures or space telescopes.
The {\it Spitzer Space Telescope} (hereafter {\it Spitzer}) has been the primary source of mid-infrared (mid-IR) observations of many SNe. Between 2003 and 2009, in the cryogenic (or Cold Mission) phase, only a moderate number ($<$50) of nearby SNe were targeted by {\it Spitzer}. Since 2009, during the post-cryogenic (Warm Mission) phase, more than 150 additional SNe have been targeted. Two surveys, in particular, contributed to this surge: a program aimed at observing a large sample of Type IIn SNe \citep[73 observed SN sites, 13 detected targets, see][]{Fox11,Fox13}, and the SPitzer InfraRed Intensive Transients Survey (SPIRITS), a systematic mid-IR study of nearby galaxies \citep{Kasliwal17}. SPIRITS has resulted in the detection of 44 objects of various types of SNe \citep[observing 141 sites,][]{Tinyanont16}, three obscured SNe missed by previous optical surveys \citep{Jencson17,Jencson18}, and a large number of other variables and transients including ones with unusual infrared behavior \citep{Kasliwal17}.
These mid-IR observations have several advantages over optical observations, including increased sensitivity to the ejecta as it expands and cools, a smaller impact from interstellar extinction, and coverage of atomic and molecular emission lines generated by shocked gas as it cools \citep[see e.g.][]{Reach06a}. Most of the mid-IR observations are sensitive to warm dust in the SN environment. The origin and heating mechanism of the dust, however, are not always obvious, as the dust may be newly formed or pre-existing in the circumstellar medium (CSM). Newly-condensed dust may form either in the ejecta or in a cool dense shell (CDS) produced by the interaction of the ejecta forward shock with a dense shell of CSM \citep[see e.g.][]{Pozzo04,Mattila08,Smith09}. Pre-existing dust may be radiatively heated by the peak SN luminosity or by X-rays generated by late-time CSM interaction, thereby forming an IR echo \citep[see e.g.][]{Bode80,Dwek83,Graham86,Sugerman03,Kotak09}. In this case, the dust is a useful probe of the CSM characteristics and the pre-SN mass loss from either the progenitor or companion star \citep[see e.g.][for a review]{Gall11}.
Based on theoretical expectations \citep[see e.g.][]{Kozasa09,Gall11}, Type II-P explosions are likely the best candidates for dust formation among SNe. Some of these objects were targets of {\it Spitzer} observations in the early years of the mission. These data typically trace dust formation $\sim$1-3 yr after explosion and estimate the physical parameters of newly-formed dust. In addition to several detailed studies of single objects \citep[e.g.][]{Meikle06,Meikle07,Meikle11,Sugerman06,Kotak09,Andrews10,Fabbri11,Szalai11}, \citet{Szalai13} presented an analysis of twelve Type II-P SNe, yielding nine detections and three upper limits. The results do not support the theoretical prediction of significant ($\gg 0.001~M_{\odot}$) dust production in SNe or the large dust masses observed in some old SN remnants and/or high-redshift galaxies. Several ways to reconcile this inconsistency include imperfections of grain condensation models, the possibility of clumpy dust formation, or significant grain growth in the interstellar medium (ISM) (see \citet{Gall11} for a review, as well as \citet{Szalai13}). Another possibility is that a significant amount of dust may be present in the form of very cold ($<$50 K) grains in the ejecta, but to date, far-IR and sub-mm observations have only been able to detect such dust in the very nearby case of SN 1987A \citep{Matsuura11,Matsuura15,Indebetouw14,Wesson15}.
Type IIn SNe exhibit signatures of interaction between the ejecta and dense CSM. This shock interaction may lead to either heating of pre-existing circumstellar grains or dust condensation in the CDS that can form between the forward and reverse shock. Papers on individual objects \citep[e.g.][]{Gerardy02,Fox10,Andrews11a,Gall14}, together with the comprehensive {\it Spitzer} study of SNe IIn mentioned above \citep{Fox11,Fox13}, show how the mid-IR evolution can be used to trace the mass-loss history of the progenitor in the years leading up to the SN.
In contrast with the relatively large number of Type II-P and IIn SNe with published {\it Spitzer} data, there are fewer published mid-IR observations of thermonuclear explosions of C/O white dwarfs (Type Ia SNe) or stripped-envelope core-collapse SNe (SE CCSNe, including Type Ib/c, Ibn, and IIb ones). Historically, these SN subclasses are less likely to form new dust due to their high ejecta velocities and less likely to have pre-existing, dense CSMs. For example, \citet{Chomiuk16} and \citet{Maeda15a} use radio and near-IR observations, respectively, to place strict upper limits on the amount of material surrounding SNe Ia.
In recent years, however, many SNe within the SNe Ia and stripped-envelope subclasses have shown signs of a dense CSM and/or warm dust. For example, SNe Ia-CSM, which are thought to be thermonuclear explosions occurring within dense, H-rich shells of ambient CSM \citep[producing IIn-like emission features in their late-time spectra, see e.g.][]{Silverman13,Fox15,Inserra16}, are very bright in the mid-IR even 3-4 years after explosion \citep{FF13,Graham17}. The subluminous thermonuclear Type Iax SN~2014dt showed an excess of mid-IR emission (over the expected fluxes of more normal SNe Ia) at $\sim$1 yr after explosion \citep[][see also Section \ref{res_ev}]{Fox16}, and an excess of near-IR emission from circumstellar dust was observed around the super-Chandrasekhar candidate SN~2012dn \citep{Yamanaka16,Nagao17}.
Some stripped-envelope SNe also show mid-IR emission at late times. For example, the Type Ic SN 2014C showed an excess of mid-IR emission developing $\sim$1 year post-explosion \citep{Tinyanont16}, as did several SNe IIb, including SN 2013df \citep{Szalai16,Tinyanont16} and SN~2011dh \citep{Helou13}. The Type Ibn SN subclass, which shows narrow helium lines, does not typically show a late-time mid-IR excess (with respect to the expected flux level originating from the cooling ejecta). However, the Type Ibn SN~2006jc was bright in early-time {\it Spitzer} images \citep{Mattila08}.
Despite the relatively high number of SNe with reported {\it Spitzer} observations, most of the analysis consists of single object papers. There have been some broader studies on SNe IIn \citep{Fox11,Fox13}, SNe IIP \citep{Szalai13}, and SNe Ia \citep{Johansson17}, and a SPIRITS summary by \citet{Tinyanont16}, which includes observations of $\sim$140 core-collapse SNe within 20 Mpc.
The motivation of the current work, however, is to provide a complete review of all SNe currently in the {\it Spitzer} archive to compare the mid-IR properties of different SN subclasses. This paper includes mid-IR observations of more than 1100 SN positions, at which 119 objects have been detected. Within this detected sample, many observations were previously unpublished, and 45 targets were observed serendipitously during other science programs.
In Section \ref{obs}, we describe the steps of data collection and photometry of {\it Spitzer}/IRAC (Infrared Array Camera) data. We present our results in Section \ref{res}, including a statistical analysis of the mid-IR evolution of the different SN subclasses and simple model fits to the spectral energy distributions (SEDs). Finally, the conclusions of our study are presented in Section \ref{conc}.
\section{Observations and data analysis}\label{obs}
\subsection{Collection of supernova data from the {\it Spitzer} Heritage Archive}\label{obs_coll}
Using the list of SNe on the website of the Central Bureau for Astronomical Telegrams (CBAT)\footnote{http://www.cbat.eps.harvard.edu/lists/Supernovae.html}, and the website of the All-Sky Automated Survey for Supernovae \citep[ASAS-SN,][]{Shappee14,Holoien17a,Holoien17b,Holoien17c}\footnote{http://www.astronomy.ohio-state.edu/$\sim$assassin}, we selected all SNe that were discovered before 2015 and have been spectroscopically classified. We also selected additional nearby ($z \lesssim$ 0.05) SNe listed in the Open Supernova Catalog\footnote{https://sne.space} \citep{Guillochon17}. This search returned $\sim$4500 objects, all of which have had their positions searched in the {\it Spitzer} Heritage Archive (SHA)\footnote{http://sha.ipac.caltech.edu} (using a 100$\arcsec$ search radius for the queries). We found 1142 SN sites that have been observed post-explosion with {\it Spitzer}. For these SNe, we downloaded the available IRAC data for further analysis whether or not the data had been previously published. We note that although MIPS (Multiband Imaging Photometer) and IRS (Infrared Spectrograph) data can also contribute to the understanding of the mid-IR behavior of SNe \citep[see e.g.][]{Kotak06,Kotak09,Gerardy07,Fabbri11,Szalai11,Szalai13}, only a few objects were observed with these instruments, so we focus only on IRAC data in this article.
\subsection{Object identification and photometry on {\it Spitzer}/IRAC images}\label{obs_phot}
We collected and analyzed all available IRAC post-basic calibrated data (PBCD). The scale of these images is 0.6$\arcsec$/pixel. Identifying a point source at the position of an SN explosion can be difficult at the large distances to some of these galaxies, where compact \ion{H}{2} regions or the host clusters of SNe may also appear as point-like sources on {\it Spitzer}/IRAC images. Furthermore, the target can be faint or on top of a complex background. We therefore performed image subtraction with HOTPANTS\footnote{http://www.astro.washington.edu/users/becker/hotpants.html} whenever a template exists (Fig. \ref{fig:sne_new}). This procedure achieved a good match between the background levels of target and template frames, resulting in net background levels close to zero in the subtracted images.
However, not all targets have templates. In these cases, the local background was estimated by measuring the actual flux fluctuations with apertures placed over the region of the SN site.
In all cases (including either image-subtracted or non-subtracted images), we defined the source as a positive detection if i) the source showed epoch-to-epoch flux changes, and ii) its flux was {\it above} the local background by at least 5 $\mu$Jy and 15 $\mu$Jy at 3.6 and 4.5 $\mu$m, respectively (according to point-source sensitivities in Table 2.10 of the IRAC Instrument Handbook version 2).
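For clarity, this selection cut can be summarized by the short Python sketch below; the threshold values are the ones quoted above, while the function and variable names are of our own choosing.
\begin{verbatim}
import numpy as np

# Point-source sensitivity thresholds quoted above, in microJy.
THRESHOLD_UJY = {"3.6um": 5.0, "4.5um": 15.0}

def is_positive_detection(fluxes_ujy, local_bkg_ujy, channel):
    """Criteria (i) and (ii): epoch-to-epoch flux variability, and a flux
    above the local background by at least the channel threshold."""
    fluxes = np.asarray(fluxes_ujy, dtype=float)
    varies = fluxes.size > 1 and not np.allclose(fluxes, fluxes[0])
    above = np.any(fluxes - local_bkg_ujy >= THRESHOLD_UJY[channel])
    return varies and above

# Hypothetical fading source measured at three epochs at 4.5 micron.
print(is_positive_detection([180.0, 95.0, 40.0],
                            local_bkg_ujy=12.0, channel="4.5um"))
\end{verbatim}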
Moreover, in some cases, only a single-epoch set of {\it Spitzer} observations is available; thus, epoch-to-epoch flux changes cannot be used as indicators of the presence of SNe.
In these cases, as a first step, we used archival pre-explosion 2MASS JHK$_s$ images to exclude potential false-positive detections (compact \ion{H}{2} regions, etc.). For the precise astrometric comparison, we collected the absolute coordinates of the SNe concerned from the Open Supernova Catalog and derived their (x,y) coordinates in the {\it Spitzer}/IRAC images (note that the uncertainties of the absolute SN coordinates have not been reported in most cases). {\it Spitzer}/IRAC post-BCD images are registered to 2MASS with an accuracy of 0.15\arcsec (see the IRAC Instrument Handbook\footnote{https://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/}); an additional limit is the 0.6\arcsec/pixel resolution of {\it Spitzer}/IRAC PBCD images. The basic astrometric criterion of a potential positive detection was agreement between the absolute SN coordinates and the position of the photometric center of the mid-IR point source within two IRAC pixels (1.2\arcsec).
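The astrometric criterion itself is a simple angular-separation cut; a possible implementation with astropy is sketched below (the coordinates and offsets in the example are invented for illustration).
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

IRAC_PIXEL = 0.6 * u.arcsec      # PBCD pixel scale
MATCH_RADIUS = 2 * IRAC_PIXEL    # two-pixel criterion, i.e. 1.2 arcsec

def positions_match(sn_ra_deg, sn_dec_deg, src_ra_deg, src_dec_deg):
    """True if the catalogued SN position and the photometric centre of the
    mid-IR point source agree to within two IRAC pixels."""
    sn = SkyCoord(sn_ra_deg * u.deg, sn_dec_deg * u.deg)
    src = SkyCoord(src_ra_deg * u.deg, src_dec_deg * u.deg)
    return sn.separation(src) <= MATCH_RADIUS

# Invented example: an offset of ~0.9 arcsec passes, ~2.5 arcsec does not.
print(positions_match(150.0, 2.0, 150.00025, 2.0))   # True
print(positions_match(150.0, 2.0, 150.00070, 2.0))   # False
\end{verbatim}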
In the second step, we carried out aperture photometry on the pre-explosion 2MASS JHK$_s$ images (using the same aperture and annulus/dannulus parameters as during the {\it Spitzer} photometry). Since, in most cases, there are no detectable point sources on the 2MASS images at the positions of the SNe, it was not possible to estimate reliable photometric errors based on photon statistics; instead, we have used a $\pm$0.4 mag value as a general photometric error, based on the upper limit of 2MASS photometric uncertainties reported in \citet{Skrutskie06}.
In order to reveal the presence of any real mid-IR excess in post-explosion {\it Spitzer}/IRAC images, we fitted simple blackbodies to the SEDs consisting of the upper limits from the pre-explosion 2MASS photometry (assuming the general uncertainty of 0.4 mag mentioned above). The photometric criterion of a positive detection was that the {\it Spitzer}/IRAC flux lies above the fitted SED by at least 3$\sigma$ in at least one IRAC channel.
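A simplified sketch of this test is given below, with scipy used for the blackbody fit. The effective wavelengths, initial guesses, and parameter bounds are illustrative assumptions rather than the exact values used for the photometry, and the flux units only need to be consistent between the 2MASS and IRAC points.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23          # SI constants

def planck_flam(lam_um, scale, temp):
    """A scaled blackbody F_lambda evaluated at wavelengths given in micron."""
    lam = np.asarray(lam_um) * 1e-6
    return scale * (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp))

LAM_2MASS = np.array([1.24, 1.66, 2.16])   # J, H, Ks effective wavelengths
LAM_IRAC = np.array([3.6, 4.5])

def shows_mid_ir_excess(flam_2mass_limits, flam_irac, sigma_irac):
    """Fit a blackbody to the pre-explosion 2MASS upper limits and test whether
    any post-explosion IRAC flux lies more than 3 sigma above the extrapolation."""
    scale0 = np.max(flam_2mass_limits) / planck_flam(2.16, 1.0, 4000.0)
    popt, _ = curve_fit(planck_flam, LAM_2MASS, flam_2mass_limits,
                        p0=(scale0, 4000.0),
                        bounds=([0.0, 500.0], [np.inf, 5.0e4]))
    model_irac = planck_flam(LAM_IRAC, *popt)
    return bool(np.any(np.asarray(flam_irac) >
                       model_irac + 3 * np.asarray(sigma_irac)))
\end{verbatim}
With only three band upper limits the fitted temperature is poorly constrained, which is one reason only a 3$\sigma$ excess above the extrapolation is taken as significant.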
In total, we labeled 7 SNe with single-epoch {\it Spitzer} data as positive detections; we note that all of these SNe are expected to show strong mid-IR radiation at the given epoch (strongly interacting SNe IIn, or early-caught SNe of other types). We present all the pairs of images ({\it Spitzer}/IRAC + pre-explosion 2MASS K$_s$) and the SED fits that led us to select the single-epoch positive detections, together with an example of a negative detection, in Appendix B.
We performed a photometric analysis for all positive detections at every epoch. For isolated sources, we implemented aperture photometry on the PBCD frames using the \texttt{phot} task of IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} as a first step. We generally used an aperture radius of 2\arcsec~and a background annulus from 2\arcsec~to 6\arcsec~(2$-$2$-$6 configuration), and applied aperture corrections of 1.213, 1.234, 1.379, and 1.584 for the four IRAC channels (3.6, 4.5, 5.8, and 8.0 $\mu$m, respectively) as given in the IRAC Data Handbook, but sometimes used the 3$-$3$-$7 configuration (aperture corrections: 1.124, 1.127, 1.143, and 1.234, respectively) or the 5$-$12$-$20 configuration (aperture corrections: 1.049, 1.050, 1.058, and 1.068, respectively). For targets with templates, we compared the results before and after template subtraction to test for consistency. We generally found good agreement between the two methods ($\lesssim$10\% difference in fluxes, which is within the approximate uncertainty of {\it Spitzer}/IRAC photometry). In the few cases where the difference between the two methods was more than 10\%, we preferred the results of image subtraction photometry.
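This aperture photometry step can be reproduced, for example, with photutils; the sketch below implements the 2$-$2$-$6 configuration with the channel 1 and 2 corrections quoted above, while the conversion of the resulting sum from the native surface-brightness units of the PBCD frames to $\mu$Jy (via the pixel solid angle) is left out. The function name and the example call are ours.
\begin{verbatim}
from photutils.aperture import (CircularAnnulus, CircularAperture,
                                aperture_photometry)

APCORR = {"ch1": 1.213, "ch2": 1.234}   # aperture corrections, 2-2-6 configuration
PIX = 0.6                               # IRAC PBCD pixel scale [arcsec / pixel]

def sn_aperture_sum(image, x, y, channel, r=2.0, r_in=2.0, r_out=6.0):
    """Background-subtracted, aperture-corrected sum for a point source at
    pixel position (x, y); radii are given in arcsec and converted to pixels."""
    aper = CircularAperture([(x, y)], r=r / PIX)
    annulus = CircularAnnulus([(x, y)], r_in=r_in / PIX, r_out=r_out / PIX)
    aper_sum = aperture_photometry(image, aper)["aperture_sum"][0]
    bkg_per_pix = (aperture_photometry(image, annulus)["aperture_sum"][0]
                   / annulus.area)
    return (aper_sum - bkg_per_pix * aper.area) * APCORR[channel]

# Hypothetical usage on a 2-D array read from a PBCD frame:
# net = sn_aperture_sum(image, x=123.4, y=98.7, channel="ch2")
\end{verbatim}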
For sources on top of complex backgrounds without a corresponding template, we implemented the photometric method described by \citet{Fox11} (hereafter the ``Fox+11 method''). This method applies a set of single apertures with a fixed radius to estimate both the SN and average background flux. This technique allows us to visually select only the local background associated with the SN, as opposed to the annuli used in the aperture configurations mentioned above.
We compared our results to any previously published {\it Spitzer}/IRAC SN photometry. In general, we found good agreement with the published values ($\lesssim$10\% difference in fluxes). In a few cases, the flux differences are larger, but each of these cases involves either a very faint target or a complex sky background (or both).
The target details and resulting mid-IR photometry of all SNe with previously unpublished {\it Spitzer} photometry are listed in Tables \ref{tab:sn} and \ref{tab:phot}, respectively. We clearly highlight SNe identified on a single-epoch set of {\it Spitzer} images, as well as all the other SNe where image subtraction cannot be applied; in all these cases, measured fluxes are strictly handled as upper limits. Flux uncertainties in Table \ref{tab:phot} are generally based on photon statistics provided by \texttt{phot}, but, where photometry was carried out on subtracted images, the $\sqrt{2}$ increase of the noise level is also taken into account.
\begin{figure}
\centering
\includegraphics[width=10cm]{sne_new2.eps}
\caption{HOTPANTS template subtraction of our {\it Spitzer} data. For each SN, the three panels show (left) the most recent {\it Spitzer}/IRAC 4.5 $\mu$m image, (center) template, and (right) differenced image.
\label{fig:sne_new}}
\end{figure}
\section{Results}\label{res}
\subsection{Demographics}\label{res_stat}
The total number of the observed SN positions is over 1100. The majority of SNe are nearby (z$<$0.05). We detect 119 SNe, including 45 objects that have no previously published {\it Spitzer} photometry. Only $\sim$12\% of the SN sites were observed pre-explosion. We also highlight three specific targets (SNe 2012aw, 2012fh, and 2013ee), which have been noted by \citet{Tinyanont16} as positive {\it Spitzer} detections, but without any corresponding photometry. We summarize the statistics of our SN sample in Table \ref{tab:stat} and Figs. \ref{fig:stat} and \ref{fig:stat_II}.
About 40$\%$ of the objects (mostly Type Ia) are located in distant, anonymous galaxies, but these observations did not yield any SN detections. There are also $\sim$60 SNe located in complex regions of their host galaxies, typically very close to the galaxy nuclei. In these cases, even template subtraction is not effective due to the asymmetric profile of the IRAC point-spread function (PSF). We do not include any of these SNe in this analysis.
Following the methods presented by \citet{Tinyanont16}, we present the detection rates separated into three time bins after discovery: less than one year, one to three years, and more than three years. If a SN is observed with at least one detection in a bin, it is considered detected in that bin, even though it might fade away later in the same bin.
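The binning rule can be written down compactly; in the sketch below the bin edges and labels follow the text, while the function name is ours.
\begin{verbatim}
import numpy as np

BIN_EDGES_DAYS = [0.0, 365.0, 3 * 365.0, np.inf]
BIN_LABELS = ["<1 yr", "1-3 yr", ">3 yr"]

def detections_per_bin(epochs_days, detected):
    """A SN counts as detected in a bin if at least one epoch falling in that
    bin is a detection, even if it fades later within the same bin."""
    flags = {}
    for label, lo, hi in zip(BIN_LABELS, BIN_EDGES_DAYS[:-1], BIN_EDGES_DAYS[1:]):
        flags[label] = any(d for t, d in zip(epochs_days, detected) if lo <= t < hi)
    return flags

# Hypothetical SN observed at four epochs and detected only in the first two.
print(detections_per_bin([90, 400, 800, 1500], [True, True, False, False]))
\end{verbatim}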
\begin{longrotatetable}
\begin{deluxetable}{c|cccc|ccccccc|ccccc}
\tabletypesize{\scriptsize}
\tablecaption{\label{tab:stat} Statistics of the {\it Spitzer}/IRAC data regarding the sample of studied SNe.}
\tablehead{}
\startdata
~ & \multicolumn{16}{c}{{\bf Total number of observed SN sites: 1142/693$^{\dagger}$}} \\
Total number of & \multicolumn{4}{c}{Thermonuclear SNe} & \multicolumn{7}{c}{Stripped-envelope CC SNe} & \multicolumn{5}{c}{Type II SNe} \\
observed SN sites & Ia & Ia-pec & Iax & Ia-CSM & Ib & Ib-pec & Ibn & Ib/c & Ic & Ic-pec & IIb & II-P & II-P pec. & IIn & II-L & Unclass. SN II \\
~ & 723/294$^{\dagger}$ & 25/23$^{\dagger}$ & 8 & 5 & 59/53$^{\dagger}$ & 1 & 2 & 1 & 73/63$^{\dagger}$ & 5/4$^{\dagger}$ & 25 & 36 & 2 & 101 & 4 & 72 \\
\hline
~ & \multicolumn{16}{c}{{\bf SN sites with multiple observations: 553/334$^{\dagger}$}} \\
SN sites with & \multicolumn{4}{c}{Thermonuclear SNe} & \multicolumn{7}{c}{Stripped-envelope CC SNe} & \multicolumn{5}{c}{Type II SNe} \\
multiple observations & Ia & Ia-pec & Iax & Ia-CSM & Ib & Ib-pec & Ibn & Ib/c & Ic & Ic-pec & IIb & II-P & II-P pec. & IIn & II-L & Unclass. SN II \\
~ & 325/112$^{\dagger}$ & 9 & 4 & 5 & 27/25$^{\dagger}$ & 1 & 1 & -- & 35/33$^{\dagger}$ & 3 & 14 & 32 & 2 & 38 & 4 & 53 \\
\hline
~ & \multicolumn{16}{c}{{\bf SN sites with pre-explosion images: 111/87$^{\dagger}$}} \\
SN sites with& \multicolumn{4}{c}{Thermonuclear SNe} & \multicolumn{7}{c}{Stripped-envelope CC SNe} & \multicolumn{5}{c}{Type II SNe} \\
pre-explosion images & Ia & Ia-pec & Iax & Ia-CSM & Ib & Ib-pec & Ibn & Ib/c & Ic & Ic-pec & IIb & II-P & II-P pec. & IIn & II-L & Unclass. SN II \\
~ & 43/20$^{\dagger}$ & 3 & 2 & -- & 10 & -- & 1 & -- & 9/8$^{\dagger}$ & -- & 4 & 10 & 2 & 9 & 1 & 17 \\
\hline
~ & \multicolumn{16}{c}{{\bf Total number of positive detections: 119}} \\
Total number of & \multicolumn{4}{c}{Thermonuclear SNe} & \multicolumn{7}{c}{Stripped-envelope CC SNe} & \multicolumn{5}{c}{Type II SNe} \\
positive detections & Ia & Ia-pec & Iax & Ia-CSM & Ib & Ib-pec & Ibn & Ib/c & Ic & Ic-pec & IIb & II-P & II-P pec. & IIn & II-L & Unclass. SN II \\
~ & {\bf 24} & 1 & 2 & 5 & {\bf 5} & -- & {\bf 1} & 1 & 7 & 1 & 7 & 22 & 1 & {\bf 25} & 2 & {\bf 15} \\
\hline
~ & \multicolumn{16}{c}{{\bf Unpublished positive detections: 45}} \\
Unpublished & \multicolumn{4}{c}{Thermonuclear SNe} & \multicolumn{7}{c}{Stripped-envelope CC SNe} & \multicolumn{5}{c}{Type II SNe} \\
positive detections & Ia & Ia-pec & Iax & Ia-CSM & Ib & Ib-pec & Ibn & Ib/c & Ic & Ic-pec & IIb & II-P & II-P pec. & IIn & II-L & Unclass. SN II \\
~ & {\bf 13} & 1 & 1 & 2 & {\bf 3} & -- & -- & 1 & 2 & -- & 4 & 4 & -- & {\bf 6} & 1 & {\bf 7} \\
\hline
\enddata
\tablecomments{$^{\dagger}$Total number of objects / Number of objects excluding SNe in distant, anonymous galaxies}
\end{deluxetable}
\end{longrotatetable}
\begin{figure*}
\includegraphics[width=5cm]{hist1_Ia_new.eps}
\includegraphics[width=5cm]{hist1_Ibc_new.eps}
\includegraphics[width=5cm]{hist1_IIb_new.eps}
\includegraphics[width=5cm]{hist13_Ia_new.eps}
\includegraphics[width=5cm]{hist13_Ibc_new.eps}
\includegraphics[width=5cm]{hist13_IIb_new.eps}
\includegraphics[width=5cm]{hist3_Ia_new.eps}
\includegraphics[width=5cm]{hist3_Ibc_new.eps}
\includegraphics[width=5cm]{hist3_IIb_new.eps}
\caption{The statistics of detected Type Ia and stripped-envelope CC SNe in our {\it Spitzer}/IRAC sample. The statistics are divided by type and epoch. The number of detections is plotted as a function of distance in each case. We do not include SNe located in distant ($z\gtrsim$0.05), anonymous galaxies and/or too close to the center of their hosts. We also exclude most SNe with only single-epoch {\it Spitzer}/IRAC observations.}
\label{fig:stat}
\end{figure*}
\begin{figure*}
\includegraphics[width=5cm]{snII_hist1_IIP.eps}
\includegraphics[width=5cm]{snII_hist1_IIn.eps}
\includegraphics[width=5cm]{snII_hist1_II.eps}
\includegraphics[width=5cm]{snII_hist13_IIP.eps}
\includegraphics[width=5cm]{snII_hist13_IIn.eps}
\includegraphics[width=5cm]{snII_hist13_II.eps}
\includegraphics[width=5cm]{snII_hist3_IIP.eps}
\includegraphics[width=5cm]{snII_hist3_IIn.eps}
\includegraphics[width=5cm]{snII_hist3_II.eps}
\caption{Same as Figure \ref{fig:stat}, except in this case for SNe II-P, Type IIn, and other (unclassified) Type II SNe.}
\label{fig:stat_II}
\end{figure*}
\subsection{Mid-IR evolution: trends and outliers}\label{res_ev}
Fig. \ref{fig:absmag} plots the mid-IR photometry of all SNe with positive {\it Spitzer} detections. Table \ref{tab:absmag_all}
lists all corresponding Vega magnitudes, distances, and $E(B-V)$ values. For plotting purposes, Fig. \ref{fig:absmag} excludes some objects with decade-long mid-IR datasets -- e.g., the Type II-pec SN~1987A \citep{Dwek10} or the Type II-L SN~1979C \citep{Tinyanont16} -- but these SNe are included in our statistical analysis. Figs. \ref{fig:absmag_Ia}-\ref{fig:absmag_II} highlight the SN subclasses so that individual SNe can be identified and photometric details can be ascertained. Tables \ref{tab:sn} and \ref{tab:absmag_all} contain all the sources of previously published {\it Spitzer} data we used for constructing Figs. \ref{fig:stat}-\ref{fig:absmag_II} and for the analysis we present below.
\begin{figure*}
\includegraphics[width=15cm]{abs_mag_45_total.eps}
\caption{4.5 $\mu$m absolute Vega magnitudes of all SNe identified as point sources in {\it Spitzer}/IRAC images. Values and sources of data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag}
\end{figure*}
\subsubsection{Thermonuclear SNe}
This work more than doubles the number of SNe Ia with positive mid-IR detection (33 vs. 15). Fig. \ref{fig:absmag_Ia} shows that most Type Ia SNe have a relatively well-defined evolution compared to the other SN subclasses, consistent with previous results \citep{Tinyanont16, Johansson17}. The handful of Type Ia-CSM SNe, however, are extremely bright at mid-IR wavelengths. This sample is small, so it is difficult to draw any definitive conclusions about the overall trend. For example, PTF11kx ($D \sim$200 Mpc) is still detectable at $\sim$1800 days, while SN~2002ic ($D \sim$280 Mpc) faded at a similar age.
\begin{figure}
\includegraphics[width=15cm]{abs_mag_Ia_total.eps}
\caption{Mid-IR evolution of thermonuclear SNe; highlighted objects are marked with black symbols, while filled and empty symbols denote SNe whose absolute magnitudes were determined with or without template subtraction, respectively. Values and sources of data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag_Ia}
\end{figure}
We do not find any previously unpublished SNe Ia with mid-IR fluxes comparable to those of the known SNe Ia-CSM. This result suggests that a dense CSM is rare in the environments of thermonuclear SNe (which may also hint that SNe Ia-CSM arise from different progenitor systems than the majority of SNe Ia), or that CSM shells may be far away from the explosion sites. Based on existing (rough) estimates, SN Ia-CSM objects may account for between 1\% and 5\% of all SNe Ia \citep[see e.g.][]{MP18}, which seems to be supported by our results; however, future systematic surveys are necessary for a thorough study of this problem.
We do find some other SNe Ia that deviate from the expected mid-IR evolution. SNe 2010B and 2010gp, observed at early times, are noticeably brighter. Note, however, that SN 2010B has a complex background that may be contributing additional mid-IR flux. On the other hand, SN 2010gp has a template for subtraction, so these results are quite robust. SN~2011iy, which is relatively nearby ($d \sim$20 Mpc), also appears as a point source at 4.5 $\mu$m after image subtraction at $\sim$1030 days after explosion (see Fig. \ref{fig:sne_new}). This SN, however, is not detectable at 3.6 $\mu$m.
SN~2014dt, classified as a Type Iax SN, should also be highlighted here: this object shows a clear and even growing mid-IR excess $\sim$1 yr after explosion, which has been explained by the presence of newly-formed dust, pre-existing dust, or possibly a bound remnant \citep{Fox16,Foley16}. The only other SN Iax we identified as a mid-IR source on {\it Spitzer} images is SN~2005P. While SN 2005P seems to have a slight 8.0 $\mu$m~detection at $\sim$180 days, there are no comparable data for SN~2014dt. By $\sim$1 year post-explosion, SN 2005P has faded below the detection threshold.
\subsubsection{Stripped-envelope CC SNe}
Fig. \ref{fig:absmag_Ibc} plots the mid-IR absolute magnitudes of SE CCSNe. The stripped-envelope designation encompasses various subclasses, including SNe IIb, Ib, and Ic, so their mid-IR evolution exhibits a bit of heterogeneity, particularly at later times.
The mid-IR evolution of ``normal'' Type Ib/Ic SNe seems to be the fastest among SE CCSNe. SN~2014C presents itself as a special case in which the explosion transforms from a ``normal'' Type Ib into a strongly-interacting, Type IIn-like SN \citep{Milisavljevic15,Margutti17}. SN~2014C is located within NGC 7331, which has been followed extensively as part of the SPIRITS program. \citet{Tinyanont16} show a roughly constant IR-luminosity in the first $\sim$800 days and a unique re-brightening at $\sim$250 days as the CSM interaction begins.
Another interesting object is SN~2001em, a strongly-interacting Type Ib/c object, which generated strong X-ray, radio and optical emission for $\sim$3 years post-explosion \citep[see][]{Stockdale04,Pooley04,Soderberg04,Chugai06}. Unlike SN 2014C, however, the transformational process was not observed by {\it Spitzer}, making a direct comparison impossible. SN~2001em was observed by {\it Spitzer} only once. Fig. \ref{fig:absmag_Ibc} shows SN~2001em is even brighter than SN~2014C, although background contributions have not been removed. We present a more detailed analysis of SN~2001em in Section \ref{res_sed}.
Finally, it is worth mentioning the one observation of SN~2011ft, a distant ($d \sim$ 100 Mpc) Type Ib SN that is as bright as SN~2014C at $\sim$250 days after explosion. With only a single 3.6 $\mu$m observation so far, more observations are planned.
Observations exist for only two SNe Ibn: SN~2006jc (4 epochs, but only one from the first year) and PS1-12sk (1 epoch). These two events are bright in mid-IR during the early-time CSM interaction, but the brightness declines quickly. Following the interpretation of \citet{Mattila08}, the early mid-IR radiation may arise from newly formed dust in the CDS, while the source of the later-time mid-IR flux is probably an IR echo from pre-existing dust in the CSM.
\begin{figure}
\includegraphics[width=15cm]{abs_mag_Ibc_IIb_total_new.eps}
\caption{Mid-IR evolution of stripped-envelope core-collapse SNe; highlighted objects are marked with labels, while filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively. Values and sources of data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag_Ibc}
\end{figure}
Among SNe IIb, the moderately interacting SN~2013df \citep{Kamble16,Maeda15b,Szalai16} produces a slowly-declining mid-IR light curve between $\sim$270-820 days \citep{Tinyanont16,Szalai16}. SN 2001gd shows a similar brightness at $\sim$950 days. SN~2011dh, one of the best-sampled SNe in the mid-IR, has also been detectable up to almost two years after explosion. The Type IIb SN 1993J ($D$ $\sim$3.7 Mpc) is detected at $>$24 years post-explosion in the mid-IR \citep{Tinyanont16}, while the Type IIb SN~2008ax ($D$ $\sim$7.8 Mpc) is not detected at even $\sim$4 years after explosion.
The differences between SNe IIb seem to correlate with the assumed sizes of the progenitors of SE CCSNe. SNe 1993J, 2001gd and 2013df, which are detected by {\it Spitzer} at later epochs, have been classified as Type eIIb objects \citep{Chevalier10,Szalai16}, which denotes that these explosions originate from extended progenitors (yellow or red giants). SN~2008ax is known as a representative of Type cIIb objects, which are defined to have more compact progenitors, similar to those of SNe Ib/c. SN~2011dh seems to be an intermediate case in both its progenitor radius ($R \sim$ a few tens of $R_{\odot}$) and mid-IR evolution.
\subsubsection{Type II-P SNe}\label{res_ev_IIP}
\begin{figure}
\includegraphics[width=15cm]{abs_mag_IIP_45.eps}
\caption{4.5 $\mu$m absolute magnitudes of Type II-P explosions. Colored symbols denote objects where mid-IR rebrightening occurred. Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively. In two cases (SNe 2004dj and 2014bi), re-brightening can only be observed at 3.6 $\mu$m (see details in the text); these curves are also shown (marked with asterisks). All data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag_IIP}
\end{figure}
Fig. \ref{fig:absmag_IIP} plots the mid-IR absolute magnitudes of Type IIP SNe, which show a relatively homogeneous mid-IR evolution. Theoretical models suggest that the ejecta of most Type II-P SNe may form dust between $\sim$300-600 days due to the slow expansion velocities and high densities. Only a few SNe show evidence for a rebrightening in the mid-IR between $\sim$300-600 days: SNe~2004dj \citep{Szalai11,Meikle11}, 2011ja \citep{Andrews16,Tinyanont16}, and 2014bi \citep{Tinyanont16}. This unexpectedly low rate may be influenced by the poor sampling of the other observed Type IIP SNe. Furthermore, while both SNe~2004dj and 2014bi show the rebrightening effect at 3.6 $\mu$m \,\citep[the first object even at 5.8 and 8.0 \micron, see][]{Szalai11}, it is not detectable at 4.5 \micron \,(there is a linear flux decline instead). \citet{Szalai11} suggest that additional flux at 4.5 $\mu$m arises from the 1$-$0 vibrational band of CO at 4.65 $\mu$m \citep[see][]{Kotak05} during the declining phase, but disappears at $\sim$500d \citep{Szalai11,Szalai13}, thereby making 4.5 \micron~light-curves difficult to interpret for SNe II-P.
Two other Type IIP SNe, 2004et \citep{Kotak09,Fabbri11} and 2007oc \citep{Szalai13}, as well as Type IIP/IIL SN~2013ej \citep{Tinyanont16,Mauerhan17} also show mid-IR rebrightening, but it occurred between $\sim$700-1000 days. This rebrightening is detected at both 3.6 and 4.5 $\mu$m (at least in the cases of SNe 2004et and 2007oc; SN~2013ej becomes undetectable at 3.6 $\mu$m after $\sim$800 days). The above papers suggest this rebrightening is due to new dust forming in the CDS behind the reverse shock and not within the ejecta.
\subsubsection{Type IIn SNe}\label{res_ev_IIn}
Fig. \ref{fig:absmag_IIn} plots the mid-IR absolute magnitudes of SNe IIn. For most SNe IIn, \citet{Fox11,Fox13} show that the mid-IR radiation arises from pre-existing dust, which is radiatively heated by optical emission generated by ongoing interaction between the forward shock and CSM. While many SNe IIn show early evidence for CSM interaction (e.g., strong emission in H$\alpha$ / X-ray / radio), only a handful of {\it Spitzer} observations exist in the first few months post-explosion. SNe 2009ip \citep[][and this work]{Fraser15} and 2011A were faint mid-IR sources in the first months, but both of these objects are considered low-luminosity Type IIn events/impostors (see the analyses of e.g. \citealt{Pastorello13}, \citealt{Fraser13}, \citealt{Mauerhan13}, \citealt{Margutti14}, and \citealt{deJaeger15}). By contrast, SN~2010jl was extremely bright in mid-IR during the first year \citep{Andrews11a,Fox13,Fransson14, Williams15}. The origin of the mid-IR excess has been debated, but is likely a combination of both newly formed and pre-existing dust \citep{Gall14,Fransson14}.
\begin{figure}
\includegraphics[width=15cm]{abs_mag_IIn_total.eps}
\caption{Mid-IR evolution of the studied Type IIn explosions. Highlighted objects are marked with colored symbols (see details in text). Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively. In the case of SN~2009ip, epochs are defined relative to the large outburst occurred in 2012. All the data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag_IIn}
\end{figure}
The mid-IR evolution of SNe IIn is heterogeneous. While many SNe IIn remain bright for years post-explosion, the decline rates are not always the same. Furthermore, many SNe IIn are not even detected \citep[see Fig. \ref{fig:stat_II}, as well as][]{Fox11}. These differences likely correspond to the extent of pre-SN mass-loss, but may also suggest different geometries, shock velocities, and progenitors.
\vspace{3mm}
\begin{figure}
\includegraphics[width=15cm]{abs_mag_II_total.eps}
\caption{Mid-IR evolution of the unclassified Type II SNe in our sample (black symbols) compared to that of known SNe II-P and IIn (gray circles and rectangles, respectively). Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively (absolute magnitudes shown here are calculated from 3.6 $\mu$m fluxes, because several objects have been observed only at this wavelength). Values and sources of data are shown in Table \ref{tab:absmag_all}.}
\label{fig:absmag_II}
\end{figure}
Fig. \ref{fig:absmag} shows that Type II-P and Type IIn SNe have quite distinct late-time mid-IR evolution. This dichotomy serves as a useful classification method for several unclassified targets in our sample (see Table \ref{tab:stat}). Most of these sources are likely Type IIP, except SNe 2005kd, 2008fq, and 2011dq, which may be SNe IIn, as shown in Fig. \ref{fig:absmag_II}.
\subsection{SED fittings: limitations, methods, consequences}\label{res_sed}
Mid-IR SEDs of SNe span the peak of the thermal emission from warm dust and can place useful constraints on the dust properties \citep[see e.g.][]{Kotak09,Szalai11,Szalai13}. In most of our sample, these fits are limited to only two photometry points. Further challenges exist. During the first several months after explosion, a hot component arising from an optically thick gas in the innermost part of the ejecta may affect the continuum emission at these wavelengths. Moreover, the line emission by CO at 4.65 $\mu$m (described in Section \ref{res_ev_IIP}) may also contribute significant flux at 4.5 $\mu$m (although this effect has only been observed in some Type II-P SNe before $\sim$500 days after explosion). Most of our sample with previously unpublished {\it Spitzer} data lacks the multi-wavelength data that can improve these fits. We also note that while the Galactic extinction (typically at a level below $E(B-V)$=0.1, see Table \ref{tab:sn}) is practically negligible at mid-IR wavelengths, the host galaxy extinction can be more important for both thermonuclear \citep[see e.g.][]{Phillips13} and core-collapse SNe \citep[see e.g.][and references therein]{Jencson18}. Unfortunately, for most of the studied SNe, we have no information about the host extinction. As a simple estimate \citep[based on the results of][]{Xue16}, an extreme value of $E(B-V)_{total}$=1.0 mag can attenuate the measured flux by $\sim$20\% at 3.6 or 4.5 $\mu$m.
We illustrate our fitting process using data from the SN IIn 2011A and SN II-P 2014cx. Both of these objects were observed by {\it Spitzer} within 3 months after explosion (at +86 and +53 days, respectively). In the case of SN~2011A, contemporaneous {\it g'r'i'z'} data can be found in \citet{deJaeger15}, and in the case of SN~2014cx, BVRI and {\it g'r'i'} data obtained at the epoch of {\it Spitzer} observations can be found in \citet{Huang16}. The mid-IR fluxes were transformed to $F_{\lambda}$ values and dereddened using the Galactic reddening law parametrized by \citet{Fitzpatrick07} assuming R$_V$ = 3.1 and adopting the $E(B-V)$ values listed in Table \ref{tab:sn}.
Fig. \ref{fig:11A_14cx} shows that single component black bodies (BBs) provide a good fit to the combined optical-IR SEDs of both SNe. Fitting only the mid-IR data yields significantly different parameters (see Table \ref{tab:models}, highlighting the shortcomings of fitting just two data points). Regardless, the SED in this case does not show any evidence for an excess of mid-IR emission above the optically peaked SED.
The Type Ib SN~2009jf also has sufficient data to construct a combined optical-IR SED adopting BVRI measurements from \citet{Sahu11b}. {\it Spitzer} data were obtained at 100 days after explosion, while optical data are from +94 and +105 days. Unlike SNe 2011A and 2014cx, Fig. \ref{fig:09jf} shows that SN 2009jf exhibits an excess at 4.5 $\mu$m, but not at 3.6 $\mu$m. Fitting the mid-IR data with a single BB is difficult.
Fig. \ref{fig:12aw_13ej} shows data for the Type II-P SN~2012aw. The earliest {\it Spitzer} observations occur on day 358, and we extrapolate V, R, and I-band data from day $\sim$330 \citep{DallOra14}. The hot component cannot be adequately modeled by a simple BB curve since the optical depth of the continuously expanding ejecta is quite low at this time. Therefore, we applied the global light-curve model of Type II-P SNe \citep[][hereafter called the PP15 model]{Pejcha15} to estimate the contribution of the hot component to the mid-IR fluxes. In order to construct the PP15 model SED, we calculated its values at the wavelengths of the BVRIJHK filters, while, at longer wavelengths, we used the Rayleigh$-$Jeans approximation (F$_{\lambda} \propto \lambda^{-4}$). Like SN 2009jf, there is an excess at 4.5 $\mu$m, indicating a warm dust component is present. Fitting this component is difficult with just two data points and complicated even further by the potential 4.5 $\mu$m line emission in SNe II-P described above.
Finally, Fig. \ref{fig:12aw_13ej} shows a similar analysis for the CSM-interacting Type II-P/II-L SN 2013ej \citep{Leonard13,Bose15,Kumar16,Dhungana16,Chakraborti16,Mauerhan17}. Despite the amount of published data, modeling of the combined (UV)-optical-IR SEDs has not been presented in the literature. Only one epoch (+236d), however, has nearly contemporaneous mid-IR and optical data \citep[][respectively]{Tinyanont16,Bose15}. Like SN 2012aw, we fit the optical data with the PP15 model with parameters given by \citet{Muller17}. We ignore the R-band data in this case, however, given the strong H$\alpha$-emission arising from CSM interaction \citep{Bose15,Huang15,Dhungana16,Mauerhan17}. The results for each fit are given in Table \ref{tab:models2}.
\begin{figure*}
\includegraphics[width=8cm]{SN2011A_86_BB.eps}
\includegraphics[width=8cm]{SN2014cx_53_BB.eps}
\caption{Comparison of single blackbody fits to the (left) Type IIn SN 2011A at +86 days, and (right) Type II-P SN 2014cx at +53 days. Fits are applied to both the combined optical-IR SEDs and only the mid-IR fluxes.}
\label{fig:11A_14cx}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm]{SN2009jf_100_models.eps}
\caption{Comparison of single blackbody fits to the Type Ib SN 2009jf at +100 days. A fit is applied to both the combined optical-IR SEDs and only the mid-IR fluxes.}
\label{fig:09jf}
\end{figure}
\begin{figure*}
\includegraphics[width=8cm]{SN2012aw_358_models.eps}
\includegraphics[width=8cm]{SN2013ej_236_models.eps}
\caption{Comparison of single blackbody fits to the (left) SN 2012aw at +358 days and (right) SN 2013ej at +236 days. Fits are applied to both the combined optical-IR SEDs and only the mid-IR fluxes. SEDs calculated using the PP15 model are marked with open rectangles.}
\label{fig:12aw_13ej}
\end{figure*}
\begin{table}
\footnotesize
\caption{\label{tab:models} Parameters of single BBs fitted to the optical-IR SEDs of SNe 2011A (IIn), 2014cx (II-P), 2009jf (Ib), and 2012aw (II-P).}
\newcommand\T{\rule{0pt}{3.1ex}}
\newcommand\B{\rule[-1.7ex]{0pt}{0pt}}
\begin{tabular}{c|cc|cc|cc|cc}
\hline
\hline
~ & \multicolumn{2}{c}{SN 2011A} & \multicolumn{2}{c}{SN 2014cx} & \multicolumn{2}{c}{SN 2009jf} & \multicolumn{2}{c}{SN 2012aw}\T \\
~ & \multicolumn{2}{c}{(IIn, +86d)} & \multicolumn{2}{c}{(II-P, +53d)} & \multicolumn{2}{c}{(Ib, +100d)} & \multicolumn{2}{c}{(II-P, +358d)}\B \\
~ & $R$ & $T$ & $R$ & $T$ & $R$ & $T$ & $R$ & $T$\T \\
~ & (10$^{16}$ cm) & (K) & (10$^{16}$ cm) & (K) & (10$^{16}$ cm) & (K) & (10$^{16}$ cm) & (K)\B \\
\hline
Opt. + IR (single BB) & 0.07 & 4100 & 0.13 & 5960 & 0.09 & 5760 & 0.05 & 3810 \T\\
IR (single BB) & 0.12 & 1990 & 0.23 & 2940 & 1.48 & 670 & 2.79 & 400\B \\
\hline
\end{tabular}
\end{table}
\begin{table*}
\scriptsize
\caption{\label{tab:models2} Parameters of two- and one-component blackbodies fitted to the combined optical-IR SED of the known interacting Type II-P/II-L SN 2013ej, together with dust parameters determined by fitting a simple analytic dust model, compared with the previously published results of \citet{Tinyanont16}.}
\newcommand\T{\rule{0pt}{3.1ex}}
\newcommand\B{\rule[-1.7ex]{0pt}{0pt}}
\begin{tabular}{c|cccc|ccc}
\hline
\hline
~ & \multicolumn{7}{c}{SN 2013ej (II-P/II-L, +236d)}\T\B \\
~ & $R_{opt}$ & $T_{opt}$ & $R_{IR}$ & $T_{IR}$ & $T_{dust}$ & $M_{dust}$ & $L_{dust}$\T \\
~ & (10$^{16}$ cm) & (K) & (10$^{16}$ cm) & (K) & (K) & (10$^{-5}$ M$_{\odot}$) & ($10^{6} L_{\odot}$)\B \\
\hline
Two-comp. BBs & 0.05 & 3910 & 3.48 & 430 & 360 & 580 & 5.3\T\\
Opt. (PP15) + IR BB & -- & -- & 1.51 & 570 & 460 & 98.8 & 3.3 \\
IR (single BB) & -- & -- & 1.25 & 620 & 490 & 69.9 & 2.9 \\
\hline
Tinyanont et al. (2016) - IR & -- & -- & -- & -- & 477 & 75.0 & 2.7 \\
\hline
\end{tabular}
\end{table*}
We performed a similar analysis on the rest of the targets in our sample. Since we are most interested in late-time emission and want to minimize contributions from the early-time photospheric light-curve, we excluded targets that did not meet certain criteria. For example, we did not include SNe without late-time observations or with only single-filter IRAC photometry. We also exclude ``normal'' Type Ia SNe, since their mid-IR photometry does not probe warm dust (but see \citealt{Nozawa11}).
For the SNe we analyze, we follow the method published in a number of papers \citep[see e.g.][]{Meikle07,Fox10,Fox11,Fox16,FF13,Szalai13,Graham17} by assuming a spherically symmetric, optically thin dust shell. We calculate the {\it minimum} shell radius by fitting BBs ($R_{BB}$) to the observed SEDs and, from the radii and the estimated ages, we also constrain the corresponding expansion velocities ($v_{BB}$) by assuming a constant expansion over time (see Table \ref{tab:dustparnew}).
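As an illustrative sketch (in Python), the blackbody velocity follows directly from the fitted minimum radius and the epoch of the observation; the input values below are placeholders, not measurements from this work.
\begin{verbatim}
# Minimal sketch: constant-expansion velocity from a fitted blackbody radius.
# R_BB and the epoch are placeholders, not values from this work.
R_BB_cm = 1.0e16            # fitted minimum blackbody radius [cm]
t_days  = 500.0             # time since explosion [days]

v_BB_kms = R_BB_cm / (t_days * 86400.0) / 1.0e5
print("v_BB = %.0f km/s" % v_BB_kms)   # ~2300 km/s for these inputs
\end{verbatim}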
For comparison, we also fit an analytic dust model adopted from \citet{Fox10,Fox11}, assuming only thermal emission from optically thin dust of mass $M_d$, with a particle radius $a$, at a distance $d$ from the observer, emitting at a single equilibrium temperature $T_d$; the flux can then be written as
\begin{equation}
F_{\nu} = \frac{M_d B_{\nu}(T_d) \kappa_{\nu}(a)}{d^2} ,
\end{equation}
\noindent where $B_{\nu}(T_d)$ is the Planck function and $\kappa_{\nu}$ is the dust mass absorption coefficient, which depends on the grain size. We chose pure graphite composition assuming single-sized grains of $a$=0.1 $\mu$m \citep[following][]{Fox10,Fox11}. During the fit, only $T_d$ and $M_d$ are free parameters; $\kappa_{\nu}$ has been determined from Fig. 4 of \citet{Fox10}. In the case of two-point SEDs, we are limited to using one temperature component.
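A minimal sketch of this fitting step is given below (in Python, using \texttt{scipy}); the opacities, distance, and fluxes listed are placeholder assumptions for illustration only and should be replaced by the values read from Fig. 4 of \citet{Fox10} and by the measured photometry of a given object.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
M_sun = 1.989e33                             # g

def planck_nu(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# Placeholder inputs (NOT values from this work): graphite opacities at
# 3.6 and 4.5 micron (replace with values read from Fox et al. 2010, Fig. 4),
# an assumed distance, and example fluxes.
kappa = np.array([2.0e3, 1.5e3])             # cm^2 g^-1 (assumed)
nu    = c / (np.array([3.6, 4.5]) * 1.0e-4)  # Hz
d     = 20.0 * 3.086e24                      # assumed distance: 20 Mpc in cm
F_obs = np.array([3.0e-27, 5.5e-27])         # erg s^-1 cm^-2 Hz^-1 (assumed)

def dust_model(nu, T_d, M_d):
    """Optically thin, single-temperature dust emission; M_d in M_sun."""
    return M_d * M_sun * planck_nu(nu, T_d) * kappa / d**2

(T_d, M_d), _ = curve_fit(dust_model, nu, F_obs, p0=[500.0, 1.0e-3])
print("T_d = %.0f K, M_d = %.2e M_sun" % (T_d, M_d))
\end{verbatim}
With only two photometry points and two free parameters the fit is exactly determined, which reflects the limitation discussed above.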
Figure \ref{fig:01em} compares the analytical and blackbody fits in two SNe that have data from all four IRAC channels: the Type IIn SN 2002bu and the Type Ib/c SN 2001em. SN 2002bu was observed $\sim$2 years post-explosion and can be fit with just a single-component graphite or blackbody dust model. SN~2001em, however, requires a two-component model. If we use blackbodies, we obtain the parameters shown in Figure \ref{fig:01em}. We can compare our results with those of \citet{Chugai06}, who constructed a model for the strong late-time
X-ray, radio, and H$\alpha$ emission from SN~2001em and developed a picture in which the SN ejecta collide with a dense massive CS shell.
Our two-component model gives $\sim$10$^{16}$~cm and $\sim$15$\times 10^{16}$~cm for the two radii, which is compatible with the estimated size of the single CS shell (r$\sim7 \times 10^{16}$~cm) calculated by \citet{Chugai06} from X-ray, radio, and H$\alpha$ data contemporaneous with the mid-IR observations. If we change the longer-wavelength blackbody to a graphite dust model, we get $T_{dust}$ = 280~K and an upper limit of $M_{dust}\approx$0.2 $M_{\odot}$, which is in good agreement with the calculations of \citet{Chugai06}, who derived -- indirectly -- 300~K for the dust temperature and 2-3 $M_{\odot}$ for the mass of the CS shell (which gives 0.02-0.03 $M_{\odot}$ of dust assuming a 0.01 dust-to-gas mass ratio). These results strengthen previous conclusions of CSM interaction with SN 2001em, but further suggest the presence of multiple pre-explosion dust shells.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{SN2002bu_day759.eps}
\includegraphics[width=7cm]{SN2001em_day1135.eps}
\caption{{\it Left:} One-component carbonaceous dust model fit to the four-point mid-IR SED of the Type IIn SN 2002bu. {\it Right}: Two-component blackbody model fit to the four-point mid-IR SED of the known interacting Type Ib/c SN 2001em.}
\end{center}
\label{fig:01em}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{param_all_Ia.eps}
\caption{Dust parameters (temperatures -- top left, luminosities -- top right, and dust masses -- bottom left) and blackbody velocities (corresponding to the minimum dust radii) of thermonuclear SNe derived from the SED fits. Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively -- in the latter cases, only upper limits can be determined for dust masses and luminosities (marked with arrows on both the bottom left and top right panels). Blue arrows denote upper dust mass limits for a group of SNe Ia calculated by \citet{Johansson17}. Dotted lines on the bottom left panel denote theoretical dust masses at shock velocities $v_s$=5000 and 15\,000 km s$^{-1}$ assuming a shock heating scenario (see text for details); in the bottom right panel, these shock velocities are shown together with an upper limit of late-time ejecta velocities (black dashed line) expected in thermonuclear SNe \citep[$v_{ej,max}$=3000 km s$^{-1}$, based on][]{Silverman13}.}
\label{fig:param_Ia}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{param_all_Ibc.eps}
\caption{Same as Figure \ref{fig:param_Ia}, except in this case for stripped-envelope CC SNe. Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively -- in the latter cases (and in some other ones), only upper limits can be determined for dust masses and luminosities (marked with arrows on both the bottom left and top right panels). In the bottom right panel, the black dashed line denotes an upper limit of late-time ejecta velocities expected in SE CC SNe \citep[$v_{ej,max}$=8000 km s$^{-1}$, based on][]{Taubenberger06}.}
\label{fig:param_Ibc}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{param_all_IIP.eps}
\caption{Same as Figure \ref{fig:param_Ia}, except in this case for SNe II-P. Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively -- in the latter cases (and in some other ones), only upper limits can be determined for dust masses and luminosities (marked with arrows on both the bottom left and top right panels). In the bottom right panel, the black dashed line denotes an upper limit of late-time ejecta velocities expected in SNe II-P \citep[$v_{ej,max}$=3000 km s$^{-1}$, based on][]{Szalai11}.}
\label{fig:param_IIP}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{param_all_IIn.eps}
\caption{Same as Figure \ref{fig:param_Ia}, except in this case for SNe IIn. Filled and empty symbols denote SNe whose absolute magnitudes were determined with or without image subtraction, respectively -- in the latter cases, only upper limits can be determined for dust masses and luminosities (marked with arrows on both the bottom left and top right panels). In the bottom right panel, the black dashed line denotes an upper limit of late-time ejecta velocities expected in SNe IIn \citep[$v_{ej,max}$=5000 km s$^{-1}$, based on][]{Patat01}.}
\label{fig:param_IIn}
\end{center}
\end{figure}
Table \ref{tab:dustparnew} lists and Figures \ref{fig:param_Ia}-\ref{fig:param_IIn} plot the best-fit dust parameters (masses, temperatures, mid-IR luminosities). We fitted the SEDs of all listed SNe using the dust model described above in order to generate a comparable set of dust parameters.
Nevertheless, in general, one cannot distinguish between dust compositions with only two IRAC filters. We include the results of silicate fits only in cases with spectroscopic evidence, such as SN 2004et \citep{Kotak09,Fabbri11} or SN 2005af \citep{Szalai13}. In some other cases, the temperature may provide guidance on the dust composition. For example, if $T_{dust} \gtrsim$1400 K, then the carbonaceous dust model makes the most sense since Si grains require lower temperatures for effective condensation \citep[see e.g.][]{Nozawa03}.
Different dust models (composition, grain size) used in the literature result in systematic uncertainties in the dust parameters. After comparing our results with previously published ones, we draw the following conclusions: the uncertainties can be as large as $\sim$100-150K in dust temperature (which is also significantly influenced by the number of SED points, including additional optical and/or near-IR data), while dust masses and dust luminosities can vary within one order of magnitude and within a factor of $\sim$1-2, respectively.
We also note that choosing a non-spherical geometry for the dust-forming region, or assuming clumpy dust formation \citep[see e.g.][]{Meikle07,Ercolano07,Andrews16}, may also change the calculated dust masses significantly (by up to an order of magnitude in either direction).
While the SED fits in Table \ref{tab:dustparnew} have a number of uncertainties, we can still draw some useful conclusions.
The blackbody expansion velocities ($v_{BB}$), shown on the bottom right panels of Figs. \ref{fig:param_Ia}-\ref{fig:param_IIn}, can distinguish between newly-formed and pre-existing dust. In cases where $v_{BB}$ is quite low (several hundreds or a few thousands km s$^{-1}$), the dust likely formed in the ejecta. In these cases, which cover many Type II-P and SE CC SNe, we find that the estimated temperatures and dust masses ($\sim$10$^{-6}-10^{-2} M_{\odot}$) are in agreement with this scenario \citep[see e.g.][]{Fox11,Fox13,Szalai13,Tinyanont16}.
In cases where $v_{BB}$ is somewhat higher ($\sim$5000-15\,000 km s$^{-1}$), the velocities are consistent with the forward shock, suggesting new dust may be forming in the CDS behind the forward shock. Nevertheless, especially in the cases of SNe IIn or Ia-CSM, or other known interacting objects (e.g. the SN Ib 2014C) with large ($>10^{-3} M_{\odot}$) observed dust masses, pre-existing dust must be invoked to explain the observed mid-IR luminosities. To distinguish between the collisional and radiative heating scenarios, we adopt the method presented in \citet{Fox11} \citep[and also used by e.g.][]{Tinyanont16}. Eq. \ref{eq:one}, assuming a dust-to-gas ratio of 0.01, gives the mass of dust processed by the forward shock of the SN:
\begin{equation}
\label{eq:one}
M_{\mathrm{d}}(M_{\odot}) \approx 0.0028 \left( \frac{v_{\mathrm{s}}}{15\,000 \,\mathrm{km} \,\mathrm{s}^{-1}} \right)^3 \left( \frac{t}{\mathrm{year}} \right)^2 \left( \frac{a}{\mathrm{\mu m}}\right),
\end{equation}
\noindent where $v_\mathrm{s}$ is the shock velocity, $t$ is the time post explosion, and $a$ is the grain size (assumed to be 0.1 $\mu$m). The calculated dust masses -- assuming constant shock velocities of $v_\mathrm{s}$ = 5\,000 km s$^{-1}$ and 15\,000 km s$^{-1}$ -- appear as straight lines on the bottom left panels of Figs. \ref{fig:param_Ia}-\ref{fig:param_IIn}. A large fraction of Type IIn and other strongly interacting SNe show much larger dust masses than can be expected even at $v_\mathrm{s}$ = 15\,000 km s$^{-1}$; in these cases, radiative heating by photons emerging from the ongoing CSM interaction is the more plausible scenario.
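As a minimal numerical sketch (in Python), equation~(\ref{eq:one}) can be evaluated directly; the epoch used below is a placeholder chosen only for illustration.
\begin{verbatim}
def shock_processed_dust_mass(v_s_kms, t_years, a_um=0.1):
    """Dust mass [M_sun] swept up by the forward shock,
    assuming a dust-to-gas mass ratio of 0.01."""
    return 0.0028 * (v_s_kms / 15000.0)**3 * t_years**2 * a_um

# Reference shock velocities used in the figures, at a placeholder epoch of 2 yr:
for v_s in (5000.0, 15000.0):
    print("v_s = %5.0f km/s -> M_d = %.2e M_sun"
          % (v_s, shock_processed_dust_mass(v_s, 2.0)))
\end{verbatim}
Observed dust masses that lie well above the 15\,000 km s$^{-1}$ line therefore cannot be explained by shock heating alone.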
Finally, in cases where $v_{BB}$ is very high (over $\sim$15\,000 km s$^{-1}$), the dust is likely located beyond the forward shock, suggesting the dust is pre-existing at the time of the explosion and radiatively heated. Such high velocities can be seen mainly in early-time ($<$1yr) observations, found e.g. in the cases of Type IIn SN~2010jl \citep{Gall14,Fransson14} or Type II-P SN~2014bi \citep{Tinyanont16}.
In these cases, another possible source of mid-IR emission may be an IR echo in which the dust shell is heated by the peak luminosity of the SN \citep[e.g.][]{Bode80,Dwek83,Sugerman03,Meikle06}. At later epochs, however, this possibility can probably be ruled out \citep[see details in][]{Fox11}.
\section{Conclusion}\label{conc}
Here we have presented a comprehensive study of all SNe (discovered before 2015) observed with {\it Spitzer}/IRAC, both targeted and untargeted. In total, we increased the published sample of {\it Spitzer} SNe by a factor of $\sim$5 (from $\sim$200 to $\sim$1100), including nearly twice as many detections ($\sim$70 to $\sim$120).
We carry out a thorough photometric analysis of the entire SN sample, including all previously published data. In general, we find good agreement with the published values ($\lesssim$10\% difference in fluxes), except for some objects captured in a very faint phase and/or against a complex sky background (for which the uncertainties of the originally published fluxes are correspondingly large).
The results include both a detailed analysis of specific targets with unique behavior and a statistical analysis of the mid-IR evolution of the different SN subclasses. For each detection, we fit both black bodies and simple analytic dust models. Modeling the SEDs (even in cases with just two photometry points) can disentangle the dust origin and heating mechanism, and, in some cases, determine the main physical parameters of the assumed dust. Large dust masses ($\gtrsim$10$^{-3} M_{\odot}$) are observed primarily in Type IIn and other strongly interacting SNe. The associated $v_{BB}$ is quite high in most of these cases, again consistent with pre-existing, radiatively heated grains.
The large data set allows us to draw some broad conclusions; nevertheless, we note that these are based on a quite heterogeneous sample, usually with 1-2 epochs of data per object. In general, each subclass tends to fill its own region of phase space. Amongst thermonuclear explosions (having reviewed the late-time mid-IR data of several hundred objects and found mostly non-detections), we see that i) SNe Ia-CSM may indeed be rare, and ii) only a very limited number of ``intermediate'' cases with moderately strong CSM interaction may exist (suggested by a $\sim$8-10 mag gap in the late-time mid-IR brightness of strongly interacting and weakly- or non-detected objects). Secondly, in the heterogeneous group of stripped-envelope CC SNe, the length of the mid-IR light-curve seems to correlate with the assumed size of the progenitor (the larger the progenitor, the longer the mid-IR light-curve, from Type eIIb SNe to Type cIIb and Type Ib/c ones); however, this finding is based on a relatively small sample of objects. Finally, Type IIn SNe may remain bright for several years post-explosion or may fade more quickly.
Although this study has significantly enlarged the sample sizes for all subclasses, the data remain quite under-sampled both spectrally and temporally. Future observations with the {\it James Webb Space Telescope} (JWST) offer the sensitivity and spectral capabilities necessary to further constrain the dust geometry, mass, temperature, and composition.
\begin{acknowledgements}
We thank our anonymous referee for his/her valuable comments.
This work is part of the project ``Transient Astrophysical Objects'' GINOP-2-3-2-15-2016-00033 of the National Research, Development and Innovation Office (NKFIH), Hungary, funded by the European Union, and is also supported by the New National Excellence Program (UNKP-17-2, UNKP-17-4) of the Ministry of Human Capacities of Hungary.
TS has received funding from the Hungarian NKFIH/OTKA PD-112325 Grant.
OP is currently supported by award PRIMUS/SCI/17 from Charles University. TM was supported in part by the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to the Millennium Institute of Astrophysics, MAS. TM thanks the LSSTC Data Science Fellowship Program; his time as a Fellow has benefited this work. TM was funded by the CONICYT PFCHA/DOCTORADO BECAS CHILE/2017 - 72180113.
This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; and the SIMBAD database, operated at CDS, Strasbourg, France. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
We acknowledge the availability of NASA ADS services.
\end{acknowledgements}
\software{IRAF, HOTPANTS}
|
{
"timestamp": "2019-03-19T01:32:55",
"yymm": "1803",
"arxiv_id": "1803.02571",
"language": "en",
"url": "https://arxiv.org/abs/1803.02571"
}
|
\section{Introduction} \label{sec:intro}
All main-sequence stars are convective somewhere in their interior: low-density, high-temperature fluid parcels rise or fall through the stratified medium, transporting heat by their motion. In high-mass stars this convective transport occurs primarily in the innermost regions, whereas low-mass stars like the Sun have convection occurring in an envelope; stars of sufficiently low mass ($\lesssim 0.35 \, \text{M}_\odot$) are convective throughout their interiors \citep[e.g.,][]{1997A&A...327.1039C}. Pre-main-sequence stars on the Hayashi track are likewise fully-convective \citep[e.g.,][]{1961PASJ...13..450H}. Variations in the opacity, energy generation rate, or adiabatic index determine where and when this convection occurs: broadly, it happens whenever the temperature gradient required to carry a star's flux by radiative processes alone is too steep \citep[e.g.,][]{Bohm-Vitense:1992aa}. This may be encapsulated via the Schwarzschild criterion, which states that convection occurs whenever the dimensionless temperature gradient ${\nabla = d \ln{T} / d \ln{p}}$ is greater than the adiabatic gradient ${\nabla_{\text{ad}} = (d \ln{T} / d \ln{p})_{\text{ad}}}$.
In the interior of a star, modest convective velocities and temperature gradients very close to the adiabatic value are usually sufficient to carry a star's flux outwards \citep[e.g.,][]{kippenhahn2012stellar}, owing mainly to the high density and heat capacity of these regions. For fully-convective stars, this implies that most of the interior lies at nearly-constant specific entropy. However, larger entropy gradients are established near the surface (as discussed in \S~\ref{sec:entropy} below). The interaction between convection and radiative transfer in the region of the surface layer thus creates a specific entropy jump $\Delta s$ between the nearly-constant specific entropy in the deep interior, conventionally labeled $s_{\text{ad}}$, and the specific entropy at the stellar photosphere $s_{\text{ph}}$ \citep[e.g.,][]{Trampedach:2014aa}.
The gross structure of a star is linked to its entropy \citep[see, e.g., discussions in][]{1988PASP..100.1474S,2004sipp.book.....H}. In particular, for isentropic stars, knowledge of $s_{\text{ad}}$---i.e., knowledge of which adiabat the star is on---is enough to specify the entire structure. As emphasized by \citet{Gough:1976aa}, a complete theory of convection would specify the adiabat, but in practice this is typically calibrated by comparison to observations. In standard 1D stellar models employing the mixing length theory (MLT) of convection, fluid parcels are assumed to travel some characteristic mixing length $\ell_{\text{MLT}} = \alpha_{\text{MLT}} H_p$ before transferring their heat to their surroundings, where $\alpha_{\text{MLT}}$ is conventionally a depth-independent dimensionless parameter and $H_p$ is the pressure scale height \citep{1958ZA.....46..108B}. In typical models of fully-convective stars, $\alpha_{\text{MLT}}$ effectively specifies the entropy contrast $\Delta s$, and so fixes the adiabat and the overall stellar structure.
Observations have suggested that some low-mass stars have radii that are $5-15 \%$ larger than standard 1D models would predict \citep[e.g.,][]{Torres:2002aa,2006Ap&SS.304...89R,2008A&A...478..507M,2010A&ARv..18...67T,2012ApJ...760L...9T}. These ``inflated" radii could in turn lead to erroneous age estimates of stars on the pre-main-sequence \citep[see, e.g.,][]{Feiden:2016aa}. Several authors have argued that the inhibition of convection by some mechanism could explain these modifications to the structure, with rotation and/or magnetic fields both invoked as possible culprits {\citep[e.g.,][]{1981ApJ...245L..37C,Chabrier:2007aa}.}
Rotation is well known to influence convection. For example, in classic linear stability analysis, the onset of convection is impeded by the presence of rotation: the critical Rayleigh number for convective instability (measuring, roughly, how great buoyancy driving must be relative to viscous and thermal dissipation) increases with rotation rate $\Omega$ \citep{1961hhs..book.....C}, scaling as $\Omega^{4/3}$ in appropriate circumstances. The horizontal scale of the most unstable modes likewise diminishes with more rapid rotation. The non-linear effects of rotation on the convection are less clear. Broadly, the reduction of horizontal lengthscales and convective speeds in rapidly-rotating systems is expected to inhibit the heat transport somewhat, leading to higher values of the temperature (or in a stratified system, entropy) gradient \citep{Stevenson:1979aa,2012PhRvL.109y4503J,Barker:2014aa}. Rotation also breaks the spherical symmetry, with motions increasingly aligned with the rotation axis at rapid rotation rates, in keeping with the Taylor-Proudman constraint \citep{1916RSPSA..92..408P,1917RSPSA..93...99T}. Other aspects of the non-linear impact of rotation, such as its effect on heat transport and on the establishment of zonal flows, have also been extensively explored using theory and simulation \citep[e.g.,][]{1994GApFD..76..223B,julien_knobloch_1998,sprague_julien_knobloch_werne_2006,2012A&A...546A..19G,king_stellmach_aurnou_2012,2012PhRvL.109y4503J,2014PhRvL.113y4501S,2015PEPI..246...52A,2015JFM...780..143C,2015GApFD.109..145G,2016JFM...808..690G,2016JFM...798...50J,2017JFM...813..558A}.
A reformulation of MLT to treat rapidly-rotating cases was proposed for example by \citet{Stevenson:1979aa}, who argued following \citet{1954RSPSA.225..196M} that the non-linear state was likely to be dominated by the modes that transport the most heat. \citet{2012PhRvL.109y4503J} also examined the transport in rapidly-rotating systems, by scaling to the state of marginal stability; they argue that in contrast to classical non-rotating convection, in which heat transport is ``throttled" in narrow boundary layers, the heat transport of rapidly-rotating systems is limited by the efficiency of turbulent motion in the bulk of the fluid. Recently, \citet{Barker:2014aa} derived a version of rotating MLT equivalent to \citet{Stevenson:1979aa} in a different way, and tested it using 3D simulations in Cartesian domains. Broadly, several methods of analysis suggest that the temperature gradient in the middle of the rotating convective layer ($dT/dz$) increases with rotation rate $\Omega$. In particular, \citet{Barker:2014aa} have argued specifically that $dT/dz \propto \Omega^{4/5}$ in the rapidly rotating limit. Their simulations support this scaling, though it must be noted that their models encompass only a single latitude (namely the pole); extensions to other latitudes are under way (L. Currie, A. Barker, and Y. Lithwick, {personal} communication).
Magnetic fields are likewise known to influence convection in some manner, but it is not clear how this affects the heat transport in the stellar context. Magnetic fields can inhibit convection in the stellar interior via the Lorentz force, hindering fluid flow perpendicular to the field \citep[e.g.,][]{Stein:2012aa}. Like rotation, magnetic fields influence the linear stability of the fluid to convective motions: in the absence of rotation, magnetism is stabilizing \citep{1961hhs..book.....C,Gough:1966aa}. When rotation is present, the linear stability is more complex, and in fact the critical Rayleigh number for convection with \emph{both} rotation and magnetism can be lower than in the presence of either rotation or magnetism alone \citep{1961hhs..book.....C,Stevenson:1979aa}. Again, the non-linear impact of the magnetism is much less clear. \citet{Stevenson:1979aa} also fashioned a ``magnetic" version of MLT, but (to our knowledge) this has not been incorporated into 1D stellar structure models. \citet{Mullan:2001ab}, drawing on the linear stability analysis of \citet{Gough:1966aa}, argued that the effects of magnetism in a 1D stellar model could be mimicked simply by modifying the adiabatic gradient $\nabla_{\text{ad}}$ (wherever it appears in the MLT prescription) to include a perturbation term proportional to the magnetic pressure (relative to the gas pressure). Physically, this amounts to asserting that the end-state of magnetized convection is to approach a state of marginal stability---where this stability now depends on the strength of the magnetism---in much the same way that non-magnetic convection might be taken to approach an isentropic state.
\citet{Chabrier:2007aa}, noting that even fairly modest magnetic fields might strongly feed back on the flows through Lorentz forces, modeled rotational and magnetic effects simply by varying the depth-independent $\alpha_{\text{MLT}}$; they also briefly considered the effects of near-surface spots, taken to be regions of cool effective temperature covering some fraction of the surface. \citet{Feiden:2012aa}, drawing on \citet{Lydon:1995aa}, have implemented a more complex magnetic MLT model into the Dartmouth stellar evolution code, with properties of the resulting structure dependent on the strength and (imposed) spatial distribution of the magnetism. Broadly, these authors have argued that magnetic fields can affect the radius of a star, either by inhibiting convection or through the effects of near-surface spots {\citep[e.g.,][]{1981ApJ...245L..37C,Mullan:2001ab,Chabrier:2007aa,MacDonald:2012aa,MacDonald:2013aa,MacDonald:2014aa,MacDonald:2017aa,Feiden:2012aa,Feiden:2014aa,Feiden:2016aa}.}
In this paper, we examine the effects of rotation and magnetic fields on the structure of fully-convective stars via 1D stellar structure models, using the Modules for Experiments in Stellar Astrophysics (MESA) code \citep{Paxton:2011aa,Paxton:2013aa,Paxton:2015aa}. All the reformulations of MLT noted above can modify the adiabat of the star, by changing the efficiency of convective heat transport in the stellar interior. Thus, in \S~\ref{sec:entropy}, we begin by giving an overview of the role of entropy in standard 1D stellar structure models; in particular, we recall how the stellar radius is sensitive to changes in the specific entropy, which is itself sensitive to differing levels of convective inhibition via changes in $\alpha_{\text{MLT}}$. We give an explicit relationship between specific entropy, stellar radius, and $\alpha_{\text{MLT}}$ for these ``standard" models with a depth-independent $\alpha_{\text{MLT}}$.
We then examine the ``rotating" and ``magnetic" MLT reformulations by \citet{Stevenson:1979aa} and \citet{MacDonald:2014aa} respectively in \S~\ref{sec:rot_inhibition_conv} and \S~\ref{sec:mag_inhibition_conv} to determine how these mechanisms inhibit convection, and so influence the stellar radius, compared to solely changing $\alpha_{\text{MLT}}$. We set aside for now the question of whether these formulations correctly capture the complex interaction between rotation, convection, and magnetism in a star; here, we simply examine the consequences of these prescriptions for the entropy and radius of the star. We also investigate the influence on stellar structure as a result of combining these ``rotating" and ``magnetic" MLT reformulations in \S~\ref{sec:rot_and_mag}.
In \S~\ref{sec:depth_dep_alpha}, we show that these reformulations to MLT may be precisely duplicated in a standard (non-magnetic, non-rotating) 1D model by the introduction of a depth-dependent $\alpha_{\text{MLT}}$. We provide formulae for depth-dependent $\alpha_{\text{MLT}}$ profiles that can be used to mimic the effects of rotation or magnetism on the stellar superadiabaticity, and hence on the stellar radius (assuming these are captured by the \citet{Stevenson:1979aa} and \citet{MacDonald:2014aa} formulations respectively), providing a simple way for users to model these non-standard effects. Finally, we discuss our results in \S~\ref{sec:discussion_conclusion}.
\section{Entropy, convection, and the radii of standard 1D stellar structure models} \label{sec:entropy}
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_1000_0_Myr.pdf}{0.513\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\alpha_{\text{MLT}} = 0.5 - 2.0$ ($\Delta 0.5$). As $\alpha_{\text{MLT}}$ decreases, {the superadiabaticity} $\nabla_{\text{s}}$ increases throughout the stellar interior, but $\nabla_{\text{s}}$ is inherently lower in main-sequence models.\label{fig:logsupergrad_v_logRho}}
\end{figure*}
\subsection{Role of specific entropy in standard MLT} \label{subsec:role_entropy}
Heat transport, entropy, and the stellar structure are tightly linked in fully-convective objects. Here, we briefly review these links, outlining how changes in the convective efficiency of classical MLT modify the internal entropy structure and hence the stellar radius. The material in this section largely duplicates standard results found elsewhere \citep[see e.g.,][]{2004sipp.book.....H}, but we include it here as background for our studies in \S~\ref{sec:rot_inhibition_conv} - \S~\ref{sec:depth_dep_alpha}.
For an ideal gas without radiation pressure, the specific entropy (i.e., the entropy per unit mass) $s$ is
\begin{equation}\label{eq:closed_form_s}
s \simeq s_0 + \frac{N_{\text{A}} k_{\text{B}}}{\mu} \ln{\left(\frac{T^{1/(\gamma - 1)}}{\rho} \right)},
\end{equation}
where $s_0$ is {a constant}, $N_{\text{A}}$ is Avogadro's constant, $k_{\text{B}}$ is Boltzmann's constant, $\mu$ is the mean molecular weight, $T$ is temperature, $\rho$ is density, and $\gamma$ is the adiabatic exponent.
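For concreteness, equation~(\ref{eq:closed_form_s}) can be evaluated as in the short sketch below (in Python); the mean molecular weight and the $(T,\rho)$ values are placeholder assumptions, and $s_0$ is set to zero since only entropy differences matter in what follows.
\begin{verbatim}
import numpy as np

N_A, k_B = 6.022e23, 1.381e-16   # Avogadro and Boltzmann constants (cgs)

def specific_entropy(T, rho, mu=0.6, gamma=5.0/3.0, s0=0.0):
    """Ideal-gas specific entropy [erg g^-1 K^-1], up to the constant s0."""
    return s0 + (N_A * k_B / mu) * np.log(T**(1.0 / (gamma - 1.0)) / rho)

# Placeholder interior conditions; only differences in s are meaningful here.
print("s = %.3e erg g^-1 K^-1" % specific_entropy(T=5.0e6, rho=50.0))
\end{verbatim}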
To examine how the specific entropy changes in response to variations in the convective efficiency, we first constructed a series of standard 1D stellar structure models using MESA. Here, we simply use the default setup provided by MESAstar: the MLT prescription is that of \citet{Cox:1968aa}; the atmospheric boundary conditions are MESA's ``simple" option, in which the photosphere is located at optical depth $\tau = 2/3$, the surface temperature is given by the Eddington $T(\tau)$ relation, and the opacity is calculated in an iterative fashion (see \citet{Paxton:2011aa} for details); the metallicity is fixed at $Z = 0.02$. We model stars only at a fixed mass of $0.3 \, \text{M}_\odot$, evolving each model from the pre-main-sequence up to an age of $4 \, \text{Gyr}$. Models of this mass are convective throughout their interiors. We vary the mixing length parameter $\alpha_{\text{MLT}}$ to vary the convective efficiency, effectively reducing the distance traveled by convective elements \citep{Trampedach:2014aa}. Taken together, these choices imply that our models are somewhat more idealized depictions of a $0.3 \, \text{M}_\odot$ star than the most sophisticated ones in use today \citep[e.g.,][]{2015A&A...577A..42B}. For example, in reality (and in more complete models) convection extends well into the optically thin regime, mainly because the formation of H$_2$ decreases the adiabatic gradient, favoring convection \citep{1997A&A...327.1039C}. Values of the effective temperature in models including this effect will generally differ from those reported here (which simply assume the Eddington $T(\tau)$ relation). We choose this simpler boundary condition partly because it allows us to compare more directly with analytical theory below, and because we are interested mainly in \emph{changes} between models with differing $\alpha_{\text{MLT}}$ rather than in the absolute values of $T_{\text{eff}}$, $R$, etc.
We turn first to consideration of the superadiabatic gradient $\nabla_{\text{s}} \equiv (\nabla - \nabla_{\text{ad}})$, which is a dimensionless measure of the entropy gradient. In Figure~\ref{fig:logsupergrad_v_logRho}, we plot $\log_{10}{(\nabla_{\text{s}})}$ as a function of logarithmic density $\log_{10}{(\rho)}$ for $0.3 \, \text{M}_\odot$, $10 \, \text{Myr}$ pre-main-sequence and $1 \, \text{Gyr}$ main-sequence stellar models with $\alpha_{\text{MLT}} = 0.5 - 2.0$ ($\Delta 0.5$). Vertical dotted lines in this figure onwards indicate average radial positions in the region of the surface layer. A few key features are readily apparent: first, in the bulk of the convection zone, $\nabla_{\text{s}}$ reaches negligible values due to highly efficient convective transport, where the temperature gradient is nearly adiabatic. Nearer the surface, $\nabla_{\text{s}}$ increases, driven by the continuous decline in the density and temperature of the plasma. Convection carries nearly all the flux until radii of greater than $0.995 \, \text{R}$, where $\text{R}$ is the radius of a given model, and is highly efficient over most of that region; radiative diffusion begins to carry a non-negligible amount of flux only above $0.9995 \, \text{R}$. Comparing the left and right panels of Figure~\ref{fig:logsupergrad_v_logRho}, we see that $\nabla_{\text{s}}$ is somewhat lower in the main-sequence models (right panel) than on the pre-main-sequence. In both sets of models, at all depths $\nabla_{\text{s}}$ depends on the convective efficiency: less efficient convection, which in these models corresponds simply to a smaller value of $\alpha_{\text{MLT}}$, means that a higher $\nabla_{\text{s}}$ is required to carry the same heat flux.
To quantify how changing $\alpha_{\text{MLT}}$ influences the run of $\nabla_{\text{s}}$, and so explain the trends visible in Figure~\ref{fig:logsupergrad_v_logRho}, we consider the convective flux $F_{\text{conv}}$ as defined in the classic MLT prescription of \citet{1958ZA.....46..108B}, as implemented in MESA:
\begin{equation} \label{eq:F_conv_general}
F_{\text{conv}} = \frac{1}{4 \sqrt{2}} c_p (p \rho Q)^{1/2} T (\nabla - \nabla')^{3/2} \alpha_{\text{MLT}}^2,
\end{equation}
where $c_p$ is the specific heat capacity (at constant pressure), $p$ is pressure, $Q = -(\partial \ln{\rho} / \partial \ln{T})_p$ is the isobaric expansion coefficient, and {$\nabla' = (d \ln{T} / d \ln{p})'$} is the temperature gradient of the rising element \citep{Cox:1968aa}. Following \citet{Cox:1968aa}, we can solve for the convective efficiency $\Gamma = A (\nabla - \nabla')^{1/2}$, which is the ratio of the energy successfully transported to that lost by a convective element, in terms of $\nabla_{\text{s}}$, and express $\nabla - \nabla'$ as a function of $\nabla_{\text{s}}$:
\begin{equation} \label{eq:nabla_terms_of_nabla_s}
\nabla - \nabla' = \left(\frac{\Gamma}{A}\right)^2 = \frac{1}{4A^2} \left(\sqrt{1 + 4 A^2 \nabla_{\text{s}}} - 1\right)^2,
\end{equation}
where
\begin{equation} \label{eq:A_mlt}
A = \frac{Q^{1/2} c_p \kappa g \rho^{5/2} H_p^2}{12 \sqrt{2} a c p^{1/2} T^3} \alpha_{\text{MLT}}^2 \equiv A_{\text{other}} \alpha_{\text{MLT}}^2
\end{equation}
is the ratio of convective and radiative conductivities, where $\kappa$ is opacity, $g$ is gravitational acceleration, $a$ is the radiation constant, and $c$ is the speed of light.
Using equations~(\ref{eq:F_conv_general}) and~(\ref{eq:nabla_terms_of_nabla_s}), we express $\nabla_{\text{s}}$ as a function of $\alpha_{\text{MLT}}$:
\begin{multline} \label{eq:nabla_alpha_general_relation}
\nabla_{\text{s}} = \left(\frac{4 \sqrt{2} F_{\text{conv}}}{c_p (p \rho Q)^{1/2} T}\right)^{2/3} \alpha_{\text{MLT}}^{-4/3} \\
+ \frac{1}{A_{\text{other}}} \left(\frac{4 \sqrt{2} F_{\text{conv}}}{c_p (p \rho Q)^{1/2} T}\right)^{1/3} \alpha_{\text{MLT}}^{-8/3}.
\end{multline}
Equation~(\ref{eq:nabla_alpha_general_relation}) reflects the fact that there are two regimes of convective efficiency $\Gamma \sim A \nabla_{\text{s}}^{1/2}$. As noted by \citet{Gough:1976aa}, stellar convection theories tend asymptotically toward two regimes: high ($\Gamma \gg 1$, left term) and low ($\Gamma \ll 1$, right term) convective efficiency. The transition region between these two asymptotic limits is very narrow, so its detailed structure is typically not significant in the astrophysical context.
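For illustration, equation~(\ref{eq:nabla_alpha_general_relation}) can be evaluated directly, as in the sketch below (in Python); in practice the thermodynamic inputs would be read from a MESA profile, and the numbers used here are placeholders only.
\begin{verbatim}
import numpy as np

def superadiabaticity(alpha_mlt, F_conv, c_p, p, rho, Q, T, A_other):
    """nabla_s as the sum of the high- and low-efficiency terms."""
    x = 4.0 * np.sqrt(2.0) * F_conv / (c_p * np.sqrt(p * rho * Q) * T)
    high_eff = x**(2.0 / 3.0) * alpha_mlt**(-4.0 / 3.0)
    low_eff  = (x**(1.0 / 3.0) / A_other) * alpha_mlt**(-8.0 / 3.0)
    return high_eff + low_eff

# Placeholder deep-interior values (cgs): A_other is enormous there, so the
# low-efficiency term is negligible and nabla_s comes out tiny, as expected.
print(superadiabaticity(1.7, F_conv=1.0e11, c_p=2.0e8, p=1.0e16,
                        rho=1.0e2, Q=1.0, T=5.0e6, A_other=1.0e10))
\end{verbatim}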
\begin{figure*}
\gridline{\fig{supergrad_interpol_v_logRho_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{supergrad_interpol_v_logRho_1000_0_Myr.pdf}{0.515\textwidth}{(b)}
}
\caption{$\nabla_{\text{s}}$ as a function of $\log_{10}{(\rho)}$, comparing the outputted values and those reproduced using equation~(\ref{eq:nabla_alpha_efficient_full}) (plus markers), for a selection of $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\alpha_{\text{MLT}} = 0.8 - 1.7$ ($\Delta 0.3$). $\alpha_{\text{MLT}} = 1.7$ is chosen to be the ``unperturbed" model. As $\alpha_{\text{MLT}}$ decreases, the $10 \, \text{Myr}$ model's reproduced $\nabla_{\text{s}}$ increasingly diverges right at the photosphere, due to the non-negligible low efficiency regime. \label{fig:supergrad_interpol_v_logRho}}
\end{figure*}
\begin{figure*}
\gridline{\fig{entropy_v_logRho_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{entropy_v_logRho_1000_0_Myr.pdf}{0.5\textwidth}{(b)}
}
\caption{$s$ as a function of $\log_{10}{(\rho)}$ for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\alpha_{\text{MLT}} = 0.5 - 2.0$ ($\Delta 0.5$). Decreasing $\alpha_{\text{MLT}}$ increases $s_{\text{ad}}$, i.e., the asymptotic value of specific entropy in the bulk of the convection zone, but to a lesser extent for main-sequence models.\label{fig:entropy_v_logRho}}
\end{figure*}
For homologous stellar models of highly efficient convection, where luminosity (hence convective flux) is fixed throughout the radial distribution in the bulk of the stellar interior,
\begin{equation} \label{eq:nabla_alpha_efficient}
\nabla_{\text{s}} \propto \alpha_{\text{MLT}}^{-4/3},
\end{equation}
demonstrating that in this regime a decrease in $\alpha_{\text{MLT}}$ corresponds to a monotonic increase of $\nabla_{\text{s}}$ in the bulk of the convection zone \citep[e.g.,][]{Christensen-Dalsgaard:1997aa}.
From equation~(\ref{eq:nabla_alpha_efficient}), it is possible to reproduce a majority of a model's $\nabla_{\text{s}}$ profile using the model's $\alpha_{\text{MLT}}$, and an unperturbed, or reference, model's $\alpha_{\text{MLT}}$ and $\nabla_{\text{s}}$, via
\begin{equation} \label{eq:nabla_alpha_efficient_full}
\nabla_{\text{s}} \simeq \nabla_{\text{s}_0} \left(\frac{\alpha_{\text{MLT}}}{\alpha_{\text{MLT}_0}}\right)^{-4/3},
\end{equation}
where zero subscripts denote values from the unperturbed model. This is valid only for models where the convective flux remains roughly the same as in our fiducial model. In Figure~\ref{fig:supergrad_interpol_v_logRho}, we plot the outputted $\nabla_{\text{s}}$ and those reproduced using equation~(\ref{eq:nabla_alpha_efficient_full}) as a function of $\log_{10}{(\rho)}$ for $0.3 \, \text{M}_\odot$, $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ stellar models with $\alpha_{\text{MLT}} = 0.8 - 1.7$ ($\Delta 0.3$). We choose $\alpha_{\text{MLT}} = 1.7$ to be our unperturbed model and the lower limit $\alpha_{\text{MLT}} = 0.8$ corresponds to the lowest $\alpha_{\text{MLT}}$ for which the convective flux is similar to the unperturbed model. We plot $\nabla_{\text{s}}$ linearly to show the surface layers more clearly. Small deviations are increasingly evident right near the photosphere in the $10 \, \text{Myr}$ models with decreasing $\alpha_{\text{MLT}}$, as the ``low efficiency'' regime (ignored in equation~(\ref{eq:nabla_alpha_efficient_full})) begins to come into play. However, the approximation of equation~(\ref{eq:nabla_alpha_efficient_full}) captures the behavior of $\nabla_{\text{s}}$ up to $\approx 0.9995 \, \text{R}$.
We turn next to an analysis of the specific entropy in the same models. In Figure~\ref{fig:entropy_v_logRho}, we plot $s$ as a function of $\log_{10}{(\rho)}$ for these models. We obtain $s$ as a function of the radial distribution $r$ in our stellar models by taking the outputted central specific entropy $s_{\text{c}}$ and integrating the specific entropy gradient $ds/dr$ up to a radial point $r'$:
\begin{equation} \label{eq:s_s_ad_int_ds_dr}
s(r') = s_{\text{c}} + \int_0^{r'} \frac{ds}{dr} \, dr.
\end{equation}
$ds / dr$ is related to the superadiabaticity $\nabla_{\text{s}}$ through the first and second law of thermodynamics:
\begin{equation} \label{eq:ds_dr_relation}
\frac{ds}{dr} = - \frac{c_p}{H_p} \nabla_{\text{s}}.
\end{equation}
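A minimal sketch of this integration is given below (in Python); in practice the profile arrays ($r$, $c_p$, $H_p$, $\nabla_{\text{s}}$) and the central specific entropy would be taken from a MESA profile file.
\begin{verbatim}
import numpy as np

def entropy_profile(r, c_p, H_p, nabla_s, s_c):
    """Specific entropy vs. radius from the cumulative integral of
    ds/dr = -(c_p / H_p) * nabla_s, with arrays ordered centre -> surface."""
    ds_dr = -(c_p / H_p) * nabla_s
    cumulative = np.concatenate(
        ([0.0], np.cumsum(0.5 * (ds_dr[1:] + ds_dr[:-1]) * np.diff(r))))
    return s_c + cumulative
\end{verbatim}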
In the bulk of the convection zone, specific entropy asymptotically converges with depth towards a nearly-constant specific entropy value $s_{\text{ad}}$. The value of $s_{\text{ad}}$ largely determines the stellar structure, including the stellar radius. As noted by \citet{Gough:1976aa}, a perfect theory of convection would specify this adiabat (i.e., fix $s_{\text{ad}}$), but in practice it must be calibrated via observations. To be specific, note that for a fully-convective isentropic star (with $\gamma = 5/3$), we would have $s \propto \ln{(T^{3/2}/\rho)} = const$. In this case, properties at the center `c' and the photosphere `ph' would be directly linked, with $(T_{\text{c}}^{3/2}/\rho_{\text{c}}) = (T_{\text{ph}}^{3/2}/\rho_{\text{ph}})$, where $T_{\text{ph}} \equiv T_{\text{eff}}$. Specifying the surface properties and the adiabat would in this case clearly suffice to determine the properties of the star everywhere in its interior.
However, standard stellar structure models are not perfectly isentropic. Ascending into the surface layers, specific entropy decreases: although $\nabla_{\text{s}}$ is nearly constant there (Figure~\ref{fig:logsupergrad_v_logRho}), the entropy gradient (equation~(\ref{eq:ds_dr_relation})) is increasingly negative. This arises because although $c_p$ remains high even near the surface (in fact, in these models it is higher at $0.9995 \, \text{R}$ than at $0.99 \, \text{R}$), $H_p$ declines monotonically, implying that $ds/dr$ increases in magnitude near the surface. This non-zero $ds/dr$ implies that there is an entropy jump $\Delta s$ between the interior adiabat and the surface value. If this is the only region where $ds/dr$ is non-zero, then the ratio of the central and photospheric properties, from the logarithmic argument of equation~(\ref{eq:closed_form_s}), is now a function of $\Delta s$:
\begin{equation}\label{eq:delta_s_ratio}
\frac{T_{\text{c}}^{1/(\gamma - 1)}/\rho_{\text{c}}}{T_{\text{ph}}^{1/(\gamma - 1)}/\rho_{\text{ph}}} = \exp{\left(\frac{\mu \Delta s}{N_{\text{A}} k_{\text{B}}}\right)},
\end{equation}
demonstrating explicitly how noticeable values of $\Delta s$ may influence the stellar properties of fully-convective models.
Examining the variation of specific entropy in Figure~\ref{fig:entropy_v_logRho}, a few key trends are clear. At both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$, models with lower $\alpha_{\text{MLT}}$ always have a larger contrast $\Delta s$ between the photosphere and the deep interior. The lower-$\alpha_{\text{MLT}}$ models also have a lower specific entropy at the photosphere $s_{\text{ph}}$. In the pre-main-sequence models, models at lower $\alpha_{\text{MLT}}$ also possess a higher internal entropy $s_{\text{ad}}$, but by an age of $1 \, \text{Gyr}$ this variation has largely vanished, with only the very lowest-$\alpha_{\text{MLT}}$ model here ($\alpha_{\text{MLT}} = 0.5$) possessing a noticeably higher $s_{\text{ad}}$. These features can be understood as discussed below.
First, consider the overall entropy contrast $\Delta s$ in the near-surface layers. To quantify how the profile of specific entropy varies with $\alpha_{\text{MLT}}$, we first consider $\Delta s$ expressed in terms of $\nabla_{\text{s}}$ via equation~(\ref{eq:ds_dr_relation}):
\begin{equation} \label{eq:s_jump}
\Delta s = - \int_0^{R} \frac{ds}{dr} \, dr = \int_0^{R} \frac{c_p}{H_p} \nabla_{\text{s}} \, dr.
\end{equation}
Using equation~(\ref{eq:nabla_alpha_general_relation}), it can be shown that $\Delta s$ increases with decreasing $\alpha_{\text{MLT}}$:
\begin{multline} \label{eq:s_jump_alpha}
\Delta s = \alpha_{\text{MLT}}^{-4/3} \int_0^{R} \frac{c_p}{H_p} \left(\frac{4 \sqrt{2} F_{\text{conv}}}{c_p (p \rho Q)^{1/2} T}\right)^{2/3} \, dr \\
+ \alpha_{\text{MLT}}^{-8/3} \int_0^{R} \frac{c_p}{H_p} \frac{1}{A_{\text{other}}} \left(\frac{4 \sqrt{2} F_{\text{conv}}}{c_p (p \rho Q)^{1/2} T}\right)^{1/3} \, dr,
\end{multline}
where $\alpha_{\text{MLT}}$ can be taken out of the integrands because it is depth-independent. As we are able to reproduce the majority of $\nabla_{\text{s}}$ via the high-efficiency regime using equation~(\ref{eq:nabla_alpha_efficient_full}), it follows for models where the convective flux remains roughly the same as in our unperturbed model that
\begin{equation}\label{eq:delta_s_prop_alpha_4_3}
\Delta s \propto \alpha_{\text{MLT}}^{-4/3}.
\end{equation}
Next, consider the photospheric entropy in the models. For an ideal gas with $\gamma=5/3$,
\begin{equation} \label{eq:s_ph_T_eff}
s_{\text{ph}} \simeq \frac{N_{\text{A}} k_{\text{B}}}{\mu} \ln{\frac{T_{\text{eff}}^{5/2}}{p_{\text{ph}}}} \propto \frac{N_{\text{A}} k_{\text{B}}}{\mu} \ln{(T_{\text{eff}}^{23/2} \rho_{\text{ph}}^{1/2} R^2)},
\end{equation}
where $R$ is the stellar radius, and the proportionality assumes that the photosphere occurs at a pressure $p_{\text{ph}} \propto g/\kappa_{\text{ph}}$, with the surface opacity $\kappa_{\text{ph}}$ taken for simplicity to be dominated by H- opacity \citep{1988PASP..100.1474S}, which is proportional to $\rho_{\text{ph}}^{1/2}T_{\text{eff}}^{9}$. Note that in actuality, molecules also contribute substantially to the near-surface opacity in objects of this mass \citep{2005ApJ...623..585F}, and become more dominant at lower masses. The photospheric entropy is thus tightly linked to variations in the effective temperature, and this in turn is tightly constrained to lie within a narrow range: if the temperature were suddenly made much higher, for example, the opacity would sharply increase, increasing the optical depth at a given pressure level and hence driving the photosphere upwards (i.e., to lower pressure, and hence to lower temperatures). Conversely, much lower temperatures would lead to much lower opacities, requiring that the photosphere (at fixed optical depth) move inwards (to higher pressures, and higher temperatures). This behavior is well-known, and is essentially the basis for the ``forbidden region" of cool temperatures in pre-main-sequence evolution \citep{1961PASJ...13..450H}. In the present context, only modest variations in $T_{\text{eff}}$ are therefore allowed. Within this allowed range, models with lower $\alpha_{\text{MLT}}$ have a lower $T_{\text{eff}}$: for the same initial interior conditions, steeper entropy (and temperature) gradients are, per our discussion of $\Delta s$ above, required to carry out the same surface luminosity, and this leads to slightly lower surface temperatures (the subsequent evolution of $T_{\text{eff}}$ is somewhat more involved, as we will discuss more below, but the tendency to have lower $T_{\text{eff}}$ at lower $\alpha_{\text{MLT}}$ is robust). The strong dependence of $s_{\text{ph}}$ on $T_{\text{eff}}$ dominates over changes in $\rho_{\text{ph}}$ and stellar radii between models at a given age, implying (finally) that $s_{\text{ph}}$ is lower in models with lower $\alpha_{\text{MLT}}$.
Finally, we turn to discussion of the nearly-constant specific entropy $s_{\text{ad}}$ in the deep interior of the models. This exhibits different behavior on the main-sequence than during the pre-main-sequence contraction phase. Recall that during this phase, stars descend along a Hayashi track at nearly constant $T_{\text{eff}}$; they contract because they are losing total energy (via radiative losses from the surface), so the contraction rate depends on the star's luminosity. From the virial theorem, the internal temperature of the star increases as its radius decreases ($T \propto R^{-1}$), but the increasing density ($\rho \propto R^{-3}$) results in a net loss of entropy. During this phase, it is clear from Figure~\ref{fig:entropy_v_logRho} that $s_{\text{ad}}$ is higher at a given age in models with lower $\alpha_{\text{MLT}}$. This mostly reflects the fact that these low-$\alpha_{\text{MLT}}$ models have had a slightly lower effective temperature during their contraction, and have ultimately lost somewhat less entropy at any fixed time; they therefore have a somewhat greater specific entropy at the time sampled in this figure. At these ages, the enhanced entropy contrast associated with lower $\alpha_{\text{MLT}}$ (per our discussion above) is thus not entirely confined to the near-surface layers: though the photospheric entropy is lower for low-$\alpha_{\text{MLT}}$ models, $s_{\text{ad}}$ is also higher.
The pre-main-sequence contraction eventually ends because the interior temperature and density have increased enough for nuclear fusion in the core (rather than gravitational contraction) to provide the energy needed to offset the star's radiative losses at the surface. On the main-sequence, then, the value of $s_{\text{ad}}$ is not merely determined by the star's initial entropy and by its passive cooling (which was mediated by the near-surface layers): rather, it is bounded from below by the entropy production associated with nuclear fusion occurring in a steady state. Of course this also is informed by the near-surface layers to some degree, but only insofar as these affect the entropy production rate by nuclear reactions. For the depth-independent $\alpha_{\text{MLT}}$ values probed here, these changes are modest, and so the deep interior entropy $s_{\text{ad}}$ is largely constant across models with varying $\alpha_{\text{MLT}}$ (at even smaller values of $\alpha_{\text{MLT}}$, $s_{\text{ad}}$ would be altered, as explored for example in \citealt{Chabrier:2007aa}). Thus in these models the higher $\Delta s$ associated with less efficient convection is almost entirely confined to the near-surface layers: the decrease in photospheric entropy with decreasing $\alpha_{\text{MLT}}$ compensates almost exactly for the increasing $\Delta s$.
\subsection{Scaling of stellar radius with $s_{\text{ad}}$ and $\alpha_{\text{MLT}}$} \label{subsec:radius_s_ad}
It has long been realized that a star's radius is sensitive to changes in its entropy \citep[see, e.g.][]{1988PASP..100.1474S,2004sipp.book.....H}. {For example, for a star with constant specific entropy, well-described by a polytropic model $p = K \rho^\gamma$, where $K$ is the polytropic constant, straightforward rearrangement gives}
\begin{equation}\label{eq:s_polytropic}
{s = \frac{N_{\text{A}} k_{\text{B}}}{\mu} \ln{(K)}}.
\end{equation}
{It can be shown that $K \propto M^{2-\gamma} R^{3 \gamma - 4}$ (see equation~(7.40) in \citealt{2004sipp.book.....H}), where $M$ is the stellar mass. Substituting this into equation~(\ref{eq:s_polytropic}) and integrating over the mass distribution of the stellar model, which yields the total entropy $S_{\text{tot}} \sim s M$ for a star of uniform composition, shows that the stellar radius increases exponentially with $S_{\text{tot}}$ at fixed mass:}
\begin{equation}\label{eq:R_exp_s_tot}
R \propto \exp{\left(\frac{\gamma - 1}{3 \gamma -4} \frac{\mu S_{\text{tot}}}{N_{\text{A}} k_{\text{B}} M}\right)},
\end{equation}
as noted for example in \citet{2004sipp.book.....H} (their equation~(7.150)). More precise relations between $R$, $S_{\text{tot}}$, and other variables can be derived in some specific cases, and these figure prominently in the classic theory of stellar structure \citep[e.g.,][]{1926ics..book.....E,1961PASJ...13..442H}. For example, for a star in hydrostatic equilibrium, the assumption of a perfectly isentropic interior allows the central temperature, pressure, and density to be related to their values at the surface, following standard polytropic theory. If the energy generation $\epsilon$ is provided by nuclear fusion, it is further possible to solve for the radius of the star from first principles (by equating the luminosity produced by fusion, $L_{\text{fusion}} \propto R^3 \epsilon \propto R^3 \rho_{\text{c}}^2 T_{\text{c}}^{6}$ for the pp-chain, to the surface luminosity $L_{\text{surf}} = 4 \pi \sigma R^2 T_{\text{eff}}^4$, and adopting a closed-form expression for the surface opacity).
However, the structure models calculated by MESA (or any other stellar structure code) are not isentropic. The level of departure from isentropy depends on details of the models, and in particular on the convective mixing length. In practice, as discussed in \S~\ref{subsec:role_entropy}, most of the entropy resides in the deep interior with nearly-constant specific entropy $s_{\text{ad}}$, so that $S_{\text{tot}} \simeq s_{\text{ad}} M$ and equation~(\ref{eq:R_exp_s_tot}) simplifies to
\begin{equation}\label{eq:R_exp_s_ad}
R \propto \exp{\left(\frac{\gamma - 1}{3 \gamma -4} \frac{\mu s_{\text{ad}}}{N_{\text{A}} k_{\text{B}}}\right)}.
\end{equation}
{Thus, we can relate the ratio of two stellar radii and the change in $s_{\text{ad}}$ between two fixed mass models:}
\begin{figure*}
\gridline{\fig{R2_R1_LHS_v_RHS_s_ad_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{R2_R1_LHS_v_RHS_s_ad_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{The ratio of stellar radii $R_2/R_1$ as a function of $\Delta s_{\text{ad}}$ via equation~(\ref{eq:radii_entropy_relation}), for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\alpha_{\text{MLT}} = 0.5 - 1.7$ ($\Delta 0.05$). We see a divergence at $1 \, \text{Gyr}$, which arises from deviations from the ideal equation of state. $y=x$ (orange) is plotted for ease of comparison.\label{fig:R2_R1_LHS_v_RHS_s_ad}}
\end{figure*}
\begin{equation}\label{eq:radii_entropy_relation}
\frac{R_2}{R_1} \simeq \exp{\left(\frac{\gamma - 1}{3\gamma - 4} \frac{\mu \Delta s_{\text{ad}}}{N_{\text{A}} k_{\text{B}}} \right)}.
\end{equation}
{Here, $R_1$ and $R_2$ are the radii of the first and second models, respectively, and we assume a uniform $\gamma = 5/3$ for simplicity.} This illustrates how an increase in $s_{\text{ad}}$ ``inflates" the stellar radius. Choosing $\alpha_{\text{MLT}} = 1.7$ to be our unperturbed model, we determine an unperturbed stellar radius $R_0 = 0.683 \, \text{R}_\odot$ and $0.286 \, \text{R}_\odot$, for $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ respectively.
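To give a sense of scale, equation~(\ref{eq:radii_entropy_relation}) is straightforward to evaluate numerically. The short Python sketch below is illustrative only and is not part of our MESA implementation; the mean molecular weight $\mu = 0.6$ and the sample $\Delta s_{\text{ad}}$ are assumed values chosen for demonstration:
\begin{verbatim}
import numpy as np

# Physical constants (cgs)
N_A = 6.02214076e23     # Avogadro's number [mol^-1]
k_B = 1.380649e-16      # Boltzmann constant [erg K^-1]

def radius_ratio(delta_s_ad, mu=0.6, gamma=5.0/3.0):
    """R2/R1 implied by a change delta_s_ad [erg g^-1 K^-1] in the
    interior adiabat, per equation (radii_entropy_relation)."""
    prefac = (gamma - 1.0) / (3.0*gamma - 4.0)   # = 2/3 for gamma = 5/3
    return np.exp(prefac * mu * delta_s_ad / (N_A * k_B))

# An assumed change of 3e7 erg/g/K in s_ad inflates the radius by ~15%
print(radius_ratio(3.0e7))
\end{verbatim}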
Models with different $\alpha_{\text{MLT}}$ have somewhat different radii. For example, at $10 \, \text{Myr}$, ``perturbing" our standard model by considering $\alpha_{\text{MLT}}$ in the range 0.5-1.7 results in radius inflation $\Delta R / R_0 \lesssim 17.5 \%$ ($R \lesssim 0.803 \, \text{R}_\odot$). However, for $1 \, \text{Gyr}$ models with the same range of $\alpha_{\text{MLT}}$, we only find $\Delta R / R_0 \lesssim 1.5 \%$ ($R \lesssim 0.289 \, \text{R}_\odot$); we analyze this important difference in the radius inflation between pre-main-sequence and main-sequence models in more detail below, but for now note that it stems partly from the lower superadiabaticity of these main-sequence models. This in turn implies that the properties of fixed mass fully-convective main-sequence stars are relatively insensitive to $\alpha_{\text{MLT}}$ in standard stellar structure models \citep[as noted previously by, e.g.,][]{Chabrier:2007aa,Feiden:2014aa}.
In Figure~\ref{fig:R2_R1_LHS_v_RHS_s_ad}, we examine the ratio of two outputted stellar radii as a function of $\Delta s_{\text{ad}}$ via equation~(\ref{eq:radii_entropy_relation}), for $0.3 \, \text{M}_\odot$ stellar models at both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ for all possible model comparisons between $\alpha_{\text{MLT}} = 0.5 - 1.7$ ($\Delta 0.05$). The line $y=x$, which would indicate perfect agreement with equation~(\ref{eq:radii_entropy_relation}), is over-plotted (orange line) for ease of comparison. At $10 \, \text{Myr}$ (left panel), the variations in stellar radii are captured extremely well by this expression; at $1 \, \text{Gyr}$ (right panel), they deviate from it slightly. The small deviations from equation~(\ref{eq:radii_entropy_relation}) arise partly from departures from the ideal equation of state assumed in our derivation of this equation. In particular, the central temperature for stars of this mass on the main sequence deviates slightly from the virial expectation that $T \propto M/R$ (owing partly to the fact that these interiors are somewhat degenerate). {Further deviations from equation~(\ref{eq:radii_entropy_relation}) arise due to our assumption of a uniform $\gamma = 5/3$ in deriving this expression; in our models, $\gamma$ is indeed roughly uniform (and $= 5/3$) in the interiors of our pre-main-sequence models, but deviates from this slightly on the main-sequence. (These deviations in turn arise partly from Coulomb interactions, which though small are not entirely negligible.)} Note, further, that the overall range in stellar radii across all models, and likewise the variation in $s_{\text{ad}}$ across these models, is much smaller than on the pre-main-sequence.
\begin{figure*}
\gridline{\fig{s_ad_v_alpha_4_3_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{s_ad_v_alpha_4_3_1000_0_Myr.pdf}{0.525\textwidth}{(b)}
}
\caption{$s_{\text{ad}}$ as a function of $\alpha_{\text{MLT}}^{-4/3}$ for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\alpha_{\text{MLT}} = 0.8 - 1.7$ ($\Delta 0.05$). We extrapolate to the isentropic value of $s_{\text{ad}}$ at each age; this value drops with age on the pre-main-sequence and settles to a nearly constant value on the main-sequence. The slope of the trend between $s_{\text{ad}}$ and $\alpha_{\text{MLT}}^{-4/3}$ also decreases with age.} \label{fig:s_ad_v_alpha_4_3}
\end{figure*}
The changes in $s_{\text{ad}}$, and hence in the stellar radius, are linked to changes in $\alpha_{\text{MLT}}$. To examine this quantitatively, we must find how the value of the adiabat is linked to $\Delta s$. For example, if all the changes in $\Delta s$ between models were reflected simply in changes to the photospheric entropy $s_{\text{ph}}$, this would imply an $s_{\text{ad}}$ that is nearly uniform across models; meanwhile if $s_{\text{ph}}$ were instead somehow held constant across all models, changes in $\Delta s$ would translate directly to changes in $s_{\text{ad}}$. The true relation between $s_{\text{ph}}$ and $s_{\text{ad}}$ (and hence $\Delta s$) is more complex than either of these simple examples. Overall, though, as established previously, a decrease in $\alpha_{\text{MLT}}$ decreases $s_{\text{ph}}$ and (on the pre-main-sequence in particular) increases $s_{\text{ad}}$.
To see roughly why this is so, note that in general the surface luminosity $L_{\text{surf}} \propto R^2 T_{\text{eff}}^4$, which (using equation~(\ref{eq:s_ph_T_eff})) can be written as
\begin{equation}\label{eq:Lsurf_sph}
L_{\text{surf}} \propto \exp{\left(\frac{\mu s_{\text{ph}}}{N_{\text{A}} k_{\text{B}}} \right)} / (\rho_{\text{ph}}^{1/2} T_{\text{eff}}^{15/2}).
\end{equation}
On the pre-main-sequence, the luminosity is ultimately derived from gravitational contraction, with $L_{\text{surf}} \propto R^{-2} (dR/dt)$. Equating the two, and noting how $R$ scales with $s_{\text{ad}}$ (equation~(\ref{eq:R_exp_s_ad})), implies that for contraction at nearly constant effective temperature, we must have
\begin{equation}\label{eq:Lsurf_sph2}
\exp{\left(-\frac{\gamma - 1}{3 \gamma -4} \frac{\mu s_{\text{ad}}}{N_{\text{A}} k_{\text{B}}} \right)} \propto \exp{\left(\frac{\mu s_{\text{ph}}}{N_{\text{A}} k_{\text{B}}} \right)} / (\rho_{\text{ph}}^{1/2} T_{\text{eff}}^{15/2}).
\end{equation}
This in turn implies that $s_{\text{ad}} \propto - s_{\text{ph}}$ on the pre-main-sequence (plus additional smaller terms). A similar proportionality holds on the main-sequence, where now the interior luminosity is generated by fusion, with $L \propto R^3 \epsilon \propto R^3 \rho_{\text{c}}^2 T_{\text{c}}^6 \propto R^{-9}$ for stars in virial equilibrium. This again implies $s_{\text{ad}} \propto - s_{\text{ph}}$, but with a different (and in fact significantly smaller) constant of proportionality. Thus in both cases, in comparing models of similar total convective flux ($\alpha_{\text{MLT}} = 0.8-1.7$), we have that $s_{\text{ad}} \propto \Delta s$, hence
\begin{equation}\label{eq:s_ad_prop_alpha_4_3}
s_{\text{ad}} \propto \alpha_{\text{MLT}}^{-4/3}.
\end{equation}
The constant of proportionality decreases with the age of the model---as discussed previously, the interior adiabat in pre-main-sequence models is more sensitive to variations in $\alpha_{\text{MLT}}$---but the proportionality holds true even for main-sequence models.
In Figure~\ref{fig:s_ad_v_alpha_4_3}, we examine $s_{\text{ad}}$ as a function of $\alpha_{\text{MLT}}^{-4/3}$ for $0.3 \, \text{M}_\odot$ stellar models at $\alpha_{\text{MLT}} = 0.8 - 1.7$ ($\Delta 0.05$) for both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$; the proportionality in equation~(\ref{eq:s_ad_prop_alpha_4_3}) holds at both ages. We extrapolate to find $s_{\text{ad} (\alpha_{\text{MLT}} \rightarrow \infty)}$, i.e., the value corresponding to an isentropic model, which gives the constant of proportionality in equation~(\ref{eq:s_ad_prop_alpha_4_3}) as
\begin{equation}\label{eq:d_sad_s_alpha_4_3}
\frac{d s_{\text{ad}}}{d\alpha_{\text{MLT}}^{-4/3}} \approx \frac{s_{\text{ad}} - s_{\text{ad} (\alpha_{\text{MLT}} \rightarrow \infty)}}{\alpha_{\text{MLT}}^{-4/3}}.
\end{equation}
Thus, for fully-convective stellar models of similar total convective flux, one can predict the radius inflation between two models of known $\alpha_{\text{MLT}}$ without having to determine the perturbed model's $s_{\text{ad}}$, using only the unperturbed model's $s_{\text{ad}}$ and $s_{\text{ad} (\alpha_{\text{MLT}} \rightarrow \infty)}$ at a given age:
\begin{equation} \label{eq:R2_R1_delta_alpha_4_3_relation}
{\frac{R_2}{R_1} \approx \exp{\left(\frac{\gamma - 1}{3\gamma - 4} \frac{\mu}{N_{\text{A}} k_{\text{B}}} \frac{s_{\text{ad}_1} - s_{\text{ad} (\alpha_{\text{MLT}} \rightarrow \infty)}}{\alpha_{\text{MLT}_1}^{-4/3}} \Delta \left(\alpha_{\text{MLT}}^{-4/3}\right)\right)}}.
\end{equation}
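The following Python sketch evaluates equation~(\ref{eq:R2_R1_delta_alpha_4_3_relation}) for two choices of $\alpha_{\text{MLT}}$; it is again illustrative only, and the entropy values passed in the final line are assumed rather than taken from our models:
\begin{verbatim}
import numpy as np

N_A = 6.02214076e23     # Avogadro's number [mol^-1]
k_B = 1.380649e-16      # Boltzmann constant [erg K^-1]

def predicted_inflation(alpha_1, alpha_2, s_ad_1, s_ad_inf,
                        mu=0.6, gamma=5.0/3.0):
    """R2/R1 from equation (R2_R1_delta_alpha_4_3_relation), given the
    unperturbed model's alpha_1 and adiabat s_ad_1, the extrapolated
    isentropic value s_ad_inf, and the perturbed model's alpha_2.
    Entropies are in erg g^-1 K^-1."""
    slope = (s_ad_1 - s_ad_inf) / alpha_1**(-4.0/3.0)  # d s_ad / d alpha^(-4/3)
    d_alpha_43 = alpha_2**(-4.0/3.0) - alpha_1**(-4.0/3.0)
    prefac = (gamma - 1.0) / (3.0*gamma - 4.0)
    return np.exp(prefac * mu * slope * d_alpha_43 / (N_A * k_B))

# Assumed entropy values, for illustration only:
print(predicted_inflation(1.7, 0.5, s_ad_1=9.0e8, s_ad_inf=8.9e8))
\end{verbatim}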
\begin{figure}
\includegraphics[width=0.5\textwidth]{ln_R_2_R_1_v_delta_alpha_4_3.pdf}
\caption{$\log{(R_2/R_1)}$ as a function of $\Delta \left(\alpha_{\text{MLT}}^{-4/3}\right)$ at various ages for $0.3 \, \text{M}_\odot$ stellar models at $\alpha_{\text{MLT}} = 0.8 - 1.7$ ($\Delta 0.05$). During the pre-main-sequence, the trend decreases with age (see equation~(\ref{eq:radius_inflation_time})), eventually reaching levels of negligible radius inflation in the main-sequence.} \label{fig:ln_R_2_R_1_v_delta_alpha_4_3}
\end{figure}
In Figure~\ref{fig:ln_R_2_R_1_v_delta_alpha_4_3}, we examine the ratio of two model stellar radii as a function of $\Delta \left(\alpha_{\text{MLT}}^{-4/3}\right)$ at different ages between $10 \, \text{Myr}$ and $1 \, \text{Gyr}$. Models at all $\alpha_{\text{MLT}}$ contract on the pre-main-sequence, with $dR/dt \propto R^4$, implying in turn that $R \propto t^{-1/3}$ if the effective temperature remains constant. In our models, the radius inflation between two models of differing $\alpha_{\text{MLT}}$ decreases with age, becoming almost negligible during the main-sequence ($1 \, \text{Gyr}$). This time dependence ultimately reflects the fact that (as shown in Figure~\ref{fig:s_ad_v_alpha_4_3}) $d s_{\text{ad}}/d \alpha_{\text{MLT}}^{-4/3}$ (equation~(\ref{eq:d_sad_s_alpha_4_3})) changes with time, becoming much shallower on the main sequence; per equation~(\ref{eq:R2_R1_delta_alpha_4_3_relation}), this means a less pronounced radius inflation for a given change in $\alpha_{\text{MLT}}$. Empirically, we find that
\begin{equation} \label{eq:radius_inflation_time}
\frac{R_2}{R_1} \propto t^{-0.03 \Delta \left(\alpha_{\text{MLT}}^{-4/3}\right)},
\end{equation}
demonstrating that the relative radius inflation between two models decays with time, and does so more rapidly for a larger change in $\alpha_{\text{MLT}}$.
As previously discussed, during the main-sequence, $s_{\text{ad}}$ is predominantly bounded by the entropy production via nuclear fusion, thus any changes in $s_{\text{ph}}$ in our models fail to produce noticeable changes in $s_{\text{ad}}$. Hence, for our range of main-sequence models, where $\Delta s_{\text{ph}} \lesssim 10^7 \, \text{erg} \, \text{g}^{-1} \, \text{K}^{-1}$, we find that $R_2 \approx R_1$, as demonstrated by the trend at $1 \, \text{Gyr}$ in Figure~{\ref{fig:ln_R_2_R_1_v_delta_alpha_4_3}}.
Note that our lowest-efficiency models in Figures~\ref{fig:logsupergrad_v_logRho} and~\ref{fig:entropy_v_logRho} have $\alpha_{\text{MLT}} = 0.5$; at even lower values, radius inflation is possible even on the main sequence, as demonstrated for example by \citet{Chabrier:2007aa}. In this regime, however, the convective flux is not the same as at higher values of $\alpha_{\text{MLT}}$ (that is, the nuclear energy generation in the interior is affected), breaking the assumptions made in our analysis. Indeed, \citet{Chabrier:2007aa} show that at $\alpha_{\text{MLT}} \approx 0.05$ a radiative (stable) core begins to form in the interior, violating our assumption that the star is fully-convective. We defer analysis of such cases to other work.
\section{Rotational inhibition of convection: Stevenson (1979) formulation} \label{sec:rot_inhibition_conv}
\subsection{Theory: rotational modification to MLT} \label{subsec:theory_rot}
As noted in \S~\ref{sec:intro}, rotation generally acts to inhibit convection. In linear theory, this inhibition manifests as an increase in the critical Rayleigh number required to drive convection \citep{1961hhs..book.....C}. The effects of rotation in the non-linear regime are more difficult to gauge, but many authors have argued that ultimately the temperature gradient required to transport a given heat flux by convection must increase somewhat if the rotation is sufficiently rapid. \citet{Stevenson:1979aa} (\citetalias{Stevenson:1979aa}, hereafter), for example, derived a mixing length prescription for rotating convection through consideration of the growth of linear, Boussinesq convective modes, constructing a finite amplitude theory by assuming that non-linearities, such as shear instabilities, limit the amplitude of the flow. Following \citet{1954RSPSA.225..196M}, \citetalias{Stevenson:1979aa} argued that the convective flow is dominated by the modes that transport the most heat. \citetalias{Stevenson:1979aa} use this model to relate $\nabla_{\text{s}}$ in a ``perturbed" model (at rotation rate $\Omega$) to the unperturbed (non-rotating) model's:
\begin{equation} \label{eq:general_rotation}
\left(\frac{\nabla_{\text{s}}}{\nabla_{\text{s}_0}}\right)^{5/2} - \frac{\nabla_{\text{s}}}{\nabla_{\text{s}_0}} = \frac{1}{41} \text{Ro}^{-2} \equiv \frac{4}{41} \tau_{\text{c}_0}^2 \Omega^2,
\end{equation}
where $\tau_{\text{c}_0}$ is the convective turnover time of the unperturbed model, and $\text{Ro} \equiv (2 \tau_{\text{c}_0} \Omega)^{-1}$ is the Rossby number.
In the slow regime, i.e., $\text{Ro} \gg 1$,
\begin{equation} \label{eq:slow_rotation}
\nabla_{\text{s}} \simeq \nabla_{\text{s}_0} \left(1 + \frac{1}{62} \text{Ro}^{-2}\right) \equiv \nabla_{\text{s}_0} \left(1 + \frac{4}{62} \tau_{\text{c}_0}^2 \Omega^2 \right),
\end{equation}
converging towards the non-rotating model. In the rapid regime, i.e., $\text{Ro} \ll 1$,
\begin{equation} \label{eq:rapid_rotation}
\nabla_{\text{s}} \simeq 0.23 \nabla_{\text{s}_0} \text{Ro}^{-4/5} \equiv 0.92 \nabla_{\text{s}_0} \tau_{\text{c}_0}^{4/5} \Omega^{4/5}.
\end{equation}
Because $\nabla_{\text{s}}$ sets the magnitude of the radial entropy gradient $ds/dr$, this mechanism modifies the gradient of the specific entropy; i.e., the specific entropy asymptotically converges to a different adiabat in the presence of rotation.
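As a concrete illustration of how equation~(\ref{eq:general_rotation}) behaves between these two limits, the following Python sketch solves for $\nabla_{\text{s}}/\nabla_{\text{s}_0}$ at a given Rossby number. It assumes SciPy is available; the function name and the sample Rossby numbers are our own, for illustration only, and the sketch is not part of our MESA implementation:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def grad_ratio(rossby):
    """Solve equation (general_rotation) for x = grad_s / grad_s0
    at Rossby number Ro = (2 tau_c0 Omega)^-1."""
    c = rossby**(-2) / 41.0
    f = lambda x: x**2.5 - x - c
    upper = 10.0 * max(1.0, c**0.4) + 2.0   # generous upper bracket
    return brentq(f, 1.0, upper)            # physical root has x >= 1

# Slow rotation recovers x ~ 1 + Ro^-2/62 (equation slow_rotation);
# rapid rotation recovers x ~ 0.23 Ro^(-4/5) (equation rapid_rotation).
for ro in [10.0, 1.0, 0.1, 0.01]:
    print(ro, grad_ratio(ro))
\end{verbatim}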
We are motivated to explore this reformulation of MLT partly because more recent investigations have suggested similar scalings for the temperature gradient and/or velocity in rapidly-rotating convection. For example, as noted in \S~\ref{sec:intro}, \citet{Barker:2014aa} derive a rotating MLT equivalent to that of \citetalias{Stevenson:1979aa} via simplified physical arguments, achieving the same scaling between $\nabla_{\text{s}}$ ($dT/dz$ in their case) and $\Omega$ in the rapidly-rotating regime. To test their relationship, they take an average of $dT/dz$ from the middle third of the convection zone in a series of 3D hydrodynamical simulations of Boussinesq convection in a Cartesian box. They find that $dT/dz$ in the simulations does indeed scale with $\Omega$ as dictated by equation~(\ref{eq:rapid_rotation}), and likewise that the typical velocities and spatial structures of the flow also scale with $\Omega$ in the manner predicted by the theory. Previously, \citet{2012PhRvL.109y4503J} also examined the transport in rapidly-rotating convection using a set of asymptotically reduced equations. They likewise find that heat transport in the rapidly-rotating regime is ``throttled" by convection in the bulk of the domain---in marked contrast to the non-rotating case, which is controlled mainly by the boundary layers. Overall, their theoretical model yields scalings of $dT/dz$ as a function of $\Omega$ that are arguably compatible with those in \citetalias{Stevenson:1979aa} and \citet{Barker:2014aa}. The broad concordance between these different theoretical models suggests that the MLT formulation adopted in \citetalias{Stevenson:1979aa}, though undoubtedly a simplified description of the complex flows occurring in actual stars, may nonetheless adequately capture how the primary quantity of interest for stellar convection---namely the temperature or entropy gradient as a function of the flux---varies with rotation rate.
We therefore incorporate rotational effects into our 1D stellar structure models by implementing the modified MLT formulation of \citetalias{Stevenson:1979aa} into MESA. Observations and simulations of fully-convective stars have indicated that they are likely to rotate mostly as solid bodies, supporting our choice of a fixed $\Omega$ to model rotational inhibition. \citet{Barnes:2005aa} shows that observed surface differential rotation diminishes with increasing convective depth in low-mass stars, and magnetohydrodynamical (MHD) simulations performed by, e.g., \citet{Browning:2008aa} and \citet{2015ApJ...813L..31Y,2016GeoJI.204.1120Y} suggest that magnetic fields react back strongly on the flows, helping to enforce solid-body rotation.
\subsection{Radius inflation: S79 models} \label{subsec:s79_models}
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_rot_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_rot_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$ stellar models at $\Omega = 0 - 20 \, \Omega_\odot$ ($\Delta 5 \, \Omega_\odot$). As $\Omega$ increases, $\nabla_{\text{s}}$ increases throughout the bulk of the stellar interior ($\text{Ro} \ll 1$), but becomes comparable to the unperturbed model in the near-surface layers ($\text{Ro} \gg 1$).\label{fig:logsupergrad_v_logRho_rot}}
\end{figure*}
Some active low-mass stars are fast rotators, with rotation velocities $v_{\text{rot}} \gtrsim 10 \, \text{km s}^{-1}$ \citep[e.g.,][]{Reid:2002aa,Mohanty:2003aa} in some cases. We test $\Omega = 5 - 20 \, \Omega_\odot$ ($\Delta 5 \, \Omega_\odot$), corresponding to $\lesssim 10\%$ of the break-up velocity of our unperturbed $0.3 \, \text{M}_\odot$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $10 \, \text{Myr}$, and $\lesssim 3\%$ at $1 \, \text{Gyr}$. These produce typical rotation velocities of $v_{\text{rot}} \simeq 7-27 \, \text{km s}^{-1}$ and $v_{\text{rot}} \simeq 3-11 \, \text{km s}^{-1}$ for $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ respectively. We have not attempted to account for changes in the effective gravity as $\Omega$ increases; since the angular velocity in all cases is only a small fraction of the break-up velocity, these effects probably play only a minor role. At each $\Omega$, {we calculate a new value of $\nabla_{\text{s}}$ at each point in the mass distribution, by modifying the non-rotating} $\nabla_{\text{s}}$ according to equation~(\ref{eq:general_rotation}), representing a ``rotating" version of the 1D stellar structure model. The depth-dependence of $\nabla_{\text{s}}$ is then determined by the profile of $\text{Ro}$, which in turn depends on the convective overturning time at every depth in the model. Here, we take this overturning time simply to be $\tau_{\text{c}_{\text{0}}}$ from the \emph{unperturbed} model---that is, we neglect the small changes in overturning time associated with changes in the convective velocity at rapid rotation. This simplification has the consequence that our models slightly underestimate the influence of rotation at any fixed $\Omega$ (compared to a fully self-consistent model), but we show below that this effect is negligible for the overall structure.
In Figure~\ref{fig:logsupergrad_v_logRho_rot}, we plot $\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$ for $0.3 \, \text{M}_\odot$, $\alpha_{\text{MLT}} = 1.7$ stellar models, at both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$, for $\Omega = 0 - 20 \, \Omega_\odot$ ($\Delta 5 \, \Omega_\odot$). In the bulk of the convection zone, convective velocities are low, i.e., $\text{Ro} \ll 1$, hence this region becomes more superadiabatic than the unperturbed model by a few orders of magnitude. However, because this region is already very nearly adiabatic, this perturbation in $\nabla_{\text{s}}$ does not influence the stellar structure noticeably. In the surface layers, where convective velocities become increasingly rapid, $\text{Ro}$ remains $\gg 1$ at all the $\Omega$ values sampled, resulting in negligible changes in the superadiabaticity there, i.e., $\nabla_{\text{s}} \simeq \nabla_{\text{s}_0}$.
The near equivalence of $\nabla_{\text{s}}$ in the surface layers of all these models, and their near-adiabaticity in the bulk of the convection zone, together imply that there are negligible differences between the specific entropy profiles of models at varying rotation rates. As discussed in \S~\ref{sec:entropy}, the radius of the star is determined primarily by the interior adiabat (i.e., $s_{\text{ad}}$), which in turn is largely established by the near-surface layers. Because the near-surface layers have $\text{Ro} \gg 1$ and thus are almost totally uninfluenced by rotation, the entropy jump in all our rotating models is nearly identical to that in the non-rotating case. This in turn means that the specific entropy of the deep interior $s_{\text{ad}}$ is unchanged by rotation, even though the deep layers of the star are strongly influenced by Coriolis forces ($\text{Ro} \ll 1$), and $\nabla_{\text{s}}$ varies considerably between models there. This, following the discussion in \S~\ref{subsec:radius_s_ad}, finally implies that rotation will lead only to negligible changes in the overall structure and radius of the star.
This expectation is confirmed in our models. We measured $\Delta R / R_0 \sim 10^{-2} \%$ and $\sim 10^{-4} \%$ for $\Omega = 5 - 20 \, \Omega_\odot$ models at $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ respectively. Thus, implementing rotational inhibition of convection using this modified formulation of MLT does not produce noticeable changes in the stellar radius. This is, again, due mainly to the depth-dependence of the convective velocities and hence of the Rossby number: if the star were instead well-characterized by a single depth-independent Rossby number, radius inflation would be much more noticeable (for low enough values of $\text{Ro}$).
\section{Magnetic inhibition of convection: MacDonald \& Mullan (2014) formulation} \label{sec:mag_inhibition_conv}
\subsection{Theory: magnetic modification to MLT} \label{subsec:theory_mag}
It is not clear how best to encapsulate the influence of magnetism on convection in 1D stellar structure models. Clearly magnetic fields can inhibit flows via the Lorentz force. However, in the presence of rotation, the effects of magnetism can be more complex, with magnetized rotating fluids sometimes \emph{more} amenable to convection than their non-magnetic equivalents (see \S~\ref{sec:intro}). As with rotation, the impact of magnetism in the non-linear regime is much less clear. Various authors have turned to different prescriptions for encapsulating these effects in 1D models, motivated by physical arguments and results from linear theory, as summarized also in \S~\ref{sec:intro}. Here, we have chosen to focus our attention on one such model, namely that proposed by \citet{MacDonald:2014aa} (\citetalias{MacDonald:2014aa}, hereafter), which is a slightly modified form of that in \citet{Mullan:2001ab}; we adopt this model not because it is necessarily superior to others (e.g., \citealt{Feiden:2012aa}, or the reduced-$\alpha_{\text{MLT}}$ models of \citealt{Chabrier:2007aa}), but because its physical motivation is clear, it has been employed in a series of follow-on papers \citep[see, e.g.,][]{2015MNRAS.448.2019M,MacDonald:2017aa,MacDonald:2017ab}, and it is straightforward to implement in a 1D stellar evolution code. In this section, we briefly describe this prescription, its physical motivation, and then discuss its implementation into MESA models. We aim here to examine whether the mechanism by which radii are inflated in these ``magnetic" models is substantially the same as in the non-magnetic cases discussed in \S~\ref{sec:entropy} and \S~\ref{sec:rot_inhibition_conv}; that is, we examine how the radii, specific entropy, and adopted magnetic prescription are linked. We show that radius inflation in the \citetalias{MacDonald:2014aa} models is, as in their non-magnetic cousins, associated with changes in the specific entropy of the deep interior, which in turn is linked to the entropy contrast in the near-surface layers.
The models of \citetalias{MacDonald:2014aa} are based partly on the linear stability work of \citet{Gough:1966aa}, who derived a criterion for convective instability onset due to a magnetic field in certain circumstances. In non-magnetic models, the criterion of convective onset is purely local; magnetic fields connect parcels of fluid at different levels, so such a criterion is not generally obtainable \citep{Gough:1966aa}. However, simple local stability criteria exist for particularly elementary magnetic field configurations. In practice, \citetalias{MacDonald:2014aa} modify the Schwarzschild criterion due to the presence of a magnetic field:
\begin{equation} \label{eq:schwarz_crit_mag}
\nabla_{\text{rad}} > \nabla_{\text{ad}} + \frac{\delta}{Q},
\end{equation}
where
\begin{equation} \label{eq:delta}
\delta = \frac{B_{\text{v}}^2}{B_{\text{v}}^2 + 4 \pi \gamma P_{\text{g}}}
\end{equation}
is a magnetic inhibition parameter, and $Q = -(\partial \ln{\rho} / \partial \ln{T})_p$ is the isobaric expansion coefficient. In this expression, $P_{\text{g}}$ is the gas pressure and $B_{\text{v}}$ is taken by \citetalias{MacDonald:2014aa} to represent the vertical component of the magnetic field, on the grounds that this component figures prominently in the linear stability analysis of \citet{Gough:1966aa}. More generally, we might take $B_{\text{v}}$ as a crude proxy encompassing both the strength of the field at a point and some aspects of its spatial morphology. This parameter ($\delta/Q$) is added to every instance of $\nabla_{\text{ad}}$ in the MLT prescription, in order to determine the perturbed temperature gradient at a given convective energy flux (or vice versa). Physically, this amounts to asserting that the dimensionless temperature gradient in non-linear convection tends not towards $\nabla_{\text{ad}}$, as it would for efficient non-magnetized, non-rotating convection at sufficiently high Rayleigh number, but to $\nabla_{\text{ad}} + \delta/Q$. We have not attempted to take into consideration other effects arising from the presence of a magnetic field (e.g., magnetic pressure). At each time step, the model evolves self-consistently using the perturbed structure. The criterion expressed in equation~(\ref{eq:schwarz_crit_mag}) differs from that used in \citet{Mullan:2001ab} by a factor $Q$, which was adopted in \citetalias{MacDonald:2014aa} onwards to account for non-ideal thermodynamic behavior.
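To make the prescription concrete, the short Python sketch below evaluates $\delta$ from equation~(\ref{eq:delta}) and applies the modified criterion of equation~(\ref{eq:schwarz_crit_mag}). It is illustrative only; the near-surface numbers in the final lines are assumed values, not taken from our models:
\begin{verbatim}
import numpy as np

def delta_inhibition(B_v, P_gas, gamma=5.0/3.0):
    """Magnetic inhibition parameter delta (equation delta), cgs units:
    B_v in Gauss, gas pressure P_gas in dyn cm^-2."""
    return B_v**2 / (B_v**2 + 4.0 * np.pi * gamma * P_gas)

def convectively_unstable(grad_rad, grad_ad, B_v, P_gas, Q):
    """Modified Schwarzschild criterion (equation schwarz_crit_mag):
    convection requires grad_rad > grad_ad + delta / Q."""
    return grad_rad > grad_ad + delta_inhibition(B_v, P_gas) / Q

# Illustrative near-surface numbers (assumed, not from our models):
print(delta_inhibition(B_v=300.0, P_gas=1.0e5))
print(convectively_unstable(grad_rad=2.0, grad_ad=0.4,
                            B_v=300.0, P_gas=1.0e5, Q=1.0))
\end{verbatim}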
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_mag_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_mag_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at some combinations of $B_{\text{v-max}} = 10^3-10^5 \, \text{G}$ and (a) $\delta = 0.01 - 0.03$ (b) $\delta = 0.04 - 0.06$, including the unperturbed model. Increasing $\delta$ noticeably increases $\nabla_{\text{s}}$ where $B_{\text{v}} < B_{\text{v-max}}$, and increasing $B_{\text{v-max}}$ increases the depth at which $\delta$ noticeably increases $\nabla_{\text{s}}$.\label{fig:logsupergrad_v_logRho_mag}}
\end{figure*}
Higher values of $B_{\text{v}}$ inhibit convection, requiring a steeper temperature gradient to transport an equivalent heat flux; hence, increasing $\delta$ will increase the superadiabaticity of the stellar interior. The choice of radial profile for $\delta$ is, in these models, somewhat arbitrary. \citetalias{MacDonald:2014aa} choose $\delta = const$ from the surface downwards to some radius $r_{\text{max}}$, where $B_{\text{v}}$ reaches its critical strength $B_{\text{v-max}}$; thus, $\delta$ rapidly decreases with depth for $r<r_{\text{max}}$. Best-fit magneto-convection models constructed with this reformulation of MLT are more sensitive to $\delta$ than to the chosen $B_{\text{v-max}}$. The range of vertical surface magnetic field strengths $B_{\text{v-surf}}$ in the models is not dictated by the large range of uncertainty in $B_{\text{v-max}}$, i.e., deep interior field strengths, but rather by the range of $\delta$ considered.
\subsection{Radius inflation: MM14 models} \label{subsec:MM14_models}
We implement this magnetic inhibition of convection into MESA, producing ``magnetic" $0.3 \, \text{M}_\odot$, $\alpha_{\text{MLT}} = 1.7$ stellar models at both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$. For ease of comparison with prior work, we adopt the same strategy as \citetalias{MacDonald:2014aa} by assuming $\delta$ is constant down to some radius $r_{\text{max}}$ at which $B=B_{\text{v-max}}$; below this point, $\delta$ decreases rapidly in accord with the rising gas pressure. It must be noted at the outset that this assumption amounts to asserting that the magnetic pressure remains a constant fraction of the gas pressure at depths between the surface and $r_{\text{max}}$. In non-linear 3D simulations of turbulent stellar dynamos, the field strength is typically not directly related to the gas pressure at any given depth, but is set by the dynamics of the convection coupled to rotation and shear \citep[e.g.,][]{2006ApJ...638..336D,Browning:2008aa,2016GeoJI.204.1120Y}. But once this choice of $\delta$ profile is made, the model is specified fully by the choice of surface $\delta$ and by the value of $B_{\text{v-max}}$.
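A minimal sketch of this choice of profile, assuming only that the gas pressure is known as a function of depth, is given below in Python; the pressure array and the parameter values are illustrative assumptions, not quantities extracted from our models:
\begin{verbatim}
import numpy as np

def delta_profile(P_gas, delta_surf, B_vmax, gamma=5.0/3.0):
    """delta(r) for an MM14-style model: delta is held at delta_surf
    (i.e. B^2 is a fixed fraction of the gas pressure) until the
    implied field reaches B_vmax; deeper down, B = B_vmax and delta
    falls off with the rising gas pressure.  P_gas is an array of gas
    pressures [dyn cm^-2] ordered from surface to centre."""
    # Field implied by constant delta: B^2 = 4 pi gamma P_gas * delta/(1-delta)
    B_sq = 4.0 * np.pi * gamma * P_gas * delta_surf / (1.0 - delta_surf)
    B_sq = np.minimum(B_sq, B_vmax**2)           # cap the field at B_vmax
    return B_sq / (B_sq + 4.0 * np.pi * gamma * P_gas)

# Illustrative (assumed) pressure profile from photosphere to deep interior:
P = np.logspace(5, 16, 12)
print(delta_profile(P, delta_surf=0.03, B_vmax=1.0e4))
\end{verbatim}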
The total gas pressure increases rapidly with depth, so if no $B_{\text{v-max}}$ is specified, the magnetic field strengths implied by a $\delta=const$ profile would quickly become enormous. Some of the first studies along these lines, for example, allowed for fields of sufficient strength that the formation of a radiative core would result \citep[e.g.,][]{Mullan:2001ab}. Some later models adopted a maximum field strength of order $1 \, \text{MG}$ \citep{MacDonald:2012aa,Mullan:2015aa}. Recently, \citet{Browning:2016aa} suggested $B_{\text{v-max}} \sim 10^5 \, \text{G}$ to be an extreme upper limit for the maximum field strengths found in these fully-convective low-mass stars. They argue that at a given magnetic field strength, large-scale field configurations are subject to the constraints of magnetic buoyancy instabilities, whilst the Ohmic dissipation associated with small-scale field configurations would in some cases exceed the stellar luminosity. Combining these constraints produced an upper limit on the maximum field strength of $B_{\text{v-max}} \leq 800 \, \text{kG}$, for models of particularly simple magnetic field spatial structure. Additional, stronger constraints come again from 3D simulations of dynamo action in these objects. For example, \citet{2015ApJ...813L..31Y} found $B_{\text{v-max}} \approx 14 \, \text{kG}$ for a fully-convective M dwarf with a rotation period of 20 days, and likewise the simulations of \citet{Browning:2008aa} found fields of order the equipartition strength (with the turbulent convective energy density). Broadly, we think models in which the field does not greatly exceed values of order $10^4 \, \text{G}$ are most realistic (as also studied recently, for example, by \citealt{MacDonald:2017ab}). Note that as $B_{\text{v-max}}$ approaches the value of the surface field, the profile assumed for $\delta$ becomes increasingly irrelevant; in that limit, the field strength throughout the interior is just the constant $B_{\text{v-max}} \approx B_{\text{surf}}$.
Motivated by these considerations, we test $B_{\text{v-max}} = 10^3 - 10^5 \, \text{G}$ ($\Delta 1 \, \log_{10}{(\text{G})}$) at both ages. Note that we include $10^5 \, \text{G}$ for comparison with prior work and to demonstrate the utility of our mechanism even in the extreme field cases, even though we think, as noted above, that $10^4 \, \text{G}$ is a reasonable upper limit. We use $\delta = 0.01-0.03$ ($\Delta 0.005$) for our $10 \, \text{Myr}$ models, giving $B_{\text{v-surf}} \lesssim 0.3 \, \text{kG}$. We use an extended range of $\delta = 0.01 - 0.06$ ($\Delta 0.005$) for our $1 \, \text{Gyr}$ models, to counteract the suppression of radius inflation in main-sequence models, producing $B_{\text{v-surf}} \lesssim 0.9 \, \text{kG}$.
In Figure~\ref{fig:logsupergrad_v_logRho_mag}, we plot $\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$ for $0.3 \, \text{M}_\odot$, $\alpha_{\text{MLT}} = 1.7$ stellar models, for some combinations of $B_{\text{v-max}} = 10^3-10^5 \, \text{G}$, with $\delta = 0.01 - 0.03$ for $10 \, \text{Myr}$ models and $\delta = 0.04 - 0.06$ for $1 \, \text{Gyr}$ models, which we compare with the unperturbed model. In accord with equation~(\ref{eq:schwarz_crit_mag}), $\nabla_{\text{s}} \simeq \nabla_{\text{s}_0} + \delta / Q_0$ at all depths. Changes in $Q$ are negligible between models, hence we use the unperturbed value. In the bulk of the convection zone, where $B_{\text{v}} = B_{\text{v-max}}$, $\nabla_{\text{s}} \sim B_{\text{v-max}}^2 / (4 \pi Q_0 \gamma P_{\text{g}}) \gg \nabla_{\text{s}_0}$, thus a factor of ten increase in $B_{\text{v-max}}$ results in a factor of $\sim 100$ increase in superadiabaticity. As $\delta$ increases, $\nabla_{\text{s}}$ increases in the surface layers. The point at which $\nabla_{\text{s}}$ transitions---from a nearly-constant value near the surface to a steeply declining profile in the interior---is mediated by the point at which the vertical magnetic field (whose surface value is set by $\delta$) reaches $B_{\text{v-max}}$, because interior to that point the gas pressure begins to exceed the magnetic pressure by an increasingly large amount.
\begin{figure*}
\gridline{\fig{entropy_v_logRho_mag_10_0_Myr.pdf}{0.515\textwidth}{(a)}
\fig{entropy_v_logRho_mag_1000_0_Myr.pdf}{0.5\textwidth}{(b)}
}
\caption{$s$ as a function of $\log_{10}{(\rho)}$, for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) 1 Gyr, $\alpha_{\text{MLT}} = 1.7$ stellar models at some combinations of $B_{\text{v-max}} = 10^3-10^5 \, \text{G}$ and (a) $\delta = 0.01 - 0.03$ (b) $\delta = 0.04 - 0.06$, including the unperturbed model.\label{fig:entropy_v_logRho_mag}}
\end{figure*}
\begin{figure*}
\gridline{\fig{R2_v_R2_s_ad_10_0_Myr.pdf}{0.515\textwidth}{(a)}
\fig{R2_v_R2_s_ad_1000_0_Myr.pdf}{0.5\textwidth}{(b)}
}
\caption{Radius inflation determined from $\Delta s_{\text{ad}}$ via equation~(\ref{eq:radii_entropy_relation}) as a function of the outputted radius inflation from the \citetalias{MacDonald:2014aa} models for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at all combinations of ${B_{\text{v-max}}} = 10^3-10^5 \, {\text{G}}$ ($\Delta 1 \, \log_{10}{(\text{G})}$) and (a) $\delta = 0.01 - 0.03$ ($\Delta 0.005$) (b) $\delta = 0.01 - 0.06$ ($\Delta 0.005$). $y=x$ (orange) is plotted for ease of comparison.\label{fig:R2_v_R2_s_ad}}
\end{figure*}
In Figure~\ref{fig:entropy_v_logRho_mag}, we plot $s$ as a function of $\log_{10}{(\rho)}$ for the same stellar models. At fixed $\delta$, the photospheric entropy $s_{\text{ph}}$ decreases monotonically with increasing $B_{\text{v-max}}$; likewise at fixed $B_{\text{v-max}}$, increasing $\delta$ decreases $s_{\text{ph}}$. In turn, $s_{\text{ad}}$ increases strongly with $\delta$ and, to a lesser extent, with $B_{\text{v-max}}$. Pre-main-sequence stars with lower $s_{\text{ph}}$ have higher $s_{\text{ad}}$ for the reasons discussed in \S~\ref{sec:entropy}; hence, stars with higher $B_{\text{v-max}}$ and $\delta$ tend to have a higher $s_{\text{ad}}$. On the main-sequence, variations in $s_{\text{ad}}$ are smaller, due to the self-regulation of the star through nuclear fusion. However, the differences in $s_{\text{ph}}$ induced by changes in $\delta$ or $B_{\text{v-max}}$ are larger than in our fixed-$\alpha_{\text{MLT}}$ models. A larger entropy contrast, as a result of higher superadiabaticity in the surface layers, produces small but noticeable changes in $s_{\text{ad}}$. As in \S~\ref{sec:entropy} and \S~\ref{sec:rot_inhibition_conv}, stellar structure is largely insensitive to the increasing $\nabla_{\text{s}}$ in the deep interior; it responds more readily to an increased $\nabla_{\text{s}}$ in the surface layers.
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_rot_mag_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_rot_mag_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, for a $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $\Omega = 20 \, \Omega_\odot$, $B_{\text{v-max}} = 10^4 \, \text{G}$, and (a) $\delta = 0.03$ (b) $\delta = 0.06$, including the rotating-only, magnetic-only, and unperturbed models.\label{fig:logsupergrad_v_logRho_rot_mag}}
\end{figure*}
In Figure~\ref{fig:R2_v_R2_s_ad}, we examine radius inflation calculated via $\Delta s_{\text{ad}}$ (equation~(\ref{eq:radii_entropy_relation})) as a function of the radius inflation output by these \citetalias{MacDonald:2014aa} models, finding good agreement. For $\delta = 0.01 - 0.03$ models, we find $\Delta R / R_0 \lesssim 13 \%$ ($R \lesssim 0.771 \, \text{R}_\odot$) for our range of perturbed models at $10 \, \text{Myr}$, and $\Delta R / R_0 \lesssim 2 \%$ ($R \lesssim 0.292 \, \text{R}_\odot$) at $1 \, \text{Gyr}$. For $\delta = 0.04 - 0.06$ models at $1 \, \text{Gyr}$, we find $\Delta R / R_0 \lesssim 6 \%$ ($R \lesssim 0.302 \, \text{R}_\odot$). Overall, we find greater changes in $s_{\text{ph}}$ in these models than in the fixed-$\alpha_{\text{MLT}}$ main-sequence models in \S~\ref{subsec:radius_s_ad}, which is enough to slightly perturb $s_{\text{ad}}$ from the value predominantly determined via nuclear fusion, producing small yet noticeable radius inflation. There is a slight divergence for our most-inhibited fully-convective models, due to the increasing effective depth of the magnetic inhibition of convection. For those models, the asymptotic approach of $s$ to $s_{\text{ad}}$ occurs at ever-increasing depth, thus our approximation $S_{\text{tot}} \simeq s_{\text{ad}} M$ becomes increasingly less accurate. Therefore, with increasing levels of radius inflation, the accuracy of using $s_{\text{ad}}$ alone to determine the stellar radius decreases.
\section{Combined inhibition of convection by rotation and magnetism}\label{sec:rot_and_mag}
Both the \citetalias{Stevenson:1979aa} rotational and \citetalias{MacDonald:2014aa} magnetic reformulations of MLT modify the superadiabaticity of a model. In the ``magnetic" case, the superadiabaticity in the surface layers is noticeably increased between $0.99$ and $0.995 \, R$, and slightly increased from $0.995 \, R$ up to the photosphere (see Figure~\ref{fig:logsupergrad_v_logRho_mag}). In the ``rotating" case, there is a small difference in $\nabla_{\text{s}}$ in the $0.99 - 0.995 \, R$ region, but a negligible difference from there up to the photosphere (see Figure~\ref{fig:logsupergrad_v_logRho_rot}). Here, we briefly examine whether the \emph{combination} of rotation and magnetism using these prescriptions could increase the radius of a model even further. To do so, we first modify the criterion for convection using the \citetalias{MacDonald:2014aa} formulation, as in \S~\ref{sec:mag_inhibition_conv}; the resulting model is then used as the ``unperturbed" model for an application of the rotational formulation described in \S~\ref{sec:rot_inhibition_conv}. Hence, the enhanced superadiabaticity near the surface in the magnetic models may be further increased by the rotation, with possible impacts on the structure. Of course, this is a very crude approximation; as noted in \S~\ref{sec:intro}, the combined effects of rotation and magnetism may be considerably more complex than either rotation or magnetism acting alone, and these effects may not be additive (and indeed, in the case of the linear onset of convection, are not). Nonetheless we adopt it here as a first attempt at the problem.
In Figure~\ref{fig:logsupergrad_v_logRho_rot_mag}, we plot $\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, comparing a ``magnetic rotating" $0.3 \, \text{M}_\odot$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $\Omega = 20 \, \Omega_\odot$, $B_{\text{v-max}} = 10^4 \, \text{G}$, with $\delta = 0.03$ for $10 \, \text{Myr}$ and $\delta = 0.06$ for $1 \, \text{Gyr}$, against the rotating-only case, the magnetic-only case, and the unperturbed model. We choose the most-perturbed model at each age to have $B_{\text{v-max}} = 10^4 \, \text{G}$ in order to investigate the highest possible radius inflation attained by the addition of ``rotational" effects at what we think is a realistic maximum field strength. At both ages, the superadiabaticity of our ``magnetic rotating" model is higher than in the magnetic-only case by orders of magnitude within the deep convection zone, where convective velocities are low (i.e., $\text{Ro} \ll 1$). Closer to the surface, this difference diminishes (because $\text{Ro}$ increases there).
We plot $s$ as a function of $\log_{10}{(\rho)}$ in Figure~\ref{fig:entropy_v_logRho_rot_mag} for our $10 \, \text{Myr}$ model. These changes in superadiabaticity are enough to produce a small change in $s_{\text{ad}}$ for our pre-main-sequence model. As a result of this, our $10 \, \text{Myr}$ ``magnetic rotating" model is inflated by a further $1 \%$ compared to the magnetic-only case, giving $\Delta R / R_0 \simeq 10.5 \%$ ($R \simeq 0.755 \, \text{R}_\odot$). However, for our $1 \, \text{Gyr}$ model, there is negligible inflation, as the superadiabaticity is much lower throughout the surface layers compared to the pre-main-sequence model, giving negligible changes in $s_{\text{ad}}$. These results suggest that the combination of rotation and magnetism may indeed further inflate the stellar radius, but the additional effect arising from rotation is only noticeable in the youngest models.
\begin{figure}
\includegraphics[width=0.5\textwidth]{entropy_v_logRho_rot_mag_10_0_Myr.pdf}
\caption{$s$ as a function of $\log_{10}{(\rho)}$, for a $0.3 \, \text{M}_\odot$, $10 \, \text{Myr}$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $\Omega = 20 \, \Omega_\odot$, $B_{\text{v-max}} = 10^4 \, \text{G}$, and $\delta = 0.03$, including the rotating-only, magnetic-only, and unperturbed models. The rotating-only case is near-identical to the unperturbed model. \label{fig:entropy_v_logRho_rot_mag}}
\end{figure}
\section{Depth-dependent $\alpha_{\text{MLT}}$ as MLT proxies for rotation and magnetic fields}\label{sec:depth_dep_alpha}
The structure of a 1D stellar model constructed with a modified version of MLT, like the rotationally or magnetically-constrained versions described in \S~\ref{sec:rot_inhibition_conv} and \S~\ref{sec:mag_inhibition_conv}, cannot generally be duplicated by a model with a standard depth-independent $\alpha_{\text{MLT}}$. The reason for this is straightforward: in the standard 1D models, $\nabla_{\text{s}}$ throughout the stellar interior increases with decreasing $\alpha_{\text{MLT}}$, whereas for the \citetalias{Stevenson:1979aa} and \citetalias{MacDonald:2014aa} models the inhibition of convection depends on parameters that vary with depth---i.e., $\text{Ro}$ in the ``rotating" case and $\delta$ in the ``magnetic" case. It is not possible to mimic these effects with a standard depth-independent $\alpha_{\text{MLT}}$, no matter its value. They can, however, be captured by models that include a depth-\emph{dependent} $\alpha_{\text{MLT}}$ ($\alpha_{\text{MLT}}(r)$, hereafter), as described in this section.
Here, we provide explicit formulae linking an $\alpha_{\text{MLT}}(r)$ profile to the rotationally- and magnetically-inhibited convection formulae of \citetalias{Stevenson:1979aa} and \citetalias{MacDonald:2014aa} respectively. Our motivation for constructing such profiles is simply that, in a given 1D stellar evolution code, it may be much more straightforward to input (or implement) an $\alpha_{\text{MLT}}(r)$ profile than to modify the whole underlying MLT formulation. Knowledge of the precise correspondence between $\alpha_{\text{MLT}}(r)$ and a particular depth-dependent theory of convective inhibition---arising from rotation, magnetism, or other effects---gives us the ability to model the non-standard 1D stellar structures arising from these effects without undue difficulty.
Models constructed with modified MLT formulations of the type and magnitude considered here can be regarded as perturbations at each depth to a fiducial unperturbed model. We write the perturbed model's $\nabla_{\text{s}}$ as the unperturbed model's plus a given depth-dependent perturbation $\beta$:
\begin{equation} \label{eq:nabla_perturbation}
{\nabla_{\text{s}} = \nabla_{\text{s}_0} + \beta}.
\end{equation}
Thus any perturbation made to the superadiabaticity modifies the radial entropy gradient $ds/dr$, implying that the specific entropy will asymptotically converge to a different adiabat.
In \S~\ref{subsec:role_entropy}, we found that a perturbed model's $\nabla_{\text{s}}$ could be reproduced using the unperturbed model's and each model's $\alpha_{\text{MLT}}$, i.e., equation~(\ref{eq:nabla_alpha_efficient_full}). By substituting equation~(\ref{eq:nabla_alpha_efficient_full}) into equation~(\ref{eq:nabla_perturbation}), we find an approximate expression for $\alpha_{\text{MLT}}(r)$ as a function of the unperturbed model's depth-\emph{independent} $\alpha_{\text{MLT}}$ and $\nabla_{\text{s}}$, and the perturbation $\beta$:
\begin{equation} \label{eq:depth_dep_alpha_general}
\alpha_{\text{MLT}}(r) \simeq \frac{\alpha_{\text{MLT}_0}}{\left(1 + \frac{\beta}{\nabla_{\text{s}_0}}\right)^{3/4}}.
\end{equation}
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_depth_dep_rot_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_depth_dep_rot_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, comparing \citetalias{Stevenson:1979aa} ``rotating" models and our $\alpha_{\text{MLT}}(r)$ models (plus markers), for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at $\Omega = 5-20 \, \Omega_\odot$ ($\Delta 5 \, \Omega_\odot$).\label{fig:logsupergrad_v_logRho_depth_dep_rot}}
\end{figure*}
We find that an $\alpha_{\text{MLT}}(r)$ profile constructed using this expression allows us to reproduce virtually all of the radial variation of $\nabla_{\text{s}}$ in both our $10 \, \text{Myr}$ and $1 \, \text{Gyr}$ non-standard ``rotating" and ``magnetic" stellar structure models. First, consider the case of the \citetalias{Stevenson:1979aa} ``rotating" MLT formulation. We express equation~({\ref{eq:general_rotation}}) in terms of $\alpha_{\text{MLT}}(r)$ and the unperturbed depth-independent $\alpha_{\text{MLT}}$ using equation~(\ref{eq:nabla_alpha_efficient}):
\begin{equation} \label{eq:general_rotation_alpha}
\left(\frac{\alpha_{\text{MLT}}(r)}{\alpha_{\text{MLT}_0}}\right)^{-10/3} - \left(\frac{\alpha_{\text{MLT}}(r)}{\alpha_{\text{MLT}_0}}\right)^{-4/3} \simeq \frac{4}{41} \tau_{\text{c}_0}^2 \Omega^2.
\end{equation}
Therefore, in the case of the \citetalias{Stevenson:1979aa} models, the depth-dependent perturbation can be expressed as
\begin{equation} \label{eq:beta_rotation}
\beta \simeq \nabla_{\text{s}_0} \left[\left(\frac{\alpha_{\text{MLT}}(r)}{\alpha_{\text{MLT}_0}}\right)^{-10/3} - \frac{4}{41} \tau_{\text{c}_0}^2 \Omega^2 - 1\right],
\end{equation}
giving
\begin{equation} \label{eq:depth_dep_alpha_rot}
\alpha_{\text{MLT}}(r) \simeq \frac{\alpha_{\text{MLT}_0}}{\left[\left(\frac{\alpha_{\text{MLT}}(r)}{\alpha_{\text{MLT}_0}}\right)^{-10/3} - \frac{4}{41} \tau_{\text{c}_0}^2 \Omega^2 \right]^{3/4}},
\end{equation}
which must be solved iteratively.
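One convenient way to perform this iteration is sketched below in Python. The rearrangement of equation~(\ref{eq:depth_dep_alpha_rot}) into the equivalent fixed-point form $y = (y^{-4/3} + c)^{-3/10}$, with $y = \alpha_{\text{MLT}}(r)/\alpha_{\text{MLT}_0}$ and $c = \frac{4}{41}\tau_{\text{c}_0}^2\Omega^2$, is our own choice, made because it converges stably from $y = 1$; the turnover times and rotation rate in the example are assumed, illustrative values:
\begin{verbatim}
import numpy as np

def alpha_mlt_rot(alpha_0, tau_c0, omega, n_iter=100):
    """Depth-dependent mixing length mimicking the S79 'rotating' MLT
    (equation depth_dep_alpha_rot).  alpha_0 is the unperturbed
    (depth-independent) mixing length parameter, tau_c0 the unperturbed
    convective turnover time [s] and omega the rotation rate [rad/s];
    tau_c0 and omega may be numpy arrays over depth.  The implicit
    relation is iterated as y = (y**(-4/3) + c)**(-3/10), with
    y = alpha(r)/alpha_0, starting from y = 1."""
    c = 4.0 * tau_c0**2 * omega**2 / 41.0
    y = np.ones_like(np.asarray(c, dtype=float))
    for _ in range(n_iter):
        y = (y**(-4.0 / 3.0) + c)**(-0.3)
    return alpha_0 * y

# Illustrative (assumed) turnover times from hours to months, and an
# assumed rotation rate of ~2 solar:
tau = np.logspace(4, 7, 5)      # [s]
print(alpha_mlt_rot(1.7, tau, omega=5.0e-6))
\end{verbatim}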
We can mimic the ``rotating" effects from the \citetalias{Stevenson:1979aa} MLT formulation in our 1D stellar structure models, solely using this $\alpha_{\text{MLT}}(r)$ profile. We modify MESA to input $\alpha_{\text{MLT}}(r)$ rather than the conventional fixed value and produce near-identical models to those produced using the \citetalias{Stevenson:1979aa} reformulation where we modified $\nabla_{\text{s}}$. To demonstrate this, in Figure~\ref{fig:logsupergrad_v_logRho_depth_dep_rot}, we plot $\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$ for both our $\alpha_{\text{MLT}}(r)$ and \citetalias{Stevenson:1979aa} stellar models, at $10 \, \text{Myr}$ and $1 \, \text{Gyr}$; models constructed using the two techniques are indistinguishable here.
We can apply the same technique to mimic the effects of ``magnetic" inhibition of convection via $\alpha_{\text{MLT}}(r)$. In the case of the \citetalias{MacDonald:2014aa} MLT formulation {in the high efficiency convective regime,} ${\beta \simeq \delta / Q_0}$, thus
\begin{equation} \label{eq:depth_dep_alpha_mag}
\alpha_{\text{MLT}}(r) \simeq \frac{\alpha_{\text{MLT}_0}}{\left(1 + \frac{\delta}{Q_0 \nabla_{\text{s}_0}}\right)^{3/4}}.
\end{equation}
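This magnetic case requires no iteration; a minimal Python sketch of equation~(\ref{eq:depth_dep_alpha_mag}), in which the near-surface values in the example call are assumed purely for illustration, is:
\begin{verbatim}
import numpy as np

def alpha_mlt_mag(alpha_0, delta, Q_0, nabla_s0):
    """Depth-dependent mixing length mimicking the MM14 'magnetic' MLT
    (equation depth_dep_alpha_mag).  delta, Q_0 and nabla_s0 are the
    magnetic inhibition parameter, expansion coefficient and
    superadiabaticity of the unperturbed model at each depth;
    they may be scalars or numpy arrays."""
    return alpha_0 / (1.0 + delta / (Q_0 * nabla_s0))**0.75

# Illustrative (assumed) near-surface values:
print(alpha_mlt_mag(1.7, delta=0.03, Q_0=1.0, nabla_s0=0.1))
\end{verbatim}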
We again input $\alpha_{\text{MLT}}(r)$ into MESA and reproduce near-identical models to those produced using the \citetalias{MacDonald:2014aa} reformulation. In Figures~\ref{fig:logsupergrad_v_logRho_depth_dep_mag} and~\ref{fig:entropy_v_logRho_depth_dep_mag}, we plot examples of $\log_{10}{(\nabla_{\text{s}})}$ and $s$ respectively as a function of $\log_{10}{(\rho)}$, produced by our $\alpha_{\text{MLT}}(r)$ models and our \citetalias{MacDonald:2014aa} models; excellent correspondence between the two model structures is evident.
In Figure~\ref{fig:R2_v_R2_depth_dep}, we examine radius inflation from our $\alpha_{\text{MLT}}(r)$ models as a function of the radius inflation from our \citetalias{MacDonald:2014aa} ``magnetic" models, at both $10 \, \text{Myr}$ and $1 \, \text{Gyr}$. They are in good agreement, with small divergences for our most-inhibited fully-convective models, as in \S~\ref{subsec:MM14_models}. As with the models discussed in \S~\ref{sec:rot_inhibition_conv} and \S~\ref{sec:mag_inhibition_conv}, this agreement is not just fortuitous: it stems from the fact that changes in the radii are linked to changes in $s_{\text{ad}}$, which are well-described by our $\alpha_{\text{MLT}}(r)$ profiles.
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_depth_dep_mag_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_depth_dep_mag_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, comparing \citetalias{MacDonald:2014aa} ``magnetic" models and our $\alpha_{\text{MLT}}(r)$ models (plus markers), for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at some combinations of $B_{\text{v-max}} = 10^3-10^5 \, \text{G}$ and (a) $\delta = 0.01 - 0.03$ (b) $\delta = 0.04 - 0.06$.\label{fig:logsupergrad_v_logRho_depth_dep_mag}}
\end{figure*}
\begin{figure*}
\gridline{\fig{entropy_v_logRho_depth_dep_mag_10_0_Myr.pdf}{0.51\textwidth}{(a)}
\fig{entropy_v_logRho_depth_dep_mag_1000_0_Myr.pdf}{0.5\textwidth}{(b)}
}
\caption{$s$ as a function of $\log_{10}{(\rho)}$, comparing \citetalias{MacDonald:2014aa} ``magnetic" models and our $\alpha_{\text{MLT}}(r)$ models (plus markers), for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at some combinations of $B_{\text{v-max}} = 10^3-10^5 \, \text{G}$ and (a) $\delta = 0.01 - 0.03$ (b) $\delta = 0.04 - 0.06$. \label{fig:entropy_v_logRho_depth_dep_mag}}
\end{figure*}
\begin{figure*}
\gridline{\fig{R2_v_R2_depth_dep_10_0_Myr.pdf}{0.51\textwidth}{(a)}
\fig{R2_v_R2_depth_dep_1000_0_Myr.pdf}{0.5\textwidth}{(b)}
}
\caption{Radius inflation from our $\alpha_{\text{MLT}}(r)$ models as a function of radius inflation from our \citetalias{MacDonald:2014aa} models, for $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar models at all combinations of ${B_{\text{v-max}}} = 10^3-10^5 \, {\text{G}}$ ($\Delta 1 \, \log_{10}{(\text{G})}$) and (a) $\delta = 0.01 - 0.03$ ($\Delta 0.005$) (b) $\delta = 0.01 - 0.06$ ($\Delta 0.005$). $y=x$ (orange) is plotted for ease of comparison.\label{fig:R2_v_R2_depth_dep}}
\end{figure*}
\begin{figure*}
\gridline{\fig{logsupergrad_v_logRho_depth_dep_rot_mag_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{logsupergrad_v_logRho_depth_dep_rot_mag_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{$\log_{10}{(\nabla_{\text{s}})}$ as a function of $\log_{10}{(\rho)}$, comparing our $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $\Omega = 20 \, \Omega_\odot$, $B_{\text{v-max}} = 10^4 \, \text{G}$, and (a) $\delta = 0.03$ (b) $\delta = 0.06$, to our $\alpha_{\text{MLT}}(r)$ model (plus markers). We include the rotating-only and magnetic-only cases for further comparison.\label{fig:logsupergrad_v_logRho_depth_dep_rot_mag}}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth]{entropy_v_logRho_depth_dep_rot_mag_10_0_Myr.pdf}
\caption{$s$ as a function of $\log_{10}{(\rho)}$, comparing our $0.3 \, \text{M}_\odot$, $10 \, \text{Myr}$, $\alpha_{\text{MLT}} = 1.7$ stellar model at $\Omega = 20 \, \Omega_\odot$, $B_{\text{v-max}} = 10^4 \, \text{G}$, and $\delta = 0.03$, to our $\alpha_{\text{MLT}}(r)$ model (plus markers). We include the rotating-only and magnetic-only cases for further comparison.\label{fig:entropy_v_logRho_depth_dep_rot_mag}}
\end{figure}
We also create an $\alpha_{\text{MLT}}(r)$ expression for the combination of the magnetic and rotational reformulations of MLT (see \S~\ref{sec:rot_and_mag}), by treating $\alpha_{\text{MLT}_0}$ in equation~(\ref{eq:depth_dep_alpha_rot}) as the $\alpha_{\text{MLT}}(r)$ profile for the magnetic prescription in equation~(\ref{eq:depth_dep_alpha_mag}), which we will denote as $\alpha_{\text{MLT}}(r)_B$, producing
\begin{equation} \label{eq:depth_dep_alpha_rot_mag}
\alpha_{\text{MLT}}(r) \simeq \frac{\alpha_{\text{MLT}}(r)_B}{\left[\left(\frac{\alpha_{\text{MLT}}(r)}{\alpha_{\text{MLT}}(r)_B}\right)^{-10/3} - \frac{4}{41} \tau_{\text{c}_0}^2 \Omega^2 \right]^{3/4}},
\end{equation}
which must also be solved iteratively. In Figures~\ref{fig:logsupergrad_v_logRho_depth_dep_rot_mag} and~\ref{fig:entropy_v_logRho_depth_dep_rot_mag}, we plot $\log_{10}{(\nabla_{\text{s}})}$ for both ages and $s$ for $10 \, \text{Myr}$ respectively as a function of $\log_{10}{(\rho)}$, {produced by a particular ``rotating magnetic" model from \S~\ref{sec:rot_and_mag} and our $\alpha_{\text{MLT}}(r)$ model, including profiles from the equivalent rotating-only and magnetic-only cases}; again, we see excellent correspondence between the two model structures.
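As an illustration only, the minimal sketch below (in Python; the damping and positivity safeguard are our additions, not part of the original prescriptions) solves equation~(\ref{eq:depth_dep_alpha_rot_mag}) by fixed-point iteration, starting from the magnetic-only profile $\alpha_{\text{MLT}}(r)_B$:
\begin{verbatim}
# Minimal sketch: solve the implicit relation of equation
# (eq:depth_dep_alpha_rot_mag) for alpha_MLT(r) by damped fixed-point
# iteration.  Inputs (alpha_B, tau_c0, Omega) are placeholders.
import numpy as np

def alpha_mlt_rot_mag(alpha_B, tau_c0, Omega, n_iter=200):
    alpha = alpha_B.copy()                     # start from magnetic-only
    rot = (4.0 / 41.0) * tau_c0**2 * Omega**2  # rotational term
    for _ in range(n_iter):
        bracket = (alpha / alpha_B)**(-10.0 / 3.0) - rot
        new = alpha_B / np.maximum(bracket, 1e-12)**0.75
        alpha = 0.5 * alpha + 0.5 * new        # damped update
    return alpha
\end{verbatim}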
{In Figure~\ref{fig:alpha_mlt_v_logRho}, we plot $\alpha_{\text{MLT}}(r)$ as a function of $\log_{10}{(\rho)}$ at both ages for the same model, to show the differences in the depth-dependence of $\alpha_{\text{MLT}}(r)$ for the rotating-only and magnetic-only cases. For the rotating-only case, $\alpha_{\text{MLT}}(r)$ is constant and close to the unperturbed model value ($\alpha_{\text{MLT}} = 1.7$) across a majority of the surface layers (implying negligible changes in stellar structure), and drops rapidly with depth in the deep interior where $\text{Ro} \ll 1$. For the magnetic-only case, $\alpha_{\text{MLT}}(r)$ starts at a lower value at the photosphere, and drops sharply with depth in the surface layers (producing noticeable changes in stellar structure), rising again in the deep interior where $\delta$ drops rapidly.}
\begin{figure*}
\gridline{\fig{alpha_mlt_v_logRho_10_0_Myr.pdf}{0.5\textwidth}{(a)}
\fig{alpha_mlt_v_logRho_1000_0_Myr.pdf}{0.51\textwidth}{(b)}
}
\caption{{$\alpha_{\text{MLT}}(r)$ as a function of $\log_{10}{(\rho)}$, for a $0.3 \, \text{M}_\odot$, (a) $10 \, \text{Myr}$ (b) $1 \, \text{Gyr}$, $\alpha_{\text{MLT}} = 1.7$ stellar model in the rotating-only case ($\Omega = 20 \, \Omega_\odot$), and the magnetic-only case ($B_{\text{v-max}} = 10^4 \, \text{G}$, and (a) $\delta = 0.03$ (b) $\delta = 0.06$).}\label{fig:alpha_mlt_v_logRho}}
\end{figure*}
In \S~\ref{subsec:radius_s_ad}, we showed that it is possible to determine an explicit relation between $s_{\text{ad}}$ and the depth-\emph{independent} $\alpha_{\text{MLT}}$ in standard 1D models. If this were possible in the depth-\emph{dependent} case as well, it would allow us to provide analytical estimates of how $s_{\text{ad}}$, and hence (via the formulae of \S~\ref{subsec:radius_s_ad}) the overall stellar radius, responds to changes in the depth-dependent convective inhibition parameters in any given theory (e.g., $\delta$ in the \citetalias{MacDonald:2014aa} formulation). Unfortunately, although we find that $s_{\text{ad}} \propto \Delta s$ in all of our $\alpha_{\text{MLT}}(r)$ models, it is no longer feasible to provide a simple analytical formula encapsulating the link between $s_{\text{ad}}$ and $\alpha_{\text{MLT}}(r)$. Essentially, this arises because we can no longer exclude $\alpha_{\text{MLT}}(r)$ from the integral producing $\Delta s$ in equation~(\ref{eq:s_jump_alpha}): in the fixed-$\alpha_{\text{MLT}}$ case for models with similar convective flux profiles, the integral associated with the high efficiency regime (excluding $\alpha_{\text{MLT}}$ due to its depth-independence) is near-homologous between models, allowing a direct proportionality between $\Delta s$ and $\alpha_{\text{MLT}}$ (equation~(\ref{eq:delta_s_prop_alpha_4_3})). However, in the $\alpha_{\text{MLT}}(r)$ case, this is not possible as the integral is now weighted by $\alpha_{\text{MLT}}(r)$ throughout the radial distribution; hence, in order to determine a change in $s_{\text{ad}}$ between two models of differing $\alpha_{\text{MLT}}(r)$, one must also have knowledge of all parameters in equation~(\ref{eq:s_jump_alpha}) for the perturbed model, rather than just $\alpha_{\text{MLT}}(r)$ and details of the unperturbed model.
\section{Discussion and conclusion} \label{sec:discussion_conclusion}
Rotation and magnetism both affect convection: the velocities, temperature gradients, and spatial structure that prevail in a magnetized, rotating flow are not generally the same as those that occur when rotation and magnetic fields are absent. In principle, the resulting changes in convective heat transport could affect the structure of stars or planets that host convection. Motivated by the observation that some low-mass stars appear to have larger radii than predicted by standard 1D stellar models, which parameterize the convective transport using MLT, several authors have suggested that rotation and/or magnetism may indeed be influencing the overall stellar structure. In this paper, we have examined this issue using 1D stellar models that attempt to incorporate both rotational and magnetic effects in a highly simplified way, and compared our results to models constructed using a standard version of MLT (modified here to allow for a mixing length parameter $\alpha_{\text{MLT}}$ that in some cases varies with depth). Below, we recapitulate our main findings and note some of their limitations.
The structure of a star may be regarded as a function of its entropy, so assessing the structural impacts of rotation or magnetism amounts to determining the role these play in modifying the star's entropy. In \S~\ref{sec:entropy}, we reviewed the links between entropy, convective efficiency, and stellar radii in ``standard" 1D models, in which the mixing length parameter $\alpha_{\text{MLT}}$ is assigned a depth-independent value that must be calibrated by comparison with observations. In these models, reducing the convective efficiency via a decrease in $\alpha_{\text{MLT}}$ increases the temperature gradient required to carry an equivalent heat flux within the stellar interior. This translates into a larger entropy contrast between the photosphere and the deep interior for both pre-main-sequence and main-sequence models, which in turn influences the specific entropy attained in the deep interior (i.e., $s_{\text{ad}}$) in an age-dependent fashion. We explicitly determine the radius inflation of a given model from the difference in $s_{\text{ad}}$, with $\Delta \ln{R} \propto \Delta s_{\text{ad}}$. We also show how changes in the depth-independent $\alpha_{\text{MLT}}$ are directly related to changes in the stellar radius, in a manner similar to that described by \citet{Christensen-Dalsgaard:1997aa} for solar-like stars.
One of our principal aims was to determine whether rotation alone could plausibly modify the convective transport enough to change a fully-convective star's radius by a noticeable amount. In \S~\ref{sec:rot_inhibition_conv}, we considered a rotationally-constrained version of MLT originally proposed by \citetalias{Stevenson:1979aa}, and given renewed vibrancy by the recent analyses and simulations of \citet{2012PhRvL.109y4503J} and \citet{Barker:2014aa}. By implementing this theory directly into our 1D MESA models, we find that rotation has a negligible impact on the star's overall radius. This is because the radius is determined primarily by the interior adiabat, which in turn is established largely by layers near the stellar surface. These layers are almost completely uninfluenced by rotation at any plausible rotational velocity---that is, $\text{Ro} \gg 1$ there because the convective velocity increases rapidly near the low-density photosphere---so rotation has little effect on $s_{\text{ad}}$ and hence on the stellar radius, even though flows in the deep interior of the star \emph{are} strongly affected by rotation. It is worth noting that if stars were instead well-characterized by a single depth-independent Rossby number, rotation would (in at least some stars) be important everywhere, and would have a much more significant impact on the radius; it is primarily the depth variation of convective velocities that makes this impossible.
In \S~\ref{sec:mag_inhibition_conv}, we argued that a particular prescription for incorporating the effects of magnetism into 1D stellar models, due to \citetalias{MacDonald:2014aa}, could be usefully analyzed using the same techniques developed in \S~\ref{sec:entropy}. In particular, we note that the effect of varying magnetic fields in this model is to vary the entropy content of the deep interior; once this is known, the stellar radius is also determined, via the same formula developed in \S~\ref{sec:entropy} (namely, equation~\ref{eq:radii_entropy_relation}) for standard MLT models. In accord with \citet{MacDonald:2017ab}, we find that \emph{if} magnetic fields indeed influence convective transport in the manner assumed here, fields of a plausible strength ($10^4 \, \text{G}$ or less) could noticeably ``inflate" the stellar radius. This inflation is larger (by about a factor of two) in models at $10 \, \text{Myr}$ than in those at an age of $1 \, \text{Gyr}$.
In \S~\ref{sec:rot_and_mag}, we showed that combining the rotational and magnetic reformulations of MLT, covered in \S~\ref{sec:rot_inhibition_conv} and \S~\ref{sec:mag_inhibition_conv} respectively, can indeed ``inflate" stellar radii by a further small amount. This demonstrates that the \citetalias{Stevenson:1979aa} rotation prescription is only effective at changing the stellar structure if the model is already ``perturbed" by magnetism. The superadiabaticity throughout the stellar interior increases with magnetic field strength; the effects of rotational inhibition can ``feed" on this, increasing the superadiabaticity somewhat further and producing small structural differences in some cases. In our models, this additional effect is noticeable only on the pre-main-sequence.
Finally, in \S~\ref{sec:depth_dep_alpha} we showed that both the rotationally- and magnetically-constrained versions of MLT explored in \S~\ref{sec:rot_inhibition_conv} and \S~\ref{sec:mag_inhibition_conv}, and the combination of these as shown in \S~\ref{sec:rot_and_mag}, can be duplicated by a ``standard" MLT model in which the mixing length parameter $\alpha_{\text{MLT}}$ is allowed to be depth-dependent. We provide explicit formulae linking the radially-variable $\alpha_{\text{MLT}}(r)$ to the rotational and magnetic formulations of \citetalias{Stevenson:1979aa} and \citetalias{MacDonald:2014aa} (equations~\ref{eq:depth_dep_alpha_rot} and~\ref{eq:depth_dep_alpha_mag} respectively), and we show that models constructed using these $\alpha_{\text{MLT}}(r)$ are indistinguishable from those directly employing the \citetalias{Stevenson:1979aa} or \citetalias{MacDonald:2014aa} models. These formulae enable the computation of ``magnetic" or ``rotating" models---within the assumptions of the \citetalias{Stevenson:1979aa} or \citetalias{MacDonald:2014aa} prescriptions---{without modification} of the mixing length formulation in a standard 1D stellar evolution code {(though they do require that codes be capable of modeling non-constant $\alpha_{\text{MLT}}$)}. We must caution, though, against taking these formulae as providing a \emph{quantitatively} correct assessment of how rotation and/or magnetism affect the heat transport (and hence the structure of the star) at every depth; this is in our opinion unlikely to be the case, since the formulations on which it is based (namely those of \citetalias{Stevenson:1979aa} and \citetalias{MacDonald:2014aa}) have many potential shortcomings, as detailed below. We have derived and included these formulae mainly in order to illustrate \emph{how} rotation and magnetism (in these prescriptions) could affect the structure of the star---namely, by modifying its specific entropy, just as $\alpha_{\text{MLT}}(r)$ modifies the entropy in this depth-dependent MLT. The trends deduced here (regarding the relative efficacy of these mechanisms, for example, in objects of different ages) may well be \emph{qualitatively} correct, even if the specific values of stellar radii, effective temperatures, etc., ultimately are not.
A principal limitation of our work is its reliance throughout on particularly simple models of how the rotation or magnetism affect the convective transport. In considering the effects of rotation on the structure, we effectively assumed that only the variation of $ds/dr$ with $\Omega$ matters, and also that the rotationally-constrained MLT of \citetalias{Stevenson:1979aa} adequately captures this variation; both assumptions are questionable. For example, the simulations of \citet{Barker:2014aa}, which we cite as providing some numerical support for this scaling, effectively model only a single latitude near the pole (i.e., where rotation and the gravity vector are aligned); it is by no means clear that the same temperature scalings will hold at different latitudes. In general rotation also introduces new anisotropy into the system (with motions increasingly aligned with the rotation axis in accord with the Taylor-Proudman constraint), implying that we might generally expect variations in the heat flux and/or entropy gradient with latitude. It is unclear how these latitudinal variations could best be represented in a 1D stellar model, which intrinsically assumes spherical symmetry. Similarly, the scaling of temperature or entropy gradients with rotation rate may well depend on latitude; indeed, latitudinal variations in these quantities are often present in spherical shell simulations of rotating convection \citep[e.g.,][]{2004ApJ...601..512B,2018A&A...609A.124R}.
It must likewise be acknowledged that the effects of magnetism on the flow, and hence on the stellar structure, are still uncertain. In general they will depend on both the strength and the spatial morphology of the magnetic fields---which, in all the models quoted above and in our own work here, are not solved for self-consistently as the outcome of a dynamo process, but instead must simply be imposed \emph{a priori}. Models making different assumptions about the interior field strengths have yielded substantially different results. {For example, the low-mass star models of \citet{Mullan:2001ab} explored fields of such strength ($\sim 100 \, \text{MG}$) that portions of the interior were rendered convectively stable; this was motivated partly by the striking observational finding that the coronal heating efficiency of stars did not exhibit any clear break in behavior at around spectral types M3-M4, where stars are (in standard non-magnetic models) predicted to transition from being partially radiative to fully convective \citep[e.g.,][]{1993ApJ...410..387F}.} Many of the other models noted above, including \citet{MacDonald:2012aa} onwards, have considered much weaker fields, which are probably more realistic \citep[e.g.,][]{Browning:2016aa}. Meanwhile numerical simulations of the interiors of low-mass stars \citep{2006ApJ...638..336D,Browning:2008aa,2015ApJ...813L..31Y} suggest that in many cases dynamos in these objects may yield fields that are approximately in equipartition with the convective kinetic energy density, rising above this in the most rapidly rotating cases \citep[see, e.g., discussion in][]{2017arXiv170102582A}; the spatial structure of the fields is not yet certain, but is clearly influenced by the rotation rate (e.g., \citealt{2006GeoJI.166...97C,Browning:2008aa,2012A&A...546A..19G,2015ApJ...813L..31Y,Weber:2016aa,2017JFM...813..558A}; see also discussions in \citealt{2017LRSP...14....4B}). The 1D models considered here (and for example in \citealt{MacDonald:2017ab}) are at least broadly consistent with these constraints on the overall field strengths, but we have made no effort to mimic the interior radial profile of the field, or to capture aspects of its actual spatial morphology---which, in any event, are still uncertain.
The effects of the magnetism on heat transport are also somewhat unclear, but note for example that \citet{2016GeoJI.204.1120Y} find that convective heat transport is actually \emph{enhanced} (relative to conductive transport) by the presence of magnetism in certain cases, in striking contrast to what is assumed in the \citetalias{MacDonald:2014aa} formulation (or likewise that of \citealt{Feiden:2014aa}, or in the reduced-$\alpha_{\text{MLT}}$ models discussed here). Of course the simulations operate in parameter regimes far removed from those in actual stellar interiors, but they are nonetheless indicative of the sometimes surprising dynamics that can occur when convection, rotation, and magnetism interact in spherical domains.
More fundamentally, our models rely on the mixing length theory of convection, and on extremely simple atmospheric boundary conditions; both are crude approximations of the complex 3D transport occurring in these layers. Several authors have noted effects that are present in 3D convection but not easily captured in MLT \citep[e.g.,][]{1991ApJ...370..295C,2007ApJ...667..448M,2010ApJ...710.1619A,2017ApJ...845L..17C}. Likewise, the role of the near-surface layers, where 3D convection coupled to radiative transport ultimately helps set the stellar adiabat, has lately been studied using simulations and theory \citep[e.g.,][]{2014ApJ...785L..13T,Tanner:2016aa,Trampedach:2014aa,2015A&A...573A..89M}. It is beyond the scope of this paper to provide detailed comparison between the effects induced by magnetism or rotation and those arising from all other effects not included in our modeling. However, it is worth noting that some of these effects must be clarified if a quantitative comparison between models and any specific observational data point is required. For example, variations in the surface atmospheric boundary condition and in metallicity, both fixed in our models, would modify the precise values of radius or effective temperature achieved at any given $\alpha_{\text{MLT}}$, whether depth-dependent or not \citep[see, e.g., discussions in][]{2014ApJ...785L..13T,Tanner:2016aa}.
Overall, our results suggest that rotation alone (if indeed it affects convection in the manner assumed here) cannot notably influence the overall structure of a fully-convective star, but magnetism might. To have a substantial influence, the magnetism (or indeed any other agent that modifies the heat transport) must impact layers relatively close to the stellar surface, which largely establish the star's overall adiabat and hence its radius. These effects can be duplicated using standard MLT, but at the cost of allowing a depth-dependent $\alpha_{\text{MLT}}(r)$ (intended to mimic the depth dependence of convective inhibition). In general, this may be difficult or impossible to calibrate using observations that probe the stellar surface alone. Further independent constraints on the form such depth-dependent convective inhibition must take---for example, by detailed comparison with 3D simulations incorporating rotation, magnetism, and radiative transport---may therefore be a prerequisite for truly predictive models of how magnetism affects the structure and evolution of these stars.
\acknowledgments
This research has been supported by the European Research Council under ERC grant agreements No. 337705 (CHASM), and by a Consolidated Grant from the UK STFC (ST/J001627/1). We have also benefited from access to the University of Exeter supercomputer, a DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and the University of Exeter. We also acknowledge PRACE for awarding us access to computational resources, namely Mare Nostrum based in Spain at the Barcelona Supercomputing Center, and Fermi and Marconi based at Cineca in Italy. We thank Isabelle Baraffe for helpful comments on a draft of the manuscript. {We also thank the referee for a thoughtful review that helped to improve the manuscript.}
\software{MESA \citep{Paxton:2011aa,Paxton:2013aa,Paxton:2015aa,2017arXiv171008424P}
}
\chapter{Measures and characterisations}
\label{chapter:meas}
In order to investigate the dynamics of human social behaviour quantitatively, we first introduce it as a time series and show how it is characterised by means of various techniques of time series analysis. According to Box~\emph{et al.}~\cite{Box2008Time}, a time series is a set of observations made sequentially in time. The timing of an observation, denoted by $t$, can be either continuous or discrete. Since most datasets of human dynamics have recently been recorded digitally, we will here focus on the case of discrete timings. In this sense, the time series can be called an event sequence, where each event indicates an observation. In this series the $i$th event takes place at time $t_i$ with the result of the observation $z_i$, which can denote a number, a symbol, or even a set of numbers, depending on what has been measured. The sequence $\{(t_i,z_i)\}$ can simply be denoted by $z_t$. Some events may occur over a time interval or with a duration. For example, a phone call between two individuals may last from a few minutes to hours~\cite{Holme2012Temporal}. In many cases, as the time scale of event durations is much smaller than that of our interest, event durations will be ignored in this monograph unless stated otherwise.
In most cases a time series refers to observations made at regular timings. For a fixed time interval $t_{\textrm{int}}$, the timings are set as $t_i=t_0+ t_{\textrm{int}}i$ for $i=0, 1, 2,\cdots$. In many cases, $t_0$ and $t_{\textrm{int}}$ are fixed at the outset and can thus be ignored in the analysis. An example of a time series with regular observations is the daily price of a stock in the stock market, constituting a financial time series~\cite{Mantegna2007Introduction}. Such time series are often analysed using traditional techniques like the autocorrelation function, with the aim of revealing the dependencies between observed values, which often show inhomogeneities and large fluctuations.
One also finds many cases in which the timings of observations are inhomogeneous, as in the case of emails sent by a user~\cite{Barabasi2005Origin}. The fact that the occurrence of events is not regular in time leads to temporally inhomogeneous time series, potentially together with variation of the observed value $z_t$. In these cases we can speak of two kinds of inhomogeneities in observed time series. On the one hand, fluctuations may be associated both with temporal inhomogeneities and with the variation of observations. On the other hand, inhomogeneities may be associated only with the timings of events, not with observation values. This is the case for several recent datasets, e.g., those related to communication or individual transactions. In such datasets events typically carry no content for privacy reasons, thus only their timings are observable. In the following Sections we will mainly focus on the latter type of time series.
We remark that a time series with regular timings but irregular observed values can be translated into a time series with irregular timings. This can be done, e.g., by considering only the observations with $z_t \geq z_{\textrm{th}}$, where $z_{\textrm{th}}$ denotes some threshold value. In this way one generates a time series containing only observations with extreme values, such as crashes in financial markets. In the opposite direction, a time series with irregular timings can also be translated into one with regular timings, e.g., by binning the observations over a sufficiently large time window $t_w$. More precisely, one can obtain the time series with regular timings as follows:
\begin{equation}
\tilde z_k\equiv \sum_{kt_w\leq t<(k+1)t_w} z_t
\end{equation}
for all possible integers $k$. This constitutes a coarse-graining process for the time series.
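For illustration, a minimal sketch of this coarse-graining step (in Python; the function name and the example values are ours) is:
\begin{verbatim}
# Minimal sketch: bin an irregularly timed event sequence {(t_i, z_i)}
# into a regular time series with window t_w, as in the equation above.
import numpy as np

def coarse_grain(times, values, t_w):
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    k = np.floor(times / t_w).astype(int)    # window index of each event
    z_tilde = np.zeros(k.max() + 1)
    np.add.at(z_tilde, k, values)            # sum z_t within each window
    return z_tilde

# Example: three events with unit observation values
print(coarse_grain([1.2, 3.7, 4.1], [1, 1, 1], t_w=2.0))   # [1. 1. 1.]
\end{verbatim}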
\section{Point processes as time series with irregular timings}
\label{subsect:point}
A time series with irregular timings can be interpreted as the realisation of a point process on the time axis. To introduce these interpretations, let us first disregard the information contained in the observation results $z_t$, as it is not generally accessible, and consider only the timings of events. On the one hand, the event sequence with $n$ events can be represented by an ordered list of event timings, i.e., $ev(t_i)=\{t_0,t_1,\cdots,t_{n-1}\}$, where $t_i$ denotes the timing of the $i$th event. On the other hand, the event sequence can be depicted as a binary signal $x(t)$ that takes a value of $1$ at time $t=t_i$, or $0$ otherwise. For discrete timings, one can write the signal as
\begin{equation}
x(t)=\sum_{i=0}^{n-1}\delta_{t,t_i},
\label{eq:xt}
\end{equation}
where $\delta$ denotes the Kronecker delta.
\subsection{The Poisson process}
The temporal Poisson process is a stochastic process commonly used to model random processes such as the arrival of customers at a store or of packets at a router. It evolves via completely independent events and can thus be interpreted as a type of continuous-time Markov process. In a Poisson process, the probability that $n$ events occur within a bounded interval follows a Poisson distribution
\begin{equation}
P(n)=\frac{\lambda^{n}e^{-\lambda}}{n!},
\label{eq:PoissEvRate}
\end{equation}
where $\lambda$ denotes the average number of events per interval, which is equal to the variance of the distribution in this case. Since these stochastic processes consist of completely independent events, they have served as reference models when studying bursty systems. As we will see later, bursty temporal sequences emerge with fundamentally different dynamics, showing strong temporal heterogeneities and temporal correlations. Any deviation from the corresponding Poisson model can therefore indicate patterns induced by correlations or other factors such as memory effects.
Throughout the monograph we will refer to two types of Poisson processes. One type, called the \emph{homogeneous Poisson process}, is characterised by a constant event rate $\lambda$, while the other type, called the \emph{non-homogeneous Poisson process}, is defined such that the event rate varies over time and is denoted by $\lambda(t)$. For more precise definitions and a discussion of the properties of Poisson processes, we refer the reader to the extensive literature addressing this process, e.g., Ref.~\cite{Grimmett2009Probability}. We remark that Poisson processes and their variants have also been studied in terms of shot noise in electric conductors and related systems~\cite{Blanter2000Shot, Lowen1990Powerlaw, Bulashenko2000Suppression}.
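As a simple reference implementation (a minimal sketch, not taken from the cited literature), a homogeneous Poisson event sequence can be generated by drawing i.i.d. exponential inter-event times with mean $1/\lambda$; a non-homogeneous process could be obtained, e.g., by thinning such a sequence.
\begin{verbatim}
# Minimal sketch: generate event timings of a homogeneous Poisson process
# with rate lam over the observation period [0, T).
import numpy as np

def homogeneous_poisson(lam, T, seed=None):
    rng = np.random.default_rng(seed)
    times = []
    t = rng.exponential(1.0 / lam)
    while t < T:
        times.append(t)
        t += rng.exponential(1.0 / lam)   # independent exponential gaps
    return np.array(times)

events = homogeneous_poisson(lam=0.5, T=1000.0, seed=1)
print(len(events))                        # close to lam * T on average
\end{verbatim}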
\subsection{Characterisation of temporal heterogeneities}
The temporal irregularities of an event sequence can be characterised in terms of various quantities. For this, a schematic diagramme and a realistic example of such event sequences are respectively depicted in Fig.~\ref{fig:scheme1} and Fig.~\ref{fig:example_model}(a), where the example has been generated using a model for bursty dynamics~\cite{Jo2015Correlated}.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/fig_interevent_time.pdf}
\caption{Schematic diagramme of an event sequence, where each vertical line indicates the timing of the event. The inter-event time $\tau$ is the time interval between two consecutive events. The residual time $\tau_r$ is the time interval from a random moment (e.g., the timing annotated by the vertical arrow) to the next event. In most empirical datasets, the distributions of $\tau$ are heavy-tailed.}
\label{fig:scheme1}
\end{figure}
\subsubsection{The inter-event time distribution}
In order to formally introduce these measures let us first consider an event sequence $ev(t_i)$ and define the inter-event time as
\begin{equation}
\tau_i\equiv t_i-t_{i-1},
\end{equation}
which is the time interval between two consecutive events at times $t_{i-1}$ and $t_i$ for $i=1,\cdots,n-1$. Then we obtain a sequence of inter-event times, i.e., $iet(\tau_i)=\{\tau_1,\cdots,\tau_{n-1}\}$, where $n\geq 2$ is assumed. By ignoring the order of $\tau_i$s, we can compute the probability density function of inter-event times, i.e., the inter-event time distribution $P(\tau)$. For completely regular time series, all inter-event times are the same and equal to the mean inter-event time, denoted by $\langle\tau\rangle$, thus the inter-event time distribution reads as follows:
\begin{equation}
P(\tau)=\delta(\tau-\langle\tau\rangle),
\end{equation}
where $\delta(\cdot)$ denotes the Dirac delta function. Here the standard deviation of inter-event times, denoted by $\sigma$, is zero.
For the completely random and homogeneous Poisson process, it is easy to derive~\cite{Grimmett2009Probability} that the inter-event times are exponentially distributed as follows:
\begin{equation}
P(\tau)=\frac{1}{\langle\tau\rangle} e^{-\tau/\langle\tau\rangle},
\end{equation}
where $\sigma=\langle\tau\rangle$. Note that the event rate introduced in Eq.~(\ref{eq:PoissEvRate}) is $\lambda=1/\langle \tau \rangle$.
Finally, in many empirical processes in nature and society, inter-event time distributions have commonly been observed to be broad, with heavy tails ranging over several orders of magnitude. In such bursty time series the fluctuations characterised by $\sigma$ are much larger than $\langle\tau\rangle$, indicating that $P(\tau)$ is rather different from the exponential distribution that would result from Poisson dynamics. Bursty systems evolve through events that are heterogeneously distributed in time. This leads to a broad $P(\tau)$, which can be fitted with power-law, log-normal, or stretched exponential distributions, to name a few candidates. Most commonly, empirical analyses show that $P(\tau)$ can be described by a power-law form with an exponential cutoff as
\begin{equation}
P(\tau)\simeq C\tau^{-\alpha}e^{-\tau/\tau_c},
\end{equation}
where $C$ denotes a normalisation constant, $\alpha$ is the power-law exponent, and $\tau_c$ sets the position of the exponential cutoff. See Fig.~\ref{fig:example_model}(b) for an example of a power-law $P(\tau)$. The power-law scaling of $P(\tau)$ indicates the lack of any characteristic time scale, but the presence of strong temporal fluctuations, characterised by the power-law exponent $\alpha$. Power-law distributions are also associated with the concepts of scale-invariance and self-similarity, as demonstrated in Ref.~\cite{Newman2005Power}. In this sense, the value of $\alpha$ is deemed to have an important meaning, especially in terms of universality classes in statistical physics~\cite{Plischke2006Equilibrium}. Interestingly, as will be discussed in Chapter~\ref{chapter:emp}, a number of recent empirical studies have reported power-law inter-event time distributions with various exponent values.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{figs/summary_mu0_1_nu2_0.pdf}
\caption{(a) An example of the realistic event sequence generated by a model with preferential memory loss mechanism for correlated bursts~\cite{Jo2015Correlated} using the parameter values of $\mu=0.1$, $\nu=2$, and $\epsilon=\epsilon_L=10^{-6}$. The bursty behaviour of the event sequence can be characterised by (b) inter-event time distribution $P(\tau)$, (c) bursty train size distribution $P_{\Delta t}(E)$ for time window $\Delta t$, and (d) autocorrelation function $A(t_d)$ with time delay $t_d$. In addition, the burstiness parameter and memory coefficient of the event sequence were estimated as $B\approx 0.483$ and $M\approx 0.038$, respectively.}
\label{fig:example_model}
\end{figure}
Nevertheless, we note that although recent studies have revealed several bursty systems with broad inter-event time distributions, it is not trivial to identify the functional form that best fits the data points or to estimate its parameters, such as the value of the power-law exponent. For the related statistical and technical issues, see Ref.~\cite{Clauset2009Powerlaw} and references therein. In addition, the effect of the finite size of the observation period on the evaluation of inter-event time distributions has recently been discussed in Ref.~\cite{Kivela2015Estimating}.
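A minimal sketch of the first steps of such an analysis (in Python, assuming only an array of event timings; the log-binning is a simple estimate and is no substitute for the careful fitting procedures of Ref.~\cite{Clauset2009Powerlaw}) is:
\begin{verbatim}
# Minimal sketch: inter-event times and a log-binned estimate of P(tau).
import numpy as np

def inter_event_times(event_times):
    return np.diff(np.sort(event_times))

def log_binned_pdf(tau, n_bins=30):
    # empirical density of tau on logarithmically spaced bins
    bins = np.logspace(np.log10(tau.min()), np.log10(tau.max()), n_bins)
    density, edges = np.histogram(tau, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centers, density
\end{verbatim}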
\subsubsection{The burstiness parameter}
The heterogeneity of the inter-event times can be quantified by a single measure introduced by Goh and Barab\'asi~\cite{Goh2008Burstiness}. The burstiness parameter $B$ is defined as a function of the coefficient of variation (CV) of inter-event times, $r\equiv \sigma/\langle \tau\rangle$, and measures temporal heterogeneity as follows:
\begin{equation}
B\equiv \frac{r-1}{r+1}=\frac{\sigma-\langle\tau\rangle}{\sigma+\langle\tau\rangle}.
\label{eq:burstiness_param}
\end{equation}
Here $B$ takes the value of $-1$ for regular time series with $\sigma=0$, and it is equal to $0$ for random, Poissonian time series where $\sigma=\langle \tau\rangle$. In case when the time series appears with more heterogeneous inter-event times than a Poisson process, the burstiness parameter is positive ($B>0$), while taking the value of $1$ only for extremely bursty cases with $\sigma \rightarrow \infty$. This measure has found a wide range of applications because of its simplicity, e.g., in analysing earthquake records, heartbeats of human subjects, and communication patterns of individuals in social networks, as well as for testing models of bursty dynamics~\cite{Goh2008Burstiness, Jo2012Circadian, Yasseri2012Dynamics, Kim2014Index, Wang2015Temporal, Zhao2015Empirical, Jo2015Correlated, Gandica2016Origin, Li2016Collective}.
However, it was recently shown that the range of $B$ is strongly affected by the number of events $n$, especially for bursty temporal patterns~\cite{Kim2016Measuring}. For a regular time series, the CV of inter-event times, $r$, has the value of $0$ irrespective of $n$, as all the inter-event times are the same. For a random time series, one gets $r=\sqrt{(n-1)/(n+1)}$ by imposing a periodic boundary condition on the time series; this case basically corresponds to the Poisson process. Finally, for an extremely bursty time series, one has $r=\sqrt{n-1}$, corresponding to the case when all events occur asymptotically at the same time. This implies a strong finite-size effect on the burstiness parameter for time series with a moderate number of events. We also remark that $B=1$ is realised only when $n\to \infty$. Suppose one compares the degrees of burstiness of two event sequences with different numbers of events. If the measured values of $B$ are the same for both event sequences, does it really mean that they are equally bursty? This is not a trivial issue. Thus, to correct for these strong finite-size effects, an alternative burstiness measure has been introduced in Ref.~\cite{Kim2016Measuring}:
\begin{equation}
B_n\equiv \frac{\sqrt{n+1} r-\sqrt{n-1}}{(\sqrt{n+1}-2)r +\sqrt{n-1}},
\end{equation}
which was devised to have the value of $1$ for $r=\sqrt{n-1}$, $0$ for $r=\sqrt{(n-1)/(n+1)}$, and $-1$ for $r=0$, respectively. The authors claimed that using this measure, one can distinguish the finite-size effect from the intrinsic burstiness characterising the time series.
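Both measures are straightforward to compute from an array of inter-event times; the following is a minimal sketch (our own, with the number of events recovered as the number of inter-event times plus one):
\begin{verbatim}
# Minimal sketch: burstiness parameter B and its finite-size corrected
# version B_n, computed from an array of inter-event times tau.
import numpy as np

def burstiness(tau):
    r = tau.std() / tau.mean()                 # coefficient of variation
    return (r - 1.0) / (r + 1.0)

def burstiness_n(tau):
    n = len(tau) + 1                           # number of events
    r = tau.std() / tau.mean()
    num = np.sqrt(n + 1.0) * r - np.sqrt(n - 1.0)
    den = (np.sqrt(n + 1.0) - 2.0) * r + np.sqrt(n - 1.0)
    return num / den
\end{verbatim}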
\subsubsection{The memory coefficient}
So far, we have ignored any possible correlations between inter-event times for the sake of simplicity. To quantify dependencies between consecutive inter-event times, one can directly study the joint distribution $P(\tau_i,\tau_{i+1},\cdots,\tau_{i+k})$ of an arbitrary number of consecutive inter-event times, as introduced in Ref.~\cite{Jo2015Correlated}. For a simpler description of such dependencies, Goh and Barab\'asi~\cite{Goh2008Burstiness} introduced the memory coefficient $M$ to measure two-point correlations between consecutive inter-event times as follows:
\begin{equation}
M\equiv \frac{1}{n-2}\sum_{i=1}^{n-2}\frac{(\tau_i-\langle \tau\rangle_1)(\tau_{i+1}-\langle\tau\rangle_2)}{\sigma_1\sigma_2},
\label{eq:memory_coeff}
\end{equation}
with $\langle\tau\rangle_1$ (respectively $\langle\tau\rangle_2$) and $\sigma_1$ (respectively $\sigma_2$) being the average and the standard deviation of inter-event times $\{\tau_i | i=1,\cdots, n-2\}$ (respectively $\{\tau_{i+1} | i=1,\cdots, n-2\}$). Beyond only considering consecutive inter-event times, this measure can be extended to capture correlations between inter-event times separated by exactly $m-1$ intermediate inter-event times ($m\geq 1$). As a general form, the memory coefficient can be written as follows:
\begin{equation}
M_m\equiv \frac{1}{n-m-1}\sum_{i=1}^{n-m-1}\frac{(\tau_i-\langle \tau\rangle_1)(\tau_{i+m}-\langle\tau\rangle_2)}{\sigma_1\sigma_2}
\label{eq:Mgen}
\end{equation}
with the corresponding definitions of $\langle\tau\rangle_1$, $\langle\tau\rangle_2$, $\sigma_1$, and $\sigma_2$. Then, the set of $M_m$ for all possible $m$ may fully characterise the memory effects between inter-event times.
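A minimal sketch of the generalised memory coefficient $M_m$ of Eq.~(\ref{eq:Mgen}), computed from an array of inter-event times (our own implementation), is:
\begin{verbatim}
# Minimal sketch: memory coefficient M_m between inter-event times
# separated by m - 1 intermediate inter-event times.
import numpy as np

def memory_coefficient(tau, m=1):
    x, y = tau[:-m], tau[m:]                   # pairs (tau_i, tau_{i+m})
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())
\end{verbatim}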
Note that an alternative measure, called the local variation, was introduced originally in neuroscience~\cite{Shinomoto2003Differences}. The local variation is defined as
\begin{equation}
{\textrm{LV}} \equiv \frac{1}{n-2}\sum_{i=1}^{n-2}\frac{3(\tau_i-\tau_{i+1})^2}{(\tau_i+\tau_{i+1})^2},
\end{equation}
which takes the values of $0$, $1$, and $3$, respectively, for the regular, random, and extremely bursty time series. This measure has also been used to analyse datasets describing human bursty patterns~\cite{Aoki2016Inputoutput}.
We also introduce an entropy-based measure for the correlations between consecutive inter-event times that applies only to power-law inter-event time distributions~\cite{Baek2008Testing}. If the inter-event time distribution is a power law, $P(\tau)\propto \tau^{-\alpha}$ for $\tau\geq \tau_{\textrm{min}}$, one can assign to each inter-event time $\tau_i$ a number $r_i$ as follows:
\begin{equation}
r_i=1-\left(\frac{\tau_i}{\tau_{\textrm{min}}}\right)^{1-\alpha},
\end{equation}
which will be uniformly distributed between $[0,1)$. Then the correlation between consecutive inter-event times is measured in terms of the mutual information using the joint probability density function $P(r_i, r_{i+1})$:
\begin{equation}
I(r_i;r_{i+1})\equiv \sum_{r_i}\sum_{r_{i+1}}P(r_i, r_{i+1}) \log\left[\frac{P(r_i, r_{i+1})}{P(r_i)P(r_{i+1})}\right].
\end{equation}
If $\tau_i$ and $\tau_{i+1}$ are fully uncorrelated, so are $r_i$ and $r_{i+1}$, leading to the zero value of the mutual information defined above.
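A minimal sketch of this measure (our own; it assumes that $\alpha$ and $\tau_{\textrm{min}}$ are known, and the joint distribution is estimated by a simple two-dimensional histogram with an arbitrary number of bins) is:
\begin{verbatim}
# Minimal sketch: mutual information between consecutive transformed
# inter-event times r_i, for a power-law P(tau) with known alpha, tau_min.
import numpy as np

def uniformise(tau, alpha, tau_min):
    return 1.0 - (tau / tau_min)**(1.0 - alpha)   # r_i, uniform on [0, 1)

def mutual_information(r, n_bins=10):
    x, y = r[:-1], r[1:]
    pxy, _, _ = np.histogram2d(x, y, bins=n_bins, range=[[0, 1], [0, 1]])
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of r_i
    py = pxy.sum(axis=0, keepdims=True)           # marginal of r_{i+1}
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))
\end{verbatim}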
\subsubsection{The autocorrelation function}
The conventional way for detecting correlations in time series is to measure the autocorrelation function. For this, we use the representation of event sequences as binary signals $x(t)$ as defined in Eq.~(\ref{eq:xt}). In addition, for a proper introduction we need to define the delay time $t_d$, which sets a time lag between two observations of the signal $x(t)$. Then the autocorrelation function with delay time $t_d$ is defined as follows:
\begin{equation}
A(t_d)\equiv \frac{ \langle x(t)x(t+t_d)\rangle_t- \langle x(t)\rangle^2_t}{ \langle x(t)^2\rangle_t- \langle x(t)\rangle^2_t},
\end{equation}
where $\langle \cdot \rangle_t$ denotes the time average over the observation period. For more on the autocorrelation function, see Ref.~\cite{Box2008Time}. In time series with temporal correlations, $A(t_d)$ typically decays as a power law:
\begin{equation}
A(t_d)\sim t_d^{-\gamma}
\end{equation}
with decaying exponent $\gamma$. One can see an example of the power-law decaying $A(t_d)$ in Fig.~\ref{fig:example_model}(d). In addition, note that one can relate $A(t_d)$ to the power spectrum or spectral density of the signal $x(t)$ as follows:
\begin{equation}
\label{eq:powerSpectrum}
P(\omega)=\left|\int x(t)e^{i\omega t}dt\right|^2 \propto \int A(t_d)e^{-i\omega t_d}dt_d,
\end{equation}
which appears as the Fourier transform of the autocorrelation function. We are mostly interested in power spectra that decay as a power law,
\begin{equation}
P(\omega)\sim\omega^{-\alpha_\omega}
\end{equation}
with $0.5<\alpha_\omega<1.5$, in which case the time series is called $1/f$ noise. $1/f$ noise has been observed ubiquitously in various complex systems~\cite{Bak1987Selforganized} and has hence been studied extensively over the last few decades.
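A minimal sketch of estimating $A(t_d)$ from an event sequence on a discrete time grid (our own; the time averages are taken over the overlapping part of the signal) is:
\begin{verbatim}
# Minimal sketch: binary signal x(t) on a discrete time grid and the
# autocorrelation A(t_d) estimated from it.
import numpy as np

def binary_signal(event_times, T):
    x = np.zeros(int(T))
    x[np.asarray(event_times, dtype=int)] = 1.0   # x(t) = 1 at event times
    return x

def autocorrelation(x, t_d):
    x0, x1 = x[:len(x) - t_d], x[t_d:]
    num = np.mean(x0 * x1) - np.mean(x)**2
    den = np.mean(x**2) - np.mean(x)**2
    return num / den
\end{verbatim}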
The scaling relation between $\alpha$ and $\gamma$ has been studied both analytically and numerically. Let us first mention the relation between $\alpha_\omega$ and $\gamma$. If $A(t_d) \sim t_d^{-\gamma}$ for $0<\gamma<1$, then from Eq.~(\ref{eq:powerSpectrum}) one finds the scaling relation:
\begin{equation}
\label{eq:alpha_omega_gamma}
\alpha_\omega=1-\gamma.
\end{equation}
When the inter-event times are i.i.d. random variables with $P(\tau)\sim \tau^{-\alpha}$, implying no interdependency between inter-event times, the power-law exponent $\alpha_\omega$ is obtained as a function of $\alpha$ as follows~\cite{Lowen1993Fractal, Allegrini2009Spontaneous}:
\begin{equation}
\label{eq:alpha_omega_alpha}
\alpha_\omega=\left\{\begin{tabular}{ll}
$\alpha-1$ & for $1<\alpha\leq 2$,\\
$3-\alpha$ & for $2<\alpha\leq 3$,\\
$0$ & for $\alpha>3$.
\end{tabular}\right.
\end{equation}
For this result, the following inter-event time distribution was used:
\begin{equation}
P(\tau)=\left\{\begin{tabular}{ll}
$\frac{\alpha-1}{a^{1-\alpha}-b^{1-\alpha}}\tau^{-\alpha}$ & for $0<a<\tau <b$,\\
$0$ & otherwise.
\end{tabular}\right.
\end{equation}
Combining Eqs.~(\ref{eq:alpha_omega_gamma}) and~(\ref{eq:alpha_omega_alpha}), we have
\begin{eqnarray}
\label{eq:alpha_gamma}
\begin{tabular}{ll}
$\alpha+\gamma=2$ & for $1<\alpha\leq 2$,\\
$\alpha-\gamma=2$ & for $2<\alpha\leq 3$,
\end{tabular}
\end{eqnarray}
which have also been derived in Ref.~\cite{Vajna2013Modelling}. The above power-law exponents can be related via the Hurst exponent $H$, i.e., $\gamma=2-2H$~\cite{Kantelhardt2001Detecting} or $\alpha_\omega=2H-1$~\cite{Allegrini2009Spontaneous, Rybski2012Communication}. This indicates that a power-law decaying autocorrelation function could be explained solely by the inhomogeneous inter-event times, without any interdependency between them. In fact, observed autocorrelation functions measure not only the inhomogeneities in inter-event times themselves but also correlations between consecutive inter-event times of arbitrary length. These effects should therefore be distinguished from each other, if possible, for a better understanding of bursty behaviour. To this end, another measure, the bursty train size distribution, has recently been introduced and is discussed below.
\subsubsection{The bursty train size distribution}
\label{sec:PE}
The above-mentioned ambiguity of the autocorrelation function called for another way to indicate correlations between consecutive inter-event times. A method based on detecting correlated bursty trains was proposed in Ref.~\cite{Karsai2012Universal}. A bursty train is a sequence of events in which each event follows the previous one within a time window $\Delta t$; $\Delta t$ thus defines the maximum time between consecutive events that are assumed to be causally correlated. In this way, an event sequence can be decomposed into a set of causal event trains in which each pair of consecutive events in a given train is closer than $\Delta t$, while trains are separated from each other by an inter-event time $\tau>\Delta t$. To obtain the size of each bursty train, denoted by $E$, we count the number of events it contains, as depicted in Fig.~\ref{fig:scheme2}. Note that this notion assigns a bursty train size $E=1$ to standalone events, which occur independently of any preceding or following events according to this definition. The relevant measure for temporal correlation is the bursty train size distribution $P_{\Delta t}(E)$ for a fixed $\Delta t$. If events are independent, $P_{\Delta t}(E)$ must take the form:
\begin{eqnarray}
P_{\Delta t}(E) &=& \left[ \int_0^{\Delta t}P(\tau)d\tau \right]^{E-1}\left[1- \int_0^{\Delta t} P(\tau)d\tau\right]\\
& \approx & \frac{1}{E_c(\Delta t)}e^{-E/E_c(\Delta t)},
\label{eq:PEindep}
\end{eqnarray}
where $E_c(\Delta t)\equiv -1/\ln F(\Delta t)$, with $F(\Delta t)\equiv \int_0^{\Delta t}P(\tau)d\tau$ the cumulative distribution of inter-event times. Since $F(\Delta t)$ is not a function of $E$, the functional form of $P(\tau)$ is irrelevant to the functional form of $P_{\Delta t}(E)$, which is exponential for any independent event sequence. Thus any deviation of $P_{\Delta t}(E)$ from an exponential form indicates correlations between inter-event times. Interestingly, several empirical cases have been found to show power-law distributed train sizes,
\begin{equation}
P_{\Delta t}(E)\sim E^{-\beta},
\end{equation}
with the power-law exponent $\beta$ for a wide range of $\Delta t$~\cite{Karsai2012Universal, Karsai2012Correlated, Jiang2013Calling, Kikas2013Bursty}. For a demonstration of such observations, see Fig.~\ref{fig:PE}(a--c), adapted from Ref.~\cite{Karsai2012Universal}. This phenomenon, called \emph{correlated bursts}, has been shown to characterise several systems in nature and human dynamics~\cite{Karsai2012Universal}.
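Detecting bursty trains amounts to a single scan over the inter-event times; a minimal sketch (our own) is:
\begin{verbatim}
# Minimal sketch: sizes E of bursty trains for a given time window delta_t.
import numpy as np

def train_sizes(event_times, delta_t):
    tau = np.diff(np.sort(event_times))
    sizes, current = [], 1
    for t in tau:
        if t <= delta_t:
            current += 1              # event joins the current train
        else:
            sizes.append(current)     # a long gap closes the train
            current = 1
    sizes.append(current)             # standalone events give E = 1
    return np.array(sizes)
\end{verbatim}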
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/fig_burstytrain.pdf}
\caption{Schematic diagramme of an event sequence, where each vertical line indicates the timing of the event. For a given time window $\Delta t$, a bursty train is determined by a set of events separated by $\tau\leq \Delta t$, while events in different trains are separated by $\tau>\Delta t$. The number of events in each bursty train, i.e., bursty train size, is denoted by $E$. In most empirical datasets, the distributions of $E$ are heavy-tailed.}
\label{fig:scheme2}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figs/Karsai2012Universal_fig2.pdf}
\caption{The characteristic functions of human communication event sequences. The bursty train size distribution $P_{\Delta t}(E)$ with various time windows $\Delta t$ (main panels), the inter-event time distribution $P(\tau)$ (bottom left panels), and the autocorrelation function $A(t_d)$ (bottom right panels) are calculated for different communication datasets. (a) Mobile phone call dataset: the scale-invariant behaviour was characterised by power-law functions with exponent values $\alpha\simeq 0.7$, $\beta\simeq 4.1$, and $\gamma\simeq 0.5$. (b) Almost the same exponents were estimated for short message sequences, taking values $\alpha\simeq 0.7$, $\beta\simeq 3.9$, and $\gamma\simeq 0.6$. (c) Email event sequence with estimated exponents $\alpha\simeq 1.0$, $\beta\simeq 2.5$, and $\gamma=0.75$. A gap in the tail of $A(t_d)$ in panel (c) appears due to logarithmic binning and slightly negative correlation values. Empty symbols show the corresponding results for independent sequences. Lanes labelled s, m, h, and d denote seconds, minutes, hours, and days, respectively. (\emph{Source:} Adapted from Ref.~\cite{Karsai2012Universal} under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.)}
\label{fig:PE}
\end{figure}
Finally, we mention the possible effects of interdependency between inter-event times on the scaling relations between the power-law exponents of the inter-event time distribution and the autocorrelation function, as presented in Eq.~(\ref{eq:alpha_gamma}). For example, one can compare the autocorrelation function calculated for an empirical event sequence with that for the shuffled event sequence, where correlations between inter-event times are destroyed, as shown in the lower right panels of Fig.~\ref{fig:PE}(a--c). By doing so, the effects of interdependency between inter-event times can be tested. Such effects on the scaling relation should be studied more rigorously in the future, as they are far from being fully understood; so far only a few studies have tackled this issue, e.g., Refs.~\cite{Rybski2012Communication, Karsai2012Universal, Vajna2013Modelling, Jo2015Correlated}.
\subsubsection{Memory kernels}
We also introduce the memory kernel as another measure of bursty temporal patterns~\cite{Crane2008Robust, Masuda2013SelfExciting, Aoki2016Inputoutput}. The memory kernel $\phi(t)$ relates past events, whether endogenous or exogenous, to future events. This measure, which represents the influence of past events, has been empirically found to have the power-law form
\begin{equation}
\phi(t)\sim t^{-(1+\theta)},
\end{equation}
where $t$ is the time elapsed since the past event and $\theta$ denotes the power-law exponent characterising the degree of memory effects. In general, however, memory kernels are also assumed to follow other functional forms, e.g., hyperbolic or exponential~\cite{Masuda2013SelfExciting}, in addition to power-law~\cite{Jo2015Correlated}. They are commonly applied in modelling bursty systems using self-exciting point processes~\cite{Mehrdad2015Hawkes}: for a given set of past events that occurred before time $t$, the event rate at time $t$ reads as follows:
\begin{equation}
\lambda(t)=V(t)+\sum_{i,t_i\leq t} \phi(t-t_i),
\end{equation}
where $V(t)$ is the exogenous source and $t_i$ denotes the timing of the $i$th event. We will discuss this in more detail in Section~\ref{sec:SelfExcPP}.
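As an illustration (a minimal sketch, not a full simulation of a self-exciting process; the regularised kernel with time scale $t_0$ is our assumption, introduced because a bare power law diverges at zero elapsed time), the rate $\lambda(t)$ can be evaluated as:
\begin{verbatim}
# Minimal sketch: event rate lambda(t) of a self-exciting point process
# with a power-law-like memory kernel phi(t) ~ (1 + t/t0)^(-(1+theta)).
import numpy as np

def event_rate(t, past_events, v0=0.1, theta=0.5, t0=1.0):
    past = np.asarray([ti for ti in past_events if ti <= t])
    phi = (1.0 + (t - past) / t0)**(-(1.0 + theta))   # regularised kernel
    return v0 + phi.sum()                             # V(t) taken constant

print(event_rate(10.0, [1.0, 4.0, 9.5]))
\end{verbatim}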
\subsubsection{Other characteristic measures}
In addition to the conventional measures of bursty behaviour already introduced, we here mention some less recognised ones. There indeed exist a number of traditional measures and techniques in nonlinear time series analysis~\cite{Kantz2004Nonlinear, Mantegna2007Introduction}. Among them we introduce detrended fluctuation analysis (DFA), originally devised for analysing DNA sequences~\cite{Peng1994Mosaic}. For a given time series $x(t)$ for $0\leq t<T$, with average value $\langle x\rangle$, the cumulative time series is constructed as
\begin{equation}
y(t)\equiv \int_0^t (x(t')-\langle x\rangle)dt'.
\end{equation}
The total time period $T$ is divided into segments of size $w$. For each segment, the cumulative time series is fitted by a polynomial $y_w (t)$. Using the fitted polynomials for all segments, the fluctuation of the detrended series over the entire time range is calculated as follows:
\begin{equation}
F(w)\equiv \sqrt{\int_0^T [y(t)-y_w (t)]^2 dt},
\end{equation}
which typically scales with the segment size $w$ as $w^H$. Here the scaling exponent $H$ is called the Hurst exponent~\cite{Bryce2012Revisiting}.
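A minimal sketch of DFA for a discrete-time signal (our own; it uses first-order polynomial detrending and normalises the residual per point, which changes $F(w)$ only by a constant factor and leaves the scaling exponent $H$ unaffected) is:
\begin{verbatim}
# Minimal sketch: detrended fluctuation analysis F(w) ~ w**H.
import numpy as np

def dfa(x, window_sizes, order=1):
    y = np.cumsum(x - np.mean(x))            # cumulative (profile) series
    fluctuations = []
    for w in window_sizes:
        n_seg = len(y) // w
        res = 0.0
        for k in range(n_seg):
            seg = y[k * w:(k + 1) * w]
            t = np.arange(w)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            res += np.sum((seg - trend)**2)  # squared residual per segment
        fluctuations.append(np.sqrt(res / (n_seg * w)))
    return np.array(fluctuations)
\end{verbatim}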
\section{Inter-event time, residual time, and waiting time}
\label{sec:iet_rt_wt}
As for the terminology for burstiness, there is a common confusion between the definitions of inter-event time, waiting time, and residual time. Here we would like to clarify their definitions and relations to each other.
For a given event sequence, the \emph{inter-event time} $\tau$ is defined as the time between two consecutive events. However, the observation of an event sequence always covers a finite period of time, which has to be considered in the terminology. So let us assume an observer who begins to observe the time series of events at a random moment in time and waits for the next event to take place. The time interval between the beginning of the observation period and this next event is called the \emph{residual time} $\tau_r$, also often called the \emph{residual waiting time} or \emph{relay time}~\cite{Kampen2007Stochastic}. A similar definition of the residual time is found in queuing theory, in the situation when a customer arrives at a random time and waits for the server to become available~\cite{Cox1972Theory, Cooper1998Some}. The residual time is then the time interval between the time of arrival and the time of being served; it thus corresponds to the remaining, or residual, time to the next event after a random arrival. The residual time distribution can be derived from the inter-event time distribution as
\begin{equation}
P(\tau_r)=\frac{1}{\langle \tau\rangle}\int_{\tau_r}^\infty P(\tau)d\tau,
\label{eq:rst_iet}
\end{equation}
and the average residual time can be calculated as
\begin{equation}
\langle \tau_r \rangle = \int_0^\infty \tau_r P(\tau_r)d\tau_r = \frac{\langle \tau^2\rangle}{2\langle \tau\rangle}.
\label{eq:taurderiv}
\end{equation}
This result explains a phenomenon called the \emph{waiting-time paradox}, which has important consequences for dynamical processes evolving on bursty temporal systems, as we will discuss in detail later in Section~\ref{sec:wtp}. As mentioned earlier, a common reference dynamics for quantifying the heterogeneity of a bursty sequence is provided by a Poisson process. Thus we may consider a normalised average residual time, obtained by dividing $\langle \tau_r \rangle$ by the corresponding residual time of a Poisson process, $\langle \tau_{r}^{P} \rangle$, which is simply $\langle \tau\rangle$. This can then be written as
\begin{equation}
\frac{\langle \tau_r \rangle}{\langle \tau_{r}^{P} \rangle}=\frac{\langle \tau^2 \rangle}{2\langle \tau \rangle^2}=\frac{1}{2}\left[ \left( \frac{\sigma}{\langle \tau \rangle}\right)^2+1\right]=\frac{B^2+1}{(B-1)^2},
\label{r_definition}
\end{equation}
where $\sigma$ is the standard deviation of $P(\tau)$ and $B$ is the burstiness parameter as defined in Eq.~(\ref{eq:burstiness_param}). Consequently this ratio can equally well be seen as a measure of burstiness.
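The relation $\langle \tau_r \rangle = \langle \tau^2\rangle / (2\langle \tau\rangle)$ can be checked numerically with a short sketch (our own; the Lomax-type heavy-tailed inter-event times are an arbitrary choice):
\begin{verbatim}
# Minimal sketch: numerical check of <tau_r> = <tau^2> / (2 <tau>).
import numpy as np

rng = np.random.default_rng(0)
tau = rng.pareto(2.5, size=200000) + 1.0   # heavy-tailed inter-event times
times = np.cumsum(tau)

# random observation moments and the time remaining to the next event
t_obs = rng.uniform(0.0, times[-1], size=50000)
tau_r = times[np.searchsorted(times, t_obs)] - t_obs

print(tau_r.mean(), (tau**2).mean() / (2.0 * tau.mean()))  # similar values
\end{verbatim}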
Contrary to the above definitions, \emph{waiting times} are not necessarily derived from series of consecutive events; rather, they can characterise the lifespan of single tasks. Tasks wait to be executed for a period that depends on their priorities as well as on newly arriving tasks. In this way the \emph{waiting time} $\tau_w$, also often called \emph{response time} or \emph{processing time}, is defined as the time interval a newly arrived task needs to wait before it is executed. For example, in an editorial process, each submitted manuscript gives rise to one waiting time until the decision is made~\cite{Mryglod2012Editorial, Jo2012Timevarying,Hartonen2013How}, and the waiting time distribution is obtained from a number of submitted manuscripts. However, a heavy tail of the waiting time distribution, $P(\tau_w)$, implies heterogeneity of the editorial system, but not necessarily bursty dynamics of the process itself. On the other hand, the waiting time can be deduced from an event sequence, e.g., of directed interactions, such as the time between receiving and responding to an email or letter. In these cases, a close relation between $P(\tau)$ and $P(\tau_w)$ seems to exist. Indeed, it has been argued that for a process with a heterogeneous waiting time distribution, the inter-event time distribution is also heterogeneous and vice versa, and can be characterised by the same exponent~\cite{Barabasi2005Origin, Vazquez2006Modeling, Li2008Empirical, Formentin2015New}. Waiting times will be duly addressed later in Section~\ref{sec:PriQueMod},
where they appear as the central quantity in the definition of priority queuing models~\cite{Abate1997Asymptotics, Barabasi2005Origin}.
\section{Collective bursty phenomena}\label{subsect:system}
So far we have been discussing measures to characterise bursty behaviour at the level of single individuals. However, individuals form egocentric networks and are connected to a larger social system, which could itself show bursty dynamics and be characterised at the system level. Since individual dynamics is observed to be bursty, it may affect the system-level dynamics and the emergence of any collective phenomena, while the contrary is also true: if the collective dynamics is bursty, it must affect the temporal patterns of each individual. The structure of social systems has been commonly interpreted in terms of social networks~\cite{Borgatti2009Network, Wasserman1994Social}, where nodes are identified as individuals and links assign their interactions. Thanks to the recent access to a huge amount of digital datasets related to human dynamics and social interaction, a number of empirical findings have been accumulated about the structure and dynamics of social networks. Researchers have analysed various social networks of face-to-face interactions~\cite{Eagle2006Reality, Zhao2011Social, Fournet2014Contact}, emails~\cite{Eckmann2004Entropy, Klimt2004Enron}, mobile phone communication~\cite{Onnela2007Structure, Blondel2015Survey}, online forums~\cite{Hric2014Community, Eom2015Tailscope}, Social Networking Services (SNS) like Facebook~\cite{Ugander2011Anatomy} and Twitter~\cite{Kwak2010What}, as well as even massive multiplayer online games~\cite{Szell2010Measuring, Szell2010Multirelational}. These studies of social networks show that there are commonly observed features or \emph{stylised facts} characterising their structures~\cite{Jackson2010Social, Murase2015Modeling, Kertesz2016Multiplex}, see also the summary in Table~I in Ref.~\cite{Jo2017Stylized}. For example, one finds broadly distributed network quantities like node degree and link weight~\cite{Albert2002Statistical, Onnela2007Analysis}, homophily~\cite{McPherson2001Birds, Newman2002Assortative}, community or modular structure~\cite{Granovetter1973Strength, Fortunato2010Community}, multilayer nature~\cite{Kivela2014Multilayer, Boccaletti2014Structure}, and geographical and demographic correlations~\cite{Onnela2011Geographic, Palchykov2012Sex, Jo2014Spatial} to mention a few. All these characteristics play important roles in the dynamics of social interactions.
At the same time, such datasets lead to the observation of mechanisms and correlations driving the interaction dynamics of people. This is the subject of the recent field of \emph{temporal networks}~\cite{Holme2012Temporal, 2013Temporal, Holme2015Modern, Masuda2016Guide}, which treats social networks as temporal objects, where interactions are time-varying and the static structure is recovered only after aggregation over an extensive period. Temporal networks are commonly interpreted as a sequence of events, which are defined as triplets $(i,j,t)$, indicating that a node $i$ interacts with a node $j$ at time $t$. The analysis of event sequences of a large number of individuals can disclose the mesoscopic structure of bursty interaction patterns, and enables us to characterise burstiness at the system level as well.
\subsection{Bursty patterns in egocentric networks}
\label{sec:egoburst}
The interaction dynamics of a focal individual or an ego can be extracted from the global temporal network by collecting all events in which the ego $i$ participates:
\begin{equation}
x_i(t)\equiv \sum_{j\in\Lambda_i} x_{ij}(t),
\end{equation}
where $\Lambda_i$ denotes the neighbour set of the ego $i$. In other words, the event sequence $x_i(t)$ builds up from the interaction sequences on single links, $x_{ij}(t)$, which together define the dynamics of the egocentric network. Our first question is how the bursty interactions of an ego are distributed among the different neighbours.
We have already discussed that, when observing an individual, her bursty activities may evolve in trains where several events follow each other within a short time window $\Delta t$. This is especially true for communication dynamics, where interactions like mobile calls, SMSs or emails sent or received by an ego exhibit such patterns. However, the question remains whether such bursty communication trains are the consequence of some collective interaction pattern in the larger egocentric network, e.g., to organise an event or to process information, or whether, on the contrary, they evolve on single links induced by discussions between only two people. One can easily figure this out by decoupling the entangled egocentric dynamics into single links and seeing how the bursty train size distribution $P(E)$ changes before and after this process. If the first hypothesis is true, as long trains of an ego are distributed among many links, after decoupling the trains should fall apart and their size distribution should change radically. On the other hand, if the second hypothesis is true, their size distribution should not change considerably. Using mobile phone call and SMS sequences, it has been shown in Ref.~\cite{Karsai2012Correlated} that after decoupling, $P(E)$ measured on single links is almost identical to that observed in individual activity sequences. In support of this observation it has been found that $\sim 80\%$ of trains evolve on single links, almost independently of the train size. Consequently, this suggests that long correlated bursty trains are a property of links rather than of nodes and are commonly induced by dyadic interactions. This study further discusses the difference between call and SMS sequences and finds that call (respectively SMS) trains are more imbalanced (respectively balanced) than one would expect from the overall communication balance of the social tie.
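A minimal sketch of how such bursty trains can be extracted in practice is given below; the routine simply splits a sorted event sequence at gaps longer than $\Delta t$ and can be applied either to the full egocentric sequence $x_i(t)$ or to single-link sequences $x_{ij}(t)$ in order to compare the resulting $P(E)$. The timestamps and the window are hypothetical.
\begin{verbatim}
import numpy as np
from collections import Counter

def train_sizes(event_times, dt):
    """Split a sorted event sequence into bursty trains: consecutive events
    separated by at most dt belong to the same train; return the train sizes E."""
    event_times = np.sort(np.asarray(event_times, dtype=float))
    breaks = np.diff(event_times) > dt       # a new train starts after a long gap
    sizes, current = [], 1
    for is_break in breaks:
        if is_break:
            sizes.append(current)
            current = 1
        else:
            current += 1
    sizes.append(current)
    return sizes

# Toy usage with hypothetical timestamps (e.g., call times of one ego, in seconds).
events = [0, 5, 9, 300, 302, 303, 305, 900]
print(Counter(train_sizes(events, dt=60)))   # Counter({3: 1, 4: 1, 1: 1})
\end{verbatim}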
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/fig_context.pdf}
\caption{Schematic example of the event sequence of an individual $A$ with her various contexts (neighbours) $B$, $C$, and $D$. The collective inter-event time $\tau^{(i)}$ is defined as the time interval between consecutive events of any contexts of the ego $i=A$. The contextual inter-event time $\tau^{(ij)}$ is defined between events of the same context, e.g., $j=B$.}
\label{fig:contextualBursts}
\end{figure}
One can adopt the same picture to understand the contribution of bursty patterns on links to the overall inter-event time distribution of an ego. This question was addressed by Jo \emph{et al.}, who proposed an alternative explanation for bursty links related to the contextual dependence of behaviour. In their interpretation, the context of an event~\cite{Jo2012Spatiotemporal, Jo2013Contextual} is the circumstance in which the event occurs and can be a person, a social situation with some convention, or a place. In the case of social interactions, for an ego $i$ the context of social interactions can be associated with a neighbour $j$ in the egocentric network. Then the question is how much \emph{contextual bursts}, which evolve in the interaction sequences of single links $x_{ij}(t)$, determine the \emph{collective bursts} observable in the overall interaction sequence $x_i(t)$ of the ego $i$. This question can be addressed on the level of inter-event times. As depicted in Fig.~\ref{fig:contextualBursts}, let us denote collective inter-event times in $x_i(t)$ as $\tau^{(i)}$, and contextual inter-event times in $x_{ij}(t)$ as $\tau^{(ij)}$. It is straightforward to see that a contextual inter-event time typically comprises multiple collective inter-event times as follows:
\begin{equation}
\label{eq:tauij_taui}
\tau^{(ij)} =\sum_{k=1}^n \tau^{(i)}_k,
\end{equation}
where $n-1$ is the number of events with contexts other than $j$ between two consecutive events with $j$. For example, one finds $n=3$ in Fig.~\ref{fig:contextualBursts} between the first and second observed interactions with context $B$. The relation between $P(\tau^{(ij)})$ and $P(\tau^{(i)})$ for uncorrelated inter-event times has been studied analytically and numerically in Ref.~\cite{Jo2013Contextual}, where both $P(\tau^{(ij)})$ and $P(\tau^{(i)})$ are assumed to have power-law forms with exponents $\alpha'$ and $\alpha$, respectively. For deriving the scaling relation between $\alpha'$ and $\alpha$, another power-law distribution is assumed for $n$ in Eq.~(\ref{eq:tauij_taui}), i.e., for the number of collective inter-event times comprising one contextual inter-event time, as $P(n)\sim n^{-\eta}$. The distribution of $n$ is related to how the ego distributes her limited resources, such as time, among her neighbours. Then one can write the relation between the distribution functions as follows:
\begin{eqnarray}
P(\tau^{(ij)})&=&\sum_{n=1}^\infty P(n) F_n(\tau^{(ij)}),\\
F_n(\tau^{(ij)})&\equiv &\prod_{k=1}^n \int_{\tau_0}^\infty d\tau_k^{(i)} P(\tau_k^{(i)}) \delta\left(\tau^{(ij)}-\sum_{k=1}^n \tau_k^{(i)}\right),
\end{eqnarray}
where $F_n$ denotes the probability of making one $\tau^{(ij)}$ as a sum of $n$ $\tau^{(i)}$s, and $\tau_0$ is the lower bound of inter-event times $\tau^{(i)}$. By solving this equation, the scaling relation between $\alpha'$, $\alpha$, and $\eta$ is obtained~\cite{Jo2013Contextual}:
\begin{equation}
\alpha'=\min\{(\alpha-1)(\eta-1)+1,\alpha,\eta\}.
\end{equation}
This result provides a condition under which the statistical properties of the ego's own temporal pattern could be described similarly to those of the ego's relationships.
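The composition rule of Eq.~(\ref{eq:tauij_taui}) and the resulting tail exponent can also be explored numerically. The sketch below is only a crude Monte Carlo check under simplifying assumptions (independent power-law samples, a truncated support for $n$, and a naive maximum-likelihood tail estimate); the exponent values and sample sizes are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, eta, tau0 = 1.7, 1.6, 1.0     # hypothetical exponents and lower bound

def power_law(size, exponent, xmin=1.0):
    """Continuous samples with pdf ~ x^(-exponent) for x >= xmin."""
    return xmin * (1.0 - rng.random(size)) ** (-1.0 / (exponent - 1.0))

# Discrete n with P(n) ~ n^(-eta) on a truncated support.
support = np.arange(1, 1001)
weights = support.astype(float) ** (-eta)
n_vals = rng.choice(support, size=20_000, p=weights / weights.sum())

# Contextual inter-event times as sums of n collective ones, Eq. (tauij_taui).
tau_ij = np.array([power_law(n, alpha, tau0).sum() for n in n_vals])

# Naive tail-exponent estimate from the largest values.
tail = np.sort(tau_ij)[-2000:]
alpha_prime = 1.0 + tail.size / np.log(tail / tail[0]).sum()
print("estimated alpha'   :", round(alpha_prime, 2))
print("predicted min{...} :", min((alpha - 1) * (eta - 1) + 1, alpha, eta))
\end{verbatim}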
Note that this terminology can be generalised from event sequences on single links to those of an arbitrary set of neighbours associated with the same context $\Lambda$. In this case the contextual event sequence can be written as
\begin{equation}
x_\Lambda (t)\equiv \sum_{i,j\in \Lambda} x_{ij}(t),
\end{equation}
where the summation considers individuals $i$ and $j$ who both belong to the same context or group $\Lambda$. Then one can study the relation between statistical properties at different levels of contextual grouping. For example, an empirical analysis using an online forum dataset was recently performed in Ref.~\cite{Panzarasa2015Emergence} to relate individual bursty patterns to forum-level bursty patterns.
In another work Song \emph{et al.}~\cite{Song2013Connections} proposed scaling relations between power-law exponents characterising the structural and temporal properties of temporal social networks. In terms of structure they concentrate on the distributions of node degrees and link weights observed over a finite time period. Here the node degree indicates the number of neighbours of a node, while the link weight is defined as the number of interaction events between two neighbouring nodes. Both of these distributions can be approximated as power laws with exponents $\epsilon_k$ and $\epsilon_w$. To characterise the dynamics of the network they consider the individual activity $a_i$, defined as the total number of interactions of an ego $i$ within a given period, and inter-event time distributions, measured not in real time but in event time, and not for egos but for social ties. In this case the inter-event time is defined as the number of events between two consecutive interactions of the ego with one specific neighbour (similar to $n$ in Eq.~(\ref{eq:tauij_taui})). Distributions of these dynamical quantities can also be approximated by power laws with exponents denoted by $1+\alpha_{a}$ for activity and $1+\alpha_{\tau}$ for inter-event times. They first show that the degree of a node $i$, denoted by $k_i$, observed for a period $[ t_1, t_2 ]$ increases as
\begin{equation}
k_i(t_1,t_2)\sim a_i(t_1,t_2)^{\kappa_i}.
\end{equation}
They argue that the power-law exponent $\kappa_i$ measured for an ego $i$, which they call the sociability, satisfies the condition
\begin{equation}
\kappa_i+\alpha_{\tau,i}=1,
\end{equation}
where $\alpha_{\tau,i}$ denotes the inter-event time exponent observed in the interaction sequence of the ego $i$. They further argue that the degree and weight distribution exponents can be determined by the dynamical parameters as
\begin{equation}
\epsilon_k=1+\min \left\{
\frac{\alpha_a}{1-\overline{\alpha}_{\tau}}, \frac{u}{\overline{\alpha}_{\tau} \ln \overline{a}} \right\},
\hspace{.3in} \epsilon_w=2-\overline{\alpha}_{\tau},
\end{equation}
where $\overline{\alpha}_{\tau}$ and $\overline{a}$ denote average values, while $u$ is a parameter capturing the variability of the sociability $\kappa$. The authors support these scaling relations by introducing scaling functions that collapse the corresponding distributions obtained from various human interaction datasets.
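As a rough illustration of how the sociability exponent $\kappa_i$ could be measured, the sketch below tracks the degree of a synthetic ego as a function of its activity and fits the slope on a log--log scale; the contact-generating rule is made up for the purpose of the example and the fit is deliberately naive, so this is not the estimation procedure of Ref.~\cite{Song2013Connections}.
\begin{verbatim}
import numpy as np

def sociability_exponent(contacts):
    """Fit k_i ~ a_i^kappa_i from a time-ordered list of the neighbours an ego
    contacted (one entry per event)."""
    seen, k_track = set(), []
    for c in contacts:
        seen.add(c)
        k_track.append(len(seen))           # degree after each event
    a = np.arange(1, len(contacts) + 1)     # activity = number of events so far
    slope, _ = np.polyfit(np.log(a), np.log(k_track), 1)
    return slope

# Hypothetical ego: the chance of contacting a new neighbour decays with activity,
# so the degree grows sublinearly with the activity (kappa < 1).
rng = np.random.default_rng(5)
contacts, neighbours = [], ["n0"]
for t in range(2000):
    if rng.random() < (t + 1) ** -0.4:      # sometimes meet a new neighbour
        neighbours.append("n%d" % len(neighbours))
        contacts.append(neighbours[-1])
    else:                                   # otherwise re-contact an old one
        contacts.append(neighbours[rng.integers(len(neighbours))])
print("estimated sociability kappa ~", round(sociability_exponent(contacts), 2))
\end{verbatim}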
\subsection{Bursty temporal motifs}
Moving beyond the egocentric point of view, bursty temporal interaction patterns can appear not only centred around a single ego but also among a larger number of people. Such patterns are formed by causally correlated sequences of interactions, which appear within a short time window between two or more people. These \emph{temporal motifs} are arguably induced by group conversations, information processing, or the organisation of a common event, etc., and can be associated with burstiness at the mesoscopic level of networks. The emergence of such group-level bursty events is rather rare and strongly depends on the observed communication channel and the type of induced events. However, it has been shown that some of them appear with a significantly larger frequency as compared to random reference models.
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/TempMotifsIllustr.pdf}
\caption{(a) A directed temporal network between four nodes $a$, $b$, $c$, and $d$ with four events, $e_1$, $e_2$, $e_3$, $e_4$, respectively at $t=15$, $18$, $24$, and $33$. Assuming that $\Delta t = 10$, $e_2$ and $e_4$ are adjacent but not $\Delta t$-adjacent. (b--d) All 2-event valid temporal subgraphs. (e) An invalid subgraph because it skips the event $e_2$ that for node $a$ takes place between $e_1$ and $e_3$.}
\label{fig:tempmotill}
\end{figure}
Temporal motifs are defined in temporal networks. For a schematic example, see Fig.~\ref{fig:tempmotill}(a). Here interactions between nodes occur at different times and they are interpreted as events assigned with time stamps. For a more detailed definition and characterisation of temporal networks we refer the reader to Refs.~\cite{Holme2012Temporal, Masuda2016Guide}. Temporal motifs consist of \emph{$\Delta t$-adjacent events} in the temporal network, which share at least one common node and happen within a time window $\Delta t$. Two events that are not directly $\Delta t$-adjacent might be \emph{$\Delta t$-connected} if there is a sequence of events connecting the two, in which consecutive events are $\Delta t$-adjacent. A connected temporal subgraph is then a set of events where each pair of events is $\Delta t$-connected, as depicted in Fig.~\ref{fig:tempmotill}(b--e). To define temporal motifs we further restrict our definition to \emph{valid temporal subgraphs}, where for each node in the subgraph the events involving the node must be consecutive, e.g., as in Fig.~\ref{fig:tempmotill}(b--d). Note that for the final definition of temporal motifs we consider only \emph{maximal valid temporal subgraphs}, which contain all events that are $\Delta t$-connected to the included events. For a more precise definition, see Refs.~\cite{Kovanen2011Temporal, Kovanen2013Temporal}. Also note that an alternative definition of temporal motifs has been proposed recently, where motifs are defined by events which all appear within a fixed time window~\cite{Paranjape2017Motifs}.
One way to detect temporal motifs is to interpret them as static directed coloured graphs and to find all isomorphic structures with equivalent event ordering in a temporal network~\cite{Kovanen2011Temporal}. The significance of the detected motifs can be inferred by comparing the observed frequencies to those calculated in some reference models, where temporal and causal correlations are removed. Such an analysis has shown~\cite{Kovanen2011Temporal} that the most frequent motifs in mobile phone communication sequences correspond to dyadic bursty interaction trains on single links. On the other hand, the least frequent motifs are formed by non-causal events, suggesting a strong dependence between causal correlations and bursty phenomena.
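To make the notion of $\Delta t$-adjacency concrete, the sketch below lists all pairs of $\Delta t$-adjacent events for a small hypothetical event list whose timings follow Fig.~\ref{fig:tempmotill}(a) but whose edges are made up; the full motif-detection pipeline of Ref.~\cite{Kovanen2011Temporal}, including validity, maximality, and isomorphism checks, is not reproduced here.
\begin{verbatim}
from itertools import combinations

# Hypothetical directed events (source, target, time); timings follow Fig. (a).
events = [("a", "b", 15), ("a", "c", 18), ("a", "d", 24), ("b", "c", 33)]
DT = 10

def dt_adjacent(e1, e2, dt=DT):
    """Two events are dt-adjacent if they share a node and occur within dt."""
    return bool(set(e1[:2]) & set(e2[:2])) and abs(e1[2] - e2[2]) <= dt

pairs = [(i + 1, j + 1)
         for (i, e1), (j, e2) in combinations(enumerate(events), 2)
         if dt_adjacent(e1, e2)]
print("dt-adjacent event pairs:", pairs)     # [(1, 2), (1, 3), (2, 3)] here
\end{verbatim}
Note that in this toy list the events at $t=18$ and $t=33$ share a node but are not $\Delta t$-adjacent for $\Delta t=10$, in line with the caption of Fig.~\ref{fig:tempmotill}.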
\subsection{System level characterisation}
Finally, we discuss methods to characterise bursty phenomena at the level of the whole social network. Temporal inhomogeneity at the system level can be measured in terms of \emph{temporal network sparsity}~\cite{Perotti2014Temporal}. This measure counts the number of microscopic configurations associated with the macroscopic state of a temporal network, a concept of multiplicity well known in statistical physics. More specifically, in a temporal network for a given time window, events can be distributed over the links of the corresponding static structure. Here we denote a link between nodes $i$ and $j$ as $ij$, and the set of all links as $L$. Thus, for a time window one can measure the fraction of events on a given link $ij$, denoted by $p_{ij}$, and compute the Shannon entropy over the links $ij\in L$ as:
\begin{equation}
H_{L} = -\sum_{ij\in L} p_{ij}\ln p_{ij},
\end{equation}
which quantifies how heterogeneously events are distributed among different links. After computing an average entropy $\langle H_L \rangle$ over several time windows, one can estimate the \emph{effective number of links} as
\begin{equation}
L^{\textrm{eff}} \equiv \exp(\langle H_L \rangle),
\end{equation}
which gives the number of links in a given time window under the assumption that the event rate per link is constant. By simultaneously measuring the effective number of links in the empirical temporal network and in a random reference model where events are uniformly distributed in time, one can introduce the notion of \emph{temporal network sparsity}:
\begin{equation}
\zeta_{\textrm{temp}} \equiv \frac{L^{\textrm{eff}}}{L^{\textrm{eff}}_{\textrm{ref}}}.
\end{equation}
This measure characterises the overall distribution of events within a given time window as compared to the case of homogeneously distributed events. The smaller the value of $\zeta_{\textrm{temp}}$, the more heterogeneous the event sequence and the more ``temporally sparse'' the network is. This measure turns out to have some explanatory power for spreading dynamics on various temporal networks~\cite{Perotti2014Temporal}.
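A minimal sketch of this measurement is given below. It computes $L^{\textrm{eff}}$ for a single hypothetical time window and compares it with a window in which the same number of events is spread evenly over the links; this uniform window stands in for the randomised reference of Ref.~\cite{Perotti2014Temporal}, which in practice is obtained by averaging over many windows of a temporally randomised event sequence.
\begin{verbatim}
import numpy as np
from collections import Counter

def effective_links(link_events):
    """Effective number of links exp(H_L) from the list of links on which the
    events of one time window took place (one entry per event)."""
    counts = np.array(list(Counter(link_events).values()), dtype=float)
    p = counts / counts.sum()
    return np.exp(-(p * np.log(p)).sum())

# Hypothetical window: six events concentrated on one link versus spread out evenly.
empirical = [("a", "b")] * 4 + [("b", "c"), ("c", "d")]
reference = [("a", "b"), ("b", "c"), ("c", "d")] * 2
zeta = effective_links(empirical) / effective_links(reference)
print("temporal network sparsity ~", round(zeta, 2))   # < 1: heterogeneous events
\end{verbatim}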
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/fig_deseason.pdf}
\caption{(a) An example of the deseasoning method applied to the mobile call series of a user, with $T_{\circlearrowleft}=1$ week. The top shows the first two weeks of the call series, coloured in red (the first week) and blue (the second week). Events from all weeks are collected into a one-week period to obtain the event rate $\rho(t)$ for $0\leq t< T_{\circlearrowleft}$. After deseasoning, the events of each week are put back into their original slots. (b) The original inter-event time distribution for individuals with $200$ calls is compared to the distributions of deseasoned inter-event times for various values of $T_{\circlearrowleft}$. (\emph{Source:} Adapted from Ref.~\cite{Jo2012Circadian} under $\textcopyright$ IOP Publishing \& Deutsche Physikalische Gesellschaft (CC BY-NC-SA).)}
\label{fig:deseason}
\end{figure}
\section{Cyclic patterns in human dynamics}\label{subsect:cyclic}
It is evident that humans follow intrinsic periodic patterns of circadian, weekly, and even longer cycles~\cite{Malmgren2008Poissonian, Jo2011Circadian, Jo2012Circadian, Aledavood2015Daily}. Such cycles clearly contribute to the inhomogeneities of temporal patterns, and they often result in an exponential cutoff of the inter-event time distributions. Identifying and filtering out such cyclic patterns from a time series can reveal bursty behaviour whose origin is different from those cycles. In order to characterise such cyclic patterns, let us consider a time series, i.e., the number of events at time $t$, denoted by $x(t)$, for the entire period of $0\leq t< T$. One may be interested in a specific cycle, like the daily or weekly one, with period denoted by $T_{\circlearrowleft}$. Then, for a given period $T_{\circlearrowleft}$, the event rate for $0\leq t <T_{\circlearrowleft}$ can be defined as
\begin{equation}
\rho(t)\equiv \frac{T_{\circlearrowleft}}{X}\sum_{k=0}^{T/T_{\circlearrowleft}} x(t+kT_{\circlearrowleft}),\hspace{.2in} X\equiv \int_0^{T} x(t)dt.
\end{equation}
Such cycles turn out to be apparent also in the inter-event time distributions $P(\tau)$. For example, one finds peaks of $P(\tau)$ corresponding to multiples of one day in mobile phone calls and blog posts~\cite{Jo2011Circadian, Kim2013Microscopic}. Note that such periodicities could be characterised by means of the power spectrum analysis in Eq.~(\ref{eq:powerSpectrum}); however, here we take a different approach.
Once such cycles are identified in terms of the event rate $\rho(t)$, we can filter them out by deseasoning the time series~\cite{Jo2012Circadian}. First, we extend the domain of $\rho(t)$ indefinitely by $\rho(t+kT_{\circlearrowleft})=\rho(t)$ with an arbitrary integer $k$. Then, using the identity $\rho(t)dt=\rho^*(t^*)dt^*$ with the deseasoned event rate $\rho^*(t^*)=1$, we get the deseasoned time $t^*(t)$ as
\begin{equation}
t^*(t)\equiv \int_0^t \rho(t')dt'.
\end{equation}
For a schematic example of the deseasoning method, see Fig.~\ref{fig:deseason}(a). In plain words, time is dilated (respectively contracted) at moments of high (respectively low) event rate. Then the deseasoned event sequence $\{t^*(t_i)\}$ is compared to the original event sequence $\{t_i\}$ to see how strong a signature of burstiness or memory effects remains in the deseasoned sequence. This reveals whether the empirically observed temporal heterogeneities can (or cannot) be explained by the intrinsic cyclic patterns, characterised in terms of the event rate. For example, one obtains the deseasoned inter-event time $\tau_i^*$ corresponding to the original inter-event time $\tau_i=t_i-t_{i-1}$ as
\begin{equation}
\tau_i^* \equiv t^*(t_i)-t^*(t_{i-1})=\int_{t_{i-1}}^{t_i} \rho(t')dt',
\end{equation}
and the deseasoned inter-event time distribution $P(\tau^*)$ can then be compared to the original inter-event time distribution $P(\tau)$. This method was applied to mobile phone call series~\cite{Jo2012Circadian}, as partly depicted in Fig.~\ref{fig:deseason}(b), where the inter-event time distributions for the original and deseasoned event sequences show almost the same shape for various values of $T_{\circlearrowleft}$. This indicates that there could be origins of human bursty dynamics other than the circadian and weekly cycles of humans. In order to quantitatively study the effects of deseasoning, the burstiness parameter $B$ has been measured for both the original and deseasoned mobile phone call series, yielding overall decreased yet positive values of $B$, which implies that the bursts remain after deseasoning. In addition, the memory coefficients $M_m$, bursty train size distributions $P_{\Delta t}(E)$, and autocorrelation function $A(t_d)$ can also be measured using the deseasoned event sequence $\{t^*(t_i)\}$ for comparison with the original ones.
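For readers who wish to apply the deseasoning method to their own data, a minimal sketch is given below. It estimates $\rho(t)$ on a binned grid within one period, integrates it piecewise to obtain $t^*(t)$, and compares the burstiness parameter $B$ before and after deseasoning; the synthetic input, the bin resolution, and the daily period are all hypothetical choices.
\begin{verbatim}
import numpy as np

def deseason(event_times, period, n_bins=100):
    """Map event times t onto deseasoned times t* = int_0^t rho(t')dt', with the
    event rate rho(t) estimated on n_bins phase bins within one period."""
    t = np.sort(np.asarray(event_times, dtype=float))
    counts, _ = np.histogram(np.mod(t, period), bins=n_bins, range=(0.0, period))
    rho = counts / counts.mean()                  # normalised so that <rho> = 1
    width = period / n_bins
    cum = np.concatenate(([0.0], np.cumsum(rho) * width))
    cycles = np.floor(t / period)
    phase = np.mod(t, period)
    i = np.minimum((phase / width).astype(int), n_bins - 1)
    return cycles * cum[-1] + cum[i] + rho[i] * (phase - i * width)

def burstiness(iets):
    s, m = iets.std(), iets.mean()
    return (s - m) / (s + m)

# Synthetic events with a daily modulation (thinned uniform times, period 86400 s).
rng = np.random.default_rng(2)
cand = np.sort(rng.uniform(0, 30 * 86400, 20_000))
times = cand[rng.random(cand.size) < 0.5 * (1 + np.sin(2 * np.pi * cand / 86400))]
t_star = deseason(times, 86400.0)
print("B original  :", round(burstiness(np.diff(times)), 3))
print("B deseasoned:", round(burstiness(np.diff(t_star)), 3))
\end{verbatim}
In this synthetic example the temporal heterogeneity comes entirely from the daily modulation, so $B$ is expected to drop towards zero after deseasoning, in contrast to the empirical call series discussed above where it remains clearly positive.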
It is straightforward to extend this method to aggregated time series at different levels of activity groups, including the whole population. For a set of individuals $\Lambda$, the number of events at time $t$ is denoted by
\begin{equation}
x_{\Lambda}(t)\equiv \sum_{i\in \Lambda} x_i(t),
\end{equation}
where $x_i(t)$ is the number of events of an individual $i$ at time $t$. Then, for a given period of $T_{\circlearrowleft}$, the event rate with $0\leq t <T_{\circlearrowleft}$ is defined as
\begin{equation}
\rho_{\Lambda}(t)\equiv \frac{T_{\circlearrowleft}}{X_{\Lambda}}\sum_{k=0}^{T/T_{\circlearrowleft}} x_{\Lambda}(t+kT_{\circlearrowleft}),\ X_{\Lambda}\equiv \int_0^{T} x_{\Lambda}(t)dt.
\end{equation}
Using this event rate for the actual set of individuals $\Lambda$, one can get the deseasoned time $t^*_{\Lambda}(t)$ as follows:
\begin{equation}
t^*_{\Lambda}(t)\equiv \int_0^t \rho_{\Lambda}(t')dt'.
\end{equation}
We remark that the fully deseasoned time series, i.e., for $T_{\circlearrowleft}=T$, corresponds to the time series represented in the ordinal time-frame, where real timings of events are replaced by the orders of events. Now if $T_{\circlearrowleft}=T$, we have the event rate for a node $i$ as $\rho_i(t) =\frac{T}{X_i}x_i(t)$ with $X_i$ denoting the total number of events of the node $i$. We assign the timing of the $k$th event between $i$ and $j$ by $t^{(ij)}_k$ and get the deseasoned inter-event time corresponding to $\tau^{(ij)}_k=t^{(ij)}_k-t^{(ij)}_{k-1}$ as
\begin{equation}
{\tau^*}^{(ij)}_k \equiv \frac{T}{X_i} \int_{t^{(ij)}_{k-1}}^{t^{(ij)}_k} x_i(t')dt' = \frac{T}{X_i}n^{(ij)}_k.
\end{equation}
Here $n^{(ij)}_k$ is the contextual ordinal inter-event time, i.e., the number of events of contexts other than $j$ between two consecutive events with the context $j$. Thus, the fully deseasoned real time-frame is simply translated into the ordinal time-frame. The characterisation of bursts in terms of the ordinal time-frame has also been studied in other contexts, e.g., in terms of activity clock~\cite{Gauvin2013Activity}, relative clock~\cite{Zhou2012Relative}, and ``proper time''~\cite{Formentin2014Hidden, Formentin2015New}. In these works, the elapsed time is counted in terms of the number of events instead of the real time.
\subsection{Remark on non-stationarity}
So far, the time series has been assumed to be stationary, either explicitly or implicitly. As stationarity by definition indicates symmetry under time translation, all non-Poissonian processes could be considered non-stationary, hence the various time series analysis methods mentioned above could not be applied to bursty temporal patterns. However, the definition of stationarity can be relaxed by allowing non-stationary behaviour only at some specific time scale: for example, human individuals can show a daily cycle in their temporal patterns, while they might keep their daily routines for several months or longer. Then, their temporal patterns can be considered stationary only at time scales that are longer than one day and shorter than several months. This relaxed definition of stationarity could still be misleading, given the fact that most bursty phenomena show a scale-free, hierarchical nature in terms of time scales; nevertheless, we can apply various time series analysis methods as long as the time series looks stationary at least at some specific time scales. In this sense, the deseasoning method or detrended fluctuation analysis and its variants can be useful for removing non-stationary temporal patterns from the original time series, hence allowing us to investigate the bursty nature of time series without being concerned with the non-stationarity issue. This is an important issue but it has been largely ignored in many works, except for some recent studies mostly in relation to dynamical processes on networks~\cite{Horvath2014Spreading, Holme2016Temporal}.
\chapter{Empirical findings in human bursty dynamics}\label{chapter:emp}
There are a number of natural phenomena that show complex structural and dynamical patterns as a result of self-organisation and adaptive response to the environment. Such fundamental characteristics are also found in social systems, in which the behaviour of a large number of interacting individuals induces complex and heterogeneous patterns at different organisational scales. Accordingly, a large body of empirical evidence of temporal inhomogeneities or bursty behaviour in human dynamics has been collected, mostly due to the recent development of information-communication technology (ICT) and a number of accessible large-scale digital datasets. In this Chapter, we provide a systematic introduction to empirical findings from diverse sources of data. We will conduct the discussion at two main levels of organisation, (i) at the level of individual activities and (ii) at the level of interaction-driven collective activities. The first category includes individual activities that do not necessarily involve direct interactions between individuals. This category also includes activities by individuals but collected at a population or system level, in which individuals do not explicitly coordinate or cooperate with others for their own actions. However, as there is no clear-cut distinction between these two types of activities due to the intrinsic sociality of humans, these categories must not be considered exclusive. We will present the empirical findings on interaction-driven collective activities mainly according to the interaction or communication channels. Finally, we will discuss other bursty patterns that are not covered by the above two categories, namely the bursts observed in financial activities, in human mobility patterns, and in the behavioural patterns of animals like monkeys or fruit flies.
\begin{table}[!t]
\caption{Empirical findings of individual activities. The first column lists the papers in which the observations were reported, the second column summarises the analysed dataset, and the last column provides information about the analysis results, mostly the statistics of inter-event times and waiting times. In the case of power-law distributions, $\alpha$ ($\alpha_w$) denotes the corresponding exponent of inter-event times (waiting times), with errors in parentheses whenever available. The exponent of the bursty train size distribution is denoted by $\beta$, while the decaying exponent of the autocorrelation function is denoted by $\gamma$.}
\label{table:individual}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Paxson \emph{et al.}~\cite{Paxson1995WideArea} & TCP connection packets from Bellcore, U.C.B. & $\alpha\approx 2$ \\ \hline
Kleban \emph{et al.}~\cite{Kleban2003Hierarchical} & Job submissions to supercomputers, Blue Mountain and Blue Pacific & stretched exponential $P(\tau)$ \\ \hline
Harder \emph{et al.}~\cite{Harder2006Correlated} & Print requests to the printer at Imperial College London & $\alpha=1.76$ for different thresholds of file size, $\alpha=1.3$ for individuals \\ \hline
Vazquez \emph{et al.}~\cite{Vazquez2006Modeling} & Library loans by the faculty at University of Notre Dame & $\alpha$ distributed around $1$ \\ \hline
Nakamura \emph{et al.}~\cite{Nakamura2007Universal} & Locomotor activity, e.g., resting periods, from 14 patients and 11 healthy control subjects & $\alpha=1.92(6)$ for controls, $1.72(11)$ for patients \\ \hline
Alfi \emph{et al.}~\cite{Alfi2007Conference, Alfi2009How} & Statphys23 registration statistics & logarithmic singularity up to the deadline \\ \hline
Coley \emph{et al.}~\cite{Coley2008Arm} & Inter-movement intervals of arms of human subjects & power-law $P(\tau)$ \\ \hline
Goh \emph{et al.}~\cite{Goh2008Burstiness} & Various datasets in human dynamics, texts, and cardiac rhythms & high $B$ and negligible $M$ for human activities \\ \hline
Baek \emph{et al.}~\cite{Baek2008Testing} & Linux command histories of six users & $\alpha\in [1.47,1.74]$ \\ \hline
Bohorquez \emph{et al.}~\cite{Bohorquez2009Common} & Conflicts from media, government and non-governmental organization, and academic studies & heterogeneous numbers of conflicts per day \\ \hline
Bogachev \emph{et al.}~\cite{Bogachev2009Occurrence} & Outgoing traffic of 3 HTTP servers: two Canadian universities and NASA Kennedy Space Center & stretched exponential $P(\tau)$ \\ \hline
Jo \emph{et al.}~\cite{Jo2012Timevarying} & Paper updating intervals in \url{arXiv.org} & $\alpha_w\in [0.76,1.16]$ depending on the number of authors \\ \hline
Mryglod \emph{et al.}~\cite{Mryglod2012Editorial} & Paper processing times in Physica A and others & log-normal $P(\tau_w)$ with power-law tails of $\alpha_w=1$ \\ \hline
Hartonen \emph{et al.}~\cite{Hartonen2013How} & Paper processing times in JSTAT and JHEP & no power-law in $P(\tau_w)$ \\ \hline
Lee \emph{et al.}~\cite{Lee2013Mobile} & WiFi connectivity of iPhone users in urban areas & $\alpha=1.63$ \\ \hline
Hasan \emph{et al.}~\cite{Hasan2013Spatiotemporal} & Stay times from smart card transaction dataset in London, UK & heavy-tailed $P(\tau)$ \\ \hline
Wang \emph{et al.}~\cite{Wang2015Temporal} & Emergency calls in a Chinese city & $\alpha\in [0.86,1.19]$, $\beta=2.21$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}[!t]
\center
\includegraphics[width=\columnwidth]{figs/fig_emp1.pdf}
\caption{Individual activities: Examples of inter-event time distribution $P(\tau)$ from the datasets for (a) job submissions to a printer (\emph{Source:} Adapted from Ref.~\cite{Harder2006Correlated} under Copyright $\textcopyright$ 2005 Elsevier B.V. All rights reserved.), (b) library loans made by a single individual (\emph{Source:} Adapted from Ref.~\cite{Vazquez2006Modeling} under Copyright (2006) by the American Physical Society.), (c) manuscripts submitted to Physica A (\emph{Source:} Adapted from Ref.~\cite{Mryglod2012Editorial}.), and (d) emergency calls in a Chinese city (\emph{Source:} Adapted from Ref.~\cite{Wang2015Temporal} under Copyright $\textcopyright$ 2015 Elsevier B.V. All rights reserved.). In all cases, heavy tail behaviour in $P(\tau)$ is observed, most of which have been fitted with the power-law form in the mentioned references.}
\label{fig:empirical_individual}
\end{figure}
\section{Individual activities}\label{subsect:individual}
We first overview the empirical findings of inhomogeneous temporal patterns or bursts in individual activities not necessarily involving direct interactions between individuals. We also include observations of bursts in individual activities at the population or system level. Such findings and observations range from everyday life to professional activities, including, e.g., job submissions to supercomputers~\cite{Kleban2003Hierarchical}, print requests to a printer~\cite{Harder2006Correlated}, and library loans by the faculty at a university~\cite{Vazquez2006Modeling}. Events like paper processing or updating times~\cite{Jo2012Timevarying, Mryglod2012Editorial, Hartonen2013How} and human arm movements~\cite{Nakamura2007Universal, Coley2008Arm} have also been analysed to show the existence of bursty behaviour. In Table~\ref{table:individual} we have summarised a number of empirical results on bursty behaviour, although it should not be considered an exhaustive list.
In most cases mentioned above, the distributions $P(\tau)$ and $P(\tau_w)$ have been reported to have a heavy or power-law tail. To demonstrate such cases a few examples of inter-event time distributions are presented in Fig.~\ref{fig:empirical_individual}. Whenever the distribution is described in terms of a power law, the power-law exponent is provided and its value turns out to be quite diverse, ranging from $0.7$ to $2$. It should be noted, however, that even within the same dataset the value of the power-law exponent can vary from one individual to another; in other words, the behaviour is described by a distribution of exponent values. This observation indicates that the power-law behaviour in human dynamics is rather sensitive to the details of the phenomenon in question, and hence it does not seem to support the perspective of universality classes in statistical physics. It has been argued that the large variance of individual characters may induce a heterogeneous inter-event time distribution at the population level~\cite{Jiang2016Twostate}. Researchers have also found other functional forms of $P(\tau)$ that match the empirical datasets, like stretched exponential~\cite{Kleban2003Hierarchical, Bogachev2009Occurrence} and log-normal ones~\cite{Mryglod2012Editorial}, which implies that different bursty phenomena may have different origins or follow different mechanisms.
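Since much of this diversity hinges on how such exponents are estimated, we recall one standard estimator, the continuous maximum-likelihood form $\hat{\alpha}=1+n/\sum_i\ln(\tau_i/\tau_{\min})$, in the sketch below; it uses a hand-picked cutoff and synthetic samples, and omits the cutoff selection and goodness-of-fit testing used in more complete fitting procedures, so it should be read as an illustration rather than a recommended pipeline.
\begin{verbatim}
import numpy as np

def mle_exponent(iets, tau_min):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(log(tau_i / tau_min)),
    applied to the inter-event times above the cutoff tau_min."""
    x = np.asarray(iets, dtype=float)
    x = x[x >= tau_min]
    return 1.0 + x.size / np.log(x / tau_min).sum()

# Sanity check on synthetic samples with a known (hypothetical) exponent.
rng = np.random.default_rng(3)
alpha_true = 1.7
tau = (1.0 - rng.random(100_000)) ** (-1.0 / (alpha_true - 1.0))
print("recovered exponent:", round(mle_exponent(tau, tau_min=1.0), 3))
\end{verbatim}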
Heavy-tailed $P(\tau)$ for individual activities may give some hints about the origin of the bursty dynamics of human individuals. For this reason one can ask the following question: can the bursty dynamics observed in individual activities be understood in terms of a ``purely'' intrinsic property of those individuals, or in terms of an interaction-driven extrinsic property? In other words, one can ask whether the bursty dynamics of individuals is the consequence of node burstiness or of link burstiness. As human beings are social, it is hard to say how much the behavioural datasets reflect purely individual actions, as compared to interaction-driven activities. This is an important yet unresolved issue for understanding the origin of bursts in human dynamics.
\section{Interaction-driven collective activities}\label{subsect:interaction}
Next we present some empirical findings on interaction-driven collective activities in several subcategories, mostly according to the communication channel used in the interaction. Among these cases face-to-face interaction is considered to be the most direct and natural way of communication or interaction between human individuals, as in this case people must be spatially close to each other and the communication takes place in real time. As for the other means of communication, nearest to face-to-face interactions are those based on real-time video and voice links, like Skype and Hangout, as they provide the feeling of closeness or even intimacy between the individuals, although they are not spatially close to each other. Then comes communication by phones, especially mobile ones in recent years, as the interaction still takes place in real time over the voice link, providing some feeling of closeness or intimacy, though the communication is location-independent. After mobile phone communication come services like traditional posted letters, emails, and text messages or SMSs, which are not means of real-time interaction. The recently introduced web-based messaging services, e.g., those offered by Social Networking Services (SNSs), have lately become very popular, especially among younger generations. It should be noted, however, that people nowadays interact with each other using many of these communication channels in parallel, simultaneously or intermittently. In Fig.~\ref{fig:empirical_interaction} we have collected a few examples of inter-event time distributions.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{figs/fig_emp2.pdf}
\caption{Interaction-driven collective activities: Examples of inter-event time distribution $P(\tau)$ from the datasets for (a) face-to-face interaction among high school students (\emph{Source:} Adapted from Ref.~\cite{Fournet2014Contact}.), (b) mobile phone calls in a European country (\emph{Source:} Adapted from Ref.~\cite{Karsai2011Small} under Copyright (2011) by the American Physical Society.), (c) email communications in a university (\emph{Source:} Adapted from Ref.~\cite{Barabasi2005Origin} by permission from Macmillan Publishers Ltd: Nature, 435:207--211, copyright (2005).), and (d) messages on an online forum (\emph{Source:} Adapted from Ref.~\cite{Panzarasa2015Emergence} under Copyright (2015) by the American Physical Society.). In all cases, heavy tails of $P(\tau)$ are observed, most of which have been fitted with power-law form in the mentioned references.}
\label{fig:empirical_interaction}
\end{figure}
\subsection{Face-to-face interactions}\label{subsubsect:face2face}
Collecting data on face-to-face interactions at large scale is challenging. However, today's communication technology and smart devices provide a solution, as these devices are able to communicate with each other, thus enabling the collection of face-to-face interaction data containing information on the proximity between individuals. Examples of this approach are the SocioPatterns (\url{www.sociopatterns.org}) and other similar projects~\cite{Obadia2015Detailed}, in which wearable sensors with Radio-Frequency Identification (RFID) are used to collect datasets of individual contacts in real environments, such as schools, museums, hospitals, and academic conferences~\cite{Cattuto2010Dynamics, Stehle2011Simulation, Stehle2011HighResolution, Isella2011Whats, Vanhems2013Estimating}. Other studies have used Bluetooth devices~\cite{Hui2005Pocket}, infrared modules~\cite{Takaguchi2011Predictability}, or motes~\cite{Hashemian2010Flunet} that can also communicate with each other. Despite the advantage of directly measuring the proximity between individuals, the number of nodes in those datasets is relatively small, i.e., of the order of hundreds. Thus, large-scale conclusions may not be deduced from this approach.
A number of empirical findings concerning face-to-face interaction are summarised in Table~\ref{table:face2face}. In some analyses, it was found that the inter-event times or inter-contact times are power-law distributed with exponent between $1.4$ and $1.6$~\cite{Hui2005Pocket, Takaguchi2011Predictability, Fournet2014Contact}. The distributions of contact times or durations have been observed to show heavy tails in their distributions~\cite{Hui2005Pocket, Cattuto2010Dynamics, Stehle2011Simulation, Stehle2011HighResolution, Isella2011Whats, Fournet2014Contact}. Since the ``typical'' timescale of contact durations is much shorter than that of inter-contact times, the contact durations can be ignored in the analyses of bursty patterns. Note that the contact durations could be affected by the inter-contact times just before or after the contacts, of which the latter resembles the recovery time of neurons after firing.
\begin{table}[!t]
\caption{Empirical findings of interaction-driven activities based on face-to-face interaction. The notations are the same as in Table~\ref{table:individual}.}
\label{table:face2face}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Hui \emph{et al.}~\cite{Hui2005Pocket} & Face-to-face interaction logs in an IEEE conference & $\alpha=1.4$ \\ \hline
Takaguchi \emph{et al.}~\cite{Takaguchi2011Predictability} & Face-to-face interaction logs in offices in Japan & $\alpha=1.52$ \\ \hline
Starnini \emph{et al.}~\cite{Starnini2012Random, Starnini2013Modeling}, Zhao \emph{et al.}~\cite{Zhao2011Social} & Face-to-face proximity datasets using RFID in the frame of SocioPatterns project & heavy-tailed $P(\tau)$\\ \hline
Sun \emph{et al.}~\cite{Sun2013Understanding} & Users' encounters in public transit transaction in Singapore & daily peaks in $P(\tau)$ \\ \hline
Fournet \emph{et al.}~\cite{Fournet2014Contact} & Face-to-face encounters between high school students in France & $\alpha=1.57$ \\ \hline
\end{tabular}
\end{table}
\subsection{Mobile phone-based interactions}\label{subsubsect:mobilephone}
Recently, mobile phones or handsets have been utilised to accurately measure or sense human behaviour. These personal devices, being equipped with a variety of sensors like GPS and WiFi, are carried around by the users every day and all day long, thus they are capable of collecting precise information about the communications, whereabouts, and online activities of their owners. Moreover, since the number of users or phone numbers in some datasets reaches several millions or even more~\cite{Onnela2007Structure, Miritello2011Dynamical, Aoki2016Inputoutput}, such datasets provide ways to overcome the issues due to small sample sizes.
The reliability of datasets collected from mobile phones was tested in a series of studies conducted within the frame of the Reality Mining project~\cite{Eagle2006Reality, Pentland2009Reality, Eagle2009Inferring, Aharony2011Social}. It was shown that the behavioural data are at least comparable to self-report survey data in terms of the friendship network, and that they even capture information missing from the self-reports~\cite{Eagle2009Inferring}. Similar approaches were taken in the OtaSizzle project at Aalto University~\cite{Mantyla2008OtaSizzle, Karikoski2010Measuring, Jo2012Spatiotemporal} and the Copenhagen Networks Study~\cite{Stopczynski2014Measuring, Sekara2016Fundamental, Sapiezynski2016Inferring}, where multiple kinds of individual activities and interactions were recorded simultaneously but from a relatively small group of volunteers. For other studies using mobile phone datasets, see Ref.~\cite{Blondel2015Survey} and references therein.
\begin{table}[!t]
\caption{Empirical findings for interaction-driven activities using mobile phones, i.e., mobile phone calls and Short Message Services (SMSs). The notations are the same as in Table~\ref{table:individual}.}
\label{table:mobilePhone}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Candia \emph{et al.}~\cite{Candia2008Uncovering} & Mobile phone calls (source not mentioned) & $\alpha=0.9(1)$ \\ \hline
Hong \emph{et al.}~\cite{Hong2009HeavyTailed} & SMS records of volunteers in the university & $\alpha\in [1.52,2.09]$ \\ \hline
Wu \emph{et al.}~\cite{Wu2010Evidence} & SMSs of individual users from a mobile phone operator & bimodal $P(\tau)$ with power-law regimes, where $\alpha$ ($\alpha_w$) centred at $1.5$ ($2$) \\ \hline
Miritello \emph{et al.}~\cite{Miritello2011Dynamical} & Mobile phone calls from a European operator in a single country & heavy-tailed $P(\tau_w)$ \\ \hline
Zhao \emph{et al.}~\cite{Zhao2011Empirical} & SMSs in China & $\alpha\in [1.1,1.3]$ depending on activity level \\ \hline
Karsai \emph{et al.}~\cite{Karsai2011Small, Karsai2012Correlated, Karsai2012Universal} & Mobile phone calls and SMSs from a European operator & $\alpha=0.7$, $\beta=4.1$, $\gamma=0.5$ for calls and $\alpha=1.0$, $\beta=3.9$, $\gamma=0.6$ for SMSs \\ \hline
Kivela \emph{et al.}~\cite{Kivela2012Multiscale} & Mobile phone calls from a European operator & heavy-tailed $P(\tau)$ and $P(\tau_w)$ \\ \hline
Jo \emph{et al.}~\cite{Jo2012Circadian} & Mobile phone calls and SMSs from a European operator & heavy-tailed $P(\tau)$ with daily and weekly peaks \\ \hline
Jiang \emph{et al.}~\cite{Jiang2013Calling, Jiang2016Twostate} & Mobile phone call dataset from a Chinese cell phone operator & $\alpha=0.873$ for all users, stretched exponential or $\alpha \in [1.5,2.6]$ for individuals, exponential $P_{\Delta t}(E)$ for a majority of individuals \\ \hline
Schneider \emph{et al.}~\cite{Schneider2013Unravelling} & Surveys and mobile phone data from Paris and Chicago & heavy-tailed $P(\tau)$ for home/work, $\alpha=0.49$ for other locations \\ \hline
Aoki \emph{et al.}~\cite{Aoki2016Inputoutput} & Mobile phone calls and SMSs from a European cellphone service provider & $\alpha=1.176$ (calls) and $1.388$ (SMSs) \\ \hline
\end{tabular}
\end{table}
Bursty dynamics has been observed in both mobile phone calls and Short Message Services (SMSs). In a number of empirical results, one finds heavy-tailed distributions $P(\tau)$, in particular with a power-law scaling regime. The values of the power-law exponent $\alpha$ turn out to depend on the communication channel, i.e., whether it concerns calls or SMSs. We found that for calls the exponent value of $\alpha\approx 0.7$~\cite{Karsai2012Universal, Karsai2012Correlated} ($\approx 1.2$ in Ref.~\cite{Aoki2016Inputoutput}) tends to be smaller than that observed in SMS sequences with $\alpha=1.0$~\cite{Karsai2012Universal, Karsai2012Correlated} ($\approx 1.4$ in Ref.~\cite{Aoki2016Inputoutput}). It was also found, as shown in Fig.~\ref{fig:empirical_interaction}(b), that the $P(\tau)$s for different activity groups collapse onto one curve when normalised by the average inter-event time of each group, which implies a strong similarity in human behaviour across the different activity levels.
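The rescaling behind such a collapse is simple to reproduce: the inter-event times of each group are divided by that group's own average before the distributions are compared. The toy check below uses two synthetic groups that differ only in their time scale; the distributions and sample sizes are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
# Two hypothetical activity groups with the same IET shape but different scales.
slow = 10.0 * rng.pareto(1.8, 50_000)   # low-activity group, long inter-event times
fast = 1.0 * rng.pareto(1.8, 50_000)    # high-activity group, short inter-event times

# Rescale each group by its own average and compare a few quantiles.
q = [0.25, 0.5, 0.75, 0.95, 0.99]
print("slow group:", np.round(np.quantile(slow / slow.mean(), q), 2))
print("fast group:", np.round(np.quantile(fast / fast.mean(), q), 2))
# Nearly identical rows indicate a collapse of P(tau) after rescaling by <tau>.
\end{verbatim}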
From another mobile phone dataset, Jiang~\emph{et al.} found that although the aggregate $P(\tau)$ follows a power law, a majority of individual users, i.e., more than 73\%, show Weibull distributions of inter-event times~\cite{Jiang2013Calling}. For the other users in the ``power-law'' group, the values of $\alpha$ varied from $1.5$ to $2.6$. In addition, bimodal distributions of inter-event times have been observed in SMS datasets~\cite{Wu2010Evidence}: the distributions are power-law for $\tau<\tau_c$, and exponential for $\tau>\tau_c$. This functional form is different from a power-law distribution with an exponential cutoff, hence implying that different mechanisms act in the background. For the power-law regime, the values of $\alpha$ obtained at the individual level are distributed around $1.5$. As shown in Refs.~\cite{Karsai2012Universal, Karsai2012Correlated}, long-range memory effects have also been observed in terms of heavy-tailed burst size distributions and power-law decaying autocorrelation functions both in calls and SMSs. We note that such long-range memory effects have been investigated only recently and phenomenologically. For a more fundamental understanding, one might need to obtain more information about the mobile phone users, as typically done in sociology, e.g., in Ref.~\cite{Rettie2009Mobile}.
\begin{table}[!t]
\caption{Empirical findings of interaction-driven activities by letters and emails. The notations are the same as in Table~\ref{table:individual}.}
\label{table:letteremail}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Oliveira \emph{et al.}~\cite{Oliveira2005Human}, Vazquez \emph{et al.}~\cite{Vazquez2006Modeling} & Letter correspondence of Darwin, Einstein, and Freud & $\alpha_w \approx 1.5$ \\ \hline
Li \emph{et al.}~\cite{Li2008Empirical} & Letter correspondence of a Chinese scientist & $\alpha=\alpha_w=2.1(1)$ \\ \hline
Malmgren \emph{et al.}~\cite{Malmgren2009Universality} & Letter correspondence of 16 writers, performers, politicians, and scientists & heavy-tailed $P(\tau)$ \\ \hline
Formentin \emph{et al.}~\cite{Formentin2014Hidden, Formentin2015New} & Letters, emails, SMSs from diverse sources & $\alpha_w \approx 1.5$ \\ \hline
Barab\'asi~\cite{Barabasi2005Origin}, Eckmann \emph{et al.}~\cite{Eckmann2004Entropy}, Johansen~\cite{Johansen2004Probing} & Emails in a university (Universite de Geneve or Weizmann Institute of Science) & $\alpha=1$, $\alpha_w=1$ \\ \hline
Malmgren \emph{et al.}~\cite{Malmgren2008Poissonian} & Email dataset as in~\cite{Barabasi2005Origin} & heavy-tailed $P(\tau)$ \\ \hline
Gao \emph{et al.}~\cite{Gao2011Network} & Individuals in Enron email dataset & $\alpha\in [0.8,1.8]$ \\ \hline
Iribarren \emph{et al.}~\cite{Iribarren2009Impact, Iribarren2011Branching} & Campaign propagation dataset by emails & heavy-tailed $P(\tau_w)$ \\ \hline
\end{tabular}
\end{table}
\subsection{Communication by posted letters and emails}\label{subsubsect:letter}
In contrast to the face-to-face interaction and mobile phone call communication, communication by posted letters, electronic mails or emails, and text messages or SMSs does not take place synchronously in real time and may depend on the location and distance between the senders and receivers. Hence their interaction patterns could be very different from those in face-to-face and mobile phone call communications. As we already discussed SMSs in the previous Subsection, here we consider posted letters and emails.
Traditionally, posted letters were one of the most important communication channels between people outside their daily proximity, before various ICT-based communication channels like emails and mobile phones emerged. A few such datasets of letters exchanged by historic figures, such as Darwin, Einstein, and Freud, have been analysed and found to show heavy-tailed distributions of waiting times~\cite{Oliveira2005Human}. In these cases the waiting time distributions have often been fitted using power-law forms~\cite{Oliveira2005Human, Li2008Empirical}, while alternative mechanisms excluding power-law forms have been studied in terms of cascading non-homogeneous Poisson processes~\cite{Malmgren2009Universality}. See the summary of empirical findings in Table~\ref{table:letteremail}.
Recently, however, the usage of posted letters has dramatically dropped as people use emails for various purposes and contexts, e.g., for communication between colleagues or friends. This development has provided an unprecedented amount of email data and a plethora of email datasets rich in detailed and useful information on social interactions and temporal patterns for researchers to investigate. Many of these datasets have been analysed to investigate the origin of bursts in human dynamics, which has led to some debates on the issue. In the beginning, Barab\'asi claimed that the inter-event time and waiting time distributions of email users show power-law tails as $P(\tau)\sim\tau^{-\alpha}$ with $\alpha=1$~\cite{Barabasi2005Origin}. Similar analyses using the same email dataset were previously performed in Refs.~\cite{Eckmann2004Entropy,Johansen2004Probing}. Since then, there have been debates between different research groups about the origin of bursts~\cite{Stouffer2005Comment, Barabasi2005Reply, Stouffer2006Lognormal}. Malmgren~\emph{et al.} suggested a Poissonian explanation for heavy tails in the email communication patterns~\cite{Malmgren2008Poissonian}, arguing that the bursts are the consequence of daily and weekly cycles of humans with cascading behaviour whenever an email session is initiated. Later they also argued about the universality in human activity~\cite{Malmgren2009Universality}. Despite these debates, many issues were left unresolved, such as how cyclic patterns intrinsic to human behaviour interplay with other human factors like task execution. To resolve this, a deseasoning method was applied to the mobile phone calls and SMSs from a European operator, leading to the conclusion that the burstiness is robust with respect to the deseasoning of circadian and weekly cycles~\cite{Jo2012Circadian}. Here we should remark that the bursty dynamics observed in one communication channel, e.g., emails, could be driven by mechanisms different from those observed in other communication channels, e.g., mobile phone calls and SMSs. Thus one needs to be careful whenever translating conclusions from one dataset to another.
\begin{table}[!t]
\caption{Empirical findings of interaction-driven and some individual activities using web services (part I). The notations are the same as in Table~\ref{table:individual}.}
\label{table:web-based1}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Henderson \emph{et al.}~\cite{Henderson2001Modelling} & Logging of users in networked games, Quake and Half-Life & $\alpha=2.15$ \\ \hline
Dewes \emph{et al.}~\cite{Dewes2003Analysis} & Web-chat messages in the University of Saarland & power-law $P(\tau)$ \\ \hline
Vazquez \emph{et al.}~\cite{Vazquez2006Modeling}, Dezs\H{o} \emph{et al.}~\cite{Dezso2006Dynamics} & Web browsing in Hungarian news and entertainment portal, \url{www.origo.hu} & $\alpha$ distributed around $\approx 1.1$ \\ \hline
Kujawski \emph{et al.}~\cite{Kujawski2007Growing} & Online forums and news groups in Poland & $\alpha=1.25$ for a forum, $1.33$ for a news group \\ \hline
Goncalves \emph{et al.}~\cite{Goncalves2008Human} & Logs of individuals to the web server at Emory University & $\alpha=1$ for one URL, $\alpha=1.25$ for all pages \\ \hline
Zhou \emph{et al.}~\cite{Zhou2008Role} & Rating by users in Netflix & $\alpha=2.08$ for the whole, $\in [1.5,2.5]$ depending on activity level \\ \hline
Hu \emph{et al.}~\cite{Hu2008Empirical} & Online music service in a Chinese university & heavy-tailed $P(\tau)$ \\ \hline
Crane \emph{et al.}~\cite{Crane2008Robust} & YouTube video views & time series into 4 classes: exogenous/endogenous critical/subcritical \\ \hline
Altmann \emph{et al.}~\cite{Altmann2009Beyond} & Frequent words in USENET discussion groups & stretched exponential $P(\tau)$ \\ \hline
Rybski \emph{et al.}~\cite{Rybski2009Scaling} & Messages in online community of men having sex with men, messages between teenagers & power-law decaying $A(t_d)$ and Hurst exponent $>0.5$ \\ \hline
Radicchi~\cite{Radicchi2009Human} & Feedback messages in Ebay and queries in America On Line & $\alpha=1.9$ for both datasets \\ \hline
Radicchi~\cite{Radicchi2009Human} & Logging to English Wikipedia & $\alpha=1.2$ for the whole, $\in [1.1,2.3]$ depending on activity level \\ \hline
Rocha \emph{et al.}~\cite{Rocha2010Information} & Online posts by buyers and sellers in the prostitution network & $\alpha=1.49(4)$ for sellers, $1.5(1)$ for buyers \\ \hline
Ratkiewicz \emph{et al.}~\cite{Ratkiewicz2010Characterizing} & Two traffic datasets of Wikipedia & $\alpha=0.8$ (events if $\frac{\Delta k}{k}>1$) \\ \hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Empirical findings of interaction-driven and some individual activities using web services (part II). The notations are the same as in Table~\ref{table:individual}.}
\label{table:web-based2}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Wang \emph{et al.}~\cite{Wang2011Heterogenous} & Edits of articles in Chinese Wikipedia and blog posting in website of Nanjing University & $\alpha$ depending on activity level \\ \hline
Guo \emph{et al.}~\cite{Guo2011Weblog} & Logging of bloggers at \url{sciencenet.cn} & $\alpha\in [0.2,0.5]$\\ \hline
Szell \emph{et al.}~\cite{Szell2012Understanding} & Intervals between jumps in massive multiplayer online game, Pardus & $\alpha=2.2$ \\ \hline
Rybski \emph{et al.}~\cite{Rybski2012Communication} & Messages in an online community POK & $\alpha=1.5$ in days, $\alpha=2.25$ in seconds \\ \hline
Jo \emph{et al.}~\cite{Jo2012Spatiotemporal} & Web domain visits by all users in OtaSizzle project & heavy-tailed $P(\tau)$ \\ \hline
Yan \emph{et al.}~\cite{Yan2012Human, Yan2013Social} & Messages in Chinese microblog & $\alpha=1.231$ for one user, $1.323$ for others \\ \hline
Yasseri \emph{et al.}~\cite{Yasseri2012Dynamics, Yasseri2012Circadian} & Edits on $20$ highly disputed articles of Wikipedia & daily patterns, $\alpha=0.97$, $\gamma=0.56$ \\ \hline
Garas \emph{et al.}~\cite{Garas2012Emotional} & Posts in Internet Relay Chat (IRC) & $\alpha=1.53$ \\ \hline
Zhou \emph{et al.}~\cite{Zhou2012Relative} & Datasets from AOL, Delicious, SMS, and Twitter & $\alpha=1.31$, $1.12$, $1.33$, $1.05$ for each dataset \\ \hline
Zhao \emph{et al.}~\cite{Zhao2012Empirical} & Netflix, MovieLens, Delicious, Ebay, FriendFeed, and Twitter & $\alpha\in [1.17,2.15]$ \\ \hline
Gaito \emph{et al.}~\cite{gaito2012bursty} & Link creation in Renren online social networks & $\alpha$ broadly distributed around $1$ \\ \hline
Mathiesen \emph{et al.}~\cite{Mathiesen2013Excitable} & Tweets mentioning brand names & $1/f$ noise \\ \hline
Kikas \emph{et al.}~\cite{Kikas2013Bursty} & Social link creation and removal in Skype & $\alpha\approx 0.85$ \\ \hline
Zhao \emph{et al.}~\cite{Zhao2013Emergence} & E-commerce (Douban and Taobao), and MPR & $\alpha\in[1.41,2.04]$ for individuals \\ \hline
Karimi \emph{et al.}~\cite{Karimi2014Structural} & Posts and messages in Sweden's online movie community & Broad $P(\tau)$ \\ \hline
Panzarasa \emph{et al.}~\cite{Panzarasa2015Emergence} & Messages on an online forum at the University of California, Irvine & $\alpha=1.53(11)$, $0.71(4)$, $1.87(5)$ for different scaling regimes at the user level\\ \hline
Kwon \emph{et al.}~\cite{Kwon2016Double} & Edits on $418$ featured articles in English Wikipedia & double power-law with $\alpha=1$ for small $\tau$ and $2$ for large $\tau$ \\ \hline
Zhang \emph{et al.}~\cite{Zhang2016Multiscale} & Chatting messages at Tencent QQ in China & $\alpha\in [1.3,1.5]$ \\ \hline
\end{tabular}
\end{table}
\subsection{Web-based activities and social interactions}\label{subsubsect:web}
Since the World Wide Web (WWW) was invented by Tim Berners-Lee in 1989~\cite{BernersLee2000Weaving}, it has grown enormously over the past few decades, now encompassing over one billion websites~\cite{InternetLiveStatsTotal}. Nowadays it is not just a network made of hyperlinks between web pages, but it functions as the platform for e-commerce, online forums~\cite{Ugander2011Anatomy, Eom2015Tailscope}, and SNSs like Twitter~\cite{Kwak2010What} and Facebook~\cite{Ugander2011Anatomy}. More recently, websites are accessible not only from desktop computers but also from various mobile devices including mobile phones and tablets. In this sense, web-based datasets can be considered to reflect well human behaviour in terms of information acquisition, entertainment, and the maintenance of relationships, despite the fact that the available datasets typically reflect only some aspects of reality. As all the interactions on the web can in principle be recorded, such datasets, along with those from mobile phones, opened a new avenue for social sciences. The analysis of the collected large data corpus called for methodologies borrowed from other disciplines like computational and even physical sciences. This is even more true for modelling the possible mechanisms underlying the structure of the system and its dynamics~\cite{Lazer2009Computational}.
In Tables~\ref{table:web-based1} and~\ref{table:web-based2}, we present a number of empirical findings for bursts in various web-based activities including several individual activities. Here the interaction between individual users could be message exchanges~\cite{Dewes2003Analysis, Rybski2009Scaling, Radicchi2009Human, Rybski2012Communication, Yan2012Human, Garas2012Emotional, Karimi2014Structural, Zhang2016Multiscale} and discussions in forums, news groups, and Internet Relay Chat (IRC) channels~\cite{Kujawski2007Growing, Rocha2010Information, Garas2012Emotional, Panzarasa2015Emergence, Zhang2016Multiscale}. Individual activities include logging actions to online games~\cite{Henderson2001Modelling}, web servers at universities~\cite{Goncalves2008Human}, Wikipedia~\cite{Radicchi2009Human}, and blogs~\cite{Vazquez2006Modeling, Dezso2006Dynamics}, as well as online queries~\cite{Radicchi2009Human}, edits of articles on the web~\cite{Wang2011Heterogenous, Yasseri2012Dynamics, Kwon2016Double}, and jumps in the online game~\cite{Szell2012Understanding} to name a few.
On the basis of these studies we observe that the values of the power-law exponent of inter-event time distributions are very diverse, i.e., ranging from $0.2$ to $2.5$. It turns out that the power-law behaviour depends on the activity level of users or on the observation scale. For example, Zhou~\emph{et al.} showed by analysing the rating patterns in Netflix that the group of more active users shows a larger value of $\alpha$, i.e., less bursty temporal patterns~\cite{Zhou2008Role}. More active users may have smaller average inter-event times, which does not necessarily lead to larger values of $\alpha$. Thus, this dependence of power-law behaviour on the activity level of users must be investigated rigorously. For this, the interaction or network structure of individual users can be relevant in understanding the complex bursty dynamics. The effect of the observation scale on the power-law behaviour, from individuals to the groups they form, and to the whole population they belong to, was observed by Panzarasa and Bonaventura~\cite{Panzarasa2015Emergence}. By analysing messages posted in an online forum at a university, they found that the inter-event time distribution at the individual level shows three scaling regimes, i.e., for short, intermediate, and long inter-event times, as depicted in Fig.~\ref{fig:empirical_interaction}(d). The scaling regime for the short inter-event times coincides with that of the inter-event time distribution at the population level. As for the inter-event time distribution at the group level, they argued that the group dynamics is governed by a nontrivial reciprocal mechanism between users. In addition to the inter-event time distributions, the long-range memory effects have also been measured in terms of the autocorrelation function, power spectrum, or Hurst exponent, e.g., in Refs.~\cite{Rybski2009Scaling, Rybski2012Communication, Yasseri2012Dynamics, Jo2012Circadian, Panzarasa2015Emergence}. Finally, as for the interplay between bursty dynamics and network evolution, Gaito \emph{et al.}~\cite{gaito2012bursty} identified bursty link creation patterns in online social networks and characterised the individual link creation dynamics in four different phases (acceleration, deceleration, cruising, and inactive). In another work Kikas~\emph{et al.}~\cite{Kikas2013Bursty} analysed the correlation between service adoption and bursty link creation of Skype users, while Myers and Leskovec~\cite{Myers2014Bursty} analysed Twitter datasets to find that information diffusion also creates sudden bursts of new links that in turn affect the users' local network structure.
\begin{table}[!t]
\caption{Empirical findings of financial activities. The notations are the same as in Table~\ref{table:individual}.}
\label{table:economic}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Mainardi \emph{et al.}~\cite{Mainardi2000Fractional} & Financial market datasets for BUND future & stretched exponential $P(\tau)$ for small $\tau$, power-law $P(\tau)$ for large $\tau$ \\ \hline
Raberto \emph{et al.}~\cite{Raberto2002Waitingtimes} & GE stock prices & stretched exponential $P(\tau)$ \\ \hline
Masoliver \emph{et al.}~\cite{Masoliver2003Continuoustime} & US dollar-Deutsche mark future exchange & $\alpha=3.47$ \\ \hline
Vazquez \emph{et al.}~\cite{Vazquez2006Modeling} & Trade transactions by a stock broker at Central European bank & $\alpha=1.3$ \\ \hline
Wang \emph{et al.}~\cite{Wang2010Human} & Time intervals between contracts and payments of a logistics company in Shanghai & $\alpha_w=2.5$ \\ \hline
\end{tabular}
\end{table}
\section{Other bursty patterns}\label{sect:others}
As we remarked in the beginning of this Chapter, we primarily concentrate on direct observations of bursty phenomena in human dynamics. Yet we would like to briefly mention some related sets of observations, which may not directly concern human activities or temporal behaviour, but certainly have relevance for the scope of bursty human dynamics.
\subsection{Financial activities}\label{subsect:econo}
We present the bursty patterns in financial interactions in Table~\ref{table:economic}. Examples include financial trades in markets for futures, stocks, and foreign exchange. In these cases, the events are mostly defined by transactions, implying that the inter-event time measures the time interval between two consecutive transactions. It has been found that the inter-event time distributions are heavy-tailed. In cases with power-law inter-event time distributions, the value of the power-law exponent was found to range from $1.3$ to $3.47$~\cite{Vazquez2006Modeling, Masoliver2003Continuoustime}.
The inhomogeneous temporal patterns in economic and financial systems have been extensively studied but mostly from the macroscopic perspective~\cite{Mantegna2007Introduction}. For the microscopic approach, one can adopt the economic perspective in which time is considered as a tradable good or cost~\cite{Nichols1971Discrimination, Barzel1974Theory}. Based on this concept, one can discuss the optimal waiting time of agents when they must wait for a service or goods, for example, as modelled in Ref.~\cite{Jo2012Optimized}. Such agents might be driven by objectives like maximising profit or utility, which may lead to bursty patterns different from those of other human activities like communications. Economic constraints like the limited time resource can also account for the bursts in communications. These issues can be investigated for a better understanding of the origin of bursts in human dynamics.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{figs/fig_emp3.pdf}
\caption{Human mobility patterns: An example of displacement distribution $P(r)$ (a) and inter-event time distribution $P(\tau)$ (b) from the mobility dataset inferred from the circulation of bank notes in the United States of America. Both distributions are described by power-law decaying behaviour. (\emph{Source:} Adapted from Ref.~\cite{Brockmann2006Scaling} by permission from Macmillan Publishers Ltd: Nature, 439:462--465, copyright (2006).)}
\label{fig:empirical_mobility}
\end{figure}
\subsection{Human mobility}\label{subsect:mobility}
An important aspect of human dynamics is mobility in geographical space as well as in other, more abstract spaces. Displacements are typically driven by everyday routines such as going to work, returning home, or going shopping, or, on a larger spatial scale, by occasional migration to another city or country. Such commuting and travel patterns emerge as a multiscale spatiotemporal phenomenon, which may exhibit bursty patterns in space and/or time. Here events indicate individual movements, thus each event may be described by a time and a distance $r$ of the individual's displacement. One common observation is that the distribution of displacements of individuals follows a power law as:
\begin{equation}
P(r)\sim r^{-\mu},
\end{equation}
sometimes with an exponential cutoff~\cite{Brockmann2006Scaling, Gonzalez2008Understanding, Song2010Modelling, Asgari2013Survey}. Some values of $\mu$ are summarised in Table~\ref{table:mobility}; note that evidence for exponentially distributed displacements has also been found in other datasets~\cite{Liang2012Scaling, Kang2012Intraurban}. Spatial dynamics with power-law distributed displacements is commonly called a L\'evy flight and was found to characterise the mobility of humans~\cite{Gonzalez2008Understanding, Brockmann2006Scaling} and the foraging of animals~\cite{Viswanathan1999Optimizing} in geographical space, and even in mental space as well~\cite{Baronchelli2013Levy}.
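To illustrate how such L\'evy-flight-like displacements can be generated, the following minimal Python sketch samples displacements from a power-law distribution $P(r)\propto r^{-\mu}$ above a lower cutoff $r_{\min}$ by inverse-transform sampling; the function name, the cutoff, and the default parameter values are illustrative assumptions and are not taken from any of the cited studies.
\begin{verbatim}
import numpy as np

def sample_displacements(mu=1.59, r_min=1.0, size=10000, seed=0):
    """Sample displacements r >= r_min from P(r) ~ r**(-mu) by inverse transform."""
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    # For P(r) = (mu - 1) * r_min**(mu - 1) * r**(-mu), the inverse CDF is
    # r = r_min * (1 - u)**(-1 / (mu - 1)).
    return r_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

# Example: exponent close to the bank-note value, mu ~ 1.59
r = sample_displacements()
print(np.median(r), r.max())
\end{verbatim}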
\begin{table}[!ht]
\caption{Empirical findings of human mobility patterns. In case when the displacement distribution $P(r)$ is a power law, $\mu$ denotes the power-law exponent characterising the displacement distribution. Other notations are the same as in Table~\ref{table:individual}.}
\label{table:mobility}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Brockmann \emph{et al.}~\cite{Brockmann2006Scaling} & Circulation of bank notes in the United States of America & $\mu=1.59(2)$ and $\alpha=1.60(3)$ \\ \hline
Gonzalez \emph{et al.}~\cite{Gonzalez2008Understanding} & Mobile phone call dataset & $\mu=1.75(15)$ \\ \hline
Jiang \emph{et al.}~\cite{Jiang2009Characterizing} & GPS data of taxis' positions in four cities/towns in Sweden & $\mu=2.5$ for intracity, $4.6$ for intercity \\ \hline
Song \emph{et al.}~\cite{Song2010Modelling} & Mobile phone call dataset and users of a location-based service & $\mu=1.55(5)$ and $\alpha=1.8(1)$ \\ \hline
Kang \emph{et al.}~\cite{Kang2012Intraurban} & Mobile phone dataset from 8 Chinese cities & exponential $P(r)$ \\ \hline
Liang \emph{et al.}~\cite{Liang2012Scaling} & GPS datasets of taxis in Beijing & exponential $P(r)$, $\alpha\in [0.5,2.5]$ for individual taxis \\ \hline
Yan \emph{et al.}~\cite{Yan2013Diversity} & Travel diaries of hundreds of volunteers & $\mu=1.05$ \\ \hline
\end{tabular}
\end{table}
Individual mobility patterns can also be characterised by the radius of gyration $r_g$, measuring how far an individual trajectory is from its centre of mass. For this analysis, the individual trajectory can be described by a sequence of locations, $\{\vec r_1,\cdots, \vec r_n\}$, to calculate the radius of gyration as follows:
\begin{equation}
r_g \equiv \sqrt{\frac{1}{n}\sum_{i=1}^n |\vec r_i - \vec r_{_{\textrm{CM}}}|^2},
\end{equation}
where the centre of mass of the trajectory is defined as
\begin{equation}
\vec r_{_{\textrm{CM}}}\equiv \frac{1}{n}\sum_{i=1}^n \vec r_i.
\end{equation}
The distribution of $r_g$ is found to decay as a power law~\cite{Gonzalez2008Understanding}, providing further evidence for heterogeneous mobility patterns of humans.
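As an illustration of the definitions above, the following short Python sketch computes the radius of gyration of a trajectory given as a sequence of two-dimensional locations; the synthetic trajectory in the example is purely illustrative.
\begin{verbatim}
import numpy as np

def radius_of_gyration(positions):
    """Radius of gyration of a trajectory given as an (n, 2) array of locations."""
    r = np.asarray(positions, dtype=float)
    r_cm = r.mean(axis=0)                      # centre of mass of the trajectory
    return np.sqrt(np.mean(np.sum((r - r_cm) ** 2, axis=1)))

# Example with a short synthetic trajectory (coordinates in arbitrary units)
trajectory = [(0.0, 0.0), (1.0, 0.5), (0.2, 1.1), (3.0, 2.5)]
print(radius_of_gyration(trajectory))
\end{verbatim}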
Recently, the trajectory or sequence of locations can be measured with high temporal resolution, enabling one to analyse such event sequences using the methods introduced in the previous Chapter. However, there are only a few empirical results for inter-event time distributions. As each event denotes a displacement, the inter-event time indicates the staying time in a location or the time interval between two consecutive displacements. In all these cases heavy-tailed distributions $P(\tau)$ were found, some of them with power-law tails. For such cases the estimated values of the $\alpha$ exponents are presented in Table~\ref{table:mobility}. These results, together with the observations on heterogeneous displacements, imply that human mobility is bursty both in time and in space.
\begin{table}[!h]
\caption{Empirical findings of bursty patterns of animals. The notations are the same as in Table~\ref{table:mobility}.}
\label{table:animal}
\begin{tabular}{m{3.5cm} | m{6.1cm} | m{4.7cm}}
\hline
Reference & Dataset & Finding \\
\hline
Sorribes \emph{et al.}~\cite{Sorribes2011Origin} & Walking activities of flies, \emph{Drosophila melanogaster} & Weibull distribution of $P(\tau)$ \\ \hline
Boyer \emph{et al.}~\cite{Boyer2012Nonrandom} & Movements of capuchin monkeys & $\mu=2.7$ and $\alpha=1.6$ \\ \hline
Proekt \emph{et al.}~\cite{Proekt2012Scale} & Movements of adult mice & $\alpha=1.7$ \\ \hline
Wearmouth \emph{et al.}~\cite{Wearmouth2014Scaling} & Waiting time of ambush predators & $\alpha_w=1.58(36)$ for wild fish, $1.59(38)$ for captive fish \\ \hline
\end{tabular}
\end{table}
\subsection{Animal behaviours}\label{subsect:animal}
Here we briefly discuss some similarities between the bursty behaviour of animals and that of humans. Bursty behaviour is also observed in the temporal patterns of monkeys, mice, and fruit flies, as summarised in Table~\ref{table:animal}. Sorribes~\emph{et al.}~\cite{Sorribes2011Origin} found the walking activity of \emph{Drosophila melanogaster} to show Weibull distributions of inter-event times. They argued that the bursty dynamics of fruit flies is similar to that of humans in terms of a positive burstiness parameter $B$ in Eq.~(\ref{eq:burstiness_param}) and a near-zero memory coefficient $M$ in Eq.~(\ref{eq:memory_coeff}). Boyer~\emph{et al.}~\cite{Boyer2012Nonrandom} found by analysing the displacements of capuchin monkeys that the power-law exponents are $\mu=2.7$ and $\alpha=1.6$, respectively. In the case of human mobility, by contrast, the power-law exponents were found to be $\mu\approx 1.6$ and $\alpha\approx 1.8$, as presented in Ref.~\cite{Song2010Modelling}. Furthermore, Proekt~\emph{et al.}~\cite{Proekt2012Scale} observed that displacements of adult mice show a power-law $P(\tau)$ with exponent $\alpha=1.7$. This value of $\alpha$ turns out to be similar to the values for monkeys and humans, which could imply the same underlying mechanisms or a kind of universality.
\chapter{Models and mechanisms of bursty behaviour}
\label{chapter:model}
Bursty dynamical patterns characterise human behaviour not only at the individual level but also at the level of dyadic interactions and even when it comes to collective phenomena at the network level. To gain insight into and capture these phenomena at multiple scales, a number of models have been introduced, which sometimes lead to seemingly conflicting interpretations. In this Chapter our aim is to give a comprehensive summary of all these efforts and let the reader judge which of them seems to be the most suitable explanation of the same phenomena.
\section{Models of individual activity}
\label{sec:indivmodels}
The first observations of human bursty patterns commonly addressed the activity of individuals, although many observations were made from datasets based on social interactions at the dyadic level. All these studies reported heterogeneous non-Poissonian dynamical patterns characterised by broad inter-event time distributions, which were explained in various ways, namely (i) by intrinsic correlations via decision-making mechanisms, (ii) by independent actions influenced by circadian patterns, or (iii) by some other underlying mechanisms like reinforcement or temporal correlations. In addition, several combinations of these modelling directions were proposed together with phenomenological models, which did not, however, address the possible reasons behind the observed dynamics but only aimed at reproducing signals with similar temporal features. Below we address all of these modelling directions in detail.
\subsection{Queuing models of bursty phenomena}
\label{sec:PriQueMod}
As we have observed in Chapter~\ref{chapter:meas}, the bursty temporal patterns in human dynamics can be characterised in terms of broad inter-event time or waiting time distributions, which in many cases have power-law tails with exponents $\alpha$ and $\alpha_w$, respectively. The values of $\alpha$ and $\alpha_w$ turn out to be very diverse, implying that there could be various underlying mechanisms behind the observed scaling behaviours. To understand the differences in the scaling behaviour, Barab\'asi proposed in his seminal paper the idea that consecutive rational actions of an individual are driven by the execution of prioritised tasks~\cite{Barabasi2005Origin}. Here the model considers an agent with a priority list of $l$ tasks, each of them assigned a priority value $x_i$ drawn from a distribution denoted by $\eta(x)$. The priority values allow the agent to rank the tasks and execute them in the rational order based on their priorities. The central quantity of this model is then the waiting time $\tau_w$ that a task spends between its insertion into the queue and its execution.
\subsubsection{Cobham priority queuing model}
The priority queuing model was first introduced by Cobham~\cite{Cobham1954Priority}, where the priority list can contain an arbitrary number of tasks and the priority of each task is an integer drawn from some distribution. The tasks arrive at rate $\lambda$ following Poisson dynamics with an exponential arrival time distribution, and they are executed at rate $\mu$ by always choosing the one with the highest priority. Since then, the waiting time distribution for some cases has been obtained~\cite{Abate1997Asymptotics}, and also discussed in Ref.~\cite{Vazquez2006Modeling}, as follows:
\begin{equation}
P(\tau_w) \sim \tau_w^{-3/2} \exp\left( -\frac{\tau_w}{\tau_c} \right) \hspace{.3in} \mbox{with} \hspace{.3in} \tau_c=\frac{1}{\mu(1-\sqrt{\rho})^2},
\label{eq:Ptw}
\end{equation}
where $\rho=\lambda/\mu$ denotes the control parameter of the process. If $\rho<1$, the task list is typically short as tasks are executed right after their arrival. Then $P(\tau_w)$ is reduced to an exponential distribution as $\rho\rightarrow 0$. On the other hand, in the limit $\rho\rightarrow 1$ the waiting time distribution appears as a power-law function with exponent $\alpha_w=3/2$ and an exponential cutoff. In this case most of the tasks are executed shortly after their insertion, but some low-priority tasks may be stuck in the list, introducing heterogeneity in the waiting time distribution. It has been shown that the queue length $l$ performs a one-dimensional random walk with a bound at $l=0$, implying a return time distribution $P(t_{r})\sim t_{r}^{-3/2}$ that gives the origin of the same exponent value for $P(\tau_w)$~\cite{Vazquez2006Modeling}. Finally, if $\rho>1$, the average queue length increases linearly as $\langle l(t) \rangle = (\lambda-\mu)t$, and thus a fraction $1-\rho^{-1}$ of tasks will remain in the list forever. Nevertheless, the waiting time distribution of the executed tasks still follows the form of Eq.~(\ref{eq:Ptw}). We note that the predicted exponent value $\alpha_w=3/2$ is found to fit well with the empirical observations of letter correspondence activities~\cite{Oliveira2005Human}, for which values around $\rho=1.1$ were typically found. Later, Grinstein and Linsker~\cite{Grinstein2006Biased} obtained analytic solutions for a continuous uniform priority distribution, $\eta(x)=1$ for $x\in [0,1]$.
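The dynamics described above can be illustrated with a minimal discrete-time simulation, in which a task arrives with probability $\lambda\,dt$ and the highest-priority waiting task is executed with probability $\mu\,dt$ in each small time step; the step size, the uniformly distributed priorities, and the parameter values below are illustrative assumptions rather than part of the original formulation.
\begin{verbatim}
import heapq
import random

def simulate_cobham_queue(lam=0.99, mu=1.0, n_steps=10**6, seed=0):
    """Discrete-time sketch: per step of length dt a task arrives with
    probability lam*dt and the highest-priority waiting task is executed
    with probability mu*dt; returns the waiting times of executed tasks."""
    rng = random.Random(seed)
    dt = 0.5 / max(lam, mu)          # keep both per-step probabilities below one
    heap, waiting = [], []           # heap keyed by negated priority
    for step in range(n_steps):
        t = step * dt
        if rng.random() < lam * dt:  # Poisson-like arrival
            heapq.heappush(heap, (-rng.random(), t))
        if heap and rng.random() < mu * dt:
            _, t_in = heapq.heappop(heap)   # execute the highest-priority task
            waiting.append(t - t_in)
    return waiting

w = simulate_cobham_queue()
print(len(w), sum(w) / len(w))       # number executed and mean waiting time
\end{verbatim}
With $\rho=\lambda/\mu$ close to one, the resulting waiting times should display a broad, approximately $\tau_w^{-3/2}$ regime with an exponential cutoff, consistent with Eq.~(\ref{eq:Ptw}).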
\subsubsection{Barab\'asi priority queuing model}
\label{sec:barabquemod}
Another version of the model, motivated by the finite capacity of the immediate memory of humans~\cite{Miller1956Magical}, operates with a priority list of fixed size $l$. In addition it assumes that occasionally agents may decide to perform a task with a lower priority before doing all the high-priority ones. This is realised by introducing a probability $p$ with which the agent executes the highest-priority task in the current iteration; otherwise, with probability $1-p$, it selects a task randomly from the task list. In the limit $p\rightarrow 1$ the agent tends to always choose the task with the highest priority. This model results in a process that has a power-law tailed waiting time distribution with exponent $\alpha_w=1$, and it matches the empirical observations reported in Refs.~\cite{Barabasi2005Origin, Oliveira2005Human, Vazquez2006Modeling}. On the other hand, if $p\rightarrow 0$, the agent follows a fully random selection strategy, in which case the waiting times are exponentially distributed.
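A minimal sketch of this fixed-length queuing dynamics is given below; it assumes uniformly distributed priorities and measures waiting times in units of iterations, with parameter values chosen only for illustration.
\begin{verbatim}
import random

def simulate_barabasi_queue(p=0.999, l=2, n_exec=10**6, seed=0):
    """Fixed-length priority list: with probability p the highest-priority
    task is executed, otherwise a random one; the executed task is replaced
    by a new task with a random priority. Waiting times are in iterations."""
    rng = random.Random(seed)
    priorities = [rng.random() for _ in range(l)]
    birth = [0] * l                      # iteration at which each task entered
    waiting = []
    for t in range(1, n_exec + 1):
        if rng.random() < p:
            i = max(range(l), key=lambda k: priorities[k])
        else:
            i = rng.randrange(l)
        waiting.append(t - birth[i])
        priorities[i] = rng.random()     # a new task takes the freed slot
        birth[i] = t
    return waiting

w = simulate_barabasi_queue()
print(sum(1 for x in w if x == 1) / len(w))   # fraction executed immediately
\end{verbatim}
For $p$ close to one, most tasks are executed immediately while a few remain in the list for very long times, producing the $\alpha_w=1$ tail discussed above.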
Note that in order to classify bursty systems based on these early modelling results, V\'azquez~\emph{et al.}~\cite{Vazquez2006Modeling} suggested two universality classes with the above-mentioned two different exponent values characterising power-law inter-event time distributions. Nevertheless, this picture turned out not to be completely consistent, as further empirical evidence and modelling results of other bursty systems were found with various different exponent values, as we have seen in Chapter \ref{chapter:emp} and will discuss below.
An exact stationary solution of this model for $l=2$ was provided by V\'azquez~\cite{Vazquez2005Exact} with the general form of the waiting time distribution derived as
\begin{equation}
P(\tau_w) =
\begin{cases}
1-\frac{1-p^2}{4p} \ln \frac{1+p}{1-p}, & \quad \tau_w=1\\
\frac{1-p^2}{4p(\tau_w-1)}\left[ \left(\frac{1+p}{2}\right)^{\tau_w-1} - \left(\frac{1-p}{2}\right)^{\tau_w-1}\right], & \quad \tau_w>1,
\end{cases}
\end{equation}
which turned out to be independent of the priority distribution $\eta(x)$. In the limit of $p\rightarrow 0$, this solution reduces to the exponential form as
\begin{equation}
\lim_{p\rightarrow 0} P(\tau_w) = \left( \frac{1}{2}\right)^{\tau_w},
\end{equation}
while in the limit of $p\rightarrow 1$ the solution reads
\begin{equation}
\lim\limits_{p\rightarrow 1} P(\tau_w) =
\begin{cases}
1 + \mathcal{O}\left(\frac{1-p}{2} \ln (1-p)\right), & \quad \tau_w=1\\
\mathcal{O}\left(\frac{1-p}{2} \right) \frac{1}{\tau_w-1}, & \quad 1<\tau_w\ll \tau_c,
\end{cases}
\label{eq:BPGMsol}
\end{equation}
where $\tau_c=1/ \ln (2/(1+p))$ and the distribution is decaying with $\alpha_w=1$. Finally in the case when $0<p<1$, one finds a power-law distribution with exponential cutoff as follows:
\begin{equation}
P(\tau_w)\sim \frac{1-p^2}{4} \tau_w^{-1} \exp \left(-\frac{\tau_w}{\tau_c}\right),
\end{equation}
where the exponential cutoff is shifted towards larger $\tau_w$ values as $p\rightarrow 1$ (Fig.~\ref{fig:figurewtq}). Note that an exact non-stationary probabilistic description of this model was provided by Gabrielli and Caldarelli~\cite{Gabrielli2007Invasion} showing that for $0<p<1$ the system relaxes exponentially fast to the stationary solution. As $p\rightarrow 1$ the relaxation slows down and the system shows a non-stationary dynamics with a different exponent as described above.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figs/Vazquez2006Modelling.pdf}
\caption{Waiting time distributions of tasks in models with (a) arbitrary length and (b) fixed length ($l=2$) priority queues. In panel (a) the waiting time distribution with different values of $\rho$, i.e., $0.9$, $0.99$, and $0.999$, are scaled together and can be approximated with exponent $\alpha_w=3/2$ (solid line). In the inset $P(\tau_w)$ with $\rho=1.1$ is shown. In panel (b) the results of the model with fixed queue length of $l=2$ are shown with $p=0.9$ (squares), $0.99$ (diamonds), and $0.999$ (triangles). Solid lines depict the solution in Eq.~(\ref{eq:BPGMsol}). (\emph{Source:} Adapted from Ref.~\cite{Vazquez2006Modeling} under Copyright (2006) by the American Physical Society.)}
\label{fig:figurewtq}
\end{figure}
The general stationary solution of the model with arbitrary but fixed length $l$ was provided by Anteneodo~\cite{Anteneodo2009Exact}. Here a master equation formalism was used to obtain the solution for the waiting time distribution as
\begin{equation}
P(\tau_w)=\int_0^1 dR(x) r_{\tau_w}(x),
\label{eq:Ptwexact}
\end{equation}
where $R(x)$ is the probability that a newly inserted task has a priority smaller than $x$, and $r_{\tau_w}(x)$ denotes the probability that a new task, which was inserted at $t=t_0$ with priority $x$, will be executed at time $t=t_0+\tau_w$. This latter probability can be approximated as follows:
\begin{equation}
r_{\tau_w}(x) \simeq [1-r_1(x)][1-f(x)]^{\tau_w-2}f(x)
\end{equation}
where $f(x)$ is the average probability that a task is executed at time $t>t_0+1$. If $f(x)=(1-p)/l$ and $p=0$ the integral in Eq.~(\ref{eq:Ptwexact}) correctly results in the solution with exponential decay as $P(\tau_w)=(1-1/l)^{\tau_w-1}/l$, while if $p\rightarrow 1$ it leads to the asymptotic solution of a power-law with exponential cutoff $P(\tau_w)\sim \tau_w^{-1} \exp (-\tau_w/\tau_c)$ with $\tau_c=1/\ln [ l/(l-1+p)]\sim l/ (1-p)$. This solution also shows that the characteristic time $\tau_c$ of the exponential cutoff is shifted to larger values if $p\rightarrow 1$ or when $l$ is increased.
\paragraph{\textit{Waiting times vs. inter-event times.}} Empirical observations typically provide either the waiting time $\tau_w$ of a single task, like a response to a received letter, or the inter-event times $\tau$ between similar tasks, like mobile phone calls. The difference between these observables has been discussed in Section~\ref{sec:iet_rt_wt}. The models proposed above concern only the waiting times, while assuming that inter-event times are characterised by the same form of distributions. However, the relation between the two quantities and their distributions is not necessarily obvious. Following the arguments of V\'azquez~\emph{et al.}~\cite{Vazquez2006Modeling}, in the data we monitor the activities of an individual regarding a specific task only, whereas the model simulates the execution of all tasks by an individual. Labelling tasks in the model process and reinserting them after execution would allow us to measure their inter-event time distribution, which would scale similarly to the waiting time distribution. A further argument says that inter-event times in communication depend on the activity patterns of a pair of interacting individuals. In the case when both of them prioritise their task lists, the effective inter-event time distribution would show the same scaling form as $P(\tau_w)$. Supporting these arguments, Li~\emph{et al.} reported an empirical study using a dataset of letter correspondence confirming that the exponents of the inter-event and waiting time distributions match~\cite{Li2008Empirical}.
\paragraph{\textit{Criticism.}} After the seminal paper by Barab\'asi, a few critical comments were published~\cite{Stouffer2005Comment} that raised concerns about the data analysis, claiming that the observed inter-event time distribution in Ref.~\cite{Barabasi2005Origin} is better approximated by a log-normal distribution than by a power law and that the model gives unrealistically high preference to executing newly arriving tasks while keeping low-priority tasks extremely long in the queue. Further concerns were expressed in Ref.~\cite{Kentsis2006Mechanisms}, criticising that the proposed model completely disregards the semantic content of an individual correspondence and the social context in which this correspondence takes place. As a response~\cite{Oliveira2006Mechanisms}, it has been argued that it is impossible to detect the semantic and social context of correspondence as the content of the messages is not available due to privacy reasons. However, prioritisation should arguably still play a role in human correspondence, as not only the context but also the deadlines are driving individual decisions to perform a task. Moreover, one does not need to obtain any knowledge about the prioritisation mechanisms, as the power-law waiting time distribution in the models emerges regardless of the functional form of the priority distribution.
\paragraph{\textit{Extensions.}} After the initial observations, other studies~\cite{Li2008Empirical, Karsai2012Universal} have identified empirical systems with diverse exponent values for the inter-event time distributions. Motivated by these observations some extended models have been proposed. Masuda \emph{et al.}~\cite{Masuda2009Priority} assumed that task arrival does not follow Poisson dynamics as in Refs.~\cite{Cobham1954Priority,Barabasi2005Origin}, but that $n$ tasks are added in each time step, with $n$ sampled from a power-law distribution $\sim n^{-\gamma}$. This extension in turn led to exponent values of the waiting time distribution depending on the exponent $\gamma$, the average value of $n$, and the execution probability $\mu$. Further extensions addressing interacting priority queues~\cite{Oliveira2009Impact} and priority queues coupled as a network~\cite{Min2009Waiting} were proposed, providing alternatives to explain diverse exponent values. Their discussion will be the subject of Section \ref{sec:linkbrstmodel}. Gon\c{c}alves and Ramasco~\cite{Goncalves2008Human} suggested to extend the model such that multiple tasks are executed in each time step. They showed that the execution of three tasks per step leads to an emerging exponent $\alpha=1.25$, which fits well with their observations of online browsing dynamics. Note that other extensions of the queuing model were proposed by Mryglod \textit{et al.}~\cite{Mryglod2012Editorial} and Jo~\emph{et al.}~\cite{Jo2012Timevarying}, where time-varying priorities were considered to model the heterogeneous dynamics of editorial review processes with or without peer review. Finally, Cajueiro and Maldonado~\cite{Cajueiro2008Role} considered the cost of keeping a non-processed collection of tasks by introducing a discount factor, and they identified various protocols for executing tasks, depending on the discount factor, for minimising the cost function.
\subsubsection{Position based priority lists}
A somewhat different model was proposed by Vajna \textit{et al.}~\cite{Vajna2013Modelling}, who defined a priority list model assuming that the priorities of tasks depend on their position in the list. More precisely, they took a task list of size $l$ with ordered positions from $i=1$ to $l$. The list is filled with tasks of different types of activities. In each time step a task is chosen based on its position in the list with a probability $w_i$ that is a decreasing function of $i$. Once a task is chosen, it jumps to the front of the list to trigger the corresponding activity, and it pushes the tasks that preceded it to the right. Once a task is at the front of the list it has the largest probability to be chosen again in the next iterations. In this way, heterogeneous inter-event times between consecutive executions of the same task are generated. Note that this model addresses inter-event times rather than waiting times.
The authors have shown that this model is capable of inducing power-law decaying inter-event time distributions with a tunable exponent $\alpha\in [1,2]$ by using various $w_i$ distributions. Furthermore, they generalised their results by using a power-law decaying and an exponentially decaying priority distribution for $w_i$, as well as discussing the case of a stretched exponential. In the case of a finite list, they found that $P(\tau)$ decays as a power law with an exponential cutoff, where the cutoff is a consequence of reaching the end of the list and disappears in the $l \rightarrow \infty$ limit.
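The move-to-front dynamics can be sketched as follows, assuming a power-law position-dependent weight $w_i\propto i^{-\nu}$; the list size, the exponent, the number of steps, and the pooled measurement of inter-event times are illustrative choices rather than the exact setup of the original study.
\begin{verbatim}
import itertools
import random

def simulate_position_list(l=100, nu=2.0, n_steps=10**6, seed=0):
    """Move-to-front list: the task at position i (1-indexed) is chosen with
    weight i**(-nu) and jumps to the front, pushing the others back.
    Returns the inter-event times of all tasks, pooled."""
    rng = random.Random(seed)
    weights = [(i + 1) ** (-nu) for i in range(l)]
    cum = list(itertools.accumulate(weights))
    order = list(range(l))               # order[pos] = task id
    last_seen, iets = {}, []
    for t in range(n_steps):
        pos = rng.choices(range(l), cum_weights=cum, k=1)[0]
        task = order.pop(pos)
        order.insert(0, task)            # move the chosen task to the front
        if task in last_seen:
            iets.append(t - last_seen[task])
        last_seen[task] = t
    return iets

iets = simulate_position_list()
print(len(iets), max(iets))
\end{verbatim}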
\subsection{Memory driven models of bursty phenomena}
Another modelling paradigm of bursty activity patterns concerns non-Markovian correlations between consecutive actions of an individual. It assumes that memory functions or reinforcement processes lie behind bursty signals in human behaviour. In the following we walk through modelling examples contributing to this direction.
\subsubsection{Processes with simple memory functions}
One of the first models of this kind was proposed by V\'azquez \textit{et al.}~\cite{Vazquez2007Impact}, who aimed at modelling email and letter correspondence behaviour by assuming that the subsequent actions of an agent are influenced by its previous mean activity rate. Their model builds on the probability $\lambda(t)dt$ that an agent performs an action within the time window $[t,t+dt]$, formulated as
\begin{equation}
\lambda(t)=\frac{a}{t}\int_0^t \lambda(t') dt'
\label{eq:VazqLamd}
\end{equation}
where the parameter $a>0$ determines the degree and type of reaction to the past perception. If $a=1$, one obtains a stationary process with $\lambda(t)=\lambda(0)$, while if $a\neq 1$, the process is non-stationary either with acceleration ($a>1$) or with reduction ($a<1$). At the starting time $t=0$ one assumes that an agent does not consider what happened before and performs the actions for a period $T$. The general solution of Eq.~(\ref{eq:VazqLamd}) shows $\lambda(t)=\lambda_0 a (t/T)^{a-1}$, where $\lambda_0$ is the mean number of actions within $T$. $\lambda(t)$ is approximately a constant for short time intervals of $T$ and the dynamics follows a Poisson process with an exponential inter-event time distribution. However, if $a>1$, i.e., the system is in the accelerating regime, the inter-event time distribution exhibits a power-law form, $P(\tau)\sim (\tau /\tau_0)^{-\alpha}$, where $\tau_0=1/(a\lambda_0)$ and $\alpha=2+1/(a-1)$ if $\tau_0 \ll \tau < T$. On the other hand, in the reduction regime, if $1/2<a<1$ then $P(\tau)$ does not show a power-law scaling, while for $0<a<1/2$ it again follows a power law with the exponent $\alpha=1-a/(1-a)$ if $\tau \ll \tau_0$.
A somewhat similar model was introduced by Han \textit{et al.}~\cite{Han2008Modeling} to model actions like web browsing or video game playing, which are arguably driven by adaptive interest. In their paper they introduced two thresholds, $T_1$ and $T_2$ (where $T_1\ll T_2$), to model the increased and the depressed activity rates by focusing on the probability $r(t)$ that the given action will occur at time $t$. They measured the inter-event times between consecutive occurrences of actions. Employing discrete time steps, if the $(i+1)$th event appeared at time $t$, the value of $r$ is updated as $r(t+1)=a(t)r(t)$, where $a(t)$ determines the actual activity rate. Let $\tau_i=t_{i+1}-t_i$ be the inter-event time between the $i$th and $(i+1)$th events. Then if $\tau_i \leq T_1$, $a(t)=a_0$ and the process evolves with a depressed rate. On the other hand, if $\tau_i \geq T_2$, $a(t)=a_0^{-1}$ and the process evolves with an increased rate. Finally, if $T_1 < \tau_i < T_2$ or there was no event within time $t$, then $a(t)=a(t-1)$. This model was found to induce bursty activity dynamics, characterised by a power-law inter-event time distribution with exponent $\alpha=1$.
\subsubsection{Self-exciting point processes}
\label{sec:SelfExcPP}
Another family of memory driven models for bursty processes is based on self-exciting stochastic processes of the Hawkes type~\cite{Masuda2013SelfExciting}. Such models are able to reproduce heterogeneously distributed inter-event times and short-term temporal correlations, commonly observed in human dynamics. The general definition of Hawkes processes concerns the activity rate $\lambda(t)$ defined as follows:
\begin{equation}
\lambda(t)=\lambda_0+\sum_{i,t_i\leq t}\phi(t-t_i),
\end{equation}
where $\lambda_0$ sets the ground activity level, while $\phi(t)$ is called the memory kernel, i.e., the additional rate incurred by the past events. For a more comprehensive account of the Hawkes process, see the review~\cite{Mehrdad2015Hawkes}. Several definitions of the memory kernel function have been considered to describe human bursty phenomena. For example, Masuda \emph{et al.}~\cite{Masuda2013SelfExciting} assumed an exponential form $\phi(t)=\alpha e^{-\beta t}$ to model event clusters initiated by single events appearing with rate $\lambda_0$. Such event trains appear with size $c=1/(1-(\alpha/\beta))$ on average and induce a stationary rate of events $\overline{\lambda}=c\lambda_0$ under the condition that $\alpha<\beta$. In this way their model process was fully determined by the parameters $\lambda_0$, $\alpha$, and $\beta$, which could be estimated by maximum likelihood methods from the empirical data. In their study they used two face-to-face conversation datasets recorded independently in two Japanese companies, and after estimating the parameters of the interaction sequences of active individuals they found a surprisingly good match between the statistical characteristics of the empirical and modelled activity signals.
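As an illustration, a Hawkes process with an exponential kernel can be simulated with a standard thinning (Ogata-type) algorithm, as sketched below in Python; the parameter values are illustrative, and the linear scan over past events is kept for clarity rather than efficiency.
\begin{verbatim}
import numpy as np

def simulate_hawkes_exp(lam0=0.1, alpha=0.5, beta=1.0, t_max=2000.0, seed=0):
    """Ogata-type thinning for a Hawkes process with exponential kernel
    phi(t) = alpha * exp(-beta * t); stationarity requires alpha < beta."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < t_max:
        past = np.array(events)
        # the intensity right after time t is an upper bound until the next event
        lam_bar = lam0 + (alpha * np.exp(-beta * (t - past)).sum() if past.size else 0.0)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = lam0 + (alpha * np.exp(-beta * (t - past)).sum() if past.size else 0.0)
        if t < t_max and rng.random() < lam_t / lam_bar:
            events.append(t)             # accept the candidate event
    return np.array(events)

ev = simulate_hawkes_exp()
print(ev.size, np.diff(ev).mean())       # number of events and mean inter-event time
\end{verbatim}
With $\alpha/\beta=0.5$ the events come in self-excited clusters of mean size $c=2$, on top of the ground rate $\lambda_0$.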
In another work Jo \textit{et al.}~\cite{Jo2015Correlated} applied a power-law memory kernel of the form $\phi(t)=1/t$ to define a memory function for $t>t_w$ as follows:
\begin{equation}
m(t)=\sum_{i=1}^w \frac{1}{t-t_i},
\end{equation}
where the memory is kept only up to the $w$th latest event in order to take into account the finite capacity of memory. This is called the sequential memory loss mechanism. In the model, the larger $m(t)$ is, the higher is the probability of a new event at time $t$. As a result, heavy-tailed inter-event time distributions emerge, while long-range correlations between inter-event times are limited by the control parameter $w$. For a more realistic consideration, instead of keeping $w$ fixed, $w$ can be a variable such that for each new event the value of $w(t)$ is reset to $1$ with probability $q[w(t)]=1-\left[w(t)/(w(t)+1)\right]^\nu$, and otherwise $w(t)$ is increased by $1$. Here a larger $w(t)$ implies a longer time until the memory is reset, hence more consecutive events. This is called the preferential memory loss mechanism. For the intermediate range of $\nu$, they found more realistic features in terms of temporal heterogeneities and higher-order correlations, as exemplified by a power-law bursty train size distribution $P(E)$.
Note that a power-law memory kernel was also used by Crane and Sornette~\cite{Crane2008Robust} to model a somewhat different phenomenon, namely the cascade of social influence diffusion in social networks. Such processes potentially lead to bursty cascades of information adoption events at the population level, with long-lasting relaxation back to normal adoption rates.
\subsubsection{Reinforcement point processes}
\begin{figure}[!t]
\centering
\includegraphics[width=.8\columnwidth]{figs/Karsai2012Correlated_Fig5.pdf}
\caption{The schematic definition and numerical results of the model in Ref.~\cite{Karsai2012Correlated}. (a) $P(E)$ distributions of the synthetic sequence after logarithmic binning with window sizes $\Delta t=1 \cdots 1024$ and fitted power-law exponent $\beta=3.0$. (b) Transition probabilities of the reinforcement model with memory. (c) Inter-event time distribution of the simulated process with a maximum inter-event time $\tau^{max}=10^6$ and emerging exponent value $\alpha=1.3$. (d) The average autocorrelation function with the maximum lag of $t_d^{max}=10^4$ and emerging characteristic exponent $\gamma=0.7$. Results are averages over $1000$ independent realisations with parameters $\mu_A=0.3$, $\mu_B=5.0$, $\nu=2.0$, $\pi=0.1$, and $T=10^9$.}
\label{fig:Karsai2012Correlated5}
\end{figure}
Reinforcement mechanisms provide another way to consider memory effects in dynamical processes, and they propose a possible explanation for the emerging bursty patterns. Based on this idea Karsai \emph{et al.}~\cite{Karsai2012Correlated} introduced a model to capture not only the heterogeneous individual communication dynamics but also the correlated bursty event trains commonly observed in real systems. In their model they defined a two-state dynamics by considering an agent who can be either in a normal state $A$, in which events are separated by longer inter-event times, or in an excited state $B$, where events occur at a higher rate. The timings of the consecutive events were determined by a reinforcement process with the assumption that the longer the system waits for an event, the larger the probability that it will keep waiting. Note that a similar assumption was made in models of collective bursty dynamics~\cite{Stehle2010Dynamical, Zhao2011Social}, which will be discussed in Section \ref{sec:brstmodelgroups}. In the model, the inter-event times are induced by a reinforcement function of the form
\begin{equation}
f_{A,B}(\tau)=\left(\frac{\tau}{\tau+1}\right)^{\mu_{A,B}}
\label{eq:KarsaiMemory}
\end{equation}
that gives the probability to wait one time step longer in order to execute the next event after the system has already waited a time $\tau$ since the last event. Here the exponents $\mu_A$ and $\mu_B$ control the reinforcement dynamics in states $A$ and $B$, respectively. If $\mu_A \ll \mu_B$, the characteristic inter-event times in states $A$ and $B$ become fairly different, leading to the emergence of temporal inhomogeneities in the dynamics. In addition, the actual state of the system is determined by transition probabilities as demonstrated in Fig.~\ref{fig:Karsai2012Correlated5}(b). To be more specific, the model is defined as follows: first the system performs an event in a randomly chosen initial state. If the last event was in the normal state $A$, it waits for a time induced by $f_A(\tau)$, after which it switches to the excited state $B$ with probability $\pi$ and performs an event there, or with probability $1-\pi$ it stays in the normal state $A$ and executes a new normal event. In the excited state the inter-event time for the actual event comes from $f_B(\tau)$, and the probability to perform the next event in the excited state is given by $p(n)=(n/(n+1))^{\nu}$, as determined by the number of excited events $n$ since the last event in state $A$ and by the reinforcement exponent $\nu$. The model then induces a power-law inter-event time distribution with exponent $\alpha=\mu_A+1$, as shown in Fig.~\ref{fig:Karsai2012Correlated5}(c), and also generates correlated bursty trains whose size distribution is written as $P(E)\sim E^{-\beta}$ with $\beta=\nu+1$, as can be seen in Fig.~\ref{fig:Karsai2012Correlated5}(a). Note that a similar model without memory was also introduced in Ref.~\cite{Kleinberg2002Bursty}, which will be discussed in Section \ref{sec:PoissInfAut}.
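A minimal sketch of this two-state reinforcement dynamics is given below, with the waiting times drawn directly from the reinforcement rule of Eq.~(\ref{eq:KarsaiMemory}); the exponents follow the caption of Fig.~\ref{fig:Karsai2012Correlated5}, while the cutoff on the waiting time and the number of events are illustrative assumptions to keep the run short.
\begin{verbatim}
import random

def draw_waiting_time(mu, rng, tau_max=10**4):
    """Waiting time from the reinforcement rule f(tau) = (tau/(tau+1))**mu:
    having waited tau steps, wait one more step with probability f(tau)."""
    tau = 1
    while tau < tau_max and rng.random() < (tau / (tau + 1.0)) ** mu:
        tau += 1
    return tau

def simulate_two_state(mu_A=0.3, mu_B=5.0, nu=2.0, pi=0.1, n_events=10**4, seed=0):
    """Minimal sketch of the two-state (normal A / excited B) model."""
    rng = random.Random(seed)
    iets, state, n_exc = [], 'A', 0
    for _ in range(n_events):
        if state == 'A':
            iets.append(draw_waiting_time(mu_A, rng))
            if rng.random() < pi:            # switch to the excited state
                state, n_exc = 'B', 0
        else:
            iets.append(draw_waiting_time(mu_B, rng))
            n_exc += 1
            # stay excited with probability p(n) = (n/(n+1))**nu
            if rng.random() >= (n_exc / (n_exc + 1.0)) ** nu:
                state = 'A'
    return iets

iets = simulate_two_state()
print(min(iets), max(iets))
\end{verbatim}
The normal-state waiting times dominate the tail and should decay approximately as $\tau^{-(\mu_A+1)}$, consistent with the exponent relation quoted above.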
A somewhat similar model was proposed by Wang \textit{et al.}~\cite{Wang2014Modeling} to model the blog-posting behaviour of individuals. Their objective was to introduce short-term correlations to capture the heterogeneous distribution of inter-event times and the power-law decay of the memory coefficient as defined in Eq.~(\ref{eq:Mgen}). Their model assumes that in each time step an agent selects one of $n$ possible tasks in two possible ways: randomly with probability $r$, or with probability $1-r$ it selects a recently performed task $i$ again with probability $t_i/m$. Here $t_i$ denotes the number of times a given task $i$ was performed in the last $m$ time steps, which in turn defines the length of the memory. By varying $m$ they observe that for smaller values of $m$ the inter-event time distribution scales as a power law with exponent $\alpha>2$, while for larger memory lengths the exponent increases and the distribution relaxes into an exponential form. In addition they argue that their model successfully reproduces the short-term power-law decay of the memory function and deviates from the empirical observations only in the tail region.
\subsection{Poisson models of bursty phenomena}
\subsubsection{Infinite automatons}
\label{sec:PoissInfAut}
One of the early models of bursty phenomena in human dynamics was proposed by Kleinberg~\cite{Kleinberg2002Bursty}, whose aim was to understand heterogeneous and hierarchical patterns of topic appearances in document streams. His further aim was to provide a better organisation principle for large document archives, such as emails and scientific publications. Based on the analogy with email correspondence he suggested a model using an infinite-state automaton, with states determining the actual rate of message arrival and with inter-state transition costs determined by the upward difference between states.
More precisely, he takes an automaton $\mathcal{A}$, which can be in states $q_i$ and performs $n+1$ events over a period $T$. First, he assumes that events in the state $q_i$ occur with inter-arrival times sampled from a ``memoryless'' exponential distribution $f_i(\tau)=a_i e^{-a_i \tau}$ with $a_i>0$. In other words, events in a given state behave as a Poisson process. Each state $q_i$ is characterised by the arrival rate of messages $a_i$ such that $a_i=(n/T)s^i$ for states $i=0,1,\cdots$, where $s>1$ is a scaling parameter. In addition, the automaton $\mathcal{A}$ can move from state $q_i$ to $q_j$ with cost $\kappa(i,j)$, where the cost is proportional to $(j-i)\ln n$ for $j>i$, or simply zero for $j<i$. The ultimate aim here was then to find a sequence of states $\bm{q}=(q_{i_1}, q_{i_2},...,q_{i_n})$ for a given sequence of inter-arrival times $\bm{\tau}=(\tau_1, \tau_2,...,\tau_n)$, such that the overall cost function defined as
\begin{equation}
c(\bm{q}|\bm{\tau})=\left( \sum_{t=0}^{n-1} \kappa(i_t,i_{t+1}) \right) + \left( \sum_{t=1}^{n} -\ln f_{i_t}(\tau_t) \right)
\end{equation}
is minimal. A recursive solution of this model was provided in Ref.~\cite{Kleinberg2002Bursty} and was used to identify hierarchical structures in terms of state transitions (and thus in inter-arrival times) in document streams. Note that the overall framework developed in this paper can be viewed as drawing an analogy with models from queuing theory for bursty network traffic~\cite{Kelly1996Notes}, as well as with the formalism of hidden Markov models~\cite{Rabiner1989Tutorial}. The principal aim of this model was not to reproduce heterogeneous inter-arrival sequences, but rather to provide a possible reason behind their emergence and to give applicable solutions for better organising document streams.
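A minimal two-state sketch of this idea is given below: given a sequence of inter-arrival gaps, a Viterbi-style dynamic programme selects the state sequence minimising the cost function above, with the bursty state emitting gaps at $s$ times the base rate. This is a simplified two-state restriction of the infinite-state formulation; the scaling factor, the transition-cost parameter, and the toy gap sequence are illustrative.
\begin{verbatim}
import math

def kleinberg_two_state(gaps, s=2.0, gamma=1.0):
    """Two-state sketch of Kleinberg's burst detection: state 0 emits gaps
    at the base rate n/T, state 1 at s times that rate; moving up costs
    gamma*ln(n). Returns the cost-minimising state per gap (Viterbi-style)."""
    n = len(gaps)
    T = float(sum(gaps))
    rates = [n / T, s * n / T]
    up_cost = gamma * math.log(n)
    cost = [0.0, up_cost]                 # start from the base state
    back = []
    for gap in gaps:
        new_cost, choices = [], []
        for j, a in enumerate(rates):
            fit = -math.log(a) + a * gap  # -ln f_j(gap) for f_j(x) = a exp(-a x)
            trans = [cost[i] + (up_cost if j > i else 0.0) for i in range(2)]
            i_best = 0 if trans[0] <= trans[1] else 1
            new_cost.append(trans[i_best] + fit)
            choices.append(i_best)
        back.append(choices)
        cost = new_cost
    j = 0 if cost[0] <= cost[1] else 1    # backtrack the optimal sequence
    states = [j]
    for choices in reversed(back):
        j = choices[j]
        states.append(j)
    return list(reversed(states))[1:]

# The four short gaps in the middle end up in the bursty state (state 1)
print(kleinberg_two_state([5, 4, 6, 0.5, 0.4, 0.6, 0.5, 5, 6], s=3.0))
\end{verbatim}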
\subsubsection{Heterogeneous Poisson model}
In response to the model by Barab\'asi~\cite{Barabasi2005Origin}, a simple explanation was proposed by Hidalgo~\cite{Hidalgo2006Conditions} for the emergence of a power-law inter-event time distribution from Poissonian agents that change the rates at which they perform an event in a random or deterministic fashion. To be more precise, the event rate of an individual is denoted by $\lambda$ and its distribution at the population level by $f(\lambda)$. It has been shown that if $f(\lambda)$ is heterogeneous, the asymptotic behaviour of the emergent inter-event time distribution reads as
\begin{equation}
P(\tau)=\frac{f(1/\tau)}{\tau^2}.
\end{equation}
Assuming a uniform distribution of the form $f(\lambda)\sim U\left[0,L\right]$, the inter-event time distribution appears as $P(\tau)\propto \tau^{-\alpha}$ with $\alpha=2$. If $f(\lambda)\sim \lambda^{\nu}$, then one gets $P(\tau)\propto \tau^{-(\nu+2)}$, which implies $\alpha=\nu+2$. Similar scaling would also hold if we assume similar distributions of activity rates at the individual level~\cite{Hidalgo2006Conditions, Chatterjee2003Login}, or in the case of periodically varying activity rates of individuals. Although this model was meant to describe natural phenomena, it provides a simple explanation for bursty processes in cases when human individuals are assumed to be Poissonian agents.
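The mechanism is easy to verify numerically: pooling the inter-event times of many independent Poissonian agents whose rates are drawn from a uniform distribution yields a heavy tail decaying roughly as $\tau^{-2}$, as in the following sketch (the number of agents, the observation window, and the small lower cutoff on the rates are illustrative).
\begin{verbatim}
import numpy as np

def heterogeneous_poisson_iets(n_agents=2000, t_max=1000.0, seed=0):
    """Pool the inter-event times of independent Poissonian agents whose
    rates are drawn uniformly from (0, 1]."""
    rng = np.random.default_rng(seed)
    iets = []
    for lam in rng.uniform(1e-3, 1.0, n_agents):
        n_events = rng.poisson(lam * t_max)
        if n_events > 1:
            times = np.sort(rng.uniform(0.0, t_max, n_events))
            iets.append(np.diff(times))
    return np.concatenate(iets)

tau = heterogeneous_poisson_iets()
# the pooled distribution should decay roughly as tau**(-2) for tau above ~1
print(tau.size, np.percentile(tau, [50, 90, 99.9]))
\end{verbatim}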
\subsubsection{Bursty model with Poissonian cascades}
An alternative and descriptive modelling framework of bursty phenomena in human interactions was proposed by Malmgren \emph{et al.}~\cite{Malmgren2008Poissonian,Malmgren2009Universality}. In this approach it is argued that ``human behavior is primarily driven by external factors such as circadian and weekly cycles, which introduces a set of distinct characteristic time scales, thereby giving rise to heavy tails''~\cite{Malmgren2008Poissonian}, instead of the rational decision making and correlated activity patterns proposed by Barab\'asi and others~\cite{Barabasi2005Origin, Oliveira2005Human, Vazquez2006Modeling}. They proposed a model that captures individual email correspondence and builds on the intuition that our activities are strongly determined by circadian and weekly patterns, while being grouped in cascades of actions during short active periods. For an illustration, see Fig.~\ref{fig:Malmgren2008A_1}. In this model the dynamics is defined as alternating non-homogeneous and homogeneous Poisson processes, which in turn gives rise to heterogeneous temporal behaviour in good correspondence with empirical observations.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figs/Malmgren2008A_1.pdf}
\caption{An example of a periodic and cascading stochastic process. (A) Expected probability for starting an active interval during a particular day of the week $p_w(t)$. (B) Expected probability for starting an active interval during a particular time of the day $p_d(t)$. (C) The resulting activity rate $\rho(t)$ for the non-homogeneous Poisson process. Here the form $\rho(t)=N_wp_w(t)p_d(t)$ is assumed, where the proportionality constant $N_w$ is the average number of active intervals per week. (D) A time series of events generated by a non-homogeneous Poisson process. Each event in this time series initiates a cascade of additional events, called an active interval. (E) Schematic illustration of cascading activity with $N_a$ additional emails sent according to a homogeneous Poisson process with rate $\rho_a$. (F) Observed time series. (\emph{Source:} Adapted from Ref.~\cite{Malmgren2008Poissonian} under Copyright (2008) National Academy of Sciences, U.S.A.)}
\label{fig:Malmgren2008A_1}
\end{figure}
More precisely, the model accounts for periodic activity patterns by using a non-homogeneous Poisson process with a time-dependent periodic rate of events $\rho(t)=\rho(t+W)$, with period $W$. This rate function is given by the product of the daily and weekly activity distributions of active-interval initiation, $p_d(t)$ and $p_w(t)$, as follows
\begin{equation}
\rho(t)=N_w p_d(t) p_w(t),
\label{eq:NHPoissRate}
\end{equation}
where $N_w$ stands for the proportionality constant, being the average number of active intervals within one period $W$ (here a week). Each event generated by $\rho(t)$ initiates a secondary process that is a cascade of activity or active period, modelled by a homogeneous Poisson process with rate $\rho_a$. During an active period, $N_a$ additional events occur, after which the activity of an individual is again governed by the primary process defined in Eq.~(\ref{eq:NHPoissRate}). Here the number of events $N_a$ is drawn from some distribution $p(N_a)$. The inter-event time between events within an active period is determined by $\rho_a$, while the times between active periods are induced by $\rho(t)$. In this way the process is fully determined by the parameters $N_w$, $p_d(t)$, $p_w(t)$, $\rho_a$, and $p(N_a)$, which can be inferred from data by using simulated annealing. Fitting this model to individual activity sequences gives a very close match between the modelled and empirical inter-event time distributions~\cite{Malmgren2008Poissonian}, as shown in Fig.~\ref{fig:Malmgren2008A_2}. This suggests that Poissonian bursts provide an alternative description for individual bursty activity patterns. In addition, in a complementary work~\cite{Anteneodo2010Poissonian} it is argued that email correspondence patterns present no detectable correlations in terms of the ordering of events as compared to randomly reordered time series, in contrast to what has been suggested in Ref.~\cite{Barabasi2005Origin}. They also concluded that the proposed Poisson model is sufficient to describe the observed phenomena, the apparent correlations being spurious. Finally, the same authors studied the estimation of the functional form of the inter-event time and waiting time distributions in email activity logs~\cite{Stouffer2006Lognormal, Malmgren2009Universality}, with the conclusion that they can be better approximated by log-normal distributions or the superposition of two log-normal distributions than by a truncated power-law function as suggested in Ref.~\cite{Barabasi2005Origin}. They argue that the generative queuing model proposed by Barab\'asi may not describe the observed log-normal waiting-time distributions, as it predicts power-law distributed waiting times.
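A minimal sketch of such a cascading non-homogeneous Poisson process is given below; the toy daily rate profile, the geometric cascade-size distribution, and all parameter values are illustrative stand-ins for the quantities that the original work infers from data.
\begin{verbatim}
import numpy as np

def simulate_cascading_poisson(weeks=50, rho_a=1.0, mean_na=3.0, seed=0):
    """Cascading non-homogeneous Poisson sketch (time unit: hours).  Primary
    events follow a daily rate profile rho(t); each initiates a cascade of
    secondary events generated by a homogeneous Poisson process with rate rho_a."""
    rng = np.random.default_rng(seed)
    t_max = 24.0 * 7 * weeks

    def rho(t):                      # toy daily profile, active from 8h to 22h
        h = t % 24.0
        return 0.2 * np.exp(-(h - 14.0) ** 2 / 18.0) if 8.0 <= h <= 22.0 else 0.0

    rho_max, events, t = 0.2, [], 0.0
    while t < t_max:                 # thinning for the primary process
        t += rng.exponential(1.0 / rho_max)
        if t < t_max and rng.random() < rho(t) / rho_max:
            events.append(t)
            s = t
            for _ in range(rng.geometric(1.0 / mean_na)):   # cascade of N_a events
                s += rng.exponential(1.0 / rho_a)
                events.append(s)
    return np.sort(np.array(events))

ev = simulate_cascading_poisson()
print(ev.size, np.median(np.diff(ev)), np.diff(ev).max())
\end{verbatim}
Even in this simplified form, the short within-cascade inter-event times and the long gaps imposed by the daily cycle combine into a broad inter-event time distribution.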
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{figs/Malmgren2008A_2.pdf}
\caption{Comparison of the predictions of the cascading non-homogeneous Poisson process (red line) with the empirical cumulative distribution of inter-event times $P(\tau)$ of email correspondence (black line) for selected users. (\emph{Source:} Adapted from Ref.~\cite{Malmgren2008Poissonian} under Copyright (2008) National Academy of Sciences, U.S.A.)}
\label{fig:Malmgren2008A_2}
\end{figure}
\paragraph{\emph{Criticism.}} Some criticism has been expressed about this modelling approach. It has been argued that even though this model gives a very close approximation to the empirical data, it is only descriptive: it does not provide a generative explanation for the emergence of heterogeneous human dynamics but (a) gives a rather precise approximation by fitting the model process with a large set of parameters; (b) assumes circadian fluctuations to induce heterogeneity in human activity patterns, whereas it has been shown (see Section \ref{subsect:cyclic}) that even after removing the effects of such periodic fluctuations the signal remains bursty, indicating that circadian patterns cannot be the generative reason behind this phenomenon~\cite{Jo2012Circadian, Zhou2012Relative}; (c) assumes only two states, which might be an over-simplification of human behaviour, as assuming two or more types of active states gives a considerably better approximation of bursty behaviour at the individual level~\cite{Ross2015Understanding}; and (d) assumes that the action dynamics of an individual consists of independent events, although temporal correlations have been detected in such signals~\cite{Karsai2012Universal, Gandica2016Origin}.
\paragraph{\emph{Extensions.}} Recently two extensions of this model have been proposed. In one case, Jiang \emph{et al.}~\cite{Jiang2016Twostate} modelled the communication activity of an individual by a two-state Markov-chain Poisson process where an individual can be either in the normal state ($s_n$) or in the bursty state ($s_b$). The interaction dynamics of an individual is determined by two Poisson processes $\mathcal{P}_n$ and $\mathcal{P}_b$ with characteristic intensities $\lambda_n$ and $\lambda_b$ such that $\lambda_n < \lambda_b$. Assuming a normal initial state $s_n$, the dynamics of an individual is determined by $\mathcal{P}_n$ where an interaction is initiated with probability $\lambda_n t_p$, where $t_p$ denotes a characteristic time. The state of the next call is determined by a conditional probability $p(j|i)=p(i,j)/p(i)$ to switch to the bursty state (with $i=s_n$ and $j=s_b$) or to remain in the normal state (with $i=s_n$ and $j=s_n$). In the bursty state, once the next call occurs with probability $\lambda_b t_p$, the next state is determined by the conditional probability to switch to $s_n$ (where $i=s_b$ and $j=s_n$), or to remain in the bursty state (where $i=s_b$ and $j=s_b$). The parameters $t_p$, $\lambda_b$, and $\lambda_n$ can be estimated directly from empirical data just as in Ref.~\cite{Malmgren2008Poissonian}. Just like the model of Malmgren \textit{et al.}, this model can give a close approximation to the interaction dynamics and inter-event time distribution of an individual.
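A minimal sketch of such a two-state process is given below; the characteristic time, the rates, and the switching probabilities $p(j|i)$ are illustrative placeholders rather than values inferred from data.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

t_p, lam_n, lam_b = 1.0, 0.01, 0.5          # characteristic time and event rates
p_switch = {'n': {'n': 0.7, 'b': 0.3},      # p(j|i): state of the next call
            'b': {'n': 0.4, 'b': 0.6}}

state, t, events = 'n', 0.0, []
while t < 1e5:
    t += t_p
    rate = lam_n if state == 'n' else lam_b
    if rng.random() < rate * t_p:           # a call is made in this step
        events.append(t)
        state = 'n' if rng.random() < p_switch[state]['n'] else 'b'

tau = np.diff(events)                       # heterogeneous inter-event times
\end{verbatim}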
Another extension was proposed by Ross and Jones~\cite{Ross2015Understanding} who suggested a model based on observations on Twitter activity logs. In this model an individual can be in an inactive state $s_0$ or in one of two active states: a more bursty state $s_1$, corresponding to conversation-type communication, and a less bursty state $s_2$, corresponding to broadcasting-type communication. In the inactive state $s_0$, inter-event times are determined by an inhomogeneous Poisson process just like in Ref.~\cite{Malmgren2008Poissonian} with a time-dependent intensity function $\lambda_0(t)$. After performing an event in $s_0$ the node may switch to one of the active states $s_1$ and $s_2$ with probabilities $p_1$ and $p_2$, respectively. The number of events in active states is sampled from a geometric distribution with parameters different for $s_1$ and $s_2$, and inter-event times are sampled from an arbitrary distribution $g(\tau|s_i)$. The authors considered $g(\tau|s_i)$ to be an exponential, log-normal, or Weibull distribution, with parameters again depending on the actual active state. After fitting this multi-parameter model with the empirical data, they found a closer match between the real and modelled inter-event time distributions than in the case of the two-state model~\cite{Malmgren2008Poissonian}.
\subsubsection{Non-homogeneous Poisson process with decreasing interest}
Yet another model has been proposed by Guo \textit{et al.}, which studies inter-event time distributions of dynamical systems where the interest of people in performing an activity depends on time~\cite{Guo2011Weblog}. Their hypothesis is captured by the event rate
\begin{equation}
\lambda(t)=a+\frac{\alpha}{bt+1},
\end{equation}
which decreases until it reaches a stationary value $a$ of personal interest. In addition they show that in case of a non-homogeneous Poisson process with event rate $\lambda(t)$ and independent increments, the complementary cumulative distribution of the inter-event times behaves as follows:
\begin{equation}
1-F(\tau) \sim b^{-\frac{\alpha}{b}}\left( \tau+\frac{1}{b}\right)^{-\frac{\alpha}{b}}e^{-a \tau} \hspace{.3in} \text{as} \hspace{.3in} \tau\rightarrow \infty
\end{equation}
for positive constants $a$, $b$, and $\alpha$. This is a mixed distribution with exponential and power-law features that approximates the Gamma distribution. To demonstrate their analytic findings they report good fits between their model and the inter-event time distributions of blog posts of four users of a popular Chinese blog site.
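Since $\lambda(t)$ is monotonically decreasing, the corresponding non-homogeneous Poisson process can be sampled by thinning with a piecewise constant bound, as in the following sketch (the parameter values are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

a, b, alpha, T = 0.01, 0.05, 1.0, 1e5     # illustrative parameters

def lam(t):                               # decreasing event rate
    return a + alpha / (b * t + 1.0)

events, t = [], 0.0
while t < T:
    bound = lam(t)                        # lam is decreasing, so lam(t) bounds the future
    t += rng.exponential(1.0 / bound)
    if t < T and rng.random() < lam(t) / bound:
        events.append(t)

tau = np.diff(events)                     # mixed power-law/exponential inter-event times
\end{verbatim}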
\subsection{Other types of models}
\label{sec:otherindivmods}
\paragraph{\emph{Models of rational bursty consumers.}}
In their study Maillart \emph{et al.}~\cite{Maillart2011Quantification} mapped the priority queuing process onto the economic theory of consumption. In economic theory, the consumer is assumed to maximise the total utility from consuming some units of wealth under the constraint of limited wealth, namely under a budget constraint. Similarly, in priority queuing processes the agent tries to maximise the total utility from spending time $\tau_w$ on solving a task under the constraint of the limited total time, i.e., the time budget, which is expressed by
\begin{equation}
\sum_{i=1}^{N(T)}\tau_{w,i}\leq T,
\end{equation}
where $N(T)$ is the number of tasks in a given time period $T$. Based on this mapping, the authors explore several strategies of executing tasks to find realistic waiting time distributions with power-law tails.
In their paper Jo \emph{et al.}~\cite{Jo2012Optimized} introduced an alternative economics-inspired model, where an agent in an uncertain situation tries to reduce the uncertainty by communicating with information providers, while the agent has to wait for responses. Here the waiting time can be considered as a cost. The authors showed that the optimal choice for the waiting time under uncertainty gives rise to bursty dynamics, characterised by a power-law distribution of the optimal waiting time. More precisely, the risk-averse utility function is assumed to be $u(x_t)=-\exp(-x_t)+a$ with $a\geq 0$, where the uncertainty of the state $x_t$ is described by a normal distribution with zero mean and variance $\sigma^2/t^\eta$. Here the parameter $\eta$ controls the speed of decreasing uncertainty. That is, the uncertainty decreases with time as the agent waits for the information, while the cost of the spent waiting time is modelled as $c(t)\propto \sigma^{-2/\nu}t$, where the parameter $\nu$ controls the cost per unit time. Then the expected utility is obtained as $E[u(x_t)]-c(t)$, which is optimised to obtain the optimal waiting time as follows:
\begin{equation}
\tau_w(\sigma)=C_1\sigma^{2/\eta}W(C_2\sigma^{2(\nu-\eta)/[\nu(\eta+1)]})^{-1/\eta}
\end{equation}
with coefficients $C_1$ and $C_2$, where $W$ denotes the Lambert W function. Then, using the distribution of uncertainty $P(\sigma)=e^{-\sigma}$, one obtains an optimal waiting time distribution $P(\tau_w)$ with a power-law exponent, e.g., $\alpha_w=1-\eta/2$ if $\nu=\eta$.
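For illustration, optimal waiting times can be sampled directly from this expression; the sketch below uses the real branch of the Lambert $W$ function from \texttt{scipy} and the arbitrary choices $C_1=C_2=1$ and $\nu=\eta=0.5$, for which the expected power-law exponent is $\alpha_w=1-\eta/2=0.75$.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw
rng = np.random.default_rng(3)

eta, nu, C1, C2 = 0.5, 0.5, 1.0, 1.0       # illustrative parameters

def tau_w(sigma):
    # optimal waiting time as a function of the uncertainty sigma
    w = lambertw(C2 * sigma ** (2 * (nu - eta) / (nu * (eta + 1)))).real
    return C1 * sigma ** (2 / eta) * w ** (-1 / eta)

sigma = rng.exponential(1.0, size=100000)  # P(sigma) = exp(-sigma)
taus = tau_w(sigma)                        # heavy-tailed optimal waiting times
\end{verbatim}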
\paragraph{\emph{Independent models.}}
The simplest way to model bursty activity sequences is by sampling inter-event times from a given distribution. Note that although this method provides heterogeneous activity patterns, it provides neither any understanding about the roots of bursty phenomena nor correlations between consecutive events, which thus remain independent. This method has been used in Refs.~\cite{Horvath2014Spreading, Jo2014Analytically} to study the effects of node and link burstiness on the speed of information spreading in temporal networks, or in Ref.~\cite{Karsai2012Universal} to highlight the spurious behaviour of the autocorrelation function in case of heterogeneous but independent activity signals and to detect the presence of temporal correlations common in empirical cases.
\paragraph{\emph{Voter model.}}
An alternative definition of the voter model has been suggested by Fern\'andez-Gracia \emph{et al.}~\cite{FernandezGracia2013Timing} where temporal heterogeneities arise as a consequence of new update rules. In the model each node gets updated with a probability that depends on the time since the last event of the node took place. Here, an event can be an update attempt (exogenous update) or a change of state (endogenous update). In their paper they find that both update rules can give rise to power-law inter-event time distributions. If the update probability of a node is given as $p(\tau)=b/\tau$, where $\tau$ is the time since the last update, then the inter-event time distribution emerges in the form $P(\tau)\sim \tau^{-b}$. In addition it is shown that for the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite size limit, while for the endogenous update there exists a coarsening process driving the system toward consensus configurations.
\paragraph{\emph{Rank shift model.}}
This model, proposed in Ref.~\cite{Ratkiewicz2010Characterizing}, addresses the popularity dynamics of online content, which emerges with heterogeneous patterns in terms of the number of citation events. In this model each task is placed in a list and assigned a popularity, implemented as a citation probability that decays as a power law of the task's actual position in the list. In addition the model accounts for exogenous effects that may change the popularity of a task suddenly and drastically. The simplest way to implement this mechanism is to introduce a re-ranking probability into the ranking model: with this probability, at each iteration every item is moved towards the front of the list to a new position chosen randomly with equal probability between $1$ (the top position) and the task's current rank $j$. As a consequence of these two mechanisms the model induces power-law distributed numbers of citations of tasks, with exponents close to the ones empirically observed in Wikipedia citations and online crawling data. It should be noted that this model does not propose an explanation for emergent bursty patterns in terms of time, and also that it is somewhat similar to the one in Ref.~\cite{Vajna2013Modelling}, with the important difference that its definition is not based on a priority queue.
\paragraph{\emph{Random reference models.}}
A set of models has recently been proposed that are not generative but nevertheless address temporal bursty behaviour. These models~\cite{Karsai2011Small,Miritello2011Dynamical,Holme2012Temporal} apply various random shuffling techniques on real event sequences to obtain statistical reference models where selected temporal or structural correlations are removed from the system. These modelling techniques are commonly used to remove bursty activity patterns from empirical signals to study their effects on data-driven models of dynamical processes. These models, in a way, ``Poissonise'' a temporal event sequence either by shuffling times between events or by assigning a random time to each event selected uniformly from a given period $T$. The use and advantage of these models will be discussed in detail in Chapter~\ref{chapter:processes}.
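The two randomisation steps mentioned above can be sketched as follows; the toy event list and the observation window are placeholders for a real temporal-network data set.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(4)

events = [(2.0, 0, 1), (3.5, 1, 2), (10.0, 0, 1), (60.0, 2, 3)]   # (time, i, j)
T = 100.0                                                          # observation window

def shuffle_times(events, rng):
    """Permute the time stamps among the events: temporal correlations are
    destroyed while the interacting pairs and the set of times are kept."""
    times = rng.permutation([t for t, _, _ in events])
    return sorted((t, i, j) for t, (_, i, j) in zip(times, events))

def randomize_times(events, T, rng):
    """Assign to each event a uniformly random time in [0, T] ('Poissonisation')."""
    return sorted((rng.uniform(0, T), i, j) for _, i, j in events)
\end{verbatim}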
\paragraph{\emph{Self-organised critical systems.}}
Finally, there is a very interesting proposition by Tang \textit{et al.}~\cite{Tang2010Stretched} to model retrospectively the bursty dynamics of the emergence of wars in ancient China. They argue that the dynamics of wars is driven mostly by short-term correlations between consecutive events and that it can be related to the Bak-Sneppen evolutionary model with self-organised criticality.
\section{Models of link activity}
\label{sec:linkbrstmodel}
\subsection{Interacting priority queues}
The bursty models we have discussed so far in Section \ref{sec:indivmodels} attempt to model the action dynamics of single individuals, while neglecting the fact that the tasks are commonly carried out in human-to-human interactions. Examples can be found in any type of communication or communication-driven activities, where as a consequence bursty patterns appear between connected peers, and thus they are associated more with links~\cite{Karsai2012Correlated} than with individual dynamics. This problem was addressed by Oliveira and Vazquez~\cite{Oliveira2009Impact} who introduced a model based on the definition by Barab\'asi~\cite{Barabasi2005Origin} but considering two priority queues $A$ and $B$ with fixed sizes $l_A$ and $l_B$, respectively. They assumed two types of tasks to be present in each queue, a single interacting task $I$ and $l_j-1$ non-interacting tasks $O$ with $j=A,B$. Each task is assigned a random priority $x$ drawn from the uniform distribution in $[0,1]$ to obtain
\begin{equation}
f_{ij}(x)=\left\{
\begin{array}{ll} 1,& i=I\\
(l_j-1)x^{l_j-2} ,& i=O,\\
\end{array}
\right.
\end{equation}
where $f_{Oj}(x)$ denotes the distribution of the highest priority among the $l_j-1$ non-interacting tasks. Then the $l_j-1$ non-interacting tasks with priorities uniformly distributed in $[0,1]$ can be reduced to one non-interacting task with priority drawn from $f_{Oj}(x)$, as only the highest priority task is relevant. Initially, the priorities are assigned to the tasks as described earlier. In each time step, both agents select the task with highest priority in their lists. If both agents select the task $I$ then it is executed, otherwise each agent executes the task $O$. Each executed task is assigned a new priority drawn from the distribution $f_{ij}(x)$. This process leads to bursty patterns of task execution, which in turn induces power-law distributed inter-event times with an exponent $\alpha$ depending on the length of the priority queues. The exponent follows qualitatively the relation $\alpha=1+1/\max\{l_j-1\}$. Its maximum is $\alpha=2$ if the queues consist of two tasks, then $\alpha=3/2$ for three tasks, and $\alpha$ approaches $1$ as $l$ increases. This suggests that there are not only two ``universal'' exponent values, e.g., as proposed in Ref.~\cite{Vazquez2006Modeling}, but $\alpha$ can take several other values depending on the length of the priority queues. Note that the authors provided a definition of coarse-grained models to achieve large-scale simulations with large inter-event times and reliable scaling exponent estimations. They also showed that the emerging cutoff of the inter-event time distribution is a consequence of the finite simulation window, thus the power-law functional form of $P(\tau)$ holds asymptotically.
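A minimal simulation of this AND-type two-queue process can be written as follows; the queue lengths and the number of steps are illustrative choices, and the $l_j-1$ non-interacting tasks are represented, as above, by a single effective task whose priority is the maximum of $l_j-1$ uniform random variables.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(5)

l, steps = [3, 3], 10 ** 6                   # queue lengths l_A, l_B and iterations

def new_priority(kind, lj, rng):
    # uniform for the I task; maximum of l_j - 1 uniforms for the effective O task
    return rng.random() if kind == 'I' else rng.random() ** (1.0 / (lj - 1))

x_I = [new_priority('I', l[j], rng) for j in range(2)]
x_O = [new_priority('O', l[j], rng) for j in range(2)]
tau, last = [], None
for t in range(steps):
    if all(x_I[j] > x_O[j] for j in range(2)):       # both agents select task I
        if last is not None:
            tau.append(t - last)                     # inter-event time of I executions
        last = t
        x_I = [new_priority('I', l[j], rng) for j in range(2)]
    else:                                            # otherwise both execute task O
        x_O = [new_priority('O', l[j], rng) for j in range(2)]
\end{verbatim}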
An extended definition of the above model was proposed by Min \textit{et al.} who considered two scalable interaction protocols and a network of individuals with priority queues~\cite{Min2009Waiting}. They identified the above model definition as the AND-type protocol where $I$ tasks are executed only if they obtain the largest priority at the same time. They show that this protocol commonly leads to frozen states when applied to queues connected in a network. They argue, however, that an OR-type protocol would be more reasonable for tasks that require simultaneous actions of two or more individuals though the action can be initiated primarily by one of them. Examples are phone calls or instant messages where the task of answering an incoming interaction usually jumps to the top of one's priority queue immediately when one receives a call or a message. The iteration of the model starts by choosing a random node $i$. If its highest priority task is $I_{ij}$, the two tasks $I_{ij}$ and $I_{ji}$ are executed regardless of the priority value of $I_{ji}$; otherwise, if $O_i$ is the highest priority task, only that is executed. Priorities of all the executed tasks are randomly reassigned. This model process does not lead to a globally frozen state, yet $P(\tau)$ exhibits power-law tails with an exponent that depends on the network size $N$ as well as on the network topology in a diverse way.
\subsection{Models with combined mechanisms}
Wu \emph{et al.}~\cite{Wu2010Evidence} proposed a combined model of Poissonian and priority-induced bursts to explain the bimodal shape of the inter-event time distribution, typical in SMS communications. They argue that this phenomenon is a consequence of the interplay between processes effective at different time scales and determined by three important ingredients, namely (a) a Poisson process responsible for the initiations of bursts, (b) execution of competing tasks of an individual, and (c) interactions. They identify two types of tasks, an interaction task (I) and other tasks (O), and they consider whether each interaction task is an initiation or a response action. Based on data analysis of individual SMS interaction patterns, they found that the inter-event time distribution of an individual can be described best by a power-law distribution if $\tau<\tau_0$ and by an exponential if $\tau>\tau_0$, where $\tau_0 \simeq 20$ min.
Based on these observations they propose a model defined as two interacting priority queues to mimic the interaction dynamics of two individuals. They first consider the priority queues of tasks of individuals in which the tasks in the queue are executed one by one with probability $\Pi=x^{\alpha}$, in an order ranked by their randomly chosen priority $x\in (0,1)$. In addition they introduce a processing time $t_i$ that sets the time scale at which tasks are executed and added to the list. Interacting tasks (I) are added to the list with a small rate $\lambda_i=\lambda t_i$ in a Poissonian fashion. Next they consider the interaction between individuals: This occurs when one of the agents A (or B) executes an I-task, which will add an I-task to the list of B (A) with a corresponding probability $P_B$ (or $P_A$). All the I-tasks, whether initiated randomly by an individual or created in response to the other agent, are put on the waiting list with a random priority $x$ and subsequently compete for execution with the O-tasks. Thus, the model is controlled by three important parameters for each user, i.e., $\lambda_i$, $\alpha_i$, and $P_i$, each of which is related to the Poisson process, decision making, and interaction, respectively. Fitting these parameters with empirical sequences allows the model to successfully reproduce the bimodal shape of inter-event time distributions between interactions of an individual. This suggests that all three ingredients are necessary for explaining this phenomenon.
\section{Network models of bursty agents}
\subsection{Zero-crossing random walk model}
Beyond a single node or link dynamics, other models have been proposed to simultaneously capture the topological and temporal features of agents interacting in a larger network. The first among these models was proposed by G\"{o}tz \emph{et al.}~\cite{Goetz2009Modeling} whose aim was to simulate the posting dynamics of bloggers, which in turn induces a reference network between blogs and postings with particular topological features. In their model they associate a blogger with a random walker in one-dimensional space, who posts each time it returns to its original position; hence it is called the zero-crossing model. In the beginning of the process a blogger $A$ starts a walk from position $0$ and in each time step, with probability $1/2$, adds or subtracts a unit from its position. Whenever the position of $A$ becomes $0$ it creates a post $P$ which, with probability $1-p_L$, is a new conversation, or otherwise a comment on another post. In the latter case, with probability $1-p_E$, the blogger comments on one of the posts of a blog it has already commented on (exploitation mode), or otherwise chooses a new blog to comment on (exploration mode). Subsequently, within the selected blog $B$, a post $Q$ to refer to is chosen with a probability weighted by the number of times this post has previously been referred to. Finally, for each post $R$ reachable from post $P$ and for each path $p$ from $P$ to $R$, a link is created from post $P$ to post $R$ with probability $p_{LE}^{|p|}$. Here $p_{LE}$ denotes the probability for expanding a link and $|p|$ is the path length. The authors show that the structure of the simulated blog post network emerges with power-law in-degree and cascade-size distributions, while the inter-event times between consecutive posts of a blogger are distributed as $P(\tau)\sim \tau^{-3/2}$. In addition the emerging activity signals are self-similar with fractal dimension 0.5, comparable to the empirical observations presented by the authors.
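The temporal part of the model, i.e., the zero-crossing mechanism generating the posting times of a single blogger, can be sketched in a few lines (the network-building steps are omitted); the first-return times of the walk reproduce $P(\tau)\sim\tau^{-3/2}$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(6)

steps = 10 ** 6
pos, last_zero, tau = 0, 0, []
for t in range(1, steps + 1):
    pos += 1 if rng.random() < 0.5 else -1   # unbiased walk in one dimension
    if pos == 0:                             # zero crossing: the blogger posts
        tau.append(t - last_zero)
        last_zero = t
# inter-event times between consecutive posts follow P(tau) ~ tau^(-3/2)
\end{verbatim}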
\subsection{Reinforcement models of group formation}
\label{sec:brstmodelgroups}
A model based on reinforcement mechanisms was proposed by Stehl\'e \emph{et al.}~\cite{Stehle2010Dynamical} and Zhao \emph{et al.}~\cite{Zhao2011Social} to simulate social group formation in bursty contact networks. Their model simulates interacting agents forming disconnected groups, which evolve by successive mergings and splittings. These actions are driven by underlying reinforcement processes summarised by the authors as ``the longer an agent interacts with a group, the less that agent is likely to leave the group and the more an agent is isolated the less likely the agent is to interact with a group.'' More precisely, their model considers $N$ agents, which can either be isolated or belong to a group defining an instantaneous contact network. Each agent is characterised by two variables: the number $p_i$ of actually contacted agents (its group size minus one) and the time $t_i$ when $p_i$ changed for the last time. At each time step $t$ an agent $i$ is randomly chosen. If the agent is isolated, it changes state with probability $b_0f(t,t_i)$ and chooses another isolated agent $j$ with probability $\Pi(t,t_j)$, such that they form a pair and update their state variables $p_i$, $t_i$, $p_j$, and $t_j$. On the other hand if $i$ is part of a group it changes its state with probability $b_1f(t,t_i)$. When the state changes, the agent can become isolated with probability $\lambda$, or otherwise it introduces an isolated node $j$ selected with probability $\Pi(t,t_j)$. If a node leaves or a new node is introduced to a group, all participating nodes update their state variables accordingly. The parameters $b_0$ and $b_1$ determine the tendency of the agents to change their state between being isolated and in a group, while $\lambda$ controls the tendency either to leave groups or, on the contrary, to make them grow. In addition, the model dynamics strongly depends on the functions $f$ and $\Pi$. For simplicity, choosing them to be identical and to decay as a power-law like
\begin{equation}
f(t-t_0)=\Pi(t-t_0)=(1+\tau)^{-1}, \hspace{.2in} \tau=(t-t_0)/N
\end{equation}
leads to system dynamics governed by reinforcement processes. This way the modelled system can reproduce several realistic features observed in real interaction data, such as power-law distributed interaction durations, inter-event times, and durations of triadic interactions, as well as the stability of groups decreasing with their size.
\subsection{Evolving networks with interacting priority queues}
Jo \textit{et al.}~\cite{Jo2011Emergence} introduced an evolving network model, which integrates different interaction strategies, inspired by the Kumpula model for social network evolution~\cite{Kumpula2007Emergence}, with interacting priority queues defined above. In their model $N$ agents are given, each with a priority queue of two tasks $I$ and $O$ with priorities randomly assigned from a uniform distribution. At each time step $t$ every node selects its highest priority task. If it is an $I$-task, the node $i$ selects a target node for interaction either (a) from the whole population with probability $p_{_{GA}}$, (b) from its next nearest neighbours with probability $p_{_{LA}}$, or (c) from its neighbours with probability $1-p_{_{GA}}-p_{_{LA}}$ weighted by their link weight $w_{ij}$. The next nearest neighbour $j$ of the node $i$ is defined as a node satisfying $\{ t_{ik},t_{jk} \} =\{ t-2, t-1 \}$ with an intermediate node $k$, implying that $i$ and $k$ interacted at time $t-2$, and $k$ and $j$ interacted at time $t-1$. In both cases of (a) and (b), links between nodes are created with unit weight either randomly, representing the {\em focal closure} mechanism, or by closing a triangle, representing the {\em triadic closure} mechanism. The case of (c) represents the reinforcement mechanism as existing links are selected and their weights are reinforced. After a target node $j$ is selected, an interaction between $i$ and $j$ takes place if the target $j$ has not been involved in any event at this time step $t$. After an interaction the priority of the $I$-task of node $i$ is updated. In addition, at each time step each node can forget all of its existing connections with probability $p_{_{ML}}$, i.e., memory loss, to become isolated. Measuring the inter-event times between two consecutive $I$-tasks of a given node, the system exhibits a broad inter-event time distribution with an exponential cutoff. In addition the emerging network structure shows several realistic features such as Granovetterian community structure~\cite{Granovetter1973Strength}, high clustering, assortative degree correlations, and broad link-weight distributions. It is worth noting that similar interaction dynamics has been observed in a variant of this model without priority queues, possibly suggesting that the source of heterogeneous dynamical behaviour could also be a consequence of the link-weight reinforcement process.
\subsection{Dynamic networks with memory}
A conceptually different type of evolving network model was proposed by Colman and Greetham~\cite{Colman2015Memory} who used a memory kernel to induce bursty interaction sequences of agents in an evolving network. To generate event sequences with power-law distributed inter-event times they defined a discrete-time stochastic process, which generates an infinite sequence of binary random variables $X_t$ taking values $1$ ($0$) if an event takes place at time $t$ (or not). To determine $X_t$ an agent has a memory capacity of size $M$, represented by $m_n(t)$ for $n=1,2,\cdots,M$. Each $m_n(t)$ can have a value of $1$ or $0$. Using the definition of $k_t=\sum_{n=1}^M m_n(t)$, the kernel $f(k_t)$ determines the probability to execute an event in time step $t$. The new event occurs, i.e., $X_t=1$, with probability $f(k_t)$, otherwise no event occurs, i.e., $X_t=0$. The authors introduced two possible memory updating mechanisms: One is, for a randomly chosen $n'$, to set $m_{n'}(t+1)$ to the value of $X_t$, while keeping all the others unchanged, i.e., $m_n(t+1)=m_n(t)$ for $n\neq n'$. The other is to shift $m_n(t)$ by one position, i.e., $m_n(t+1)=m_{n+1}(t)$ for $n=1,\cdots,M-1$, and to set $m_M(t+1)=X_t$. In this way, the memory keeps the $M$ latest realisations, similarly to the sequential memory loss mechanism proposed in Ref.~\cite{Jo2015Correlated}. They propose to use a linear probability kernel
\begin{equation}
f(k_t)=\frac{k_t+x}{M+x+\epsilon},
\label{eq:fkt}
\end{equation}
where $x$ and $\epsilon$ are positive real numbers. If $x$ is large the system approaches a Bernoulli process, while if $x$ is small relative to $M$ the inter-event time distribution asymptotically follows a power law, $P(\tau)\sim \tau^{-(2+x)}$. Using this dynamics for each agent they introduce an evolving network model of $N$ nodes and $E$ edges, where each node is assigned a fitness value $x$ sampled from a probability distribution $\rho(x)$. The network is initially a random structure; at each step a node $i$ is randomly selected with a probability given by its attachment kernel $\Pi(i,x_i)$, a second node $j$ is selected in the same way, and an edge is created between them. At the same time the oldest edge is removed from the network, keeping the average degree constant. Considering the attachment kernel as
\begin{equation}
\Pi(i,x_i)=\frac{k_i+x_i}{\sum_j(k_j+x_j)}
\label{eq:Piix}
\end{equation}
with $k_i$ denoting the degree of $i$, they show that if $\rho(x)$ follows a power law the model process induces a scale-free structure and, since the oldest link is always removed, $M=E$. By setting $x_i+\epsilon=N\langle x \rangle/2$, Eqs.~(\ref{eq:fkt}) and~(\ref{eq:Piix}) become equivalent. Thus if the fitness distribution is chosen such that $\langle x \rangle \ll \langle k \rangle$, the interacting nodes will exhibit bursty interaction patterns.
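A minimal sketch of the single-agent process with the shift-type memory update and the linear kernel of Eq.~(\ref{eq:fkt}) reads as follows (the parameter values are illustrative):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(7)

M, x, eps, steps = 100, 0.5, 0.5, 10 ** 6
m = np.zeros(M, dtype=int)                 # memory of the M latest realisations
events = []
for t in range(steps):
    f = (m.sum() + x) / (M + x + eps)      # linear probability kernel f(k_t)
    X_t = 1 if rng.random() < f else 0
    if X_t:
        events.append(t)
    m = np.roll(m, -1)                     # shift-type memory update
    m[-1] = X_t

tau = np.diff(events)                      # asymptotically P(tau) ~ tau^(-(2+x))
\end{verbatim}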
\subsection{Activity driven network models with bursty nodes}
Activity driven models of time-varying networks constitute a family of generative temporal network models, which can be used to simulate synthetic interaction sequences of model agents with arbitrary level of complexity. In its simplest definition~\cite{Perra2012Activity} the model assumes that there are $N$ independent agents, each assigned an activity potential $a_i$ drawn from an arbitrary distribution. The activity potential describes the probability that an agent initiates an interaction with a randomly selected other agent at each time step. Initialising the model with disconnected agents and simulating their interactions over a transient period, one can cumulate the emerging interactions and obtain an aggregated network structure. It has been shown that this cumulated network structure emerges with a degree distribution, which scales as the originally assumed distribution of activities. Further extension of the model leads to emerging weight heterogeneities~\cite{Karsai2014Time}, communities, weight-topology correlations~\cite{Laurent2015From}, etc., just to mention a few examples to demonstrate the potential of this modelling framework.
This model has been extended in two ways to consider agents with bursty activity patterns, in order to understand the effects of non-Poissonian dynamics on the emerging network structure. Although burstiness is not an emergent property in any of these models, we briefly discuss them as they may be useful for future studies of the effects of burstiness on emerging structures or ongoing dynamical processes.
In one definition of Moinet \textit{et al.}~\cite{Moinet2015Burstiness, Moinet2016Aging} the timings of node activities are determined by a renewal process. Each agent $i$ is assigned with a time-dependent activity $a_i(\tau)$, which depends on the time $\tau$ passed since its last activation. The activation of each node follows a renewal process governed by a waiting time distribution $P(\tau, c_i)$, where $c$ is a parameter determining the heterogeneity of the activation rate of the agents, and assumed to be randomly sampled from a distribution $\eta(c)$. Assuming a power-law waiting time distribution
\begin{equation}
P(\tau,c)=\alpha c(c\tau+1)^{-(\alpha+1)}, \hspace{.2in} 0<\alpha <1
\end{equation}
and a power-law heterogeneity distribution
\begin{equation}
\eta(c)=\frac{\beta}{c_0}(c/c_0)^{-(\beta+1)}
\end{equation}
with $\beta>\alpha$, the degree distribution of the emerging network takes the form
\begin{equation}
P_t(k)\sim (c_0 t)^{\beta}(k-\langle r \rangle_t)^{-1-(\beta/\alpha)}
\label{eq:ptk}
\end{equation}
where $\langle r \rangle_t$ is the average number of times a node becomes active up to time $t$. Equation~(\ref{eq:ptk}) leads to a relation between the power-law exponents of the degree, waiting time, and heterogeneity distributions as $\gamma=1+\beta/\alpha$, indicating dependencies between the topological properties of the network and the distribution of renewal events. Based on this the authors show that the model is affected by ageing effects when the waiting time distributions have a power-law tail with $\alpha<1$, which they demonstrate using numerical simulations and empirical results measured in scientific co-publication networks.
In another work Ubaldi \textit{et al.}~\cite{Ubaldi2016Burstiness} also built on the activity-driven network model but extended it with two mechanisms. First of all they introduced bursty dynamics directly by assuming that inter-event times $\tau_i$ for node $i$ were drawn from a power-law distribution of the form
\begin{equation}
P(\tau_i)=\alpha\,\xi_i^{\alpha}\,\tau_i^{-(1+\alpha)}, \hspace{.2in} \tau_i \in [\xi_i,+\infty ).
\end{equation}
Here $\xi_i$ is a lower time cutoff for the minimum inter-event time of node $i$, which in turn determines its characteristic time-scale as its activity $\xi_i\sim 1/a_i$. If $\xi_i$ is distributed as a power-law $P(\xi_i)\sim \xi_i^{\nu-1}$, for small $\xi_i$ values the induced node activities will also be power-law distributed with a corresponding exponent of $-(\nu+1)$. In this way the burstiness directly governs the evolution of the network. Another mechanism considered by the authors is a memory driven tie allocation process that enhances repeated interactions of already existing ties. They show analytically and by means of numerical simulations that the simultaneous control of the relative strength of burstiness and the tie reinforcement leads to a non-trivial phase diagram determined by the interplay of the two processes. They found two different dynamical regimes, one in which the burstiness governs the evolution of the network, and another in which the dynamics is completely determined by the process of tie allocation. Interestingly, if the reinforcement of previously activated connections is sufficiently strong, the burstiness governs the network evolution even in the presence of large inter-event time fluctuations.
\chapter{Dynamical processes on bursty systems}
\label{chapter:processes}
The bursty temporal patterns in human interactions are important not only for understanding the dynamics of the egocentric and global social networks but also because they have indisputable effects on the evolution of dynamical processes taking place on them, like random walks, information diffusion, epidemic and social contagion, or various types of evolutionary games, just to mention a few. In the earlier studies of these processes it was commonly assumed that they evolve over static structures. In these cases links, representing interactions between nodes, were always present in the network, while the question was about the effects of structural heterogeneities and correlations on the final outcome of the process in question~\cite{Barrat2008Dynamical}. However, the recent availability of large digital datasets recording temporally detailed interactions of individuals led to the advent of the new field of temporal networks~\cite{Holme2012Temporal}. In this representation interactions are not taken to be static, but assumed to vary in time and allow information to pass between connected nodes only at the time of their interactions. Parallel to the foundation of the methodologies, models, and theories of temporal networks, several studies addressed the effect of time-varying interactions on the evolution of dynamical processes.
Importantly, it has been found that the bursty nature of human interactions has dramatic effects on the unfolding of several modelled processes. First reported observations were the results of data-driven simulations, where synthetic dynamical processes were simulated on real and inherently bursty interaction sequences~\cite{Karsai2011Small, Miritello2011Dynamical, Rocha2011Simulated}. These observations together with early theoretical results~\cite{Vazquez2007Impact, Kivela2012Multiscale, Iribarren2009Impact} disclosed a main puzzle, as burstiness was found to slow down the emergence of several types of global phenomena, while in some other cases it appeared to show opposite effects, leading to faster scenarios as compared to the Poissonian case. In order to address these seemingly contradicting observations two general modelling directions have been considered. On one hand, for data-driven simulations a new modelling concept using random reference models (RRMs) has been proposed (for a brief discussion see Section~\ref{sec:otherindivmods}). These models define several ways of shuffling interaction sequences to remove temporal and structural correlations in a controlled way for identifying their effects on the simulated dynamical processes. On the other hand, more formal approaches consider the effects of bursty characteristics, like the heterogeneous inter-event time distribution, residual times, and local temporal correlations to explain the observed behaviour.
All these results tend to draw a very heterogeneous picture, with some comprehensive understanding about the effects of bursty interactions on dynamical processes, but leaving several other problems open for further research. We have learned that the observed effects strongly depend on the actual datasets in use and the model chosen for investigation. Thus, instead of providing a closed theory about the effects of bursty patterns on dynamical processes, we first discuss the different characteristics of bursty behaviour, which were found to be relevant in various studies, and then we present the main findings on different types of dynamical processes, which were investigated on bursty temporal networks of human interactions.
\section{Bursty characteristics controlling dynamical processes}
\subsection{Inter-event time and residual time distributions}
In human interactions the bursty dynamics has been characterised by a broad inter-event time distribution $P(\tau)$, which commonly appears as a power-law, potentially with an exponential cutoff or in a log-normal form, as discussed in Chapter~\ref{chapter:emp}. It indicates that individual dynamics are typically non-Poissonian with events being separated by heterogeneous inter-event times, unlike in case of Poisson dynamics with exponentially distributed inter-event times. In this Section we are mostly interested in effects induced by non-Poissonian dynamics, while the corresponding Poissonian system will be used as a reference. Any dynamical process that unfolds in bursty temporal networks can be affected by the broad $P(\tau)$ such that short inter-event times tend to help the rapid update of interacting nodes while long inter-event times act in the opposite way, keeping information locally stuck for long periods of time.
These effects can be easily verified by using appropriate random reference models in data-driven simulations on bursty temporal networks. One of the most frequently used RRMs takes the event sequence of interacting individuals and shuffles the interaction times between events~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical}. Shuffling in this way destroys any temporal correlation in the original event sequence, including the bursty temporal patterns, as it assigns a random time to each event over the observation time window $T$. Note that an equivalent method would be to pick a random time for each event from the window $T$. Using both of these RRMs one can obtain a sequence of interactions, which shows Poissonian dynamics and exponentially distributed inter-event times, while keeping the network structurally unchanged. This removal of bursty patterns in most observations speeds up the emergence of global phenomena, as demonstrated in Fig.~\ref{fig:SbSW}(a) (time-shuffled curve with blue squares), which in turn suggests that burstiness actually slows down the dynamical process~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical, Gauvin2013Activity, Delvenne2015Diffusion, Karimi2013Threshold, Backlund2014Effects}. On the other hand, some exceptions have also been reported~\cite{Rocha2013Bursts, Rocha2011Simulated, Takaguchi2013Bursty}, where the same procedure indicates that burstiness accelerates some diffusion processes. As we will discuss below, heterogeneous inter-event times have different effects on the early and late stage dynamics, which goes some way towards explaining these seemingly contradictory observations.
\begin{figure}[!t]
\centering
\includegraphics[width=.8\columnwidth]{figs/Karsai2011Small.pdf}
\caption{Demonstration of RRMs in case of Susceptible-Infected or SI spreading on mobile communication networks. (a) Average fraction of infected nodes $\left<I(t)/N\right>$ at each point in time for the original event sequence ($\circ$) and null models: equal-weight link-sequence shuffled DCWB ($\lozenge$), link-sequence shuffled DCB ($\vartriangle$), time-shuffled DCW ($\square$) and configuration model D ($\triangledown$). (b) Distribution of full prevalence times $P(t_f)$ due to randomness in the initial conditions. (\emph{Source:} Adapted from Ref.~\cite{Karsai2011Small} under Copyright (2011) by the American Physical Society.)}
\label{fig:SbSW}
\end{figure}
\subsubsection{The waiting-time paradox}
\label{sec:wtp}
One simple mathematical argument, known as the \emph{waiting-time paradox} (a.k.a.\ the bus paradox or hitch-hiker's paradox), provides a simple explanation of the effect of temporal heterogeneity on the speed of dynamical processes. It concerns a single point process capturing the interaction dynamics of an individual $i$ (or a social tie, or the arrival of buses to a stop, etc.), where events are assumed to be independent, following each other with inter-event times sampled from the distribution $P(\tau)$. The waiting-time paradox states that if information (random walker, virus in epidemics, rumour, etc.) arrives at node $i$ from another node, it needs to wait on average longer than half of the average inter-event time before it can leave node $i$ and pass to another node $j$. This is true for point processes with any level of temporal heterogeneity, including Poisson and bursty systems, even if the arrival time is uniformly distributed between two consecutive events of node $i$.
In order to understand better this paradox we need to recall the dependence we already discussed in Section \ref{sec:iet_rt_wt} between the residual time $\tau_r$ and the inter-event time distribution $P(\tau)$~\cite{Kivela2012Multiscale}. Let us assume that we have two connected nodes $i$ and $j$ and $i$ receives information at a uniformly random point in time $t_0$. In this setting the residual time $\tau_r$ is defined as the random variable that represents the time between this random time $t_0$ of information receiving and the time of the next event occurring between $i$ and $j$. As we have seen in Eq.~(\ref{eq:rst_iet}), the residual time $\tau_r$ of the link is obviously determined by the inter-event time distribution of events of the actual link. Note that the inter-event times and residual times have the same distributions when the process is Poissonian, while for bursty processes they both appear with power-law tails with exponents related as $P(\tau)\sim\tau^{-\alpha}$ and $P(\tau_r)\sim\tau_r^{-(\alpha-1)}$ \cite{Lambiotte2013Burstiness}.
We have also seen in Eq.~(\ref{eq:taurderiv}) that the mean residual time can be derived as
\begin{equation}
\langle \tau_r \rangle=\frac{\langle \tau^2 \rangle}{2\langle \tau \rangle}.
\end{equation}
Consequently, the value of $\langle \tau_r \rangle$ depends heavily on the first and the second moment of the inter-event time distribution, or equivalently, on the average and fluctuation of event rates taking place on a link. In case the point process is maximally regular, i.e., $P(\tau)$ is a delta function, $\langle \tau^2 \rangle = \langle \tau \rangle^2$ and we obtain the intuitive result
\begin{equation}
\langle \tau_r \rangle = \frac{\langle \tau \rangle}{2}.
\end{equation}
However, in case of a Poisson process with $P(\tau)=\langle \tau \rangle^{-1} e^{-\tau / \langle \tau \rangle}$ we obtain $\langle \tau^2 \rangle = 2 \langle \tau \rangle^2$, which leads to a twice larger mean residual time when compared to the regular case. If the inter-event time distribution is broader than exponential, e.g., a power-law distribution, the deviation from the regular case is even bigger. We consider $P(\tau)=(\alpha-1)\tau_0^{\alpha-1}\tau^{-\alpha}$ with the lower bound of inter-event times, $\tau_0$. If the power-law exponent $\alpha$ is larger than $3$, both $\langle \tau \rangle$ and $\langle \tau^2 \rangle$ are finite, and the relation
\begin{equation}
\langle \tau_r \rangle = \frac{\alpha-2}{2(\alpha-3)}\tau_0
\end{equation}
is obtained. On the other hand, if $\alpha\leq 3$, the diverging $\langle \tau^2 \rangle$ leads to the diverging mean residual time $\langle \tau_r \rangle$~\cite{Kivela2012Multiscale}.
As we are primarily interested in the effects of the shape of the inter-event time distribution on $\langle \tau_r \rangle$, it is natural to use the Poisson model as a reference. Thus we may consider a normalised mean residual time as it has been introduced in Eq.~(\ref{r_definition}). This quantity measures the ratio of the second moment to the square of the first moment of the inter-event time distribution. Generally, the broader the distribution is, the larger the second moment is as compared to the square of the first moment. Hence, Eq.~(\ref{r_definition}) indicates that the more bursty an event sequence is, the longer the residual times are. In case of a power-law $P(\tau)$ distribution, this ratio diverges as the power-law exponent $\alpha$ approaches $3$, while it decreases with increasing $\alpha>3$, reaching $1$ when $\alpha=2+\sqrt{2}$. Thus, for power-law inter-event time distributions in the regime $3<\alpha<2+\sqrt{2}$, the mean residual times are longer than those for the Poissonian reference case.
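The waiting-time paradox is easy to verify numerically. The sketch below (with illustrative parameters) samples exponential and power-law inter-event times, measures the mean residual time seen by a uniformly random arrival, and compares it with $\langle\tau^2\rangle/(2\langle\tau\rangle)$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(8)

def mean_residual(tau, rng, n_obs=10 ** 5):
    """Empirical mean residual time seen by a uniformly random arrival."""
    t = np.cumsum(tau)
    arrivals = rng.uniform(0, t[-1], n_obs)
    nxt = t[np.searchsorted(t, arrivals)]       # first event after each arrival
    return np.mean(nxt - arrivals)

n = 10 ** 6
tau_exp = rng.exponential(1.0, n)                          # Poissonian, <tau> = 1
alpha, tau0 = 3.5, 1.0
tau_pl = tau0 * (1 - rng.random(n)) ** (-1 / (alpha - 1))  # P(tau) ~ tau^(-alpha)

for tau in (tau_exp, tau_pl):
    print(mean_residual(tau, rng),                         # empirical value
          np.mean(tau ** 2) / (2 * np.mean(tau)))          # <tau^2>/(2<tau>)
\end{verbatim}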
\subsubsection{Ordering of events}
\label{sec:OE}
In social networks the ties may show very different activity levels, which in turn can lead to different residual time distributions for each link. This has consequences for several dynamical processes where the ordering and the timing of interactions determine the path of diffusion.
\emph{Random walk processes} are considered generic models for diffusion and are commonly studied on static or temporal networks. One of the variants of these models is defined on temporal networks and is called the \emph{greedy random walk}, where a single random walker is diffusing in the network hopping from one node to another, only at the time of their temporal interactions. The walker is greedy because after arriving at a node $i$ it leaves immediately via the next event towards some node $j$. In this way the probability that the walker at node $i$ will end up at a specific neighbour $j$ depends on the one hand on $P_{ij}(\tau_r)$, but also on the residual-time distribution $P_{ik}(\tau_r)$ of any other neighbour $k$ of $i$. If an event towards $k$ appears earlier than towards $j$, the random walker will necessarily hop to node $k$ instead of node $j$. The probability that the random walker will end up at $j$ can be written as:
\begin{equation}
P^{RW}_{ij}(\tau_r)=P_{ij}(\tau_r) \prod_{k\neq j}\int_{\tau_r}^{\infty}P_{ik}(\tau'_r)d\tau'_r,
\label{eq:rwtransp}
\end{equation}
where the product denotes the probability that no event appeared earlier than the one with $j$.
\emph{Spreading processes} are also largely influenced by the ordering and timing of the interactions~\cite{Scholtes2014Causality}, which determine time-respecting paths in a temporal structure, along which information, disease, or rumor can travel. Spreading processes are commonly modelled by assuming that the nodes of a network can be, e.g., in three mutually exclusive states: Susceptible (S), infected (I), or recovered (R). A susceptible node (S) can become infected (I) with the infection rate $\tilde \beta$ due to the interaction with an infected neighbour. The infected node can spontaneously recover with the recovery rate $\tilde \mu$, corresponding to the transition from I to R. In other model definitions, the infected node can return to the susceptible pool. Thus, what matters for spreading is that an interaction event of an infected node $i$ with a susceptible node $j$ occurs before the infected node recovers. This happens with the probability
\begin{equation}
P^{SIR}_{ij}(\tau_r)=P_{ij}(\tau_r) \int_{\tau_r}^{\infty}r_i(t)dt,
\end{equation}
where $r_i(t)$ is the probability that the infected node $i$ recovers after time $t$.
Consequently, in case of a random walk the relative behaviour of the residual time distributions on neighbouring links is important, while in case of spreading it is the relative behaviour of the residual time and recovery time distributions that matters. It indicates that not only the heterogeneous temporal behaviour but also the ordering of events is crucial~\cite{Lambiotte2013Burstiness}. If ties with low activity and bursty interaction dynamics occupy important positions in the network (like bridges between communities), they may have a large impact on the final outcome of spreading as they are able to keep the spreading local, inside well-connected communities with active links.
\subsubsection{Early and late time effects of burstiness}
Heterogeneous inter-event times may have different effects when considering the early and late time behaviour of a dynamical process. Recently much effort has been devoted to clarifying how the burstiness of events influences the spreading speed, partly by using empirical data analysis~\cite{Vazquez2007Impact, Karsai2011Small, Iribarren2011Branching, Miritello2011Dynamical, Rocha2011Simulated, Gauvin2013Activity} and partly by model calculations~\cite{Holme2012Temporal, Vazquez2007Impact, Iribarren2009Impact, Rocha2013Bursts, VanMieghem2013NonMarkovian, Jo2014Analytically}. In those studies the bursty character of an event sequence was found to slow down the late time dynamics of spreading. However, for the early time dynamics, conflicting results have been reported~\cite{Masuda2013Predicting}. In studies by Vazquez~\emph{et al.}~\cite{Vazquez2007Impact} and Karsai~\emph{et al.}~\cite{Karsai2011Small} the burstiness is found to slow down spreading, while other works point in the opposite direction~\cite{Iribarren2011Branching, Rocha2011Simulated, Rocha2013Bursts}. In the following we address separately the early and late time effects of heterogeneous temporal behaviour by means of modelling the deterministic Susceptible-Infected (SI) processes at these two extremes. The SI processes are a specific case of SIR models where recovery is not possible ($\tilde \mu=0$), thus once a node is infected it keeps its state until the end of the process. More specifically, we consider a deterministic SI process where the infection passes between connected nodes with probability $1$, which corresponds to the fastest possible spreading scenario, determined exclusively by the ordering and timing of temporal interactions.
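As an illustration of how the ordering and timing of events constrain spreading, the following toy sketch runs such a deterministic SI process (transmission probability $1$) over an arbitrary list of time-stamped contacts; it is a minimal demonstration rather than the code used in the cited studies.
\begin{verbatim}
import numpy as np

def deterministic_si(events, seed_node):
    """Deterministic SI spreading over time-ordered contact events (t, i, j)."""
    infected = {seed_node}
    infection_time = {seed_node: 0.0}
    for t, i, j in sorted(events):
        if i in infected and j not in infected:
            infected.add(j); infection_time[j] = t
        elif j in infected and i not in infected:
            infected.add(i); infection_time[i] = t
    return infection_time          # full prevalence time = max of the values

# toy usage: the 1-2 contact precedes 0-1, so node 2 cannot be reached from seed 0
events = [(0.5, 1, 2), (1.0, 0, 1)]
print(deterministic_si(events, 0))
\end{verbatim}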
\paragraph{\emph{Early time effects:}}
The early stage dynamics of a spreading process is mainly driven by small inter-event times, which generally leads to faster spreading for non-Poissonian dynamics as compared to Poisson-like cases. Since at the early time of spreading most of the nodes are still susceptible one can safely assume in the modelling that finite size effects do not play a role and an infected node can always find a susceptible neighbour. To understand this limit we follow the argumentation of Jo \emph{et al.} presented in detail in Ref.~\cite{Jo2014Analytically}.
\begin{figure}[!t]
\center
\includegraphics[width=.8\columnwidth]{figs/fig_branching.pdf}
\caption{Schematic diagram of the infections by an already infected node (a) and by a newly infected node (b). Vertical lines and vertical arrows denote activation timings of nodes and infections from infected nodes (solid horizontal line) to susceptible nodes (dotted horizontal line). Inter-event times $\tau$, $\tau'$, and $\tau''$ are independent of each other, and so are residual times $\tau_r$, $\tau_r'$, and $\tau_r''$.}
\label{fig:branching}
\end{figure}
Let us consider a system with $N$ nodes, which perform instantaneous interactions with dynamics modelled by a renewal process~\cite{Feller1971Introduction} with an arbitrary inter-event time distribution $P(\tau)$, which is the same for every node in the whole population. Note that $P(\tau)$ determines only the activation times of nodes, irrespective of whether the nodes are susceptible or infected. Whenever an infected node becomes active, it chooses randomly another node from the remaining $N-1$ nodes and if the chosen node is susceptible, then it becomes infected. Here the probability of choosing a susceptible node is $1$ in the infinite size system as the dynamics starts from a single infected node. The newly infected node remains inactive for its residual time $\tau_r$, after which it becomes active and randomly selects a node to infect. The early stage of the spreading dynamics is sensitive to the variation of the initial distribution of active or inactive nodes. Note that this model is related to a class of Bellman-Harris branching processes~\cite{Harris2002Theory, Iribarren2009Impact, Iribarren2011Branching}, which have been used to address similar phenomena.
We investigate the spreading dynamics starting from one infected and active node at time $t=0$. Hence the number of infected nodes is initially $I_0(t)=1$ and remains unchanged for a time interval $\tau$ until the next event of the initially infected node takes place. At $t=\tau$, $I_0(t)$ can be written as the sum of two numbers: One is for the infecting node and its subsequent infected nodes, which can be denoted by an independent and identical copy of $I_0$ but starting at $t=\tau$, i.e., $I'_0(t-\tau)$. The other is for the newly infected node and its subsequent infected nodes, similarly denoted by $I'_1(t-\tau)$, where $I'_1$ is an independent and identical copy of $I_1$, to be defined below. Thus we get
\begin{eqnarray}
I_0(t)&=&\left\{\begin{tabular}{ll}
$1$ & \textrm{if}\ $t<\tau$,\\
$I_0'(t-\tau)+I_1'(t-\tau)$ & \textrm{if}\ $t\geq \tau$.
\end{tabular}\right.
\end{eqnarray}
Since the newly infected node must wait a residual time $\tau_r$ as in Fig.~\ref{fig:branching}(b), the number of infected nodes starting from one infected and inactive node can be written as
\begin{eqnarray}
I_1(t)&=&\left\{\begin{tabular}{ll}
$1$ & \textrm{if}\ $t<\tau_r$,\\
$I_0''(t-\tau_r)+I_1''(t-\tau_r)$ & \textrm{if}\ $t\geq \tau_r$,
\end{tabular}\right.
\end{eqnarray}
where $I''$s are independent and identical copies of $I$. The generating function for $I_0(t)$ is defined as $F_0(z,t)=\sum_{k\geq 0}\Pr[I_0(t)=k]z^k$, and we get
\begin{equation}
F_0(z,t)= \left\{\begin{tabular}{ll}
$z$ & \textrm{if}\ $t<\tau$,\\
$F_0(z,t-\tau)F_1(z,t-\tau)$ & \textrm{if}\ $t\geq \tau$,
\end{tabular}\right.
\end{equation}
where $F_1(z,t)$ is the generating function defined for $I_1(t)$. By taking the expectation value over $\tau$ with $P(\tau)$, we obtain
\begin{equation}
F_0(z,t)=z\int_t^\infty P(\tau)d\tau+\int_0^t F_0(z,t-\tau)F_1(z,t-\tau)P(\tau)d\tau.
\end{equation}
We can use the generating function to calculate the average number of infected nodes $n_0(t) \equiv \langle I_0(t)\rangle$ as
\begin{equation}
n_0(t)=\left.\frac{\partial F_0(z,t)}{\partial z}\right|_{z=1}=\int_t^\infty P(\tau)d\tau+\int_0^t [n_0(t-\tau)+n_1(t-\tau)]P(\tau)d\tau
\end{equation}
where $n_1(t)\equiv \langle I_1(t)\rangle$. Taking the Laplace transform gives
\begin{eqnarray}
\tilde n_0(s)&=& \frac{1-\tilde P(s)}{s}+[\tilde n_0(s)+\tilde n_1(s)]\tilde P(s),\\
\tilde n_1(s)&=& \frac{1-\tilde P_r(s)}{s}+[\tilde n_0(s)+\tilde n_1(s)]\tilde P_r(s),
\end{eqnarray}
where $\tilde P(s)$ and $\tilde P_r(s)$ denote the Laplace transforms of $P(\tau)$ and $P(\tau_r)$, respectively. This straightforwardly leads to
\begin{equation}
\tilde n_0(s)=\frac{1}{s}+\frac{\tilde P(s)}{(s-\langle \tau \rangle^{-1})[1-\tilde P(s)]},
\end{equation}
where we have used the relation $\tilde P_r(s)=\frac{1}{\langle \tau \rangle s}[1-\tilde P(s)]$. Then, $n_0(t)$ can be calculated by taking the inverse Laplace transform of $\tilde n_0(s)$. Note that this solution has been obtained for arbitrary inter-event time distributions, which enables us to evaluate the effect of burstiness on spreading for both Poissonian and non-Poissonian cases.
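To make the above solution concrete, the coupled renewal equations for $n_0(t)$ and $n_1(t)$ can also be iterated directly in the time domain. The following minimal numerical sketch is our own illustration (not code from the works cited above): it discretises the convolutions on a regular grid, and for an exponential $P(\tau)$ with unit mean the result can be checked against the Poissonian expectation $n_0(t)=e^{t/\langle\tau\rangle}$. Replacing the exponential density by the shifted power-law density of Eq.~(\ref{eq:shiftedNonPoisson_P0}) below yields the corresponding non-Poissonian prevalence curves.
\begin{verbatim}
import numpy as np

def solve_renewal(P, mean_tau, t_max=5.0, dt=0.005):
    """Iterate n0(t), n1(t) from the coupled renewal (convolution) equations."""
    t = np.arange(0.0, t_max, dt)
    p = P(t)                                  # inter-event time density P(tau)
    surv = 1.0 - np.cumsum(p) * dt            # survival function of P(tau)
    p_r = surv / mean_tau                     # residual time density P(tau_r)
    surv_r = 1.0 - np.cumsum(p_r) * dt        # survival function of P(tau_r)
    n0, n1 = np.ones_like(t), np.ones_like(t)
    for k in range(1, len(t)):
        s = n0[:k][::-1] + n1[:k][::-1]       # (n0 + n1)(t_k - tau) on the grid
        n0[k] = surv[k] + np.sum(s * p[:k]) * dt
        n1[k] = surv_r[k] + np.sum(s * p_r[:k]) * dt
    return t, n0, n1

# Poissonian check: P(tau) = exp(-tau), <tau> = 1, so n0(t) should approach exp(t).
t, n0, _ = solve_renewal(lambda x: np.exp(-x), mean_tau=1.0)
print(n0[-1], np.exp(t[-1]))                  # the two numbers should be close
\end{verbatim}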
In order to investigate the effect of the lower bound of inter-event times, we consider the shifted power-law distribution with exponential cutoff defined as
\begin{equation}
\label{eq:shiftedNonPoisson_P0}
P(\tau)= \frac{\tau_c^{\alpha-1}}{\Gamma(1-\alpha,y)} \tau^{-\alpha}e^{-\tau/\tau_c}\theta(\tau-\tau_0),
\end{equation}
where $\Gamma$ is the upper incomplete Gamma function, $\theta$ is the Heaviside step function, and $y\equiv \tau_0/\tau_c$ with $\tau_0$ and $\tau_c$ being the lower bound and the exponential cutoff of $ P(\tau)$, respectively. The Laplace transform of $P(\tau)$ in Eq.~(\ref{eq:shiftedNonPoisson_P0}) is as follows:
\begin{equation}
\label{eq:shiftedNonPoisson_P0s}
\tilde P(s)=(s\tau_c+1)^{\alpha-1}\frac{\Gamma(1-\alpha,y(s\tau_c+1))}{\Gamma(1-\alpha,y)}.
\end{equation}
To investigate the early time dynamics of $n_0(t)$, we consider the case when $s\gg 1$. By expanding the incomplete Gamma function, we obtain
\begin{equation}
n_0(t) \approx 1+A\left(e^{\frac{t-\tau_0}{\langle \tau \rangle}}-e^{-\frac{t-\tau_0}{\tau_c}}\right)\theta(t-\tau_0),
\end{equation}
where $A=\frac{1}{x+y}\frac{y^{1-\alpha}e^{-y}}{\Gamma(1-\alpha,y)}$ with $x\equiv \tau_0/\langle \tau \rangle$. The spreading rate $C_0$ at $t=\tau_0^+$ is obtained as
\begin{eqnarray}
\label{eq:shiftedNonPoisson_C0}
C_0(x,y,\alpha) \equiv \langle \tau \rangle \left.\frac{dn_0}{dt}\right|_{t=\tau_0^+} =\frac{1}{x}\frac{y^{1-\alpha}e^{-y}}{\Gamma(1-\alpha,y)}.
\label{eq:C0}
\end{eqnarray}
For the inter-event time distribution in Eq.~(\ref{eq:shiftedNonPoisson_P0}), Poissonian dynamics corresponds to the case of $\alpha=0$, while non-Poissonian interaction dynamics is obtained for $\alpha>0$. Using this parameterisation, Eq.~(\ref{eq:C0}) leads to
\begin{equation}
C_0(x,y,\alpha)\geq C_0(x,y,0),
\end{equation}
suggesting that non-Poissonian bursty activity always accelerates the early time spreading dynamics as compared to the shifted Poissonian case with the same mean $\langle \tau \rangle$ and lower bound $\tau_0$ of the inter-event time distribution~\cite{Jo2014Analytically}.
Note that the accelerating effect of bursts on the short-term dynamics of SI processes has been observed by means of numerical simulations in independent studies by Rocha \emph{et al.}~\cite{Rocha2013Bursts} and Horv\'ath \emph{et al.}~\cite{Horvath2014Spreading}.
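The inequality above is easy to verify numerically. The following small sketch is our own illustration (the parameter values are arbitrary): it evaluates $C_0(x,y,\alpha)$ of Eq.~(\ref{eq:C0}) with \texttt{mpmath}, whose incomplete Gamma function accepts the negative first argument $1-\alpha$ arising for $\alpha>1$.
\begin{verbatim}
from mpmath import mp, gammainc, exp

mp.dps = 30                                  # working precision

def C0(x, y, alpha):
    """Early-time spreading rate for the shifted power law with cutoff."""
    return y ** (1 - alpha) * exp(-y) / (x * gammainc(1 - alpha, a=y))

x, y = 0.5, 0.1                              # arbitrary illustrative parameters
for alpha in (0.0, 0.5, 1.5, 2.5):
    print(alpha, C0(x, y, alpha))            # values are non-decreasing in alpha
\end{verbatim}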
\paragraph{\emph{Late time effects:}}
As we mentioned earlier, a spreading process may behave differently in the late time limit, as long inter-event times may slow down the process in reaching full prevalence. In order to better understand this limit we present here the argumentation of Min, Vazquez, and others~\cite{Vazquez2007Impact, Min2011Spreading, Min2013Burstiness}, which utilises branching processes in a somewhat similar way as discussed for the early time limit.
In this case the deterministic SI process is diffusing on a temporal network with an underlying static tree-like structure. Its dynamics is determined by the generation time $\Delta$, which is defined as the time interval between the infection of a node and the transmission of the infection to one of its neighbours. In this model, if the infection starts from a single node at time $t=0$, the average number of newly infected nodes at time $t$ can be expressed as
\begin{equation}
n(t)=\sum_{d=1}^{D}z_d g^{*d}(t),
\label{eq:ntltb}
\end{equation}
where $z_d$ is the average number of nodes $d$ contacts away from the seed node, and $D$ is the maximum of $d$. Here $g^{*d}(t)$ is the $d$th order convolution of $g(\Delta)$, corresponding to the probability density function of the sum of $d$ residual times, i.e., $g^{*1}(t)=g(t)$ for the immediate neighbour of the seed, and $g^{*d}(t)=\int_0^tg(t')g^{*(d-1)}(t-t')dt'$ in general.
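Equation~(\ref{eq:ntltb}) can be evaluated on a time grid by building the convolutions $g^{*d}(t)$ iteratively. The short sketch below is our own illustration (not code from the cited works): it assumes a toy tree in which every node has $z$ offspring, i.e., $z_d=z^d$, and an exponential generation-time density; any residual-time density derived from $P(\tau)$ can be substituted for $g$.
\begin{verbatim}
import numpy as np

dt, t_max, z, D = 0.01, 50.0, 2, 8
t = np.arange(dt, t_max, dt)
g = np.exp(-t)                                # toy generation-time density g(Delta)
g /= g.sum() * dt                             # normalise on the grid

n = np.zeros_like(t)
g_d = g.copy()                                # g^{*1}
for d in range(1, D + 1):
    n += z ** d * g_d                         # contribution of shell d, with z_d = z**d
    g_d = np.convolve(g_d, g)[:len(t)] * dt   # g^{*(d+1)} = g^{*d} * g
print(n[:5])                                  # average number of newly infected nodes
\end{verbatim}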
The long time behaviour of $n(t)$ can be obtained by using Eq.~(\ref{eq:ntltb}), e.g., for the case where $g(\Delta)\sim\Delta^{-\nu}$ with $1<\nu<2$, corresponding to the stable L\'evy regime~\cite{Min2011Spreading, Min2013Burstiness}, we obtain $g^{*d}(t)\sim t^{-\nu}$ in the limits of $t\rightarrow \infty$ and $d\gg 1$ independently of the network structure. This corresponds to the asymptotic scaling of the prevalence as
\begin{equation}
n(t)\sim t^{-\nu},
\end{equation}
which means that in the case of heterogeneous activity patterns the system slows down in the long-time limit, with a prevalence that decays with the same exponent as the generation time distribution.
Next we assume that the interaction dynamics of nodes is dictated by a renewal process, generating independent events with an arbitrary inter-event time distribution $P(\tau)$. In this case, if a node $i$ is infected at time $t_i$ and has a susceptible neighbour $j$, the generation time after which the node $j$ receives the infection from node $i$ corresponds to the residual time $\tau_r$ between $t_i$ and their next interaction at $t_j$. As we have shown in Eq.~(\ref{eq:rst_iet}), the residual time distribution can be easily derived from the inter-event time distribution. Therefore, for the activity patterns of uncorrelated events with $P(\tau)\sim \tau^{-\alpha}$ ($2<\alpha <3$) we obtain
\begin{equation}
n(t)\sim t^{-(\alpha-1)}.
\end{equation}
This can be compared to the case where the renewal process follows Poissonian dynamics, for which an exponentially decaying prevalence is obtained along the same lines.
Note that the same conclusion can be drawn following the same argumentation used for the early time behaviour~\cite{Jo2014Analytically}, provided that the network size is assumed to be finite. In addition several numerical studies have confirmed this result~\cite{Vazquez2007Impact, Min2011Spreading, Min2013Burstiness, Jo2014Analytically} or more generally, showed that SI spreading slows down in the long time regime due to burstiness~\cite{Karsai2011Small, Miritello2011Dynamical, Vazquez2007Impact, Min2011Spreading}.
The non-stationarity of the interaction dynamics provides another way to address the early versus late time behaviour of dynamical processes. As it has been shown by Rocha \emph{et al.}~\cite{Rocha2013Bursts} a non-stationary contact dynamics may induce a rapid SI spreading with more infected nodes for early times as compared to a system with Poisson dynamics. The same was concluded by Horv\'ath \emph{et al.}~\cite{Horvath2014Spreading} who showed that power-law governed, non-stationary processes of young age can cause very rapid spreading, even for power-law exponents that would result in slow spreading in the stationary state. Consequently, the age of the processes has a strong influence on the outcome of spreading if the inter-event time distribution is heavy-tailed.
\subsection{Triggered event correlations}
\label{sec:triggev}
Another important character of bursty temporal networks, influencing dynamical processes, is the existence of causal correlations between the events sharing at least a node in common. Such triggered event pairs are responsible for the emergence of mesoscopic temporal motifs~\cite{Kovanen2011Temporal}, in which a larger number of correlated events are performed in a bursty fashion between two or more individuals. This kind of behaviour has been observed in human communication systems~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical,Scholtes2014Causality} and was shown to enhance the diffusion of information locally, and to accelerate the spreading dynamics globally in the early time limit.
Random reference models provide a straightforward way to study the effects of triggered event correlations on the global spreading dynamics. Taking an empirical temporal network one can obtain the sequence of interactions on each link. In order to remove only triggered event correlations between neighbouring links we can shuffle the network by re-assigning the entire interaction sequence of each link to randomly selected other links with the same number of events~\cite{Karsai2011Small}. In this way the synchronisation of events, i.e., triggered causal correlations, between neighbouring links is destroyed, while the system remains otherwise unchanged. Note that another equivalent method would be to add a random offset time to each event time on a link while applying temporal periodic boundary conditions~\cite{Backlund2014Effects}. As shown in Fig.~\ref{fig:SbSW}(a) and (b) (DCWB model, marked with green rhombi), triggered event correlations turn out to accelerate the process in the early stage while slowing it down in the long run.
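A minimal sketch of the link-sequence shuffling described above (our own reading of the reference model of Ref.~\cite{Karsai2011Small}, not the authors' code) is the following: links are grouped by their number of events and the full event sequences are randomly permuted within each group.
\begin{verbatim}
import random
from collections import defaultdict

def shuffle_link_sequences(events_per_link, seed=None):
    """events_per_link: dict mapping a link (i, j) to its list of event times."""
    rng = random.Random(seed)
    by_weight = defaultdict(list)
    for link, times in events_per_link.items():
        by_weight[len(times)].append(link)        # group links by their number of events
    shuffled = {}
    for links in by_weight.values():
        permuted = links[:]
        rng.shuffle(permuted)                     # random re-assignment within the group
        for src, dst in zip(links, permuted):
            shuffled[dst] = events_per_link[src]  # move the whole sequence to another link
    return shuffled

# Toy example: two links with two events each and one link with a single event.
net = {(1, 2): [3.0, 7.5], (2, 3): [1.2, 9.9], (3, 4): [5.0]}
print(shuffle_link_sequences(net, seed=1))
\end{verbatim}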
This effect has been identified in several works using numerical modelling and analytical calculations. Kivel\"a \emph{et al.}~\cite{Kivela2012Multiscale} have studied the case of an SI spreading process between three nodes connected by two links. In this toy system an event on one link may induce a triggered event on the other link with a given probability $p$, or the events are performed independently otherwise. An interesting quantity here is the average triggered relay time $\langle \tau_t \rangle$, which indicates the average time that information needs to wait to pass over the second link if it arrived at an earlier time on the first link. They show that
\begin{equation}
\langle \tau_t \rangle=\left( 1-\frac{p}{2}\frac{n-1}{n} \right) \langle \tau_r \rangle,
\label{eq:taut}
\end{equation}
where $n$ is the number of events on the second link. Equation~(\ref{eq:taut}) indicates that if $p=0$, i.e., all events on the two links are independent, the mean triggered relay time is equal to the mean residual time. However, for $p>0$, the greater the number of triggered events is, the shorter the triggered relay times are on average, which indicates that information spreading takes place faster.
Miritello \emph{et al.} have addressed the same problem~\cite{Miritello2011Dynamical} using data-driven simulations on real interaction sequences of mobile phone communication events. They considered a Susceptible-Infected-Recovered (SIR) spreading process with the infection rate $\tilde \beta$ and a constant recovery time $\Delta t$. They addressed the effects of causally correlated event pairs taking place within a short time on two neighbouring links $(*,i)$ and $(i,j)$. They have considered the case when the spreading reaches the node $i$ from an arbitrary neighbour $*$ (other than $j$) at time $t_0$, after which it could infect the node $j$ if any event occurred between $i$ and $j$ before it is recovered at time $t_0+\Delta t$. They mapped this problem to a static link percolation problem~\cite{Newman2002Spread} and showed that the average transmissibility, i.e., the probability that the infection passes over a triggered event on link $(i,j)$ is
\begin{equation}
\mathcal{T}_{ij} (\tilde \beta,\Delta t)=
\begin{cases}
\tilde \beta \langle n_{ij} \rangle & \hspace{.1in} \text{for}\ \hspace{.1in} \tilde \beta \ll 1 \\
1-P_{ij} & \hspace{.1in} \text{for}\ \hspace{.1in} \tilde \beta \simeq 1
\end{cases}
\label{eq:transmiss}
\end{equation}
where $\langle n_{ij}\rangle$ denotes the average number of events between $i$ and $j$ after $i$ becomes infected and before it is recovered, and $P_{ij}$ denotes the probability that there is no such event during the period of $\Delta t$.
Using data analysis and random reference models they have shown that due to the correlation between events on neighbouring links, the number of events in a tie following an incoming call is always larger for the real-time data than for the time-shuffled case, corresponding to a Poisson process. Consequently, for small $\tilde \beta$, the average transmissibility and the size of the epidemic cascades are always larger in the real case than in the time-shuffled case. In contrast, the bursty nature of the communication makes the tail of the real inter-event time distribution heavier than the exponential distribution found in the time-shuffled case. Thus, if the recovery time is large enough, $P_{ij}$ is larger in the real-time data than in the shuffled-time data, leading to smaller spreading cascades.
In another study, Starnini \emph{et al.} modelled random walk processes on empirical temporal networks of face-to-face interactions~\cite{Starnini2012Random}. They found that the random walker explores more slowly the network with longer mean first-passage time on the empirical sequences than that for the mean-field solution assuming Poissonian dynamics. They have argued that the temporal correlations between consecutive conversations constitute a unique reason for this slowing down over the heterogeneously distributed conversation lengths.
\subsection{Effects of link burstiness}
In the previous Section we have discussed the importance of triggering effects, and causal correlations between events on neighbouring links. However, causal correlations not only appear between events on different links of the same individual, but more commonly they evolve between events on the same social tie~\cite{Kovanen2011Temporal}. Such correlations are responsible for the emergence of long bursty trains of interactions (see Section~\ref{sec:PE} and~\ref{sec:egoburst}), which were shown to be induced by dyadic conversations rather than by interactions within a larger group of people. Consequently, causally correlated event trains reflect the characteristics of links rather than those of the nodes~\cite{Karsai2012Correlated}. Here we summarise the studies, which address the effects of bursty links rather than nodes on dynamical processes.
In their modelling study of SIR and SIS processes on a real temporal network, Holme and Liljeros~\cite{Holme2014Birth} considered whether the bursty link dynamics or merely the life span of links matters. Assuming a finite observation time window $T$ they considered two interpretation scenarios of link dynamics: The ongoing link picture assumes that an observed link has been created earlier and survived longer than the observation time window. In this case the important temporal structure is the time between events over the link. On the other hand, the link turnover picture suggests that a large fraction of links are created and broken during the observation. This picture is motivated by the observations that the time between the beginning of $T$ and the first event on a given link, and equivalently between the last event and the end of $T$, is longer than one would expect from the observed inter-event time distribution $P(\tau)$. Then the question is whether the life span of links or the precise timing of bursty interactions matters more for the final outcome of the simulated spreading process. They addressed this question by defining null models and performing large-scale numerical simulations on $12$ empirical temporal networks. They found that by assuming that the events on a link occur regularly with the same inter-event time over $T$, i.e., by destroying the inter-event time distribution, the epidemic outbreak size does not change considerably. On the other hand, what matters more are the beginning and ending of the life span of a given link. If these are destroyed by letting all links begin or end simultaneously, the epidemic outbreak size changes radically. This alone does not disqualify the ongoing link picture with burstiness being important in disease spreading, but suggests that the creation and dissolution of ties should also be considered in studying epidemic models as they may considerably affect the final outcome of the epidemics.
In another work by Saram\"aki and Holme~\cite{Saramaki2015Exploring}, they simulated a greedy random walks on $8$ empirical temporal networks. This process is particularly sensitive to temporal-topological patterns involving repeated contacts between sets of nodes. This is evident by the small coverage a random walker takes when compared to a temporal reference model. This shows that in empirical temporal networks greedy walks often get stuck within a small set of nodes. This is because of non-Markovian contact patterns on single links, such as bursty trains of so-called ping-pong callings between two individuals.
\subsection{Other bursty characters}
As we have discussed earlier, data analysis and modelling studies suggested that \emph{periodic circadian fluctuations} could potentially explain the fat-tailed inter-event time distributions of human interactions. We have also discussed some pro and con arguments and concluded that such periodic patterns may not play deterministic roles. In order to evaluate the exclusive impact of daily patterns on spreading dynamics, some work has been done by using mobile phone communication sequences and random reference models~\cite{Karsai2011Small}. From the call data sequences a weighted aggregated network structure can be obtained by taking individuals as nodes and linking them if they called each other during the observation period, with link weights defined as the number of their dyadic interactions. In order to study the effects of circadian fluctuations one can use this static structure and generate an interaction sequence on each link by two Poisson processes that conserve the original link weights: One is a homogeneous Poisson process with a constant rate $\lambda$, and the other is a non-homogeneous Poisson process whose instantaneous rate $\lambda(t)$ follows the daily pattern as calculated from the call statistics on an hourly basis. Simulating SI dynamics for both cases reveals that the difference between the spreading curves on networks with homogeneous and non-homogeneous Poisson link dynamics is negligible, demonstrating that the daily pattern has only a minor impact on the spreading speed.
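The non-homogeneous Poisson process used in such reference models can be generated with the standard thinning algorithm, as in the following sketch (our own illustration; the daily rate profile below is an assumed toy example, not the empirically measured one).
\begin{verbatim}
import math
import random

def daily_rate(t):
    """Assumed toy 24-hour profile (events per hour), peaking in the afternoon."""
    hour = t % 24.0
    return 0.2 + 0.8 * max(0.0, math.sin(math.pi * (hour - 6.0) / 14.0))

def thinning(rate, rate_max, t_end, seed=None):
    """Lewis-Shedler thinning for a non-homogeneous Poisson process."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)            # candidate from the bounding rate
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:     # accept with probability rate(t)/rate_max
            events.append(t)

events = thinning(daily_rate, rate_max=1.0, t_end=7 * 24.0, seed=42)
print(len(events), events[:3])
\end{verbatim}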
We would like to point out that it is not only the microscopic bursty features that can influence the dynamical processes but also the heterogeneous temporal characters, which appear in the interaction dynamics at the system level. As described in Chapter~\ref{chapter:meas}, the \emph{temporal sparsity} of a network can capture its overall burstiness by measuring for a given time window the effective number of links divided by that for a reference system, where the timings of events on each link are randomised. Perotti~\emph{et al.}~\cite{Perotti2014Temporal} have shown that the spreading velocity of an SI process is strongly correlated with the sparsity of the underpinning temporal network. They found that the smaller the temporal sparsity of the network is, the more heterogeneous the bursty temporal patterns appear to be. This has a direct implication on the dynamical processes on a temporal network. They simulated an SI spreading dynamics on various kinds of empirical temporal networks and measured the slow-down coefficient, defined as the actual spreading speed divided by that for the reference systems; a smaller value of the slow-down coefficient implies slower spreading. They observed that the slow-down coefficient is almost linearly dependent on the temporal sparsity, which indicates that the burstiness at the system level slows down the spreading dynamics.
Medvedev and Kert\'esz~\cite{Medvedev2017Empirical} studied how \emph{bridging interactions} between nodes in a population speed up the SI spreading on temporal networks of mobile phone communication. They categorised people into three groups: White nodes (customers of the provider with ZIP code), grey nodes (customers of the provider without ZIP code), and black nodes (customers of other providers). For the spreading dynamics they considered only grey and black nodes who have at least two connections to white nodes as they can be identified as bridges for spreading processes between white nodes. They found that such bridges speed up the spreading even if their interactions are bursty, independently of the city population.
\subsection{Dominant characters}
Having discussed various characteristics of bursty behaviour, which were shown to influence the dynamical processes, we still need to consider which of them are the most dominant. This is not an easy problem and sometimes it leads to seemingly contradicting observations on different datasets. The real interaction sequences are not only bursty, but also correlated in time and with the interaction structure that consists of communities and ties with heterogeneous activities. All these correlations play some roles simultaneously during the unfolding of dynamical processes. Hence to say something exclusively about the effects of burstiness is challenging.
Once again a straightforward approach to distinguish between the effects of different structural and temporal correlations is provided by random reference models. Comparing simulation results on random reference networks after removing some correlations from the temporal network would tell us which bursty characteristics affect the outcomes of dynamical processes and how. This type of analysis~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical, Holme2015Information, Holme2016Temporal, Rocha2011Simulated} has shown that the most dominant character for controlling the speed of epidemic spreading is the temporal heterogeneity (burstiness) of interactions. As we have explained via the waiting-time paradox, any level of temporal heterogeneity has an overall slowing down effect. However, when compared to Poissonian systems, bursty interactions accelerate spreading for early times while slowing down for the later time dynamics~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical, Rocha2011Simulated}. At the same time triggered event correlations were found to be somewhat less dominant in enhancing the spreading behaviour at the early time limit~\cite{Karsai2011Small, Kivela2012Multiscale, Miritello2011Dynamical}, while they were found to slow down the diffusion of a random walker~\cite{Saramaki2015Exploring, Starnini2012Random}. In terms of the structure, the weight-topology correlations were found to be important~\cite{Karsai2011Small, Kivela2012Multiscale, Rocha2011Simulated} as high activity links located inside communities may enhance spreading, while low activity links, which are responsible for bridging communities and connecting the network together, may have the opposite effects by keeping information local due to their infrequent interactions~\cite{Granovetter1973Strength}.
Recently Delvenne~\emph{et al.}~\cite{Delvenne2015Diffusion} have addressed a similar question regarding whether temporal inhomogeneities or structural properties influence diffusion on a temporal network more. To answer this question they provided a mathematical framework to describe diffusion in linear multi-agent systems with $N$ interacting nodes as follows:
\begin{equation}
D\vec x = L\vec x,
\end{equation}
where the vector $\vec x$ consists of variables $x_i$ denoting the state of node $i$, $L$ is an $N\times N$ matrix describing the interaction structure between nodes, while $D$ captures the time evolution of variables. Assuming a random walker diffusing on the network, the temporal inhomogeneity can be incorporated into a waiting time distribution $\rho(\Delta t)$, implying that the random walker hops from one node $i$ to its neighbouring node $j$ after waiting for a time $\Delta t$ on the node $i$. Then, in terms of the argument $s$ of the Laplace transform, the above equation reads:
\begin{equation}
\left(\frac{1}{\rho(s)}-1\right)\vec x(s)= \left(\frac{1}{\rho(s)}-1\right)\frac{1}{s}\vec x(t=0) + L\vec x(s),
\end{equation}
with Laplacian $L$. Here the mixing time $\tau_{\textrm{mix}}$, i.e., the relaxation time to stationarity, can be approximated as
\begin{equation}
\tau_{\textrm{mix}}\approx \max\{\mu\epsilon^{-1},\ \frac{\sigma^2-\mu^2}{2\mu},\ \tau_c\},
\end{equation}
where $\mu$, $\sigma^2$, and $\tau_c$ are respectively the mean, the variance, and the exponential cutoff of the waiting time distribution, while $\epsilon$ is the spectral gap of $L$, representing the structural properties of the system. Therefore, the mixing time of diffusion on such temporal networks can be dominated either by the temporal inhomogeneity or by the structural properties. They analysed several empirical datasets and concluded that in the absence of some temporal correlations, the characteristic times of the dynamics are dominated either by temporal or by structural heterogeneities, like those observed in real-life systems. In systems where correlated temporal patterns are the dominating factor, the aggregation of communities is not necessarily relevant in general, but the temporal characteristics impose the natural description levels of the dynamics.
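As a compact illustration of the estimate for $\tau_{\textrm{mix}}$ (our own sketch, assuming the standard positive semidefinite graph Laplacian so that the spectral gap $\epsilon$ is its smallest non-zero eigenvalue), the three competing time scales can be compared directly:
\begin{verbatim}
import numpy as np

def mixing_time_estimate(mu, sigma2, tau_c, laplacian):
    """max of mu/eps, (sigma2 - mu^2)/(2 mu) and tau_c, as in the equation above."""
    eps = np.sort(np.linalg.eigvalsh(laplacian))[1]   # smallest non-zero eigenvalue
    return max(mu / eps, (sigma2 - mu ** 2) / (2 * mu), tau_c)

# Path graph on four nodes and a bursty waiting-time distribution with a long cutoff.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
print(mixing_time_estimate(mu=1.0, sigma2=25.0, tau_c=100.0, laplacian=L))
\end{verbatim}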
\section{Dynamical processes on bursty temporal networks}
In the second part of this Chapter we will discuss the representative dynamical processes, which were investigated for bursty systems. Earlier we have discussed the effects of different bursty characteristics on the dynamical processes. Here our focus is more on identifying the dependencies of dynamical processes on the bursty temporal patterns. We will summarise how the process-specific characteristics depend on the bursty behaviour of the underpinning temporal network. We will address five different classes of dynamical processes without going into details about their definitions and critical behaviour. However, we refer the interested reader to books~\cite{Barrat2008Dynamical, Porter2016Dynamical} and a review paper~\cite{PastorSatorras2015Epidemic}, where these processes and their dynamics on static networks are addressed in detail.
\subsection{Epidemic spreading}
As we have already discussed, several epidemic models like SI~\cite{Karsai2011Small, Kivela2012Multiscale, Vazquez2007Impact, Min2011Spreading, Min2013Burstiness, Horvath2014Spreading, Rocha2013Bursts, Gueuning2015Imperfect, Starnini2013Immunization, Jo2014Analytically}, SIR~\cite{Iribarren2009Impact, Miritello2011Dynamical, Rocha2013Bursts, Zhu2014Effect, Holme2015The, Lambiotte2013Burstiness} and SIS~\cite{Lambiotte2013Burstiness, Holme2014Birth} have been studied on bursty temporal networks. These processes are commonly characterised by the infection rate $\tilde \beta$ and the recovery rate $\tilde \mu$. Their long-term dynamics has been described by a ratio $R_0\equiv \tilde \beta/\tilde \mu$, called the basic reproduction number. This ratio gives the average number of infections that a single infected node generates in a population. This number can also be used to characterise whether the process is in a subcritical phase ($R_0<1$), where the epidemic process vanishes spontaneously, or in a supercritical/endemic phase ($R_0>1$), where a considerable fraction of the population is infected and the system evolves into a stationary state. We have seen earlier that heterogeneous temporal interaction patterns may influence the dynamics of a spreading process, thus it is straightforward to ask how they behave as a function of $R_0$ in a bursty system.
Iribarren~\emph{et al.}~\cite{Iribarren2009Impact} addressed this question by modelling information propagation based on observations from an online email recommendation experiment. They interpreted the spreading process in terms of a Bellman-Harris branching process model. Precisely, in their model the average fraction of infected nodes at time $t$ is written as
\begin{equation}
i(t)=1-G(t)+R_0\int_0^t i(t-\tau_r)P(\tau_r )d\tau_r,
\label{eq:itBH}
\end{equation}
where $G(t)=\int_0^tP(\tau_r)d\tau_r$ is the cumulative distribution function of the residual times. They showed that for processes with $P(\tau_r)$ decaying slower than exponential, including bursty processes with log-normal and power-law tails, if $R_0<1$ then Eq.~(\ref{eq:itBH}) is reduced to $i(t)\sim \frac{1-G(t)}{1-R_0}$. This indicates that the spreading depends mostly on those individuals whose residual time is the longest. Thus temporal heterogeneity has a profound impact on the dynamics of information spreading. The spreading does not depend on the mean value of $\tau_r$ but on the tail of its distribution $G(t)$, which drastically slows down the propagation of information. Interestingly, large temporal heterogeneity has the opposite effect above the epidemic threshold ($R_0 > 1$). In this case the Bellman-Harris model predicts an initial exponential growth of the epidemic spreading where information shows faster spreading than expected.
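Equation~(\ref{eq:itBH}) is also straightforward to iterate numerically. The sketch below is our own illustration (not code from Ref.~\cite{Iribarren2009Impact}); it uses a Pareto-like residual-time density as an assumed example and compares the subcritical solution with the approximation $i(t)\sim\frac{1-G(t)}{1-R_0}$.
\begin{verbatim}
import numpy as np

nu = 2.5                                          # assumed tail exponent of P(tau_r)
P    = lambda x: (nu - 1) * (1 + x) ** (-nu)      # Pareto-like residual-time density
surv = lambda x: (1 + x) ** (1 - nu)              # 1 - G(t), known analytically here

def solve_bellman_harris(R0, t_max=200.0, dt=0.05):
    t = np.arange(0.0, t_max, dt)
    p = P(t + 0.5 * dt)                           # midpoint rule for the convolution
    i = np.zeros_like(t)
    i[0] = 1.0
    for k in range(1, len(t)):
        conv = np.sum(i[:k][::-1] * p[:k]) * dt   # int_0^t i(t - tau) P(tau) dtau
        i[k] = surv(t[k]) + R0 * conv
    return t, i

R0 = 0.8                                          # subcritical regime
t, i = solve_bellman_harris(R0)
print(i[-1], surv(t[-1]) / (1 - R0))              # of the same order for R0 < 1
\end{verbatim}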
In another work, Miritello~\emph{et al.}~\cite{Miritello2011Dynamical} studied the effects of heterogeneous residual times and triggered events on SIR processes. As we discussed earlier in Section~\ref{sec:triggev}, they found that in random networks the basic reproduction number\footnote{In their work Miritello \emph{et al.}~\cite{Miritello2011Dynamical} referred to the basic reproduction number as the secondary reproduction rate, $R_1$, defined as the average number of secondary infections produced by an infectious individual, which is the definition of $R_0$. Moreover, in their definition they referred to other works~\cite{Barthelemy2004Velocity,Newman2002Spread}, which concern $R_0$, thus we decided to adopt the notation $R_0$ in Eq.~(\ref{eq:MirBRN}), rather than $R_1$ as in the original paper.} is dependent on the transmissibility, defined in Eq.~(\ref{eq:transmiss}), as
\begin{equation}
R_{0}(\tilde \beta,\Delta t)=\frac{\langle (\sum_j \mathcal{T}_{ij})^2\rangle_i - \langle \sum_j \mathcal{T}^2_{ij} \rangle_i}{\langle \sum_j \mathcal{T}_{ij} \rangle_i}.
\label{eq:MirBRN}
\end{equation}
In case of homogeneous dynamics ($\mathcal{T}_{ij}=\mathcal{T}$) Eq.~(\ref{eq:MirBRN}) recovers the common result $R_0=\mathcal{T} (\langle k^2_i \rangle / \langle k_i \rangle -1)$ found in random networks. This is an important result as $R_0$ can be used to determine the critical point of the SIR spreading even in bursty systems, while its value scales proportionally with the speed of the epidemics.
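Equation~(\ref{eq:MirBRN}) is easy to evaluate once the transmissibilities are known. The following short sketch (our own illustration) computes $R_0$ from a transmissibility matrix with \texttt{numpy} and checks the homogeneous limit $\mathcal{T}_{ij}=\mathcal{T}$ against $R_0=\mathcal{T}(\langle k^2_i\rangle/\langle k_i\rangle-1)$ on a small random graph.
\begin{verbatim}
import numpy as np

def basic_reproduction_number(T):
    """R0 from a matrix of per-link transmissibilities T_ij (zero on non-links)."""
    row_sum = T.sum(axis=1)                       # sum_j T_ij for each node i
    row_sq = (T ** 2).sum(axis=1)                 # sum_j T_ij^2 for each node i
    return (np.mean(row_sum ** 2) - np.mean(row_sq)) / np.mean(row_sum)

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)    # small random graph
A = np.triu(A, 1); A = A + A.T                    # symmetric adjacency, no self-loops
k = A.sum(axis=1)
T = 0.3 * A                                       # homogeneous transmissibility T = 0.3
print(basic_reproduction_number(T), 0.3 * (np.mean(k ** 2) / np.mean(k) - 1))
\end{verbatim}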
Rocha~\emph{et al.}~\cite{Rocha2013Bursts} addressed various characteristics of spreading processes as a function of the system's stationarity and its temporal heterogeneity. In their systematic study they defined a temporal network model where nodes are activated by an independent renewal process with exponential (homogeneous case) or power-law (heterogeneous case) inter-event time distributions and contact each other randomly. In addition they introduced node turnover processes by replacing nodes with disconnected new ones with a given rate in order to ensure that the system reaches a stationary state. On the fabric of this temporal network they simulated SI and SIR models and measured the peak and volume of prevalence, and then estimated $R_0$ and the distribution of the epidemic outbreak sizes. They showed that the prevalence curve at the early stages is characterised by a faster and steeper growth of infected nodes in the case of heterogeneous contact patterns, while at the later stages its characteristics depend more on the epidemics model, turnover rate, and other parameter values. In the absence of replacement of nodes, the prevalence of the infection is generally higher for homogeneous contact patterns, however, for later times the heterogeneous contact patterns slow down the spread of the infection. For some configurations of the SIR dynamics, heterogeneous patterns provide a way to decrease the global impact of the epidemic. In terms of $R_0$ they found that it depends both on the heterogeneity and the node turnover rate of the network. In general, heterogeneous temporal patterns tend to result in higher values of $R_0$, with the exception of stochastic SIR dynamics with an infection probability around $1$. Note that a similar picture has been presented by Zhu~\emph{et al.}~\cite{Zhu2014Effect} from SIR simulation results on temporal scale-free networks.
Gueuning~\emph{et al.}~\cite{Gueuning2015Imperfect} studied the SI spreading process on temporal networks, but with a probability $p$ of successful infection per contact. This $p$ is essentially related to the infection rate $\tilde\beta$, and it also determines the average residual time as
\begin{equation}
\langle \tau_r \rangle =\frac{\langle \tau^2 \rangle}{2\langle \tau \rangle} + \frac{1-p}{p} \langle \tau \rangle.
\end{equation}
In the case of $p=1$, the deterministic SI process is recovered, where the spreading dynamics is determined by the waiting-time paradox as mentioned with Eq.~(\ref{eq:taurderiv}) in Section~\ref{sec:iet_rt_wt}. On the other hand, if $p<1$, the slowing-down effect of heterogeneous interaction dynamics becomes weaker. As $p\to 0$, what determines the spreading is the average residual time rather than the tail of the residual time distribution. In addition, the transmissibility of interactions decreases for increasing $p$ in bursty cases, indicating their important role in hindering the epidemic spreading.
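The expression for $\langle\tau_r\rangle$ can be checked with a simple Monte Carlo experiment, based on our own reading of this setting: the waiting time until the first successful transmission is one stationary residual time plus a geometrically distributed number of further inter-event times. The inter-event time distribution below (a log-normal) is an assumed illustrative choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, samples = 0.3, 100_000
iet = lambda size: rng.lognormal(mean=0.0, sigma=1.0, size=size)  # assumed bursty P(tau)

tau = iet(samples)
# Stationary residual time: length-biased choice of an interval, then a uniform point in it.
biased = rng.choice(tau, size=samples, p=tau / tau.sum())
residual = biased * rng.random(samples)
failures = rng.geometric(p, size=samples) - 1       # contacts that fail to transmit
waiting = residual + np.array([iet(n).sum() for n in failures])

m1, m2 = tau.mean(), (tau ** 2).mean()
print(waiting.mean(), m2 / (2 * m1) + (1 - p) / p * m1)  # Monte Carlo vs. the formula
\end{verbatim}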
Another important characteristic of SIR spreading is the quantity $\Omega$ that describes the fraction of infected nodes in the population after the outbreak has passed and the process has reached its disease-free absorbing state. In static networks a unique deterministic relation exists between $R_0$ and $\Omega$, while Holme~\emph{et al.}~\cite{Holme2015The} have found that this relation is violated in the case of temporal networks. They showed that different pairs of $\tilde\beta$ and $\tilde\mu$, leading to the same value of $R_0$, may lead to different outbreak sizes. Hence the question is which structural and temporal features of a temporal network determine most strongly the correlation between $\Omega$ and $R_0$. It has been found that, as a temporal quantity, the burstiness parameter $B$ (defined in Chapter~\ref{chapter:meas}) dominantly determines the correlation between these two quantities. Results showed that the more heterogeneous the inter-event time distribution is, the less predictable the value of $\Omega$ is from the corresponding $R_0$.
Finally, it should be noted that there have been some studies considering bursty behaviour in order to design efficient immunisation strategies. While only system-level effects of burstiness have been addressed by using random reference models in Ref.~\cite{Starnini2013Immunization}, an immunisation strategy has been proposed in Ref.~\cite{Lee2012Exploiting}, which exploits heterogeneous temporal behaviour by immunising the last interacting neighbour of a randomly selected node at a random time. This strategy has been shown to be effective in datasets recording face-to-face interactions, where the turnover of relationships is large.
\subsection{Random walks}
Random walks serve as a model dynamics that has extensively been used to study bursty temporal networks. As we have discussed in Section~\ref{sec:OE}, a special model variant called \emph{greedy random walks} has lately attracted much attention as it is defined on temporal networks and its dynamics is sensitive to temporal heterogeneity. The temporal network is commonly defined as a static structure with interaction dynamics on links defined as renewal processes with an arbitrary inter-event time distribution but with parameters characteristic of each link. A single greedy random walker is diffusing on such a network by moving between nodes via temporal interactions whenever it is possible, i.e., after arriving at the node $i$ it always takes the first emerging link to move to another node. Two variants of greedy random walks have been proposed by Speidel~\emph{et al.}~\cite{Speidel2015Steady}:
\begin{enumerate}[(a)]
\item In case of the \emph{active random walk}, after the walker arrives at a node $i$, it re-initialises the inter-event times of all the links. Then, the residual time, i.e., the time a walker waits on a node before the link appears, is equivalent to the inter-event time.
\item In case of the \emph{passive random walk}, the re-initialisation of each link is not assumed. Instead, a new inter-event time is chosen only for the link through which the walker arrived at the node. Then the transition rates of the passive random walk depend on the trajectory that the walker has taken, implying that one has to account for the entire trajectory of the random walker to accurately evaluate its behaviour~\cite{Speidel2015Steady}.
\end{enumerate}
Note that if the dynamics of links are driven by Poisson processes with exponentially distributed inter-event times, the active and passive random walks are identical and reduce to the usual continuous-time random walk on the static network.
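A compact simulation sketch of the active greedy random walk is given below (our own implementation of the verbal rules above, not code from Ref.~\cite{Speidel2015Steady}): at every arrival all incident links draw fresh inter-event times and the walker leaves through the link that fires first, which allows, for example, the mean recurrence time of a node to be estimated.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]              # small static backbone
neighbors = {i: [] for i in range(4)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

draw_iet = lambda size: rng.pareto(2.5, size) + 1.0   # heavy-tailed, identical on all links

def mean_recurrence_time(start, runs=2000):
    total = 0.0
    for _ in range(runs):
        node, t = start, 0.0
        while True:
            waits = draw_iet(len(neighbors[node]))    # fresh renewal on every incident link
            k = int(np.argmin(waits))                 # greedy: take the link that fires first
            t += waits[k]
            node = neighbors[node][k]
            if node == start:
                break
        total += t
    return total / runs

print(mean_recurrence_time(0), mean_recurrence_time(2))  # compare nodes of different degree
\end{verbatim}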
\subsubsection{Active random walks: generalised mean-field equations}
In order to characterise these two models, we are interested in their steady state behaviour and the mean recurrence time $\langle T_{i|i}\rangle$, i.e., the average time it takes for a random walker starting from the node $i$ to return to $i$ for the first time. The steady state behaviour of the active random walk problem was studied by Hoffmann~\emph{et al.}~\cite{Hoffmann2012Generalized, Hoffmann2013Random}. They introduced the probability that a random walker makes a step from node $j$ to $i$ accounting for all other processes on $j$ in a similar way as in Eq.~(\ref{eq:rwtransp}) as
\begin{equation}
T_{ij}(\tau)=P_{ij}(\tau)\times \prod_{k\neq i}\left( 1- \int_{0}^{\tau}P_{kj}(\tau')d\tau'\right),
\end{equation}
where $P_{ij}(\tau)$ denotes the distribution of waiting times on a link between nodes $i$ and $j$. Using this probability they introduced a generalised Montroll-Weiss master equation~\cite{Montroll1965Random} describing the evolution of the probability mass function $n_i(t)$ for a walker to occupy node $i$ at time $t$. In general this can be written as
\begin{equation}
n_i(t)=\int_0^t \phi_i(t-t')q_i(t')dt',
\end{equation}
where $q_i(t')$ is the probability that the walker arrived at the node $i$ in time $t'\leq t$ weighted by the probability $\phi_i(t-t')$ of not leaving the node since then. The Laplace transform reduces $n_i(t)$ to a product in the Laplace space as
\begin{equation}
\hat{n}_i(s)=\hat{\phi}_i(s)\hat{q}_i(s).
\label{eq:nlt}
\end{equation}
Here an expression for $\hat{\phi}_i(s)$ can be obtained by taking the probability distribution $T_i(t)=\sum_{j=1}^{N}T_{ji}(t)$ of making a step from node $i$ to any other node, which leads to the probability of remaining at $i$ up to time $t$:
\begin{equation}
\phi_i(t)=1-\int_0^t T_i(t')dt'
\end{equation}
with a Laplace transform as
\begin{equation}
\hat{\phi}_i(s)=s^{-1}(1-\hat{T}_i(s)).
\label{eq:philt}
\end{equation}
One can obtain an expression for $\hat{q}_i(s)$ by considering that $q_i(t)=\sum_{k=0}^{\infty}q_i^{(k)}(t)$ where $q_i^{(k)}(t)$ is the probability of arriving at the node $i$ at time $t$ in exactly $k$ steps. Taking its Laplace transform and summing it over all $k$ the authors obtain
\begin{equation}
\hat{q}(s)=\left( I - \hat{T}(s)\right)^{-1}n(0),
\label{eq:qlt}
\end{equation}
where $I$ is the identity matrix and $q$ and $n$ are vectors. After substituting Eq.~(\ref{eq:philt}) and Eq.~(\ref{eq:qlt}) into Eq.~(\ref{eq:nlt}) they obtained a generalised Montroll-Weiss master equation~\cite{Montroll1965Random} that applies to arbitrary network structures:
\begin{equation}
\hat{n}(s)=s^{-1}\left( I-\hat{D}_T(s)\right)\left( I-\hat{T}(s)\right)^{-1} n(0),
\end{equation}
where the components of the diagonal matrix are given as $\left( \hat{D}_T \right)_{ij}(s)\equiv \hat{T}_i(s)\delta_{ij}$. Taking the inverse Laplace transform leads to
\begin{equation}
\frac{dn}{dt}=\left( T(t) \ast \mathcal{L}^{-1}\left\{ \hat{D}^{-1}_T(s)\right\}-\delta (t)\right)\ast K(t)\ast n(t)
\end{equation}
where $\mathcal{L}^{-1}$ denotes the inverse Laplace transform and $\ast$ is the convolution with respect to time. Here the memory kernel $K$ characterises the amount of memory in the system. Because of the convolution they conclude that the temporal evolution of $n_i(t)$ depends on the states of the system at all times since the initial setting. For further details on the derivation see Refs.~\cite{Hoffmann2012Generalized, Hoffmann2013Random}.
The authors further obtained an effective transmission matrix $\mathbb{T}_{ij}$ for the whole network and found that if the dynamics of links are dictated by a Poisson process, a random walk on the temporal network is equivalent to a Poisson continuous-time random walk on a static network with links weighted by the number of interactions. They also concluded that in the Poissonian case the stationary solution of the random walk is a uniform vector. In contrast, if the dynamics of links is non-Poissonian, e.g., bursty, the stationary solution appears only in the limit $\tau_r \to \infty$ and it is not uniform. In terms of mean recurrence time, Speidel~\emph{et al.}~\cite{Speidel2015Steady} found that if inter-event times on different links are identically distributed then $\langle T_{i|i}\rangle \propto 1/k_i$, i.e., it is inversely proportional to the degree of nodes, thus determined by the structure and not by the dynamics of the network.
\subsubsection{Passive random walks}
In case of the passive random walk problem, the inter-event and residual time distributions are not identical but related to each other as shown in Eq.~(\ref{eq:rst_iet}). If $P(\tau_r)$ has a heavy tail, the inter-event time picked for the last active link, which transferred the walker from node $j$ to node $i$, will most likely be shorter than the residual times on the other links of the node $i$, so that the walker will most likely return to node $j$. This behaviour of getting stuck in conversations between two nodes has already been observed empirically by Saram\"aki and Holme~\cite{Saramaki2015Exploring}. This mechanism makes the system non-Markovian, as the destination of the walker at any node $i$ depends on its origin and not only on its current state. Furthermore, it was shown that, unlike for the active random walk, the approximated steady state of the passive random walk is the uniform distribution for any network and any distribution of inter-event times. In this case the mean recurrence time does not depend on the distribution of inter-event times either, as it appears as $\langle T_{i|i}\rangle \propto N\langle \tau \rangle/k_i$. It has also been shown that the active random walk produces smaller mean recurrence times for each node than the passive walk does when the inter-event time follows a power-law distribution. In contrast, the mean recurrence times are larger for the active random walk than for the passive random walk when the inter-event time follows a less heterogeneous Weibull distribution.
\subsection{Threshold models}
Threshold-driven contagion models define a family of dynamical processes, where the infection of an individual is conditional on some individual threshold of pathogen concentration or social influence, etc. In these systems individual thresholds together with the temporal and topological structure of the network determine the spreading dynamics. This is fundamentally different from the case of epidemic spreading, where the process is stochastic and controlled by a single rate of infection, characteristic of the modelled disease and not of the individual. Threshold models are important not only due to their epidemiological relevance, but also because they capture mechanisms that are recognised to drive social contagion phenomena, such as the spreading of memes, adoption of innovations, and decisions to join collective actions~\cite{Centola2007Complex}.
A widely known threshold model for static networks was proposed by Watts~\cite{Watts2002Simple}, which was recently extended for temporal networks~\cite{Backlund2014Effects, Karimi2013Threshold, Karimi2013Temporal, Karimi2015Tightly}. The model on temporal networks assumes that nodes can be in two mutually exclusive states, susceptible or infected (also called adopted). Initially each node is susceptible except a randomly selected seed node, which is set to be in the infected state. During simulations we follow the set of contacts in temporal order and let each contact be an opportunity for the nodes to learn about the state of their neighbours and to potentially change state. A node $i$ changes from the susceptible to the infected state if the number (or fraction) $\phi_i$ of its observed infected neighbours exceeds a given threshold $\Phi$. However, nodes remember the state of their observed neighbours only for a finite time window $\theta$. Thus a node gets infected at time $t$ only if it has $\phi_i>\Phi$ within a time frame $[t-\theta,t]$.
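A minimal implementation sketch of these rules is the following (our own, using the fraction-of-observed-neighbours variant discussed below; the contact list is an assumed toy example):
\begin{verbatim}
from collections import defaultdict

def threshold_model(contacts, seed_node, Phi, theta):
    """contacts: time-ordered list of (t, i, j); fraction-of-observed-neighbours rule."""
    infected = {seed_node}
    memory = defaultdict(list)                 # node -> [(time, neighbour, seen_infected)]
    for t, i, j in contacts:
        snapshot = {i: i in infected, j: j in infected}    # states before this contact
        for a, b in ((i, j), (j, i)):
            memory[a].append((t, b, snapshot[b]))
            memory[a] = [m for m in memory[a] if m[0] >= t - theta]  # forget old observations
            if a not in infected:
                seen = {nb for _, nb, _ in memory[a]}
                seen_inf = {nb for _, nb, inf in memory[a] if inf}
                if len(seen_inf) / len(seen) > Phi:        # phi_i exceeds the threshold
                    infected.add(a)
    return infected

# Toy contact list: node 2 first observes the susceptible node 3, so at t = 2.0 its
# fraction is only 1/2; once that observation falls out of the window it gets infected.
contacts = [(1.0, 2, 3), (1.5, 0, 1), (2.0, 1, 2), (5.0, 1, 2)]
print(threshold_model(contacts, seed_node=0, Phi=0.5, theta=3.0))
\end{verbatim}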
Karimi~\emph{et al.}~\cite{Karimi2013Threshold} have studied two versions of this model simulated on six different empirical temporal networks and on the corresponding random reference models where they shuffled the interaction times to eliminate burstiness. In one case, they defined $\phi_i$ as the fraction of infected neighbours among all neighbours observed in $\theta$ and found that the size of the infection cascade decreases with $\theta$. As they explained, a longer memory time window means a larger number of observed neighbours, who are mostly susceptible in the beginning of the process, thus decreasing the probability of infection of the central node. In addition, they also showed that burstiness slows down the emergence of infection cascades. On the other hand, in the model variant where they define $\phi_i$ as the absolute number of infected neighbours observed in $\theta$, the cascade size increases with $\theta$ and cascades evolve faster due to burstiness in most of the empirical networks.
Backlund~\emph{et al.}~\cite{Backlund2014Effects} have studied two further model variants, where $\phi_i$ is defined as the fraction of infected neighbours, relative to the static degree $k_i$ of node $i$, observed in $\theta$. They simulated the process on large empirical temporal networks of mobile calls, SMS, emails, and face-to-face interactions. Similarly to Karimi~\emph{et al.} they used the time-shuffled random reference model to address the effects of burstiness and in addition a random offset model (see Section~\ref{sec:triggev}) to eliminate triggered event correlations. In one model variant, which they called the \emph{stochastic threshold model}, they assumed a linear correspondence between $\phi_i$ of node $i$ and the probability of it getting infected. Although the threshold rule does not directly count the number of contacts from the same adopted neighbour, these interactions still contribute indirectly because the stochastic rule is activated whenever a contact occurs. The authors observed that this indirect effect of burstiness hinders the infection rate because of increased waiting times on links and redundant repeated events. Here adoption is driven by multiple adopted neighbours, which is unlikely if the bursty periods evolve only on single links or between a limited number of people. In this case time shuffling destroys burstiness, and spreads events more evenly across time, thus giving rise to an increased number of temporal paths ending at nodes within short time windows.
History-dependent contagion is a slightly different type of threshold model, which was studied by Takaguchi~\emph{et al.} on empirical bursty temporal networks~\cite{Takaguchi2013Bursty}. In this model each node $i$ of a network is assigned an internal variable $\nu_i$, which represents, e.g., the concentration of pathogen in the individual, or her actual interest in adopting something. An initially susceptible node becomes infected once the concentration $\nu_i$ reaches a threshold $\nu_{\textrm{th}}$, and keeps this state until the end of the process. The initially zero concentration of a node, i.e., $\nu_i=0$, is increased by unity each time the node interacts with an infected neighbour, and it decreases exponentially with the rate $\tau_d$ otherwise, as a function of the time between consecutive interactions (for a more precise definition, see Ref.~\cite{Takaguchi2013Bursty}). In order to study the effect of burstiness on this process, simulations have been carried out on real face-to-face interactions and email networks and on corresponding random reference systems where all temporal heterogeneity was removed by shuffling interaction times. By measuring the final infection size as a function of $\nu_{\textrm{th}}$ and $\tau_d$ in both real and randomised networks, it has been shown that in the original bursty temporal network the spreading evolves faster, it reaches more nodes in its final state, and the parameter space of global contagion is expanded, all compared to the reference systems. However, the reachability ratio~\cite{Holme2012Temporal} of nodes was found to be smaller. As the authors explained, this inconsistency may be caused by the competition between two opposite effects of randomisation, which increases the reachability ratio of each node to enhance spreading but eliminates the burstiness to suppress the epidemic.
\subsection{Evolutionary games}
There is yet another set of dynamical processes that have been studied on bursty temporal networks, namely different evolutionary games. All of the related studies employed empirical temporal networks and random reference models to understand how bursty temporal patterns affect the emergence of cooperation. Cardillo~\emph{et al.}~\cite{Castellano2009Statistical} studied the Hawk-Dove game and the Prisoner's dilemma on face-to-face interaction sequences. They used a snapshot representation of the temporal networks~\cite{Holme2012Temporal}, such that each snapshot represents the set of interactions appearing within a unit time period between any individuals in the dataset. A random reference model was defined by shuffling the snapshots, which provided a null model where the number of interactions per link and circadian fluctuations were kept, but temporal heterogeneity and event correlations were destroyed. Simulating games on the original and shuffled temporal networks showed that the temporal dynamics of social ties has a dramatic impact on the evolution of cooperation. In fact they showed that the dynamics of pairwise interactions favours selfish behaviour, and the cooperation is seriously hindered when the agent strategy is updated too frequently with respect to the typical time scale of agent interaction, and when realistic link temporal correlations are present.
Similar conclusions were drawn by Li~\emph{et al.}~\cite{Li2016Evolution} who studied the Prisoner's dilemma on similar real temporal networks. First they showed that the temporal network enhances the emergence of cooperation when compared to corresponding static structures. They also concluded that removing burstiness by shuffling interaction times in the dataset leads to improved cooperation. Thus they found that burstiness actually slows down the emergence of cooperation, just like in the case of many other dynamical processes.
\subsection{Dynamical process induced bursty behaviour}
Finally we would like to mention some studies, which propose potentially reversed situations, where instead of burstiness influencing dynamical processes, it is induced by them. More precisely, it has been shown that certain processes, such as the adoption of products or information, may induce bursty patterns in the interaction behaviour of individuals. We have already discussed one model study of Ref.~\cite{FernandezGracia2013Timing} in Section~\ref{sec:otherindivmods}, where, in a voter model, bursty patterns of update frequencies occurred at the individual level due to exogenous and endogenous update rules. However, one can find real-world examples of similar phenomena. Kikas~\emph{et al.}~\cite{Kikas2013Bursty} showed that in the online social network of Skype, the link creation dynamics of individuals evolves through long bursty trains, which are commonly triggered by the adoption of different services. This in turn evolves as a complex contagion process on the fabric of the emerging network~\cite{Karsai2016Local}.
In another study, Myers~\emph{et al.}~\cite{Myers2014Bursty} arrived at a similar conclusion by analysing link creation and removal bursts in Twitter. They identified bursts by comparing link addition rates to the average daily activity curves. They argued that link creation bursts are commonly induced by external processes like retweet bursts, content downloads, or protests, and that these bursts helped people to evolve a more homogeneous egocentric network in terms of interest. Furthermore, they showed that most of the new links created in bursty periods closed triangles in the network, and were thus responsible for shaping the structure locally and helping the formation of communities.
Finally another direction of modelling was recently proposed by \'Odor~\cite{Odor2014Slow}, showing that the interaction dynamics of systems in the critical Griffiths phase \footnote{In a critical system disorder can smear the phase transitions, making a discontinuous transition continuous or generating a Griffiths phase, in which critical-like power-law dynamics appears over an extended region around the critical point.} exhibits slow bursty dynamics with power-law distributed inter-communication times. Various static and dynamic network topologies including one-dimensional rings, generalised small-world, and ageing scale-free structures were considered. On top of these structures dynamical processes were simulated, such as the contact process or susceptible-infected-susceptible (SIS) dynamics, which are all known to exhibit a Griffiths phase when topological disorder exists. It was shown that the inter-communication time between neighbouring agents appears with a power-law tail with various exponent values depending on the system considered. These observations suggest that in the case of non-stationary bursty systems, the observed non-Poissonian dynamics can emerge as a consequence of an underlying hidden Poissonian network process that is either critical or exhibits strong rare-region effects.
\chapter{Discussion}
\label{chapter:concl}
In this monograph we have presented an up-to-date overview of dynamical systems of human behaviour that show bursty phenomena. These systems evolve through inhomogeneous temporal event sequences with periods of high event frequency alternating with low frequency periods. It is this dynamical feature that makes such systems very interesting yet very challenging to understand and explain. Systems that show bursty behaviour cannot be characterised simply as Poisson processes with a single temporal scale and exponential inter-event time distributions. Instead, bursty systems show strong temporal heterogeneities such that their dynamics is deemed to be non-Poissonian with broad inter-event time distributions.
Indeed, the quest to understand the bursty behaviour is interesting because it occurs in a variety of systems of Nature but also in man-made systems. One of the best-known examples of bursty behaviour is the dynamics of earthquakes, where the shocks at a given location appear burstily with the frequency of aftershocks decreasing as a power law and leading to a broad inter-event time distribution of shocks. From the theoretical point of view the stochastic processes underlying this and some other apparently very different phenomena like solar flares show universal features in the distributions of sizes, inter-event times, and temporal clustering, explainable in general by the theory of self-organised criticality (SOC). An example of a bursty system at a different scale is a single neuron or group of neurons firing spike trains with high frequency separated by intervals of low frequency of activities, which is proposed to be the result of an integrate-and-fire mechanism, commonly assumed in case of SOC systems. Further examples of bursty patterns can be found in switching between contrasting activities as in the case of sleep-wake patterns of animals and humans or stop-start motion of fruit flies.
These few examples of systems showing bursty temporal patterns were presented to define theoretical concepts and develop models towards understanding their behaviour. However, in the case of human bursty behaviour these concepts and models may be different, especially when it comes to the behaviour of a collection of mutually linked individuals forming a social connectome~\cite{SocialConnectome} or a social network. This issue of connectivity constituted yet another dimension and a challenge to study bursty temporal patterns of human sociality in terms of quantitative analysis and of phenomenological and quantitative theory. These systems have recently become amenable to quantitative studies due to digital communication technologies through which most human socio-economic transactions now occur and are recorded in large datasets.
At the behavioural level the timings of individual actions present heterogeneous temporal patterns, while similar dynamics was observed in dyadic social interactions between individuals, or in collective social phenomena of groups, communities and societies. One of the first observations of this kind was made in a study of email correspondence that reported a broad inter-event time distribution with a power-law tail~\cite{Eckmann2004Entropy} and was explained using a priority queuing model~\cite{Barabasi2005Origin}.
At the group or societal level, one has observed bursty dynamics, e.g., in the emergence of causal temporal behaviour motifs, the evolution of mass demonstrations, revolutions, global information cascades, and even wars of various kinds. In all these cases of human bursty phenomena the challenge is to characterise and model them in a unified way. A step forward in this direction has been the proposition that human bursty behaviour belongs to one of two universality classes with two different exponent values characterising the power-law inter-event time distributions and queuing models. However, this picture turned out not to be complete, as further empirical evidence from some bursty systems was found to give rise to various different exponent values.
In describing the human bursty behaviour the perspective of the priority queuing model is that the bursty patterns are consequences of people prioritising their tasks in the order of perceived importance, inducing intrinsic correlations between different tasks and resulting in bursty patterns of completed activities. Alternatively human behaviour is considered to be driven by external factors like circadian and weekly cycles without any intrinsic correlations, introducing a set of distinct characteristic time scales and giving rise to heavy tails due to alternating homogeneous and non-homogeneous Poisson processes. As further alternative approaches in describing bursty patterns one has assumed strong correlations between consecutive events and employed memory functions, or self-exciting point processes, or reinforcement mechanisms. Yet there are other models that have been proposed to describe human bursty behaviour based on self-organised criticality, or local structural correlations, or random walk, or contact process, or voter model process to introduce heterogeneous temporal patterns at the individual or system level.
The richness of emergent features in human bursty dynamics has generated the development of methodologies and models to ask even more complex scientific questions about the effects of non-Poissonian patterns of individuals on collective dynamical processes. A typical example is diffusion of information in a temporal social network, where individuals interact burstily but are connected together in a network where information can diffuse globally. Beyond the conventional modelling and simulation techniques of such processes, data-driven models and random reference systems were recently shown to be successful in addressing such questions.
As is evident from the above, not at all comprehensive, set of examples, there are still many open directions to take towards a better understanding of the underlying mechanisms and processes that lead to burstiness appearing in systems of human dynamics. In this monograph we have embarked on the road of addressing these questions of the underlying mechanisms of bursty behaviour in terms of data analysis as well as using various theoretical and modelling approaches. In order to present our research endeavour of human bursty behaviour as a logically proceeding narrative, we organised our review in six Chapters, including this Chapter. Starting with a general introduction in Chapter \ref{chapter:intro}, we provided a broader overview of bursty phenomena observed in Nature and in human dynamics, together with the general motivations and organisation principles for this monograph.
In Chapter \ref{chapter:meas} we presented the theoretical description and characterisation of bursty human dynamics. Starting from the description of discrete time series we went through all characteristic measures, like inter-event time distribution, burstiness parameter, memory coefficient, bursty train size distribution, autocorrelation function, and so on, which were borrowed or introduced to describe human bursty systems from the individual to the population level. With these quantities, we showed how to detect the temporal inhomogeneities and long-range memory effects in the event sequences of human dynamics. At the same time we also introduced methods of system-level characterisation, mainly in the frame of temporal networks, which have been intensively studied in recent years to describe temporal human social behaviour. Finally, as human dynamics intrinsically shows the cyclic patterns like the daily and weekly ones, the methods for deciphering the effects of such cycles were also described.
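For concreteness, we recall here the widely used form of one of these measures. For an event sequence with inter-event times $\tau$ of mean $\langle \tau \rangle$ and standard deviation $\sigma_\tau$, the burstiness parameter is commonly defined as
\[
B=\frac{\sigma_\tau - \langle \tau \rangle}{\sigma_\tau + \langle \tau \rangle},
\]
so that $B=-1$ for a perfectly regular sequence, $B=0$ for a Poisson process, and $B\to 1$ for extremely bursty signals; Chapter~\ref{chapter:meas} gives the precise definitions of this and the other measures listed above.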
In Chapter \ref{chapter:emp}, we made a comprehensive collection of a large number of empirical observations of human bursty systems recorded in various situations and stored in a number of datasets. We divided these observations into two main categories, i.e., individual activities and interaction-driven collective activities. In addition, we briefly discussed examples from human mobility, financial systems, and animal behaviour. More precisely, for the interaction-driven case we sorted the empirical findings by social interaction modality, from face-to-face interactions, mobile-phone based interactions, and communication by posted letters and emails to web-based social interactions, as they may reflect different degrees of sociality between a pair of individuals. To make such a large set of empirical studies easier for the reader to follow, we presented a systematic summary of all these observations in tables including a short description of each dataset, the observed values of some bursty characteristics, and the references to the original works.
Next in Chapter \ref{chapter:model} we summarised the main modelling directions, which have been studied for the understanding of the emergence of bursty human behaviour. We addressed three main modelling paradigms concerning priority queuing models, reinforcement and memory driven processes, and Poisson models of bursty phenomena. In addition, we summarised less recognised modelling directions together with random reference models, which have been used lately to highlight the effects of burstiness and temporal correlations in empirical event sequences. Furthermore, we discussed several generative models at the individual level, where activity dynamics of a single person were in focus, but we also summarised models of bursty dyadic interactions, and network models with emergent bursty behaviour.
In Chapter \ref{chapter:processes} we summarised studies addressing dynamical processes on bursty human interaction networks. Bursty human interactions have indisputable consequences for dynamical processes, as their heterogeneous timings largely control the possible transmission of any kind of information between interacting peers, or the timely connectedness of the temporal structure. To give a comprehensive review, we first discussed the relevant bursty characteristics like the inter-event and residual time distributions, ordering of events, triggered event correlations, node and link burstiness, etc., which were shown to play important roles in the early and late time behaviour of collective dynamical phenomena. In the second part of Chapter \ref{chapter:processes} we went through all the main families of dynamical processes studied so far in bursty interaction networks to understand how process-specific behaviour depends on the heterogeneous dynamics.
\section{Future directions and methodological approaches}
As stated before this monograph is meant to serve as an up-to-date overview of what has been learned so far about human bursty phenomena. However, we need to ask what is next, what can one learn more, should one try to combine different perspectives, and what should one focus on? Here our perspective to study human burstiness has largely been that of statistical physics, at least when it comes to methodology. This includes the analyses of various kinds of small or large datasets to learn about the dynamical characters and other basic properties of human burstiness at the level of individuals, pairs of individuals, and networks of individuals. This is followed by the theory, building plausible models, and doing the actual computational modelling to understand and explain how these properties at different levels could have emerged. This is also important in describing on one hand the processes leading to burstiness and on the other the dynamics of it. This perspective is very much data-driven but also data-limited, since we are dependent on the availability of data.
One of the recent ICT-related developments in studies of social systems is continuous and automated app-based data collection using smart mobile phones and wearable devices~\cite{Aledavood2015Daily,Stopczynski2014Measuring,Eagle2006Reality,Karikoski2010Measuring}. As today's smart phones include a number of sensors, it is possible to continuously collect lots of different types of data from single individuals, such as their activity times by monitoring phone screen on/off sequences, location, accelerometer data, calls, messages, data from apps and services usage, social network data (e.g., Facebook and Twitter), and data from wearables. Together with the smart phones' built-in facility to make online surveys and questionnaires, it is possible to collect qualitative and quantitative truly multidimensional ``social diary" data from a single individual and from interacting individuals in groups or in larger social networks. The smart phone based research approach can join seemingly different viewpoints into one research perspective, which could be called ``computational social science", once again aiming at an even deeper understanding of the processes involved in human behaviour in general and human bursty behaviour in more detail.
As for data-driven observations of human dynamics, especially when the system behaves burstily, we raise yet another open question, namely the stationarity of the process. The reason for this is the fact that human activities are predominantly driven by circadian rhythms with a characteristic time scale of one day, meaning that every day a new process is started. This is particularly important if the inter-event time distribution the dynamical system produces is fat-tailed, because then it takes an infinite time for the process to become stationary. This in turn induces a non-stationary bursty dynamics at the individual level, but also at the network level where it is entangled with the evolving network structure with created and broken ties and with nodes arriving and leaving the system. Hence this question is indeed crucial at any level of human dynamical systems and ongoing dynamical processes. However, despite its importance, the non-stationarity has so far neither been well-demonstrated from data nor been systematically considered in the framework of modelling, except in a few cases~\cite{Vazquez2007Impact,Guo2011Weblog,Rocha2013Bursts,Horvath2014Spreading,Delvenne2015Diffusion,Krings2012Effects}. Thus the non-stationary nature of the dynamics of the system, together with the somewhat better characterised feature of higher-order temporal correlations between events and inter-event times, still remains a question to be answered for a more comprehensive and deeper understanding of bursty human dynamics.
Our quest has so far been mostly concentrated on gaining understanding and explaining human bursty behaviour based on direct observations and analysis of related data. In order to learn more there is a need to reach out and consider whether bursty behaviour in other complex systems shows behavioural similarities, patterns, and universalities rather than differences, variation, and specifics. The former viewpoint is considered Platonic and thought to be akin to physics, while the latter is considered Aristotelian and thought to be akin to biology~\cite{Ball2017Complexity}. In studies of complex systems both viewpoints are of course needed as they complement each other. According to the Platonic view one tends at least implicitly to assume---on the basis of observed regularities---that there should be some kind of yet-uncovered governing laws that would lead to the behaviour of the complex system or at least give us some insight into what kind of plausible processes or mechanisms could be involved. As these regularities appear in various complex natural, social, and man-made systems and at different scales, one is led to believe that there are observable similarities that can be characterised with the same type of mathematical relationships, scaling laws, and behavioural models. With this type of over-arching perspective one could take the next step to uncover the similarities and even possible universalities as well as some kind of governing principles of human bursty behaviour at the multiple levels ranging from individuals to social networks. This could be achieved on the one hand with a data-analytics approach and on the other hand with computational modelling, which could be seen to constitute a physics approach to decipher human bursty behaviour in terms of structure, function, and response.
However, at this point one should ask whether the above described Platonic viewpoint of research is too one-sided. Is it too simplistic and overly self-assured as well as falling short of addressing some key properties of human burstiness while ignoring some of the possibly important details of the complex system of interest? This is a specifically relevant question in case of humans who can be observed as individuals or as members of larger social networks of various kinds as well as in a society with all sorts of cultural and socioeconomic ramifications. These issues are traditionally the realms of Cognitive or even Neurocognitive Science, Psychology, Social Psychology, Social Sciences as well as even Political Science, carrying their own research perspective(s) and methodological approaches. In them the details, especially behavioural differences, variation, and specifics matter thus making the perspective more akin to Aristotelian viewpoint. These systems are studied using various kinds of experimental methodologies to observe individual behaviour and alternatively surveys targeted to groups of individuals.
This Aristotelian approach may also provide advances in the future. By focusing on individuals and pairs of individuals using various brain research methodologies, one may be bound to get deeper insight into more of the ``micro'' level properties of human burstiness. In addition, in the case of small groups, using observational cognitive research methodologies one may better understand human burstiness as part of social gatherings. Although these experiments, due to being specifically set up for a certain purpose, carry a kind of ``in vitro'' flavour, they help us to build more realistic models of human dynamics. The same can be said about the survey or questionnaire studies of hundreds or thousands of individuals, which can be carried out with new digital platforms. Such studies may have features of individual subjectivity, which can be tuned to better investigate certain social situations in a controlled way. So rather than saying that one of these two viewpoints, either Platonic or Aristotelian, is more important than the other, we emphasise their equal importance and their mutual methodological complementarity in building deeper insight into human behaviour and its dynamics in general. This kind of complementary and joint Platonic and Aristotelian perspective can be expected to shed light on the governing laws (if any) or functional rules of human bursty behaviour and on possible behavioural similarities and universalities.
\pagebreak
\section*{Acknowledgement}
\small{Although this monograph has three authors it was written with the direct and indirect help of many others. First of all we owe many thanks to our collaborators who motivated and followed us to explore aspects of human bursty dynamics. They are Jari Saram\"aki, J\'anos Kert\'esz, Albert-L\'aszl\'o Barab\'asi, Mikko Kivel\"a, Lauri Kovanen, Raj Kumar Pan, Juan I. Perotti, Dashun Wang, Chaoming Song, Ginestra Bianconi, Nicola Perra, Enrico Ubaldi, Raffaella Burioni, Alessandro Vezzani, Riivo Kikas, Marlon Dumas, Eunyoung Moon, and Eun-Kyeong Kim. H.-H.J. personally thanks Woo-Sung Jung and Seunghwan Kim for their support.
We are especially thankful to Jari Saram\"aki and J\'anos Kert\'esz for the insightful discussions and for reading the manuscript and helping us to improve the clarity of the content and text of our monograph.
We are also very grateful to our editors Elisabeth A.L. Mol, Annelies Kersbergen, and Stephen Soehnlen, who accommodated our monograph at Springer from the beginning and provided support and useful advice throughout the writing and publication process. We would also like to thank our anonymous reviewers who provided us with several constructive comments at different stages of the process and helped us to widen the perspective of our monograph.
M.K. is thankful to the DANTE Inria team of the Laboratoire de l'Informatique du Parall\'elisme at the Ecole Normale Sup\'erieure de Lyon and to the IXXI Rh\^one Alpes Complex System Institute for providing a stimulating environment. H.-H.J. and M.K. are both thankful for the multiple visiting research grants from the Aalto Science Institute, which greatly helped their joint work despite the large distances. We are also thankful to the Complex Networks research team at the Department of Computer Science at Aalto University for hosting H.-H.J. and M.K. on several occasions.
Finally we all owe the greatest thanks to our families for their continuous support, patience and encouragement during the writing process. They provided us time and the background without which this monograph would not have been written.}
\chapter{Introduction}
\label{chapter:intro}
To begin with, one defines bursty behaviour or burstiness of a system as intermittent increases and decreases in the activity or frequency of events. Such a dynamical system showing large temporal fluctuations cannot be characterised by a Poisson process with a single temporal scale. Rather it can be considered as a result of non-Poissonian dynamics with strong temporal heterogeneities on various temporal scales\footnote{Non-Poissonian bursty dynamics is in general characterised by the heterogeneous distribution of inter-event times passing between the consecutive occurrences of a given type of event. In contrast, in a system with Poissonian dynamics inter-event times are distributed exponentially. However, many empirical inter-event time distributions are broad and follow a log-normal, Weibull, or power-law form, implying that the underlying mechanisms behind them may be different from a Poisson process. See more about this question in Chapter~\ref{chapter:meas}.}.
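To make the distinction drawn in this footnote explicit with a worked formula: for a Poisson process of rate $\lambda$ the inter-event times $\tau$ follow the exponential distribution $P(\tau)=\lambda e^{-\lambda \tau}$ with the single characteristic scale $\langle \tau \rangle = 1/\lambda$, whereas bursty systems typically display broad distributions, for instance a power-law tail $P(\tau)\sim \tau^{-\gamma}$, for which no single time scale characterises the dynamics.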
There are a number of systems in Nature that evolve following non-Poissonian dynamics. One of the commonly known examples is the emergent dynamics of earthquakes~\cite{Corral2004LongTerm, Davidsen2013Earthquake,Bak2002Unified, deArcangelis2006Universality,Smalley1987A}, in which the times of shocks occurring at a given location show bursty temporal patterns, as illustrated in Fig.~\ref{fig:BurstySignals}(a). The occurrence of such events is governed by the modified Omori's Law~\cite{Powell1975Statistical}, which states that the frequency of aftershocks decreases as a power law and can lead to a broad inter-event time distribution of shocks, when observed over a longer period of time. Another example of a natural phenomenon exhibiting bursty temporal patterns is solar flares induced by huge and rapid releases of energy~\cite{McAteer2007The,Wheatland1998The}. It has been shown that the stochastic processes underlying these apparently different phenomena show such universal properties that lead to the same distributions of event sizes, inter-event times, and temporal clustering~\cite{deArcangelis2006Universality}. These kinds of heterogeneities in the behaviour of systems emerging from different origins have been explained in the frame of self-organised criticality (SOC)~\cite{Bak1996How}, which provides a commonly accepted example of a theory for describing the burstiness of a system.
Neuronal firing sequences also feature bursty temporal patterns~\cite{Tsubo2012Powerlaw,Kepecs2003Information,Grace1984The,Kemuriyama2010A}, as depicted in Fig.~\ref{fig:BurstySignals}(b), which illustrates a firing sequence of a single neuron observed in vitro in a rat's hippocampus. Consecutive firings of a single neuron, but also of groups of neurons, evolve in spike trains, in which the short high-activity periods are separated by periods without any activity. Moreover, it has been suggested that neuronal firing patterns might be the result of the integrate-and-fire mechanism~\cite{Hesse2014Self}, commonly assumed to occur in self-organised critical systems. This theory accounts for bursty patterns evolving at the single neuron level, but at the same time could explain collective firing patterns in a connected network of neurons.
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\textwidth]{figs/BurstySignals.pdf}
\caption{(a) Sequence of earthquakes with magnitude larger than two at a single location (south of Chishima Island, 8th--9th October 1994). (b) Firing sequence of a single neuron from a rat's hippocampus. (c) Outgoing mobile phone call sequence of an individual. The shorter the time between consecutive events, the darker the colour coding. (\emph{Source:} Adapted from Ref.~\cite{Karsai2012Universal} under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.)}
\label{fig:BurstySignals}
\end{figure}
Further examples of burstiness have been observed in the context of biological evolution passing through bursty patterns~\cite{Uyeda2011The, Pagel1999Inferring} both on short and long time-scales with consistent patterns in the times of divergence across taxonomic groups. Here it has been argued~\cite{Uyeda2011The} that changes for short temporal scales (of the order of $\sim1$ million years) are constrained fluctuations and do not accumulate over time, while for long temporal scales ($\sim1$--$360$ million years) the evolution yields bursty patterns of increasing divergence due to radial phenotypic changes.
Burstiness is also seen in the contexts of ecology and animal dynamics where heterogeneous temporal patterns characterise the dynamics of single animal movements or even the evolution of larger ecological systems~\cite{Blonder2012Temporal,Bohorquez2009Common}. Several examples have shown~\cite{Proekt2012Scale,Sorribes2011Origin,Wearmouth2014Scaling} that the dynamics of animals, e.g., in initiating conflicts \cite{Proekt2012Scale}, communication, foraging~\cite{Sorribes2011Origin}, predators waiting in ambush~\cite{Wearmouth2014Scaling}, or the displacement of monkeys or mice~\cite{Boyer2012Nonrandom,Nakamura2008Of} form complex self-similar temporal patterns reproduced on multiple time scales very similarly to examples observed in human behaviour. In addition bursty temporal patterns of switching between contrasting activities have been found in case of humans~\cite{Barabasi2010Bursts} and animals as well as in mammalian wake-sleep patterns and in the stop-start motion of fruit flies~\cite{Reynolds2011On}. Based on these observations, it has been proposed~\cite{Sorribes2011Origin,Reynolds2011On} that such dynamics can be commonly investigated with priority queuing models, which were developed primarily to understand human behaviour, but could also be used more generally to make an association between the activity dynamics of animals and humans.
Apart from the above examples, scale-invariant bursty temporal patterns have also been found in several man-made systems. One example is written text, in which successive occurrences of the same word display deviations due to burstiness from the Poisson behaviour and are well characterised by a stretched exponential or Weibull distribution function~\cite{Altmann2009Beyond}. In the case of engineering systems, perhaps some of the best examples of bursty behaviour are in the context of packet-based traffic and wireless communication signals, which were found to evolve through non-Poissonian dynamics~\cite{Chlebus1995Is,Janevski2003Traffic,Lee2013Mobile,Paxson1995WideArea}. Due to their importance in packet overload and resource management, a thorough methodology has been developed to detect and predict bursty packet arrival patterns, while several communication protocols were proposed to avoid such situations~\cite{Kouvatsos2009Traffic}. As a final example of a man-made system behaving burstily we mention financial markets, in which non-Poissonian dynamics characterises time series of returns of financial assets, stock sales, order books, and other transactions. The characterisation of such phenomena falls within the scope of econophysics~\cite{Mantegna2007Introduction}, which has successfully applied methods borrowed from statistical physics and signal processing to understand the dynamics of financial systems.
Although the above-mentioned examples represent vastly different systems, they are all similar in showing bursty dynamical patterns at the phenomenological level. Due to these apparent similarities the expectation is that these systems can be studied with similar methodologies in terms of measuring and analysing their properties as well as developing analytical theories and modelling to describe their behaviour. Perhaps some of the best examples of these commonly applied developments are the concepts of self-organised criticality for $1/f$ noise~\cite{Bak1996How, Jensen1998Self,deArcangelis2006Universality,Ramos2011Self,Brunk2001Self,Beggs2003Neuronal}, priority queuing processes~\cite{Cobham1954Priority,Barabasi2005Origin}, and self-exciting point processes~\cite{Hawkes1971Spectra, Mehrdad2015Hawkes}. These have been successfully used to model several of the above-mentioned systems, suggesting common mechanisms, like integrate-and-fire, prioritising, or reinforcement processes, to be acting in the background. Moreover, the burstiness can appear at different scales of the system or levels of its organisation. In some cases it characterises the dynamics of single units, like the firing of a neuron, movement of an animal, earthquakes at a given location, or bursty overload of a single communication router. However, in other examples burstiness appears as a mesoscale or system-level phenomenon, like the collective firing of networked neurons, collective migration of animals, emerging earthquakes in a larger area, or correlated bursty traffic in communication networks. All in all, these examples provide evidence of some form of universality and multi-scale feature of burstiness, which commonly appears in Nature, man-made systems, and also in human dynamics, as we will next explain in more detail.
\section{Bursty human dynamics}
Let us now shift our focus from bursty behaviour in physical, biological, ecological and man-made technological systems to burstiness appearing in human behaviour, social relations, and various other endeavours of human sociality. In the observation of these systems the technological development plays an ever-increasing role, by having facilitated novel means for people to connect, communicate and interact with each other, at the same time leaving behind digital footprints of these events. All these have already affected and continue to mould our social actions and behaviour including the functions and services of our societies to the level that we can speak about the techno-social behaviour of people. In addition, the vast amounts of digital footprint data, which people generate using information communication technology (ICT), reflect their social interactions as part of their life course and as members of society.
In studying social systems researchers earlier confronted a quite insurmountable obstacle, namely the lack of data on human behaviour at multiple scales and channels of communication. The availability of large-scale data and recent advances in complex systems research, computational and data science, computer science, network science, and social science now facilitate the quantitative analysis and description of individual and social behaviour in a rather unprecedented way and detail. Advances in these areas used to be limited by the difficulties of getting access to or collecting large amounts of detailed data (Big Data), which is necessary for validating theories and developing a quantitative approach. However, we are more and more in the position to follow the dynamics of multiple and simultaneous actions and interactions of individuals, the interaction dynamics of groups and communities, and even the evolution of large-scale social systems. All this is possible with access to large amounts of anonymised data (for privacy preservation) collected from communication logs or personal electronic devices. This in turn allows us to observe directly the dynamics of millions of individuals or even to detect the emergence of collective behaviour ``in vivo'' with minimal observational bias or intervention. As we will discuss below these advances have already led to various observations of bursty temporal patterns in several aspects of human dynamics.
Bursty behaviour has been found at different levels of human dynamics. At the behavioural level the timings of actions by individuals were shown to present heterogeneous temporal patterns, while similar dynamics have been observed in dyadic social interactions, or even in collective social phenomena. Among the first observations was the study of Eckmann~\emph{et al.}~\cite{Eckmann2004Entropy}, who reported a broad inter-event time distribution with a power-law tail by analysing a dataset of email correspondence recorded at a university domain. A few months after that Barab\'asi published his paper entitled ``The origin of bursts and heavy tails in human dynamics''~\cite{Barabasi2005Origin}, where he proposed a priority queuing model to explain the broad inter-event time distributions. This seminal paper initiated an avalanche of studies to observe, characterise, and model bursty phenomena detected in a number of human activities. Various examples of burstiness were found, like emails~\cite{Eckmann2004Entropy,Barabasi2005Origin}, letter correspondence~\cite{Oliveira2005Human}, mobile phone calls and short messages~\cite{Karsai2012Universal}, web browsing~\cite{Dezso2006Dynamics}, printing~\cite{Harder2006Correlated}, library loans~\cite{Vazquez2006Modeling}, job submission to computers~\cite{Kleban2003Hierarchical}, and file transfer in computer networks~\cite{Paxson1995WideArea}, or even in arm movements of human subjects~\cite{Coley2008Arm}, just to mention a few. To demonstrate a typical signal of bursty activity we show the outgoing mobile phone call activity of a single person in Fig.~\ref{fig:BurstySignals}(c). In addition, further examples were identified at the group or societal level, such as the emergence of causal temporal motifs~\cite{Kovanen2013Temporal}, the evolution of mass demonstrations, revolutions, global information cascades, and wars~\cite{Bouchaud2013Crises,Tang2010Stretched}. For further information about these phenomena we refer the reader to a popular science book by Barab\'asi~\cite{Barabasi2010Bursts}, which gives an entertaining summary of several of these observations.
In his original modelling study Barab\'asi suggested that bursty activity patterns could be the consequence of prioritising tasks~\cite{Barabasi2005Origin,Oliveira2005Human,Vazquez2006Modeling}. In other words people do not execute their ``to-dos'' in a random fashion but assign importance to each task at hand. This induces intrinsic correlations between different tasks and results in bursty patterns of completed activities. Since then alternative and fundamentally different approaches have been proposed. One of the main alternative concepts was suggested by Malmgren~\emph{et al.}~\cite{Malmgren2008Poissonian,Malmgren2009Universality}, who argued that ``human behaviour is primarily driven by external factors such as circadian and weekly cycles, which introduces a set of distinct characteristic time scales, thereby giving rise to heavy tails''. This approach assumes no intrinsic correlations in human activities but models the dynamics as alternating homogeneous and non-homogeneous Poisson processes. The third main modelling concept assumes strong correlations between consecutive events and employs memory functions~\cite{Vazquez2006Impact, Han2008Modeling}, self-exciting point processes~\cite{Masuda2013SelfExciting,Jo2015Correlated}, or reinforcement mechanisms~\cite{Karsai2012Correlated,Wang2014Modeling} in simulating bursty activity patterns. Finally, several other modelling ideas were suggested assuming self-organised criticality~\cite{Tang2010Stretched}, local structural correlations~\cite{Myers2014Bursty}, some dynamical process like random walk~\cite{Goetz2009Modeling}, contact process~\cite{Odor2014Slow}, or voter model~\cite{FernandezGracia2013Timing} to introduce heterogeneous temporal patterns at the individual or system levels.
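As a minimal illustration of the first of these modelling concepts, the following Python sketch implements a Barab\'asi-type priority queue under the commonly assumed protocol: a fixed-length list of tasks carries random priorities, at every step the highest-priority task is executed with probability $p$ (a random one otherwise), and each executed task is replaced by a new one. The parameter names and the returned waiting times are our own illustrative choices rather than the exact setup of Ref.~\cite{Barabasi2005Origin}; for $p$ close to one the distribution of waiting times develops a heavy tail, while for $p=0$ it decays quickly.
\begin{verbatim}
import random

def priority_queue_waits(L=2, p=0.999, steps=100000, seed=0):
    # Keep L tasks as (priority, arrival step) pairs; at each step execute
    # the highest-priority task with probability p, otherwise a random one,
    # record how long it waited, and replace it with a new random task.
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(L)]
    waits = []
    for t in range(1, steps + 1):
        if rng.random() < p:
            i = max(range(L), key=lambda j: tasks[j][0])
        else:
            i = rng.randrange(L)
        waits.append(t - tasks[i][1])
        tasks[i] = (rng.random(), t)
    return waits
\end{verbatim}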
All these efforts led to the situation in which bursty human dynamics became a well-recognised research area, with wide-ranging studies, a rich set of methodologies, and several modelling concepts. Based on these advancements more far-reaching scientific questions have been addressed about the effects of non-Poissonian patterns of individuals on collective dynamical processes, whether they are ongoing or co-evolving with bursty action and interaction patterns of individuals. A typical example is the diffusion of information in a temporal social network where individuals interact in a bursty fashion but are connected together in a network where information can diffuse globally. The main question here is whether bursty dyadic interactions enhance or slow down the speed and/or control the emergence of globally spreading processes, like information diffusion, epidemics, or random walks~\cite{Holme2012Temporal}. Beyond the conventional modelling and simulation techniques of such processes, data-driven models and random reference systems~\cite{Karsai2011Small,Miritello2011Dynamical} were recently shown to be very successful in addressing such questions.
\section{About this monograph}
As we briefly summarised above the fascinating phenomenon of bursty dynamics of various human activities has been investigated widely over the last decade. All these studies contributed to this field that emerged with a broad set of observations, methodologies, modelling, and applications. Although there are still several open questions, this field became specialised enough to benefit from a structured review of already established results. This has been the main reason to motivate us to write this monograph. Over the last ten years categorically different interpretations were proposed to explain bursty patterns in human dynamics. Thus to inform the reader about all the concepts and ideas, beyond a categorical summary of earlier results, our secondary aim has been to introduce various explanations objectively and report the related scientific discussions.
After this brief introduction we have organised our work in five chapters. First in Chapter~\ref{chapter:meas} we summarise the relevant methodologies developed to observe, characterise, and measure the non-Poissonian dynamics of human activities. After the reader is familiarised with these techniques, in Chapter~\ref{chapter:emp} we turn to collect a number of related empirical observations of human bursty phenomena in various systems and at different organisational levels. In Chapter~\ref{chapter:model} we give a systematic summary of modelling concepts and principles, and finally in Chapter~\ref{chapter:processes} we discuss several studies addressing the effects of bursty behaviour on different dynamical processes. We close the monograph with a Chapter to summarise, discuss, and conclude as well as to propose some directions for future research.
To the reader of this monograph we want to emphasise that we focus exclusively on heterogeneous temporal patterns in human dynamics. Thus the observations and methodologies herein for studying other systems are out of our scope. More precisely, we focus on systems where the observed phenomena directly reflect the dynamics of human actions or interactions. Hence we do not discuss the dynamics of systems that are only indirectly related to human actions, as in the case of financial or transportation systems. We also remark that although our aim has been to compile as comprehensive a review as possible of the field of human bursty behaviour, we might have unintentionally missed some related articles, for which we apologise. Also note that a review paper has been written recently about related topics~\cite{Zhou2013Statistical}, however in a language that is not common in the international scientific community. Thus we hope that our work gives a valuable contribution to the field and helps students and experts who are interested in learning about bursty human dynamics.
\section{ Introduction}
\noindent
For a positive integer $n > 1 $, let $[n] = \{1, 2, ... , n\} $ and $V$ be the set of all $k$-subsets
and $(n-k)$-subsets of $[n]$.
The $bipartite\ Kneser\ graph$ $H(n, k)$ has
$V$ as its vertex set, and two vertices $A, B$ are adjacent if and only if $A \subset B$ or $B\subset A$. If $n = 2k$ it is obvious that
we do not have any edges, and in such a case, $H(n, k)$ is a null graph, and hence we assume that $n \geq 2k + 1$.
It follows from the definition of the graph $H(n, k)$ that it has
$2{n \choose k}$ vertices and the degree of each of its vertices is
${n-k \choose k} = {n-k \choose n-2k}$; hence it is a regular graph. It is clear that $H(n, k)$ is a bipartite graph.
In fact,
if $V_1=\{ v\in V(H(n ,k)) | \ |v| =k \}$ and $V_2=\{ v\in V(H(n ,k)) | \ |v| =n-k \}$, then $\{ V_1, V_2\}$
is a partition of $V(H(n ,k))$ and every edge of $H(n, k)$ has a vertex in $V_1$ and a vertex in $V_2$ and
$| V_1 |=| V_2 |$. It is an easy task to show that the graph $H(n, k)$ is a connected graph. The bipartite Kneser graph $H(2n+1, n)$ is known as the $middle\ cube$ $MQ_{2n+1} = {Q_{2n+1}}(n,n+1)$ [3] or $regular\ hyperstar$ graph $HS(2(n+1),n+1)$ [11,13].
The regular hyperstar graph $ {Q_{2n+1}}(n,n+1) $ has been investigated from various aspects, by various authors and some of the recent works about this class of graphs are [3,6,11,13,16,17]. The following figure shows the graph $H(5,2)$ ($ Q_{5}(2,3) $) in plane. Note that in this figure the set $\{i,j,k\} $ ($\{ i,j \}$) is denoted by $ijk$ ($ij$). \
\definecolor{qqqqff}{rgb}{0.,0.,1.}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=.65cm,y=.80cm]
\clip(-4.3,-2.38) rectangle (11.32,6.3);
\draw (-0.9,3.74) node[anchor=north west] {13};
\draw (0.9,5.6) node[anchor=north west] {123};
\draw (4.88,5.3) node[anchor=north west] {124};
\draw (5.7,3.78) node[anchor=north west] {24};
\draw (5.7,1.9) node[anchor=north west] {245};
\draw (5.0,0.66) node[anchor=north west] {45};
\draw (3.0,0.04) node[anchor=north west] {453};
\draw (1.14,0.52) node[anchor=north west] {43};
\draw (-0.8,1.68) node[anchor=north west] {143};
\draw (3.28,5.9) node[anchor=north west] {12};
\draw (4.92,4.76)-- (3.06,5.52);
\draw (3.06,5.52)-- (1.14,4.88);
\draw (1.14,4.88)-- (0.12,3.52);
\draw (0.12,3.52)-- (0.22,1.64);
\draw (0.22,1.64)-- (1.24,0.64);
\draw (1.24,0.64)-- (3.44,0.2);
\draw (3.44,0.2)-- (4.88,0.64);
\draw (5.72,3.38)-- (5.6,3.34);
\draw (4.92,4.76)-- (5.6,3.34);
\draw (5.72,1.68)-- (5.66,1.82);
\draw (5.66,1.82)-- (5.6,3.34);
\draw (5.66,1.82)-- (4.88,0.64);
\draw (3.96,3.8)-- (3.24,4.48);
\draw (3.96,3.8)-- (4.02,1.8);
\draw (4.02,1.8)-- (3.44,1.28);
\draw (1.64,3.54)-- (1.66,1.82);
\draw (4.86,3.38)-- (4.8,1.88);
\draw (0.7,3.82) node[anchor=north west] {23};
\draw (0.6,2.12) node[anchor=north west] {234};
\draw (4.68,3.86) node[anchor=north west] {14};
\draw (4.3,1.8) node[anchor=north west] {145};
\draw (3.32,5.) node[anchor=north west] {125};
\draw (3.5,4.48) node[anchor=north west] {25};
\draw (3.52,2.44) node[anchor=north west] {235};
\draw (2.84,1.18) node[anchor=north west] {35};
\draw (1.96,1.4) node[anchor=north west] {135};
\draw (2.0,4.2) node[anchor=north west] {15};
\draw (4.86,3.38)-- (0.22,1.64);
\draw (5.6,3.34)-- (1.66,1.82);
\draw (2.62,3.54)-- (2.58,3.54);
\draw (2.58,3.54)-- (3.24,4.48);
\draw (2.7,1.66)-- (2.68,1.46);
\draw (2.68,1.46)-- (2.58,3.54);
\draw (3.44,1.28)-- (2.68,1.46);
\draw (1.64,3.54)-- (4.02,1.8);
\draw (1.66,1.82)-- (1.24,0.64);
\draw (2.68,1.46)-- (0.12,3.52);
\draw (1.64,3.54)-- (1.14,4.88);
\draw (2.58,3.54)-- (4.8,1.88);
\draw (3.24,4.48)-- (3.06,5.52);
\draw (4.92,4.76)-- (4.86,3.38);
\draw (3.96,3.8)-- (5.66,1.82);
\draw (4.8,1.88)-- (4.88,0.64);
\draw (3.44,1.28)-- (3.44,0.2);
\draw (-1.9,-0.48) node[anchor=north west] {Fig 1. The bipartite Kneser graph H(5,2)};
\begin{scriptsize}
\draw [fill=qqqqff] (0.12,3.52) circle (1.5pt);
\draw [fill=qqqqff] (0.22,1.64) circle (1.5pt);
\draw [fill=qqqqff] (1.24,0.64) circle (1.5pt);
\draw [fill=qqqqff] (3.44,0.2) circle (1.5pt);
\draw [fill=qqqqff] (4.88,0.64) circle (1.5pt);
\draw [fill=qqqqff] (1.14,4.88) circle (1.5pt);
\draw [fill=qqqqff] (3.06,5.52) circle (1.5pt);
\draw [fill=qqqqff] (4.92,4.76) circle (1.5pt);
\draw [fill=qqqqff] (5.6,3.34) circle (1.5pt);
\draw [fill=qqqqff] (5.66,1.82) circle (1.5pt);
\draw [fill=qqqqff] (1.64,3.54) circle (1.5pt);
\draw [fill=qqqqff] (1.66,1.82) circle (1.5pt);
\draw [fill=qqqqff] (4.86,3.38) circle (1.5pt);
\draw [fill=qqqqff] (4.8,1.88) circle (1.5pt);
\draw [fill=qqqqff] (3.24,4.48) circle (1.5pt);
\draw [fill=qqqqff] (3.44,1.28) circle (1.5pt);
\draw [fill=qqqqff] (3.96,3.8) circle (1.5pt);
\draw [fill=qqqqff] (4.02,1.8) circle (1.5pt);
\draw [fill=qqqqff] (2.58,3.54) circle (1.5pt);
\draw [fill=qqqqff] (2.68,1.46) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}\
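As a concrete check of the above formulas against Fig. 1, for $n=5$ and $k=2$ the graph $H(5,2)$ has $2{5 \choose 2}=20$ vertices, and each vertex has degree ${5-2 \choose 2}={3 \choose 2}=3$, since every $2$-subset of $[5]$ lies in exactly three $3$-subsets.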
It was conjectured by Dejter, Erd\H{o}s, and Havel [6] among others, that the middle cube ${Q_{2n+1}}(n,n+1)$
is Hamiltonian.
Recently, M\"utze and Su [17] showed that the bipartite Kneser graph $H(n, k)$ has a Hamilton cycle for all values of $k$. Among various interesting properties of the bipartite Kneser graph $H(n, k)$, we are interested in its automorphism group and we want to know how this group acts on the vertex set of $H(n, k)$. Mirafzal [13] determined the automorphism group of $ HS(2n,n)= H(2n-1, n-1)$ and showed that $HS(2n,n)$ is a vertex-transitive non-Cayley graph. Also, he showed that $ HS(2n,n)$ is arc-transitive. \newline
Some of the symmetry properties of the bipartite Kneser graph $H(n,k)$, are as follows.
\begin{prop}$[16, \ Lemma \ 3.1]$ The graph $H(n, k)$ is a vertex-transitive graph.
\end{prop}
\begin{prop}$[16, \ Theorem \ 3.2] $ The graph $H(n, k)$ is a symmetric (or arc-transitive) graph.
\end{prop}
\begin{cor}$[16, \ Corollary \ 3.3]$ The connectivity of the bipartite Kneser graph $H(n, k)$ is
maximum, namely, ${n-k \choose k}$.
\end{cor}
\begin{prop}$[16, \ Proposition \ 3.5] $ The bipartite Kneser graph $H(n, 1)$ is a Cayley graph.
\end{prop}
\begin{thm}$[16, \ Theorem \ 3.6] $ Let $H(n,1)$ be a bipartite Kneser graph.
Then, $Aut(H(n,1)) \cong Sym([n] ) \times \mathbb{Z}_2$, where $\mathbb{Z}_2$ is the cyclic group of order $2$.
\end{thm}
In $[16]$ the authors proved the following theorem.
\begin{thm}$[16, \ Theorem \ 3.8]$ Let $n=2k-1$. Then, for the bipartite Kneser graph $H(n,k-1)$, we have
$Aut(H(n,k-1)) \cong Sym([n]) \times \mathbb{Z}_2$, where $\mathbb{Z}_2$ is the cyclic group of order $2$.
\end{thm}
In [16] the authors asked the following question. \\\\
{ \bf Question } Is the above theorem true for all possible values of $n,k$ ( $2k < n$)? \\
In the sequel, we want to answer the above question. We show that the above theorem is true for all possible values of $n,k$ ( $2k < n$). \
In fact, to the best of our knowledge, the present work is the first answer to this problem.
We determine the automorphism group of the graph $H(n, k)$ and show that $Aut(H(n, k)) \cong Sym([n] ) \times \mathbb{Z}_2$, where $\mathbb{Z}_2$ is the cyclic group of order $2$. In the final step of our work, we offer a new proof for determining the automorphism group of the Kneser graph $K(n,k)$, which we believe is more elementary than other known proofs of this result. Note that the known proofs for determining the automorphism groups of the Johnson graph $J(n,k)$ and the Kneser graph $ K(n,k)$ are independent of each other. We show how the automorphism group of the Kneser graph $K(n,k)$ can be obtained once the automorphism group of the Johnson graph $J(n,k)$ is known. \\
There are various important families of graphs $\Gamma$, in which we know that for a particular group $G$, we have
$G \leq Aut(\Gamma)$, but showing that in fact we have $G = Aut(\Gamma)$ is a difficult task. For example, consider the following cases. \newline
(1) \ The Boolean lattice $BL_n, n \geq 1$, is the graph whose vertex set is the set of all subsets of $[n]= \{ 1,2,...,n \}$, where two subsets $x$ and $y$ are adjacent if their symmetric difference has precisely one element. The hypercube $Q_n$ is the graph whose vertex set is $ \{0,1 \}^n $, where two $n$-tuples are adjacent if they differ in precisely one coordinate. It is an easy task to show that $Q_n \cong BL_n $, and $ Q_n \cong Cay(\mathbb{Z}_{2}^n, S )$, where $\mathbb{Z}_{2}$ is
the cyclic group of order 2, and $S=\{ e_i \ | \ 1\leq i \leq n \}, $ where $e_i = (0, ..., 0, 1, 0, ..., 0)$, with 1 at the $i$th position. It is an easy task to show that the set $H= \{ f_\theta |\ \theta \in Sym([n]) \} $, $ f_\theta (\{x_1, ..., x_n \}) = \{ \theta (x_1), ..., \theta (x_n) \}$ is a subgroup of $Aut(BL_n)$, and hence $H$ is a subgroup of the group $Aut(Q_n)$. We know that in every Cayley graph $\Gamma= Cay(G,S)$, the group $Aut(\Gamma)$ contains a subgroup isomorphic with the group $G$. Therefore, $\mathbb{Z}_{2}^n $ is a subgroup of $Aut(Q_n)$. Now, showing that $Aut(Q_n) = <\mathbb{Z}_{2}^n, Sym([n])>( \cong \mathbb{Z}_{2}^n \rtimes Sym([n]))$, is not an easy task [14]. \newline
(2) \ Let $n,k \in \mathbb{ N}$ with $ k < \frac{n}{2} $ and let $[n]=\{1,...,n\}$. The Kneser graph $K(n,k)$ is defined as the graph whose vertex set is $V=\{v\mid v\subseteq [n], |v|=k\}$ and two vertices $v$,$w$ are adjacent if and only if $|v\cap w|=0$. The Kneser graph $K(n,k)$ is a vertex-transitive graph [5]. It is an easy task to show that the set $H= \{ f_\theta \ |\ \theta \in Sym([n]) \} $, $ f_\theta (\{x_1, ..., x_k \}) = \{ \theta (x_1), ..., \theta (x_k) \}$, is a subgroup of $ Aut ( K(n,k) )$ [5]. But, showing that
$$ H= \{ f_\theta \ |\ \theta \in Sym([n]) \}= Aut ( K(n,k) )$$
is rather difficult [5, Chapter 7]. \newline
(3) \ Let $n$ and $k$ be integers with $n> k\geq1$ and let $[n] = \{1, 2, ... , n\}$. We now consider the bipartite Kneser graph $\Gamma = H(n,k)$. Let $A,B$ be $m$-subsets of $[n]$ and let $ | A \cap B |=t$. Let $\theta$ be a permutation in $Sym([n])$. It is an easy task to show that
$ | f_\theta(A) \cap f_\theta(B) |=t$, where $ f_\theta (\{x_1, ..., x_m \}) = \{ \theta (x_1), ..., \theta (x_m) \}$.
Moreover, if $ A\subset B$, then $ f_\theta(A) \subset f_\theta(B) $. Therefore, if $\theta \in Sym([n])$, then
$$ f_\theta : V(H(n,k) )\longrightarrow V(H(n,k)),
f_\theta (\{x_1, ..., x_k \}) = \{ \theta (x_1), ..., \theta (x_k) \} $$
is an automorphism of $ H(n,k) $ and the mapping, \newline
$ \psi : Sym ([n]) \longrightarrow Aut (H(n,k) )$, defined by
the rule $ \psi ( \theta ) = f_\theta $ is an injection. Therefore, the set $H= \{ f_\theta \ |\ \theta \in Sym([n]) \} $, is a subgroup of $ Aut ( H(n,k) ) $ which is isomorphic with $Sym([n])$.
Also, the mapping $\alpha : V(\Gamma)\rightarrow V(\Gamma) $, defined by the rule, $\alpha(v) = v^c$, where
$v^c$ is the complement of the subset $v$ in $[n]$, is an automorphism of the graph $H(n,k)$. In fact,
if $ A\subset B$, then $ B^c\subset A^c$, and hence if $\{A,B\}$ is an edge of the graph $ H(n,k) $, then $\{\alpha(A), \alpha(B)\}$ is an edge of the graph $ H(n,k) $. Therefore we have,
$ < H, \alpha> \leq Aut(H(n,k)) $. \newline
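As a small worked check of this last claim, in the graph $H(5,2)$ of Fig. 1 the map $\alpha$ sends the edge $\{\{1,2\},\{1,2,3\}\}$ to $\{\{3,4,5\},\{4,5\}\}$: from $\{1,2\}\subset \{1,2,3\}$ we get $\{1,2,3\}^c=\{4,5\}\subset \{3,4,5\}=\{1,2\}^c$, so adjacency is preserved while the parts $V_1$ and $V_2$ are interchanged. \newline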
In this paper, we want to show that for the bipartite Kneser graph $H(n,k)$, in fact we have, $ Aut(H(n,k))=<H, \alpha>$($ \cong Sym([n])\times \mathbb{Z}_2)$.
\section{Preliminaries}
In this paper, a graph $\Gamma=(V,E)$ is
considered as a finite undirected simple graph where $V=V(\Gamma)$ is the vertex-set
and $E=E(\Gamma)$ is the edge-set. For all the terminology and notation
not defined here, we follow $[1,4,5]$.
The graphs $\Gamma_1 = (V_1,E_1)$ and $\Gamma_2 =
(V_2,E_2)$ are called $isomorphic$, if there is a bijection $\alpha
: V_1 \longrightarrow V_2 $ such that $\{a,b\} \in E_1$ if and
only if $\{\alpha(a),\alpha(b)\} \in E_2$ for all $a,b \in V_1$.
In such a case the bijection $\alpha$ is called an isomorphism.
An automorphism of a graph $\Gamma $ is an isomorphism of $\Gamma
$ with itself. The set of automorphisms of $\Gamma$ with the
operation of composition of functions is a group, called the
$automorphism\ group$ of $\Gamma$ and denoted by $ Aut(\Gamma)$.
The
group of all permutations of a set $V$ is denoted by $Sym(V)$ or
just $Sym(n)$ when $|V| =n $. A $permutation\ group$ $G$ on
$V$ is a subgroup of $Sym(V)$. In this case we say that $G$ act
on $V$. If $X$ is a graph with vertex-set $V$, then we can view
each automorphism as a permutation of $V$, and so $Aut(X)$ is a
permutation group. If $G$ acts on $V$, we say that $G$ is
$transitive$ (or $G$ $acts\ transitively$ on $V$), when there is just
one orbit. This means that given any two elements $u$ and $v$ of
$V$, there is an element $ \beta $ of $G$ such that $\beta (u)= v
$.
The graph $\Gamma$ is called $vertex$-$transitive$, if $Aut(\Gamma)$
acts transitively on $V(\Gamma)$. For $v\in V(\Gamma)$ and $G=Aut(\Gamma)$, the stabilizer subgroup
$G_v$ is the subgroup of $G$ consisting of all automorphisms that
fix $v$. We say that $\Gamma$ is $symmetric$ (or $arc$-$transitive$) if, for all vertices $u, v, x, y$ of $\Gamma$ such that $u$ and $v$ are adjacent, also, $x$ and $y$ are adjacent, there is an automorphism $\pi$ in $Aut(\Gamma)$ such that $\pi(u)=x$ and $\pi(v)=y$.
Let $n,k \in \mathbb{ N}$ with $ k \leq \frac{n}{2}$, and let $[n]=\{1,...,n\}$. The $Johnson\ graph$ $J(n,k)$ is defined as the graph whose vertex set is $V=\{v\mid v\subseteq [n], |v|=k\}$ and two vertices $v$,$w$ are adjacent if and only if $|v\cap w|=k-1$. The Johnson graph $J(n,k)$ is a vertex-transitive graph [5]. It is an easy task to show that the set $H= \{ f_\theta |\ \theta \in Sym([n]) \} $, $f_\theta (\{x_1, ..., x_k \}) = \{ \theta (x_1), ..., \theta (x_k) \} $, is a subgroup of $ Aut( J(n,k) ) $[5]. It has been shown that $Aut(J(n,k)) \cong Sym([n])$, if $ n\neq 2k, $ and $Aut(J(n,k)) \cong Sym([n]) \times \mathbb{Z}_2$, if $ n=2k$, where $\mathbb{Z}_2$ is the cyclic group of order 2 [2,9,15]. \
Although, in most situations it is difficult to determine the automorphism group
of a graph $\Gamma$ and how it acts on the vertex set of $\Gamma$, there are various papers in the literature, and some of the recent works
appear in the references [7,8,9,10,12,13,14,15,16,18,19].
\section{Main results}
\begin{lem} Let $n$ and $k$ be integers with $\frac{n}{2}> k\geq1$, and
let $\Gamma= (V,E)= H(n,k)$ be a bipartite Kneser graph with partition $V=V_1 \cup V_2 $, $V_1 \cap V_2 =\varnothing$, where $ V_1= \{ v \ | \ v\subset [n], |v|=k \}$ and $ V_2= \{ w \ |\ w \subset [n], |w|=n-k \}$. If $f$ is an automorphism of $ \Gamma$ such that $f(v)=v$ for every $v\in V_1$, then $f$ is the identity automorphism of $ \Gamma$.
\end{lem}
\begin{proof}
First, note that since $f$ is a permutation of the vertex set $V$
and $f(V_1)=V_1$, then $f(V_2)= V_2$.
Let $ w\in V_2$ be an arbitrary vertex in $V_2$. Since $f$ is an automorphism of the graph $\Gamma$, then for the set $N(w)= \{ v | v\in V_1, v\leftrightarrow w \}$, we have $f(N(w))= \{ f(v) | v\in V_1, v\leftrightarrow w \}=N(f(w))$. On the other hand, since for every $v\in V_1$, $f(v)=v$, then
$f(N(w))=N(w) $, and therefore $N(f(w))=N(w) $. In other words, $w$ and $f(w)$ are $(n-k)$-subsets of $[n]$ such that
their families of $k$-subsets are the same. Now, since $k \geq 1$, every $(n-k)$-subset of $[n]$ is the union of its $k$-subsets, and hence $N(f(w))=N(w)$ implies $f(w)=\bigcup N(f(w))=\bigcup N(w)=w$. Therefore, for every vertex $x$ in $\Gamma$ we have $f(x)=x$ and thus $f$ is the identity automorphism of $\Gamma$.
\end{proof}
\begin{rem} If in the above lemma we instead assume that $f(v)=v$ for every $v\in V_2$, then a similar argument shows that $f$ is the identity automorphism of $ \Gamma$.
\end{rem}
\begin{lem} Let $\Gamma =(V, E)$ be a connected bipartite graph with partition $V=V_1 \cup V_2$, $V_1 \cap V_2 = \varnothing $, and let $ f$ be an automorphism of $\Gamma $. If there is a vertex $v \in V_1 $ with $ f(v) \in V_1$, then $f(V_1) = V_1$ and $f(V_2) =V_2$; if instead $ f(v) \in V_2$ for some vertex $v \in V_1 $, then $f(V_1) = V_2$ and $f(V_2) =V_1$.
\end{lem}
\begin{proof}
In the first step, we show that if $ w \in V_1 $ then $f(w) \in V_1$. We know that if $ w\in V_1$, then $ d_\Gamma (v, w) = d(v, w)$, the distance between $ v$ and $ w$ in the graph $ \Gamma$, is an even integer.
Assume $ d(v, w) =2l$, $ 0\leq 2l \leq D$, where $ D$ is the diameter of $\Gamma $. We prove by induction on $ l$, that $ f(w) \in V_1$. If $ l=0$, then $ d(v, w) =0$, thus $v=w$, and hence $f(w)=f(v) \in V_1$.
Suppose that if $ w_1 \in V_1$ and $ d(v, w_1)= 2(l-1)$, then $ f(w_1) \in V_1$.
Assume $ w \in V_1$ and $d(v, w)=2l $. Then, there is a vertex $ u \in V_1 $ such that
$d(v, u)=2l-2=2(l-1)$ and $ d(u, w)=2$.
We know (by the induction assumption) that $ f(u) \in V_1$, and since $ d(f(u),f(w))=2$ and $\Gamma$ is bipartite, it follows that $f(w) \in V_1 $. Now, it follows that $ f(V_1)=V_1$ and consequently $ f(V_2)=V_2$.
\end{proof}
\begin{cor}
Let $\Gamma =H(n,k) = (V,E) $ be a bipartite Kneser graph with partition $V=V_1 \cup V_2$, $V_1 \cap V_2 = \varnothing $. If $f$ is an automorphism of the graph $\Gamma$, then $ f(V_1)=V_1$ and $f(V_2) =V_2$, or $ f(V_1) = V_2 $ and $f(V_2) = V_1$.
\end{cor}
In the sequel, we need the following result for proving our main theorem.
\begin{lem}
Let $l,m,u$ be positive integers with $ l > u$ and $ m> u$. If $l>m$, then ${l \choose u} > {m \choose u}$.
\end{lem}
\begin{proof}
The proof is straightforward.
\end{proof}
\begin{thm}
Let $n$ and $k$ be integers with $\frac{n}{2}> k\geq1$, and
let $\Gamma= (V,E)= H(n,k)$ be a bipartite Kneser graph with partition $V=V_1 \cup V_2 $, $V_1 \cap V_2 = \varnothing$, where $ V_1= \{ v \ | \ v\subset [n], |v|=k \}$ and $ V_2= \{ w \ | \ w \subset [n], |w|=n-k \}$. Then, $Aut(\Gamma) \cong Sym([n]) \times \mathbb{Z}_2$, where $\mathbb{Z}_2$ is the cyclic group of order $2$.
\end{thm}
\begin{proof}Let $\alpha : V(\Gamma)\rightarrow V(\Gamma) $, defined by the rule, $\alpha(v) = v^c$, where
$v^c$ is the complement of the subset $v$ in $[n]$.
Also, let $ H =\{ f_\theta \ |\ \theta \in Sym ([n]) \} $, where $f_\theta (\{x_1, ..., x_k \}) = \{ \theta (x_1), ..., \theta (x_k) \}$. We have already seen that $H( \cong Sym([n]))$
and $<\alpha>( \cong \mathbb{Z}_2)$ are subgroups of the group $G= Aut(\Gamma)$.
We can see that $ \alpha \not\in H $, and for every $\theta \in Sym([n])$, we have, $f_{\theta}\alpha= \alpha f_{\theta}$ [15]. Therefore, \
$$ Sym([n]) \times \mathbb{Z}_2 \cong H \times <\alpha>\cong <H, \alpha> $$
$$ =\{ f_{\gamma} \alpha^i\ | \ \gamma \in Sym([n]), 0\leq i \leq 1 \}=S$$ \
is a subgroup of $G$. We now want to show that $G=S$. Let $f \in Aut(\Gamma)=G$. We show that $ f \in S$. There are two cases\newline
(i) There is a vertex $v \in V_1$ such that $f(v) \in V_1$, and hence by Lemma 3.3. we have $f(V_1)=V_1$.\newline
(ii) There is a vertex $v \in V_1$ such that $f(v) \in V_2$, and hence by Lemma 3.3. we have $f(V_1)=V_2$. \
\
\ \ (i) Let $f(V_1)=V_1$. Then, for every vertex $v \in V_1$ we have $f(v) \in V_1$, and therefore the mapping $ g=f_{|V_1}: V_1 \rightarrow V_1$, is a permutation of $V_1$ where $ f_{|V_1} $ is the restriction of $f$ to $V_1$. Let $ \Gamma_2= J(n,k)$ be the Johnson graph with the vertex set $V_1$. Then, the vertices $ v,w \in V_1$ are adjacent in $\Gamma_2$ if and only if $| v \cap w| =k-1$.\
We assert that the permutation $ g= f_{|V_1} $ is an automorphism of the graph $\Gamma_2$. \newline
For proving our assertion, it is sufficient to show that if $v,w \in V_1$ are such that $| v \cap w| =k-1$, then $ |g(v) \cap g(w) |= k-1$. Note that since $v,w$ are $k$-subsets of $[n]$, if $u$ is a common neighbour
of $v,w$ in the bipartite Kneser graph $ \Gamma=H(n,k)$, then the set $u$ contains both $v$ and $w$; in particular, $u$ contains the $(k+1)$-subset $ v\cup w$. Hence the number of vertices $u$ that are adjacent in $\Gamma$ to both $v$ and $w$ is ${n-k-1 \choose n-2k-1}$: indeed, if $t$ is the non-negative integer with $ k+1+t=n-k $, then $t= n-2k-1$, and if we
adjoin to the $(k+1)$-subset $v \cup w$ of $[n]$ any $n-2k-1$ elements of the complement of $ v \cup w $ in $[n]$, we obtain an $(n-k)$-subset $u$ of $[n]$ with $v \cup w \subseteq u$. Now, since $v$ and $w$ have $ {n-k-1 \choose n-2k-1} $ common neighbours in the graph
$\Gamma$, the vertices $g(v)$ and $g(w)$ must also have
${n-k-1 \choose n-2k-1}={n-k-1 \choose k}$ common neighbours in $\Gamma$, and
therefore $| g(v) \cap g(w) | = k-1$.
In fact, if $|g(v) \cap g(w)|= k-h < k-1 $, then $ h>1$, and hence $ |g(v) \cup g(w)| = k+h $. Thus, if $t$ is the non-negative integer such that $ k+h+t=n-k $, then $t= n-2k-h$. Hence, for constructing an $(n-k)$-subset $u \supseteq g(v) \cup g(w) $ we must adjoin $t=n-2k-h$ elements of the complement of $ g(v) \cup g(w) $ in $[n]$ to the set $ g(v) \cup g(w) $. Therefore the number of common neighbours of the vertices $g(v)$ and $ g(w) $ in the graph $\Gamma$ is
${n-k-h \choose n-2k-h}={n-k-h \choose k}$, and since $n-k-h < n-k-1$, Lemma 3.5. gives ${n-k-h \choose k} \neq {n-k-1 \choose k}$, a contradiction. \
Our argument shows that the permutation $g=f_{|V_1}$ is an automorphism of the Johnson graph $ \Gamma_2=J(n,k) $ and therefore by [2 chapter 9, 15] there is a permutation $ \theta \in Sym([n]) $
such that $g= f_{\theta}$. \
On the other hand, we know that $ f_{\theta} $, by its natural action on the vertex set of the bipartite Kneser graph $ \Gamma= H(n,k) $, is an automorphism of $ \Gamma$. Therefore, $l=f_{\theta }^{-1}f $ is an automorphism of the bipartite Kneser graph $\Gamma$
which is the identity on the subset $V_1$. We can now conclude, by Lemma 3.1., that $l= f_{\theta }^{-1}f$ is the identity automorphism of $ \Gamma $, and therefore $f=f_{\theta }$. \
In other words, we have proved that if $f$ is an automorphism of $\Gamma = H(n,k)$ such that $f(V_1)=V_1$, then $f=f_{\theta}$, for some $ \theta \in Sym([n] )$, and hence $f \in S$. \
\ (ii) We now assume that $ f(V_1) \neq V_1 $. Then, $f(V_1) = V_2$. Since the mapping $ \alpha $ is an automorphism of the graph $ \Gamma $, $f \alpha$ is an automorphism of $\Gamma $ such that $f \alpha(V_1) = f(\alpha(V_1))= f(V_2)=V_1$. Therefore, by what was proved in (i), we have $ f \alpha=f_{\theta} $ for some $ \theta \in Sym([n]) $. Now, since $ \alpha $ is of order $2$, $f= f_{\theta}\alpha^{-1} = f_{\theta} \alpha \in S=\{ f_{\gamma} \alpha^i \ | \ \gamma \in Sym([n]), 0\leq i \leq 1 \}$.
\end{proof}
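The statement of Theorem 3.6. can be checked by brute force for the smallest admissible case $n=4$, $k=1$: here $H(4,1)$ has $8$ vertices and $|Sym([4]) \times \mathbb{Z}_2| = 48$. The following minimal Python sketch (illustrative only; the vertex labelling and the adjacency test are the obvious ones and the script is not part of the proof) enumerates all $8!$ permutations of the vertex set and counts those that preserve adjacency. The same brute force applies, with a longer running time, to the Kneser graph $K(5,2)$ of Theorem 3.7. below.
\begin{verbatim}
from itertools import combinations, permutations

n, k = 4, 1   # smallest case with n/2 > k >= 1; expected |Aut| = 2 * 4! = 48

V1 = [frozenset(c) for c in combinations(range(1, n + 1), k)]
V2 = [frozenset(c) for c in combinations(range(1, n + 1), n - k)]
V = V1 + V2

# H(n,k): a k-subset v and an (n-k)-subset w are adjacent iff v is contained in w
edges = {frozenset((i, j)) for i, v in enumerate(V) for j, w in enumerate(V)
         if len(v) < len(w) and v < w}

def is_automorphism(p):
    # p permutes the vertex indices; check that every edge is mapped to an edge
    return all(frozenset(p[x] for x in e) in edges for e in edges)

count = sum(1 for p in permutations(range(len(V))) if is_automorphism(p))
print(count)   # prints 48 = |Sym([4]) x Z_2|
\end{verbatim}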
Let $n,k$ be integers with $n >4, \ k< \frac {n}{2}, \ [n]=\{1,2,...,n \}$, and let $K(n,k)= \Gamma$
be a Kneser graph. It is a well-known fact that $Aut (\Gamma)\cong Sym([n])$ [5, chap 7]. In fact,
the proof in [5, chap 7] shows that the automorphism group of the Kneser graph $K(n,k)$ is the group
$ H =\{ f_\theta \ |\ \theta \in Sym ([n]) \}\ (\cong Sym([n])) $.
The proof of this result, which appears in [5, chap 7], uses the following fact, one of the fundamental results in extremal set theory. \
\
{\bf Fact} (Erd\H{o}s-Ko-Rado) If $ n> 2k $, then $ \alpha (K(n,k)) = {n-1 \choose k-1}$, where $ \alpha (K(n,k)) $ is the independence number of the Kneser graph $K(n,k)$.
\
In the sequel, we provide a new proof for determining the automorphism groups of Kneser graphs. The main tool which we use in our method is Theorem 3.6. Note that the main tool which we use for proving Theorem 3.6. is the automorphism group of the Johnson graph $J(n,k)$, which has already been obtained [2 chapter 9, 15] by using elementary and relevant facts of graph theory and group theory.
\begin{thm}
Assume $n,k$ are integers and $n >4, \ k< \frac {n}{2}, \ [n]=\{1,2,...,n \}$. If $K(n,k)= \Gamma$ is a Kneser graph, then we have $Aut (\Gamma)\cong Sym([n])$.
\end{thm}
\begin{proof}
Let $g$ be an automorphism of the graph $\Gamma$. We now consider the bipartite Kneser graph $ \Gamma_1=H(n,k)=( V,E) $, with partition $V=V_1 \cup V_2 $, $V_1 \cap V_2 =\varnothing$, where $ V_1= \{ v \ | \ v\subset [n], |v|=k \}$ and $ V_2= \{ w \ | \ w \subset [n], |w|=n-k \}$. We define the mapping $ f: V \rightarrow V $ by the following rule: \
$$ f(v) = \begin{cases}
g(v) \ \ \ \ \ \ \ \ \ \ v \in V_1 \\ (\alpha g \alpha) (v) \ \ \ \ v \in V_2\\
\end{cases} $$ \
It is an easy task to show that $f$ is a permutation of the vertex set $V$ such that $f(V_1) = V_1 = g(V_1)$. We show that $f$ is an automorphism of the bipartite Kneser graph $\Gamma_1$. Let $\{v,w\}$ be an edge of the graph $\Gamma_1$ with $v \in V_1$. Then $v \subset w $, and hence $ v \cap w^c=v \cap \alpha(w) = \varnothing$. In other words, $ \{v , \alpha(w) \}$ is an edge of the Kneser graph $\Gamma$. Now, since the mapping $g$ is an automorphism of the Kneser graph $\Gamma$, $ \{g(v), \ g(\alpha (w)) \}$ is an edge of the Kneser graph $\Gamma$, and therefore we have $ g(v)\cap g(\alpha (w)) = \varnothing$. This implies that $ g(v) \subset {(g(\alpha (w))) }^c=\alpha( g(\alpha (w)) )$. In other words, $ \{g(v), \alpha(g(\alpha (w)) ) \} = \{f(v), f(w)\}$ is an edge of the bipartite Kneser graph $\Gamma_1$.
Therefore, $f$ is an automorphism of the bipartite Kneser graph $H(n,k)$. Now, since $f({V_1}) =V_1$, by Theorem 3.6. there is a permutation $\theta$ in $Sym([n])$ such that $f= f_{ \theta}$. Then, for every $v\in V_1$ we have $ g(v)= f(v)=f_{\theta}(v)$, and therefore $g=f_{\theta} $.
We can now conclude that $ Aut( K(n,k))$ is a subgroup of the group $ H =\{ f_\gamma \ |\ \gamma \in Sym ([n]) \} $. On the other hand, we can see that $H$ is a subgroup of $Aut(K(n,k))$, and therefore we have $Aut(K(n,k))=H =\{ f_\gamma \ | \ \gamma \in Sym ([n]) \} \cong Sym ([n])$.
\end{proof} \
\section{ Conclusion}
In this paper, we studied one of the algebraic properties of the bipartite Kneser graph $H(n,k)$. We determined
the automorphism group of this graph for all $n,k$ with $2k < n$
(Theorem 3.6). Then, using Theorem 3.6., we offered a new proof for determining the automorphism group of the Kneser graph $K(n,k)$ (Theorem 3.7).
\section{ Acknowledgements}
The author is thankful to the anonymous reviewers for their valuable comments and suggestions.
\section{Derivation of Eq.~(\ref{eq:effectivefokker})}
\label{effectiveequation}
\noindent Here, we derive Eq.~\eqref{eq:effectivefokker}, which describes the steady-state distribution of the traced variables.
Integrating out the unobserved degrees of freedom on both sides of the Fokker-Planck equation (Eq.~\eqref{eq:Fokker}), and using the Einstein convention of summing over repeated indices, we obtain:
\begin{equation}
\label{eq:FP3parts}
\overbracket{\int d\mathbf{x}_{\rm l} \partial_t p(\mathbf{x})}^{(I)}=-\overbracket{\int d\mathbf{x}_{\rm l} \partial_i [a_{ij}x_j p(\mathbf{x},t)]}^{(II)}+\overbracket{\int d\mathbf{x}_{\rm l} d_{ij}\partial_i\partial_j p(\mathbf{x},t)}^{(III)}
\end{equation}
where $a_{ij}$ and $d_{ij}$ are the elements of the interaction matrix $\mathbf{A}$ and the diffusion matrix $\mathbf{D}$, respectively. Rewriting the probability as $p(\mathbf{x},t)=p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)p_{\rm r}(\mathbf{x}_{\rm r} ,t)$, we can separately calculate each term in Eq.~(\ref{eq:FP3parts}). The first term $(I)$ gives:
\begin{align}
\begin{split}
\int d\mathbf{x}_{\rm l}\partial_t p_{\rm r}(\mathbf{x}_{\rm r} ,t)p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)&=\partial_t p_{\rm r}(\mathbf{x}_{\rm r},t)\int d\mathbf{x}_{\rm l} p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)=\partial_t p_{\rm r}(\mathbf{x}_{\rm r} ,t)
\end{split}
\end{align}
For the second term (II), we obtain
\begin{align}
\begin{split}
\label{eq.partII}
\int d\mathbf{x}_{\rm l} \partial_i [p_{\rm r}(\mathbf{x}_{\rm r} ,t)p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)a_{ij}x_j]&= \delta_{i,[{\rm r}]} \partial_i [p_{\rm r}(\mathbf{x}_{\rm r},t)\int d\mathbf{x}_{\rm l} p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)a_{ij}x_j] \\
&=\delta_{i,[{\rm r}]} \partial_i [ p_{\rm r}(\mathbf{x}_{\rm r} ,t)a_{ij} \mean{x_j|\mathbf{x}_{\rm r} ,t} ]
\end{split}
\end{align}
where $\delta_{i,[{\rm r}]}=1$ if $x_i$ is one of the observed coordinates and zero otherwise. In the first line we use that the probability density vanishes at infinity faster than $1/x$. Similarly, the third term (III) can be written as
\begin{align}
\begin{split}
\label{eq.partIII}
\int d\mathbf{x}_{\rm l} d_{ij}\partial_i\partial_j [p_{\rm r}(\mathbf{x}_{\rm r} ,t)p(\mathbf{x}_{\rm l} |\mathbf{x}_{\rm r} ,t)] &=\delta_{i,[{\rm r}]}\delta_{j,[{\rm r}]} d_{ij}\partial_i\partial_j [p_{\rm r}(\mathbf{x}_{\rm r} ,t) \int d\mathbf{x}_{\rm l} p(\mathbf{x}_{\rm l}|\mathbf{x}_{\rm r},t)]\\
&=\delta_{i,[{\rm r}]}\delta_{j,[{\rm r}]} d_{ij}\partial_i\partial_j p_{\rm r}(\mathbf{x}_{\rm r} ,t)
\end{split}
\end{align}
An explicit calculation of the conditional averages appearing in Eq.~(\ref{eq.partII}) yields $ \mean{\mathbf{x}_{\rm l}|\mathbf{x}_{\rm r}} = \mathbf{C}_{[{\rm l},{\rm r}]}\mathbf{C}_{[{\rm r},{\rm r}]}^{-1} \mathbf{x}_{\rm r} $~\cite{Tong}. We can substitute contributions (I), (II) and (III) in Eq.~(\ref{eq:FP3parts}) under steady-state conditions to obtain Eq.~(\ref{eq:effectivefokker}).
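The conditional average entering Eq.~(\ref{eq.partII}) is the standard Gaussian regression formula. As a minimal numerical sketch (assuming a zero-mean Gaussian with a hypothetical $3\times 3$ covariance matrix whose third coordinate plays the role of the unobserved variable; the numbers carry no physical meaning), one can check that $\mathbf{C}_{[{\rm l},{\rm r}]}\mathbf{C}_{[{\rm r},{\rm r}]}^{-1}$ coincides with the linear least-squares regression coefficients of $\mathbf{x}_{\rm l}$ on $\mathbf{x}_{\rm r}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# hypothetical positive-definite covariance of x = (x_r1, x_r2, x_l)
L = rng.normal(size=(3, 3))
C = L @ L.T + 0.5 * np.eye(3)
x = rng.multivariate_normal(np.zeros(3), C, size=200_000)

r, l = [0, 1], [2]                                    # observed / unobserved indices
G = C[np.ix_(l, r)] @ np.linalg.inv(C[np.ix_(r, r)])  # C_[l,r] C_[r,r]^{-1}

# <x_l | x_r> is linear in x_r for a Gaussian, so least squares recovers G
coef, *_ = np.linalg.lstsq(x[:, r], x[:, l], rcond=None)
print(G)        # exact coefficient matrix
print(coef.T)   # sample estimate, agrees up to statistical error
\end{verbatim}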
\section{Derivation of Eq.~(\ref{eq:torque})}
\noindent Here we derive the expression in Eq.~\eqref{eq:torque} for the cycling frequencies. To this end, we first show that the right-hand side of this equation is invariant under orientation-preserving linear transformations restricted to the two-dimensional reduced subspace. Let us consider such a transformation: $\mathbf{x}_{\rm r}'=\mathbf{B}\mathbf{x}_{\rm r}$, $\mathbf{f}_{\rm r}'=\mathbf{B}\mathbf{f}_{\rm r}$, and denote by $\mathbf{C}_{[{\rm r},{\rm r}]}'$ the reduced covariance matrix in the transformed coordinates. The covariance then transforms as
\begin{equation}
\label{eq:detB}
\mathbf{B}\mathbf{C}_{[{\rm r},{\rm r}]}\mathbf{B}^T=\mathbf{C}_{[{\rm r},{\rm r}]}'\quad\Longrightarrow\quad \det\mathbf{B}=\sqrt{\frac{\det\mathbf{C}_{[{\rm r},{\rm r}]}'}{\det\mathbf{C}_{[{\rm r},{\rm r}]}}}
\end{equation}
Using this result together with the transformation properties of the vector product, we obtain
\begin{equation}
\frac{\mean{\tau_{ij}}}{\sqrt{\det\mathbf{C}_{[{\rm r},{\rm r}]}}}=\frac{\mean{\mathbf{x}_{\rm r}\times\mathbf{f}_{\rm r}(\mathbf{x})}}{\sqrt{\det\mathbf{C}_{[{\rm r},{\rm r}]}}}=\frac{\mean{\mathbf{x}_{\rm r}'\times\mathbf{f}_{\rm r}'(\mathbf{x}')}}{\sqrt{\det\mathbf{C}_{[{\rm r},{\rm r}]}}}\frac{1}{\det \mathbf{B}}=\frac{\mean{\tau'_{ij}}}{\sqrt{\det\mathbf{C}_{[{\rm r},{\rm r}]}'}}.
\end{equation}
The coordinate invariance of this term allows us to specifically consider the convenient coordinates in which $\mathbf{C}_{[{\rm r},{\rm r}]}=\mathbf{I}$:
\begin{equation}
\label{eq:taucalc}
\frac{1}{\gamma}\mean{\tau_{ij}}=\frac{1}{\gamma}\mean{\mathbf{x}_{\rm r}\times\mathbf{f}_{\rm r}(\mathbf{x}) }=\frac{1}{\gamma}\int d\mathbf{x}_{\rm r}\mean{\mathbf{x}_{\rm r}\times\mathbf{f}_{\rm r}(\mathbf{x})|\mathbf{x}_{\rm r}}p_{\rm r}(\mathbf{x}_{\rm r})=\frac{1}{\gamma}\int d\mathbf{x}_{\rm r}\ \mathbf{x}_{\rm r}\times\mean{\mathbf{f}_{\rm r}(\mathbf{x})|\mathbf{x}_{\rm r}}p_{\rm r}(\mathbf{x}_{\rm r})
\end{equation}
We can further expand this expression by using $\mathbf{\Omega}_{\rm r}=\mathbf{A}_\textup{eff}+\mathbf{D}_{[{\rm r},{\rm r}]}\mathbf{C}_{[{\rm r},{\rm r}]}^{-1}$. (The expression for $\mathbf{\Omega}_{\rm r}$ follows immediately from Eq.~\eqref{eq:effectivecurrent}, since we require $\mathbf{j}_{\rm r}(\mathbf{x}_{\rm r})=\mathbf{\Omega}_{\rm r}\mathbf{x}_{\rm r} p_{\rm r}(\mathbf{x}_{\rm r})$.)
\begin{equation}
\label{eq:fcond}
\frac{1}{\gamma}\mean{\mathbf{f}_{\rm r}(\mathbf{x})|\mathbf{x}_{\rm r}}=\mathbf{A}_\textup{eff}\mathbf{x}_{\rm r}=\mathbf{\Omega}_{\rm r}\mathbf{x}_{\rm r}-\mathbf{D}_{[{\rm r},{\rm r}]}\mathbf{C}_{[{\rm r},{\rm r}]}^{-1}\mathbf{x}_{\rm r}.
\end{equation}
Combining this result with Eq.~\eqref{eq:taucalc}, we arrive at
\begin{equation}
\label{eq:meantau}
\frac{1}{\gamma}\mean{\tau_{ij}}=\int d\mathbf{x}_{\rm r}\ \mathbf{x}_{\rm r}\times (\mathbf{\Omega}_{\rm r}\mathbf{x}_{\rm r})p_{\rm r}(\mathbf{x}_{\rm r})-\int d\mathbf{x}_{\rm r}\ \mathbf{x}_{\rm r}\times(\mathbf{D}_{[{\rm r},{\rm r}]}\mathbf{C}_{[{\rm r},{\rm r}]}^{-1}\mathbf{x}_{\rm r})p_{\rm r}(\mathbf{x}_{\rm r}).
\end{equation}
Using the explicit form of $\mathbf{\Omega}_{\rm r}$ (see Eq.~\eqref{eq:omr}), we evaluate the first term in this expression,
\begin{equation}
\int d\mathbf{x}_{\rm r}\ \mathbf{x}_{\rm r}\times (\mathbf{\Omega}_{\rm r}\mathbf{x}_{\rm r})p_{\rm r}(\mathbf{x}_{\rm r})=\int d\mathbf{x}_{\rm r}\ \omega_{ij}(x_i^2+x_j^2)p_{\rm r}(\mathbf{x}_{\rm r})=\omega_{ij}(c_{ii}+c_{jj})=2\omega_{ij}.
\end{equation}
In addition, we confirm by direct calculation that, as expected, the second term in Eq.~\eqref{eq:meantau} vanishes:
\begin{align}
-\int d\mathbf{x}_{\rm r}\ \mathbf{x}_{\rm r}\times(\mathbf{D}_{[{\rm r},{\rm r}]}\mathbf{x}_{\rm r})p_{\rm r}(\mathbf{x}_{\rm r})&=\int d\mathbf{x}_{\rm r}\ (-x_j,\ x_i)
\left(\begin{array}{cc}
d_{ii} & d_{ij} \\
d_{ij} & d_{jj} \\
\end{array}\right)\czus{x_i}{x_j}p_{\rm r}(\mathbf{x}_{\rm r})=\\
&=\int d\mathbf{x}_{\rm r}\ [-d_{ii}x_ix_j-d_{ij}x_j^2+d_{ij}x_i^2+d_{jj}x_ix_j]p_{\rm r}(\mathbf{x}_{\rm r})=\\
&=\underbrace{c_{ij}}_0 (d_{jj}-d_{ii})+d_{ij}\underbrace{(c_{ii}-c_{jj})}_0=0
\end{align}
Altogether, this gives us the desired result:
\begin{equation}
\frac{1}{2\gamma}\frac{\mean{\tau_{ij}}}{\sqrt{\det\mathbf{C}_{[{\rm r},{\rm r}]}}}=\omega_{ij}
\end{equation}
\section{Derivation of Eq.~(\ref{eq:entropyred})}
\noindent Here we show that $\Pi _\textup{tot} \ge \Pi_{\rm r} $.
\begin{align}
\begin{split}
\frac{\Pi _\textup{tot}-\Pi_{\rm r}}{k_{\rm B}}
&=\int d\mathbf{x}\frac{\mathbf{j}^T(\mathbf{x})\mathbf{D}^{-1}\mathbf{j}(\mathbf{x})}{p(\mathbf{x})}-\int d\mathbf{x}_{\rm r}\frac{\mathbf{j}_{\rm r}^T (\mathbf{x}_{\rm r})\mathbf{D}_{[{\rm r},{\rm r}]}^{-1}\mathbf{j}_{\rm r} (\mathbf{x}_{\rm r})}{p(\mathbf{x}_{\rm r})}\\
&=\frac{\gamma}{k_{\rm B}}\sum_{j\in [l]}\int d\mathbf{x}\frac{v_j^2(\mathbf{x})}{(T+\alpha_j)}p(\mathbf{x}) + \frac{\gamma}{k_{\rm B}}\sum_{i\in [r]}\ \left[\left(\int d\mathbf{x}\frac{v_i^2(\mathbf{x})}{(T+\alpha_i)}p(\mathbf{x})\right)- \int d\mathbf{x}_{\rm r} \frac{\langle v_i(\mathbf{x})|\mathbf{x}_{\rm r} \rangle^2}{(T+\alpha_i)}p(\mathbf{x}_{\rm r})\right] \\
&=\frac{\gamma}{k_{\rm B}}\left[\sum_{j\in [l]}\int d\mathbf{x} \frac{v_j^2(\mathbf{x})}{(T+\alpha_j)}p(\mathbf{x}) +\sum_{i\in [r]}\ \int d\mathbf{x}_{\rm r}\left[\left(\int d\mathbf{x}_{\rm l} \frac{v_i^2(\mathbf{x})}{(T+\alpha_i)}p(\mathbf{x}_{\rm l}|\mathbf{x}_{\rm r})p(\mathbf{x}_{\rm r})\right)-\frac{\langle v_i(\mathbf{x})|\mathbf{x}_{\rm r} \rangle ^2}{(T+\alpha_i)} p(\mathbf{x}_{\rm r}) \right]\right]\\
&=\frac{\gamma}{k_{\rm B}}\left[\sum_{j\in [l]}\frac{\langle v_j^2(\mathbf{x})\rangle}{(T+\alpha_j)}+ \sum_{i\in [r]}\ \int d\mathbf{x}_{\rm r} \underbrace{\left(\langle v_i^2(\mathbf{x})|\mathbf{x}_{\rm r}\rangle - \langle v_i(\mathbf{x})|\mathbf{x}_{\rm r} \rangle ^2 \right)}_{\geq 0} \frac{p(\mathbf{x}_{\rm r})}{(T+\alpha_i)}\right]\geq 0
\label{eq.entineq}
\end{split}
\end{align}
where in the second line we use that $\mathbf{D}$ is diagonal, $\mathbf{v}(\mathbf{x})=\mathbf{j}(\mathbf{x})/p(\mathbf{x})$, and $\mathbf{j}_{\rm r}(\mathbf{x}_{\rm r})=p(\mathbf{x}_{\rm r}) \int d\mathbf{x}_{\rm l}\ \mathbf{v}_{\rm r}(\mathbf{x}) p(\mathbf{x}_{\rm l}|\mathbf{x}_{\rm r})= p(\mathbf{x}_{\rm r}) \mean{\mathbf{v}_{\rm r}(\mathbf{x})|\mathbf{x}_{\rm r}}$, which follows from the derivation of Eq.~\eqref{eq:effectivefokker}.
\section{Derivation of Eq.~(\ref{eq:entropy2d})}
\noindent Here we derive the expression for the partial entropy production rate in terms of the cycling frequencies (see Eq.\eqref{eq:entropy2d}). It is convenient to substitute the current field $\mathbf{j}=\mathbf{\Omega}\mathbf{x} p(\mathbf{x})$ in Eq.~\eqref{eq:entropyFokker}, which gives
\begin{align}
\begin{split}
\Pi&=k_{\rm B}\int d\mathbf{x} (\mathbf{\Omega} \mathbf{x})^T\mathbf{D}^{-1}(\mathbf{\Omega} \mathbf{x})p(\mathbf{x})= k_{\rm B} \int d\mathbf{x} \, x_i \Omega_{ij}^T (\mathbf{D}^{-1})_{jl} \Omega_{lm} x_m \\
&=k_{\rm B}\Omega_{ij}^T(\mathbf{D}^{-1})_{jl}\Omega_{lm} c_{mi}=k_{\rm B} \Tr{(\mathbf{\Omega}^T \mathbf{D}^{-1}\mathbf{\Omega} \mathbf{C})}.
\end{split}
\end{align}
Since the entropy production is invariant under coordinate transformations, we can use a more suitable coordinate system. In particular, we choose a set of coordinates such that $\mathbf{C}=\mathbb{1}$. In this set of coordinates, the entries $\Omega _{ij}$ of the matrix $\mathbf{\Omega}$ correspond to the cycling frequencies in the plane spanned by the $i^{th}$ and $j^{th}$ coordinates~\cite{Weiss2003}. Thus, in the 2D case $\mathbf{\Omega}_{\rm r}$ is given by
\begin{equation}
\label{eq:omr}
\mathbf{\Omega}_{\rm r} =\begin{pmatrix}
0 &\omega \\
-\omega & 0
\end{pmatrix}
\end{equation}
Furthermore, in this coordinate system $\mathbf{C}_{[{\rm r},{\rm r}]}$ and $\mathbf{\Omega}_{\rm r}$ commute, yielding
\begin{equation}
\Pi_{\rm r}^{\rm 2D}=k_B\omega ^2\Tr{(\mathbf{C}_{[{\rm r},{\rm r}]} \mathbf{D}_{[{\rm r},{\rm r}]}^{-1})}
\end{equation}
Note that this expression is invariant under coordinate transformations.
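As a consistency check, the relations above can be verified numerically for a two-dimensional linear model. The sketch below is illustrative only: the drift matrix $\mathbf{A}$, the diagonal diffusion matrix $\mathbf{D}$ and the choice $k_{\rm B}=1$ are hypothetical placeholders. Assuming the linear Langevin dynamics $\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}+\boldsymbol{\xi}$ with $\langle \xi_i(t)\xi_j(t')\rangle = 2 d_{ij}\delta(t-t')$, the stationary covariance obeys the Lyapunov equation $\mathbf{A}\mathbf{C}+\mathbf{C}\mathbf{A}^T+2\mathbf{D}=0$; the sketch builds $\mathbf{\Omega}=\mathbf{A}+\mathbf{D}\mathbf{C}^{-1}$, checks that $\mathbf{\Omega}\mathbf{C}$ is antisymmetric (divergence-free stationary current), and confirms that $k_{\rm B}\Tr{(\mathbf{\Omega}^T\mathbf{D}^{-1}\mathbf{\Omega}\mathbf{C})}$ equals $k_{\rm B}\,\omega^2\Tr{(\mathbf{C}\mathbf{D}^{-1})}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

kB = 1.0                              # units with k_B = 1 (illustrative)

A = np.array([[-2.0,  1.5],           # hypothetical stable drift matrix
              [-0.5, -1.0]])
D = np.diag([1.0, 0.3])               # hypothetical diagonal diffusion matrix

# stationary covariance of dx/dt = A x + xi, <xi_i xi_j> = 2 D_ij delta(t-t'):
# solves A C + C A^T + 2 D = 0
C = solve_continuous_lyapunov(A, -2.0 * D)

Omega = A + D @ np.linalg.inv(C)      # stationary current j = Omega x p(x)
OC = Omega @ C
assert np.allclose(OC, -OC.T)         # Omega C antisymmetric (div j = 0)

Pi = kB * np.trace(Omega.T @ np.linalg.inv(D) @ Omega @ C)
omega = OC[0, 1] / np.sqrt(np.linalg.det(C))   # cycling frequency (up to sign)

assert np.isclose(Pi, kB * omega**2 * np.trace(C @ np.linalg.inv(D)))
print(Pi, omega)
\end{verbatim}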
\end{appendices}
\end{document}
\section{Introduction}
Relativistic heavy-ion collisions are well suited to generate hot
and dense matter in the laboratory, although the matter is produced within small space-time
regimes. Whereas in low energy collisions one produces dense nuclear
matter with moderate temperature $T$ and large baryon chemical potential
$\mu_B$, ultra-relativistic collisions at Relativistic Heavy Ion
Collider (RHIC) or Large Hadron Collider (LHC) energies produce
extremely hot matter at small baryon chemical potential. In order to
explore the phase diagram of strongly interacting matter as a
function of $T$ and $\mu_B$ both type of collisions are mandatory.
According to lattice calculations of quantum chromodynamics
(lQCD)~\cite{Bernard:2004je,Aoki:2006we,Bazavov:2011nk}, the phase
transition from hadronic to partonic degrees of freedom (at
vanishing baryon chemical potential $\mu_B$=0) is a crossover. This
phase transition is expected to turn into a first order transition
at a critical point $(T_r, \mu_r)$ in the phase diagram with
increasing baryon chemical potential $\mu_B$. Since this critical
point cannot be determined theoretically in a reliable way the beam
energy scan (BES) program at RHIC aims to find the
critical point and the phase boundary by gradually decreasing the
collision energy~\cite{Mohanty:2011nm,Kumar:2011us}. Furthermore,
new facilities such as FAIR (Facility for Antiproton and Ion
Research) and NICA (Nuclotron-based Ion Collider fAcility) are
under construction to explore in particular the intermediate energy
range where one might study also the competition between chiral
symmetry restoration and deconfinement as suggested in Refs.
\cite{Cas16,Palmese}.
Since the partonic phase in relativistic heavy-ion collisions
appears only for a couple of fm/c, it is quite a challenge for
experiment to investigate its properties. The heavy flavor mesons
are considered to be promising probes in this search since the
production of heavy flavor requires a large energy-momentum
transfer. Thus it takes place early in the heavy-ion collisions, and
- due to the large energy-momentum transfer - should be described by
perturbative quantum chromodynamics (pQCD). The produced heavy
flavor then interacts with the hot dense matter
(of partonic or hadronic nature) by exchanging energy and momentum.
As a result, the ratio of the measured number of heavy flavors in
heavy-ion collisions to the expected number in the absence of
nuclear or partonic matter is suppressed at high transverse
momentum, and the elliptic flow of heavy flavor is generated by the
interactions in noncentral heavy-ion collisions. The experimental
data at RHIC and LHC show that the suppression of heavy-flavor
hadrons at high transverse momentum and its elliptic flow $v_2$ are
comparable to those of light
hadrons~\cite{ALICE:2012ab,Abelev:2013lca}. This is a puzzle for
heavy-flavor production and dynamics in relativistic heavy-ion
collisions as pointed out by many groups
~\cite{Moore:2004tg,Zhang:2005ni,Molnar:2006ci,Linnyk:2008hp,Gossiaux:2010yx,Nahrgang:2013saa,He:2011qa,He:2012df,He:2014epa,Uphoff:2011ad,Uphoff:2012gb,Cao:2011et,Greco,Nahrgang:2016lst}
and a subject of intense studies both theoretically and
experimentally. For recent reviews we refer the reader to Refs.
\cite{Andro,Rapp16}.
Furthermore, the electromagnetic emissivity of strongly interacting
matter is a subject of longstanding interest
\cite{Feinb,Shur,Bauer,ChSym} and is explored also in relativistic
nucleus-nucleus collisions, where the photons (and dileptons)
measured experimentally provide a time-integrated picture of the
collision dynamics. Especially dileptons are of particular interest
since their invariant mass provides an additional scale compared to
photons and allows to partly separate the production channels from
the early (possibly partonic) phase with those from the late
hadronic phase. After decades of experimental and theoretical
studies it has become clear that dileptons with invariant masses
below about 1.2 GeV preferentially stem from hadronic decays
providing some glance at the modification of hadron properties in
the dense and hot hadronic medium (cf. \cite{ChSym,PHSDreview} and
references therein) while the intermediate mass regime 1.2 GeV $< M
<$ 3 GeV should provide information about 'thermal' dileptons from the QGP
($q+{\bar q} \rightarrow e^+ e^-$, \ $q+\bar{q}\rightarrow g+\gamma^*$, \
$g+q({\bar q}) \rightarrow \gamma^* + q({\bar q})$, with $\gamma^* \rightarrow e^+ e^-$)
as well as the amount of correlated open
charm (semileptonic) decays from early production of $c{\bar c}$
pairs. Whereas at RHIC and LHC energies the background from $D {\bar
D}$ pairs overshines the contribution from the QGP in the
intermediate mass regime \cite{PHSDreview}, one might expect to find
some window in bombarding energy where the partonic sources dominate
since the charm production drops rapidly with decreasing bombarding
energy. In this work we intend to quantify this expectation and to
identify optimal systems for future measurements at FAIR/NICA
or at the RHIC Beam-Energy-Scan (BES) as well as at
the Super Proton Synchrotron (SPS) by the NA61 collaboration.
We recall that previously we have studied the contribution of semileptonic decays
from $D$-mesons to the dilepton spectra at RHIC ($\sqrt{s_{\rm NN}}$ = 200 GeV)
and LHC ($\sqrt{s_{\rm NN}}$ = 2.76 TeV) energies based on
an extended statistical hadronization model \cite{Jaako,Jaako2}.
The charm production in $AA$ collisions was accounted for by scaling the contribution
from $p+p$ collisions with the number of binary $NN$ collisions.
However, in these studies only the semileptonic decays of correlated
(and unscattered) $D {\bar D}$ pairs were considered whereas the contribution
from rescattered $D$ and ${\bar D}$ mesons had been neglected.
Also only hadronic rescattering has been incorporated for the decorrelation
of the produced $D {\bar D}$ pair.
Since these assumptions are too crude to correctly reflect the actual experimental
measurements with their detailed acceptance cuts, a fully microscopic reanalysis of the
charm dynamics and charm pair angular correlation is mandatory.
We here employ the microscopic parton-hadron-string dynamics (PHSD) approach, which
differs from the conventional Boltzmann-type models in the
aspect~\cite{Cassing:2009vt} that the degrees-of-freedom for the QGP
phase are off-shell massive strongly-interacting quasi-particles
that generate their own mean-field potential. The masses of the
dynamical quarks and gluons in the QGP are distributed according to
spectral functions whose pole positions and widths, respectively,
are defined by the real and imaginary parts of their self-energies
\cite{PHSDreview}. The partonic propagators and self-energies,
furthermore, are defined in the dynamical quasiparticle model (DQPM)
in which the strong coupling and the self-energies are fitted to
lattice QCD results \cite{Cassing:2008nn}.
We recall that the PHSD approach has successfully described numerous
experimental data in relativistic heavy-ion collisions from the
Alternating Gradient Synchrotron (AGS), SPS, RHIC to LHC
energies~\cite{Cassing:2009vt,PHSDrhic,Volo,PHSDreview,Eduard}. More
recently, the charm production and propagation has been explicitly
implemented in the PHSD and detailed studies on the charm dynamics
and hadronization/fragmentation have been performed at top RHIC and
LHC energies in comparison to the available
data~\cite{Song:2015sfa,Song:2015ykw,Song2016}. In the PHSD approach
the initial charm and anticharm quarks are produced by using the
PYTHIA event generator~\cite{Sjostrand:2006za} which is tuned to the
transverse momentum and rapidity distributions of charm and
anticharm quarks from the Fixed-Order Next-to-Leading Logarithm
(FONLL) calculations~\cite{Cacciari:2012ny}. The produced charm and
anticharm quarks interact in the QGP with off-shell partons and are
hadronized into $D-$mesons close to the critical energy density
($\sim$ 0.5 GeV/fm$^3$) for the crossover transition either through
fragmentation or coalescence. We stress that the coalescence is a
genuine feature of heavy-ion collisions and does not show up in p+p
interactions. The hadronized $D-$mesons then interact with light
hadrons in the hadronic phase until freeze out and final
semileptonic decay. We have found that the PHSD approach, which has
been applied for charm production in Au+Au collisions at
$\sqrt{s_{\rm NN}}=$200 GeV~\cite{Song:2015sfa} and in Pb+Pb
collisions at $\sqrt{s_{\rm NN}}=$2.76 TeV~\cite{Song:2015ykw},
describes the $R_{\rm AA}$ of $D-$mesons in reasonable agreement
with the experimental data from the STAR
collaboration~\cite{Adamczyk:2014uip,Tlusty:2012ix} and from the
ALICE collaboration~\cite{Adam:2015sza,Abelev:2014ipa} when
including the initial shadowing effect in the latter case. In this
work we, furthermore, apply the PHSD approach to charm and dilepton
production in relativistic heavy-ion collisions from
$\sqrt{s_{\rm NN}}=$ {8 GeV to 2.76 TeV}, analyse the angular correlation
between the charm quarks or $D$-mesons, respectively, and evaluate
the contribution to the dilepton spectra from their semileptonic
decays. Furthermore,
we will give predictions for dilepton mass spectra from Pb+Pb collisions
at the top LHC energy of $\sqrt{s_{\rm NN}}=$ 5.02 TeV for low and intermediate invariant masses.
This paper is organized as follows: The production of heavy mesons
in p+p collisions is described in Sec. II and $c{\bar c}$ pair
multiplicities in central Pb+Pb collisions are evaluated within PHSD
as a function of invariant energy. We then present the heavy quark
interactions in the QGP, their hadronization and
hadronic interactions, respectively, in Sec. III as well as the
semileptonic decays of the charm hadrons. Sec. IV is devoted to the
description of the dilepton sources incorporated in the actual PHSD calculations
while in Sec. V we calculate the $R_{AA}$ of single electrons from open charm mesons at midrapidity as a function of
transverse momentum and the modification of the $c{\bar c}$
correlation angle due to the partonic and hadronic interactions in central Pb+Pb collisions from
$\sqrt{s_{\rm NN}}=$ 8 to 200 GeV. We continue with excitation functions for dilepton spectra in these collisions
and investigate separately the contributions from hadronic and partonic sources as well as semi-leptonic decays from open charm. In
Sec. VI we will compare the PHSD calculations for dilepton spectra with experimental data
from $\sqrt{s_{\rm NN}}$ = 19.6 GeV to 2.76 TeV and present predictions for dilepton mass spectra from
Pb+Pb collisions at the top LHC energy of $\sqrt{s_{\rm NN}}=$ 5.02 TeV. Sec. VII closes our study
with a summary while Appendices A and B include the details of the partonic production channels for lepton pairs as well as an examination of the uncertainties in the charm cross section and the effects of experimental cuts on the dilepton spectra.
\section{Charm pairs from p+p collisions}\label{pp}
As pointed out in the Introduction the charm quark ($c {\bar c}$) pairs are
produced through initial hard nucleon-nucleon scattering in
relativistic heavy-ion collisions. We employ the PYTHIA event
generator to produce the heavy-quark pairs and modify their
transverse momentum and rapidity such that they are similar to those
from the FONLL calculations at RHIC and LHC energies (cf. Ref.
\cite{Song2016}). At SPS and lower energies we do not employ any
modification of the PYTHIA results. Fig.~\ref{fig1}~a) shows the charm production cross section for p+p collisions (as implemented in PHSD)
as a function of the invariant energy $\sqrt{s_{\rm NN}}$; it is fitted to a wide range of experimental data.
We can see a rather fast drop of the $c{\bar c}$ cross
section with decreasing energy, especially close to the threshold energy for charm-pair production. Note, however, that
the data show an uncertainty of about a factor of two, which implies a corresponding uncertainty
in the following PHSD calculations.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{fig1a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig1b.eps}}
\caption{a) The $c{\bar c}$ pair cross section in p+p
reactions as a function of the invariant energy $\sqrt{s_{\rm NN}}$ as implemented in PHSD. The
symbols denote experimental data from Refs. \cite{delValle:2011ex,Adamczyk:2012af,Adare:2017caq}. b) The number of primary
$c{\bar c}$ pairs in Pb+Pb collisions at b=2 fm as a function of
$\sqrt{s_{\rm NN}}$. The shaded area in (b) shows the uncertainty
in the number of $c{\bar c}$ pairs due to the uncertainty in the charm production cross section in p+p collisions.} \label{fig1}
\end{figure}
\subsection{Multiplicities for $c{\bar c}$ pairs in central Pb+Pb reactions}
We recall that in heavy-ion reactions the number of $c{\bar c}$
pairs produced is approximately given by the number of binary
nucleon-nucleon collisions $N_{bin}(b)$ (at given impact parameter
$b$) times the probability to produce a $c{\bar c}$ pair in an
inelastic nucleon-nucleon collision at given $\sqrt{s_{\rm NN}}$ which
is the ratio of the $c{\bar c}$ cross section to the
inelastic $N+N$ cross section. The scaling of the $c{\bar c}$
multiplicity with the number of binary $N+N$ collisions is rather
well reproduced in actual PHSD calculations where additionally the
smearing of $\sqrt{s_{\rm NN}}$ by Fermi motion is taken into account as
well as fluctuations in the number of binary nucleon-nucleon
collisions $N_{bin}(b)$ on an event by event basis. The corresponding
PHSD results for Pb+Pb collisions at b= 2 fm are displayed in Fig. \ref{fig1} b) as a function of
$\sqrt{s_{\rm NN}}$ and
demonstrate that the average $c{\bar c}$ pair multiplicity in
central collisions is far below unity at SPS and FAIR/NICA energies. In this case
we may gate in the PHSD calculations on events with a single $
c{\bar c}$ pair - selected by Monte-Carlo from the number of possible
binary $N+N$ reactions - and follow the dynamics of the charm quarks
throughout the time evolution in PHSD, i.e. partonic scattering,
hadronization by coalescence or fragmentation, and final hadronic
rescattering of charmed mesons and baryons (see below). At the end
all observables have to be multiplied by the probability for the
charm event as illustrated in Fig. \ref{fig1} b). The shaded area in Fig.
\ref{fig1} b) shows the uncertainty in the number of $c{\bar c}$ pairs due to the uncertainty
of the charm cross section in p+p collisions (cf. Fig. \ref{fig1} a)).
Note that for $\sqrt{s_{\rm NN}} <$ 20 GeV no data are available and the number of $c{\bar c}$ pairs
stems entirely from a parameterized function which takes into account the phase space of the final states.
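The estimate described above is simple arithmetic; a minimal sketch with purely illustrative placeholder numbers (not the values actually used in the PHSD calculations) reads:
\begin{verbatim}
# average number of c-cbar pairs per central event:
#   N_ccbar ~ N_bin(b) * sigma_ccbar(sqrt(s)) / sigma_NN_inelastic(sqrt(s))
N_bin = 800.0           # hypothetical number of binary NN collisions at b = 2 fm
sigma_cc_mub = 2.0      # hypothetical c-cbar cross section [microbarn]
sigma_inel_mb = 30.0    # hypothetical inelastic NN cross section [millibarn]

N_ccbar = N_bin * (sigma_cc_mub * 1.0e-3) / sigma_inel_mb
print(N_ccbar)          # well below one pair per event near SPS energies
\end{verbatim}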
\subsection{Fragmentation of charm and bottom in p+p collisions}
The produced charm and bottom quarks in hard nucleon-nucleon
collisions are hadronized by emitting soft gluons, which is denoted
by `fragmentation'. As in Ref. \cite{Song:2015sfa} we use the
fragmentation function of Peterson which reads
as~\cite{Peterson:1982ak}
\begin{eqnarray}
D_Q^H(z)\sim \frac{1}{z[1-1/z-\epsilon_Q/(1-z)]^2},
\end{eqnarray}
where $z$ is the momentum fraction of the hadron $H$ fragmented from
the heavy quark $Q$ while $\epsilon_Q$ is a fitting parameter which
is taken to be $\epsilon_Q$ = 0.01 for charm~\cite{Song:2015sfa} {and 0.004 for bottom~\cite{Song:2015ykw}. We note that the fragmentation function is applied only for the transverse momentum of the hadron while the rapidity is assumed to be the same as before the fragmentation.}
The chemical fractions of the charm quark decay into
$D^+,~D^0,~D^{*+},~D^{*0},~D_s$, and $\Lambda_c$ are taken to be
14.9, 15.3, 23.8, 24.3, 10.1, and 8.7
\%~\cite{Gladilin:1999pj,Chekanov:2007ch,Abelev:2012vra,Song:2015ykw},
respectively{, and those of the bottom quark decay into
$B^-,~\bar{B^0},~\bar{B^0}_s$, and $\Lambda_b$ are 39.9, 39.9, 11,
and 9.2 \%~\cite{DELPHI:2011aa}}. After the momentum and the species of the fragmented
particle are decided by Monte Carlo, the energy of the fragmented
particle is adjusted to be on-shell. Furthermore, the $D^*$ mesons
first decay into $D+\pi$ or $D+\gamma$, and then the $D-$ mesons
produce single electrons through the semileptonic
decay~\cite{Agashe:2014kda}, which is evaluated within PYTHIA.
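For orientation, the Peterson distribution above can be sampled directly. The following minimal sketch (a simple rejection sampler with an empirical envelope; the grid, the random seed and the charm value $\epsilon_Q=0.01$ are used for illustration only, and the sampler is not the routine employed in PHSD/PYTHIA) draws the momentum fraction $z$ carried by the fragmented hadron:
\begin{verbatim}
import numpy as np

def peterson(z, eps=0.01):
    # unnormalized D_Q^H(z) ~ 1 / ( z * (1 - 1/z - eps/(1-z))**2 ),  0 < z < 1
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

rng = np.random.default_rng(0)
zg = np.linspace(1e-4, 1.0 - 1e-4, 20000)
fmax = 1.05 * peterson(zg).max()          # crude rejection envelope

def sample_z(n, eps=0.01):
    out = []
    while len(out) < n:
        z = rng.uniform(1e-4, 1.0 - 1e-4, size=n)
        u = rng.uniform(0.0, fmax, size=n)
        out.extend(z[u < peterson(z, eps)])
    return np.asarray(out[:n])

z = sample_z(100000)
print(z.mean())   # sample mean; the distribution itself peaks near z ~ 0.9
\end{verbatim}
The transverse momentum of the fragmented hadron is then $p_T^{H}=z\,p_T^{Q}$, in line with the prescription above, while the rapidity is kept unchanged.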
\section{Heavy quark dynamics in A+A collisions}
We here briefly recall the various interactions of charm quarks (or
charm hadrons) in the partonic (hadronic) medium as introduced in
Ref. \cite{Song2016}.
\subsection{Heavy-quark interactions in the QGP}\label{QGP}
In PHSD the baryon-baryon and baryon-meson collisions at high-energy
produce strings. If the local energy density is above the critical
energy density ($\sim$ 0.5 GeV/fm$^3$), the strings melt into quarks
and antiquarks with masses determined by the temperature-dependent
spectral functions from the DQPM~\cite{Cassing:2008nn}. Massive
gluons are formed through flavor-neutral quark and antiquark fusion
in line with the DQPM. In contrast to normal elastic scattering,
off-shell partons may change their mass after the elastic scattering
according to the local temperature $T$ in the cell (or local
space-time volume) where the scattering happens. This automatically
updates the parton masses as the hot and dense matter expands,
i.e. the local temperature decreases with time. The same holds true
for the reaction chain from gluon decay to quark+antiquark ($g
\rightarrow q + {\bar q}$) and the inverse reaction ($q + {\bar q}
\rightarrow g$) following detailed balance. The local temperature is
determined from the local energy density in the rest frame of the
cell by employing the lattice QCD equation of state from Ref.
\cite{Aoki:2009sc}.
Due to the finite spectral width of the partonic degrees-of-freedom,
the parton spectral function has time-like as well as space-like
parts. The time-like partons propagate in space-time within the
light-cone while the space-like components are attributed to a
scalar potential energy density~\cite{Cassing:2009vt}. The gradient
of the potential energy density with respect to the scalar density
generates a repulsive force in relativistic heavy-ion collisions and
plays an essential role in reproducing experimental flow data and
transverse momentum spectra of hadrons with light quarks (see Ref.
\cite{PHSDreview} for a review). For charm quarks we assume in this
study that the heavy quark has a constant (on-shell) mass: the charm
quark mass is taken to be 1.5 GeV; the light
quarks/antiquarks as well as the gluons, however, are treated fully off-shell.
The heavy quarks and antiquarks produced in early hard collisions -
as described above - interact with the dressed lighter off-shell
partons in the QGP. The cross sections for the heavy-quark
scattering with massive off-shell partons have been calculated by
considering explicitly the mass spectra of the final-state particles
in Refs.~\cite{Berrehrah:2013mua,Berrehrah:2014kba}. The elastic
scattering of heavy quarks in the QGP is treated by including the
non-perturbative effects of the strongly interacting quark-gluon
plasma (sQGP) constituents, i.e. the temperature-dependent coupling
$g(T/T_c)$ which rises close to $T_c$, the multiple scattering etc.
The multiple strong interactions of quarks and gluons in the sQGP
are encoded in their effective propagators with broad spectral
functions (imaginary parts). As pointed out above, the effective
propagators, which can be interpreted as resummed propagators in a
hot and dense QCD environment, have been extracted from lattice data
in the scope of the DQPM~\cite{Cassing:2008nn}. We recall that the
divergence encountered in the $t$-channel scattering is cured
self-consistently, since the infrared regulator is given by the
finite DQPM gluon mass and width. For further details we refer the
reader to Refs.~\cite{Berrehrah:2013mua,Berrehrah:2014kba}.
We recall that charm interactions in the QGP -- as described by the
DQPM charm scattering cross sections -- differ substantially from
the pQCD scenario; the spatial diffusion constant for charm
quarks $D_s(T)$, however, is consistent with the lQCD data
\cite{Song:2015ykw,Berrehrah:2016vzw} within error bars.
\subsection{Heavy-quark hadronization}\label{hadronization}
The heavy-quark hadronization in nucleus-nucleus collisions is
realized via 'dynamical coalescence' and fragmentation. Here
`dynamical coalescence' means that the probability to find a
coalescence partner is {calculated from the Wigner density in coordinate and momentum space and the coalescence is realized } by Monte Carlo in the vicinity of the
critical energy density $0.4\le \epsilon \le 0.75$ GeV/fm$^3$ as
described in Ref. \cite{Song2016}. We note that such a dynamical
realization of heavy-quark coalescence is in line with the dynamical
hadronization of light quarks in the PHSD. Summing up the
coalescence probabilities from all candidates, it is decided by
Monte Carlo whether the heavy quark or heavy antiquark hadronizes by
coalescence or not, and which quark or antiquark among the
candidates will be the coalescence partner. If a random number is above the
sum of the coalescence probabilities, the coalescence is attempted again in the next
time step, until the local energy density falls below 0.4 $\rm
GeV/fm^3$. The heavy quark or heavy antiquark, which does not
succeed in hadronizing by coalescence throughout the expansion phase
of the partonic subsystem, then hadronizes through fragmentation as
in p+p collisions. We recall that charm quarks with low transverse
momenta $p_T$ dominantly hadronize {by coalescence} while those with large $p_T$
undergo fragmentation \cite{Song2016}.
\subsection{Interactions of charm mesons with the hadronic
medium}\label{hg} After the hadronization of heavy quarks and their
subsequent decay into $D, D^*$ mesons, the final
stage of the evolution concerns the interaction of these states with
the hadrons forming the expanding bulk medium. A realistic
description of the hadron-hadron scattering
---potentially affected by resonant interactions--- includes
collisions with the states
$\pi,K,\bar{K},\eta,N,\bar{N},\Delta,\bar{\Delta}$. A description of
their interactions has been developed in
Refs.~\cite{GarciaRecio:2008dp,Abreu:2011ic,Romanets:2012hm,Abreu:2012et,GarciaRecio:2012db,Garcia-Recio:2013gaa,Tolos:2013kva,Torres-Rincon:2014ffa,Tolos:2013gta}
using effective field theory. Moreover, after the application of an
effective theory, one should apply a unitarization method to the scattering amplitudes
to better control the behavior of the cross sections at moderate energies.
The details of the interaction for the four heavy states follow
quite in parallel by virtue of the ``heavy-quark spin-flavor
symmetry''. It accounts for the fact that if the heavy masses are
much larger than any other typical scale in the system, like
$\Lambda_{QCD}$, temperature and the light hadron masses, then the
physics of the heavy subsystem is decoupled from the light sector,
and the former is not dependent on the mass nor on the spin of the
heavy particle. This symmetry is exact in the ideal limit $m_Q
\rightarrow \infty$, with $m_Q$ being the mass of the heavy quark
confined in the heavy hadron. In the opposite limit $m_Q \rightarrow
0$, one can exploit the chiral symmetry of the QCD Lagrangian to
develop an effective realization for the light particles. This
applies to the pseudoscalar meson octet ($\pi,K,\bar{K},\eta$).
Although both symmetries are broken in nature (as in our approach,
when implementing physical masses), the construction of the
effective field theories incorporates the breaking of these
symmetries in a controlled way. In particular, it provides a
systematic expansion in powers of $1/m_H$ (inverse heavy-meson mass)
and powers of $p, m_l$ (typical momentum and mass of the light
meson). Following these ideas, we use two effective Lagrangians for
the interaction of a heavy meson with light mesons and with baryons,
respectively.
In the scattering with light mesons, the scalar ($D$) and vector
($D^*$) mesons are much heavier than the pseudoscalar meson octet
($\pi,K,\bar{K},\eta$). The latter have, in addition, masses smaller
than the chiral scale $\Lambda_{\chi} \simeq 4 \pi f_\pi$, where
$f_\pi$ is the pion decay constant. In this case one can exploit
standard chiral perturbation theory for the dynamics of the (pseudo)
Goldstone bosons, and add the heavy-quark mass expansion up to the
desired order to account for the interactions with heavy mesons. In
our case the effective Lagrangian is kept to next-to-leading order
in the chiral expansion, but to leading order in the heavy-quark
expansion~\cite{Abreu:2011ic,Abreu:2012et}. From this effective
Lagrangian one can compute the tree-level amplitude (or potential),
which describes the scattering of a heavy meson off a light meson as
worked out in Refs.~\cite{Tolos:2013kva,Torres-Rincon:2014ffa}.
For the heavy meson--baryon interaction we use an effective
Lagrangian based on a low-energy realization of a $t-$channel vector
meson exchange between mesons and baryons. In the low-energy limit
the interaction provides a generalized Weinberg-Tomozawa contact
interaction as worked out in Refs.
~\cite{GarciaRecio:2008dp,Romanets:2012hm,GarciaRecio:2012db,Garcia-Recio:2013gaa}.
The effective Lagrangian obeys SU(6) spin-flavor symmetry in the
light sector, plus heavy-quark spin symmetry (HQSS) in the heavy sector (which is preserved
whether the heavy quark is contained in the meson or in the baryon).
The tree-level amplitudes for meson-meson and meson-baryon
scattering have strong limitations in the energy range in which they
can be applied: they are reliable only for processes in which the
typical momentum transfer is low and which lie below any possible resonance.
To increase the applicability of the tree-level scattering
amplitudes and restore exact unitarity for the scattering-matrix
elements, we apply a unitarization method, which consists in solving
a coupled-channel Bethe-Salpeter equation for the unitarized
scattering amplitude $T_{ij}$ using the potential $V_{ij}$ as a kernel,
\begin{equation}
T_{ij} = V_{ij} + V_{ik} G_k T_{kj} \ ,
\end{equation}
where $G_k$ is the diagonal meson-meson (or meson-baryon) propagator
which is regularized by dimensional regularization in the
meson-meson (or meson-baryon) channel. We adopt the ``on-shell''
approximation to the kernel of the Bethe-Salpeter equation to reduce
it into a set of algebraic equations. We refer the reader to
Refs.~\cite{GarciaRecio:2008dp,Romanets:2012hm,GarciaRecio:2012db,Garcia-Recio:2013gaa,Tolos:2013kva,Torres-Rincon:2014ffa}
for technical details and individual results.
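In the on-shell approximation the coupled-channel equation above becomes a purely algebraic matrix relation at each energy, $T=(1-VG)^{-1}V$. A minimal sketch of this inversion for a toy two-channel case (the entries of $V$ and $G$ are hypothetical and only illustrate the channel-space structure; they are not the amplitudes of the references above) is:
\begin{verbatim}
import numpy as np

# toy two-channel kernel V and (regularized) diagonal loop functions G
V = np.array([[-1.2,  0.4],
              [ 0.4, -0.8]])
G = np.diag([0.05 + 0.02j, 0.03 + 0.00j])

# on-shell Bethe-Salpeter equation T = V + V G T  ->  T = (1 - V G)^{-1} V
T = np.linalg.solve(np.eye(2) - V @ G, V)
print(T)   # scanning V and G with energy, resonances appear as poles of T
\end{verbatim}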
The unitarization procedure allows for the possibility of generating
resonant states as poles of the scattering amplitude $T_{ij}$ in the
complex plane. Even when these resonances are not explicit
degrees-of-freedom, and we do not propagate them in our PHSD
simulations, they are automatically incorporated into the two-body
interaction. This is an important extension, because such
(intermediate) resonant states will strongly affect the scattering
cross section of heavy mesons due to the presence of resonances,
subthreshold states (bound states), and other effects like the
opening of a new channel when a resonance is forming (Flatt\'e
effect).
The resulting (unitarized) cross sections for the binary scattering
of $D,D^*$ (with any possible charged states) with
$\pi,K,\bar{K},\eta,N,\bar{N},\Delta,\bar{\Delta}$ are implemented
in the PHSD code considering both elastic and inelastic channels.
About 200 different channels are taken into account. Although the
unitarization method helps to extend the validity of the tree-level
amplitudes into the resonant region, one cannot trust the final
cross sections for higher energies. Beyond the resonant region the
transition between the high and low energy regimes is
interpolated such that the cross sections are continuous.
\section{Dilepton production channels}
We recall that in the hadronic sector PHSD is equivalent to the
Hadron-String-Dynamics (HSD) transport approach \cite{HSD} that has
been used for the description of $pA$ and $AA$ collisions from SIS
to SPS energies and has led to a fair reproduction of hadron
abundances, rapidity distributions and transverse momentum spectra
as well as dilepton spectra. In particular, HSD incorporates
off-shell dynamics for vector mesons and a set of vector-meson
spectral functions~\cite{Brat08dil} that covers possible scenarios
for their in-medium modification, i.e. in particular a collisional
broadening of the vector resonances. Note that in the off-shell
transport description, the hadron spectral functions change
dynamically during the propagation through the medium and evolve
towards the on-shell spectral function in the vacuum. The dilepton
production by a (baryonic or mesonic) resonance $R$ decay can be
schematically presented in the following way:
\begin{eqnarray}
BB &\to&R X \label{chBBR} \\
mB &\to&R X \label{chmBR} \\
&& R \to e^+e^- X, \label{chRd} \\
&& R \to m X, \ m\to e^+e^- X, \label{chRMd} \\
&& R \to R^\prime X, \ R^\prime \to e^+e^- X, \label{chRprd}
\end{eqnarray}
i.e. in a first step a resonance $R$ might be produced in
baryon-baryon ($BB$) or meson-baryon ($mB$) collisions. Then this
resonance can couple to dileptons directly (\ref{chRd}) (e.g.,
Dalitz decay of the $\Delta$ resonance: $\Delta \to e^+e^-N$) or
decays to a meson $m$ (+ baryon) (\ref{chRMd}){, which} produce{s}
dileptons via direct decays ($\rho, \omega, \phi$) or Dalitz decays
($\pi^0, \eta, \omega$). The resonance $R$ might also decay into
another resonance $R^\prime$ (\ref{chRprd}) which later produces
dileptons via Dalitz decay. Note, that in the combined model the
final particles -- which couple to dileptons -- can be produced also
via non-resonant mechanisms, i.e. `background' channels at low and
intermediate energies or string decay at high energies. In addition
to the hadronic channels above we account for the '$4 \pi$' channels,
i.e. the dilepton production in the two-body reactions $\pi+\rho$,
$\pi+\omega$, $\rho+\rho$, $\pi+a_1$ as described in detail in Ref.
\cite{Olenasps}.
The latter provide the background from hadronic channels in the
intermediate mass regime 1.2 GeV $< M <$ 3 GeV \cite{Olenasps}, which is not
shown explicitly in this study since the contribution of '$4\pi$'
channels is much smaller than the contribution
from open charm decays and the QGP radiation.
We recall that the influence of in-medium effects on the
vector mesons ($\rho, \omega, \phi$) has been extensively studied
within the PHSD approach in the past (cf. Refs. \cite{Brat08dil,Olenasps,PHSDreview})
and it has been shown that the collisional broadening scenario for the
in-medium vector-meson spectral functions is
consistent with experimental dilepton data from SPS to LHC energies in line with the findings by other groups \cite{ChSym}. Accordingly, in the present study we will adopt the
collisional broadening scenario for the vector-meson spectral functions
as the 'default' scenario.
In order to address the electromagnetic radiation of the partonic
phase, off-shell cross sections of $q\bar q\to\gamma^*$, $q\bar
q\to\gamma^*g$ and $qg\to\gamma^*q$ ($\bar q g\to\gamma^* \bar q$)
reactions taking into account the effective propagators for quarks
and gluons from the DQPM have been calculated in
Ref.~\cite{olena2010}. Here $\gamma ^*$ stands for the $e^+e^-$ or
$\mu^+ \mu^-$ pair. Dilepton production in the QGP - as created in
early stages of heavy-ion collisions - is calculated by implementing
these off-shell processes into the PHSD transport approach on the
basis of the same partonic propagators as used for the
time-evolution of the partonic system. For a review on
electromagnetic production channels within PHSD we refer the reader
to Ref. \cite{PHSDreview} and for the details of the dilepton cross sections
from off-shell partonic channels to Appendix A.
\section{Results for heavy-ion reactions}\label{results}
So far we have described the interactions of the heavy flavor -
produced in relativistic heavy-ion collisions - with partonic and
hadronic degrees-of-freedom. Since the matter produced in heavy-ion
collisions is extremely dense, the interactions with the bulk matter
suppress heavy flavors at high-$\rm p_T$. On the other hand, the
partonic or nuclear matter is accelerated outward (exploding), and a
strong flow is generated via the interactions of the bulk particles
and the repulsive scalar interaction for partons. Since the heavy
flavor strongly interacts with the expanding matter, it is also
accelerated outwards. Such effects of the medium on the heavy-flavor
dynamics are expressed in terms of the nuclear modification factor
defined as
\begin{eqnarray}
R_{\rm AA}(\rm p_T)\equiv\frac{dN_{\rm AA}/d{\rm p_T}}{N_{\rm binary}^{\rm AA}\times dN_{\rm pp}/d{\rm p_T}},
\label{raa}
\end{eqnarray}
where $N_{\rm AA}$ and $N_{\rm pp}$ are, respectively, the number of
particles produced in heavy-ion collisions and that in p+p
collisions, and $N_{\rm binary}^{\rm AA}$ is the number of binary
nucleon-nucleon collisions in the heavy-ion collision for the
centrality class considered. Note that if the heavy flavor does not
interact with the medium in heavy-ion collisions, the numerator of
Eq.~(\ref{raa}) will be similar to the denominator. For the same
reason, a $\rm R_{\rm AA}$ smaller (larger) than one in a specific
$\rm p_T$ region implies that the nuclear matter suppresses
(enhances) the production of heavy flavors in that transverse
momentum region.
In noncentral heavy-ion collisions the produced matter expands
anisotropically due to the different pressure gradients between in
plane and out-of plane. If the heavy flavor interacts strongly with
the nuclear matter, then it also follows this anisotropic motion to
some extend. The anisotropic flow is expressed in terms of the
elliptic flow $v_2$ which reads
\begin{eqnarray}
v_2({\rm p_T})\equiv\frac{\int d\phi \cos2\phi (dN_{\rm AA}/d{\rm p_T}d\phi)}{2\pi dN_{\rm AA}/d{\rm p_T}},
\end{eqnarray}
where $\phi$ is the azimuthal angle of a particle in momentum space.
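In practice both observables are evaluated from binned particle lists. The following minimal sketch uses synthetic placeholder momenta and angles (the binning, event numbers and $N_{\rm binary}$ value are hypothetical, and $v_2$ is computed as the plain $\langle\cos 2\phi\rangle$ per $\rm p_T$ bin with the reaction-plane angle set to zero); it only illustrates the bookkeeping behind Eq.~(\ref{raa}) and the definition of $v_2$ above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# synthetic per-particle records (placeholders for the PHSD event output)
pT_AA = rng.exponential(1.0, 200_000); phi_AA = rng.uniform(-np.pi, np.pi, 200_000)
pT_pp = rng.exponential(1.2, 200_000)
n_ev_AA = n_ev_pp = 10_000
N_binary = 800.0                  # hypothetical value for the centrality class

bins = np.linspace(0.0, 5.0, 26)
dN_AA, _ = np.histogram(pT_AA, bins)
dN_pp, _ = np.histogram(pT_pp, bins)

# R_AA(pT): per-event AA yield over N_binary times the per-event pp yield
R_AA = (dN_AA / n_ev_AA) / (N_binary * dN_pp / n_ev_pp + 1e-12)

# v2(pT) = <cos 2 phi> in each pT bin
idx = np.digitize(pT_AA, bins) - 1
v2 = np.array([np.cos(2.0 * phi_AA[idx == i]).mean() if np.any(idx == i) else 0.0
               for i in range(len(bins) - 1)])
print(R_AA[:3], v2[:3])
\end{verbatim}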
\subsection{Nuclear modification of dielectrons from heavy flavor}
In this subsection we focus on the $c{\bar c}$ dynamics and the dielectrons produced from heavy flavor pairs and their modification in relativistic heavy-ion collisions.
\begin{figure} [h!]
\centerline{
\includegraphics[width=8.6 cm]{fig2a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig2b.eps}}
\caption{The transverse momentum spectra of $D$ mesons (a) and the $R_{AA}(p_T)$ of single electrons from semi-leptonic decay of $D$ mesons (b) as a function of the transverse momentum $p_T$ in central Pb+Pb collisions from PHSD at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV at midrapidity.} \label{fig2}
\end{figure}
Fig. \ref{fig2} (a) shows the transverse momentum spectra of $D$ mesons in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39, and 200 GeV for $|y| < $ 1.
Since the cross section for charm production increases with collision energy as shown in Fig.~\ref{fig1} (a), the transverse momentum spectrum of $D$ mesons grows strongly with increasing collision energy and also becomes harder.
Fig. \ref{fig2} (b) {displays} the nuclear modification factor of single electrons from $D$ meson semi-leptonic decays at mid-rapidity ($|y|<1$) for the same set of central Pb+Pb collisions.
We mention that for the semi-leptonic decays of heavy flavors we use the subroutine `pydecay' of the PYTHIA event generator~\cite{Sjostrand:2006za}. Contrary to the $R_{\rm AA}$ at RHIC and LHC energies we find ratios well above unity at $\sqrt{s_{\rm NN}}$ = 8 and 11.5 GeV which implies an enhancement of the yield (at higher momenta) rather than the familiar suppression at RHIC and LHC.
The enhanced $R_{\rm AA}$ at low energies (8 and 11.5 GeV) may be dominantly attributed to the Fermi motion of nucleons in the colliding nuclei, which does not exist in p+p collisions and slightly increases the collision energy in binary nucleon-nucleon scattering.
Since the collision energies are close to the threshold energy for charm-pair production, where the production cross section increases rapidly as shown in Fig.~\ref{fig1} (a), a small enhancement of the collision energy gives a sizeable increase of the charm production and subsequently of the decay products.
We note in passing that the $R_{\rm AA}$ of single electrons at $\sqrt{s_{\rm NN}}$ = 39 and 200 GeV is consistent with our recent results in Ref.~\cite{Song2016}, where the $R_{\rm AA}$ is shown also for higher transverse momenta.
\begin{figure}[h!]
\centerline{
\includegraphics[width=8.6 cm]{fig3a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig3b.eps}}
\caption{Azimuthal angular distribution between the transverse
momentum of a heavy-flavor meson and that of an antiheavy-flavor
meson for each heavy flavor pair {at midrapidity $(|y|<1)$} before (dashed lines) and after the interactions
with the medium (solid lines) in {central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 17.3 (a) and 200 GeV} (b). }
\label{fig3}
\end{figure}
Since heavy flavor is always produced in pairs, there is an angular correlation between the heavy quark and heavy antiquark. If the heavy quark and antiquark from the same pair (through semi-leptonic decays) produce a positron and an electron, respectively, the produced dielectron also has an angular correlation.
On the other hand, the matter produced in heavy-ion collisions changes the transverse momentum of each heavy flavor and consequently also the angular correlation of the heavy flavor pair.
It has been suggested that the analysis of the azimuthal
angular correlation might provide information on the energy loss
mechanism of heavy quarks in the QGP~\cite{Cao:2015cba}, because
stronger interactions should result in less pronounced angular
correlations. Since in the PHSD we can follow up the fate of an
initial heavy quark-antiquark pair throughout the partonic
scatterings, the hadronization and final hadronic rescatterings, the
microscopic calculations allow to shed some light on the correlation
between the in-medium interactions and the final angular
correlations.
Fig. \ref{fig3} shows the azimuthal angular distribution between the
transverse momentum of {charm ($D$)} and that of {anti-charm ($\bar{D}$)} for each {charm} pair {at midrapidity $(|y|<1)$} before (dashed lines) and after
the interactions with the medium (solid lines) in central Pb+Pb
collisions {at $\sqrt{s_{\rm NN}}$ = 17.3 and 200 GeV}.
The azimuthal angle between the initial charm and anti-charm quarks is provided by the PYTHIA event generator and
peaks around $\phi= 0$ for $\sqrt{s_{\rm NN}}$ = 17.3 GeV, while we find a maximum around $\phi= \pi$ for $\sqrt{s_{\rm NN}}$ = 200 GeV.
After the interaction with the hadronic and partonic matter, however, the azimuthal angle between the $D$ and $\bar{D}$ has a maximum near $\phi= 0$ at both collision energies.
In other words, the azimuthal angle changes little in low-energy collisions, but considerably in high-energy collisions. As shown in our previous study~\cite{Song:2015ykw} the shift of the maximum in the azimuthal angle from $\pi$ to $0$ at $\sqrt{s_{\rm NN}}=$ 200 GeV can be attributed to the strong interaction of charm with radial flow.
\begin{figure}[h!]
\centerline{
\includegraphics[width=8.6 cm]{fig4a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig4b.eps}}
\caption{Invariant mass spectra of dielectrons from charm pairs with (red lines) and without the interactions with the hot medium (blue lines) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 17.3 (a) and 200 GeV (b).}
\label{fig4}
\end{figure}
Fig.~\ref{fig4} shows the invariant mass spectra of dielectrons from charm pairs with (red lines) and without the interactions with hot medium (blue lines) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 17.3 (a) and 200 GeV (b).
We can see that the invariant mass spectrum of dielectrons changes little for $\sqrt{s_{\rm NN}}$ = 17.3 GeV, while it is considerably suppressed at large invariant mass at $\sqrt{s_{\rm NN}}$ = 200 GeV.
This suppression can be understood from Figs.~\ref{fig2} and \ref{fig3}, considering that
the invariant mass of the dielectron depends on the momenta of electron and positron and also on the angle between them.
Figs.~\ref{fig2} and \ref{fig3} clearly show that the momenta of electron and positron are suppressed and the azimuthal angle between them decreases at $\sqrt{s_{\rm NN}}$ = 200 GeV; both effects decrease the invariant mass of the dielectron.
On the other hand, the momenta of electron and positron and the azimuthal angle do not change much at $\sqrt{s_{\rm NN}}$ = 17.3 GeV such that the dielectron spectrum stays approximately unchanged.
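The qualitative argument can be made explicit: in the massless-lepton approximation the pair mass is
\begin{align*}
M^2=(p_{e^+}+p_{e^-})^2\simeq 2\,E_{e^+}E_{e^-}\,(1-\cos\theta_{+-}),
\end{align*}
where $\theta_{+-}$ is the opening angle between the electron and the positron. A softening of the single-lepton momenta (Fig.~\ref{fig2}) and a smaller opening angle (Fig.~\ref{fig3}) therefore both shift the pairs towards lower invariant masses, as seen at $\sqrt{s_{\rm NN}}$ = 200 GeV in Fig.~\ref{fig4}.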
\subsection{Excitation function of dielectron production in Pb+Pb collisions from $\sqrt{s_{\rm NN}}=$8 to 200 GeV}
As mentioned in the previous sections, the dileptons produced in relativistic heavy-ion collisions can be classified into three parts: i) dileptons from heavy flavor pairs, ii) from partonic scatterings in the QGP phase, and iii) from hadronic interactions in the hadronic (HG) phase.
In this subsection we compare the separate contributions in central Pb+Pb collisions at various energies from {8} to 200 GeV.
\begin{figure} [h!]
\centerline{
\includegraphics[width=8.6 cm]{fig5a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig5b.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig5c.eps}}
\caption{The invariant mass spectra of dileptons from the hadronic sources (HG) (a), the QGP (b), and $D\bar{D}$ pairs (c) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV from the PHSD.} \label{fig5}
\end{figure}
Fig.~\ref{fig5} shows the dielectron mass spectra from hadronic channels (a), from partonic interactions in the QGP (b), and from the semi-leptonic decays of $D\bar{D}$ pairs (c) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39, and 200 GeV at mid-pseudorapidity $|\eta_e|<1$ for the leptons.
We find that the contribution from the hadronic channels increases moderately with collision energy (in line with the hadron abundances), the contribution from the QGP rises more steeply (in line with the enhanced space-time volume of the QGP phase), and that from $D\bar{D}$ pairs increases most dramatically (in line with the number of $c{\bar c}$ pairs, cf. Fig.~\ref{fig1} (b)). Accordingly, the contribution from heavy flavor is small in low-energy collisions, but becomes more and more important with increasing collision energy in competition with the production from the QGP channels.
\begin{figure*}[h!]
\centerline{
\includegraphics[width=8.6 cm]{fig6a.eps}
\includegraphics[width=8.6 cm]{fig6b.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig6c.eps}
\includegraphics[width=8.6 cm]{fig6d.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig6e.eps}}
\caption{The invariant mass spectra of dileptons from partonic interactions (red lines) and from $D\bar{D}$ pairs (green lines) together with total dielectron spectrum including hadronic contributions (blue lines) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV from the PHSD at mid-pseudorapidity for the leptons.}
\label{fig6}
\end{figure*}
In order to show the separate contributions explicitly, we compare in Fig.~\ref{fig6} the contributions from the QGP (red lines) and from $D\bar{D}$ pairs (green lines) with the total dielectron spectrum (blue lines) at different collision energies for central Pb+Pb collisions.
In low-energy collisions the dielectrons from hadronic channels dominate in the low-mass region and those from partonic interactions dominate in the intermediate-mass range while the contribution from $D\bar{D}$ pairs is negligible.
With increasing collision energy the contribution from $D\bar{D}$ pairs becomes more and more significant and comparable to that from partonic interactions at $\sqrt{s_{\rm NN}} \approx$ 39 GeV in the intermediate-mass range.
Finally, it exceeds the partonic contribution at $\sqrt{s_{\rm NN}}$ = 200 GeV (and above).
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{fig7a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig7b.eps}}
\caption{The contributions to intermediate-mass dielectrons (1.2 GeV $< M <$ 3 GeV) from $D\bar{D}$ pairs (green lines), different channels of partonic interactions, $q+\bar{q}\rightarrow e^++e^-$, $q+\bar{q}\rightarrow g+e^++e^-$, $q(\bar{q})+g\rightarrow q(\bar{q})+e^++e^-$ (see legend) as a function of $\sqrt{s_{\rm NN}}$ for Pb+Pb collisions at b=2 fm (for midrapidity leptons). The red solid line displays the sum of the partonic contributions.} \label{fig7}
\end{figure}
Fig.~\ref{fig7} compares the contributions from $D\bar{D}$ pairs (green lines) to three partonic channels, i.e. $q+\bar{q}\rightarrow e^++e^-$, $q+\bar{q}\rightarrow g+e^++e^-$, and $q(\bar{q})+g\rightarrow q(\bar{q})+e^++e^-$, for intermediate mass dileptons (1.2 GeV $< M <$ 3 GeV) as a function of collision energy $\sqrt{s_{\rm NN}}$ for Pb+Pb collisions at b=2 fm. The figure clearly shows that the contribution from partonic interactions, especially from $q+\bar{q}\rightarrow e^++e^-$, dominates the intermediate-mass range in low-energy collisions.
However, the contribution from $D\bar{D}$ pairs rapidly increases with increasing collision energy, because the scattering cross section for charm production grows fast above the threshold energy as shown in Fig.~\ref{fig1} (a).
It exceeds the contribution from partonic interactions around $\sqrt{s_{\rm NN}} \approx$ 40 GeV and dominates at higher energies.
Since the detectors of the different collaborations have different acceptances, we show in Fig.~\ref{fig7} (b) the results without any acceptance cuts, while Fig.~\ref{fig7} (a) shows the results for a mid-pseudorapidity cut on leptons of $|\eta^e|<1$.
However, the contributions from the partonic interactions and from $D\bar{D}$ pairs show a similar behavior in both cases.
One of the most important issues in heavy-ion physics is to find and study the properties of partonic nuclear matter which is created in a small space-time volume in relativistic heavy-ion collisions. To this end one needs observables that are not blurred by hadronic interactions.
Our results in Figs.~\ref{fig6} and \ref{fig7} clearly demonstrate that the window to study partonic matter by dielectrons at intermediate masses without substantial background from heavy flavor decays opens for collision energies $\sqrt{s_{\rm NN}} <$ 40 GeV.
\subsection{Transverse mass spectra at midrapidity}
In this subsection we explore central Pb+Pb collisions at various energies with a focus on the transverse mass spectra of intermediate-mass dileptons at midrapidity. To this aim we show in Fig.~\ref{spec} the Lorentz-invariant transverse mass spectra for central ($b$=2 fm) Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV for dielectrons with an invariant mass between 1.2 GeV and 3 GeV from the QGP (a), from $D$-mesons (b), and from all channels (including especially $D, {\bar D}$ decays) (c). All spectra show an approximately exponential decay (fat solid lines) in the transverse mass $m_T$ for 1.75 GeV $< m_T <$ 2.95 GeV, which can be characterized by an inverse slope parameter $\beta$ that differs for dileptons from open charm and those from the QGP at all bombarding energies.
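The inverse slope parameters are extracted from exponential fits in the window 1.75 GeV $< m_T <$ 2.95 GeV (cf. the fat solid lines in Fig.~\ref{spec}). A minimal fitting sketch is given below; it is purely illustrative (the spectrum is synthetic, and the ansatz $dN/(m_T\,dm_T)\propto\exp(-m_T/\beta)$ as well as all function names are our assumptions, not the PHSD analysis code):
\begin{verbatim}
# Illustrative sketch only (synthetic data, not the PHSD analysis code):
# extract an inverse slope parameter beta from a transverse-mass spectrum
# by an exponential fit in the window 1.75 GeV < m_T < 2.95 GeV.
import numpy as np
from scipy.optimize import curve_fit

def exp_spectrum(m_T, norm, beta):
    # exponential ansatz dN/(m_T dm_T) = norm * exp(-m_T/beta)
    return norm * np.exp(-m_T / beta)

rng = np.random.default_rng(0)
m_T = np.arange(1.25, 3.05, 0.1)                  # transverse mass grid in GeV
y = exp_spectrum(m_T, 1.0, 0.30) * rng.normal(1.0, 0.05, m_T.size)

window = (m_T > 1.75) & (m_T < 2.95)              # fit window used in Fig. 8
popt, _ = curve_fit(exp_spectrum, m_T[window], y[window], p0=(1.0, 0.25))
print("fitted inverse slope beta = %.3f GeV" % popt[1])
\end{verbatim}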
\begin{figure} [t!]
\centerline{
\includegraphics[width=8.6 cm]{fig8a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig8b.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig8c.eps}}
\caption{The transverse mass spectra of dileptons with the invariant mass between 1.2 and 3 GeV from the QGP (a), $D\bar{D}$ pairs (b), and all sources (c) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV from the PHSD. The fat solid lines show exponential fits to the PHSD results in the transverse mass range $[1.75, 2.95]$ GeV.} \label{spec}
\end{figure}
The excitation function of the inverse slope parameter $\beta$ is shown in Fig.~\ref{temp} for the three cases of Fig.~\ref{spec}, i.e. dileptons with an invariant mass between 1.2 and 3 GeV from the QGP (red line with dots), $D\bar{D}$ pairs (green line with squares), and all dilepton sources (blue line with triangles) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV. We find that the inverse slope parameter of the QGP contribution (red line with dots) is larger than that of the $D$-decay contribution (green line with squares) at all energies and almost identical to the inverse slope of the total dilepton spectra (blue line with triangles) in the transverse mass range $[1.75, 2.95]$ GeV at SPS energies. Since the contribution from the $D$-decays increases with bombarding energy, a small wiggle in $\sqrt{s_{\rm NN}}$ can be found in the inverse slope of the total dilepton spectra (blue line with triangles) in the lower RHIC energy regime. This wiggle should be seen in experiment provided that high-statistics data become available for intermediate-mass dileptons.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{fig9.eps}}
\caption{The inverse slope parameters of intermediate-mass dielectrons from the QGP (red line with dots), $D\bar{D}$ pairs (green line with squares), and all sources (blue line with triangles) in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 8, 11.5, 17.3, 39 and 200 GeV from the PHSD.}
\label{temp}
\end{figure}
\section{PHSD versus experimental data and predictions for the top LHC energy}
\subsection{Au+Au and Pb+Pb collisions from 19.6 GeV to 2.76 TeV}
In this section, we compare the invariant mass spectra of dielectrons from the PHSD to the experimental data in Au+Au collisions from $\sqrt{s_{\rm NN}}$ = 19.6 to 200 GeV from the STAR collaboration and those in Pb+Pb collisions from the ALICE collaboration at $\sqrt{s_{\rm NN}}$ = 2.76 TeV.
We note that the experimental data from the STAR collaboration and those from the ALICE collaboration have different centralities and different acceptance cuts.
The STAR data are obtained for minimum-bias Au+Au collisions and for electrons and positrons with transverse momenta $p_T \geq$ 0.2 GeV and pseudo-rapidities $|\eta^e| < $ 1.0.
On the other hand, the ALICE data are obtained for 0-10 \% central Pb+Pb collisions and for electrons and positrons with transverse momenta $p_T \geq$ 0.4 GeV and pseudo-rapidities $|\eta^e| < $ 0.8.
The sensitivity of the invariant mass spectra of dielectrons to the cross section for charm production and cuts in $p_T$ and pseudo-rapidity $\eta^e$ is discussed in more detail in Appendix B.
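For completeness we sketch how such single-lepton acceptance cuts act on a generated lepton sample; the snippet is purely illustrative (the lepton representation and the helper functions are our own, not part of the PHSD code), using the cut values quoted above:
\begin{verbatim}
# Illustrative sketch only: select opposite-charge dielectron pairs with both
# legs inside a given single-lepton acceptance (pT >= pt_min, |eta| < eta_max).
def in_acceptance(lepton, pt_min, eta_max):
    pt, eta, charge = lepton          # lepton = (pT [GeV], eta, charge)
    return pt >= pt_min and abs(eta) < eta_max

def accepted_pairs(leptons, pt_min, eta_max):
    inside = [l for l in leptons if in_acceptance(l, pt_min, eta_max)]
    return [(a, b) for i, a in enumerate(inside) for b in inside[i + 1:]
            if a[2] * b[2] < 0]

leptons = [(0.50, 0.3, +1), (0.25, -0.9, -1), (0.45, 0.7, -1)]
print(len(accepted_pairs(leptons, 0.2, 1.0)))   # STAR-like:  pT >= 0.2, |eta| < 1.0
print(len(accepted_pairs(leptons, 0.4, 0.8)))   # ALICE-like: pT >= 0.4, |eta| < 0.8
\end{verbatim}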
\begin{figure*}[h]
\centerline{
\includegraphics[width=8.6 cm]{fig10a.eps}
\includegraphics[width=8.6 cm]{fig10b.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig10c.eps}
\includegraphics[width=8.6 cm]{fig10d.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig10e.eps}
\includegraphics[width=8.6 cm]{fig10f.eps}}
\caption{The invariant mass spectra of dielectrons from the PHSD in comparison to the STAR data in Au+Au collisions from $\sqrt{s_{\rm NN}}$ = 19.6 to 200 GeV~\cite{Huck:2014mfa,Adamczyk:2015lme} and to the ALICE data in Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV~\cite{Gunji:2017kot}. The total yield is displayed in terms of the blue lines while the different contributions are specified in the legends. Note that the contribution from $J/\Psi$ and $\Psi^\prime$ decays are not included in the PHSD calculations.}
\label{fig8}
\end{figure*}
The first five panels of Fig.~\ref{fig8} show the invariant mass spectra of dielectrons from the Beam-Energy-Scan (BES) at $\sqrt{s_{\rm NN}}$ = 19.6, 27, 39, and 62.4 GeV and from the top RHIC energy.
As discussed in the previous subsection, the contribution from hadrons is dominant in the low-mass region and signals a broadening of the $\rho$ meson spectral function in dense nuclear matter (cf. Ref.~\cite{PHSDreview}).
On the other hand, the intermediate-mass range is dominated by dielectrons from partonic interactions and from heavy flavor decays.
Similar to the Pb+Pb collisions in Fig.~\ref{fig6}, the contribution from heavy flavor becomes more and more important with increasing collision energy.
The contributions from heavy flavors and from partonic interactions cross around invariant masses of $M\approx$ 1 GeV in Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 19.6 GeV.
However, the crossing point shifts to 1.6 GeV at $\sqrt{s_{\rm NN}}$ = 27 GeV and to $\sim$2.0 GeV at $\sqrt{s_{\rm NN}}$ = 39 and 62.4 GeV.
At the top RHIC energy they cross at $\sim$2.4 GeV.
The last panel of Fig.~\ref{fig8} is the invariant mass spectrum of dielectrons in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV.
As in Au+Au collisions at the RHIC energies, the low-mass range is dominated by the dielectrons from hadronic channels and the intermediate-mass region by those from partonic interactions and heavy flavor decays.
However, the crossing point of the contribution from partonic interactions and that from heavy flavor is lower than at the top RHIC energy, which is due to a couple of effects:
i) the cross section for charm production no longer increases rapidly at the LHC energies, as shown in Fig.~\ref{fig1} (a). This is also seen in Fig.~\ref{fig1} (b), which shows the number of produced charm pairs as a function of the collision energy. As a result, the number of produced charm pairs does not grow faster than the yield of dielectrons from partonic interactions.
Additionally the shadowing effect, which is the modification of the parton distribution function in nuclei~\cite{Eskola:2009uj}, considerably suppresses charm production at the LHC energies~\cite{Song:2015ykw} (see below).
ii) Another reason is the stronger suppression of the charm four-momentum by partonic scattering at the LHC energies. As already discussed in the context of Fig.~\ref{fig4}, the strong interaction of heavy flavor with the medium reduces the invariant mass of dielectrons. Since the interaction is stronger at the LHC energies, we can expect a larger suppression of the dielectron spectrum at larger invariant masses.
iii) Furthermore, at the LHC energies the contribution from semileptonic $B\bar{B}$ decays becomes important.
Comparing the lower two panels of Fig.~\ref{fig8}, the contribution from $B\bar{B}$ decays is found to be larger than that from $D\bar{D}$ decays above $M \approx$ 2.2 GeV in Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV, while the contribution from $B\bar{B}$ decays is larger only above $M \approx$ 2.8 GeV in Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 200 GeV. Since the contribution from $B\bar{B}$ decays amounts to about 50\% of the contribution from partonic interactions at the LHC energies, it will distort the information on partonic matter in the intermediate-mass range of the dielectron spectrum.
Besides the interesting points mentioned above, we close this subsection with the comment that the dilepton invariant mass spectra from the PHSD describe reasonably well the available experimental data for collision energies from 19.6 GeV to 2.76 TeV although the experimental data at $\sqrt{s_{\rm NN}}$ = 2.76 TeV are available only for invariant masses $M \leq$ 1 GeV.
\subsection{Predictions for central Pb+Pb {collisions} at $\sqrt{s_{\rm NN}}$= 5.02 TeV}
Based on the successful description of experimental data from the Beam Energy Scan for $\sqrt{s_{\rm NN}}$= 19.6 GeV to the LHC energy at $\sqrt{s_{\rm NN}}$= 2.76 TeV, we here provide predictions for dielectron production in central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV.
As mentioned above, a proper description of heavy flavor production and interactions in heavy-ion collisions is necessary to allow for reliable predictions.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{fig11a.eps}}
\centerline{
\includegraphics[width=8.6 cm]{fig11b.eps}}
\caption{The $R_{AA}(p_T)$ (a) and the elliptic flow $v_2(p_T)$ (b) for $D$ meson from 0-10 \% central Pb+Pb
collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (from PHSD) as a function of the
transverse momentum with (solid line) and without shadowing effects
(dashed line). Experimental data are from the CMS collaboration~\cite{Sirunyan:2017xss,Sirunyan:2017plt}.} \label{fig9}
\end{figure}
Fig.~\ref{fig9} shows the $R_{AA}$ (a) and the elliptic flow $v_2$ (b) of $D$ mesons as functions of transverse momentum in 0-10 \% central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV.
In both panels the dashed lines are the results without the shadowing effect and the solid lines with EPS09 shadowing \cite{Eskola:2009uj} included.
The upper panel shows that shadowing reduces the $R_{AA}$ considerably at low transverse momentum, which can be explained as follows:
If the collision energy is very large, charm quark pairs with small transverse momentum are dominantly produced by partons with a small energy-momentum fraction $x$ of the nucleon.
On the other hand, the parton distribution function of a nucleon in a heavy nucleus is considerably suppressed at small $x$ in such high-energy collisions~\cite{Eskola:2009uj}, which leads to a suppression of charm production at low transverse momentum.
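A rough leading-order estimate (for orientation only, assuming a charm quark mass of $m_c\approx1.5$ GeV) illustrates the relevant $x$ range: a $c\bar{c}$ pair produced close to threshold at midrapidity probes
\begin{align*}
x_{1,2}\simeq\frac{\sqrt{\hat s}}{\sqrt{s_{\rm NN}}}\gtrsim\frac{2m_c}{\sqrt{s_{\rm NN}}}\approx\frac{2\times1.5~{\rm GeV}}{5020~{\rm GeV}}\approx 6\times10^{-4},
\end{align*}
i.e. low-$p_T$ charm production at the LHC is sensitive to the strongly shadowed region of small $x$, whereas the corresponding $x$ at $\sqrt{s_{\rm NN}}$ = 200 GeV is larger by a factor of about 25.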
Fig.~\ref{fig9} (a) clearly shows that the shadowing effect is necessary to explain the experimental data from the CMS collaboration.
We note that the PHSD results are presently available only up to $p_T=$ 20 GeV/c due to the limited statistics and huge CPU time required. In case of the open charm elliptic flow $v_2(p_T)$ the statistics do not allow for robust results for $p_T >$ 6 GeV/c. On the other hand, the shadowing effect is seen to have no substantial effect on the elliptic flow of $D$ mesons up to $p_T \approx$ 6 GeV/c since shadowing changes the production of charm from initial hard collisions but does not change the interactions of the produced charm in the partonic medium.
Fig.~\ref{fig9} demonstrates that both the $R_{AA}$ and the elliptic flow $v_2$ of $D$ mesons are approximately described at $\sqrt{s_{\rm NN}}$= 5.02 TeV by the PHSD.
Although the $v_2$ of $D$ mesons is slightly underestimated, this will have practically no effect on the dielectron spectrum.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{fig12.eps}}
\caption{The invariant mass spectra of dielectrons for 0-10 \%
central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV from the PHSD for $|p_T^e| > $ 0.4 GeV and $|\eta^e| <$ 0.8. }
\label{fig10}
\end{figure}
Fig.~\ref{fig10} shows the prediction from PHSD for the inva\-riant mass spectra of dielectrons in 0-10 \% central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
within the acceptance $(p_T>0.4 {\rm GeV},~ |\eta^e|<0.8)$ as in Fig.~\ref{fig8} (f).
Comparing with the results at 2.76 TeV we find no dramatic change in the shape of the spectrum except for an overall enhancement of the dielectron yield.
The yields of dielectrons from hadronic channels, from partonic interactions, and from heavy flavor decays are, respectively, enhanced by
55 \%, 54 \%, and 36 \% at $\sqrt{s_{\rm NN}}$ = 5.02 TeV.
We note that the dielectron yield from hadronic channels and that from partonic interactions increase by a similar amount, suggesting that both dielectron yields stem from the bulk matter, whereas the
dielectron yield from heavy flavor decays is less enhanced due to the smaller increase of the charm production cross section.
\section{Summary}\label{summary}
We have studied correlated electron ($e^+ e^-$) production through
the semileptonic decay of charm hadrons in relativistic heavy-ion
collisions from $\sqrt{s_{\rm NN}}=$ 8 GeV to 5 TeV within the PHSD
transport approach in extension of our work on $D-$meson production
in relativistic heavy-ion collisions at RHIC and LHC
energies~\cite{Song:2015sfa,Song:2015ykw,Song2016} and low mass dilepton
production from SIS to RHIC energies \cite{PHSDreview}.
In the PHSD the charm partons - produced by the initial hard
nucleon-nucleon scattering - interact with the massive quarks and
gluons in the QGP by using the scattering cross sections calculated
in the Dynamical Quasi-Particle Model (DQPM) which reproduces
heavy-quark diffusion coefficients from lattice QCD calculations at
temperatures above the deconfinement transition. When approaching
the critical energy density for the phase transition from above, the
charm (anti)quarks are hadronized into $D-$mesons through the
coalescence with light (anti)quarks. Those heavy quarks which have not
hadronized by coalescence before the local energy density drops below 0.4 $\rm
GeV/fm^3$ hadronize by fragmentation as in p+p collisions. The
hadronized $D-$mesons then interact with light hadrons in the
hadronic phase with cross sections that have been calculated in an
effective Lagrangian approach with heavy-quark spin symmetry.
Finally, after freeze-out of the $D-$mesons they produce single
electrons through semileptonic decays with the branching ratios
given by the PYTHIA event generator.
The dilepton production from hadronic and
partonic channels in central Pb+Pb (or Au+Au) collisions has been calculated in the
PHSD, including for the first time also the contribution from the semileptonic
decays of heavy flavors on a fully microscopic level.
We recall that also the cross sections for dilepton production have been calculated by employing the same propagators and couplings as incorporated in the partonic dynamics in PHSD (cf. Appendix A).
We find that even in central Pb+Pb
collisions at $\sqrt{s_{\rm NN}}=$ {8} to 20 GeV the contribution from
$D,{\bar D}$ mesons to the intermediate mass dilepton spectra is
subleading and one should have a rather clear signal from the QGP
radiation whereas at the top RHIC energy this contribution
exceeds the yield of intermediate-mass dileptons from the QGP.
{It is interesting to note that the dielectrons from $D,{\bar D}$ mesons do not increase any more relative to partonic interactions at the LHC energies for a couple of reasons:
i) the cross section for charm production does not grow as fast as at low energies;
ii) the shadowing effects, which suppress charm production at low transverse momentum, are stronger at LHC than at RHIC energies (cf. Fig. 11);
iii) the charm quark pair loses more four-momentum in the partonic medium produced at the LHC, which suppresses the invariant mass of the dielectrons from the semileptonic decays.
Furthermore, the contribution from $B,{\bar B}$ meson decays becomes more important and
supersedes the contribution from $D,{\bar D}$ meson decays above $M=$ 2.2$\sim$2.3 GeV at the LHC energies and amounts to about half the contribution from partonic interactions.
All these effects strongly distort the information about partonic matter from intermediate-mass dielectrons at the LHC energies.} The dilepton spectra at lower masses ($0.2 ~{\rm GeV} \leq M \leq
0.7 ~{\rm GeV})$ at SPS, FAIR/NICA and BES RHIC energies show some sensitivity to the medium
modification of the $\rho$ meson where the data favor an in-medium
broadening as pointed out in the earlier studies on dilepton production reviewed in Refs. \cite{ChSym,PHSDreview}.
Additionally, we have explored the transverse mass spectra of dileptons in the invariant mass range from 1.2 GeV to 3 GeV in central Pb+Pb collisions for $\sqrt{s_{\rm NN}}$ = 8 to 200 GeV and find approximately exponential spectra in the transverse-mass range $[1.75, 2.95]$ GeV (cf. Fig. 8). Since the inverse slope parameters of the QGP contribution are higher than those of the $D$-decay contribution, we expect a wiggle in the excitation function of the inverse slope parameter for these intermediate-mass dileptons (cf. Fig. 9) which should be seen experimentally in high-statistics data.
In general the PHSD calculations compare well with the available dilepton data from the BES program at RHIC as well as the LHC energy of $\sqrt{s_{\rm NN}}$ = 2.76 TeV where, unfortunately, only low mass dilepton data are available so far.
Explicit predictions for central Pb+Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV have been provided (cf. Fig. 12),
however, the partonic contribution in the intermediate mass range has a large background from $D, {\bar D}$ as
well as $B, {\bar B}$ correlated semi-leptonic decays. As noted above, this background - in the intermediate mass range - is by far subleading at lower SPS and FAIR/NICA energies which provides promising perspectives for the future dilepton measurements at these facilities and allows for a fresh look at the electromagnetic radiation from the QGP
at finite baryon chemical potential.
\section*{Acknowledgements}
The authors acknowledge inspiring discussions with
J. Butterworth, F. Geurts and O. Linnyk.
This work was supported by the LOEWE center ``HIC for FAIR'', the HGS-HIRe for FAIR and
the COST Action THOR, CA15213.
Furthermore, PM and EB acknowledge support by DFG through the grant CRC-TR 211 ``Strong-interaction matter under extreme conditions''.
The computational resources have been provided by the LOEWE-CSC.
\section{Introduction}
Let $P\in X$ be the germ of a smooth variety and $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$. We write $\mld_P(X,\mathfrak{a})$ for the minimal log discrepancy of the pair $(X,\mathfrak{a})$ at $P$. For a subset $I$ of the positive real numbers, we mean by $\mathfrak{a}\in I$ that the exponents $r_j$ in $\mathfrak{a}$ belong to $I$. ACC stands for the ascending chain condition while DCC stands for the descending chain condition. This paper discusses the ACC conjecture for minimal log discrepancies on smooth threefolds, which was conjectured by Shokurov \cite{BS10}, \cite{Sh88} for arbitrary lc pairs.
\begin{conjectureAlph}\label{cnj:acc}
Fix a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{\mld_P(X,\mathfrak{a})\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I\}
\end{align*}
satisfies the ACC.
\end{conjectureAlph}
We approach it with the theory of the generic limit of ideals introduced by de Fernex and Musta\c{t}\u{a} \cite{dFM09}. Our earlier work \cite{K14} shows the finiteness of the set of those $\mld_P(X,\mathfrak{a})$ for which the germ $P\in X$ of a klt variety and the exponents in $\mathfrak{a}$ are fixed. Instead, if the exponents in $\mathfrak{a}$ move in an infinite set satisfying the DCC, then we require the stability of minimal log discrepancies for generic limits. This stability connects Conjecture \ref{cnj:acc} equivalently to the following important conjectures, as was essentially indicated by Musta\c{t}\u{a} and Nakamura \cite{MN16}.
The first is the ACC for $a$-lc thresholds, a generalisation of lc thresholds.
\begin{conjectureAlph}\label{cnj:alc}
Fix a non-negative real number $a$ and a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{t\in\mathbf{R}_{\ge0}\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$, $\mathfrak{b}$ $\mathbf{R}$-ideals},\ \mld_P(X,\mathfrak{a}\mathfrak{b}^t)=a,\ \mathfrak{a}\mathfrak{b}\in I\}
\end{align*}
satisfies the ACC.
\end{conjectureAlph}
The second is a uniform version of the $\mathfrak{m}$-adic semi-continuity, which was proposed originally by Musta\c{t}\u{a}.
\begin{conjectureAlph}\label{cnj:madic}
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $I$ such that if $P\in X$ is the germ of a smooth threefold and if $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ and $\mathfrak{b}=\prod_{j=1}^e\mathfrak{b}_j^{r_j}$ are $\mathbf{R}$-ideals on $X$ satisfying that $r_j\in I$ and $\mathfrak{a}_j+\mathfrak{m}^l=\mathfrak{b}_j+\mathfrak{m}^l$ for any $j$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, then $\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$.
\end{conjectureAlph}
The last is the boundedness of the log discrepancy of some divisor which computes the minimal log discrepancy, proposed by Nakamura.
\begin{conjectureAlph}\label{cnj:nakamura}
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $I$ such that if $P\in X$ is the germ of a smooth threefold and $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\end{conjectureAlph}
The first main result of this paper is to reduce these conjectures to the case when the boundary is the product of a canonical part and the maximal ideal to some power.
\begin{theorem}\label{thm:first}
Conjectures \textup{\ref{cnj:acc}}, \textup{\ref{cnj:alc}}, \textup{\ref{cnj:madic}} and \textup{\ref{cnj:nakamura}} are equivalent to Conjecture \textup{\ref{cnj:product}}.
\end{theorem}
\begin{conjecture}\label{cnj:product}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$ and a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{conjecture}
Our earlier work derives this boundedness when $(X,\mathfrak{a}^q)$ is terminal or $s$ is zero.
\begin{theorem}\label{thm:terminal}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$.
\begin{enumerate}
\item\label{itm:terminal}
Fix a positive rational number $q$ and a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is terminal, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:zero}
Fix a positive rational number $q$. Then there exists a positive integer $l$ depending only on $q$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
The second main result is to prove Conjecture \ref{cnj:product} when the lc threshold of the maximal ideal is either at most one-half or at least one.
\begin{theorem}\label{thm:second}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$ and a non-negative rational number $s$.
\begin{enumerate}
\item\label{itm:half}
There exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is not positive, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:one}
There exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and that $(X,\mathfrak{a}^q\mathfrak{m})$ is lc, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
Once Theorem \ref{thm:second}(\ref{itm:half}) is established, it is relatively simple to obtain Conjecture \ref{cnj:product} when $s$ is close to zero in terms of a scale determined by $q$.
\begin{corollary}\label{crl:main}
Conjecture \textup{\ref{cnj:product}} holds when $s$ is at most $1/n$ for some integer $n$ greater than one such that $nq$ is integral.
\end{corollary}
We shall explain the outline of our research. Fix the germ $P\in X$ of a smooth threefold and a positive rational number $q$. For a sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of ideals on $X$, its generic limit $\mathsf{a}$ is defined on the spectrum $\hat P\in \hat X$ of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$, where $K$ is an extension of the ground field $k$. The stability of minimal log discrepancies means the equality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a}^q)=\mld_P(X,\mathfrak{a}_i^q)
\end{align*}
for infinitely many $i$, to which any of Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} is equivalent (Theorem \ref{thm:equiv}). The strategy employed in this paper is to pursue Conjecture \ref{cnj:nakamura}.
Our previous work \cite{K15} derived the above stability except for the case when $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, which implies that $\mld_{\hat P}(\hat X,\mathsf{a}^q)$ is at most one. By this result, in order to prove Conjecture \ref{cnj:nakamura}, one has only to consider those ideals $\mathfrak{a}$ which have $\mld_P(X,\mathfrak{a}^q)$ less than one. We begin with the ACC for $1$-lc thresholds \cite{St11}. Using it together with the classification of divisorial contractions \cite{K01}, \cite{Km96}, we construct a birational morphism $Y\to X$ with bounded log discrepancies by which $(X,\mathfrak{a})$ can be replaced with a pair $(Y,(\mathfrak{a}')^q\mathfrak{b}^q)$ satisfying that $(Y,(\mathfrak{a}')^q)$ is canonical and that $\mathfrak{b}$ has bounded colength (Theorem \ref{thm:canonical}).
We study the generic limit $\mathsf{a}$ of a sequence of ideals $\mathfrak{a}_i$ on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical. We may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre $\hat C$ of dimension one. Then the $\mld_{\hat P}(\hat X,\mathsf{a}^q)$ equals one by the canonicity of $(X,\mathfrak{a}_i^q)$, and so does the $\mld_P(X,\mathfrak{a}_i^q)$. By our result \cite{K17} in dimension two, there exists a divisor $\hat E$ over $\hat X$ computing $\mld_{\eta_{\hat C}}(\hat X,\mathsf{a}^q)=0$ which is obtained at the generic point $\eta_{\hat C}$ of $\hat C$ by a weighted blow-up. Under our extra condition that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$, we find $\hat E$ for which the weighted blow-up at $\eta_{\hat C}$ is extended to the closed point $\hat P$ (Theorem \ref{thm:wbu}).
We associate the minimal log discrepancy on $\hat X$ with that on $\hat E$ by precise inversion of adjunction (Section \ref{sct:reduction}). The generic limit $\mathsf{b}$ of a sequence of ideals of bounded colength satisfies that $\mathsf{b}\mathscr{O}_{\hat C}=\hat\mathfrak{m}^b\mathscr{O}_{\hat C}$ for some integer $b$, where $\hat\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_{\hat X}$. Then Conjecture \ref{cnj:nakamura} is reduced to the case when $\mathfrak{b}$ is the maximal ideal $\mathfrak{m}$ to the power of $b$, which completes Theorem \ref{thm:first}.
Suppose that the lc threshold of $\mathfrak{m}$ with respect to $(X,\mathfrak{a}^q)$ is at most one-half. Under our assumptions on the generic limit $\mathsf{a}$ involved, we derive a special conclusion that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)=1-2s$, which includes the boundedness stated in Theorem \ref{thm:second}(\ref{itm:half}). A similar argument is applied to the case of lc threshold at least one, Theorem \ref{thm:second}(\ref{itm:one}).
Conjecture \ref{cnj:product} remains open when $\mld_P(X,\mathfrak{a}^q)$ equals one and $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. In this case, every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^q)$ satisfies that $\ord_E\mathfrak{m}$ equals one. We supply a classification of the centre of $E$ on a certain weighted blow-up of $X$ (Theorem \ref{thm:crepant}).
\section{Preliminaries}
We shall fix the notation and review the basics of singularities in birational geometry. Refer to \cite{Ko13} for details.
We work over an algebraically closed field $k$ of characteristic zero. We omit to write the bases of tensor products over $k$ and of products over $\Spec k$ when it is clear. A \textit{variety} is an integral separated scheme of finite type over $\Spec k$. The \textit{dimension} of a scheme means the Krull dimension. A variety of dimension one (resp.\ two) is called a \textit{curve} (resp.\ a \textit{surface}).
The \textit{germ} is considered at a closed point algebraically. When we work on the spectrum of a noetherian ring, we identify an ideal in the ring with its coherent ideal sheaf. For an irreducible closed subset $Z$ of a scheme, we write $\eta_Z$ for the generic point of $Z$.
The \textit{round-down} $\rd{r}$ of a real number $r$ is the greatest integer at most $r$. The \textit{natural numbers} start from zero.
\medskip
\textit{Orders}.
Let $X$ be a noetherian scheme and $Z$ be an irreducible closed subset of $X$. The \textit{order} of a coherent ideal sheaf $\mathfrak{a}$ on $X$ along $Z$ is the maximal $\nu\in\mathbf{N}\cup\{+\infty\}$ satisfying that $\mathfrak{a}\mathscr{O}_{X,\eta_Z}\subset\mathscr{I}^\nu\mathscr{O}_{X,\eta_Z}$ for the ideal sheaf $\mathscr{I}$ of $Z$, and it is denoted by $\ord_Z\mathfrak{a}$. If $Y\to X$ is a birational morphism from a noetherian normal scheme, then we set $\ord_E\mathfrak{a}=\ord_E\mathfrak{a}\mathscr{O}_Y$ for a prime divisor $E$ on $Y$. The $\ord_Zf$ for a function $f$ in $\mathscr{O}_X$ stands for $\ord_Z(f\mathscr{O}_X)$.
Suppose that $X$ is normal. For an effective $\mathbf{Q}$-Cartier divisor $D$ on $X$, we set $\ord_ZD=r^{-1}\ord_Z\mathscr{O}_X(-rD)$ for a positive integer $r$ such that $rD$ is Cartier, which is independent of the choice of $r$. The notion of $\ord_ZD$ is extended to $\mathbf{R}$-Cartier $\mathbf{R}$-divisors by linearity.
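As a simple illustration (not needed later), let $X=\mathbf{A}^2$ with coordinates $x_1,x_2$ and $\mathfrak{a}=(x_1^2,x_1x_2^3)$. Then $\ord_P\mathfrak{a}=2$ at the origin $P$, as both generators lie in the square of the maximal ideal at $P$ while $x_1^2$ does not lie in its cube, whereas $\ord_Z\mathfrak{a}=1$ along the line $Z=(x_1=0)$, since $x_2$ is invertible at $\eta_Z$ and hence $\mathfrak{a}\mathscr{O}_{X,\eta_Z}=x_1\mathscr{O}_{X,\eta_Z}$.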
\medskip
\textit{$\mathbf{R}$-ideals}.
An $\mathbf{R}$-\textit{ideal} on a noetherian scheme $X$ is a formal product $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ of finitely many coherent ideal sheaves $\mathfrak{a}_j$ on $X$ with positive real exponents $r_j$. For a positive real number $t$, the $\mathfrak{a}$ to the \textit{power} of $t$ is $\mathfrak{a}^t=\prod_j\mathfrak{a}_j^{tr_j}$. The \textit{cosupport} of $\mathfrak{a}$ is the union of the supports of $\mathscr{O}_X/\mathfrak{a}_j$ for all $j$. The \textit{order} of $\mathfrak{a}$ along an irreducible closed subset $Z$ of $X$ is $\ord_Z\mathfrak{a}=\sum_jr_j\ord_Z\mathfrak{a}_j$. The $\mathfrak{a}$ is said to be \textit{invertible} if all $\mathfrak{a}_j$ are invertible. The \textit{pull-back} of $\mathfrak{a}$ by a morphism $Y\to X$ is $\mathfrak{a}\mathscr{O}_Y=\prod_j(\mathfrak{a}_j\mathscr{O}_Y)^{r_j}$. For a subset $I$ of the positive real numbers, we mean by $\mathfrak{a}\in I$ that all exponents $r_j$ belong to $I$.
If $\mathfrak{a}$ is invertible, then the $\mathbf{R}$-divisor $A=\sum_jr_jA_j$ for which $\mathfrak{a}_j=\mathscr{O}_X(-A_j)$ is called the $\mathbf{R}$-divisor \textit{defined by} $\mathfrak{a}$. When we work on the germ $P\in X$, the $\mathbf{R}$-divisor \textit{defined by a general member in} $\mathfrak{a}$ means an $\mathbf{R}$-divisor $\sum_jr_j(f_j)$ on $X$ with a general member $f_j$ in $\mathfrak{a}_j$. The $\mathfrak{a}$ is said to be \textit{$\mathfrak{m}$-primary} if all $\mathfrak{a}_j$ are $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$.
\begin{convention}
It is sometimes convenient to allow an exponent in an $\mathbf{R}$-ideal to be zero. We define a coherent ideal sheaf to the power of zero as the structure sheaf.
\end{convention}
\medskip
\textit{The minimal log discrepancy}.
A \textit{subtriple} $(X,\Delta,\mathfrak{a})$ consists of a normal variety $X$, an $\mathbf{R}$-divisor $\Delta$ on $X$ such that $K_X+\Delta$ is $\mathbf{R}$-Cartier, and an $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$. The $(X,\Delta,\mathfrak{a})$ is called a \textit{triple} if $\Delta$ is effective. We omit to write $\mathfrak{a}$ or $\Delta$ and call $(X,\Delta)$ or $(X,\mathfrak{a})$ a (\textit{sub})\textit{pair} when $\mathfrak{a}=\mathscr{O}_X$ or $\Delta=0$. The $\Delta$ or $\mathfrak{a}$ is called the \textit{boundary} when $(X,\Delta)$ or $(X,\mathfrak{a})$ is a pair.
A prime divisor $E$ on a normal variety $Y$ equipped with a birational morphism $\pi\colon Y\to X$ is called a divisor \textit{over} $X$, and the closure of the image $\pi(E)$ is called the \textit{centre} of $E$ on $X$ and denoted by $c_X(E)$. We write $\mathcal{D}_X$ for the set of all divisors over $X$. Two elements in $\mathcal{D}_X$ are usually identified when they define the same valuation on the function field of $X$. The \textit{log discrepancy} of $E$ with respect to $(X,\Delta,\mathfrak{a})$ is
\begin{align*}
a_E(X,\Delta,\mathfrak{a})=1+\ord_EK_{Y/(X,\Delta)}-\ord_E\mathfrak{a},
\end{align*}
where $K_{Y/(X,\Delta)}=K_Y-\pi^*(K_X+\Delta)$.
Let $Z$ be a closed subvariety of $X$. The \textit{minimal log discrepancy} of $(X,\Delta,\mathfrak{a})$ at the generic point $\eta_Z$ is
\begin{align*}
\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=\inf\{a_E(X,\Delta,\mathfrak{a})\mid E\in\mathcal{D}_X,\ c_X(E)=Z\}.
\end{align*}
It is either a non-negative real number or minus infinity. We say that $E\in\mathcal{D}_X$ \textit{computes} $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})$ if $c_X(E)=Z$ and $a_E(X,\Delta,\mathfrak{a})=\mld_{\eta_Z}(X,\Delta,\mathfrak{a})$ (or is negative when $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=-\infty$). It is often enough to study the case when $Z$ is a closed point by the relation $\mld_{\eta_Z}(X,\Delta,\mathfrak{a})=\mld_P(X,\Delta,\mathfrak{a})-\dim Z$ for a general closed point $P$ in $Z$. It is sometimes convenient to use the \textit{minimal log discrepancy} of $(X,\Delta,\mathfrak{a})$ in a closed subset $W$ of $X$ which is defined by
\begin{align*}
\mld_W(X,\Delta,\mathfrak{a})=\inf\{a_E(X,\Delta,\mathfrak{a})\mid E\in\mathcal{D}_X,\ c_X(E)\subset W\}.
\end{align*}
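As a simple illustration (standard and not used later), let $P\in X=\mathbf{A}^d$ be the germ of the affine space, $\mathfrak{m}$ the maximal ideal defining $P$ and $0\le t\le d$. The exceptional divisor $E$ of the ordinary blow-up at $P$ satisfies
\begin{align*}
a_E(X,\mathfrak{m}^t)=1+(d-1)-t=d-t,
\end{align*}
and $E$ in fact computes $\mld_P(X,\mathfrak{m}^t)=d-t$: for a monomial valuation with weights $w_1,\ldots,w_d\ge1$ one has $\sum_iw_i-t\min_iw_i\ge(\min_iw_i)(d-t)\ge d-t$, and monomial valuations suffice here since $\mathfrak{m}$ admits a toric log resolution. In particular $\mld_P(X)=d$ for the smooth germ.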
\medskip
\textit{Singularities}.
The subtriple $(X,\Delta,\mathfrak{a})$ is said to be \textit{log canonical} (\textit{lc}) (resp.\ \textit{Kawamata log terminal} (\textit{klt})) if $a_E(X,\Delta,\mathfrak{a})\ge0$ (resp.\ $>0$) for all $E\in\mathcal{D}_X$. It is said to be \textit{purely log terminal} (\textit{plt}) (resp.\ \textit{canonical}, \textit{terminal}) if $a_E(X,\Delta,\mathfrak{a})>0$ (resp.\ $\ge1$, $>1$) for all $E\in\mathcal{D}_X$ exceptional over $X$. For a closed point $P$ in $X$, $(X,\Delta,\mathfrak{a})$ is lc about $P$ iff $\mld_P(X,\Delta,\mathfrak{a})$ is not minus infinity. When $(X,\Delta,\mathfrak{a})$ is lc, the \textit{lc threshold} with respect to $(X,\Delta,\mathfrak{a})$ of a non-trivial $\mathbf{R}$-ideal $\mathfrak{b}$ on $X$ is the maximal real number $t$ such that $(X,\Delta,\mathfrak{a}\mathfrak{b}^t)$ is lc.
Let $Y$ be a normal variety birational to $X$. A centre $c_Y(E)$ on $Y$ of $E\in\mathcal{D}_Y$ such that $a_E(X,\Delta,\mathfrak{a})\le0$ is called a \textit{non-klt centre} on $Y$ of $(X,\Delta,\mathfrak{a})$. The union of all non-klt centres on $Y$ is called the \textit{non-klt locus} on $Y$ of $(X,\Delta,\mathfrak{a})$. When we just say a non-klt centre or the non-klt locus, we mean that it is on $X$.
When $(X,\Delta,\mathfrak{a})$ is lc, a non-klt centre of $(X,\Delta,\mathfrak{a})$ is often called an \textit{lc centre}. When we work on the germ of a variety, an lc centre contained in every lc centre is called the \textit{smallest lc centre}. The smallest lc centre exists and it is normal \cite[Theorem 9.1]{F11}.
The \textit{index} of a normal $\mathbf{Q}$-Gorenstein singularity $P\in X$ is the least positive integer $r$ such that $rK_X$ is Cartier at $P$.
\medskip
\textit{Birational transformations}.
A reduced divisor $D$ on a smooth variety $X$ is said to be \textit{simple normal crossing} (\textit{snc}) if $D$ is defined at every closed point $P$ in $X$ by the product of a part of a regular system of parameters in $\mathscr{O}_{X,P}$. A \textit{stratum} of $D=\sum_{i\in I}D_i$ is an irreducible component of $\bigcap_{i\in I'}D_i$ for a subset $I'$ of $I$. For a smooth morphism $X\to S$, the $D$ is said to be snc \textit{relative to} $S$ if every stratum of $D$ is smooth over $S$.
A \textit{log resolution} of a subtriple $(X,\Delta,\mathfrak{a})$ is a projective birational morphism from a smooth variety $Y$ to $X$ such that
\begin{itemize}
\item
the exceptional locus is a divisor and $\mathfrak{a}\mathscr{O}_Y$ is invertible,
\item
the union of the exceptional locus, the support of the strict transform of $\Delta$, and the cosupport of $\mathfrak{a}\mathscr{O}_Y$ is snc, and
\item
it is isomorphic on the maximal open locus $U$ in $X$ such that $U$ is smooth, $\mathfrak{a}\mathscr{O}_U$ is invertible, and the union of the support of $\Delta|_U$ and the cosupport of $\mathfrak{a}\mathscr{O}_U$ is snc.
\end{itemize}
Let $(X,\Delta,\mathfrak{a})$ be a subtriple, where $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$, and $Y$ be a normal variety birational to $X$. A subtriple $(Y,\Gamma,\mathfrak{b})$ is said to be \textit{crepant} to $(X,\Delta,\mathfrak{a})$ if $a_E(X,\Delta,\mathfrak{a})=a_E(Y,\Gamma,\mathfrak{b})$ for any divisor $E$ over $X$ and $Y$. Suppose that $Y$ is smooth and has a birational morphism to $X$ whose exceptional locus is a divisor $\sum_iE_i$. The \textit{weak transform} on $Y$ of $\mathfrak{a}$ is the $\mathbf{R}$-ideal $\mathfrak{a}_Y=\prod_j(\mathfrak{a}_{jY})^{r_j}$ defined by
\begin{align*}
\mathfrak{a}_{jY}=\mathfrak{a}_j\mathscr{O}_Y(\textstyle\sum_i(\ord_{E_i}\mathfrak{a}_j)E_i).
\end{align*}
Remark that this notion is different from that of the strict transform, the $j$-th ideal of which is $\sum_{f\in\mathfrak{a}_j}f\mathscr{O}_Y(\sum_i(\ord_{E_i}f)E_i)$ (see \cite[III Definition 5]{H64}). The definition of the weak transform $\mathfrak{a}_Y$ is extended to the case when $Y$ is normal as far as $\sum_i(\ord_{E_i}\mathfrak{a}_j)E_i$ is Cartier for any $j$. We introduce
\begin{definition}
The \textit{pull-back} of $(X,\Delta,\mathfrak{a})$ by $Y\to X$ is the subtriple $(Y,\Delta_Y,\mathfrak{a}_Y)$ in which $\Delta_Y=-K_{Y/(X,\Delta)}+\sum_{ij}(r_j\ord_{E_i}\mathfrak{a}_j)E_i$.
\end{definition}
The pull-back $(Y,\Delta_Y,\mathfrak{a}_Y)$ is crepant to $(X,\Delta,\mathfrak{a})$.
\medskip
\textit{Weighted blow-ups}.
Let $P\in X$ be the germ of a smooth variety. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_{X,P}$ and $w_1,\ldots,w_c$ be positive integers. For $w\in\mathbf{N}$, let $\mathscr{I}_w$ be the ideal in $\mathscr{O}_X$ generated by all monomials $x_1^{s_1}\cdots x_c^{s_c}$ such that $\sum_{i=1}^cs_iw_i\ge w$. The \textit{weighted blow-up} of $X$ with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$ is $\Proj_X(\bigoplus_{w\in\mathbf{N}}\mathscr{I}_w)$.
\begin{remark}\label{rmk:wbu}
If $x'_1,\ldots,x'_c$ is a part of another regular system of parameters such that $x'_i\in\mathscr{I}_{w_i}\setminus\mathscr{I}_{w_i+1}$ for any $i$, then the weighted blow-up of $X$ with $\wt(x'_1,\ldots,x'_c)=(w_1,\ldots,w_c)$ is the same as that obtained by $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$.
\end{remark}
Its explicit description is reduced to the case of the affine space by an \'etale morphism. Let $o\in\mathbf{A}^d$ be the germ at the origin of the affine space with coordinates $x_1,\ldots,x_d$ and $Y$ be the weighted blow-up of $\mathbf{A}^d$ with $\wt(x_1,\ldots,x_d)=(w_1,\ldots,w_d)$. One may assume that $w_1,\ldots,w_d$ have no common divisor. Then $Y$ is covered by the affine charts $U_i=\mathbf{A}^d/\mathbf{Z}_{w_i}(w_1,\ldots,w_{i-1},-1,w_{i+1},\ldots,w_d)$ for $1\le i\le d$, and the exceptional divisor is isomorphic to the weighted projective space $\mathbf{P}(w_1,\ldots,w_d)$ (see \cite[6.38]{KSC04} for details).
Here the notation $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ stands for the quotient of $\mathbf{A}^d$ by the cyclic group $\mathbf{Z}_r$ of order $r$ whose generator sends the $i$-th coordinate $x_i$ of $\mathbf{A}^d$ to $\zeta^{a_i}x_i$, where $\zeta$ is a primitive $r$-th root of unity. The $x_1,\ldots,x_d$ on this quotient are called \textit{orbifold coordinates}. An isolated \textit{cyclic quotient singularity} means that the spectrum of the completion of its local ring coincides with the regular base change of some $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$, in which it is said to be of \textit{type} $\frac{1}{r}(a_1,\ldots,a_d)$.
In terms of toric geometry following the notation in \cite{I14}, by setting $N=\mathbf{Z}^d+\mathbf{Z} v$ where $v=\frac{1}{r}(a_1,\ldots,a_d)$, the quotient $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ is the toric variety $T_N(\Delta)$ which corresponds to the cone $\Delta$ spanned by the standard basis $e_1,\ldots,e_d$ of $\mathbf{Z}^d$. For $e=\frac{1}{r}(w_1,\ldots,w_d)\in N\cap\Delta$, the \textit{weighted blow-up} of $\mathbf{A}^d/\mathbf{Z}_r(a_1,\ldots,a_d)$ with respect to $\wt(x_1,\ldots,x_d)=\frac{1}{r}(w_1,\ldots,w_d)$ is defined by adding the ray generated by $e$.
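For orientation we record the standard formulae in the smooth case $r=1$: if $E$ denotes the exceptional divisor of the weighted blow-up $Y$ of $\mathbf{A}^d$ with $\wt(x_1,\ldots,x_d)=(w_1,\ldots,w_d)$, then
\begin{align*}
K_{Y/\mathbf{A}^d}=\Bigl(\sum_{i}w_i-1\Bigr)E,\qquad a_E(\mathbf{A}^d)=\sum_{i}w_i,\qquad
\ord_E(x_1^{s_1}\cdots x_d^{s_d})=\sum_{i}s_iw_i,
\end{align*}
so the order along $E$ of an ideal is the minimal weighted order of its elements.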
\medskip
\textit{Adjunction}.
Let $X$ be a normal variety and $S+B$ be an effective $\mathbf{R}$-divisor on $X$ such that $S$ is reduced and has no common components with the support of $B$. Suppose that they form a pair $(X,S+B)$. Then one has the \textit{adjunction}
\begin{align*}
\nu^*((K_X+S+B)|_S)=K_{S^\nu}+B_{S^\nu}
\end{align*}
on the normalisation $\nu\colon S^\nu\to S$ of $S$, in which $B_{S^\nu}$ is an effective $\mathbf{R}$-divisor called the \textit{different} on $S^\nu$ of $B$ (see \cite[Chapter 16]{Ko92} or \cite[Section 3]{Sh93}).
\begin{example}\label{exl:different}
Let $X=\mathbf{A}^2/\mathbf{Z}_r(1,w)$ with orbifold coordinates $x_1,x_2$ such that $w$ is coprime to $r$. Let $S$ be the curve on $X$ defined by $x_1$ and $P$ be the origin of $X$. Then $(K_X+S)|_S=K_S+(1-r^{-1})P$.
\end{example}
The singularity on $X$ is associated with that on $S^\nu$ by
\begin{theorem}[Inversion of adjunction]\label{thm:ia}
Notation as above.
\begin{enumerate}
\item
\textup{(\cite[Theorem 17.6]{Ko92})}\;
$(X,S+B)$ is plt about $S$ iff $(S^\nu,B_{S^\nu})$ is klt. In this case, $S$ is normal.
\item
\textup{(\cite{K07})}\;
$(X,S+B)$ is lc about $S$ iff $(S^\nu,B_{S^\nu})$ is lc.
\end{enumerate}
\end{theorem}
\medskip
\textit{$R$-varieties}.
The notions explained above make sense over the ring $R$ of formal power series over a field of characteristic zero, which has been discussed by de Fernex, Ein and Musta\c{t}\u{a} \cite{dFEM11}, \cite{dFM09}. We mean by an \textit{$R$-variety} an integral separated scheme of finite type over $\Spec R$. We consider regular $R$-varieties instead of smooth $R$-varieties.
The canonical divisor $K_X$ on a normal $R$-variety $X$ is defined by the \textit{sheaf of special differentials} in \cite{dFEM11}. Let $Y\to X$ be a birational morphism between regular $R$-varieties. The relative canonical divisor $K_{Y/X}$ is the effective divisor defined by the zeroth Fitting ideal of $\Omega_{Y/X}$ \cite[Remark A.12]{dFEM11}. In particular, $K_{Y/X}$ is independent of the structure of $X$ as an $R$-variety.
\begin{remark}\label{rmk:regular}
Let $P\in X$ be the germ of a normal $\mathbf{Q}$-Gorenstein $R$-variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Let $X'$ be either
\begin{itemize}
\item
the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$, or
\item
$X\times_{\Spec R}\Spec R'$, where $R'$ is the completion of $R\otimes_KK'$ for a field extension $K'$ of $K$,
\end{itemize}
which has a regular morphism $\pi\colon X'\to X$. Then $K_{X'}=\pi^*K_X$, by which one has that $a_{E'}(X',\mathfrak{a}\mathscr{O}_{X'})=a_E(X,\mathfrak{a})$ and $\mld_{P'}(X',\mathfrak{a}\mathscr{O}_{X'})=\mld_P(X,\mathfrak{a})$ for any components $E'$ of $E\times_XX'$ and $P'$ of $P\times_XX'$.
\end{remark}
\begin{lemma}\label{lem:regular}
Let $P\in X$ be the germ of an $R$-variety. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$ and $\hat P$ be its closed point.
\begin{enumerate}
\item\label{itm:idealbij}
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$ and $\hat\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_{\hat X}$. Then the pull-back defines a bijective map from the set of $\mathfrak{m}$-primary $\mathbf{R}$-ideals on $X$ to the set of $\hat\mathfrak{m}$-primary $\mathbf{R}$-ideals on $\hat X$.
\item\label{itm:divisorbij}
Suppose that $X$ is normal. Then the base change defines a bijective map from the set of divisors over $X$ with centre $P$ to the set of divisors over $\hat X$ with centre $\hat P$.
\end{enumerate}
\end{lemma}
\begin{proof}
The (\ref{itm:idealbij}) follows from the isomorphisms $\mathscr{O}_X/\mathfrak{m}^l\simeq\mathscr{O}_{\hat X}/\hat\mathfrak{m}^l$, while (\ref{itm:divisorbij}) follows from the property that blowing-up commutes with flat base changes.
\end{proof}
By Lemma \ref{lem:regular}, in order to study the minimal log discrepancy at the closed point of the germ $P\in X$, one may often replace $P\in X$ with a germ the completion of the local ring of which is isomorphic to that of $\mathscr{O}_{X,P}$.
\section{The generic limit of ideals}\label{sct:limit}
We recall the generic limit of ideals on a fixed germ. It was introduced by de Fernex and Musta\c{t}\u{a} \cite{dFM09} and simplified by Koll\'ar \cite{Ko08}. We follow our style of the definition in \cite{K15}.
Let $P\in X$ be the germ of a scheme of finite type over $k$ and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $\mathcal{S}=\{(\mathfrak{a}_{i1},\ldots,\mathfrak{a}_{ie})\}_{i\in\mathbf{N}}$ be an infinite sequence of $e$-tuples of ideals in $\mathscr{O}_X$. For every positive integer $l$, the ideal $(\mathfrak{a}_{ij}+\mathfrak{m}^l)/\mathfrak{m}^l$ in $\mathscr{O}_X/\mathfrak{m}^l$ for $i\in\mathbf{N}$ and $1\le j\le e$ corresponds to a closed point $P_{ij}(l)$ in the Hilbert scheme $H_l$ parametrising ideals in $\mathscr{O}_X/\mathfrak{m}^l$. The $H_l$ is a scheme of finite type over $k$ and there exists a natural rational map $H_{l+1}\to H_l$. Take the Zariski closure $Z_j(l)$ of the subset $\{P_{ij}(l)\}_{i\in\mathbf{N}}$ in $H_l$. By finding a locally closed irreducible subset $Z_l$ of $Z_1(l)\times\cdots\times Z_e(l)$ inductively, one obtains a family of approximations of $\mathcal{S}$ defined below.
\begin{definition}\label{dfn:approx}
A \textit{family} $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ \textit{of approximations} of $\mathcal{S}$ consists of a fixed positive integer $l_0$ and for every $l\ge l_0$,
\begin{itemize}
\item
a variety $Z_l$,
\item
an ideal sheaf $\mathfrak{a}_j(l)$ on $X\times Z_l$ for every $1\le j\le e$ which is flat over $Z_l$ and contains $\mathfrak{m}^l\mathscr{O}_{X\times Z_l}$,
\item
an infinite subset $N_l$ of $\mathbf{N}$ and a map $s_l\colon N_l\to Z_l(k)$, where $Z_l(k)$ denotes the set of the $k$-points in $Z_l$, and
\item
a dominant morphism $t_l\colon Z_{l+1}\to Z_l$,
\end{itemize}
such that
\begin{itemize}
\item
$\mathfrak{a}_j(l)\mathscr{O}_{X\times Z_{l+1}}=\mathfrak{a}_j(l+1)+\mathfrak{m}^l\mathscr{O}_{X\times Z_{l+1}}$ by $\id_X\times t_l$,
\item
$\mathfrak{a}_j(l)_i=\mathfrak{a}_{ij}+\mathfrak{m}^l$ for $i\in N_l$, where $\mathfrak{a}_j(l)_i=\mathfrak{a}_j(l)\otimes_{\mathscr{O}_{Z_l}}k$ is the ideal in $\mathscr{O}_X$ given by the closed point $s_l(i)\in Z_l$,
\item
the image of $N_l$ by $s_l$ is dense in $Z_l$, and
\item
$N_{l+1}$ is contained in $N_l$ and $t_l\circ s_{l+1}=s_l|_{N_{l+1}}$.
\end{itemize}
\end{definition}
For the above $\mathcal{F}$, let $K=\varinjlim_lK(Z_l)$ be the union of the function fields $K(Z_l)$ of $Z_l$ by the inclusions $t_l^*\colon K(Z_l)\to K(Z_{l+1})$. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$. Let $\hat P$ be the closed point of $\hat X$ and $\hat\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_{\hat X}$.
\begin{definition}
The \textit{generic limit} of $\mathcal{S}$ with respect to $\mathcal{F}$ is the $e$-tuple $(\mathsf{a}_1,\ldots,\mathsf{a}_e)$ of ideals in $\mathscr{O}_{\hat X}$ defined by
\begin{align*}
\mathsf{a}_j=\varprojlim_l\mathfrak{a}_j(l)_K,
\end{align*}
where $\mathfrak{a}_j(l)_K=\mathfrak{a}_j(l)\otimes_{\mathscr{O}_{Z_l}}K$ is the ideal in $\mathscr{O}_X\otimes_kK$ given by the natural $K$-point $\Spec K\to Z_l$.
\end{definition}
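For instance, if the sequence $\mathcal{S}$ is constant, say $\mathfrak{a}_{ij}=\mathfrak{a}_j$ for all $i$, then one may take every $Z_l$ to be the point $\Spec k$, $\mathfrak{a}_j(l)=\mathfrak{a}_j+\mathfrak{m}^l$ and $N_l=\mathbf{N}$, in which case $K=k$ and
\begin{align*}
\mathsf{a}_j=\varprojlim_l(\mathfrak{a}_j+\mathfrak{m}^l)=\mathfrak{a}_j\mathscr{O}_{\hat X}
\end{align*}
by Krull's intersection theorem. On the other hand, the generic limit may be much smaller than the members of the sequence: for the ideals $\mathfrak{a}_{i1}=(x^i)$ on the germ at the origin of $\mathbf{A}^1$ with coordinate $x$, one may take $Z_l=\Spec k$, $\mathfrak{a}_1(l)=\mathfrak{m}^l$ and $N_l=\{i\in\mathbf{N}\mid i\ge l\}$, in which case the generic limit $\mathsf{a}_1$ is the zero ideal.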
\begin{remark}
Let $R$ be the completion of the local ring $\mathscr{O}_{X,P}$. In the literature, the generic limit is defined for a sequence of $e$-tuples of ideals in $R$. When $\mathfrak{a}_{ij}$ are ideals in $R$, the generic limit is defined in the same way just by replacing the condition $\mathfrak{a}_j(l)_i=\mathfrak{a}_{ij}+\mathfrak{m}^l$ in Definition \ref{dfn:approx} with $\mathfrak{a}_j(l)_i=(\mathfrak{a}_{ij}+\mathfrak{m}^lR)\cap\mathscr{O}_X$.
\end{remark}
By the very definition, one has
\begin{lemma}
Let $(\mathsf{a},\mathsf{b})$ be a generic limit of a sequence $\{(\mathfrak{a}_i,\mathfrak{b}_i)\}_{i\in\mathbf{N}}$ of pairs of ideals in $\mathscr{O}_X$.
\begin{enumerate}
\item
If $\mathfrak{a}_i\subset\mathfrak{b}_i$ for any $i$, then $\mathsf{a}\subset\mathsf{b}$.
\item
The $\mathsf{a}+\mathsf{b}$ and $\mathsf{a}\mathsf{b}$ are the generic limits of $\{\mathfrak{a}_i+\mathfrak{b}_i\}_{i\in\mathbf{N}}$ and $\{\mathfrak{a}_i\mathfrak{b}_i\}_{i\in\mathbf{N}}$.
\end{enumerate}
\end{lemma}
The generic limit depends on the choice of $\mathcal{F}$ but remains the same after the replacement of $\mathcal{F}$ with a subfamily.
\begin{definition}
A family $\mathcal{F}'=(Z'_l,(\mathfrak{a}'_j(l))_j,N'_l,s'_l,t'_l)_{l\ge l'_0}$ of approximations of $\mathcal{S}$ is called a \textit{subfamily} of $\mathcal{F}$ if $l'_0$ is at least $l_0$ and if there exists an open immersion $i_l\colon Z'_l\to Z_l$ for every $l\ge l'_0$ such that
\begin{itemize}
\item
$t_l\circ i_{l+1}=i_l\circ t'_l$,
\item
$\mathfrak{a}_j(l)\mathscr{O}_{X\times Z'_l}=\mathfrak{a}'_j(l)$ by $\id_X\times i_l$, and
\item
$N'_l$ is a subset of $N_l$ and $i_l\circ s'_l=s_l|_{N'_l}$.
\end{itemize}
\end{definition}
\begin{convention}\label{cnv:retain}
Later we shall often replace $\mathcal{F}$ with a subfamily, but we retain the same notation $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ to avoid intricacy.
\end{convention}
The theory of the generic limit of ideals was developed for the study of the singularities on the germ $P\in X$. When $X$ is klt, the singularities on $\hat X$ are associated with those on $X$ (see \cite{dFEM11}). The existence of log resolutions supplies
\begin{lemma}\label{lem:resolution}
Notation as above and assume that $X$ is klt. Then $\hat X$ is klt, and after replacing $\mathcal{F}$ with a subfamily but using the same notation,
\begin{align*}
\mld_{\hat P}(\hat X,{\textstyle\prod}_{j=1}^e(\mathsf{a}_j+\hat\mathfrak{m}^l)^{r_j})=\mld_P(X,{\textstyle\prod}_{j=1}^e\mathfrak{a}_j(l)_i^{r_j})
\end{align*}
for any positive real numbers $r_1,\ldots,r_e$ and for any $i\in N_l$ and $l\ge l_0$.
\end{lemma}
\begin{remark}\label{rmk:descend}
\begin{enumerate}
{\setlength{\itemindent}{25pt}\item\label{itm:descendE}}
Let $\hat E$ be a divisor over $\hat X$ with centre $\hat P$. Then replacing $\mathcal{F}$ with a subfamily (but using the same notation as in Convention \ref{cnv:retain}), one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$, that is, $E_{l'}=E_l\times_{Z_l}Z_{l'}$ when $l\le l'$, and $\hat E=E_l\times_{X\times Z_l}\hat X$. Let $E_i$ be any connected component of the fibre of $E_l$ at $s_l(i)\in Z_l$, which is independent of $l$ as far as $i\in N_l$. Replacing $\mathcal{F}$ with a subfamily again, for any $i\in N_l$ and $1\le j\le e$, $E_i$ is a divisor over $X$ and satisfies that
\begin{align*}
\ord_{\hat E}\mathsf{a}_j=\ord_{\hat E}(\mathsf{a}_j+\hat\mathfrak{m}^l)&=\ord_{E_i}(\mathfrak{a}_{ij}+\mathfrak{m}^l)=\ord_{E_i}\mathfrak{a}_{ij}<l,\\
a_{\hat E}(\hat X,\mathsf{a})&=a_{E_i}(X,\mathfrak{a}_i).
\end{align*}
\item
Let $\hat\pi\colon\hat Y\to\hat X$ be a projective birational morphism isomorphic outside $\hat P$. Then $\hat\pi$ is descendible as stated in \cite[Proposition A.7]{K15}, that is, after replacing $\mathcal{F}$ with a subfamily, there exist projective morphisms $\pi_l\colon Y_l\to X\times Z_l$ such that $\pi_{l'}=\pi_l\times_{Z_l}Z_{l'}$ when $l\le l'$ and such that $\hat\pi=\pi_l\times_{X\times Z_l}\hat X$.
\end{enumerate}
\end{remark}
\begin{remark}
The $E_l$ is treated in \cite{K15} as if it has connected fibres, which should have been corrected appropriately.
\end{remark}
Now we fix positive real numbers $r_1,\ldots,r_e$ and consider the $\mathbf{R}$-ideals $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$ and $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$. The $\mathsf{a}$ is called the \textit{generic limit} of the sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of $\mathbf{R}$-ideals on $X$ with respect to $\mathcal{F}$. The most important achievement at present is the following theorem due to de Fernex, Ein and Musta\c{t}\u{a}. Indeed, as an application, they first proved the ACC for lc thresholds restricted to smooth varieties.
\begin{theorem}[\cite{dFEM10}, \cite{dFEM11}]\label{thm:lct}
Notation as above and assume that $X$ is klt. If $(\hat X,\mathsf{a})$ is lc, then so is $(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily which depends on $r_1,\ldots,r_e$.
\end{theorem}
Theorem \ref{thm:lct} is a corollary to the effective $\mathfrak{m}$-adic semi-continuity of lc thresholds, which was globalised in \cite[Theorem 4.11]{K15}. We prove its relative version.
\begin{theorem}\label{thm:relative}
Let $X$ be a klt variety and $X\to T$ be a morphism to a variety. Suppose that every closed fibre of $X\to T$ is klt. Let $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$ and $Z$ be an irreducible closed subset of $X$ which dominates $T$. Suppose that $\mld_{\eta_Z}(X,\mathfrak{a})=0$ and it is computed by a divisor $E$ over $X$. Then after replacing $X$ and $T$ with their dense open subsets, the following hold for any $t\in T$.
\begin{itemize}
\item
The fibre of $E$ at $t$ is non-empty, and each connected component $E_t$ of it is a divisor over a component $X_t$ of the fibre of $X$ at $t$.
\item
The centre $Z_t$ on $X_t$ of $E_t$ is smooth.
\item
If an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X_t}+\mathfrak{p}_j=\mathfrak{b}_j+\mathfrak{p}_j$ for any $j$, where $\mathfrak{p}_j=\{f\in\mathscr{O}_{X_t}\mid\ord_{E_t}f>\ord_E\mathfrak{a}_j\}$, then $(X_t,\mathfrak{b})$ is lc about $Z_t$ and $\mld_{\eta_{Z_t}}(X_t,\mathfrak{b})=0$.
\end{itemize}
\end{theorem}
\begin{proof}
Take a log resolution $\pi\colon Y\to X$ of $(X,\mathfrak{a}\mathscr{I}_Z)$, where $\mathscr{I}_Z$ is the ideal sheaf of $Z$, such that $E$ is realised as a divisor on $Y$. We may shrink $T$ so that $T$ and $Y\to T$ are smooth and so that the union $F$ of the exceptional locus of $\pi$ and the cosupport of $\mathfrak{a}\mathscr{I}_Z\mathscr{O}_Y$ is an snc divisor relative to $T$. Replace $X$ with an open subset $X'$ containing $\eta_Z$ such that $Z'=Z|_{X'}$ is smooth over $T$ and such that if the restriction $S'=S|_{\pi^{-1}(X')}$ of a stratum $S$ of $F$ satisfies that $S'\neq\emptyset$ and $\pi(S')\subset Z'$, then $S'\to Z'$ is smooth.
Set $n=\dim Z-\dim T$. Then for any $t\in T$ and $z\in Z_t$,
\begin{align*}
\mld_z(X_t,\mathfrak{m}_z^n\cdot\mathfrak{a}\mathscr{O}_{X_t})=0
\end{align*}
for the maximal ideal sheaf $\mathfrak{m}_z$ on $X_t$ defining $z$, and it is computed by the divisor $G_z$ obtained by the blow-up of $Y_t=Y\times_XX_t$ along a component of $E_t\cap\pi^{-1}(z)$. This is verified from the local description at each closed point $y$ in $\pi^{-1}(z)$. Indeed, let $v_1,\ldots,v_s$ be a part of a regular system of parameters in $\mathscr{O}_{Y,y}$ such that $F$ is defined at $y$ by $\prod_{l=1}^sv_l$. Since every stratum of $F$ mapped into $Z$ is smooth over $Z$, they are extended to a part $v_1,\ldots,v_s,w_1,\ldots,w_n$ of a regular system of parameters in $\mathscr{O}_{Y,y}$ such that their images form a part of a regular system of parameters in $\mathscr{O}_{Y_t,y}$ and such that
\begin{align*}
\mathfrak{m}_z\mathscr{O}_{Y_t,y}=(w_1,\ldots,w_n,\textstyle\prod_{l=1}^sv_l^{m_l})\mathscr{O}_{Y_t,y},
\end{align*}
where $m_l$ is the order of $\mathscr{I}_Z$ along the divisor defined by $v_l$. (Note that the corresponding expression in the proof of \cite[Theorem 4.11]{K15} is incorrect).
Since $\ord_{G_z}\mathfrak{a}_j\mathscr{O}_{X_t}=\ord_E\mathfrak{a}_j$ and $\ord_{G_z}f\ge\ord_{E_t}f$ for any $f\in\mathscr{O}_{X_t}$, by \cite[Theorem 1.4]{dFEM10} we conclude that $\mld_z(X_t,\mathfrak{m}_z^n\mathfrak{b})=0$ for the $\mathfrak{b}$ in the statement. Hence $(X_t,\mathfrak{b})$ is lc about $Z_t$, and $\mld_{\eta_{Z_t}}(X_t,\mathfrak{b})=0$ by $a_{E_t}(X_t,\mathfrak{b})=0$.
\end{proof}
\begin{corollary}\label{crl:relative}
Let $X$ be a klt variety and $X\to T$ be a morphism to a variety. Suppose that the fibre $X_t$ at every closed point $t$ in $T$ is klt. Let $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ be an $\mathbf{R}$-ideal on $X$ and $Z$ be a closed subset of $X$ such that $(X,\mathfrak{a})$ is lc about $Z$. Set $Z_t=Z\times_XX_t$ and let $\mathscr{I}_t$ denote the ideal sheaf of $Z_t$ on $X_t$. Then there exists a positive integer $l$ such that after replacing $T$ with its dense open subset, for any $t\in T$ if an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X_t}+\mathscr{I}_t^l=\mathfrak{b}_j+\mathscr{I}_t^l$ for any $j$, then $(X_t,\mathfrak{b})$ is lc about $Z_t$.
\end{corollary}
\begin{proof}
We shall prove it by noetherian induction on $Z$. Let $Z_0$ be an irreducible component of $Z$, which may be assumed to dominate $T$. Let $\mathscr{I}_{Z_0}$ be the ideal sheaf of $Z_0$ and $r$ be the non-negative real number such that $\mld_{\eta_{Z_0}}(X,\mathfrak{a}\mathscr{I}_{Z_0}^r)$ equals zero. Applying Theorem \ref{thm:relative}, after shrinking $T$ there exist an open subset $X'$ of $X$ containing $\eta_{Z_0}$ and a positive integer $l_0$ such that for any $t\in T$, if an $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j\mathfrak{b}_j^{r_j}$ on $X_t$ satisfies that $\mathfrak{a}_j\mathscr{O}_{X'_t}+\mathscr{I}_{Z_0}^{l_0}\mathscr{O}_{X'_t}=\mathfrak{b}_j\mathscr{O}_{X'_t}+\mathscr{I}_{Z_0}^{l_0}\mathscr{O}_{X'_t}$ for any $j$ on $X'_t=X'\times_XX_t$, then $(X'_t,\mathfrak{b}\mathscr{O}_{X'_t})$ is lc about $Z_0\times_XX'_t$. Thus the assertion is reduced to that for the closure of $Z\setminus(Z_0\cap X')$, which follows from the induction hypothesis.
\end{proof}
\section{Singularities on a fixed variety}
In this section, we fix the germ $P\in X$ of a klt variety and review an approach to the study of $\mld_P(X,\mathfrak{a})$ for $\mathbf{R}$-ideals $\mathfrak{a}$ which uses the generic limit of ideals on $X$. Our earlier work shows the discreteness of the log discrepancies $a_E(X,\mathfrak{a})$.
\begin{theorem}[\cite{K14}]\label{thm:discrete}
Let $P\in X$ be the germ of a klt variety. Fix a finite subset $I$ of the positive real numbers. Then the set
\begin{align*}
\{a_E(X,\mathfrak{a})\mid\textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I,\ E\in\mathcal{D}_X,\ \textrm{$(X,\mathfrak{a})$ lc about $\eta_{c_X(E)}$}\}
\end{align*}
is discrete in $\mathbf{R}$.
\end{theorem}
We shall explain the equivalence of several important conjectures on a fixed germ with the help of Theorems \ref{thm:lct} and \ref{thm:discrete}.
\begin{conjecture}\label{cnj:equiv}
Let $P\in X$ be the germ of a klt variety.
\begin{enumerate}[series=equiv]
\item\label{itm:acc}
\textup{(ACC for minimal log discrepancies)}\;
Fix a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{\mld_P(X,\mathfrak{a})\mid\textup{$\mathfrak{a}$ an $\mathbf{R}$-ideal},\ \mathfrak{a}\in I\}
\end{align*}
satisfies the ACC.
\item\label{itm:alc}
\textup{(ACC for $a$-lc thresholds)}\;
Fix a non-negative real number $a$ and a subset $I$ of the positive real numbers which satisfies the DCC. Then the set
\begin{align*}
\{t\in\mathbf{R}_{\ge0}\mid\textup{$\mathfrak{a}$, $\mathfrak{b}$ $\mathbf{R}$-ideals},\ \mld_P(X,\mathfrak{a}\mathfrak{b}^t)=a,\ \mathfrak{a}\mathfrak{b}\in I\}
\end{align*}
satisfies the ACC.
\item\label{itm:madic}
\textup{(uniform $\mathfrak{m}$-adic semi-continuity)}\;
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}=\prod_{j=1}^e\mathfrak{a}_j^{r_j}$ and $\mathfrak{b}=\prod_{j=1}^e\mathfrak{b}_j^{r_j}$ are $\mathbf{R}$-ideals on $X$ satisfying that $r_j\in I$ and $\mathfrak{a}_j+\mathfrak{m}^l=\mathfrak{b}_j+\mathfrak{m}^l$ for any $j$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, then $\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$.
\item\label{itm:nakamura}
\textup{(boundedness)}\;
Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:limit}
\textup{(generic limit)}\;
Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, so set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. Then $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily but using the same notation.
\end{enumerate}
\end{conjecture}
\begin{remark}\label{rmk:limit}
We provide a few remarks on Conjecture \ref{cnj:equiv}(\ref{itm:limit}).
\begin{enumerate}
\item\label{itm:limitineq}
Lemma \ref{lem:resolution} means the equality
\begin{align*}
\mld_{\hat P}(\hat X,{\textstyle\prod}_{j=1}^e(\mathsf{a}_j+\hat\mathfrak{m}^l)^{r_j})=\mld_P(X,{\textstyle\prod}_{j=1}^e(\mathfrak{a}_{ij}+\mathfrak{m}^l)^{r_j})
\end{align*}
for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Take a divisor $\hat E$ over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a})$ and choose an integer $l_1\ge l_0$ such that $\ord_{\hat E}\mathsf{a}_j\le l_1\ord_{\hat E}\hat\mathfrak{m}$ for any $j$. Then for $l\ge l_1$, the left-hand side equals $\mld_{\hat P}(\hat X,\mathsf{a})$ while the right-hand side is at least $\mld_P(X,\mathfrak{a}_i)$. Thus after replacing $l_0$ with $l_1$, one has the inequality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a})\ge\mld_P(X,\mathfrak{a}_i)
\end{align*}
for any $i\in N_{l_0}$. The intrinsic part in Conjecture \ref{cnj:equiv}(\ref{itm:limit}) is the opposite inequality.
\item\label{itm:limitresult}
In particular, Conjecture \ref{cnj:equiv}(\ref{itm:limit}) holds by Theorem \ref{thm:lct} when $\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive. The conjecture also holds when $(\hat X,\mathsf{a})$ is klt \cite[Theorem 5.1]{K15}. Thus Conjecture \ref{cnj:equiv}(\ref{itm:limit}) remains open only in the case when $(\hat X,\mathsf{a})$ is not klt but $\mld_{\hat P}(\hat X,\mathsf{a})$ is positive.
\end{enumerate}
\end{remark}
We prepare basic lemmata.
\begin{lemma}\label{lem:DCC}
Let $I$ and $J$ be subsets of the positive real numbers both of which satisfy the DCC. Then the set $\{rs\mid r\in I,\ s\in J\}$ satisfies the DCC.
\end{lemma}
\begin{proof}
Let $\{r_is_i\}_{i\in\mathbf{N}}$ be an arbitrary non-increasing sequence where $r_i\in I$ and $s_i\in J$. It is enough to show that $r_is_i$ is constant after passing to a subsequence. We claim that there exists a strictly increasing sequence $\{i_j\}_{j\in\mathbf{N}}$ such that $\{r_{i_j}\}_{j\in\mathbf{N}}$ is a non-decreasing sequence. Indeed, let $i_1$ be a number such that $r_{i_1}$ attains the minimum of the set $\{r_i\mid i\in\mathbf{N}\}$, which exists since this set satisfies the DCC. Once $i_1,\ldots,i_j$ have been constructed, take $i_{j+1}$ to be a number such that $r_{i_{j+1}}$ attains the minimum of the set $\{r_i\mid i>i_j\}$.
By replacing $\{r_is_i\}_{i\in\mathbf{N}}$ with $\{r_{i_j}s_{i_j}\}_{j\in\mathbf{N}}$, we may assume that $r_i$ is non-decreasing. Applying the same argument to $\{s_i\}_{i\in\mathbf{N}}$, we may also assume that $s_i$ is non-decreasing. Then the sequence $\{r_is_i\}_{i\in\mathbf{N}}$ becomes both non-increasing and non-decreasing, so $r_is_i$ must be constant.
\end{proof}
\begin{lemma}\label{lem:mld}
Let $P\in X$ be the germ of a normal $\mathbf{Q}$-Gorenstein variety and $\mathfrak{a}_1,\ldots,\mathfrak{a}_e$ be $\mathbf{R}$-ideals on $X$. Let $t_1,\ldots,t_e$ be non-negative real numbers such that $\sum_{i=1}^et_i=1$.
\begin{enumerate}
\item\label{itm:mldconvex}
$\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})\ge\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i)$.
\item\label{itm:mldequal}
If a divisor $E$ over $X$ computes all $\mld_P(X,\mathfrak{a}_i)$, then $\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})=\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i)$ and it is computed by $E$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $F$ be a divisor over $X$ which computes $\mld_P(X,\prod_{i=1}^e\mathfrak{a}_i^{t_i})$. Then,
\begin{align*}
\mld_P(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=a_F(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=\sum_{i=1}^et_i\cdot a_F(X,\mathfrak{a}_i)\ge\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i),
\end{align*}
which is (\ref{itm:mldconvex}). On the other hand, if $E$ computes all $\mld_P(X,\mathfrak{a}_i)$, then
\begin{align*}
\mld_P(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})\le a_E(X,{\textstyle\prod_{i=1}^e\mathfrak{a}_i^{t_i}})=\sum_{i=1}^et_i\cdot a_E(X,\mathfrak{a}_i)=\sum_{i=1}^et_i\mld_P(X,\mathfrak{a}_i),
\end{align*}
which with (\ref{itm:mldconvex}) shows the assertion (\ref{itm:mldequal}).
\end{proof}
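For instance, on the germ $P\in X=\mathbf{A}^2$ at the origin with coordinates $x,y$, the exceptional divisor $E$ of the blow-up at $P$ computes both $\mld_P(X,(x))=1$ and $\mld_P(X,(y))=1$, and accordingly
\begin{align*}
\mld_P(X,(x)^{1/2}(y)^{1/2})=a_E(X,(x)^{1/2}(y)^{1/2})=2-\frac{1}{2}-\frac{1}{2}=1=\frac{1}{2}\mld_P(X,(x))+\frac{1}{2}\mld_P(X,(y)),
\end{align*}
illustrating Lemma \ref{lem:mld}(\ref{itm:mldequal}).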
\begin{theorem}\label{thm:equiv}
Let $P\in X$ be the germ of a klt variety. Then the five statements in Conjecture \textup{\ref{cnj:equiv}} are equivalent.
\end{theorem}
\begin{proof}
\textit{Step} 1.
The generic limit of ideals was invented with the implication from (\ref{itm:limit}) to (\ref{itm:acc}) in mind. Musta\c{t}\u{a} informed us of the proof of this implication and we wrote it in \cite[Proposition 4.8]{K15}. Note that the proof in \cite{K15} works even if $X$ has klt singularities. We also note that though the statement in \cite{K15} assumes the assertion in (\ref{itm:limit}) for ideals $\mathfrak{a}_{ij}$ in the completion of the local ring $\mathscr{O}_{X,P}$, its proof uses only the assertion for ideals in $\mathscr{O}_X$, which is exactly (\ref{itm:limit}). In fact, we derived from (\ref{itm:limit}) the following ACC, which was formulated by Cascini and McKernan \cite{M13}.
\begin{enumerate}[topsep=\smallskipamount,resume=equiv]
\item\label{itm:CM}
\textit{Fix subsets $I$ of the positive real numbers and $J$ of the non-negative real numbers both of which satisfy the DCC. Then there exist finite subsets $I_0$ of $I$ and $J_0$ of $J$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and $\mld_P(X,\mathfrak{a})\in J$, then $\mathfrak{a}\in I_0$ and $\mld_P(X,\mathfrak{a})\in J_0$.}
\end{enumerate}
The assertion (\ref{itm:acc}) follows from (\ref{itm:CM}) immediately. We shall derive (\ref{itm:alc}) from (\ref{itm:CM}). Let $\{t_i\}_{i\in\mathbf{N}}$ be a non-decreasing sequence of positive real numbers such that there exist ideals $\mathfrak{a}_i$ and $\mathfrak{b}_i$ on $X$ satisfying that $\mld_P(X,\mathfrak{a}_i\mathfrak{b}_i^{t_i})=a$ and $\mathfrak{a}_i\mathfrak{b}_i\in I$. It is enough to show that $T=\{t_i\mid i\in\mathbf{N}\}$ satisfies the ACC. By Lemma \ref{lem:DCC}, the set $IT=\{rt\mid r\in I,\ t\in T\}$ satisfies the DCC. Applying (\ref{itm:CM}) to $I\cup IT$ and $\{a\}$, one obtains a finite subset $I_0$ of $I\cup IT$ such that $\mathfrak{a}_i\mathfrak{b}_i^{t_i}\in I_0$ for any $i$. In particular, $T$ is contained in the set $I^{-1}I_0=\{r^{-1}s\mid r\in I,\ s\in I_0\}$, which satisfies the ACC.
\medskip
\textit{Step} 2.
The conjecture (\ref{itm:nakamura}) was proposed by Nakamura. His joint work \cite{MN16} with Musta\c{t}\u{a} shows the equivalence of (\ref{itm:madic}), (\ref{itm:nakamura}) and (\ref{itm:limit}). They treated the assertion in (\ref{itm:limit}) for ideals in the completion, but their proof works for our (\ref{itm:limit}). They also provided a direct proof of the implication from (\ref{itm:nakamura}) to (\ref{itm:acc}) which uses the ACC for lc thresholds on $X$ and Theorem \ref{thm:discrete}. We write the argument from (\ref{itm:limit}) to (\ref{itm:nakamura}) as Lemma \ref{lem:limtonak} since it will be used later.
\medskip
\textit{Step} 3.
Hence it is enough to show the implications from (\ref{itm:acc}) to (\ref{itm:nakamura}) and from (\ref{itm:alc}) to (\ref{itm:nakamura}). If (\ref{itm:nakamura}) were false, then there would exist a strictly increasing sequence $\{l_i\}_{i\in\mathbf{N}}$ and a sequence $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ of $\mathbf{R}$-ideals on $X$ such that $\mathfrak{a}_i\in I$ and such that every divisor $E_i$ over $X$ computing $\mld_P(X,\mathfrak{a}_i)$ satisfies the inequality $a_{E_i}(X)\ge l_i$. The assertion (\ref{itm:nakamura}) for those $\mathfrak{a}$ whose $\mld_P(X,\mathfrak{a})$ is not positive will be proved in Theorem \ref{thm:nonpos} independently. We assume that $\mld_P(X,\mathfrak{a}_i)$ is positive for any $i$ here.
By Theorem \ref{thm:discrete}, the set
\begin{align*}
M=\{a_E(X,\mathfrak{a})\mid\mathfrak{a}\in I,\ E\in\mathcal{D}_X,\ \textrm{$(X,\mathfrak{a})$ lc}\}
\end{align*}
is discrete in $\mathbf{R}$. In particular, all $\mld_P(X,\mathfrak{a}_i)$ belong to a finite set since they are bounded from above by $\mld_PX$. Thus we may assume that $\mld_P(X,\mathfrak{a}_i)$ is constant, say $m$, which is positive by our assumption. We may assume that $\mathfrak{a}_i$ is non-trivial; then $m$ is less than $\mld_PX$. By the discreteness of $M$, there exists a real number $m'$ greater than $m$ such that $r\not\in M$ for any real number $m<r\le m'$.
Let $t_i$ be the positive real number such that $\mld_P(X,\mathfrak{a}_i^{1-t_i})=m'$, which exists and satisfies that $0<t_i<1$ by $m<m'<\mld_PX$. Take a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^{1-t_i})$. Then $a_{E_i}(X,\mathfrak{a}_i)<m'$, so $E_i$ also computes $\mld_P(X,\mathfrak{a}_i)=m$ by the property of $m'$, and thus $\ord_{E_i}\mathfrak{a}_i=a_{E_i}(X)-m\ge l_i-m$. Since $t_i\ord_{E_i}\mathfrak{a}_i=a_{E_i}(X,\mathfrak{a}_i^{1-t_i})-a_{E_i}(X,\mathfrak{a}_i)=m'-m$, one has the estimate $t_i\le(m'-m)/(l_i-m)$ when $l_i>m$, showing that $t_i$ approaches zero as $i$ increases.
This contradicts the ACC for $m'$-lc thresholds in (\ref{itm:alc}). It remains to verify that our situation also contradicts the ACC for minimal log discrepancies in (\ref{itm:acc}). By passing to a subsequence, we may assume that the $t_i$ are less than one-half and form a strictly decreasing sequence whose limit is zero. Then, since $1-(1-t)t=1-t+t^2$ is strictly decreasing for $t<1/2$, the sequence $\{1-(1-t_i)t_i\}_{i\in \mathbf{N}}$ is strictly increasing. We set
\begin{align*}
T=\{1-(1-t_i)t_i\mid i\in\mathbf{N}\},
\end{align*}
which satisfies the DCC.
Note that $1-(1-t_i)t_i=(1-t_i)(1-t_i)+t_i$. Because $E_i$ computes both $\mld_P(X,\mathfrak{a}_i^{1-t_i})$ and $\mld_P(X,\mathfrak{a}_i)$, by Lemma \ref{lem:mld}(\ref{itm:mldequal}) one has that
\begin{align*}
\mld_P(X,\mathfrak{a}_i^{1-(1-t_i)t_i})=(1-t_i)\mld_P(X,\mathfrak{a}_i^{1-t_i})+t_i\mld_P(X,\mathfrak{a}_i)=m'-t_i(m'-m)
\end{align*}
which is computed by $E_i$. But then $\mld_P(X,\mathfrak{a}_i^{1-(1-t_i)t_i})$ is strictly increasing. This contradicts (\ref{itm:acc}) for $IT=\{rt\mid r\in I,\ t\in T\}$ since $IT$ satisfies the DCC by Lemma \ref{lem:DCC}.
\end{proof}
\begin{lemma}\label{lem:limtonak}
Let $P\in X$ be the germ of a klt variety. Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, so set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. If $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$, then there exists a positive rational number $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i)$ and satisfies the equality $a_{E_i}(X)=l$.
\end{lemma}
\begin{proof}
Take a divisor $\hat E$ over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a})$. As in Remark \ref{rmk:descend}(\ref{itm:descendE}), replacing $\mathcal{F}$ with a subfamily, one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$. For a component $E_i$ of the fibre of $E_l$ at $s_l(i)\in Z_l$, one may assume that $a_{\hat E}(\hat X,\mathsf{a})=a_{E_i}(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$. Then $E_i$ computes $\mld_P(X,\mathfrak{a}_i)$ and $a_{E_i}(X)$ equals the constant $a_{\hat E}(\hat X)$.
\end{proof}
\begin{theorem}\label{thm:nonpos}
Let $P\in X$ be the germ of a klt variety. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $l$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is not positive, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$ and satisfies the inequality $a_E(X)\le l$.
\end{theorem}
\begin{proof}
Let $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of $\mathbf{R}$-ideals on $X$ such that $\mathfrak{a}_i\in I$ and such that $\mld_P(X,\mathfrak{a}_i)$ is not positive. It is sufficient to show the existence of a positive rational number $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i)$ and satisfies the equality $a_{E_i}(X)=l$.
Write $\mathfrak{a}_i=\prod_{j=1}^{e_i}\mathfrak{a}_{ij}^{r_{ij}}$ so $r_{ij}\in I$. We may assume that every $\mathfrak{a}_{ij}$ is non-trivial. Let $r$ be the minimum of the elements of $I$. Then $\mld_P(X,\mathfrak{a}_i)\le\mld_PX-\sum_{j=1}^{e_i}r_{ij}\le\mld_PX-re_i$. Let $e'$ denote the greatest integer such that $re'\le\mld_PX$. If $(X,\mathfrak{a}_i)$ is lc, then $e_i\le e'$. If $(X,\mathfrak{a}_i)$ is not lc and $e_i>e'$, then we may replace $\mathfrak{a}_i$ with $\mathfrak{a}'_i=\prod_{j=1}^{e'+1}\mathfrak{a}_{ij}^{r_{ij}}$ because every divisor computing $\mld_P(X,\mathfrak{a}'_i)=-\infty$ also computes $\mld_P(X,\mathfrak{a}_i)$. Hence by passing to a subsequence, we may assume that $e_i$ is constant, say $e$, and that $r_{ij}$ is constant, say $r_j$, for each $1\le j\le e$. That is, $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$.
Following Section \ref{sct:limit}, we construct a generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ of $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is an $\mathbf{R}$-ideal on $\hat P\in\hat X$. If $\mld_{\hat P}(\hat X,\mathsf{a})$ were positive, then there would exist a positive real number $t$ such that $(\hat X,\mathsf{a}\hat\mathfrak{m}^t)$ is lc. By Theorem \ref{thm:lct}, $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for infinitely many $i$, which contradicts that $\mld_P(X,\mathfrak{a}_i)$ is not positive. Thus $\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive. Then by Remark \ref{rmk:limit}(\ref{itm:limitresult}), $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily, and the existence of $l$ follows from Lemma \ref{lem:limtonak}.
\end{proof}
The conjectures hold in dimension two.
\begin{theorem}\label{thm:surface}
Conjecture \textup{\ref{cnj:equiv}} holds when $X$ is a klt surface.
\end{theorem}
\begin{proof}
By Theorem \ref{thm:equiv}, it is enough to verify one of the statements. The (\ref{itm:nakamura}) is stated in \cite[Theorem 1.3]{MN16}. Alternatively, one may derive (\ref{itm:acc}) from \cite[Theorem 3.8]{Al94}, or derive (\ref{itm:limit}) from \cite{K13} by replacing $X$ with its minimal resolution.
\end{proof}
Roughly speaking, our former work \cite{K15} asserts a part of the conjectures in dimension three in the case when the minimal log discrepancy is greater than one.
\begin{theorem}\label{thm:grthan1}
Let $P\in X$ be the germ of a smooth threefold. Let $r_1,\ldots,r_e$ be positive real numbers and $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Notation as in Section \textup{\ref{sct:limit}}, so set the generic limit $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ on $\hat P\in\hat X$. Then the pair $(\hat X,\mathsf{a})$ satisfies one of the following cases.
\begin{enumerate}[label=\textup{\arabic*.},ref=\arabic*]
\item\label{cas:case1}
The $\mld_{\hat P}(\hat X,\mathsf{a})$ is not positive.
\item\label{cas:case2}
$(\hat X,\mathsf{a})$ is klt.
\item\label{cas:case3}
$(\hat X,\mathsf{a})$ is lc and has the smallest lc centre which is normal and of dimension two.
\item\label{cas:case4}
$(\hat X,\mathsf{a})$ is lc and has the smallest lc centre which is regular and of dimension one.
\end{enumerate}
Moreover, the following hold.
\begin{enumerate}
\item\label{itm:cases123}
In the cases \textup{\ref{cas:case1}}, \textup{\ref{cas:case2}} and \textup{\ref{cas:case3}}, $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily.
\item\label{itm:case4}
In the case \textup{\ref{cas:case4}}, $\mld_{\hat P}(\hat X,\mathsf{a})$ is at most one.
\end{enumerate}
\end{theorem}
\begin{proof}
The case division follows from the existence of the smallest lc centre \cite[Theorem 1.2]{K15}. The equality in (\ref{itm:cases123}) holds in the cases \textup{\ref{cas:case1}} and \textup{\ref{cas:case2}} by Remark \ref{rmk:limit}(\ref{itm:limitresult}), and in the case \textup{\ref{cas:case3}} by \cite[Theorem 5.3]{K15}. The assertion (\ref{itm:case4}) is \cite[Proposition 6.1]{K15}.
\end{proof}
We reduce Conjecture \ref{cnj:equiv}(\ref{itm:nakamura}) to the case of $\mathbf{Q}$-ideals.
\begin{lemma}\label{lem:rational}
Let $P\in X$ be the germ of a klt variety. Suppose that for any positive integer $n$, there exists a positive integer $l$ depending only on $X$ and $n$ such that if $\mathfrak{a}$ is an ideal on $X$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^{1/n})$ and satisfies the inequality $a_E(X)\le l$. Then Conjecture \textup{\ref{cnj:equiv}} holds for $P\in X$.
\end{lemma}
\begin{proof}
In Conjecture \ref{cnj:equiv}, the assertion (\ref{itm:limit}) follows from (\ref{itm:nakamura}) for $I=\{r_1,\ldots,r_e\}$ by \cite{MN16}. Thus we may assume Conjecture \ref{cnj:equiv}(\ref{itm:limit}) in the case when $e=1$ and $r_1=1/n$ for some positive integer $n$. By Theorem \ref{thm:equiv}, it is enough to derive the full statement of (\ref{itm:limit}) from this special case.
We want the equality $\mld_{\hat P}(\hat X,\mathsf{a})=\mld_P(X,\mathfrak{a}_i)$, where $\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}$ and $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$. We write $m=\mld_{\hat P}(\hat X,\mathsf{a})$ for simplicity. By Remark \ref{rmk:limit}(\ref{itm:limitresult}), we may assume that $m$ is positive. By Theorem \ref{thm:discrete}, the set
\begin{align*}
M=\{\mld_P(X,\mathfrak{a})\mid\mathfrak{a}\in\{r_1,\ldots,r_e\},\ \textrm{$(X,\mathfrak{a})$ lc}\}
\end{align*}
is discrete in $\mathbf{R}$. Thus there exists a real number $m'$ less than $m$ such that $r\not\in M$ for any real number $m'<r<m$.
Since the set
\begin{align*}
Q=\{(q_1,\ldots,q_e)\in(\mathbf{R}_{\ge0})^e\mid\textrm{$(\hat X,{\textstyle\prod_{j=1}^e}\mathsf{a}_j^{q_j})$ lc}\}
\end{align*}
is a rational polytope, the vector $r=(r_1,\ldots,r_e)$ in $Q$ is expressed as $r=\sum_{s\in S}t_sq_s$, where $S$ is a finite set, all $q_s=(q_{1s},\ldots,q_{es})$ belong to $Q\cap\mathbf{Q}^e$, and $t_s$ are positive real numbers such that $\sum_{s\in S}t_s=1$. By choosing $q_s$ close to $r$, we may assume that
\begin{align*}
m'=\mld_{\hat P}(\hat X,\mathsf{a})-(m-m')<\sum_{s\in S}t_s\mld_{\hat P}(\hat X,{\textstyle\prod_{j=1}^e}\mathsf{a}_j^{q_{js}}).
\end{align*}
Write $q_{js}=m_{js}/n$ with positive integers $n$ and $m_{1s},\ldots,m_{es}$ for $s\in S$. Then $\mld_{\hat P}(\hat X,\prod_{j=1}^e\mathsf{a}_j^{q_{js}})=\mld_{\hat P}(\hat X,(\prod_{j=1}^e\mathsf{a}_j^{m_{js}})^{1/n})$ and the ideal $\prod_{j=1}^e\mathsf{a}_j^{m_{js}}$ is the generic limit of the sequence $\{\prod_{j=1}^e\mathfrak{a}_{ij}^{m_{js}}\}_{i\in\mathbf{N}}$ of ideals on $X$. By our assumption, the equality $\mld_{\hat P}(\hat X,(\prod_{j=1}^e\mathsf{a}_j^{m_{js}})^{1/n})=\mld_P(X,(\prod_{j=1}^e\mathfrak{a}_{ij}^{m_{js}})^{1/n})$ holds for any $i\in N_{l_0}$ and $s\in S$ after replacing $\mathcal{F}$ with a subfamily. Hence with Lemma \ref{lem:mld}(\ref{itm:mldconvex}), one has that
\begin{align*}
m'<\sum_{s\in S}t_s\mld_P(X,{\textstyle\prod_{j=1}^e}\mathfrak{a}_{ij}^{q_{js}})\le\mld_P(X,{\textstyle\prod_{j=1}^e}\mathfrak{a}_{ij}^{r_j})=\mld_P(X,\mathfrak{a}_i)\in M,
\end{align*}
which implies that $\mld_P(X,\mathfrak{a}_i)\ge m$ by the property of $m'$. Together with Remark \ref{rmk:limit}(\ref{itm:limitineq}), we obtain the required equality $m=\mld_P(X,\mathfrak{a}_i)$.
\end{proof}
\begin{proposition}\label{prp:mult}
Let $P\in X$ be the germ of a klt variety and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive real number $t$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is positive, then $(X,\mathfrak{a}\mathfrak{m}^t)$ is lc.
\end{proposition}
\begin{proof}
Fix $r_1,\ldots,r_e\in I$ and let $\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$ such that $\mld_P(X,\mathfrak{a}_i)$ is positive. It is enough to show the existence of a positive real number $t$ such that $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for infinitely many indices $i$.
Following Section \ref{sct:limit}, we construct a generic limit $\mathsf{a}$ of $\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ on $\hat P\in\hat X$. Then $\mld_{\hat P}(\hat X,\mathsf{a})$ is positive by Remark \ref{rmk:limit}(\ref{itm:limitineq}), so there exists a positive real number $t$ such that $(\hat X,\mathsf{a}\hat\mathfrak{m}^t)$ is lc for the maximal ideal $\hat\mathfrak{m}$ in $\mathscr{O}_{\hat X}$. By Theorem \ref{thm:lct}, there exists an infinite subset $N_{l_0}$ of $\mathbf{N}$ such that $(X,\mathfrak{a}_i\mathfrak{m}^t)$ is lc for any $i\in N_{l_0}$.
\end{proof}
\begin{corollary}[\cite{MN16}]\label{crl:mult}
Let $P\in X$ be the germ of a klt variety and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a finite subset $I$ of the positive real numbers. Then there exists a positive integer $b$ depending only on $X$ and $I$ such that if $\mathfrak{a}$ is an $\mathbf{R}$-ideal on $X$ satisfying that $\mathfrak{a}\in I$ and that $\mld_P(X,\mathfrak{a})$ is positive, then $\ord_E\mathfrak{m}$ is at most $b$ for every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$.
\end{corollary}
\begin{proof}
Take $t$ in Proposition \ref{prp:mult}. Let $E$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a})$. The log canonicity of $(X,\mathfrak{a}\mathfrak{m}^t)$ implies that
\begin{align*}
\ord_E\mathfrak{m}^t\le a_E(X,\mathfrak{a})=\mld_P(X,\mathfrak{a})\le\mld_PX,
\end{align*}
that is, $\ord_E\mathfrak{m}\le t^{-1}\mld_PX$. The $b=\rd{t^{-1}\mld_PX}$ is a required integer.
\end{proof}
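For instance, on the germ $P\in X$ of a smooth threefold one has $\mld_PX=3$, so the proof yields the explicit estimate
\begin{align*}
\ord_E\mathfrak{m}\le3t^{-1}
\end{align*}
for the $t$ of Proposition \ref{prp:mult}, that is, one may take $b=\rd{3t^{-1}}$.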
\section{Construction of canonical pairs}
The objective of this section is to prove the following theorem.
\begin{theorem}\label{thm:canonical}
Let $P\in X$ be the germ of a smooth threefold. Fix a positive rational number $q$. Then there exist positive integers $l$ and $c$ both of which depend only on $q$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $\mld_P(X,\mathfrak{a}^q)$ is positive, then at least one of the following holds.
\begin{enumerate}
\item\label{itm:bounded}
There exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:reduced}
There exists a birational morphism from the germ $Q\in Y$ of a smooth threefold to the germ $P\in X$ such that
\begin{itemize}
\item
every exceptional prime divisor $F$ on $Y$ satisfies the inequalities $a_F(X)\le c$ and $a_F(X,\mathfrak{a}^q)<1$, by which the pull-back $(Y,\Delta,\mathfrak{a}_Y^q)$ of $(X,\mathfrak{a}^q)$ is defined with an effective $\mathbf{Q}$-divisor $\Delta$,
\item
$\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)=\mld_P(X,\mathfrak{a}^q)$, and
\item
$\mld_Q(Y,\mathfrak{a}_Y^q)$ is at least one.
\end{itemize}
\end{enumerate}
\end{theorem}
We recall a part of the classification of threefold divisorial contractions which will play an important role in our argument.
\begin{definition}
A projective birational morphism $Y\to X$ between $\mathbf{Q}$-factorial terminal varieties is called a \textit{divisorial contraction} if its exceptional locus is a prime divisor.
\end{definition}
\begin{theorem}\label{thm:divisorial}
Let $\pi\colon Y\to X$ be a threefold divisorial contraction which contracts its exceptional divisor to a closed point $P$ in $X$.
\begin{enumerate}
\item\label{itm:kawamata}
\textup{(\cite{Km96})}\;
Suppose that $P$ is a quotient singularity of $X$. The spectrum of the completion of $\mathscr{O}_{X,P}$ is obtained by a regular base change from $\mathbf{A}^3/\mathbf{Z}_r(w,-w,1)$ with orbifold coordinates $x_1,x_2,x_3$, where $w$ is a positive integer less than $r$ and coprime to $r$. Then $\pi$ is base-changed to the weighted blow-up with $\wt(x_1,x_2,x_3)=(w/r,(r-w)/r,1/r)$.
\item\label{itm:kawakita}
\textup{(\cite{K01})}\;
Suppose that $P$ is a smooth point of $X$. Then there exists a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{X,P}$ and coprime positive integers $w_1,w_2$ such that $\pi$ is the weighted blow-up with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$.
\end{enumerate}
\end{theorem}
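For later use we record the log discrepancies of the exceptional divisors in Theorem \ref{thm:divisorial}, which follow from the standard computation for weighted blow-ups: the exceptional divisor $F$ satisfies that $\ord_Fx_i$ equals the weight of $x_i$ and that $a_F(X)$ is the sum of the weights; explicitly,
\begin{align*}
a_F(X)=\frac{w}{r}+\frac{r-w}{r}+\frac{1}{r}=1+\frac{1}{r}
\end{align*}
in the case (\ref{itm:kawamata}), and $a_F(X)=w_1+w_2+1$ in the case (\ref{itm:kawakita}).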
Stepanov proved the ACC for canonical thresholds on smooth threefolds as an application of Theorem \ref{thm:divisorial}(\ref{itm:kawakita}).
\begin{theorem}[\cite{St11}]\label{thm:stepanov}
The set
\begin{align*}
\{t\in\mathbf{Q}_{\ge0}\mid\textup{$P\in X$ a smooth threefold},\ \textup{$\mathfrak{a}$ an ideal},\ \mld_P(X,\mathfrak{a}^t)=1\}
\end{align*}
satisfies the ACC.
\end{theorem}
\begin{proof}
Let $S$ denote the set in the theorem. The original statement \cite[Theorem 1.7]{St11} asserts that the set
\begin{align*}
T=\biggl\{t\in\mathbf{Q}_{\ge0}\;\biggm|
\begin{array}{l}
\textrm{$P\in X$ a smooth threefold},\ \textrm{$D$ an effective divisor},\\
\textrm{$(X,tD)$ canonical but not terminal}
\end{array}
\biggr\}
\end{align*}
satisfies the ACC. It is enough to show that if $t$ is an arbitrary element of $S$, then $t/3$ belongs to $T$. For such $t$, there exists an ideal $\mathfrak{a}$ on the germ $P\in X$ of a smooth threefold such that $\mld_P(X,\mathfrak{a}^t)=1$. Then $t\le\mld_PX=3$ and $t/3$ is an element of $S$ since $\mld_P(X,(\mathfrak{a}^3)^{t/3})=1$. Thus it is sufficient to show that if $t\in S$ is at most one, then $t$ belongs to $T$.
Take a germ $P\in X$ on which $t$ is realised by $\mld_P(X,\mathfrak{a}^t)=1$. Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. By replacing $\mathfrak{a}$ with $\mathfrak{a}+\mathfrak{m}^l$ for a large integer $l$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. We take a log resolution $Y$ of $(X,\mathfrak{a})$. Then $\mathfrak{a}\mathscr{O}_Y=\mathscr{O}_Y(-A)$ for an effective divisor $A$ such that $-A$ is free over $X$. Thus there exists a reduced divisor $H$ linearly equivalent to $-A$ such that $Y$ is also a log resolution of $(X,H_X,\mathfrak{a})$, where $H_X$ is the push-forward of $H$. Then $\mld_P(X,tH_X)=1$ and $(X,tH_X)$ is canonical, meaning that $t\in T$.
\end{proof}
We shall use a consequence of the minimal model program.
\begin{definition}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial terminal variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. A divisorial contraction to $X$ is said to be \textit{crepant} with respect to $(P,X,\mathfrak{a})$ if its exceptional divisor computes $\mld_P(X,\mathfrak{a})$.
\end{definition}
\begin{lemma}\label{lem:crepant}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial terminal variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. Then there exists a divisorial contraction crepant with respect to $(P,X,\mathfrak{a})$.
\end{lemma}
\begin{proof}
By replacing $\mathfrak{a}$ with $\mathfrak{b}$ in Lemma \ref{lem:perturb}, we may assume that $\mathfrak{a}$ is an $\mathfrak{m}$-primary $\mathbf{R}$-ideal, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, such that there exists a unique divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a})$. Then by \cite[Corollary 1.4.3]{BCHM10}, there exists a projective birational morphism $Y\to X$ from a $\mathbf{Q}$-factorial normal variety whose exceptional locus is a prime divisor which coincides with $E$. We may assume that the weak transform $\mathfrak{a}_Y$ on $Y$ of $\mathfrak{a}$ is defined. Then $(Y,\mathfrak{a}_Y)$ is the pull-back of $(X,\mathfrak{a})$, and it is terminal by the uniqueness of $E$. In particular $Y$ itself is terminal, so $Y\to X$ is a required contraction.
\end{proof}
\begin{lemma}\label{lem:perturb}
Let $P\in X$ be the germ of a $\mathbf{Q}$-factorial klt variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $(X,\mathfrak{a})$ is lc. Then there exists an $\mathbf{R}$-ideal $\mathfrak{b}$ such that
\begin{itemize}
\item
$\mathfrak{b}$ is $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$,
\item
$\mld_P(X,\mathfrak{a})=\mld_P(X,\mathfrak{b})$,
\item
there exists a unique divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{b})$, and
\item
$E$ also computes $\mld_P(X,\mathfrak{a})$.
\end{itemize}
\end{lemma}
\begin{proof}
Writing $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$, if we take a large integer $l$, then $\mathfrak{a}'=\prod_j(\mathfrak{a}_j+\mathfrak{m}^l)^{r_j}$ satisfies that $\mld_P(X,\mathfrak{a}')=\mld_P(X,\mathfrak{a})$ and any divisor computing $\mld_P(X,\mathfrak{a}')$ also computes $\mld_P(X,\mathfrak{a})$. By replacing $\mathfrak{a}$ with $\mathfrak{a}'$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary.
Let $Y$ be a log resolution of $(X,\mathfrak{a})$ and $\{E_i\}_{i\in I}$ be the set of the exceptional prime divisors on $Y$ contracting to the point $P$. Let $A$ be the $\mathbf{R}$-divisor on $Y$ defined by $\mathfrak{a}\mathscr{O}_Y$ and $I'$ be the subset of $I$ consisting of the indices $i$ such that $E_i$ computes $\mld_P(X,\mathfrak{a})$. There exists an effective exceptional divisor $F$ such that $-F$ is very ample and such that the minimum $m$ of $\{\ord_{E_i}A/\ord_{E_i}F\}_{i\in I'}$ is attained by only one index, say $i_0\in I'$. One can take a small positive real number $\epsilon$ such that $b_i=a_{E_i}(X,\mathfrak{a})+\epsilon(\ord_{E_i}A-m\ord_{E_i}F)$ is greater than $\mld_P(X,\mathfrak{a})$ for any $i\in I\setminus\{i_0\}$. Note that $b_{i_0}$ remains equal to $\mld_P(X,\mathfrak{a})$.
Let $\mathfrak{c}$ be the ideal on $X$ given by the push-forward of $\mathscr{O}_Y(-F)$ and set the $\mathbf{R}$-ideal $\mathfrak{b}=\mathfrak{a}^{1-\epsilon}\mathfrak{c}^{\epsilon m}$. Possibly replacing $\epsilon$ with a smaller real number, we may assume that $(X,\mathfrak{b})$ is lc. The variety $Y$ is also a log resolution of $(X,\mathfrak{b})$, and $a_{E_i}(X,\mathfrak{b})=b_i$ for any $i\in I$. Thus $\mathfrak{b}$ satisfies all the required properties except for being $\mathfrak{m}$-primary. However, one can replace $\mathfrak{b}$ with an $\mathfrak{m}$-primary $\mathbf{R}$-ideal just by the argument of constructing $\mathfrak{a}'$ from $\mathfrak{a}$.
\end{proof}
We consider the following algorithm in order to prove Theorem \ref{thm:canonical}.
\begin{algorithm}\label{alg:canonical}
Let $q$ be a positive rational number. Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{a}$ be an ideal on $X$ such that $(X,\mathfrak{a}^q)$ is lc. Let $E$ be a divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$.
\begin{enumerate}[indented,label=\texttt{\arabic*}.,ref=\texttt{\arabic*}]
\item
Start with $X_0=X$.
\item\label{prc:initial}
Suppose that $X_i$ is given, which has only terminal quotient singularities.
\item\label{prc:notpoint}
If the centre $c_{X_i}(E)$ on $X_i$ of $E$ is of positive dimension, then output $X_i$.
\item\label{prc:point}
Suppose that $c_{X_i}(E)$ is a closed point, which will be denoted by $P_i$. Let $r_i$ be the index of the germ $P_i\in X_i$. One can define the weak transform $\mathfrak{b}_i$ on $P_i\in X_i$ of $\mathfrak{a}^{r_i}$ and let $\mathfrak{a}_i$ be the $\mathbf{Q}$-ideal $\mathfrak{b}_i^{1/r_i}$. The pair $(X_i,\mathfrak{a}_i^q)$ is lc at $P_i$.
\item\label{prc:smoothout}
If $P_i$ is a smooth point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)\ge1$, then output $X_i$.
\item
If $P_i$ is a smooth point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)<1$, then go to \ref{prc:iplus1}.
\item\label{prc:singout}
If $P_i$ is a singular point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)>1$, then output $X_i$.
\item
If $P_i$ is a singular point of $X_i$ and $\mld_{P_i}(X_i,\mathfrak{a}_i^q)\le1$, then go to \ref{prc:iplus1}.
\item\label{prc:iplus1}
Let $q_i$ be the positive rational number such that $\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$. Fix a divisorial contraction $X_{i+1}\to X_i$ crepant with respect to $(P_i,X_i,\mathfrak{a}_i^{q_i})$ by Lemma \ref{lem:crepant}. Go back to \ref{prc:initial} and proceed with $X_{i+1}$ instead of $X_i$.
\end{enumerate}
In this algorithm, we fix the notation that $F_i$ is the exceptional divisor of $X_{i+1}\to X_i$ and that $F_j^i$ is the strict transform on $X_i$ of $F_j$ for $j<i$, and we set $\Delta_i=\sum_{j=0}^{i-1}(1-a_{F_j}(X,\mathfrak{a}^q))F_j^i$ and $S_i=\sum_{j=0}^{i-1}F_j^i$.
\end{algorithm}
\begin{remark}
By the very definition,
\begin{enumerate}
\setlength{\itemindent}{25pt}\item
$q_i\le q$, and $q_i<q$ when $q_i$ is defined at a smooth point $P_i$ of $X_i$, and
\item
$(X_i,\Delta_i,\mathfrak{a}_i^q)$ is crepant to $(X,\mathfrak{a}^q)$.
\end{enumerate}
\end{remark}
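For instance, one possible run of Algorithm \ref{alg:canonical}, recorded only as an illustration, is the following. Let $P\in X$ be the germ at the origin of $\mathbf{A}^3$, $q=1/2$ and $\mathfrak{a}=\mathfrak{m}^5$ for the maximal ideal $\mathfrak{m}$ defining $P$. The exceptional divisor $E$ of the blow-up at $P$ computes $\mld_P(X,\mathfrak{a}^q)=3-5/2=1/2$. At $i=0$ the centre of $E$ is the smooth point $P_0=P$, $r_0=1$, $\mathfrak{a}_0=\mathfrak{a}$ and $\mld_{P_0}(X_0,\mathfrak{a}_0^q)=1/2<1$, so the algorithm passes to the process \ref{prc:iplus1}, where $q_0=2/5$ is determined by
\begin{align*}
\mld_{P_0}(X_0,\mathfrak{a}_0^{q_0})=3-5q_0=1,
\end{align*}
and one may take the blow-up $X_1\to X_0$ at $P_0$ as the crepant divisorial contraction, so that $F_0=E$. Then the centre of $E$ on $X_1$ is the divisor $F_0$ itself, and the algorithm outputs $X_1$ at the process \ref{prc:notpoint}, with $\Delta_1=\frac{1}{2}F_0$ and $S_1=F_0$.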
In order to run Algorithm \ref{alg:canonical}, we need to verify that
\begin{itemize}
\item
$X_i$ has at worst quotient singularities in the process \ref{prc:initial}, and
\item
$(X_i,\mathfrak{a}_i^q)$ is lc at $P_i$ in the process \ref{prc:point},
\end{itemize}
besides the termination. The first claim follows from Theorem \ref{thm:divisorial}. For the second claim, let $b_i=1-a_{F_i}(X_i,\mathfrak{a}_i^q)$. Then $(X_{i+1},b_iF_i,\mathfrak{a}_{i+1}^q)$ is crepant to $(X_i,\mathfrak{a}_i^q)$ and $b_iF_i$ is effective since $a_{F_i}(X_i,\mathfrak{a}_i^q)\le a_{F_i}(X_i,\mathfrak{a}_i^{q_i})=1$ by $q_i\le q$. Thus the log canonicity of $(X_i,\mathfrak{a}_i^q)$ follows from that of $(X,\mathfrak{a}^q)$ inductively. Hence Algorithm \ref{alg:canonical} runs up to the termination.
We prepare several basic properties of the algorithm before completing its termination in Proposition \ref{prp:termination}.
\begin{lemma}\label{lem:algorithm}
The following hold in Algorithm \textup{\ref{alg:canonical}}.
\begin{enumerate}
\item\label{itm:nondecr}
The $q_i$ form a non-decreasing sequence.
\item\label{itm:atmost1}
$a_{F_i}(X,\mathfrak{a}^{q_i})\le1$.
\item\label{itm:lessthan1}
$a_{F_i}(X,\mathfrak{a}^q)<1$.
\item\label{itm:easybound}
If $q_i\neq q$, then $a_{F_i}(X)\le q(q-q_i)^{-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $(X_{i+1},\mathfrak{a}_{i+1}^{q_i})$ is crepant to $(X_i,\mathfrak{a}_i^{q_i})$, one has that $\mld_{P_{i+1}}(X_{i+1},\mathfrak{a}_{i+1}^{q_i})\ge\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$. Thus $q_i\le q_{i+1}$, which shows (\ref{itm:nondecr}).
For $j<i$, let $b_{ij}=1-a_{F_j}(X_j,\mathfrak{a}_j^{q_i})$. Then $(X_{j+1},b_{ij}F_j,\mathfrak{a}_{j+1}^{q_i})$ is crepant to $(X_j,\mathfrak{a}_j^{q_i})$ and $b_{ij}F_j$ is effective by $q_j\le q_i$ in (\ref{itm:nondecr}). Thus one has the inequality $a_{F_i}(X_j,\mathfrak{a}_j^{q_i})\le a_{F_i}(X_{j+1},\mathfrak{a}_{j+1}^{q_i})$ and inductively $a_{F_i}(X,\mathfrak{a}^{q_i})\le a_{F_i}(X_i,\mathfrak{a}_i^{q_i})=1$, which is (\ref{itm:atmost1}).
The (\ref{itm:lessthan1}) follows from (\ref{itm:atmost1}) unless $q_i=q$. If $q_i=q$, then $q_i$ is defined at a singular point $P_i$ of $X_i$, so $q_0<q$. Let $j$ be the greatest integer such that $q_j<q$. Then $(X_i,\mathfrak{a}_i^q)$ is crepant to $(X_{j+1},\mathfrak{a}_{j+1}^q)$. On the other hand, $(X_{j+1},\Delta_{j+1},\mathfrak{a}_{j+1}^q)$ is crepant to $(X,\mathfrak{a}^q)$. By (\ref{itm:atmost1}), $\Delta_{j+1}$ is effective and its support coincides with $S_{j+1}$. Thus one has that $a_{F_i}(X,\mathfrak{a}^q)=a_{F_i}(X_{j+1},\Delta_{j+1},\mathfrak{a}_{j+1}^q)<a_{F_i}(X_{j+1},\mathfrak{a}_{j+1}^q)=a_{F_i}(X_i,\mathfrak{a}_i^q)=1$.
To see (\ref{itm:easybound}), suppose that $q_i<q$. By (\ref{itm:atmost1}) and $a_{F_i}(X,\mathfrak{a}^q)\ge0$, one computes that
\begin{align*}
a_{F_i}(X)=a_{F_i}(X,\mathfrak{a}^{q_i})+q_i\ord_{F_i}\mathfrak{a}&\le1+q_i\ord_{F_i}\mathfrak{a}\\
&=1+q_i(q-q_i)^{-1}(a_{F_i}(X,\mathfrak{a}^{q_i})-a_{F_i}(X,\mathfrak{a}^q))\\
&\le1+q_i(q-q_i)^{-1}=q(q-q_i)^{-1}.
\end{align*}
\end{proof}
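For instance, if $q=1$ and $q_i\le1/2$, then Lemma \ref{lem:algorithm}(\ref{itm:easybound}) gives
\begin{align*}
a_{F_i}(X)\le\frac{q}{q-q_i}\le2.
\end{align*}
The bound deteriorates only as $q_i$ approaches $q$, which the next lemma rules out for the $q_i$ defined at smooth points.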
\begin{lemma}\label{lem:e}
Fix a positive rational number $q$. Then there exists a positive rational number $\epsilon$ depending only on $q$ such that every $q_i$ defined at a smooth point $P_i$ of $X_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $q_i\le q-\epsilon$.
\end{lemma}
\begin{proof}
It follows from Theorem \ref{thm:stepanov}. Indeed, a $q_i$ defined at a smooth point $P_i$ of $X_i$ satisfies $r_i=1$, so $\mathfrak{a}_i$ is an ideal on the germ $P_i\in X_i$ of a smooth threefold and $\mld_{P_i}(X_i,\mathfrak{a}_i^{q_i})=1$; hence $q_i$ belongs to the set in Theorem \ref{thm:stepanov} and satisfies $q_i<q$. By the ACC, the elements of this set less than $q$ cannot accumulate at $q$, so they are at most $q-\epsilon$ for some positive rational number $\epsilon$ depending only on $q$.
\end{proof}
\begin{proposition}\label{prp:termination}
Algorithm \textup{\ref{alg:canonical}} terminates.
\end{proposition}
\begin{proof}
Take $\epsilon$ in Lemma \ref{lem:e}. Then every divisor $F_i$ defined over a smooth point $P_i$ of $X_i$ satisfies that $a_{F_i}(X,\mathfrak{a}^{q-\epsilon})\le a_{F_i}(X,\mathfrak{a}^{q_i})\le1$ by Lemma \ref{lem:algorithm}(\ref{itm:atmost1}). The number of such $F_i$ is finite because $(X,\mathfrak{a}^{q-\epsilon})$ is klt. In particular, there exists an integer $e$, depending on $(X,\mathfrak{a}^q)$ and $E$, such that $P_i$ is a singular point of $X_i$ for any $i>e$. By Theorem \ref{thm:divisorial}(\ref{itm:kawamata}), the $r_i$ for $i>e$ form a strictly decreasing sequence. Hence the algorithm must terminate.
\end{proof}
One can also bound $r_i$ and $a_{F_i}(X)$.
\begin{lemma}\label{lem:r}
Fix a positive rational number $q$. Then there exists a positive integer $r$ depending only on $q$ such that every $r_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $r_i\le r$.
\end{lemma}
\begin{proof}
Take $\epsilon$ in Lemma \ref{lem:e}. We shall show that any positive integer $r$ at least $q\epsilon^{-1}-2$ satisfies the required property. The $r_0$ is one. By Theorem \ref{thm:divisorial}, the $r_{i+1}$ satisfies that $r_{i+1}<r_i$ when $P_i$ is a singular point of $X_i$ and that $r_{i+1}\le a_{F_i}(X_i)-2$ when $P_i$ is a smooth point of $X_i$. Thus it is enough to show that $a_{F_i}(X_i)\le q\epsilon^{-1}$ when $P_i$ is a smooth point of $X_i$. Since
\begin{align*}
a_{F_i}(X_i)\le a_{F_i}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{F_i}F_j^i=a_{F_i}(X),
\end{align*}
the required inequality follows from Lemma \ref{lem:algorithm}(\ref{itm:easybound}).
\end{proof}
\begin{lemma}\label{lem:c}
Fix a positive rational number $q$. Then there exists a positive integer $c$ depending only on $q$ such that every $F_i$ in Algorithm \textup{\ref{alg:canonical}} satisfies the inequality $a_{F_i}(X)\le c$.
\end{lemma}
\begin{proof}
Take a positive integer $n$ such that $nq$ is integral, and take $\epsilon$ in Lemma \ref{lem:e} and $r$ in Lemma \ref{lem:r}. Fix a positive integer $c_0$ at least $q\epsilon^{-1}$ and define positive integers $c_1,\ldots,c_r$ inductively by the recurrence relation
\begin{align*}
c_{j+1}=2+(c_j-1)n
\end{align*}
for $0\le j<r$. We shall prove that $c_r$ is a required constant.
Let $e$ be the greatest integer such that $q_e$ is defined at a smooth point $P_e$ of $X_e$. Then by Lemmata \ref{lem:algorithm}(\ref{itm:nondecr}), (\ref{itm:easybound}) and \ref{lem:e}, the estimate $a_{F_i}(X)\le c_0$ holds for any $i\le e$, and by Lemma \ref{lem:r}, if the algorithm defines $P_{e+1}\in X_{e+1}$, then $r_{e+1}\le r$. In particular, the algorithm terminates with the output $X_{e+r'}$ for some $r'\le r$ by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}). Thus it is enough to show that $a_{F_{e+j}}(X)\le c_j$ for any $j\le r$ as far as $F_{e+j}$ is defined. This is reduced to proving that if $F_i$ is defined at a singular point $P_i$ of $X_i$ and if $a_{F_j}(X)$ is bounded from above by a positive integer $c'$ for all $j<i$, then $a_{F_i}(X)$ is at most $2+(c'-1)n$.
Suppose that $F_i$ and $c'$ are given as above. By Lemma \ref{lem:algorithm}(\ref{itm:lessthan1}), the $\mathbf{Q}$-divisor $\Delta_i$ satisfies that $S_i\le n\Delta_i$. Since $(X_i,\Delta_i,\mathfrak{a}_i^q)$ is crepant to $(X,\mathfrak{a}^q)$, one has that
\begin{align*}
\ord_{F_i}S_i\le n\ord_{F_i}\Delta_i&=n(a_{F_i}(X_i,\mathfrak{a}_i^q)-a_{F_i}(X_i,\Delta_i,\mathfrak{a}_i^q))\\
&\le n(a_{F_i}(X_i,\mathfrak{a}_i^{q_i})-a_{F_i}(X,\mathfrak{a}^q))=n(1-a_{F_i}(X,\mathfrak{a}^q))\le n,
\end{align*}
where the second inequality follows from $q_i\le q$. Together with $a_{F_i}(X_i)=1+1/r_i$ by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}), one computes that
\begin{align*}
a_{F_i}(X)&=a_{F_i}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{F_i}F_j^i\\
&\le1+\frac{1}{r_i}+(c'-1)\ord_{F_i}S_i<2+(c'-1)n.
\end{align*}
\end{proof}
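For instance, when $n=1$ the recurrence reads $c_{j+1}=c_j+1$, while for $n=2$ it reads $c_{j+1}=2c_j$, so that
\begin{align*}
c_r=c_0+r\quad(n=1),\qquad c_r=2^rc_0\quad(n=2);
\end{align*}
in any case the constant $c=c_r$ depends only on $q$, via $n$, $\epsilon$, $r$ and $c_0$.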
In order to control the log discrepancy of a divisor computing $\mld_P(X,\mathfrak{a}^q)$, we need the extra assumption that $\mld_P(X,\mathfrak{a}^q)$ is positive.
\begin{lemma}\label{lem:process37}
Fix a positive rational number $q$. Then there exists a positive integer $l$ depending only on $q$ such that in Algorithm \textup{\ref{alg:canonical}}, if $\mld_P(X,\mathfrak{a}^q)$ is positive and if the algorithm terminates at the process \textup{\ref{prc:notpoint}} or \textup{\ref{prc:singout}}, then there exists a divisor $E'$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$ and satisfies the inequality $a_{E'}(X)\le l$.
\end{lemma}
\begin{proof}
\medskip
\textit{Step} 1.
We take $c$ in Lemma \ref{lem:c}. Let $\eta$ be a positive rational number such that there exist no integers $a$ satisfying that $q<1/a<q+\eta$. Let $n$ be a positive integer such that $nq$ is integral. Since Conjecture \ref{cnj:equiv}(\ref{itm:nakamura}) holds in dimension two by Theorem \ref{thm:surface}, there exists a positive integer $l'$ depending only on $n$ such that if $Q\in H$ is the germ of a smooth surface and $\mathfrak{a}_H$ is an ideal on $H$, then there exists a divisor $E_H$ over $H$ which computes $\mld_Q(H,\mathfrak{a}_H^{1/n})$ and satisfies the inequality $a_{E_H}(H)\le l'$.
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $E'$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$. By Corollary \ref{crl:mult}, there exists a positive integer $b$ depending only on $q$ such that $\ord_{E'}\mathfrak{m}\le b$. Note that $b$ can be taken independent of the germ $P\in X$ of a smooth threefold. Indeed, $E'$ also computes $\mld_P(X,\mathfrak{a}'^q)$ for the $\mathfrak{m}$-primary ideal $\mathfrak{a}'=\mathfrak{a}+\mathfrak{m}^e$ as long as a positive integer $e$ satisfies that $\ord_{E'}\mathfrak{a}\le e\ord_{E'}\mathfrak{m}$. Thus by Lemma \ref{lem:regular}, one can take the $b$ on the germ at the origin of the affine space $\mathbf{A}^3$.
For any $i$, one has the estimate $\ord_{E'}S_i\le\ord_{E'}\mathfrak{m}$ because $\mathfrak{m}^r\mathscr{O}_{X_i}$ is contained in $\mathscr{O}_{X_i}(-rS_i)$ for a positive integer $r$ such that $rS_i$ is Cartier. Hence,
\begin{align*}
\ord_{E'}S_i\le b.
\end{align*}
Supposing that the algorithm terminates at the process \ref{prc:notpoint} or \ref{prc:singout}, we shall bound the log discrepancy of some divisor which computes $\mld_P(X,\mathfrak{a}^q)$ in terms of $q$, $c$, $\eta$, $l'$ and $b$.
\medskip
\textit{Step} 2.
Suppose that the algorithm terminates at the process \ref{prc:notpoint} and outputs $X_i$. Then the centre $c_E(X_i)$ on $X_i$ of $E$ is either a divisor or a curve. If it is a divisor, then $E=F_{i-1}$ and it computes $\mld_P(X,\mathfrak{a}^q)$. By Lemma \ref{lem:c}, $F_{i-1}$ satisfies that
\begin{align*}
a_{F_{i-1}}(X)\le c.
\end{align*}
Suppose that $c_E(X_i)$ is a curve $C$. Let $H$ be a general hyperplane section of $X_i$ and $Q$ be a closed point in $H\cap C$. Considering a log resolution, one has that
\begin{align*}
\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)=\mld_{\eta_C}(X_i,\Delta_i,\mathfrak{a}_i^q)=\mld_P(X,\mathfrak{a}^q),
\end{align*}
where the second equality holds since $E$ computes $\mld_P(X,\mathfrak{a}^q)$. Moreover by the expression $\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)=\mld_Q(H,(\mathfrak{a}_i^{nq}\mathscr{O}_H(-n\Delta_i|_H))^{1/n})$, there exists a divisor $E'$ over $X_i$ with $c_{X_i}(E')=C$ such that an irreducible component $E'_H$ of $E'\times_{X_i}H$ mapped to $Q$ computes $\mld_Q(H,\Delta_i|_H,(\mathfrak{a}_i\mathscr{O}_H)^q)$ and satisfies the inequality $a_{E'_H}(H)\le l'$. The $E'$ computes $\mld_P(X,\mathfrak{a}^q)$ as well as $\mld_{\eta_C}(X_i,\Delta_i,\mathfrak{a}_i^q)$, and $a_{E'}(X_i)=a_{E'_H}(H)\le l'$. Therefore,
\begin{align*}
a_{E'}(X)=a_{E'}(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_{E'}F_j^i\le l'+(c-1)\ord_{E'}S_i\le l'+(c-1)b,
\end{align*}
where the last inequality follows from Step 1.
\medskip
\textit{Step} 3.
Suppose that the algorithm terminates at the process \ref{prc:singout} and outputs $X_i$. Let $\mathfrak{n}$ be the maximal ideal in $\mathscr{O}_{X_i}$ defining $P_i$. Recall that $\mathfrak{a}_i=\mathfrak{b}_i^{1/r_i}$. Set $\mathfrak{b}'_i=\mathfrak{b}_i+\mathfrak{n}^e$ for a positive integer $e$ such that $\ord_E\mathfrak{b}_i\le e\ord_E\mathfrak{n}$. Since $\mld_{P_i}(X_i,\mathfrak{a}_i^q)$ is greater than one, there exists a rational number $q'$ greater than $q$ such that $\mld_{P_i}(X_i,(\mathfrak{b}'_i)^{q'/r_i})=1$. There exists a divisorial contraction to $X_i$ crepant to $(P_i,X_i,(\mathfrak{b}'_i)^{q'/r_i})$ by Lemma \ref{lem:crepant} and it is uniquely determined by Theorem \ref{thm:divisorial}(\ref{itm:kawamata}). Its exceptional divisor $F$ satisfies that $a_F(X_i)=1+1/r_i$. Thus
\begin{align*}
q'\ord_F\mathfrak{b}'_i=r_i(a_F(X_i)-a_F(X_i,(\mathfrak{b}'_i)^{q'/r_i}))=r_i(a_F(X_i)-1)=1,
\end{align*}
which derives that $q'$ is at least $q+\eta$ by the definition of $\eta$. In particular, the pair $(X_i,(\mathfrak{b}'_i)^{(q+\eta)/r_i})$ is canonical so $a_E(X_i,\mathfrak{a}_i^{q+\eta})=a_E(X_i,(\mathfrak{b}'_i)^{(q+\eta)/r_i})\ge1$. Hence one computes that
\begin{align*}
a_E(X_i)&=a_E(X_i,\mathfrak{a}_i^q)+q\ord_E\mathfrak{a}_i\\
&=a_E(X_i,\mathfrak{a}_i^q)+q\eta^{-1}(a_E(X_i,\mathfrak{a}_i^q)-a_E(X_i,\mathfrak{a}_i^{q+\eta}))\\
&\le(1+q\eta^{-1})a_E(X_i,\mathfrak{a}_i^q)-q\eta^{-1}\\
&=(1+q\eta^{-1})(a_E(X_i,\Delta_i,\mathfrak{a}_i^q)+\ord_E\Delta_i)-q\eta^{-1}\\
&\le(1+q\eta^{-1})(a_E(X,\mathfrak{a}^q)+\ord_ES_i)-q\eta^{-1}
\end{align*}
and
\begin{align*}
a_E(X)&=a_E(X_i)+\sum_{j=0}^{i-1}(a_{F_j}(X)-1)\ord_EF_j^i\\
&\le(1+q\eta^{-1})(a_E(X,\mathfrak{a}^q)+\ord_ES_i)-q\eta^{-1}+(c-1)\ord_ES_i.
\end{align*}
Together with $a_E(X,\mathfrak{a}^q)=\mld_P(X,\mathfrak{a}^q)\le3$ and $\ord_ES_i\le b$ in Step 1, one concludes that
\begin{align*}
a_E(X)\le(1+q\eta^{-1})(3+b)-q\eta^{-1}+(c-1)b.
\end{align*}
\medskip
\textit{Step} 4.
By Steps 2 and 3, any integer $l$ at least the maximum of $c$, $l'+(c-1)b$ and $(1+q\eta^{-1})(3+b)-q\eta^{-1}+(c-1)b$ satisfies the required property.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:canonical}}]
We shall verify that the $l$ in Lemma \ref{lem:process37} and $c$ in Lemma \ref{lem:c} satisfy the assertion. Let $\mathfrak{a}$ be an ideal on $X$ such that $\mld_P(X,\mathfrak{a}^q)$ is positive. Run Algorithm \ref{alg:canonical} which terminates by Proposition \ref{prp:termination}. If the algorithm terminates at the process \ref{prc:notpoint} or \ref{prc:singout}, then the property (\ref{itm:bounded}) holds by Lemma \ref{lem:process37}. If it terminates at the process \ref{prc:smoothout}, then let $Q\in Y$ be the output $P_i\in X_i$. The $Q\in Y$ satisfies the property (\ref{itm:reduced}) by Lemmata \ref{lem:algorithm}(\ref{itm:lessthan1}) and \ref{lem:c}.
\end{proof}
\section{Extraction by weighted blow-ups}
Recall the classification of divisors over a smooth surface computing the minimal log discrepancy.
\begin{theorem}[\cite{K17}]\label{thm:mldwbu}
Let $P\in X$ be the germ of a smooth surface and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$.
\begin{enumerate}
\item
If $(X,\mathfrak{a})$ is lc, then every divisor computing $\mld_P(X,\mathfrak{a})$ is obtained by a weighted blow-up.
\item
If $(X,\mathfrak{a})$ is not lc, then some divisor computing $\mld_P(X,\mathfrak{a})$ is obtained by a weighted blow-up.
\end{enumerate}
\end{theorem}
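For instance, one can check that for the germ $P\in\mathbf{A}^2$ at origin with coordinates $x_1,x_2$ and $\mathfrak{a}=(x_1^2+x_2^3)^{5/6}$, the pair $(\mathbf{A}^2,\mathfrak{a})$ is lc, $\mld_P(\mathbf{A}^2,\mathfrak{a})=0$, and it is computed by the divisor obtained by the weighted blow-up with $\wt(x_1,x_2)=(3,2)$, whose log discrepancy is $3+2-\frac{5}{6}\cdot6=0$.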
We want to apply this theorem in order to extract, by a weighted blow-up, a divisor over a smooth threefold whose centre is a curve and which computes the lc threshold. In order to use such an extraction in the study of the generic limit of ideals, we need to formulate it for $K$-varieties. We let $K$ be a field of characteristic zero throughout this section. The purpose of this section is to prove
\begin{theorem}\label{thm:wbu}
Let $X$ be the spectrum of the ring of formal power series in three variables over $K$ and $P$ be its closed point. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that
\begin{itemize}
\item
$\mld_P(X,\mathfrak{a})$ equals one, and
\item
$(X,\mathfrak{a})$ has an lc centre $C$ of dimension one.
\end{itemize}
Then there exist a divisor $E$ over $X$ computing $\mld_{\eta_C}(X,\mathfrak{a})$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $E$ is obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$.
\end{theorem}
When a divisor over a smooth variety is given, we often realise it by a finite sequence of blow-ups.
\begin{definition}\label{dfn:tower}
Let $X$ be a smooth variety and $E$ be a divisor over $X$ whose centre $Z$ on $X$ has codimension at least two in $X$. A \textit{tower} on $X$ with respect to $E$ is a finite sequence of projective birational morphisms $X_{i+1}\to X_i$ of smooth varieties for $0\le i<l$ such that
\begin{itemize}
\item
$X_0=X$ and $Z_0=Z$,
\item
$X_{i+1}$ is, about $\eta_{Z_i}$, the blow-up of $X_i$ along $Z_i$,
\item
$E_i$ is the exceptional prime divisor on $X_{i+1}$ contracting onto $Z_i$,
\item
$Z_{i+1}$ is the centre on $X_{i+1}$ of $E$, and
\item
$E_{l-1}=E$.
\end{itemize}
A tower is called the \textit{regular tower} if for any $i<l$, the centre $Z_i$ is smooth and $X_{i+1}$ is globally the blow-up of $X_i$ along $Z_i$. Note that the regular tower is uniquely determined by $E$ if it exists.
\end{definition}
\begin{remark}\label{rmk:toric}
Let $P\in X$ be the germ of a smooth variety. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_X$ and $E$ be the divisor obtained by the weighted blow-up of $X$ with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$, where $c$ is at least two. Then one can see that the regular tower on $X$ with respect to $E$ exists in terms of toric geometry. Following the notation in \cite{I14}, set $N=\mathbf{Z}^d$ with the standard basis $e_1,\ldots,e_d$ for $d=\dim X$. One may assume that $w=(w_1,\ldots,w_c,0,\ldots,0)$ is primitive in $N$. Construct a finite sequence of fans $(N,\Delta_i)$ for $0\le i\le l$ such that
\begin{itemize}
\item
$I_i=\{e_1,\ldots,e_d\}\cup\{v_1,\ldots,v_i\}$,
\item
$\Delta_i$ is the set of all cones spanned by a subset of $I_i$,
\item
$J_i$ is the smallest subset of $I_i$ such that $w$ belongs to the cone spanned by $J_i$,
\item
$v_{i+1}=\sum_{v\in J_i}v$, and
\item
$J_i\neq\{w\}$ for $i<l$ and $J_l=\{w\}$.
\end{itemize}
Set the toric variety $T_i=T_N(\Delta_i)$ and let $E_i^T$ be the exceptional divisor of $T_{i+1}\to T_i$. Then $X$ has an \'etale morphism to $T_0$ by corresponding $e_i$ to $x_i$. The base changes of $T_i$ to $X$ form the regular tower on $X$ with respect to $E$, and every $E_i=E_i^T\times_{T_0}X$ is obtained by a weighted blow-up of $X$.
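For instance, let $d=c=2$ and $w=(3,2)$. One can check that the construction yields $v_1=e_1+e_2=(1,1)$, $v_2=e_1+v_1=(2,1)$ and $v_3=v_1+v_2=(3,2)=w$, so $l=3$, and the divisors $E_0$, $E_1$ and $E_2=E$ are obtained by the weighted blow-ups of $X$ with $\wt(x_1,x_2)=(1,1)$, $(2,1)$ and $(3,2)$ respectively.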
\end{remark}
We collect basic properties of the log discrepancies in a tower, which were essentially given in \cite[Proposition 6]{K17}.
\begin{lemma}\label{lem:tower}
Notation as in Definition \textup{\ref{dfn:tower}}. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ and $\mathfrak{a}_i$ be its weak transform on $X_i$. Set $a_i=a_{E_i}(X,\mathfrak{a})$.
\begin{enumerate}
\item\label{itm:order}
The $\ord_{Z_i}\mathfrak{a}_i$ form a non-increasing sequence.
\item\label{itm:order1}
If $\ord_Z\mathfrak{a}\le1$, then $a_i\ge1$ and the $a_i$ form a non-decreasing sequence.
\item\label{itm:ordlthan1}
If $\ord_Z\mathfrak{a}<1$, then $a_i>1$ and the $a_i$ form a strictly increasing sequence.
\end{enumerate}
\end{lemma}
\begin{proof}
Take a subvariety $V_{i+1}$ of $Z_{i+1}$ such that $V_{i+1}\to Z_i$ is finite and dominant. Then $\ord_{Z_{i+1}}\mathfrak{a}_{i+1}\le\ord_{V_{i+1}}\mathfrak{a}_{i+1}\le\ord_{Z_i}\mathfrak{a}_i$ by \cite[III Lemmata 7 and 8]{H64}, which is (\ref{itm:order}).
The assertion (\ref{itm:order1}) is reduced to (\ref{itm:ordlthan1}) because $a_i$ is the limit of $a_{E_i}(X,\mathfrak{a}^{1-\epsilon})$ when $\epsilon$ decreases to zero. Suppose that $\ord_Z\mathfrak{a}<1$ in order to see (\ref{itm:ordlthan1}). Then so are $\ord_{Z_i}\mathfrak{a}_i$ by (\ref{itm:order}). In particular, $a_{E_i}(X_i,\mathfrak{a}_i)=a_{E_i}(X_i)-\ord_{Z_i}\mathfrak{a}_i>1$. Since $(X_i,\sum_{j=0}^{i-1}(1-a_j)E_j^i,\mathfrak{a}_i)$ is crepant to $(X,\mathfrak{a})$, where $E_j^i$ is the strict transform of $E_j$, one computes that
\begin{align*}
a_i=a_{E_i}(X_i,\mathfrak{a}_i)+\sum_{j=0}^{i-1}(a_j-1)\ord_{E_i}E_j^i>1+\sum_{j=0}^{i-1}(a_j-1)\ord_{E_i}E_j^i.
\end{align*}
This derives that $a_i>1$ by induction, and then derives that $a_i>a_{i-1}$ again by induction.
\end{proof}
\begin{proposition}\label{prp:tower}
Let $P\in X$ be the germ of a smooth variety and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Let $E$ be the divisor obtained by the blow-up of $X$ at $P$.
\begin{enumerate}
\item\label{itm:Ecomp}
If $\ord_P\mathfrak{a}\le1$, then $E$ computes $\mld_P(X,\mathfrak{a})$.
\item
If $\ord_P\mathfrak{a}<1$, then $E$ is the unique divisor computing $\mld_P(X,\mathfrak{a})$.
\end{enumerate}
\end{proposition}
\begin{proof}
This is exactly \cite[Proposition 6]{K17}. Just apply Lemma \ref{lem:tower}(\ref{itm:order1}) and (\ref{itm:ordlthan1}) to the tower on $X$ with respect to a divisor which computes $\mld_P(X,\mathfrak{a})$.
\end{proof}
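For instance, for the germ $P\in\mathbf{A}^2$ at origin and $\mathfrak{a}=\mathfrak{m}^{1/2}$ for the maximal ideal $\mathfrak{m}$ defining $P$, one has $\ord_P\mathfrak{a}=1/2$, so by the proposition $\mld_P(\mathbf{A}^2,\mathfrak{m}^{1/2})=a_E(\mathbf{A}^2,\mathfrak{m}^{1/2})=3/2$ and $E$ is the unique divisor computing it; for example, the divisor obtained by the weighted blow-up with $\wt(x_1,x_2)=(2,1)$ has log discrepancy $3-\frac{1}{2}=\frac{5}{2}$.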
We shall study divisors computing the minimal log discrepancy on a $K$-variety of dimension two.
\begin{lemma}\label{lem:Kpoint}
Let $P\in X$ be the germ at a $K$-point of a regular $K$-variety of dimension two and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Then there exists a divisor over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a sequence of finitely many blow-ups at a $K$-point.
\end{lemma}
\begin{proof}
Let $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. By adding a high multiple of $\mathfrak{m}$ to each component of $\mathfrak{a}$, we may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. Suppose that $(X,\mathfrak{a})$ is not lc. Then there exists a positive real number $t$ less than one such that $\mld_P(X,\mathfrak{a}^t)$ is zero. Replacing $\mathfrak{a}$ with $\mathfrak{a}^t$, we may assume that $(X,\mathfrak{a})$ is lc.
We write $m=\mld_P(X,\mathfrak{a})$ for simplicity. Let $Y$ be the blow-up of $X$ at $P$ and $E$ be its exceptional divisor. There is nothing to prove if $E$ computes $\mld_P(X,\mathfrak{a})$. Thus we may assume that $a_E=a_E(X,\mathfrak{a})$ is greater than $m$. Then $a_E=2-\ord_P\mathfrak{a}<1$ by Proposition \ref{prp:tower}(\ref{itm:Ecomp}) (which also holds for regular $K$-varieties by Remark \ref{rmk:regular}). That is, $m< a_E<1$. Let $\mathfrak{a}_Y$ be the weak transform on $Y$ of $\mathfrak{a}$, then $(Y,(1-a_E)E,\mathfrak{a}_Y)$ is crepant to $(X,\mathfrak{a})$. We claim that there exists a unique point $Q$ in $Y$ such that $\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)=m$, and claim that $Q$ is a $K$-point.
Let $Q$ be an arbitrary closed point in $Y$ such that $\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)=m$. Such $Q$ exists since $a_E\neq m$. Set the base change $\bar X=X\times_{\Spec K}\Spec\bar K$ of $X$ to the algebraic closure $\bar K$ of $K$. Let $\bar P$, $\bar\mathfrak{a}$, $\bar Y$, $\bar E$ and $\bar\mathfrak{a}_Y$ be the base changes of $P$, $\mathfrak{a}$, $Y$, $E$ and $\mathfrak{a}_Y$ to $\bar K$ as well. Then every closed point $\bar Q$ in $Q\times_X\bar X$ satisfies that $\mld_{\bar Q}(\bar Y,(1-a_E)\bar E,\bar\mathfrak{a}_Y)=\mld_{\bar P}(\bar X,\bar\mathfrak{a})$. Thus our claims on $Q$ come from those on $\bar Q$, so we may assume that $K$ is algebraically closed.
One has that $\mld_Q(Y,E,\mathfrak{a}_Y)\le\mld_Q(Y,(1-a_E)E,\mathfrak{a}_Y)-a_E=m-a_E<0$, which means that $(Y,E,\mathfrak{a}_Y)$ is not lc at $Q$. By inversion of adjunction, $(E,\mathfrak{a}_Y\mathscr{O}_E)$ is not lc at $Q$, that is, $\ord_Q(\mathfrak{a}_Y\mathscr{O}_E)>1$. Hence the number of such points $Q$ is less than the degree of the divisor on $E\simeq\mathbf{P}_K^1$ defined by $\mathfrak{a}_Y\mathscr{O}_E$, which equals $\ord_E\mathfrak{a}=2-a_E<2$. Thus, the uniqueness of $Q$ follows.
While $a_E$ is greater than $m$, we replace $P\in(X,\mathfrak{a})$ with $Q\in(Y,\mathscr{O}_Y(-E)^{1-a_E}\cdot\mathfrak{a}_Y)$ and repeat the same argument. This procedure terminates after finitely many steps. Indeed, let $l$ be the minimum of $a_F(X)$ for all divisors $F$ over $X$ computing $\mld_P(X,\mathfrak{a})$. Then after at most $(l-1)$ blow-ups, one attains a divisor which computes $\mld_P(X,\mathfrak{a})$.
\end{proof}
\begin{example}
There may exist a divisor computing $\mld_P(X,\mathfrak{a})$ which is not obtained by a sequence of blow-ups at a $K$-point. For example, let $P\in\mathbf{A}_\mathbf{R}^2$ be the germ at origin of the affine plane over $\mathbf{R}$ with coordinates $x_1$, $x_2$, and $H$ be the divisor on $\mathbf{A}_\mathbf{R}^2$ defined by $x_1^2+x_2^2$. Then $\mld_P(\mathbf{A}_\mathbf{R}^2,H)=0$. Let $Y$ be the blow-up of $\mathbf{A}_\mathbf{R}^2$ at $P$ and $E$ be its exceptional divisor. Then $(Y,H_Y+E)$ is crepant to $(\mathbf{A}_\mathbf{R}^2,H)$, where $H_Y$ is the strict transform. The intersection $Q$ of $H_Y$ and $E$ is a $\mathbf{C}$-point such that $\mld_Q(Y,H_Y+E)=0$.
\end{example}
Now we apply Theorem \ref{thm:mldwbu} to $K$-varieties of dimension two.
\begin{proposition}\label{prp:wbu}
Let $P\in X$ be the germ at a $K$-point of a regular $K$-variety of dimension two and $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$. Then there exists a divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a weighted blow-up.
\end{proposition}
\begin{proof}
We may assume the log canonicity of $(X,\mathfrak{a})$ by the argument in the first paragraph of the proof of Lemma \ref{lem:Kpoint}. By Lemma \ref{lem:Kpoint}, there exists a divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a})$ which is obtained by a sequence of finitely many blow-ups at a $K$-point. Set the base change $\bar X=X\times_{\Spec K}\Spec\bar K$ of $X$ to the algebraic closure $\bar K$ of $K$ and let $\bar P$ and $\bar\mathfrak{a}$ be the base changes of $P$ and $\mathfrak{a}$ to $\bar X$. Since $E$ is obtained by finitely many blow-ups at a $K$-point, its base change $\bar E=E\times_X\bar X$ is irreducible, so $\bar E$ is a divisor over $\bar X$. Thus by Theorem \ref{thm:mldwbu}, there exists a regular system $x_1,x_2$ of parameters in $\mathscr{O}_{\bar X,\bar P}$ such that $\bar E$ is obtained by the weighted blow-up of $\bar X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$.
We shall show that one can take $x_1$ and $x_2$ from $\mathscr{O}_X$. This is obvious when $w_1=w_2=1$ because the weighted blow-up in this case is nothing but the blow-up at the point. Suppose that $w_1>w_2$. Let $L$ be a finite Galois extension of $K$ such that $x_1$ and $x_2$ belong to $\mathscr{O}_X\otimes_KL$. Then for any element $\sigma$ of the Galois group $G$ of $L/K$, the $\bar E=\bar E^\sigma$ is obtained by the weighted blow-up with $\wt(x_1^\sigma,x_2^\sigma)=(w_1,w_2)$. Thus one can replace $x_i$ with its trace $\sum_{\sigma\in G}x_i^\sigma$ by Remark \ref{rmk:wbu}. Here one can assume that $\sum_{\sigma\in G}x_i^\sigma\in\mathfrak{m}\setminus\mathfrak{m}^2$, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$, by replacing $x_i$ with $\lambda_ix_i$ for a general member $\lambda_i$ in $L$.
Now we may assume that $x_1$ and $x_2$ belong to $\mathscr{O}_X$. Then $E$ is obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:wbu}}]
\textit{Step} 1.
First of all, remark that $C$ is the smallest lc centre of $(X,\mathfrak{a})$. The $C$ is regular by \cite[Theorem 1.2]{K15}, and it is geometrically irreducible because its base change to any field is again the smallest lc centre of the base change of $(X,\mathfrak{a})$. Thus, there exists a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_X$ such that the ideal $\mathscr{I}_C$ in $\mathscr{O}_X$ defining $C$ is generated by $x_1$ and $x_2$. If we consider instead of $\mathfrak{a}=\prod_j\mathfrak{a}_j^{r_j}$ the $\mathbf{R}$-ideal $\mathfrak{b}=\prod_j(\mathfrak{a}_j+(x_1,x_2)^l\mathscr{O}_X)^{r_j}$ for a large integer $l$, then $C$ is still the smallest lc centre of $(X,\mathfrak{b})$ and $\mld_P(X,\mathfrak{b})\ge\mld_P(X,\mathfrak{a})=1$. On the other hand, $\mld_P(X,\mathfrak{b})$ is at most one by \cite[Proposition 6.1]{K15}. Thus $\mld_P(X,\mathfrak{b})$ must equal one.
Hence by replacing $\mathfrak{a}$ with $\mathfrak{b}$, we may assume that $\mathfrak{a}$ is the pull-back of an $\mathbf{R}$-ideal $\mathfrak{a}'$ on $X'=\Spec K[[x_3]][x_1,x_2]$. Set $X''=\Spec K((x_3))[x_1,x_2]$, where $K((x_3))$ is the quotient field of $K[[x_3]]$. There exist natural morphisms
\begin{align*}
X\to X'\leftarrow X''.
\end{align*}
Let $P'$ be the point of $X'$ defined by $(x_1,x_2,x_3)\mathscr{O}_{X'}$ and $P''$ be the point of $X''$ defined by $(x_1,x_2)\mathscr{O}_{X''}$.
One has that $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})=\mld_{\eta_C}(X,\mathfrak{a})=0$. By Proposition \ref{prp:wbu}, there exists a divisor $E''$ over $X''$ computing $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})$ which is obtained by a weighted blow-up of $X''$. Let $E'$ be the unique divisor over $X'$ such that $E''=E'\times_{X'}X''$ and let $E=E'\times_{X'}X$. Note that $C$ is the centre of $E$ on $X$.
\medskip
\textit{Step} 2.
There exists a regular tower $\mathcal{T}''$ on $X''$ with respect to $E''$ in the sense of Definition \ref{dfn:tower} (which can be extended to $K((x_3))$-varieties). As seen in Remark \ref{rmk:toric}, $\mathcal{T}''$ is a finite sequence $X''_l\to\cdots\to X''_0=X''$ of blow-ups at a $K((x_3))$-point and the exceptional divisor $F''_i$ of $X''_{i+1}\to X''_i$ is obtained by a weighted blow-up of $X''$. Note that $E''=F''_{l-1}$. Possibly by replacing $E''$ with some $F''_i$, we may assume that $F_i''$ does not compute $\mld_{P''}(X'',\mathfrak{a}'\mathscr{O}_{X''})$ unless $i=l-1$.
The $\mathcal{T}''$ is compactified over $X'$, that is, $X''_{i+1}\to X''_i$ is the base change of a projective birational morphism $X'_{i+1}\to X'_i$ of regular schemes. Then the base changes $X_i=X'_i\times_{X'}X$ to $X$ form a tower $\mathcal{T}$ on $X$ with respect to $E$. Let $C_i$ be the centre on $X_i$ of $E$. Since $\mathcal{T}''$ consists of blow-ups at a $K((x_3))$-point, $C_i$ is birational to $C$ for any $i<l$. Hence $C_i$ must be isomorphic to the regular scheme $C$. Therefore one can replace $X_i$ and $X'_i$ inductively so that $\mathcal{T}$ is the regular tower on $X$ with respect to $E$.
Let $F_i$ denote the exceptional divisor of $X_{i+1}\to X_i$, and set $a_i=a_{F_i}(X,\mathfrak{a})$. By our construction, every $a_i$ is positive except for $i=l-1$ while $a_{l-1}$ is zero. Let $\mathfrak{a}_i$ be the weak transform on $X_i$ of $\mathfrak{a}$ and set the $\mathbf{R}$-divisor $\Delta_i=\sum_{j=0}^{i-1}(1-a_j)F_j^i$ on $X_i$, where $F_j^i$ is the strict transform of $F_j$. Then $(X_i,\Delta_i,\mathfrak{a}_i)$ is crepant to $(X,\mathfrak{a})$. We claim that $a_i<1$ for any $i$. This is obvious for $i=l-1$ since $a_{l-1}=0$. In order to see the inequality $a_i<1$ for the fixed index $i<l-1$ by induction, assume that $a_j<1$ for any $j$ less than $i$. Then $\Delta_i$ is effective. Since $F_i$ does not compute $\mld_{\eta_C}(X,\mathfrak{a})$, one has that $\ord_{F_i}\Delta_i+\ord_{F_i}\mathfrak{a}_i>1$ by Proposition \ref{prp:tower}. Hence one obtains that $a_i=a_{F_i}(X_i,\Delta_i,\mathfrak{a}_i)=2-(\ord_{F_i}\Delta_i+\ord_{F_i}\mathfrak{a}_i)<1$.
\medskip
\textit{Step} 3.
We have that $0<a_i<1$ for any $i<l-1$ while $a_{l-1}=0$. We let $f_i$ denote the fibre of $F_i\to C$ at $P$, which is isomorphic to $\mathbf{P}_K^1$. For $i<l$, let $P_i$ be the $K$-point in $C_i$ mapped to $P$. We claim that for any indices $i$ and $j$ such that $j<i<l$, the centre $C_i$ is either disjoint from $F_j^i$ or contained in $F_j^i$.
Indeed if $C_i$ intersected $F_j^i$ properly at $P_i$, then the morphism $F_j^{i+1}\to F_j^i$ would not be an isomorphism. Thus $F_j^{i+1}$ must contain the fibre $f_i$ of $F_i\to C_i$. In particular, $F_j^{i+1}$ intersects $C_{i+1}$. On the other hand, $C_{i+1}$ is not contained in $F_j^{i+1}$ as $C_i$ is not in $F_j^i$. Thus one obtains that $C_{i+1}$ must also intersect $F_j^{i+1}$ properly at $P_{i+1}$, unless $i+1=l$. Repeating this argument, one would have that $F_j^l$ contains $f_{l-1}$ as well as $F_{l-1}$ contains $f_{l-1}$. Now let $G$ be the divisor obtained by the blow-up of $X_l$ along $f_{l-1}$. One computes that
\begin{align*}
a_G(X,\mathfrak{a})\le a_G(X_l,\Delta_l)\le2-(1-a_j)-(1-a_{l-1})=a_j<1,
\end{align*}
which contradicts that $\mld_P(X,\mathfrak{a})=1$.
\medskip
\textit{Step} 4.
Let $i$ be any index such that $\ord_{F_i}\mathscr{I}_C=1$. We shall show that there exists a part $y_1$ of a regular system of parameters in $\mathscr{O}_X$ such that $C_i$ is contained in the strict transform $H_i$ on $X_i$ of the divisor on $X$ defined by $y_1$. This is obvious for $i=0$. The condition $\ord_{F_i}\mathscr{I}_C=1$ for the fixed $i\ge1$ implies that $\ord_{F_{i-1}}\mathscr{I}_C=1$ since
\begin{align*}
\ord_{F_i}\mathscr{I}_C=\ord_{F_i}\mathscr{I}_i+\sum_{j=0}^{i-1}\ord_{F_j}\mathscr{I}_C\cdot\ord_{F_i}{F_j^i}\ge\ord_{F_{i-1}}\mathscr{I}_C
\end{align*}
for the weak transform $\mathscr{I}_i$ on $X_i$ of $\mathscr{I}_C$. Hence by induction on $i$, we may assume the existence of $y_1$ such that $C_{i-1}$ is contained in $H_{i-1}$.
We extend $y_1$ to a regular system $y_1,y_2,x_3$ of parameters in $\mathscr{O}_X$ in which $y_2$ is a general member in $\mathscr{I}_C$. Then for any $j\le i-1$, $F_j$ is, as a divisor over $X$, obtained by the weighted blow-up of $X$ with $\wt(y_1,y_2)=(j+1,1)$, and the $y_1/y_2^j,y_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X_j,P_j}$. In particular, $f_{i-1}\simeq\mathbf{P}_K^1$ has homogeneous coordinates $y_1/y_2^{i-1}$, $y_2$. Moreover, the $K$-point $P_i\in f_{i-1}$ is not defined by $[y_1/y_2^{i-1}:y_2]=[1:0]$. This follows when $i=1$ from the general choice of $y_2$, and when $i\ge2$ from the property in Step 3 that $C_i$ does not intersect $F_{i-2}^i$. Take $c\in K$ such that $P_i\in f_{i-1}$ is defined by $[y_1/y_2^{i-1}:y_2]=[c:1]$. Replacing $y_1$ with $y_1-cy_2^i$, we may assume that $c=0$.
Then $y_1/y_2^i,y_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X_i,P_i}$. The $H_i$, $F_{i-1}$ and $f_{i-1}$ are defined at $P_i$ by $y_1/y_2^i$, $y_2$ and $(y_2,x_3)\mathscr{O}_{X_i}$. Because the fibration $F_{i-1}\to C_{i-1}$ is isomorphic to the projection of the product $\mathbf{P}_K^1\times_{\Spec K}C_{i-1}$, its section $C_i$ is defined at $P_i$ by $(y_1/y_2^i+x_3v(x_3),y_2)\mathscr{O}_{X_i}$ for some $v(x_3)\in K[[x_3]]$. After replacing $y_1$ with $y_1+y_2^ix_3v(x_3)$, one has that $C_i$ is contained in $H_i$.
\medskip
\textit{Step} 5.
Let $e$ be the maximal index such that $\ord_{F_e}\mathscr{I}_C=1$ and choose a regular system $y_1,y_2,x_3$ of parameters in $\mathscr{O}_X$ such that $y_1$ satisfies the condition in Step 4 for $i=e$ and $y_2$ is a general member in $\mathscr{I}_C$. Now repeating the process in Step 1 for $y_1,y_2,x_3$ instead of $x_1,x_2,x_3$, we may assume that $x_1=y_1$ and $x_2=y_2$. Then by Remark \ref{rmk:toric}, one can obtain $E''=F''_{l-1}$ by a weighted blow-up with respect to the coordinates $x_1,x_2$. More precisely, there exist a non-negative integer $p$ and a positive integer $q$ such that $E''$ is obtained by the weighted blow-up of $X''$ with $\wt(x_1,x_2)=p(e,1)+q(e+1,1)$. Note that $p$ is positive iff $e+1<l$.
Therefore, we conclude that $E$ is also obtained by the weighted blow-up of $X$ with $\wt(x_1,x_2)=p(e,1)+q(e+1,1)$.
\end{proof}
\section{Reduction to the case of decomposed boundaries}\label{sct:reduction}
The objective of this section is to complete the reduction to Conjecture \ref{cnj:product}.
\begin{remark}\label{rmk:independent}
In order to prove Conjecture \ref{cnj:product} or a statement of the same kind, it is sufficient to find an integer $l$ which satisfies the required property but may depend on the germ $P\in X$ of a smooth threefold, for the reason that one has only to consider those $\mathfrak{a}$ which are $\mathfrak{m}$-primary. Indeed, there exists an \'etale morphism from $P\in X$ to the germ $o\in\mathbf{A}^3$ at origin of the affine space. Then as it is seen in Lemma \ref{lem:regular}, any $\mathfrak{m}$-primary ideal $\mathfrak{a}$ on $X$ is the pull-back of some ideal $\mathfrak{b}$ on $\mathbf{A}^3$, and $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ coincides with $\mld_o(\mathbf{A}^3,\mathfrak{b}^q\mathfrak{n}^s)$, where $\mathfrak{n}$ is the maximal ideal in $\mathscr{O}_{\mathbf{A}^3}$ defining $o$. Thus, the bound $l$ on the germ $o\in\mathbf{A}^3$ can be applied to an arbitrary germ $P\in X$.
\end{remark}
We shall make the reduction by using the generic limit of ideals. For a moment, we work in the general setting that $P\in X$ is the germ of a klt variety. Let $r_1,\ldots,r_e$ be positive real numbers and $\mathcal{S}=\{\mathfrak{a}_i=\prod_{j=1}^e\mathfrak{a}_{ij}^{r_j}\}_{i\in\mathbf{N}}$ be a sequence of $\mathbf{R}$-ideals on $X$. Let $\mathsf{a}=\prod_{j=1}^e\mathsf{a}_j^{r_j}$ be a generic limit of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}. The $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,(\mathfrak{a}_j(l))_j,N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$ where $\hat X$ is the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$.
Let $\hat f\colon\hat Y\to\hat X$ be a projective birational morphism isomorphic outside $\hat P$. Suppose that $\hat Y$ is klt and the exceptional locus of $\hat f$ is a $\mathbf{Q}$-Cartier prime divisor $\hat F$. Let $\hat C$ be a closed proper subset of $\hat F$. As in Remark \ref{rmk:descend}, after replacing $\mathcal{F}$ with a subfamily, for any $l\ge l_0$ the $\hat f$ is descended to a projective morphism $f_l\colon Y_l\to X\times Z_l$ from a klt variety whose exceptional locus is a $\mathbf{Q}$-Cartier prime divisor $F_l$. One may assume that for any $i\in N_l$, the fibre $f_i\colon Y_i\to X$ at $s_l(i)\in Z_l$ is a morphism from a klt variety whose exceptional locus is a $\mathbf{Q}$-Cartier prime divisor $F_i$. Refer to \cite[Section B]{dFEM11} for the properties of a family of normal $\mathbf{Q}$-Gorenstein rational singularities. We may assume that $\hat C$ is descended to a closed subset $C_l$ in $F_l$. The $f_i$, $F_i$ and $C_i=C_l\times_{Y_l}Y_i$ are independent of $l$ because they are compatible with $t_l$.
\begin{lemma}\label{lem:relative}
Notation and assumptions as above. Suppose that $a_{\hat F}(\hat X,\mathsf{a})$ is at most one and that the intersection of $\hat F$ and the non-klt locus on $\hat Y$ of $(\hat X,\mathsf{a})$ is contained in $\hat C$. Then there exists a positive integer $l$ depending only on $\mathsf{a}$ and $\hat f$ such that after replacing $\mathcal{F}$ with a subfamily, for any $i\in N_{l_0}$ if a divisor $E$ over $X$ computes $\mld_P(X,\mathfrak{a}_i)$ and has centre $c_{Y_i}(E)$ not contained in $C_i$, then $a_E(X)\le l$.
\end{lemma}
\begin{proof}
Let $r$ be a positive integer such that $r\hat F$ is Cartier. We may assume that $rF_l$ is Cartier. By replacing $\mathfrak{a}_{ij}$ with $(\mathfrak{a}_{ij})^r$ and $r_j$ with $r_j/r$, we may assume that $\mathfrak{a}_{ij}$ is an ideal to the power of $r$ and so is $\mathsf{a}_j$. Thus one can define the weak transform $\mathfrak{a}_{iY}=\prod_j(\mathfrak{a}_{ijY})^{r_j}$ on $Y_i$ of $\mathfrak{a}_i$, as well as the weak transform $\mathsf{a}_Y$ on $\hat Y$ of $\mathsf{a}$. We may assume that $\ord_{\hat F}\mathsf{a}_j=\ord_{F_l}\mathfrak{a}_j(l)=\ord_{F_i}\mathfrak{a}_{ij}<l$ for any $i\in N_l$ and $j$. Set $\mathfrak{a}_{jY}(l)=\mathfrak{a}_j(l)\mathscr{O}_{Y_l}(a_jF_l)$ and $\mathfrak{a}_Y(l)=\prod_j(\mathfrak{a}_{jY}(l))^{r_j}$ for $a_j=\ord_{\hat F}\mathsf{a}_j$, which is divisible by $r$.
One can fix a positive real number $t$ such that the intersection of $\hat F$ and the non-klt locus on $\hat Y$ of $(\hat X,\mathsf{a}^{1+t})$ is still contained in $\hat C$. Set the real number $b$ so that $(\hat Y,b\hat F,\mathsf{a}_Y^{1+t})$ is crepant to $(\hat X,\mathsf{a}^{1+t})$, then $0\le b<1$ by $a_{\hat F}(\hat X,\mathsf{a})\le1$. Then $(Y_l,bF_l,\mathfrak{a}_Y(l)^{1+t})$ is crepant to $(X\times Z_l,\mathfrak{a}(l)^{1+t})$ while $(Y_i,bF_i,(\mathfrak{a}_{iY})^{1+t})$ is crepant to $(X,\mathfrak{a}_i^{1+t})$. One may assume that $(Y_l,bF_l,\mathfrak{a}_Y(l)^{1+t})$ is klt about $F_l\setminus C_l$.
Apply Corollary \ref{crl:relative} to the family $Y_l\setminus C_l\to Z_l$. Since $\mathfrak{a}_{ijY}+\mathfrak{m}^l\mathscr{O}_{Y_i}(a_jF_i)=\mathfrak{a}_{jY}(l)\mathscr{O}_{Y_i}$, one has that $(Y_i,bF_i,(\mathfrak{a}_{iY})^{1+t})$ is lc about $F_i\setminus C_i$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Thus if a divisor $E$ over $X$ satisfies that $c_{Y_i}(E)\not\subset C_i$, then $a_E(X,\mathfrak{a}_i^{1+t})\ge0$, that is, $t\ord_E\mathfrak{a}_i\le a_E(X,\mathfrak{a}_i)$. Hence,
\begin{align*}
a_E(X)=a_E(X,\mathfrak{a}_i)+\ord_E\mathfrak{a}_i\le(1+t^{-1})a_E(X,\mathfrak{a}_i).
\end{align*}
In addition if $E$ computes $\mld_P(X,\mathfrak{a}_i)$, then
\begin{align*}
a_E(X)\le(1+t^{-1})\mld_P(X,\mathfrak{a}_i)\le(1+t^{-1})\mld_PX.
\end{align*}
Hence any integer $l$ at least $(1+t^{-1})\mld_PX$ satisfies the required property.
\end{proof}
We provide a meta theorem which connects statements involving the maximal ideal $\mathfrak{m}$ to those involving $\mathfrak{m}$-primary ideals. For the property $\mathscr{P}$ in the theorem, one can take, for example, the empty property or the property of being terminal.
\begin{theorem}\label{thm:meta}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Fix a positive rational number $q$. Let $\mathscr{P}$ be a property of canonical pairs $(X,\mathfrak{a}^q)$ for ideals $\mathfrak{a}$ on $X$. Then the following statements are equivalent.
\begin{enumerate}
\item\label{itm:maximal}
Fix a non-negative rational number $s$. Then there exists a positive integer $l$ depending only on $q$ and $s$ such that if $\mathfrak{a}$ is an ideal on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and has the property $\mathscr{P}$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ and satisfies the inequality $a_E(X)\le l$.
\item\label{itm:mprimary}
Fix a non-negative rational number $s$ and a positive integer $b$. Then there exists a positive integer $l$ depending only on $q$, $s$ and $b$ such that if $\mathfrak{a}$ and $\mathfrak{b}$ are ideals on $X$ satisfying that $(X,\mathfrak{a}^q)$ is canonical and has the property $\mathscr{P}$ and that $\mathfrak{b}$ contains $\mathfrak{m}^b$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{b}^s)$ and satisfies the inequality $a_E(X)\le l$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textit{Step} 1.
The (\ref{itm:maximal}) follows from the special case of (\ref{itm:mprimary}) when $b=1$. It is necessary to derive (\ref{itm:mprimary}) from (\ref{itm:maximal}). Let $\mathcal{S}=\{(\mathfrak{a}_i,\mathfrak{b}_i)\}_{i\in\mathbf{N}}$ be an arbitrary sequence of pairs of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and has the property $\mathscr{P}$ and such that $\mathfrak{b}_i$ contains $\mathfrak{m}^b$. Assuming the (\ref{itm:maximal}), it is sufficient to find an integer $l$ such that for infinitely many $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ and satisfies the inequality $a_{E_i}(X)\le l$. Note Remark \ref{rmk:independent}. By Theorem \ref{thm:nonpos}, we may assume that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ is positive. Then by Corollary \ref{crl:mult}, there exists a positive integer $b_0$ depending only on $q$ and $s$ such that $\ord_{G_i}\mathfrak{m}\le b_0$ for every divisor $G_i$ over $X$ computing $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$.
\medskip
\textit{Step} 2.
We construct a generic limit $(\mathsf{a},\mathsf{b})$ of $\mathcal{S}$ using the notation in Section \ref{sct:limit}. The $(\mathsf{a},\mathsf{b})$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,(\mathfrak{a}(l),\mathfrak{b}(l)),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$. The $\mathsf{a}$ and $\mathsf{b}$ are ideals on $\hat P\in\hat X$ where $\hat X$ is the spectrum of the completion of the local ring $\mathscr{O}_{X,P}\otimes_kK$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. Note that $\hat\mathfrak{m}^b\subset\mathsf{b}$ by $\mathfrak{m}^b\subset\mathfrak{b}_i$. By Lemma \ref{lem:limtonak} and Remark \ref{rmk:limit}(\ref{itm:limitineq}), the existence of $l$ is reduced to the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. By Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q\mathsf{b}^s)$ has the smallest lc centre $\hat C$ which is regular and of dimension one.
Since $\mathsf{b}$ is $\hat\mathfrak{m}$-primary, $\hat C$ is also the smallest lc centre of $(\hat X,\mathsf{a}^q)$. In particular, $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$ by Theorem \ref{thm:grthan1}(\ref{itm:case4}), while $\mld_{\hat P}(\hat X,\mathsf{a}^q)\ge1$ by Remark \ref{rmk:limit}(\ref{itm:limitineq}). Thus $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$.
\medskip
\textit{Step} 3.
We apply Theorem \ref{thm:wbu} to $(\hat X,\mathsf{a}^q)$. There exist a divisor $\hat E$ over $\hat X$ computing $\mld_{\eta_{\hat C}}(\hat X,\mathsf{a}^q)$ and a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{\hat X}$ such that $\hat E$ is obtained by the weighted blow-up of $\hat X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some coprime positive integers $w_1,w_2$. We take $x_3$ generally from $\hat\mathfrak{m}$ so that $\ord_{x_3}\mathsf{a}$ is zero, where $\ord_{x_3}$ stands for the order along the divisor on $\hat X$ defined by $x_3$. Note that $\hat C$ is geometrically irreducible.
We fix a positive integer $j$ such that
\begin{align*}
j>bb_0.
\end{align*}
Let $\hat f\colon\hat Y\to\hat X$ be the weighted blow-up with $\wt(x_1,x_2,x_3)=(jw_1,jw_2,1)$ and $\hat F$ be its exceptional divisor. By Remark \ref{rmk:wbu}, there exists a regular system $y_1,y_2,y_3$ of parameters in $\mathscr{O}_{X,P}\otimes_kK$ such that $\hat f$ is also the weighted blow-up with $\wt(y_1,y_2,y_3)=(jw_1,jw_2,1)$ (after regarding $y_1,y_2,y_3$ as elements in $\mathscr{O}_{\hat X}$).
As discussed in the paragraph prior to Lemma \ref{lem:relative}, after replacing $\mathcal{F}$ with a subfamily, $\hat f$ is descended to a projective morphism $f_l\colon Y_l\to X\times Z_l$ for any $l\ge l_0$. One can assume that $y_1,y_2,y_3$ come from $\mathscr{O}_{X,P}\otimes_k\mathscr{O}_{Z_l}$ and that for any $i\in N_l$, their fibres $y_{1i},y_{2i},y_{3i}$ at $s_l(i)\in Z_l$ form a regular system of parameters in $\mathscr{O}_{X,P}$. The $Y_l$ is klt and the exceptional locus of $f_l$ is a $\mathbf{Q}$-Cartier prime divisor $F_l$. The fibre $f_i\colon Y_i\to X$ of $f_l$ at $s_l(i)$ is the weighted blow-up of $X$ with $\wt(y_{1i},y_{2i},y_{3i})=(jw_1,jw_2,1)$ whose exceptional divisor is $F_i=F_l\times_{Y_l}Y_i$.
Since $(jw_1,jw_2,1)=j(w_1,w_2,0)+(0,0,1)$, one has the inequality
\begin{align*}
\ord_{\hat F}\mathsf{a}^q\ge j\ord_{\hat E}\mathsf{a}^q+\ord_{x_3}\mathsf{a}^q=j(w_1+w_2)
\end{align*}
using $\ord_{\hat E}\mathsf{a}^q=a_{\hat E}(\hat X)-a_{\hat E}(\hat X,\mathsf{a}^q)=w_1+w_2$. Equivalently, $a_{\hat F}(\hat X,\mathsf{a}^q)=a_{\hat F}(\hat X)-\ord_{\hat F}\mathsf{a}^q\le 1$ by $a_{\hat F}(\hat X)=j(w_1+w_2)+1$. Hence $a_{\hat F}(\hat X,\mathsf{a}^q)=1$ by $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$ in Step 2.
\medskip
\textit{Step} 4.
Let $\hat Q$ be the closed point in $\hat F$ which lies on the strict transform of $\hat C$. For $i\in N_l$, let $Q_i$ be the closed point in $F_i$ which lies on the strict transform of the curve on $X$ defined by $(y_{1i},y_{2i})\mathscr{O}_X$. Applying Lemma \ref{lem:relative} to $(\hat X,\mathsf{a}^q\mathsf{b}^s)$, $\hat f$, and $\hat Q$, one has only to treat the case when $\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$ is computed by a divisor $G_i$ such that $c_{Y_i}(G_i)=Q_i$.
For such $G_i$, the inequality $\ord_{G_i}\mathfrak{m}\le b_0$ holds by Step 1. Thus,
\begin{align*}
\ord_{G_i}\mathfrak{b}_i\le\ord_{G_i}\mathfrak{m}^b\le bb_0<j\le jw_2&=\ord_{F_i}(y_{1i},y_{2i})\mathscr{O}_X\\
&\le\ord_{F_i}(y_{1i},y_{2i})\mathscr{O}_X\cdot\ord_{G_i}F_i\\
&\le\ord_{G_i}(y_{1i},y_{2i})\mathscr{O}_X,
\end{align*}
whence $\ord_{G_i}\mathfrak{b}_i=\ord_{G_i}(\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X)$.
By $\hat\mathfrak{m}^b\subset\mathsf{b}$, there exists a non-negative integer $b'$ at most $b$ satisfying that
\begin{align*}
\mathsf{b}+(y_1,y_2)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2)\mathscr{O}_{\hat X}.
\end{align*}
Then one can assume that $\mathfrak{b}(l)+(y_1,y_2)\mathscr{O}_{X\times Z_l}=(\mathfrak{m}^{b'}+\mathfrak{m}^l)\mathscr{O}_{X\times Z_l}+(y_1,y_2)\mathscr{O}_{X\times Z_l}$ for any $l\ge l_0$, which derives the inclusion $\mathfrak{m}^{b'}\subset\mathfrak{b}_i+\mathfrak{m}^l+(y_{1i},y_{2i})\mathscr{O}_X$. One may assume that $l_0\ge b$, then $\mathfrak{m}^{b'}\subset\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X$. Thus, $\ord_{G_i}\mathfrak{b}_i=\ord_{G_i}(\mathfrak{b}_i+(y_{1i},y_{2i})\mathscr{O}_X)\le\ord_{G_i}\mathfrak{m}^{b'}$. In particular,
\begin{align*}
\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})\le a_{G_i}(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})\le a_{G_i}(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)=\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s).
\end{align*}
\medskip
\textit{Step} 5.
We want the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{b}_i^s)$, as seen in Step 2. Applying our assumption (\ref{itm:maximal}) in the case when the exponent of $\mathfrak{m}$ is one of $0,s,2s,\ldots,bs$, there exists a positive integer $l'$ depending only on $q$, $s$ and $b$ such that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$ is computed by a divisor $E_i$ satisfying the inequality $a_{E_i}(X)\le l'$. One has that $\ord_{E_i}\mathfrak{a}_i=q^{-1}(a_{E_i}(X)-a_{E_i}(X,\mathfrak{a}_i^q))\le q^{-1}l'$, so $E_i$ computes $\mld_P(X,(\mathfrak{a}_i+\mathfrak{m}^e)^q\mathfrak{m}^{sb'})$ for any integer $e$ at least $q^{-1}l'$, which equals $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$. Together with Lemma \ref{lem:resolution}, one obtains that $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^{sb'})=\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{sb'})$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. Hence by Step 4, the problem is reduced to showing the equality
\begin{align*}
\mld_{\hat P}(\hat X,\mathsf{a}^q\mathsf{b}^s)=\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^{sb'}).
\end{align*}
The equality $\mathsf{b}+(y_1,y_2)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2)\mathscr{O}_{\hat X}$ tells that
\begin{align*}
\mathsf{b}+(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}=\hat\mathfrak{m}^{b'}+(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}.
\end{align*}
Since $(y_1,y_2,y_3^j)\mathscr{O}_{\hat X}=\hat f_*\mathscr{O}_{\hat Y}(-j\hat F)=\mathscr{I}_{\hat C}+\hat\mathfrak{m}^j$, where $\mathscr{I}_{\hat C}$ is the ideal sheaf of $\hat C$, one concludes that $\mathsf{b}+\mathscr{I}_{\hat C}=\hat\mathfrak{m}^{b'}+\mathscr{I}_{\hat C}$, using $\hat\mathfrak{m}^j\subset\hat\mathfrak{m}^b\subset\mathsf{b}$ and $\hat\mathfrak{m}^j\subset\hat\mathfrak{m}^{b'}$. Therefore, the required equality follows from the precise inversion of adjunction, Corollary \ref{crl:pia}.
\end{proof}
Precise inversion of adjunction compares the minimal log discrepancy of a pair with that of the restricted pair obtained by adjunction. Let $P\in X$ be the germ of a normal variety and $S+B$ be an effective $\mathbf{R}$-divisor on $X$ such that $S$ is a normal prime divisor which does not appear in $B$. Suppose that they form a pair $(X,S+B)$; then one has the adjunction $K_X+S+B|_S=K_S+B_S$ in which $B_S$ is the different on $S$ of $B$.
\begin{conjecture}[Precise inversion of adjunction]\label{cnj:pia}
Notation as above. Then one has that $\mld_P(X,S+B)=\mld_P(S,B_S)$.
\end{conjecture}
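For instance, if $P\in X=\mathbf{A}^2$ is the germ at origin with coordinates $x_1,x_2$, $S$ is the divisor defined by $x_2$ and $B=tH$ for the divisor $H$ defined by $x_1$ and a real number $0\le t\le1$, then $B_S=tP$ and one can check that both sides equal $1-t$.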
This conjecture is regarded as the more precise version of Theorem \ref{thm:ia}. At present we know two cases when it holds. One is when $X$ is smooth \cite{EMY03} or more generally has lci singularities \cite{EM04}. The other is when the minimal log discrepancy is at most one \cite{BCHM10}, that is,
\begin{theorem}\label{thm:pia}
Conjecture \textup{\ref{cnj:pia}} holds when $(X,\Delta)$ is klt for some boundary $\Delta$ and $\mld_P(X,S+B)$ is at most one.
\end{theorem}
\begin{proof}
It is enough to show the inequality $\mld_P(X,S+B)\ge\mld_P(S,B_S)$. By inversion of adjunction, we may assume that $0<\mld_P(X,S+B)\le1$. Then by \cite[Corollary 1.4.3]{BCHM10}, there exists a projective birational morphism $\pi\colon Y\to X$ from a $\mathbf{Q}$-factorial normal variety such that the divisorial part of its exceptional locus is a prime divisor $E$ computing $\mld_P(X,S+B)$. Let $S_Y$ and $B_Y$ denote the strict transforms of $S$ and $B$. Then the pull-back of $(X,S+B)$ is $(Y,S_Y+B_Y+bE)$ where $b=1-\mld_P(X,S+B)\ge0$. Let $C$ be an arbitrary irreducible component of $E\cap S_Y$. By $\mld_P(X,S+B)>0$, the $(Y,S_Y+B_Y+bE)$ is plt about the generic point $\eta_C$ of $C$. By adjunction, one can write
\begin{align*}
K_Y+S_Y+B_Y+bE|_{S_Y}=K_{S_Y}+B_{S_Y}
\end{align*}
about $\eta_C$. By \cite[Corollary 3.10]{Sh93}, $C$ has coefficient at least $b$ in $B_{S_Y}$. Hence, one obtains that $\mld_P(S,B_S)\le a_C(S_Y,B_{S_Y})\le1-b=\mld_P(X,S+B)$.
\end{proof}
\begin{lemma}\label{lem:pia}
Let $P\in X$ be the germ of a klt variety and $S$ be a prime divisor on $X$ such that $(X,S)$ is plt. Let $\Delta$ be the different on $S$ defined by $K_X+S|_S=K_S+\Delta$. Let $\hat X$ be the spectrum of the completion of the local ring $\mathscr{O}_{X,P}$ and $\hat P$ be its closed point. Set $\hat S=S\times_X\hat X$ and $\hat\Delta=\Delta\times_X\hat X$. Let $\mathsf{a}$ be an $\mathbf{R}$-ideal on $\hat X$ such that $\mld_{\hat P}(\hat X,\hat S,\mathsf{a})\le1$. Then $\mld_{\hat P}(\hat X,\hat S,\mathsf{a})=\mld_{\hat P}(\hat S,\hat\Delta,\mathsf{a}\mathscr{O}_{\hat S})$.
\end{lemma}
\begin{proof}
Adding a high multiple of the maximal ideal $\hat\mathfrak{m}$ in $\mathscr{O}_{\hat X}$ to each component of $\mathsf{a}$, we may assume that $\mathsf{a}$ is $\hat\mathfrak{m}$-primary. Then $\mathsf{a}$ is the pull-back of an $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$. By Remark \ref{rmk:regular}, the assertion is reduced to the precise inversion of adjunction $\mld_P(X,S,\mathfrak{a})=\mld_P(S,\Delta,\mathfrak{a}\mathscr{O}_S)$ for varieties, which follows from Theorem \ref{thm:pia}.
\end{proof}
\begin{proposition}\label{prp:piaE}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $x_1,x_2$ be a part of a regular system of parameters in $\mathscr{O}_X$ and $w_1,w_2$ be coprime positive integers. Let $Y\to X$ be the weighted blow-up of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$, $E$ be its exceptional divisor, and $f$ be the fibre of $E\to X$ at $P$. Let $\Delta$ be the different on $E$ defined by $K_Y+E|_E=K_E+\Delta$. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ whose weak transform $\mathfrak{a}_Y$ on $Y$ is defined. Suppose that $a_E(X,\mathfrak{a})$ is zero. Then $\mld_P(X,\mathfrak{a})=\mld_f(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$.
\end{proposition}
\begin{proof}
By the regular base change, we may assume that $K$ is algebraically closed. Since $(Y,E,\mathfrak{a}_Y)$ is crepant to $(X,\mathfrak{a})$, it is enough to prove that $\mld_f(Y,E,\mathfrak{a}_Y)=\mld_f(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$.
Extend the $x_1,x_2$ to a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_X$ and set $X'=\Spec K[x_1,x_2,x_3]$. Then $Y$ is the base change of the weighted blow-up of $X'$ with $\wt(x_1,x_2)=(w_1,w_2)$. Thus, the equality $\mld_{\eta_f}(Y,E,\mathfrak{a}_Y)=\mld_{\eta_f}(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$ follows from Lemma \ref{lem:pia} by cutting by the strict transform of the divisor on $X$ defined by $x_1^{w_2}+\lambda x_2^{w_1}$ for a general member $\lambda$ in $K$. Together with $\mld_f(Y,E,\mathfrak{a}_Y)\le\mld_{\eta_f}(Y,E)=1$, it is sufficient to verify that $\mld_Q(Y,E,\mathfrak{a}_Y)=\mld_Q(E,\Delta,\mathfrak{a}_Y\mathscr{O}_E)$ for any closed point $Q$ in $f$ such that $\mld_Q(Y,E,\mathfrak{a}_Y)\le1$, which follows from Lemma \ref{lem:pia} again.
\end{proof}
\begin{corollary}\label{crl:pia}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ be $\mathbf{R}$-ideals on $X$. Suppose that $\mld_P(X,\mathfrak{a})$ equals one and that $(X,\mathfrak{a})$ has an lc centre $C$ of dimension one on which $\mathfrak{b}\mathscr{O}_C=\mathfrak{c}\mathscr{O}_C$. Then $\mld_P(X,\mathfrak{a}\mathfrak{b})=\mld_P(X,\mathfrak{a}\mathfrak{c})$.
\end{corollary}
\begin{proof}
We may assume that $C$ is not contained in the cosupport of $\mathfrak{b}\mathfrak{c}$, because otherwise $\mld_P(X,\mathfrak{a}\mathfrak{b})=\mld_P(X,\mathfrak{a}\mathfrak{c})=-\infty$. By Theorem \ref{thm:wbu}, there exist a divisor $E$ over $X$ computing $\mld_{\eta_C}(X,\mathfrak{a})=0$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $E$ is obtained by the weighted blow-up $Y$ of $X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some $w_1,w_2$. We may assume that the weak transform $\mathfrak{a}_Y$ on $Y$ of $\mathfrak{a}$ is defined. Then, the assertion follows from Proposition \ref{prp:piaE} by $\mathfrak{b}\mathscr{O}_E=\mathfrak{c}\mathscr{O}_E$.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:first}}]
It is sufficient to derive Conjectures \ref{cnj:acc}, \ref{cnj:alc}, \ref{cnj:madic} and \ref{cnj:nakamura} from Conjecture \ref{cnj:product}. All $\mathbf{R}$-ideals on the germ $P\in X$ of a smooth threefold in Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} may be assumed to be $\mathfrak{m}$-primary, where $\mathfrak{m}$ is the maximal ideal in $\mathscr{O}_X$ defining $P$. There exists an \'etale morphism from $P\in X$ to the germ $o\in\mathbf{A}^3$ at origin of the affine space, by which any $\mathfrak{m}$-primary $\mathbf{R}$-ideal $\mathfrak{a}$ on $X$ is the pull-back of some $\mathbf{R}$-ideal $\mathfrak{b}$ on $\mathbf{A}^3$ by Lemma \ref{lem:regular}. Thus for Conjectures \ref{cnj:acc} to \ref{cnj:nakamura}, one has only to consider $\mathbf{R}$-ideals on the fixed germ $P\in X$ of a smooth threefold.
By Lemma \ref{lem:rational}, these conjectures are reduced to the case $I=\{1/n\}$ of Conjecture \ref{cnj:nakamura}, that is, for a fixed positive integer $n$, it is enough to find an integer $l$ such that if $\mathfrak{a}$ is an ideal on $X$, then there exists a divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^{1/n})$ and satisfies the inequality $a_E(X)\le l$. By Theorem \ref{thm:nonpos}, we have only to consider those $\mathfrak{a}$ for which $\mld_P(X,\mathfrak{a}^{1/n})$ is positive. Then by Corollary \ref{crl:mult}, there exists a positive integer $b$ depending only on $n$ such that $\ord_E\mathfrak{m}\le b$ for every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^{1/n})$.
Set $q=1/n$ and apply Theorem \ref{thm:canonical}. It is enough to bound $a_E(X)$ for those $\mathfrak{a}$ in the case (\ref{itm:reduced}) of Theorem \ref{thm:canonical}. Suppose that we are in this case and use the notation in Theorem \ref{thm:canonical}. Let $E$ be an arbitrary divisor over $Y$ which computes $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$, which equals $\mld_P(X,\mathfrak{a}^q)$. Then,
\begin{align*}
a_E(X)=a_E(Y)+\sum_F(a_F(Y)-1)\ord_EF\le a_E(Y)+(c-1)\sum_F\ord_EF,
\end{align*}
in which the summation is taken over all exceptional prime divisors on $Y$, and
\begin{align*}
\sum_F\ord_EF\le\ord_E\mathfrak{m}\le b.
\end{align*}
Thus the boundedness of $a_E(X)$ is reduced to that of $a_E(Y)$. In other words, it is sufficient to treat the divisors computing $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$. The ideal $\mathfrak{b}=\mathscr{O}_Y(-n\Delta)$ in $\mathscr{O}_Y$ is defined since $n\Delta$ is integral, for which $(Y,\mathfrak{a}_Y^q\mathfrak{b}^q)$ is crepant to $(Y,\Delta,\mathfrak{a}_Y^q)$. The $\mathfrak{b}$ satisfies that $\ord_E\mathfrak{b}\le n\sum_F\ord_EF\le nb$ by $\mathscr{O}_Y(-n\sum F)\subset\mathfrak{b}$. In particular, $E$ computes $\mld_Q(Y,\mathfrak{a}_Y^q(\mathfrak{b}+\mathfrak{n}^{nb})^q)$ as well as $\mld_Q(Y,\Delta,\mathfrak{a}_Y^q)$ for the maximal ideal $\mathfrak{n}$ in $\mathscr{O}_Y$ defining $Q$.
Replacing the notation $(Y,\mathfrak{a}_Y^q(\mathfrak{b}+\mathfrak{n}^{nb})^q)$ with $(X,\mathfrak{a}^q\mathfrak{b}^q)$ and $\mathfrak{n}$ with $\mathfrak{m}$, Conjectures \ref{cnj:acc} to \ref{cnj:nakamura} follow from the boundedness of $a_E(X)$ for some divisor $E$ over $X$ which computes $\mld_P(X,\mathfrak{a}^q\mathfrak{b}^q)$ such that $\mld_P(X,\mathfrak{a}^q)\ge1$ and such that $\mathfrak{m}^{nb}\subset\mathfrak{b}$. One may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. Then one can apply Theorem \ref{thm:meta} with the property $\mathscr{P}$ being empty, which reduces the boundedness of $a_E(X)$ to Conjecture \ref{cnj:product}.
\end{proof}
\section{Boundedness results}
In this section, we shall prove Conjecture \ref{cnj:product} in several cases. First we treat the case when either $(X,\mathfrak{a}^q)$ is terminal or $s$ is zero.
\begin{proof}[Proof of Theorem \textup{\ref{thm:terminal}}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is terminal. We construct a generic limit $\mathsf{a}$ of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. To see the assertion (\ref{itm:terminal}), by Lemma \ref{lem:limtonak} and Remark \ref{rmk:independent}, it is enough to show the equality $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)=\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. One has that $\mld_{\hat P}(\hat X,\mathsf{a}^q)>1$ by Remark \ref{rmk:limit}(\ref{itm:limitineq}). Thus $(\hat X,\mathsf{a}^q)$ satisfies the case \ref{cas:case1}, \ref{cas:case2} or \ref{cas:case3} in Theorem \ref{thm:grthan1}, which derives that $(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)$ does not have the smallest lc centre of dimension one. Hence the required equality holds by Theorem \ref{thm:grthan1}.
For (\ref{itm:zero}), starting instead with $\mathcal{S}$ such that $(X,\mathfrak{a}_i^q)$ is canonical, we need to show the equality $\mld_{\hat P}(\hat X,\mathsf{a}^q)=\mld_P(X,\mathfrak{a}_i^q)$. This holds in the cases other than the case \ref{cas:case4} in Theorem \ref{thm:grthan1}, so we may assume the case \ref{cas:case4}, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By $\mld_P(X,\mathfrak{a}_i^q)\ge1$, the required equality follows from Remark \ref{rmk:limit}(\ref{itm:limitineq}).
\end{proof}
We shall study the case in Theorem \ref{thm:second}(\ref{itm:half}) when the lc threshold of the maximal ideal is at most one-half. We prepare a useful criterion for identifying a divisor over a variety.
\begin{lemma}\label{lem:parallel}
Let $P\in X$ be the germ of a smooth variety and $E$ be a divisor over $X$. Let $x_1,\ldots,x_c$ be a part of a regular system of parameters in $\mathscr{O}_{X,P}$ and $w_1,\ldots,w_c$ be positive integers. Let $Y\to X$ be the weighted blow-up with $\wt(x_1,\ldots,x_c)=(w_1,\ldots,w_c)$, $F$ be its exceptional divisor, and $H_i$ be the strict transform of the divisor on $X$ defined by $x_i$. Suppose that
\begin{itemize}
\item
$c_X(E)$ coincides with $c_X(F)$, and
\item
the vector $(w_1,\ldots,w_c)$ is parallel to $(\ord_Ex_1,\ldots,\ord_Ex_c)$.
\end{itemize}
Then the centre on $Y$ of $E$ is not contained in the union $\bigcup_{i=1}^cH_i$.
\end{lemma}
\begin{proof}
The idea has already appeared in \cite[Lemma 6.1]{K03}. We may assume that $w_1,\ldots,w_c$ have no common divisor. One computes that
\begin{align*}
\ord_Ex_i=\ord_EH_i+\ord_Fx_i\cdot\ord_EF=\ord_EH_i+w_i\ord_EF.
\end{align*}
The $\ord_EH_i$ is positive iff the centre $c_Y(E)$ lies on $H_i$. Since the intersection $\bigcap_{i=1}^cH_i$ is empty, at least one of $\ord_EH_i$ is zero. Because $(\ord_Ex_1,\ldots,\ord_Ex_c)$ is parallel to $(w_1,\ldots,w_c)$, one concludes that $\ord_EH_i$ is zero for every $i$, which proves the assertion.
\end{proof}
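For instance, on the germ $P\in X$ of a smooth surface with a regular system $x_1,x_2$ of parameters, let $F$ be the divisor obtained by the blow-up of $X$ at $P$, that is, $(w_1,w_2)=(1,1)$, and let $E$ be the divisor obtained by the weighted blow-up with $\wt(x_1,x_2)=(2,1)$. Then $(\ord_Ex_1,\ord_Ex_2)=(2,1)$ is not parallel to $(1,1)$, and one can check that the centre $c_Y(E)$ is the intersection point of $F$ with $H_1$.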
The next lemma plays a central role in the proof of Theorem \ref{thm:second}(\ref{itm:half}).
\begin{lemma}\label{lem:half}
Let $C$ be the spectrum of the ring of formal power series in one variable over $k$ and $P$ be its closed point. Let $X\to C$ be a smooth projective morphism of relative dimension one and $f$ be its fibre at $P$. Let $\Delta$ be an effective $\mathbf{R}$-divisor on $X$, $Q$ be a closed point in $f$, and $t$ be a positive real number. Suppose that
\begin{itemize}
\item
$(K_X+\Delta)\cdot f=0$,
\item
$\mld_f(X,\Delta)=1$, and
\item
$\mld_Q(X,\Delta+tf)=0$.
\end{itemize}
Then $t$ is at least one-half. Moreover if $t$ equals one-half, then $\mld_Q(X,\Delta+sf)=1-2s$ for any non-negative real number $s$ at most one-half.
\end{lemma}
\begin{proof}
\textit{Step} 1.
Let $E$ be a divisor over $X$ which computes $\mld_Q(X,\Delta+tf)=0$. We define the coprime positive integers $w_1$ and $w_2$ so that the vector $(w_1,w_2)$ is parallel to $(\ord_Ef,\ord_E\mathfrak{n})$, where $\mathfrak{n}$ is the maximal ideal in $\mathscr{O}_X$ defining $Q$. Take a regular system $x_1,x_2$ of parameters in $\mathscr{O}_{X,Q}$ such that $x_1$ defines $f$ and $x_2$ is a general member in $\mathfrak{n}$. We claim that the divisor $F$ obtained by the weighted blow-up $Y\to X$ with $\wt(x_1,x_2)=(w_1,w_2)$ computes $\mld_Q(X,\Delta+tf)$.
This claim can be verified in the same way as in \cite{K17}. Assuming that $a_F(X,\Delta+tf)$ is positive, we shall derive a contradiction. For $i=1,2$, let $H_i$ be the strict transform of the divisor defined on $X$ by $x_i$. By Lemma \ref{lem:parallel}, the centre on $Y$ of $E$ would be a closed point $R$ in $F\setminus(H_1+H_2)$. The pull-back of $(X,\Delta+tf)$ is $(Y,bF+\Delta_Y+tH_1)$ in which $\Delta_Y$ is the strict transform of $\Delta$ and $b=1-a_F(X,\Delta+tf)<1$. Thus $(Y,F+\Delta_Y)$ is not lc about $R$, so $(F,\Delta_Y|_F)$ is not lc about $R$ by inversion of adjunction. Remark that this inversion of adjunction on $R\in Y$ holds by Lemma \ref{lem:pia} because $Y\to X$ is the base change of the weighted blow-up of $\Spec k[x_1,x_2]$ with $\wt(x_1,x_2)=(w_1,w_2)$. This means that $\ord_R(\Delta_Y|_F)$ is greater than one.
One computes that
\begin{align*}
1&=-(K_Y+bF+\Delta_Y+tH_1)\cdot F+1\\
&\le-(K_Y+bF+tH_1)\cdot F-\ord_R(\Delta_Y|_F)+1\\
&<((w_1+w_2-1)+b-tw_1)(-F^2)=\frac{1}{w_1}+\frac{1-t}{w_2}-\frac{1-b}{w_1w_2}.
\end{align*}
Together with $w_1\ge w_2$ and $b<1$, one would obtain that $w_2=1$ and $tw_1<b$. But then $a_F(X,\Delta)=a_F(X,\Delta+tf)+t\ord_Ff=(1-b)+tw_1<1$, which contradicts that $\mld_f(X,\Delta)=1$.
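For the reader's convenience, we record the standard numerics of the weighted blow-up used in the display above. Writing $\varphi\colon Y\to X$ for the weighted blow-up at $Q$ with $\wt(x_1,x_2)=(w_1,w_2)$, where $w_1$ and $w_2$ are coprime, the pull-back of the divisor defined by $x_i$ equals $H_i+w_iF$, and
\begin{align*}
K_Y=\varphi^*K_X+(w_1+w_2-1)F,\qquad F^2=-\frac{1}{w_1w_2},\qquad F\cdot H_1=\frac{1}{w_2},\qquad F\cdot H_2=\frac{1}{w_1}.
\end{align*}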
\medskip
\textit{Step} 2.
We have seen that $F$ computes $\mld_Q(X,\Delta+tf)=0$. Then $a_F(X,\Delta)=t\ord_Ff=tw_1$. Since $\ord_Q\Delta\le1$ by $\mld_f(X,\Delta)=1$, one has that $\ord_F\Delta\le w_1$. Thus,
\begin{align*}
w_2\le w_1+w_2-\ord_F\Delta=a_F(X,\Delta)=tw_1.
\end{align*}
By $(K_X+\Delta)\cdot f=0$, one has that $(\Delta\cdot f)=2$. Hence
\begin{align*}
w_2^{-1}\ord_F\Delta=(\ord_F\Delta)(F\cdot H_1)\le(\Delta_Y+(\ord_F\Delta)F)\cdot H_1=(\Delta\cdot f)=2,
\end{align*}
where the inequality follows from the fact that $f$ does not appear in $\Delta$ by $\mld_f(X,\Delta)=1$. Thus $\ord_F\Delta\le2w_2$ and
\begin{align*}
w_1-w_2\le w_1+w_2-\ord_F\Delta=a_F(X,\Delta)=tw_1,
\end{align*}
that is, $(1-t)w_1\le w_2$.
We have obtained that $(1-t)w_1\le w_2\le tw_1$. Therefore $t\ge1/2$, and moreover if $t=1/2$, then $w_1=2w_2$ so $(w_1,w_2)=(2,1)$.
\medskip
\textit{Step} 3.
Suppose that $t=1/2$. Let $s$ be a non-negative real number at most one-half. It is necessary to show that $\mld_Q(X,\Delta+sf)=1-2s$. One has that $\mld_Q(X,\Delta+(1/2)f)=0$ and it is computed by $F$. In particular, $a_F(X,\Delta)=2^{-1}\ord_Ff=1$. By $\mld_f(X,\Delta)=1$, one obtains that $\mld_Q(X,\Delta)=1$ and it is also computed by $F$. Then Lemma \ref{lem:mld}(\ref{itm:mldequal}) provides that
\begin{align*}
\mld_Q(X,\Delta+sf)=(1-2s)\mld_Q(X,\Delta)+2s\mld_Q(X,\Delta+(1/2)f)=1-2s.
\end{align*}
\end{proof}
\begin{proposition}\label{prp:half}
Let $X$ be the spectrum of the ring of formal power series in three variables over a field $K$ of characteristic zero and $P$ be its closed point. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal such that $\mld_P(X,\mathfrak{a})$ equals one and such that $(X,\mathfrak{a})$ has an lc centre of dimension one. Then one of the following holds for the maximal ideal $\mathfrak{m}$ in $\mathscr{O}_X$.
\begin{enumerate}
\item\label{itm:eqhalf}
The $\mld_P(X,\mathfrak{a}\mathfrak{m}^s)$ equals $1-2s$ for any non-negative real number $s$ at most one-half.
\item\label{itm:grthanhalf}
The $\mld_P(X,\mathfrak{a}\mathfrak{m}^{1/2})$ is positive.
\end{enumerate}
\end{proposition}
\begin{proof}
We may assume that $K$ is algebraically closed. By Theorem \ref{thm:wbu}, there exist a divisor $E$ over $X$ and a part $x_1,x_2$ of a regular system of parameters in $\mathscr{O}_X$ such that $a_E(X,\mathfrak{a})=0$ and such that $E$ is obtained by the weighted blow-up $Y\to X$ with $\wt(x_1,x_2)=(w_1,w_2)$ for some $w_1,w_2$. We may assume that the weak transform $\mathfrak{a}_Y$ of $\mathfrak{a}$ is defined. Let $f$ be the fibre of $E\to X$ at $P$ and $\Delta$ be the different on $E$ defined by $K_Y+E|_E=K_E+\Delta$. Take an $\mathbf{R}$-divisor $A_X=e^{-1}\sum_{i=1}^eA_i$ on $X$ for large $e$ in which $A_i$ are defined by general members in $\mathfrak{a}$. Let $A$ be the strict transform on $Y$ of $A_X$ and set $A_E=A|_E$. By Proposition \ref{prp:piaE}, one obtains that
\begin{align*}
\mld_P(X,\mathfrak{a}\mathfrak{m}^s)=\mld_f(E,\Delta+A_E+sf)
\end{align*}
for any non-negative real number $s$. This equality for $s=0$ supplies that $\mld_f(E,\Delta+A_E)=1$. In particular, $f$ does not appear in $\Delta+A_E$.
Let $t$ be the positive real number such that $\mld_P(X,\mathfrak{a}\mathfrak{m}^t)=0$. If $t>1/2$, then the case (\ref{itm:grthanhalf}) holds. Suppose that $t\le1/2$. Then $\mld_f(E,\Delta+A_E+tf)=0$ but $\mld_{\eta_f}(E,\Delta+A_E+tf)=1-t>0$, so there exists a closed point $Q$ in $f$ such that $\mld_Q(E,\Delta+A_E+tf)=0$. With $(K_E+\Delta+A_E)\cdot f=0$, one can apply Lemma \ref{lem:half} to $(E,\Delta+A_E)$, which derives that $t=1/2$ and $\mld_f(E,\Delta+A_E+sf)=1-2s$. Hence the case (\ref{itm:eqhalf}) holds.
\end{proof}
\begin{proof}[Proof of Theorem \textup{\ref{thm:second}(\ref{itm:half})}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and such that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})$ is not positive. We construct a generic limit $\mathsf{a}$ of $\mathcal{S}$. We use the notation in Section \ref{sct:limit}, so $\mathsf{a}$ is the generic limit with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$, and $\mathsf{a}$ is an ideal on $\hat P\in\hat X$. We let $\hat\mathfrak{m}$ denote the maximal ideal in $\mathscr{O}_{\hat X}$. By Lemma \ref{lem:limtonak} and Remarks \ref{rmk:limit}(\ref{itm:limitineq}) and \ref{rmk:independent}, it is enough to show the inequality $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)\le\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. By Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By Remark \ref{rmk:limit}(\ref{itm:limitineq}) again, one obtains that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$ from the canonicity of $(X,\mathfrak{a}_i^q)$.
Let $t$ be the positive rational number such that $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^t)=0$. By Theorem \ref{thm:nonpos}, we may assume that $s<t$. By Remark \ref{rmk:limit}(\ref{itm:limitresult}), one has that $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^t)=0$ for any $i\in N_{l_0}$ after replacing $\mathcal{F}$ with a subfamily. In particular, $t\le1/2$ by $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})\le0$. Applying Proposition \ref{prp:half} to $(\hat X,\mathsf{a}^q)$, one obtains that $t=1/2$ and $\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s)=1-2s$. Thus $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})=0$. By Lemma \ref{lem:mld}(\ref{itm:mldconvex}), one obtains that
\begin{align*}
\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)&\ge(1-2s)\mld_P(X,\mathfrak{a}_i^q)+2s\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^{1/2})\\
&\ge1-2s=\mld_{\hat P}(\hat X,\mathsf{a}^q\hat\mathfrak{m}^s).
\end{align*}
\end{proof}
\begin{proof}[Proof of Corollary \textup{\ref{crl:main}}]
We may assume that $\mathfrak{a}$ is $\mathfrak{m}$-primary. By Theorems \ref{thm:terminal}(\ref{itm:terminal}) and \ref{thm:second}(\ref{itm:half}), we have only to consider ideals $\mathfrak{a}$ such that $\mld_P(X,\mathfrak{a}^q)$ equals one and such that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. By Lemma \ref{lem:mld}(\ref{itm:mldconvex}), such $\mathfrak{a}$ satisfies that
\begin{align*}
\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\ge\Bigl(1-\frac{2}{n}\Bigr)\mld_P(X,\mathfrak{a}^q)+\frac{2}{n}\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})>1-\frac{2}{n},
\end{align*}
whence $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\ge1-1/n$ since $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})$ belongs to $n^{-1}\mathbf{Z}$.
Let $E$ be an arbitrary divisor over $X$ which computes $\mld_P(X,\mathfrak{a}^q)$. Then
\begin{align*}
1-\frac{1}{n}\le\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})\le a_E(X,\mathfrak{a}^q\mathfrak{m}^{1/n})=a_E(X,\mathfrak{a}^q)-\frac{1}{n}\ord_E\mathfrak{m}\le1-\frac{1}{n},
\end{align*}
so $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/n})=1-1/n$ and it is computed by $E$. By Lemma \ref{lem:mld}(\ref{itm:mldequal}), $E$ also computes $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^s)$ for any non-negative real number $s$ at most $1/n$. Thus Corollary \ref{crl:main} follows from Theorem \ref{thm:terminal}(\ref{itm:zero}).
\end{proof}
By a similar argument, one can prove Conjecture \ref{cnj:product} in the opposite case when the lc threshold of the maximal ideal is at least one.
\begin{proof}[Proof of Theorem \textup{\ref{thm:second}(\ref{itm:one})}]
Let $\mathcal{S}=\{\mathfrak{a}_i\}_{i\in\mathbf{N}}$ be an arbitrary sequence of ideals on $X$ such that $(X,\mathfrak{a}_i^q)$ is canonical and such that $(X,\mathfrak{a}_i^q\mathfrak{m})$ is lc. It is enough to show the existence of a positive integer $l$ such that for infinitely many indices $i$, there exists a divisor $E_i$ over $X$ which computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ and satisfies the equality $a_{E_i}(X)=l$. Note Remark \ref{rmk:independent}. As in the proof of Theorem \ref{thm:second}(\ref{itm:half}), we construct a generic limit $\mathsf{a}$ on $\hat P\in\hat X$ of $\mathcal{S}$ with respect to a family $\mathcal{F}=(Z_l,\mathfrak{a}(l),N_l,s_l,t_l)_{l\ge l_0}$ of approximations of $\mathcal{S}$. By Lemma \ref{lem:limtonak} and Theorem \ref{thm:grthan1}, we may assume that $(\hat X,\mathsf{a}^q)$ has the smallest lc centre of dimension one, in which $\mld_{\hat P}(\hat X,\mathsf{a}^q)\le1$. By Remark \ref{rmk:limit}(\ref{itm:limitineq}), one has that $\mld_{\hat P}(\hat X,\mathsf{a}^q)=1$.
Let $\hat E$ be a divisor over $\hat X$ which computes $\mld_{\hat P}(\hat X,\mathsf{a}^q)$. As in Remark \ref{rmk:descend}(\ref{itm:descendE}), replacing $\mathcal{F}$ with a subfamily, one can descend $\hat E$ to a divisor $E_l$ over $X\times Z_l$ for any $l\ge l_0$. Writing $E_i$ for a component of the fibre of $E_l$ at $s_l(i)\in Z_l$, one may assume that $a_{E_i}(X)=a_{\hat E}(\hat X)$ and $a_{E_i}(X,\mathfrak{a}_i^q)=1$ for any $i\in N_l$. Then for any $i\in N_{l_0}$, $\mld_P(X,\mathfrak{a}_i^q)=1$ by the canonicity of $(X,\mathfrak{a}_i^q)$ and it is computed by $E_i$. By the log canonicity of $(X,\mathfrak{a}_i^q\mathfrak{m})$, the $\ord_{E_i}\mathfrak{m}$ must equal one and $E_i$ also computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m})=0$. Therefore, $E_i$ computes $\mld_P(X,\mathfrak{a}_i^q\mathfrak{m}^s)$ by Lemma \ref{lem:mld}(\ref{itm:mldequal}).
\end{proof}
\section{Rough classification of crepant divisors}
By Theorems \ref{thm:terminal}(\ref{itm:terminal}) and \ref{thm:second}(\ref{itm:half}), for Conjecture \ref{cnj:product} one has only to consider ideals $\mathfrak{a}$ such that $\mld_P(X,\mathfrak{a}^q)$ equals one and such that $\mld_P(X,\mathfrak{a}^q\mathfrak{m}^{1/2})$ is positive. Then every divisor $E$ over $X$ computing $\mld_P(X,\mathfrak{a}^q)$ satisfies that $\ord_E\mathfrak{m}$ equals one. We close this paper by providing a rough classification of $E$.
\begin{theorem}\label{thm:crepant}
Let $P\in X$ be the germ of a smooth threefold and $\mathfrak{m}$ be the maximal ideal in $\mathscr{O}_X$ defining $P$. Let $\mathfrak{a}$ be an $\mathbf{R}$-ideal on $X$ such that $\mld_P(X,\mathfrak{a})$ equals one. Let $E$ be a divisor over $X$ which computes $\mld_P(X,\mathfrak{a})$ such that $\ord_E\mathfrak{m}$ equals one. Then there exist a regular system $x_1,x_2,x_3$ of parameters in $\mathscr{O}_{X,P}$ and positive integers $w_1,w_2$ with $w_1\ge w_2$ such that for the weighted blow-up $Y$ of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$, one of the following cases holds by identifying the exceptional divisor $F$ with $\mathbf{P}(w_1,w_2,1)$ with weighted homogeneous coordinates $x_1,x_2,x_3$.
\begin{enumerate}[label=\textup{\arabic*.},ref=\arabic*]
\item\label{cas:toric}
$E$ equals $F$ as a divisor over $X$.
\item\label{cas:hypers}
The centre $c_Y(E)$ is the curve on $F$ defined by $x_1x_3^p+x_2^q$ for some positive integers $p$ and $q$ satisfying that $w_1+p=qw_2\le w_1+w_2$.
\item\label{cas:saturated}
The centre $c_Y(E)$ is the curve on $F$ defined by $x_1x_2+x_3^{w_1+w_2}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textit{Step} 1.
Let $w_1$ be the maximum of $\ord_Ex_1$ for all elements $x_1$ in $\mathfrak{m}\setminus\mathfrak{m}^2$. To see the existence of this maximum, let $Z\to X$ be the birational morphism from a smooth threefold $Z$ on which $E$ appears as a divisor. Applying Zariski's subspace theorem \cite[(10.6)]{Ab98} to $\mathscr{O}_{X,P}\subset\mathscr{O}_{Z,Q}$ for a closed point $Q$ in $E$, one has an integer $w$ such that $\mathscr{O}_Z(-wE)_Q\cap\mathscr{O}_{X,P}\subset\mathfrak{m}^2$. Then $\ord_Ex_1$ is less than $w$ for any $x_1\in\mathfrak{m}\setminus\mathfrak{m}^2$, so $w_1$ exists. Fix $x_1$ for which $\ord_Ex_1$ attains the maximum $w_1$.
Then let $w_2$ be the maximum of $\ord_Ex_2$ for those $x_2$ such that $x_1,x_2$ form a part of a regular system of parameters in $\mathscr{O}_{X,P}$. Note that $w_1\ge w_2$. Fix $x_2$ for which $\ord_Ex_2$ attains the maximum $w_2$, and take a general member $x_3$ in $\mathfrak{m}$. Note that $\ord_Ex_3=\ord_E\mathfrak{m}=1$. The $x_1,x_2,x_3$ form a regular system of parameters in $\mathscr{O}_{X,P}$. Let $Y$ be the weighted blow-up of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$ and $F$ be its exceptional divisor. We identify $F$ with $\mathbf{P}(w_1,w_2,1)$ with weighted homogeneous coordinates $x_1,x_2,x_3$.
\medskip
\textit{Step} 2.
Let $L$ be an arbitrary locus in $F$ defined by a weighted homogeneous polynomial of the form either $u_1$, $u_2$ or $x_3$, where
\begin{itemize}
\item
$u_1=x_1+\sum_{i=0}^\rd{ w_1/w_2}\lambda_ix_2^ix_3^{w_1-iw_2}$ for some $\lambda_i\in k$,
\item
$u_2=x_2+\lambda x_3^{w_2}$ for some $\lambda\in k$.
\end{itemize}
We claim that the centre $c_Y(E)$ is not contained in any such $L$. Indeed by Lemma \ref{lem:parallel}, $c_Y(E)$ is not contained in the locus defined by $x_1x_2x_3$. In particular, $\ord_EF=\ord_E\mathfrak{m}=1$. If $L$ is defined by $u_i$ for $i=1$ or $2$, then the $u_i$, as an element in $\mathscr{O}_X$, satisfies that
\begin{align*}
w_i\ge\ord_Eu_i\ge\ord_EL+\ord_Fu_i\cdot\ord_EF=\ord_EL+w_i.
\end{align*}
Thus $\ord_EL=0$, meaning that $c_Y(E)\not\subset L$. One also has that $\ord_Ex_i=\ord_Eu_i$. By Remark \ref{rmk:wbu}, we are free to replace $x_1$ with $u_1$ as well as $x_2$ with $u_2$ for the regular system $x_1,x_2,x_3$ of parameters constructed in Step 1.
\medskip
\textit{Step} 3.
Since an arbitrary closed point in $F$ lies on some $L$ in Step 2, the $c_Y(E)$ is either a curve or $F$ itself. The case $c_Y(E)=F$ is nothing but the case \ref{cas:toric}. We shall investigate the case when $c_Y(E)$ is an irreducible curve $C$ other than any $L$.
Let $d$ be the weighted degree of $C$ in $F\simeq\mathbf{P}(w_1,w_2,1)$. We may assume that the weak transform $\mathfrak{a}_Y$ of $\mathfrak{a}$ is defined, so $(Y,bF,\mathfrak{a}_Y)$ is the pull-back of $(X,\mathfrak{a})$ where $b=1-a_F(X,\mathfrak{a})\le0$. Since
\begin{align*}
a_E(Y,F,\mathfrak{a}_Y)=a_E(Y,bF,\mathfrak{a}_Y)-(1-b)\ord_EF\le a_E(X,\mathfrak{a})-(1-b)=b\le0,
\end{align*}
the $(Y,F,\mathfrak{a}_Y)$ is not plt about $\eta_C$. Thus $(F,\mathfrak{a}_Y\mathscr{O}_F)$ is not klt about $\eta_C$ by inversion of adjunction, that is, $\ord_C(\mathfrak{a}_Y\mathscr{O}_F)\ge1$. Thus the strict transform $A_Y$ of the $\mathbf{R}$-divisor on $X$ defined by a general member in $\mathfrak{a}$ satisfies the inequality $A_Y|_F\ge C$. One computes that
\begin{align*}
d=w_1w_2C\cdot(-F)&\le w_1w_2A_Y|_F\cdot(-F)=w_1w_2(\ord_F\mathfrak{a})F^3\\
&=\ord_F\mathfrak{a}=w_1+w_2+1-a_F(X,\mathfrak{a})\le w_1+w_2.
\end{align*}
\medskip
\textit{Step} 4.
Let $f$ be the weighted homogeneous polynomial in $x_1,x_2,x_3$ defining $C$, which has weighted degree $d\le w_1+w_2$. Since any weighted homogeneous polynomial in $x_2,x_3$ is decomposed as the product of polynomials of form $x_3$ or $x_2+\lambda x_3^{w_2}$ for some $\lambda\in k$, by Step 2 and the irreducibility of $C$, there exists a monomial which involves $x_1$ and appears in $f$.
Suppose that $d<w_1+w_2$. Then $x_1x_3^p$ is the only monomial of weighted degree $d$ involving $x_1$, in which $p=d-w_1\ge0$. The $p$ must be positive by Step 2, so $1\le p<w_2$. The $f$ is, up to constant, written as
\begin{align*}
f=x_1x_3^p+\sum_{i=0}^q\lambda_ix_2^ix_3^{w_1+p-iw_2}=\Bigl(x_1+\sum_{i=0}^{q-1}\lambda_ix_2^ix_3^{w_1-iw_2}\Bigr)x_3^p+\lambda_qx_2^qx_3^{w_1+p-qw_2}
\end{align*}
for some $\lambda_i\in k$, where $q=\rd{(w_1+p)/w_2}$. Since $f$ is irreducible, one has that $\lambda_q\neq0$ and $w_1+p=qw_2$. Replacing $f$ with $\lambda_q^{-1}f$ and $x_1$ with $\lambda_q^{-1}(x_1+\sum_{i=0}^{q-1}\lambda_ix_2^ix_3^{w_1-iw_2})$, $f$ is expressed as $x_1x_3^p+x_2^q$, which is the case \ref{cas:hypers}.
Suppose that $d=w_1+w_2$ and $w_1>w_2$. Then $x_1x_2$ and $x_1x_3^{w_2}$ are the only monomials of weighted degree $d$ involving $x_1$. If only $x_1x_3^{w_2}$ appears in $f$, then the case \ref{cas:hypers} holds by the same discussion as in the case $d<w_1+w_2$. If $x_1x_2$ appears in $f$, then the part in $f$ involving $x_1$ is, up to constant, written as $x_1(x_2+\lambda x_3^{w_2})$ for some $\lambda\in k$. Replacing $x_2$ with $x_2+\lambda x_3^{w_2}$, one may write $f$ as
\begin{align*}
f=x_1x_2+\sum_{i=0}^q\lambda_ix_2^ix_3^{w_1+w_2-iw_2}=\Bigl(x_1+\sum_{i=1}^q\lambda_ix_2^{i-1}x_3^{w_1+w_2-iw_2}\Bigr)x_2+\lambda_0x_3^{w_1+w_2}
\end{align*}
for some $\lambda_i\in k$, where $q=\rd{w_1/w_2}+1$. One has that $\lambda_0\neq0$ by the irreducibility of $f$. Replacing $f$ with $\lambda_0^{-1}f$ and $x_1$ with $\lambda_0^{-1}(x_1+\sum_{i=1}^q\lambda_ix_2^{i-1}x_3^{w_1+w_2-iw_2})$, $f$ is expressed as $x_1x_2+x_3^{w_1+w_2}$, which is the case \ref{cas:saturated}.
Finally, suppose that $d=2w$ and $w_1=w_2=w$ for some $w$. If $w=1$, then $C$ must be a conic in $F\simeq\mathbf{P}^2$, so the case \ref{cas:saturated} holds after replacing $x_1,x_2,x_3$. If $w\ge2$, then after replacing $x_1,x_2$ with their suitable linear combinations, we may assume that the part in $f$ not involving $x_3$ is either $x_1x_2$ or $x_2^2$. In the first case, $f$ is written as $f=(x_1+\lambda_1x_3^w)(x_2+\lambda_2x_3^w)+\lambda_3x_3^{2w}$ for some $\lambda_1,\lambda_2,\lambda_3\in k$. Then $\lambda_3\neq0$ by the irreducibility. Replacing $f$ with $\lambda_3^{-1}f$, $x_1$ with $\lambda_3^{-1}(x_1+\lambda_1x_3^w)$, and $x_2$ with $x_2+\lambda_2x_3^w$, $f$ is expressed as $x_1x_2+x_3^{2w}$, which is the case \ref{cas:saturated}. In the second case, $f$ is written as $f=(\lambda_1x_1+\lambda_2x_2+\lambda_3x_3^w)x_3^w+x_2^2$ for some $\lambda_1,\lambda_2,\lambda_3\in k$. Then $\lambda_1\neq0$ by the irreducibility. Replacing $x_1$ with $\lambda_1x_1+\lambda_2x_2+\lambda_3x_3^w$, $f$ is expressed as $x_1x_3^w+x_2^2$, which is the case \ref{cas:hypers}.
\end{proof}
One can compute the lc threshold of the maximal ideal in the case \ref{cas:saturated}.
\begin{proposition}
Suppose the case \textup{\ref{cas:saturated}} in Theorem \textup{\ref{thm:crepant}}. Then $(X,\mathfrak{a}\mathfrak{m})$ is lc.
\end{proposition}
\begin{proof}
We keep the notation in Theorem \ref{thm:crepant}: $Y$ is the weighted blow-up of $X$ with $\wt(x_1,x_2,x_3)=(w_1,w_2,1)$ and $F$ is its exceptional divisor. For $i=1,2,3$, let $H_i$ be the strict transform of the divisor defined by $x_i$. Let $Q_i$ be the closed point in $F$ which lies on $H_j\cap H_k$ for a permutation $\{i,j,k\}$ of $\{1,2,3\}$. Let $C$ denote the centre on $Y$ of $E$, which is the curve defined in $F\simeq\mathbf{P}(w_1,w_2,1)$ by $x_1x_2+x_3^{w_1+w_2}$ of weighted degree $d=w_1+w_2$ in our case \ref{cas:saturated}. Let $A$ be the $\mathbf{R}$-divisor defined by a general member in $\mathfrak{a}$ and $A_Y$ be its strict transform on $Y$.
We have seen in Step 3 of the proof of Theorem \ref{thm:crepant} that
\begin{align*}
d=w_1w_2C\cdot(-F)&\le w_1w_2A_Y|_F\cdot(-F)=w_1+w_2+1-a_F(X,\mathfrak{a})\le w_1+w_2.
\end{align*}
Since $d=w_1+w_2$, the above inequalities are eventually equalities, whence $a_F(X,\mathfrak{a})=1$ and $A_Y|_F=C$.
The triple $(Y,F+A_Y+H_3)$ is crepant to $(X,A+H_{3X})$, where $H_{3X}$ is the divisor defined by $x_3$. Thus it is enough to show the log canonicity of $(Y,F+A_Y+H_3)$. Let $\Delta$ be the different on $F$ defined by $K_Y+F|_F=K_F+\Delta$. By inversion of adjunction, the log canonicity of $(Y,F+A_Y+H_3)$ is equivalent to that of $(F,\Delta+A_Y|_F+H_3|_F)$.
Let $g$ be the greatest common divisor of $w_1$ and $w_2$. Let $L\simeq\mathbf{P}^1$ be the line in $F$ defined by $x_3$. Since $Y$ has a quotient singularity of type $\frac{1}{g}(1,-1)$ at $\eta_L$, the different $\Delta$ equals $(1-g^{-1})L$ as in Example \ref{exl:different}. Together with $A_Y|_F=C$ and $H_3|_F=g^{-1}L$, one has that $\Delta+A_Y|_F+H_3|_F=C+L$.
The assertion is reduced to the log canonicity of $(F,C+L)$, which can be checked directly by using the explicit expression $(x_1x_2+x_3^{w_1+w_2})x_3$ of the defining weighted polynomial of $C+L$. Along $L$, one can use inversion of adjunction again, which tells that $(K_F+C+L)|_L=K_L+Q_1+Q_2$.
\end{proof}
\section*{Acknowledgements}
I should like to thank Professor M. Musta\c{t}\u{a} for introducing me to the approach by using the generic limit of ideals to the ACC conjecture for minimal log discrepancies. I should also like to thank Dr.\ Y. Nakamura for the discussions on the relationship between (\ref{itm:madic}), (\ref{itm:nakamura}) and (\ref{itm:limit}) in Conjecture \ref{cnj:equiv}.
|
{
"timestamp": "2018-03-08T02:04:49",
"yymm": "1803",
"arxiv_id": "1803.02539",
"language": "en",
"url": "https://arxiv.org/abs/1803.02539"
}
|
\section{INTRODUCTION}
Recently, robotics researchers have focused increasingly on learnable robots that use advanced deep learning schemes \cite{Cangelosi,levineabbel,hwangsystem,ogata}. An obvious benefit is that learning by the robots themselves can ease the difficulty of describing precise models of the robots and their environment by the users. The most popular approach in applying deep learning to robots is to use convolutional neural networks (CNNs) for developing visuomotor mappings, possibly within a reinforcement learning framework \cite{peter,levineabbel}. Another interesting approach applies the framework of predictive coding \cite{rao,friston} to robot learning problems \cite{book, taninolfi,hwangsystem,ogata,nagai,yamashita}. The predictive coding framework allows robots to develop task-specific internal models by extracting the latent causality between intention states and the resultant perceptual sequences through learning of accumulated sensory-motor experiences. Yamashita \& Tani \cite{yamashita} and Noda et al. \cite{ogata} showed that a set of skilled behaviors, such as manipulating objects, can be learned and robustly generated using the predictive mechanism in this framework. Hwang et al. \cite{hwangsystem} showed that imitation learning using pixel-level dynamic vision can be performed successfully with a predictive coding type deep visuomotor RNN model. Although current applications of predictive coding are limited to simple prediction of action outcomes, the framework can be applied to more cognitively challenging problems involving optimal action planning and its dynamic execution for achieving arbitrarily given goals. The current paper presents a first step toward such research goals by reporting a set of results from our robotic experiments.
The basic ideas and trials of the current study are briefly described in the following. The predictive coding scheme is implemented in a neural network model, referred to as the predictive coding type deep visuomotor recurrent neural network model (P-DVMRNN). A real arm robot with vision is tutored on object-directed behavior generation tasks, such as grasping an object and placing it on a goal target sheet. The tutoring is repeated to teach a set of different trajectories covering large variation in positions, such as those of the object and the target goal sheet. Tutoring each goal-directed trajectory for a particular task provides the robot with a related multimodal perceptual experience, consisting of pixel-level vision and proprioception in terms of joint angles, which extend in time in a synchronized manner.
A set of visuo-proprioceptive sequences obtained through tutoring of a particular task is used for off-line training of the P-DVMRNN model. The P-DVMRNN model learns to regenerate each training trajectory by inferring the corresponding intention state. Here, the intention state, which is represented by the internal neural activity in the model network, encodes how the robot intends to interact with the environment. It is noted that each intention state is self-determined in the course of learning. Consequently, after adequate training with good generalization, it is expected that a causal mapping from the intention state space to the corresponding perceptual sequence space can be developed in the model network. After successful learning, the model network can generate a mental image of the visuo-proprioceptive sequence corresponding to the intention state inferred for each tutoring sequence. Moreover, it is assumed that an intention state located near those inferred during training can generate an analogous sequence, possibly by interpolating the trained trajectories, provided that learning generalizes successfully.
Let us consider a further extension of the scheme to goal-directed planning, which is the main objective of the current study. Suppose that a goal state is given in terms of the corresponding perceptual state, such as a visual frame image of the robot putting an object on a goal sheet. Then, the problem of planning is to infer the intention state which can achieve the specified perceptual state at the distal step, by inversely applying the acquired causality between the intention and the perceptual sequence. Although it would be trivial to generate trajectories corresponding to previously learned goal states, the same is not assured for unlearned goal states. We examine this issue by conducting robotic experiments of varying task difficulty. Although all the robot tasks considered in the current study might be relatively simple, our trial should be the first to apply the predictive coding framework to learning-based robot action planning using a deep learning scheme. The paper also focuses on some difficulties we encountered in terms of generalization in planning and discusses how the problem could be resolved by improving the scheme in future work.
\section{Method}
\label{sec:arch}
In the predictive coding framework, all three processes of learning, recognition, and generation can be conducted by means of prediction error minimization. Firstly, learning is a process of mapping between intention states and the resultant perceptual sequences by self-determining the corresponding intention state for each sequence, as well as the connectivity weights, so as to minimize the prediction error. When RNN models are used to implement the predictive coding scheme, as in the current study, the intention state can be represented by the initial states of the internal neural units, utilizing the initial sensitivity characteristics of the RNN dynamics. Recognition is a process of inversely inferring the corresponding intention state for a given target perceptual sequence. Finally, plan generation is to infer the intention state required to achieve a goal state given at the distal step. The inferred intention state is then used to generate a perceptual sequence reaching the goal state. Next, we show how this predictive coding idea can be implemented in the proposed neural network model.
\subsection{Neural Network Architecture}
Our network architecture (shown in Fig. \ref{arch}) uses a recurrent neural network (RNN) based on predictive coding \cite{rao} capable of learning, generating, and recognizing multi-modality perceptual sequence inputs. The network has two closely related paths dedicated to processing visual input and motor joint angles respectively. At each time step, visual input in the form of a frame captured from an RGB camera is provided to the network, as well as the corresponding motor joint angles from a robot arm. The visual image and joint angles are fed as inputs to the lowest layer of the network and processed through three layers, then finally merged in the highest layer. The outputs predicting both the next visual input and joint angles are generated based on the internal neural activity in the lowest layer. Each layer is only connected to its neighbors (above/below) and the adjacent visual/motor counterpart. These structural characteristics enable the network to process incoming data in a hierarchical manner \cite{hwangsystem,choi}.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.3]{IROS2018_figure-model}
\caption{Overall network architecture. The proposed predictive coding type deep recurrent neural network model dealing with visuomotor sequences. }
\label{arch}
\end{figure}
For the visual path, convolutional Long Short-term Memory (LSTM) networks \cite{convlstm} are used as the basic building blocks of the network in order to process spatial and temporal information simultaneously \cite{hwangsystem,jung,choi,convlstm}. In each path, there are two streams of information: top-down and bottom-up. The top-down connection projects the current prediction to the lower layers, while the bottom-up connection carries information from outside the network or errors between predictions and actual inputs. Top-down connections in the vision path utilize convolutional operations and pooling, while the bottom-up connections are implemented as a transposed convolution \cite{deconv}. The size of the feature maps for each layer is designed to be half that of the level below.
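To make the ConvLSTM building block concrete, the following is a minimal sketch of a convolutional LSTM cell, assuming a PyTorch-style implementation; the framework is not specified in this paper, and the sketch omits details such as the peephole terms of \cite{convlstm} and the exact layer sizes.
\begin{verbatim}
# Minimal convolutional LSTM cell (illustrative sketch only, not the
# authors' implementation).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=padding)

    def forward(self, x, state):
        h, c = state  # hidden and cell states, shape (B, hidden, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)),
                                 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update the cell state
        h = o * torch.tanh(c)           # new hidden state / layer output
        return h, (h, c)
\end{verbatim}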
For the motor path, which operates on lower dimensional data compared to the visual path, LSTM is used. Similar to the feature maps in the visual path, the number of neurons in the motor path decreases along the hierarchy in the network. In order to improve learning with low dimensional data, sparse encoding is utilized \cite{hwangsystem,yamashita}. In this work, each motor joint is represented by a 10 dimensional sparsely encoded vector.
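The exact sparse encoding is not spelled out above; one common choice in related work is a population code in which a scalar joint angle activates a few units with Gaussian receptive fields. The following hypothetical sketch encodes a normalised joint angle into a 10-dimensional vector and decodes it back; the centres and width are illustrative assumptions.
\begin{verbatim}
# Hypothetical sparse (population) encoding of one joint angle into
# 10 units; the exact encoding used in this work is not specified.
import numpy as np

CENTRES = np.linspace(-1.0, 1.0, 10)   # preferred angles of the units
SIGMA = 0.1                            # width of each receptive field

def encode(angle):
    """Normalised angle in [-1, 1] -> 10-dim activations summing to 1."""
    act = np.exp(-((angle - CENTRES) ** 2) / (2 * SIGMA ** 2))
    return act / act.sum()

def decode(vector):
    """Inverse mapping: expectation of the centres under the activations."""
    return float(np.dot(vector, CENTRES) / vector.sum())

assert abs(decode(encode(0.37)) - 0.37) < 0.05
\end{verbatim}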
The numbers of layers in the visual and motor pathways are identical, and as mentioned previously the layers at the same level in the two pathways are connected to each other horizontally. In Fig. \ref{arch}, the lateral connections at each level enable the whole network to exchange information between the two paths. Thus, the lateral connections are key for the network to closely couple the two modalities and maintain the link between a given visual input and the motor joint angles. The network forms the common internal states of the highest layer by recognizing both the visual input and the current motor state from bottom-up connections. Since the raw visuomotor information is processed hierarchically through multiple layers from the lowest layer to the highest layer, this internal state represents abstract information about the current environment as well as the robot's state. Based on this abstract representation, the model is able to predict future visuomotor input by projecting it towards the lowest layer, generating a pixel-level visual prediction and a sparsely encoded joint angle prediction.
A breakdown of each layer in the two network paths is shown in Table \ref{networkdetail}.
\begin{table}[]
\setlength\tabcolsep{5pt}
\centering
\caption{Breakdown of the size \& number of convolution feature maps (vision) and LSTM layers (motor) per layer}
\label{networkdetail}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& In/Out & L1 & L2 & L3 & L4 \\ \hline
\begin{tabular}[c]{@{}c@{}}Vision\\ Path\end{tabular} & 64$\times$64$\times$1 & 32$\times$32$\times$40 & 16$\times$16$\times$80 & 8$\times$8$\times$80 & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}4$\times$4$\times$12\\ Shared\end{tabular}} \\ \cline{1-5}
\begin{tabular}[c]{@{}c@{}}Motor\\ Path\end{tabular} & 40 & 1024 & 1024 & 16 & \\ \hline
\end{tabular}
\end{table}
\subsection{Training}
During training, our model learns to predict/generate a set of training sequences by inferring the corresponding intention states, represented by the initial states (IS) of the internal units for each sequence, as well as the connectivity weights, using back-propagation through time (BPTT) \cite{bptt} in the direction of minimizing the prediction error.
In this model, IS are self-determined for each training sequence.
\subsection{Inference of intention for novel sequences}
With the trained network parameters (e.g., weights and biases), it is expected that each training sequence can be regenerated using the corresponding IS in a closed-loop manner\footnote{Closed-loop: giving the previous time step's output as the current step's input, as opposed to Open-loop: input to each time step is given from ground truth data.}.
When a novel perceptual sequence pattern is given to the learned network, it can be recognized by inferring the corresponding IS values by means of error regression (ER) scheme for minimizing the prediction error without changing connectivity weights. When the learning can be done with sufficient generalization, it is generally assumed that a novel sequence pattern which is similar to a particular trained one tends to be inferred with a similar IS value.
For example, consider a scenario in which we wish to find the IS (\(h_0\)) which encodes a given sequence in a simple RNN. Since we have no information of the actual IS, we set the current IS to a random value and start searching. In this search, we first generate a prediction output using the randomly set IS in a closed-loop manner as noted previously. Given the random IS, the output (\(O_{1:T}\)) of this process is unlikely to match our target sequence (\(T_{1:T}\)), producing a prediction error (\(E_{prediction}\)).
\begin{equation}
E_{prediction} = \sum_{t=1}^T ||O_t - T_t||^2
\end{equation}
This prediction error (\(E_{prediction}\)) is then back-propagated through time (BPTT) \cite{bptt}. Unlike training, during the ER process, model parameters such as weights and biases are left unchanged. Only the IS is optimized for prediction error minimization. This process is iterated multiple times until the predicted output follows the target sequence by minimizing prediction error. Once the optimal IS is found, the network is able to generate (or decode) the corresponding sequence.
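As a concrete illustration of the ER procedure, the following PyTorch-style sketch optimizes only the IS of a trained recurrent cell against the closed-loop prediction error; the interfaces (\texttt{rnn}, \texttt{readout}) and optimiser settings are illustrative assumptions, not the authors' implementation.
\begin{verbatim}
# Error regression (ER): infer the initial state (IS) for a target
# sequence while keeping the trained weights fixed (illustrative sketch).
import torch

def error_regression(rnn, readout, target, hidden_size,
                     iters=200, lr=0.1):
    # target: (T, dim); rnn: trained cell mapping (x, (h, c)) -> (h, c);
    # readout: maps the hidden state back to the input space.
    h0 = torch.zeros(1, hidden_size, requires_grad=True)
    c0 = torch.zeros(1, hidden_size, requires_grad=True)
    opt = torch.optim.Adam([h0, c0], lr=lr)  # only the IS is optimised
    for _ in range(iters):
        opt.zero_grad()
        h, c = h0, c0
        x = target[0].unsqueeze(0)           # first frame is given
        loss = 0.0
        for t in range(1, target.shape[0]):
            h, c = rnn(x, (h, c))
            x = readout(h)                   # closed loop: feed back output
            loss = loss + ((x - target[t].unsqueeze(0)) ** 2).sum()
        loss.backward()                      # BPTT over the whole sequence
        opt.step()                           # updates h0 and c0 only
    return h0.detach(), c0.detach()
\end{verbatim}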
\subsection{Planning}
This subsection describes how goal-directed action plans can be generated by extending the ER scheme. Let us suppose that the robot waits for a goal to be specified while staying at a predefined home posture. We consider that a goal state is given in terms of its corresponding perceptual state, i.e., a visual state (\(V_{target, T}\)) and a joint angle state (\(M^j_{target,T}\), where \(j \in \{1,\ldots,J\}\) is the index of a joint and \(J\) is the number of joints). Then the problem to solve is to generate an optimal visuomotor sequence which rationally connects the perceptual state of the home posture at the initial step (\(V_{target,1}\) and \(M^j_{target,1}\)) with the one of the goal state.
Fig. \ref{plan1} presents the available information for making plans and Fig. \ref{plan2} shows the generated visuomotor sequence connecting initial step and goal step.
Because the model network can generate various possible visuomotor sequences by changing the IS based on learning, an optimal IS for generating such a sequence can be searched for by using the aforementioned ER scheme. The difference in the ER scheme for plan generation is that the target perceptual states are given only partially, namely at the initial step and the goal step.
Thus, the prediction error which will be used to optimize an IS by ER is given as follows:
\begin{equation} \label{eq:visionerror}
\begin{split}
E_{prediction} = &||V_{out,1} - V_{target,1}||^2 + ||V_{out,\hat{T}} - V_{target,T}||^2
\\&+ \sum_{j=1}^J KL(M^j_{out,1}|| M^j_{target,1})
\\&+ \sum_{j=1}^J KL(M^j_{out,T}|| M^j_{target,T})
\end{split}
\end{equation}
where \(V_{out,t}, V_{target,t}, M^j_{out,t}, M^j_{target,t}\) are the predicted visual output at step \(t\), the visual target at step \(t\), the \(j^{th}\) joint angle predicted output at step \(t\) and the \(j^{th}\) joint angle target at step \(t\), respectively. \(\hat{T}\) is the step at which the robot is predicted to produce the target output. The time step at which the robot would achieve the given goal state is not known in advance, so it may be different from the ground truth target step \(T\). Therefore, \(\hat{T}\) is inferred during the optimization process. During the ER process, based on the current IS, a closed-loop prediction is generated up to a predefined step \(T_{max}\) which is long enough to achieve the goal state. Then the ground truth visual target image \(V_{target,T}\) is compared against all generated prediction output frames (from \(V_{out,1}\) to \(V_{out,T_{max}}\)), yielding an error for each step. In order to promote the model to achieve the goal faster, in a more optimized way, the prediction error calculated at each time step is multiplied by a compensation factor \(1.01^{t}\). Among the predicted frames, the frame with the smallest compensated error with respect to \(V_{target,T}\) is set as \(V_{out,\hat{T}}\), and the corresponding step is taken as \(\hat{T}\).
As noted in Section \ref{sec:experiments}, this can result in a shorter sequence of steps to reach the goal.
In this work, as sparse coding is applied to the motor joint angles, Kullback-Leibler divergence (KL) is used to measure error between motor targets and outputs.
When the IS for the visual path is found by optimization, the robot is able to generate the motor joint angles associated with each predicted image. As shown in the network architecture, the two modalities of vision and motor are correlated through the lateral connections. Therefore, by searching for the optimal IS of one modality, it is possible to induce the other modality. Inducing motor joint angles is thus possible by optimizing the IS in the visual path, and vice versa.
The error from Equation~\ref{eq:visionerror} describes the case when both the visual and motor targets are given. However, when only a target from one modality is given, eliminating the corresponding target term from Equation \ref{eq:visionerror} yields the appropriate prediction error. For example, in case the motor target is not given, the term \(\sum_{j=1}^J KL(M^j_{out,T}|| M^j_{target,T})\) should be removed.
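To make the above procedure concrete, the following sketch assembles the planning error of Equation~\ref{eq:visionerror} from a closed-loop rollout, including the inference of the goal step \(\hat{T}\) with the \(1.01^{t}\) compensation; the tensor names and shapes are illustrative assumptions rather than the authors' implementation.
\begin{verbatim}
# Planning error: only the initial and goal perceptual states are given.
# V_out, M_out are closed-loop predictions from the network (sketch only).
import torch
import torch.nn.functional as F

def planning_loss(V_out, M_out, V_target_first, V_target_goal,
                  M_target_first=None, M_target_goal=None):
    # V_out: (T_max, C, H, W) frames, M_out: (T_max, J, 10) joint codes.
    T_max = V_out.shape[0]
    # Infer the goal step: frame with the smallest compensated error.
    comp = 1.01 ** torch.arange(1, T_max + 1, dtype=V_out.dtype)
    frame_err = ((V_out - V_target_goal.unsqueeze(0)) ** 2)
    frame_err = frame_err.flatten(1).sum(dim=1)
    t_hat = torch.argmin(comp * frame_err)
    loss = ((V_out[0] - V_target_first) ** 2).sum() + frame_err[t_hat]
    # Motor terms use KL divergence over the sparse joint codes, if given.
    if M_target_first is not None:
        loss = loss + F.kl_div((M_out[0] + 1e-8).log(),
                               M_target_first, reduction="sum")
    if M_target_goal is not None:
        loss = loss + F.kl_div((M_out[t_hat] + 1e-8).log(),
                               M_target_goal, reduction="sum")
    return loss, int(t_hat)
\end{verbatim}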
\begin{figure}[!t]
\centering
\subfloat[Given condition for planning]{\includegraphics[width=\linewidth]{plan0}
\label{plan1}}
\hfil
\subfloat[Generated plan for vision and motor]{\includegraphics[width=\linewidth]{plan1}
\label{plan2}}
\caption{Planning by error regression. For both (a) and (b), left figures show visual data and right graphs present motor joint angles}
\label{initBias}
\end{figure}
\section{Experiments}
\label{sec:experiments}
In this section, we describe the experimental procedures and results obtained using an arm robot and camera connected to our network. Three tasks with different behavioral complexity were considered. The goals of the three tasks were: 1) reaching a single point in the task space, 2) reaching two points sequentially in the task space, and 3) grasping an object and putting it on a goal target sheet in the task space\footnote{Videos can be found at \url{https://sites.google.com/site/academicpapersubmission/p-dvmrnn}}.
For the reaching tasks, our robot was configured to use 4 of its 7 joints, while in the grasping task, 5 joints including an end effector were used. The camera was in a fixed position facing the workspace and robot. While collecting training and testing data, both visual data and motor joint angles were sampled at 10Hz (for the grasping task, this was reduced to 2.5Hz). Each frame from the camera was resized to $64 \times 64$ pixels and 8-bit grayscale before being provided to the network.
Data collection was conducted in two phases: first, a human operator moved the robot by hand following a set of randomly generated positions. After the joint angles were recorded, the robot recreated the recorded trajectory and captured the video of the motion. For testing purposes, we only used the initial and final states of visuomotor trajectories from a test set and compared the prediction to the ground truth test trajectories.
Because this data was generated by a human operator, it will naturally have noise and fluctuations. If our model is able to generalize the training trajectories to reach the goal state, it should be able to ignore the unnecessary pauses and fluctuations. Finally, both the pixel level vision and joint angles were normalized to $[-1, 1]$, and we utilized the Adam optimizer for training the network \cite{adam}.
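For concreteness, the preprocessing just described could be implemented as follows; the use of OpenCV and the per-joint range normalisation are our assumptions, as the paper does not name the tooling.
\begin{verbatim}
# Preprocess one camera frame (grayscale, 64x64, values in [-1, 1]) and
# normalise joint angles; OpenCV is assumed here only for illustration.
import cv2
import numpy as np

def preprocess_frame(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)       # 8-bit gray
    small = cv2.resize(gray, (64, 64), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 127.5 - 1.0             # [-1, 1]

def normalise_joints(angles, low, high):
    # Map each joint angle from its range [low, high] to [-1, 1].
    a = np.asarray(angles, dtype=np.float32)
    return 2.0 * (a - low) / (high - low) - 1.0
\end{verbatim}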
\subsection{Experiment 1: Reaching}
For the first experiment, a frame from the target vision data showing the last position of the robot arm is given as its goal state. To successfully accomplish this task, the network must generate a plausible prediction for both the visual input and the corresponding joint angles forming a trajectory of the arm robot. This experiment used a table (74\(cm\) \(\times\) 74\(cm\))
for the task space where 100 reaching trajectories for training were generated by randomly allocating the target position on the table. Testing was done on 40 randomly sampled positions that were not part of the training set.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.8]{exp1_nn}
\caption{Example results from the reaching experiment. The upper row shows images of the initial and target visual states in the top level, the visual image sequence generated for the plan in the second level, and the actual visual input perceived during execution of the plan in the bottom level. The lower row shows the joint angles generated for the plan. Fig. 4 and Fig. 5 are presented in a similar manner.}
\label{exp1-1}
\end{figure}
As described in equation \ref{eq:visionerror}, the IS of the network is optimized based on the errors between the visual predictions and visual targets given in the first and the last frames and then motor joint angles are generated based on the optimized IS. Fig. \ref{exp1-1} shows the results of this experiment. Although there is some visible blurring in prediction outputs (second row), the overall shape of the arm and its movement are maintained.
We also observe that the motor joint angles were successfully generated even though the target values of the joints were not provided to the network.
As mentioned previously, since the training patterns were generated by a human operator, the trajectories are inconsistent. However, the robot reaches the goal state faster than in the similar training trajectories. This suggests that the model can generate more optimized trajectories that still reach the goal state by generalizing the training patterns.
To evaluate planning performance, we measured the distance between the final position reached based on the plan and the target position specified. Given the imprecision in the visual input, if the deviation at the end of the trajectory was less than 4\(cm\) (i.e., less than 3 pixels), the result was judged to be successful. Considering the overall size of robot arm (approximately 80\(cm\)), a 4\(cm\) error is believed to be reasonable. Over 40 test data points, the recorded average deviation was 2.6\(cm\) with a maximum deviation of 5.4\(cm\). The overall success rate of experiment 1 was 84\%.
Although some degree of position generalization was achieved, as shown by the success rate of 84\% in the test generation, a relatively large number of tutoring trajectories was used for the learning. Therefore, we examined how much the position generalization depended on the number of tutoring trajectories. For this purpose, the same experiment was repeated with the number of tutoring trajectories reduced to 50 and 25. The resulting success rates in test generation are summarized in Table \ref{exp1_rate}. It can be seen that the success rate decreased significantly when the number of tutoring trajectories was reduced. It can be said that, with the current model, a reasonable success rate in test generation requires a relatively large amount of tutoring data of around 100 trajectories, even for a relatively simple task like the current one.
\begin{table}[]
\centering
\caption{Success rate for experiment 1 with varying training set sizes}
\label{exp1_rate}
\begin{tabular}{|c|c|c|c|}
\hline
\begin{tabular}[c]{@{}c@{}}Training set size\end{tabular} & 25 & 50 & 100 \\ \hline
\begin{tabular}[c]{@{}c@{}}Average deviation\end{tabular} & 5.3\(cm\) & 3.2\(cm\) & 2.6\(cm\) \\ \hline
\begin{tabular}[c]{@{}c@{}}Success rate\end{tabular} & 45\% & 70\% & 84\% \\ \hline
\end{tabular}
\end{table}
\subsection{Experiment 2: Reaching two points}
For the second experiment, we extended the first task by adding an intermediate target that the robot must touch before reaching the goal. The intermediate target was marked by a filled circle with a diameter of 12\(cm\). The goal state was given as the last visual frame showing the intermediate target marker and the arm in the final position.
To accommodate the two distributions of locations, the task space was expanded to 100\(cm\) by 100\(cm\).
The task for the robot is to 1) touch the intermediate marker and then 2) move to the final position. For this task, if the robot touches a point within the marker and reaches the final position with a deviation of the end effector of less than 4\(cm\), the trial is regarded as successful. For training the network, 100 training sequences were collected. Fig. \ref{exp2} shows the target, predicted and actual visual frames as well as joint angles for one trial. The overall success rate was 75\% for this task.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.8]{exp2_nn}
\caption{Results of touching two points experiment}
\label{exp2}
\end{figure}
\subsection{Experiment 3: Moving an object}
For the third experiment, we added an object to the task space for the robot to manipulate. The object and the target circle were placed in two randomly sampled locations as in experiment 2, and the task for the robot was to 1) grasp the object and 2) place it in the target circle. The object was a plastic cylinder with a diameter of 5\(cm\) and a height of 10\(cm\). The target circle and workspace were the same as in experiment 2. 100 training sequences and 50 test sequences were used. In testing, if the robot grasped the object and placed it upright anywhere within the target circle, it was considered a success.
The difficulty of the task is considerably higher compared to the previous experiments, because the end effector must be moved accurately to grasp the object without sensory feedback.
A successful trial is shown in Fig. \ref{exp3}. The overall success rate was 48\%. The challenge here was primarily the low resolution of the visual input and the resulting inaccuracy of the predicted trajectories. Due to the size and shape of the object and the end effector, any deviation greater than 2\(cm\) often resulted in failure to grasp the object. Despite this, we noted that once the robot grasped the object, it was able to successfully place the object in the target circle (94\% success rate, with an average deviation of 3\(cm\)).
In order to break down the performance in this task further, we considered the deviation from the center of the object to the point where the end effector actuated. Allowing a 1 to 3 pixel error (1.3\(cm\) to 3.9\(cm\) deviation) in grasping, in line with the previous tasks, the success rate improved considerably, as shown in Table \ref{exp3_rate}.
Additionally, we tested the ability of our model to produce one-step predictions. Unlike the previous tests, for one-step prediction the network observed ground truth visual data and motor joint angles after making a prediction at each timestep. This prediction scheme is employed in several other works \cite{rahmatizadeh,nagai,levine}. As the model receives sensory feedback, it yields better results than closed loop prediction. This difference is shown in Table \ref{exp3_rate} as closed loop prediction and open loop prediction respectively.
\begin{table}[]
\centering
\caption{Success rate for experiment 3 with varying amounts of error allowed in grasping}
\label{exp3_rate}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Strict \\ grasping\end{tabular}} & \multicolumn{3}{c|}{Allowed error in grasping} \\ \cline{3-5}
& & 1 pixel & 2 pixels & 3 pixels \\ \hline
\begin{tabular}[c]{@{}c@{}}Closed loop \\Prediction\end{tabular} & 48\% & 48\% & 64\% & 71\% \\ \hline
\begin{tabular}[c]{@{}c@{}}Open loop \\Prediction\end{tabular} & 74\% & 74\% & 88\% & 93\% \\ \hline
\end{tabular}
\end{table}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.8]{exp3_nn}
\caption{Results of moving object experiment}
\label{exp3}
\end{figure}
\section{Conclusion}
In this paper, we proposed a novel architecture for goal-directed action planning using a predictive coding type deep dynamic neural network. In the robot experiment, the network learned to generate a set of visuo-proprioceptive sequences by self-determining the corresponding intention state in terms of the IS for each sequence as well as the connectivity weights in the whole network. After learning, the network was able to generate optimal visuomotor plans for the specified goal states by inferring the corresponding IS with some degree of generalization. Our experimental results have shown that our architecture can produce not only near future predictions (one-step ahead) as used in existing works, but also far future states (multi-step ahead) for both visual and motor modalities. However, due to restrictions in the image resolution used in the vision network, the robot frequently failed in a grasping task that required precise positioning. This issue can be ameliorated somewhat by increasing the image resolution or adding additional sensory input (e.g., depth perception or tactile sensation) at the expense of increased computational cost.
A significant issue we observed was that achieving fair generalization in learning and plan generation required a relatively large amount of training data. As shown with the first experimental task, the success rate in reaching the goal state was significantly reduced as the number of training sequences was decreased. How can we solve this generalization problem? One possible solution may be to introduce a variational Bayes (VB) scheme to the model network \cite{kingma}. Recently, VB schemes have been introduced to several RNN models \cite{bengio,rezavb}. RNN models using a VB scheme show better generalization in learning by extracting probabilistic structures hidden in perturbed sequence data when the regularization term controlling the entropy of the neural activity is adequately tuned \cite{rezavb}. Examination of such models applied to learning-based goal-directed planning of robots is left for future study.
\addtolength{\textheight}{-12cm}
|
{
"timestamp": "2018-06-06T02:06:18",
"yymm": "1803",
"arxiv_id": "1803.02578",
"language": "en",
"url": "https://arxiv.org/abs/1803.02578"
}
|
\section{Introduction}
A \emph{proper total $k$-colouring} of a graph $G=(V,E)$ is an assignment $c$ of colours from the set $\{1,2,\ldots,k\}$ to the edges and the vertices of $G$ such that
adjacent edges and vertices are coloured differently and the colour of every edge is distinct from those assigned to its end-vertices.
A \emph{total neighbour sum distinguishing $k$-colouring} of $G$, or \emph{tnsd $k$-colouring} for short,
is its proper total $k$-colouring $c$ such that for every edge $uv\in E$, there is no \emph{conflict} between $u$ and $v$,
{\it i.e.}, $s(u)\neq s(v)$, where $s(w)$ is the sum of colours taken on the edges incident with $w$ and the colour of the vertex $w$ for $w\in V$.
In other words, for every vertex $w\in V$, $\displaystyle s(w) = \sum_{e \in E_w} c(e)+c(w)$, where $E_w$ is the set of edges incident with $w$ in $G$.
We denote by $\chi''_{\Sigma}(G)$ the \emph{total neighbour sum distinguishing index} of $G$, which is the least integer $k$ such that a tnsd $k$-colouring of $G$ exists.
The roots of this branch of graph theory date back to the '80s, and the papers~\cite{ChartrandErdosOellermann,Chartrand}
on degree irregularities in graphs (and multigraphs) and the parameter \emph{irregularity strength} of a graph. For more details concerning a motivation for investigating integer graph colourings and a few crucial results on the irregularity strength see {\it e.g.}~\cite{Aigner,Lazebnik,Frieze,KalKarPf,Lehel,MajerskiPrzybylo2,Nierhoff}.
By definition and the requirement of properness of colourings investigated, the total neighbour sum distinguishing index of every graph $G$ is not smaller than $\Delta(G)+1$.
The following conjecture was proposed by Pil\'{s}niak and Wo\'{z}niak in~\cite{PW},
where it was also verified for a few classical graph families, including, {\it e.g.},
complete graphs, bipartite graphs and
graphs with maximum degree at most three.
\begin{conjecture}[\cite{PW}]\label{MoMa}
For every graph $G$, $\chi''_{\Sigma}(G) \leq \Delta(G) + 3$.
\end{conjecture}
Note that such a postulated upper bound exceeds just by one the bound implied by the famous Total Colouring Conjecture
posed by Vizing~\cite{Vizing2} in 1968 and independently by Behzad~\cite{Behzad} in 1965, and
concerning proper total colourings (without our additional requirement on sum distinction between adjacent vertices).
Both conjectures seem to be very challenging and are in general widely open. The best general result concerning the latter one~\cite{MolloyReedTotal}, however, confirms the Total Colouring Conjecture up to a (large) additive constant.
The best general upper bound concerning Conjecture~\ref{MoMa} implies that $\chi''_{\Sigma}(G)\leq (1+o(1))\Delta$ for every graph $G$ with maximum degree $\Delta$, see~\cite{Przybylo_asym_optim_total} and~\cite{Przybylo_asymptotic_note}. See also~\cite{total_sum_planar,LiLiuWang_sum_total_K_4,PW,Przybylo_CN_3} for partial results on this conjecture.
In particular Ding et al. first confirmed Conjecture~\ref{MoMa} for planar graphs with sufficiently large maximum degree:
\begin{theorem} [\cite{total_sum_planar}]
Any planar graph $G$ with $\Delta(G) \geq 13$ satisfies $\chi''_\Sigma(G) \leq \Delta(G)+3$.
\end{theorem}
This was then improved by Yang et al. in the following form.
\begin{theorem} [\cite{total_sum_planar2}]
Any planar graph $G$ satisfies $\chi''_\Sigma(G) \leq \max\{\Delta(G)+2,13\}$.
\end{theorem}
Let ${\rm mad}(G)=\max\left\{\frac{2|E(H)|}{|V(H)|},\;H \subseteq G\right\}$ be the \emph{maximum average degree} of a graph $G$, where $V(H)$ and $E(H)$ are the sets of vertices and edges of $H$, respectively.
This is a conventional measure of sparseness of an arbitrary graph (not necessarily planar). For more details on this invariant see {\it e.g.} \cite{Coh10,Toft}.
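For example, ${\rm mad}(C_n)=2$ for every cycle $C_n$ and ${\rm mad}(K_n)=n-1$ for every complete graph $K_n$, while every forest has maximum average degree strictly smaller than $2$.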
Dong and Wang first made the link between the maximum average degree and the total neighbour sum distinguishing index.
They proved the following result.
\begin{theorem}[\cite{DongWang_mad}] \label{th:summadCite1}
Any graph $G$ with $\Delta(G) \geq 5$ and ${\rm mad}(G)<3$ satisfies $\chi''_\Sigma(G) \leq \Delta(G)+2$.
\end{theorem}
This subject was intensively studied afterwards, and the following improvement has been announced recently (but no proof of this claimed fact has been published thus far).
\begin{theorem}[\cite{Qiu_Wang_Liu_Xu}] \label{th:summadCite2}
Any graph $G$ with $\Delta(G) \geq 8$ and ${\rm mad}(G)< \frac92$ satisfies $\chi''_\Sigma(G) \leq \Delta(G)+3$.
\end{theorem}
In this paper, we prove a stronger statement than
the above:
\begin{theorem}\label{th:summad}
Any graph $G$ with $\Delta(G) \geq 8$ and ${\rm mad}(G)<\frac{14}{3}$ satisfies $\chi''_\Sigma(G) \leq \Delta(G)+3$.
\end{theorem}
Recall that the girth of a graph is the length of a shortest cycle in it.
As every planar graph with girth $g$ satisfies ${\rm mad}(G) < \frac{2g}{g-2}$, the following corollary can be easily derived from Theorem~\ref{th:summad}:
\begin{corollary}
Any triangle-free planar graph $G$ with $\Delta(G) \geq 8$ satisfies $\chi''_\Sigma(G) \leq \Delta(G)+3$.
\end{corollary}
\section{Proof of Theorem~\ref{th:summad}}
\subsection{Preliminaries}
Fix an integer $k\geq 8$.
In the following, $n_i(G)$ denotes the number of vertices of degree $i$ in a graph $G$ (and similarly for $n_{i^+}(G)$ with ``at least $i$'' and for $n_{i^-}(G)$ with ``at most $i$''); analogously, for a vertex $v$, we write $n_i(v)$, $n_{i^+}(v)$ and $n_{i^-}(v)$ for the numbers of neighbours of $v$ of degree $i$, at least $i$ and at most $i$, respectively.
We say a graph $G$ is \emph{smaller} than a graph $H$, and write $G \prec H$, if $|E(G)|+|V(G)|<|E(H)|+|V(H)|$.
We say a graph is \emph{minimal} for a property when no smaller graph satisfies it.
We shall also call any vertex of degree $d$ ($\geq d$, $\leq d$) in a given graph a \emph{$d$-vertex} (\emph{$d^+$-vertex}, \emph{$d^-$-vertex}, resp.) of this graph.
The same nomenclature shall be used for neighbours as well.\\
\subsection{Structural properties of $H$}
Suppose that $H$ is a minimal graph with maximum degree $\Delta\leq k$,
${\rm mad}(H)<\frac{14}{3}$ and $\chi''_\Sigma(H)>k+3$ (hence $H$ is connected and $\delta(H)\geq 1$).
In the remaining part of the paper we argue that in fact $H$ cannot exist, {\it i.e.} that there exists a tnsd $(k+3)$-colouring of $H$, and thus prove Theorem~\ref{th:summad}.
In this subsection we exhibit some structural properties of $H$.
The following lemma shall be very useful to this end.
Its proof was inspired by the research from~\cite{BP14}.
The same result but in the case of lists of integers can also be derived from~\cite{Alon}.
\begin{lemma}[\cite{HocquardPrzybylo1}]\label{MartheLemma}
For any finite sets $L_1,\ldots,L_t$ of real numbers with $|L_i|\geq t$ for $i=1,\ldots,t$,
the set $\{x_1+\ldots+x_t:x_1\in L_1,\ldots,x_t\in L_t; x_i\neq x_j~{\rm for}~i\neq j\}$ contains
at least $\sum_{i=1}^t|L_i|-t^2+1$ distinct elements.
\end{lemma}
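For the reader's convenience, the bound of Lemma~\ref{MartheLemma} can also be checked by brute force on small instances, e.g. with the following short Python sketch (our own sanity check, restricted to integer-valued lists and random instances; it is not part of the proof):
\begin{verbatim}
from itertools import product
from random import randint, sample

def distinct_sums(lists):
    # all sums x_1+...+x_t with x_i in L_i and the x_i pairwise distinct
    sums = set()
    for choice in product(*lists):
        if len(set(choice)) == len(choice):
            sums.add(sum(choice))
    return sums

def check_once(t, max_size=6, universe=15):
    # random finite sets L_1,...,L_t of integers with |L_i| >= t
    lists = [sample(range(universe), randint(t, max_size)) for _ in range(t)]
    bound = sum(len(L) for L in lists) - t * t + 1
    assert len(distinct_sums(lists)) >= bound, (lists, bound)

for t in range(1, 5):
    for _ in range(200):
        check_once(t)
print("bound of the lemma verified on all random instances")
\end{verbatim}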
\begin{observation}\label{obs3v}
Every $3^-$-vertex $v$ in $H$ can be recoloured (or coloured if it has no colour assigned) so that it has a different colour than its adjacent vertices and incident edges, and so that $v$ is not in conflict with any of its neighbours.
\end{observation}
\begin{proof}
This follows directly from the fact that we have $k+3\geq 11$ colours available, while at most $9$ of them might be blocked by the requirements of the statement.
\end{proof}
\begin{lemma}\label{lemma4v}
For every vertex $v \in V(H)$,
$n_{4^+}(v) \ge n_{2^-}(v)+1+ n_{3^-}(v)\times(k - d(v))$.
\end{lemma}
\begin{proof}
Suppose on the contrary that $v\in V(H)$ with $d(v)=d\geq 1$ is adjacent to $\alpha$ $2^-$-vertices $u_1,\dots,u_\alpha$, $\beta$ $3$-vertices $w_1,\dots,w_\beta$ and to $\gamma$ $4^+$-vertices where $\gamma < \alpha+1+(\alpha+\beta)(k-d)$, hence $\alpha+\beta\geq 1$.
Colour $H'=H-\{vu_1,\ldots,vu_\alpha,vw_1,\ldots,vw_\beta\}$ by minimality ({\it i.e.} fix any tnsd $(k+3)$-colouring of $H'$, which must exist due to the fact that $H$ is our minimal counterexample, while $H' \prec H$, $\Delta(H')\leq k$ and ${\rm mad}(H')<\frac{14}{3}$) and uncolour $u_i$ and $w_j$ for all $i \in \{1,\ldots,\alpha\}$ and $j \in \{1,\ldots,\beta\}$. Let $L_i$ and $L'_j$, with $i \in \{1,\ldots,\alpha\}$ and $j \in \{1,\ldots,\beta\}$ be the sets of available colours respectively for the edges $vu_i$ and $vw_j$ ({\it i.e.} those colours in $\{1,\ldots,k+3\}$ not used by their adjacent edges in $H$ or $v$).
Note that
$|L_1|,\dots,|L_\alpha| \ge \alpha + \beta + 1 + k -d$ and $|L'_1|,\dots,|L'_\beta| \ge \alpha + \beta+ k -d$.
By Lemma~\ref{MartheLemma}, we may extend this colouring to a (partial) proper colouring of $H$ in different ways, obtaining at least $\alpha(\alpha+\beta)+\alpha+\alpha(k-d)+\beta(\alpha+\beta)+\beta(k-d)-(\alpha+\beta)^2+1=\alpha+1+(\alpha+\beta)(k-d)>\gamma$ distinct sums for~$v$.
Thus we can do it in such a way that $v$ is not in conflict with any of its $4^+$-neighbours.
By Observation~\ref{obs3v} we therefore obtain a contradiction. $\blacksquare$\\
\end{proof}
\begin{corollary}\label{obs2v}
For every vertex $v \in V(H)$ with $d(v) \ge 7$, $n_{2^-}(v) \le d(v)-5$.
\end{corollary}
\begin{proof}
Suppose to the contrary that $n_{2^-}(v) \ge d(v)-4$ for some $v \in V(H)$ with $d(v) \ge 7$. By Lemma~\ref{lemma4v} we have: $n_{4^+}(v) \ge n_{2^-}(v)+1+ n_{3^-}(v)\times(k - d(v))\ge d(v)-4+1+n_{3^-}(v)\times(k - d(v)) \ge d(v)-3$.
Consequently, we have:
$d(v) \ge n_{2^-}(v)+n_{4^+}(v) \ge d(v)-4+d(v)-3=2d(v)-7$.
Hence, $d(v) \le 7$, {\it i.e.} $d(v)=7$. As $k \ge 8$, then by Lemma~\ref{lemma4v} we thus obtain:
$n_{4^+}(v) \ge n_{2^-}(v)+1+ n_{3^-}(v)\times(k - d(v))\ge 4 + n_{3^-}(v)\times(k - d(v)) \ge d(v)$, a contradiction with the fact that $n_{2^-}(v) \ge d(v)-4$. $\blacksquare$
\end{proof}
Within the proofs of the remaining structural properties of $H$, gathered in Claim~\ref{claimstructure} below, we shall apply several times the following algebraic tool due to Alon~\cite{Alon}.
\begin{theorem}[Combinatorial
Nullstellensatz]\label{Comb_Nul} Let $\mathbb{F}$ be an arbitrary
field, and let $P=P(x_1,\ldots,x_n)$ be a polynomial in
$\mathbb{F}[x_1,\ldots,x_n]$.
Suppose the coefficient of a monomial $x_1^{k_1}\ldots
x_n^{k_n}$, where each $k_i$ is a non-negative
integer, is non-zero in $P$ and the degree ${\rm deg}(P)$ of
$P$ equals $\sum_{i=1}^n k_i$.
If moreover $S_1,\ldots,S_n$ are any
subsets of $\mathbb{F}$ with $|S_i|>k_i$ for $i=1,\ldots,n$,
then there are $s_1\in
S_1,\ldots,s_n\in S_n$ so that $P(s_1,\ldots,s_n)\neq 0$.
\end{theorem}
\begin{claim}\label{claimstructure}
The graph $H$ does not contain any of:
\begin{enumerate}
\item[(C1)] \label{c1} a $2^-$-vertex $v$ adjacent to a $(\frac{k}{2}+1)^-$-vertex $u$;
\item[(C2)] \label{c2} a $4^-$-vertex $v$ adjacent to a $4^-$-vertex $u$;
\item[(C3)] \label{c3} a $3^-$-vertex $v$ adjacent to a $5^-$-vertex $u$;
\item[(C4)] \label{c4} a $5$-vertex $v$ adjacent to three $4$-vertices $v_1,v_2,v_3$;
\item[(C5)] \label{c5} a $6$-vertex $v$ adjacent to a $3^-$-vertex $u$ and to a $4^-$-vertex $w$;
\item[(C6)] \label{c12} a $7$-vertex $v$ adjacent to a $2^-$-vertex $u$, to a $3^-$-vertex $w$ and to a $4^-$-vertex $y$.
\item[(C7)] \label{c6} a vertex $v$ of degree $d\geq 8$ adjacent to $(d-7)$ $2^-$-vertices $v_1,\ldots,v_{d-7}$, to two $3^-$-vertices $u_1,u_2$ and to a $4^-$-vertex $w$;
\item[(C8)] \label{c11} a vertex $v$ of degree $d=\Delta\geq 3$ adjacent to $(d-2)$ $3^-$-vertices $v_1,\ldots,v_{d-2}$ and to one $4^-$-vertex $u$.
\end{enumerate}
\end{claim}
\medskip
\begin{figure}[!h]
\begin{center}
\centering
\begin{tikzpicture}[scale=0.75,auto]
\tikzstyle{w}=[draw,circle,fill=white,minimum size=5pt,inner sep=0pt]
\tikzstyle{b}=[draw,circle,fill=black,minimum size=5pt,inner sep=0pt]
\tikzstyle{t}=[rectangle,minimum size=5pt,inner sep=0pt]
\tikzstyle{whitenode}=[draw,ellipse,fill=white,minimum size=9pt,inner sep=0pt]
\tikzstyle{blacknode}=[draw,circle,fill=black,minimum size=9pt,inner sep=0pt]
\tikzstyle{texte}=[minimum size=9pt,inner sep=0pt]
\draw (0,0) node[whitenode] (u) [label=above:$u$] {$\left(\frac{k}{2}+1\right)^-$}
--++ (0:4cm) node[whitenode] (b) [label=above:$v$] {$2^-$};
\draw (2.2,-1.5) node[t] (t1) {$(C_1)$};
\draw (6,0) node[whitenode] (b) [label=above:$u$] {$4^-$}
--++ (0:2.5cm) node[whitenode] (b) [label=above:$v$] {$4^-$};
\draw (7.2,-1.5) node[t] (t1) {$(C_2)$};
\draw (11,0) node[whitenode] (b) [label=above:$u$] {$5^-$}
--++ (0:2.5cm) node[whitenode] (b) [label=above:$v$] {$3^-$};
\draw (12.3,-1.5) node[t] (t1) {$(C_3)$};
\draw (17.3,0) node[whitenode] (v) [label=below:$v$] {$5$}
--++ (0:-2.2cm) node[w] (u) [label=below:$v_4$] {}
--++ (0:4.4cm) node[w] (w) [label=below:$v_5$] {};
\draw (v)
--++ (150:2cm) node[w] [label=right:$v_1$] (v5) {$4$};
\draw (v)
--++ (90:1.5cm) node[w] [label=right:$v_2$] (v6) {$4$};
\draw (v)
--++ (30:2cm) node[w] [label=right:$v_3$] (v7) {$4$};
\draw (17.3,-1.5) node[t] (t1) {$(C_4)$};
\draw (2.2,-4.5) node[whitenode] (v) [label=below:$v$] {$6$}
--++ (0:-2.2cm) node[w] (u) [label=below:$u$] {$3^-$}
--++ (0:4.4cm) node[w] (w) [label=below:$w$] {$4^-$};
\draw (v)
--++ (150:2cm) node[w] [label=right:$v_1$] (v5) {};
\draw (v)
--++ (110:1.5cm) node[w] [label=right:$v_2$] (v6) {};
\draw (v)
--++ (70:1.5cm) node[w] [label=right:$v_3$] (v7) {};
\draw (v)
--++ (30:2cm) node[w] [label=right:$v_4$] (v8) {};
\draw (2.2,-7) node[t] (t1) {$(C_5)$};
\draw (10.3,-4.5) node[whitenode] (v) [label=below:$v$] {$7$}
--++ (0:-2.2cm) node[w] (u) [label=below:$u$] {$2^-$}
--++ (0:4.4cm) node[w] (w) [label=below:$y$] {$4^-$};
\draw (v)
--++ (90:1.5cm) node[w] [label=right:$w$] (v6) {$3^-$};
\draw (v)
--++ (-30:1.5cm) node[w] (v8) [label=below:$z_4$] {};
\draw (v)
--++ (-60:1.5cm) node[w] (v9) [label=below:$z_3$] {};
\draw (v)
--++ (-120:1.5cm) node[w] (v10) [label=below:$z_2$] {};
\draw (v)
--++ (-150:1.5cm) node[w] (v11) [label=below:$z_1$] {};
\draw (10.3,-7) node[t] (t1) {$(C_6)$};
\draw (17.3,-4.5) node[whitenode] (v) [label=below:$v$] {$8^+$}
--++ (0:-2.2cm) node[w] (u) [label=below:$u_1$] {$3^-$}
--++ (0:4.4cm) node[w] (w) [label=below:$u_2$] {$3^-$};
\draw (v)
--++ (150:2cm) node[w] [label=left:$v_1$] (v5) {$2^-$};
\draw (v)
--++ (90:1.5cm) node[w] [label=right:$v_{d-7}$] (v6) {$2^-$};
\draw (v)
--++ (30:2cm) node[w] [label=right:$w$] (v7) {$4^-$};
\draw (v)
--++ (-30:1.5cm) node[w] (v8) [label=below:$y_4$] {};
\draw (v)
--++ (-60:1.5cm) node[w] (v9) [label=below:$y_3$] {};
\draw (v)
--++ (-120:1.5cm) node[w] (v10) [label=below:$y_2$] {};
\draw (v)
--++ (-150:1.5cm) node[w] (v11) [label=below:$y_1$] {};
\draw (v5) edge [bend left,loosely dotted,thick] node {} (v6);
\draw (17.3,-7) node[t] (t1) {$(C_7)$};
\draw (2.2,-10) node[whitenode] (v) [label=below:$v$] {$d=\Delta$}
--++ (0:-2.2cm) node[w] (u) [label=below:$v_1$] {$3^-$}
--++ (0:4.4cm) node[w] (w) [label=below:$u$] {$4^-$};
\draw (v)
--++ (50:2cm) node[w] [label=right:$v_{d-2}$] (v7) {$3^-$};
\draw (u) edge [bend left,loosely dotted,thick] node {} (v7);
\draw (v)
--++ (-50:1.5cm) node[w] (v8) [label=below:$w$] {};
\draw (2.2,-12.5) node[t] (t1) {$(C_8)$};
\end{tikzpicture}
\caption{Forbidden configurations in $H$}\label{FCG_fig}
\end{center}
\end{figure}
\begin{proof}
We shall argue the `reducibility' of each of these 8 configurations separately,
following a similar pattern of reasoning.
That is, we shall first suppose, for the sake of contradiction, that a given configuration exists in $H$.
Then we shall consider a graph $H'$ smaller than $H$ with $\Delta(H')\leq k$ and ${\rm mad}(H')<\frac{14}{3}$
(usually guaranteeing these properties by constructing $H'$ simply via deleting some edges or vertices from $H$),
and \emph{colour} it \emph{by minimality},
which from now on shall mean that we choose any tnsd $(k+3)$-colouring of $H'$.
Finally, in each case, we shall obtain a contradiction by extending the chosen colouring to a tnsd $(k+3)$-colouring of the entire $H$.
Whenever we analyze a partial colouring of a graph, the \emph{sum at a} given \emph{vertex}, $s(v)$, is defined as above, but every uncoloured edge and vertex contributes $0$ to this sum. We write that $u$ and $v$ are \emph{sum-distinguished} if $s(u)\neq s(v)$.
\begin{enumerate}
\item[1.] Suppose there exists a $2^-$-vertex $v$ adjacent to a $(\frac{k}{2}+1)^-$-vertex $u$ in $H$. Colour $H'=H-\{uv\}$ by minimality and uncolour $v$. In order to colour $uv$ so that the (partial) $(k+3)$-colouring of $H$ obtained is proper, we have to avoid at most $\frac{k}{2}+2$ colours,
and possibly at most $\frac{k}2$ more colours to ensure the sum-distinction (of $u$ from its neighbours other than $v$). Hence, we have at least one colour left to extend the colouring, and thus obtain
a tnsd $(k+3)$-colouring of $H$ via Observation~\ref{obs3v} applied to $v$, a contradiction.
\item[2;3.] Suppose there exists an edge $uv$ with $d(u),d(v)\leq 4$ or with $d(u)\leq 5$ and $d(v)\leq 3$ in $H$. Denote the neighbours of $u$ other than $v$ by $u_1,\ldots,u_q$, and denote the neighbours of $v$ other than $u$ by $v_1,\ldots,v_p$ (hence $d(u)=q+1$ and $d(v)=p+1$).
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{uv\}$.
Let us now undelete the edge $uv$ and remove the colours from $u$ and $v$.
In order to extend the current partial colouring of $H$ to its proper total $(k+3)$-colouring we may use at least $11-6=5$ colours for $u$, at least $11-6=5$ colours for $uv$ and at least $11-6=5$ colours for $v$ if $d(u),d(v)\leq 4$, or otherwise: at least $3$ colours for $u$, at least $5$ colours for $uv$ and at least $7$ colours for $v$. Denote the respective lists of available colours by $L_u,L_{uv},L_v$, the sum at $u_i$
by $s_i$ and the sum at $v_j$
by $s'_j$ for $i=1,\ldots,q$, $j=1,\ldots,p$. Consider a polynomial with real variables:
\begin{eqnarray}
f(x_0,x_1,x_2) &=& (x_0-x_1)(x_0-x_2)(x_1-x_2)\left(x_0+\sum_{i=1}^q c(uu_i)-x_2-\sum_{i=1}^p c(vv_i)\right)\nonumber\\
&\times&\prod_{i=1}^q\left(x_0+x_1+\sum_{j=1}^qc(uu_j)-s_i\right)\prod_{i=1}^p\left(x_2+x_1+\sum_{j=1}^pc(vv_j)-s'_i\right).\nonumber
\end{eqnarray}
Note that in order to extend the colouring $c$ to a tnsd $(k+3)$-colouring
of $H$ it is now sufficient to find a non-zero (i.e. with non-zero value of $f$) substitution for $f$ such that $x_0\in L_u$, $x_1\in L_{uv}$ and $x_2\in L_v$. It is thus a fortiori sufficient to find a non-zero substitution from these lists for the polynomial $g$ defined as $g(x_0,x_1,x_2):=f(x_0,x_1,x_2)\cdot(x_0+x_1)^{3-q}(x_2+x_1)^{3-p}$ if $d(u),d(v)\leq 4$, or by $g(x_0,x_1,x_2):=f(x_0,x_1,x_2)\cdot(x_0+x_1)^{4-q}(x_2+x_1)^{2-p}$ otherwise.
In the first of these cases, however, the coefficient of the monomial $x_0^4x_1^3x_2^3$ in $g$ is the same as in $$h_1(x_0,x_1,x_2)=(x_0-x_1)(x_0-x_2)^2(x_1-x_2)(x_0+x_1)^{3}(x_2+x_1)^{3},$$ and equals\footnote{This and further computations
were obtained
by means of a computer program; one may verify these using {\it e.g.}
\emph{Wolfram Mathematica}, or with the short SymPy sketch given after the proof of this claim.}: $2$. Analogously, in the second case, the coefficient of the monomial $x_0^2x_1^3x_2^5$ in $g$ is the same as in $$h_2(x_0,x_1,x_2)=(x_0-x_1)(x_0-x_2)^2(x_1-x_2)(x_0+x_1)^{4}(x_2+x_1)^{2},$$ and equals also: $2$.
In both cases we thus obtain a contradiction by the Combinatorial Nullstellensatz.
\item[4.] Suppose there exists a $5$-vertex $v$ with $N(v)=\{v_1,\ldots,v_5\}$ such that $d(v_1)=d(v_2)=d(v_3)=4$ in $H$.
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{vv_1,vv_2,vv_3\}$.
Delete the colours of $v,v_1,v_2,v_3$. We associate variables $x_0,x_1,x_2,x_3,x_4,x_5,x_6$ with $v,vv_1,vv_2,vv_3,v_1,v_2,v_3$, respectively.
For these we
denote the lists of their available colours by $L_0,\ldots,L_6$ (obtained after excluding from $\{1,\ldots,k+3\}$ the colours already used on their respective adjacent or incident vertices and edges),
respectively. Then $|L_0|\geq 7, |L_1|,|L_2|,|L_3|\geq 6, |L_4|,|L_5|,|L_6|\geq 5$.
Let
\begin{eqnarray}
f(x_0,\ldots,x_6) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_0-x_5)(x_0-x_6)(x_1-x_2)(x_1-x_3)(x_2-x_3)\nonumber\\
&\times& (x_1-x_4)(x_2-x_5)(x_3-x_6)\prod_{i=1}^3(x_0+x_1+x_2+x_3+s(v)-x_i-x_{i+3}-s(v_i))\nonumber\\
&\times& \prod_{i=4}^5(x_0+x_1+x_2+x_3+s(v)-s(v_i)) \prod_{i=1}^3\prod_{u\in N(v_i)\smallsetminus\{v\}}(x_i+x_{i+3}+s(v_i)-s(u))\nonumber
\end{eqnarray}
(where $s(w)$ refers to the current partial sum for every vertex $w$ in $H$). Note that the coefficient of the monomial $x_0^6x_1^5x_2x_3^5x_4^2x_5^3x_6^4$ in $f$ is the same as in the following polynomial:
\begin{eqnarray}
g(x_0,\ldots,x_6) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_0-x_5)(x_0-x_6)(x_1-x_2)(x_1-x_3)(x_2-x_3)\nonumber\\
&\times& (x_1-x_4)(x_2-x_5)(x_3-x_6)(x_0+x_2+x_3-x_4)(x_0+x_1+x_3-x_5)(x_0+x_1+x_2-x_6)\nonumber\\
&\times& (x_0+x_1+x_2+x_3)^2(x_1+x_4)^3(x_2+x_5)^3(x_3+x_6)^3,\nonumber
\end{eqnarray}
and equals $16$. By the Combinatorial Nullstellensatz we thus may extend our colouring to a tnsd $(k+3)$-colouring of $H$, a contradiction.
\item[5.] Suppose there exists a $6$-vertex $v$ adjacent to a $3^-$-vertex $u$, to a $4^-$-vertex $w$ and to vertices $v_1,v_2,v_3,v_4$ in $H$.
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{vu,vw\}$.
Delete the colours of $u,v,w$ and associate variables $x_0,x_1,x_2,x_3,x_4$ with $v,vu,vw,u,w$, respectively.
Denote the lists of available colours for these by $L_0,\ldots,L_4$, resp., and note that $|L_0|\geq 3, |L_1|\geq 5, |L_2|\geq 4, |L_3|\geq 7, |L_4|\geq 5$.
Consider a polynomial:
\begin{eqnarray}
f(x_0,\ldots,x_4) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_1-x_2)(x_1-x_3)(x_2-x_4)\nonumber\\
&\times& (x_0+x_2+s(v)-x_3-s(u))(x_0+x_1+s(v)-x_4-s(w))\nonumber\\
&\times& \prod_{i=1}^4(x_0+x_1+x_2+s(v)-s(v_i))\nonumber\\
&\times& \prod_{y\in N(u)\smallsetminus\{v\}}(x_1+x_3+s(u)-s(y))
\prod_{y\in N(w)\smallsetminus\{v\}}(x_2+x_4+s(w)-s(y))\nonumber
\end{eqnarray}
and set $g(x_0,\ldots,x_4)=f(x_0,\ldots,x_4)\cdot(x_1+x_3)^{3-d(u)}(x_2+x_4)^{4-d(w)}$.
Note that the coefficient of the monomial $x_0^2x_1^4x_2^3x_3^5x_4^4$ in $g$ is the same as in:
\begin{eqnarray}
h(x_0,\ldots,x_4) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_1-x_2)(x_1-x_3)(x_2-x_4)\nonumber\\
&\times& (x_0+x_2-x_3)(x_0+x_1-x_4)(x_0+x_1+x_2)^4(x_1+x_3)^2(x_2+x_4)^3,\nonumber
\end{eqnarray}
and equals $-10$. By the Combinatorial Nullstellensatz there exists a non-zero substitution for $g$, hence also for $f$, from the corresponding lists $L_0,\ldots,L_4$, and thus we may extend our partial colouring to a tnsd $(k+3)$-colouring of $H$, a contradiction.
\item[6.] Suppose there exists a $7$-vertex $v$ adjacent to a $2^-$-vertex $u$, to a $3^-$-vertex $w$ and to a $4^-$-vertex $y$ in $H$.
Denote the remaining neighbours of $v$ by $z_1,z_2,z_3,z_4$.
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{vu,vw,vy\}$.
Delete the colours of $u,w,y$ and associate variables $x_1,x_2,x_3,x_4$ with $vu,vw,vy,y$, respectively.
Denote the lists of available colours for these by $L_1,L_2,L_3,L_4$, resp., and note that $|L_1|\geq 5, |L_2|\geq 4, |L_3|\geq 3, |L_4|\geq 4$.
Consider a polynomial (and note that by Observation~\ref{obs3v} we shall be able to colour properly vertices $u$ and $w$ at the end so that these are sum distinguished from their neighbours, thus we omit the corresponding requirements within the polynomial below):
\begin{eqnarray}
f(x_1,x_2,x_3,x_4) &=& (x_1-x_2)(x_1-x_3)(x_2-x_3)(x_3-x_4)\prod_{i=1}^4(x_1+x_2+x_3+s(v)-s(z_i))\nonumber\\
&\times& (x_1+x_2+s(v)-x_4-s(y))
\prod_{z\in N(y)\smallsetminus\{v\}}(x_3+x_4+s(y)-s(z)).\nonumber
\end{eqnarray}
Let $g(x_1,x_2,x_3,x_4)=f(x_1,x_2,x_3,x_4)\cdot (x_3+x_4)^{4-d(y)}$.
Note that the coefficient of the monomial $x_1^4x_2^3x_3^2x_4^3$ in $g$ is the same as in:
$$h(x_1,x_2,x_3,x_4) = (x_1-x_2)(x_1-x_3)(x_2-x_3)(x_3-x_4)(x_1+x_2+x_3)^4(x_1+x_2-x_4)(x_3+x_4)^3,$$
and equals $-6$. Therefore, analogously as above we may extend our colouring, first to $vu,vw,vy,y$ by the Combinatorial Nullstellensatz, and then to $u$ and $w$ by Observation~\ref{obs3v}, to a tnsd $(k+3)$-colouring of $H$, a contradiction.
\item[7.] Suppose there exists
a vertex $v$ of degree $d\geq 8$
adjacent to $(d-7)$ $2^-$-vertices $v_1,\ldots,v_{d-7}$, to two $3^-$-vertices $u_1,u_2$ and to a $4^-$-vertex $w$ in $H$. The remaining neighbours of $v$ we denote by $y_1,y_2,y_3,y_4$.
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{vv_1,vu_1,vu_2,vw\}$.
Delete the colours of $v,v_1,\ldots,v_{d-7},u_1,u_2,w$ and associate variables $x_0,x_1,x_2,x_3,x_4,x_5$ with $v,vv_1,vu_1,vu_2,vw,w$, respectively.
Denote the lists of available colours for these by $L_0,\ldots,L_5$, resp., and note that $|L_0|\geq 3, |L_1|\geq 6, |L_2|,|L_3|\geq 5, |L_4|\geq 4, |L_5|\geq 5$.
Consider a polynomial
(the vertices $v_1,\ldots,v_{d-7},u_1,u_2$ shall be coloured at the end via Observation~\ref{obs3v}):
\begin{eqnarray}
f(x_0,\ldots,x_5) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_0-x_5)(x_1-x_2)(x_1-x_3)(x_1-x_4)\nonumber\\
&\times& (x_2-x_3)(x_2-x_4)(x_3-x_4)(x_4-x_5)\prod_{i=1}^4(x_0+x_1+x_2+x_3+x_4+s(v)-s(y_i))\nonumber\\
&\times& (x_0+x_1+x_2+x_3+s(v)-x_5-s(w))\prod_{z\in N(w)\smallsetminus\{v\}}(x_4+x_5+s(w)-s(z)).\nonumber
\end{eqnarray}
Let $g(x_0,\ldots,x_5)=f(x_0,\ldots,x_5)\cdot(x_4+x_5)^{4-d(w)}$. Then the coefficient of the monomial $x_0x_1^5x_2^4x_3^3x_4^3x_5^4$ in $g$ is the same as in:
\begin{eqnarray}
h(x_0,\ldots,x_5) &=& (x_0-x_1)(x_0-x_2)(x_0-x_3)(x_0-x_4)(x_0-x_5)(x_1-x_2)(x_1-x_3)(x_1-x_4)\nonumber\\
&\times& (x_2-x_3)(x_2-x_4)(x_3-x_4)(x_4-x_5)(x_0+x_1+x_2+x_3+x_4)^4\nonumber\\
&\times& (x_0+x_1+x_2+x_3-x_5)(x_4+x_5)^3,\nonumber
\end{eqnarray}
and equals $5$. By the Combinatorial Nullstellensatz and Observation~\ref{obs3v} we may thus extend our partial colouring to a tnsd $(k+3)$-colouring of $H$, a contradiction.
\item[8.] Suppose $v$ is a vertex of degree $d=\Delta\geq 3$ adjacent to $(d-2)$ $3^-$-vertices $v_1,\ldots,v_{d-2}$ and to one $4^-$-vertex $u$ in $H$.
Denote the remaining neighbour of $v$ by $w$.
By the minimality of $H$ there exists a tnsd $(k+3)$-colouring $c$ of $H'=H-\{vv_1,\ldots,vv_{d-2},vu\}$.
Delete the colours of $v,v_1,\ldots,v_{d-2},u$. First extend such a partial colouring of $H$ by choosing
a colour (in $\{1,\ldots,k+3\}$) for $v$ in the following manner. If $d(v_1)=3$, denote the colours associated to the edges incident with $v_1$ and different from $vv_1$ by $a$ and $b$, and if $c(vw)\notin \{a,b\}$, choose for $v$ any colour in $\{a,b\}\smallsetminus\{c(w)\}$.
In all other cases, choose for $v$ any colour distinct from $c(vw)$ and $c(w)$. Denote the colour of $v$ by $c(v)$.
Then choose a colour $c(u)$ for $u$ as small as possible
(and note that as our total colouring must be proper, this implies that either $u$ or some of its incident edges other than $uv$ has now colour at most $5$). Next we choose any colour $c(uv)$ so that the obtained (partial) colouring of $H$ is proper and $u$ is sum distinguished from its neighbours other than $v$ (this is possible, as $k+3>9$). Then we subsequently choose greedily colours for $vv_2,\ldots,vv_{d-2}$ so that the obtained partial total colouring of $H$ is proper. Finally we choose a colour $c(vv_1)$ for $vv_1$ distinct from the colours of its incident edges and the colour of $v$ (by our choice of $c(v)$, this blocks at most $\Delta+1$ choices) so that the sum at $v$ is distinct from the sum at $w$,
and if $\Delta\leq k-1$, also distinct from the sum at $u$.
We complete our colouring by choosing the colours for $v_1,\ldots,v_{d-2},u$ consistently with
Observation~\ref{obs3v}. In order to see that the obtained colouring of $H$ is sum distinguishing it is sufficient to note that the sum at $v$ is distinct from the sum at $u$ when $\Delta=k$. Indeed,
the sum of colours incident with $v$ except for the colour of $uv$ equals at least $1+\ldots+k=k(k+1)/2$, while by our choice of the colour for $u$, the sum of its incident colours except the one of $uv$ is at most $5+(k+3)+(k+2)+(k+1)=3k+11<k(k+1)/2$ (for $k\geq 8$). Thus we obtain a contradiction with the minimality of $H$. $\blacksquare$
\end{enumerate}
\end{proof}
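Let us also note that the coefficient computations invoked in the proof above (the values $2$, $2$, $16$, $-10$, $-6$ and $5$ in cases 2;3--7) can be verified not only with \emph{Wolfram Mathematica}, but e.g. also with the following short Python/SymPy sketch, which we illustrate on the polynomials $h_1$ and $h_2$ from case 2;3 (the function name \texttt{coeff} is ours):
\begin{verbatim}
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')

def coeff(poly, exponents):
    # coefficient of x0**e0 * x1**e1 * x2**e2 in the expanded polynomial
    p = sp.expand(poly)
    for var, e in zip((x0, x1, x2), exponents):
        p = p.coeff(var, e)
    return p

h1 = (x0 - x1)*(x0 - x2)**2*(x1 - x2)*(x0 + x1)**3*(x2 + x1)**3
h2 = (x0 - x1)*(x0 - x2)**2*(x1 - x2)*(x0 + x1)**4*(x2 + x1)**2

print(coeff(h1, (4, 3, 3)))  # coefficient of x0^4 x1^3 x2^3 (the proof states it equals 2)
print(coeff(h2, (2, 3, 5)))  # coefficient of x0^2 x1^3 x2^5 (the proof states it equals 2)
\end{verbatim}
The remaining coefficients can be checked in the same way after introducing the corresponding variables.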
\subsection{Discharging procedure}
In this subsection we use the discharging technique on the vertices of the graph $H$.
For this aim we first define the \emph{weight
function} $\omega: V(H) \rightarrow \mathbb{R}$ by setting $\omega(x)=d(x)-\frac{14}{3}$ for every $x\in V(H)$.
Next we shall apply the so-called \emph{Ghost vertices method}, introduced earlier by Bonamy, Bousquet and Hocquard~\cite{BonamyEtAl},
which is based on the following observation (where, given any subsets $U,U'\subseteq V(H)$ and a vertex $v$, $d_U(v)$ denotes the number of neighbours of $v$ in $U$, while $E(U,U')$ is the set of edges joining $U$ and $U'$ in the graph $H$).
\begin{observation}\label{obs1v}
\noindent Let $V_1 \cup V_2$ be a partition of $V(H)$ where, say, $V_1$ is the set of vertices of degree at least $3$ and $V_2$ is the set of vertices of degree at most $2$ in $H$;
\begin{itemize}
\item every vertex $u$ in $H$ has an initial weight $\omega(u)=d(u)-\frac{14}{3}$.
\item If we can discharge the weights in $H$ so that:
\begin{enumerate}
\item every vertex in $V_1$ has a non-negative weight;
\item and every vertex $u$ in $V_2$ has a final weight of at least $d(u)-\frac{14}{3}+d_{V_1}(u)$, then\\
for the new weight assignment $\omega'$, we have $\sum_{v \in V_2} (d(v)-\frac{14}{3}+d_{V_1}(v)) \leq \sum_{v \in V_2} \omega'(v)$, as well as\\
$\sum_{v \in V} \omega(v)=\sum_{v \in V} \omega'(v)$ and $\sum_{v \in V_1} \omega'(v) \geq 0$. Therefore,
\begin{eqnarray}
\sum_{v \in V_1} \left(d_{V_1}(v)-\frac{14}{3}\right) & \geq & \sum_{v \in V_1} \left(d_{V_1}(v)-\frac{14}{3}\right) + \sum_{v \in V_2} (d(v)-\frac{14}{3}+d_{V_1}(v)) - \sum_{v \in V_2} \omega'(v) \nonumber\\
& \geq & \sum_{v \in V_1} \left(d_{V_1}(v)-\frac{14}{3}\right) + |E(V_1,V_2)|+\sum_{v \in V_2} \left(d(v)-\frac{14}{3}\right) - \sum_{v \in V_2} \omega'(v) \nonumber\\
& \geq & \sum_{v \in V_1} \left(d(v)-\frac{14}{3}\right) + \sum_{v \in V_2} \left(d(v)-\frac{14}{3}\right) - \sum_{v \in V_2} \omega'(v) \nonumber\\
& \geq & \sum_{v \in V} \omega(v) - \sum_{v \in V_2} \omega'(v) \nonumber\\
& \geq & \sum_{v \in V_1} \omega'(v) \nonumber\\
& \geq & 0. \nonumber
\end{eqnarray}
\end{enumerate}
Thus we can conclude that ${\rm mad}(H) \geq {\rm mad}(H[V_1]) \geq \frac{14}{3}$.
\end{itemize}
\end{observation}
In other words, the vertices in $V_2$ can be seen, but, in a way, they do not contribute to the average degree analysis.
\bigskip
In order to finish the proof of Theorem~\ref{th:summad}, it suffices to obtain a contradiction, \emph{e.g.} with the fact that ${\rm mad}(H)<\frac{14}{3}$,
implying that in fact no counterexample to its thesis may exist.
By Observation~\ref{obs1v}, it is thus enough to
redistribute the weight (defined by $\omega$ above) in $H$
so that every vertex of degree at least $3$ has a non-negative resulting weight, every vertex of degree $2$ has weight at least $2-\frac{14}{3}+2$ and every vertex of degree $1$ has weight at least $1-\frac{14}{3}+1$.
\medskip
The discharging rules we shall use for this aim are defined as follows:
\begin{enumerate}
\item[(R1)] A vertex of degree $d\geq 6$ gives $1$ to every adjacent $1$-vertex and to every adjacent $2$-vertex.
\item[(R2)] A vertex of degree $d\geq 6$ gives $\frac59$ to every adjacent $3$-vertex.
\item[(R3)] A vertex of degree $d\geq 5$
gives $\frac{1}{6}$ to every adjacent $4$-vertex.
\end{enumerate}
Let $v$ be a vertex in $H$. We consider different cases depending on the degree of $v$.
\begin{itemize}
\item Assume $d(v)=1$. By $(C1)$, $v$ is adjacent to a vertex of degree at least $6$. Thus, by $(R1)$, $v$ receives $1$. So every vertex of degree $1$ in $H$ has an initial weight of $-\frac{11}{3}$, gives nothing according to our rules and receives $1$, hence has the final weight of $-\frac{8}{3}$.
\item Assume $d(v)=2$. By $(C1)$, $v$ is adjacent to two vertices of degree at least $6$, and thus receives $2 \times 1$ by $(R1)$ and gives away nothing according to the rules above. So every vertex of degree $2$ in $H$ has an initial weight of $-\frac{8}{3}$ and has the final weight of $-\frac{2}{3}$.
\item Assume $d(v)=3$. By $(C3)$, $v$ is adjacent to three vertices of degree at least $6$, and thus receives $3 \times \frac59$ by $(R2)$ and gives away nothing according to the rules above. So every vertex of degree $3$ in $H$ has
the final weight $0$.
\item Assume $d(v)=4$. By $(C2)$, $v$ is adjacent to four vertices of degree at least $5$, and thus receives $4 \times \frac16$ by $(R3)$ and gives away nothing according to the rules above. So every vertex of degree $4$ in $H$ has
the final weight $0$.
\item Assume $d(v)=5$. By $(C3)$, $v$ is not adjacent to a vertex with degree at most $3$, and by $(C4)$, $v$ is adjacent to at most two vertices of degree $4$, and thus gives at most $2 \times \frac16$ by $(R3)$. Hence, $\omega'(v) \ge 5-\frac{14}{3}-2 \times \frac16=0$.
\item Assume $d(v)=6$.
Then consider the following subcases:
\begin{itemize}
\item if $v$ is adjacent to a $3^-$-vertex, then by $(C5)$, $v$ is adjacent to five vertices of degree at least $5$. Hence, by $(R1)$ and $(R2)$, $\omega'(v) \geq 6-\frac{14}{3}-1 \times \max \{1 ; \frac59\}=\frac13 \ge 0$.
\item if $v$ is not adjacent to a $3^-$-vertex, then $v$ is adjacent to at most six $4$-vertices. Hence, by $(R3)$, $\omega'(v) \ge 6-\frac{14}{3}-6 \times \frac16=\frac13 \ge 0$.
In both cases $v$ has a non-negative final weight.
\end{itemize}
\item Assume $d(v) \ge 7$. Recall that by Corollary~\ref{obs2v}, $n_{2^-}(v) \le d(v)-5$. Then consider the following subcases:
\begin{itemize}
\item If $n_{2^-}(v) = d(v)-5$ then by $(C6)$ and $(C7)$, $v$ is not adjacent to another $4^-$-vertex. Hence, by $(R1)$, $\omega'(v) \ge d(v)-\frac{14}{3}-(d(v)-5)\times 1 \ge 0$.
\item If $n_{2^-}(v) = d(v)-6$ then either $v$ is adjacent to no $3$-vertex or, by $(C6)$ and $(C7)$,
to exactly
one $3$-vertex and to no $4$-vertices. Hence, by $(R1)$, $(R2)$ and $(R3)$, $\omega'(v) \ge d(v)-\frac{14}{3}-(d(v)-6)\times 1 - \max\{6\times \frac16;1\times \frac59\} \ge 0$.
\item If $n_{2^-}(v) = d(v)-7$ then $v$ is adjacent to at most three $3$-vertices;
for $d(v)\geq 8$ it follows by $(C7)$, while for $d(v)=7$ by Lemma~\ref{lemma4v}, which implies then that
$n_{4^+}(v) \ge 1+ n_{3^-}(v)(k - d(v))\geq 1+n_{3^-}(v)$.
Hence, by $(R1)$, $(R2)$ and $(R3)$, $\omega'(v) \ge d(v)-\frac{14}{3}-(d(v)-7)\times 1-3\times \frac59 - 4 \times \frac16 \ge 0$.
\item If $n_{2^-}(v) \le d(v)-8$ then by $(C8)$:
\begin{itemize}
\item if $v$ is not adjacent to any $4$-vertex then $v$ is adjacent to
$(d(v)-8-\alpha)$ $2^-$-vertices and to at most $(\alpha+6)$ $3$-vertices for some $\alpha \ge 0$. Hence, by $(R1)$, $(R2)$ and $(R3)$, $\omega'(v) \ge d(v)-\frac{14}{3}-(d(v)-8-\alpha)\times 1-(\alpha+6)\times \frac59=\frac49 \alpha \ge 0$.
\item if $v$ is adjacent to at least one $4$-vertex then $v$ is adjacent to
$(d(v)-8-\alpha)$ $2^-$-vertices,
to
$(\alpha+6-\beta)$ $3$-vertices and to at most $(\beta+2)$ $4$-vertices for some $\alpha \ge 0$ and some $\beta \ge 1$. Hence, by $(R1)$, $(R2)$ and $(R3)$, $\omega'(v) \ge d(v)-\frac{14}{3}-(d(v)-8-\alpha)\times 1-(\alpha+6-\beta)\times \frac59 - (\beta+2)\times \frac16=\frac49 \alpha + \frac{7}{18}\beta - \frac13 \ge 0$.
\end{itemize}
\end{itemize}
\end{itemize}
In all cases the final weight of $v$ meets the requirements of Observation~\ref{obs1v}: it is non-negative whenever $d(v)\geq 3$, while the vertices of degree $1$ and $2$ end up with final weights $-\frac{8}{3}$ and $-\frac{2}{3}$, respectively, as required (these simple fractional computations are also re-checked symbolically in the sketch below).
This, by Observation~\ref{obs1v}, completes the proof of Theorem~\ref{th:summad}. $\blacksquare$
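Let us finally remark that the elementary fractional computations used in the case analysis above can be re-checked symbolically, e.g. with the following short Python/SymPy sketch (the case labels and variable names are ours):
\begin{verbatim}
import sympy as sp

d, a, b = sp.symbols('d alpha beta', nonnegative=True)
m = sp.Rational(14, 3)                         # the threshold 14/3
R2, R3 = sp.Rational(5, 9), sp.Rational(1, 6)  # amounts sent by rules (R2) and (R3)

checks = {
    'd=5':                 5 - m - 2*R3,
    'd=6, 3^- neighbour':  6 - m - 1,
    'd=6, no 3^- nbr':     6 - m - 6*R3,
    'n2 = d-5':            sp.simplify(d - m - (d - 5)),
    'n2 = d-6':            sp.simplify(d - m - (d - 6) - sp.Max(6*R3, R2)),
    'n2 = d-7':            sp.simplify(d - m - (d - 7) - 3*R2 - 4*R3),
    'n2 <= d-8, no 4-v':   sp.simplify(d - m - (d - 8 - a) - (a + 6)*R2),
    'n2 <= d-8, some 4-v':
        sp.simplify(d - m - (d - 8 - a) - (a + 6 - b)*R2 - (b + 2)*R3),
}
for name, value in checks.items():
    print(name, '->', value)  # each value is non-negative for alpha >= 0, beta >= 1
\end{verbatim}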
\section{Introduction}
Recently, neural network-based acoustic models~\cite{sainath2015deep,sak2015acoustic,hsu2016prioritized} have greatly improved the performance of automatic speech recognition (ASR) systems.
Unfortunately, it is well known (e.g., ~\cite{hsu2017unsuperviseddomain}) that ASR performance can degrade significantly when testing in a domain that is mismatched from training.
A major reason is that speech data have complex distributions and contain information about not only linguistic content, but also speaker identity, background noise, room characteristics, etc.
Among these sources of variability, only a subset is relevant to ASR, while the rest can be considered nuisances that hurt performance if the distributions of these attributes are mismatched between training and testing.
To alleviate this issue, some robust ASR research focuses on mapping the out-of-domain data to in-domain data using enhancement-based methods~\cite{narayanan2013ideal,isik2016single,feng2014speech}, which generally requires parallel data from both domains.
Another popular strategy is to train an ASR system with as large, and as diverse a dataset as possible~\cite{li2012improving,seltzer2013investigation};
however, this strategy is not feasible when the labeled data are not available for all domains.
Alternatively, robustness can also be achieved by training using features that are domain invariant~\cite{kingsbury1998robust,stern2012features,vinyals2011comparing,sainath2012auto,sun2017unsupervised}.
In this case, we would not have domain mismatch issues, because domain information is now transparent to the ASR system.
In this paper, we consider the same highly adverse scenario as in~\cite{hsu2017unsuperviseddomain}, where both clean and noisy speech are available, but the transcripts are only available for clean speech.
We study the use of a recently proposed model, called Factorized Hierarchical Variational Autoencoder (FHVAE)~\cite{hsu2017unsupervised}, for learning domain invariant ASR features without supervision.
FHVAE models learn to factorize sequence-level attributes and segment-level attributes into different latent variables.
By training an ASR system on the latent variables that encode segment-level attributes, and testing the ASR in mismatched domains, we demonstrate that these latent variables contain linguistic information and are more domain invariant.
Comprehensive experiments study the effect of different FHVAE architectures, training strategies, and the use of derived domain features on the robustness of ASR systems.
Our proposed method is evaluated on Aurora-4~\cite{pearce2002aurora} and CHiME-4~\cite{vincent2016analysis} datasets, which contain artificially corrupted noisy speech and real noisy speech respectively.
The proposed FHVAE-based feature reduces the absolute word error rate (WER) by 27\% to 41\% compared to filter bank features, and by 14\% to 16\% compared to variational autoencoder-based features.
We have released the code of FHVAEs described in the paper.\footnote{\url{https://github.com/wnhsu/FactorizedHierarchicalVAE}}
The rest of the paper is organized as follows.
In Section~\ref{sec:method}, we introduce the FHVAE model and a method to extract domain invariant features. Section~\ref{sec:setup} describes the experimental setup, while Section~\ref{sec:exp} presents results and discussion.
We conclude our work in Section~\ref{sec:conclu}.
\section{Learning Domain Invariant Features}\label{sec:method}
\subsection{Modeling a Generative Process of Speech Segments}
As mentioned above, generation of speech data often involves many independent factors, which are however unseen in the unsupervised setting.
It is therefore natural to describe such a generative process using a latent variable model, where a latent variable ${\mathbf z}$ is first sampled from a prior distribution, and a speech segment ${\mathbf x}$ is then sampled from a distribution conditioned on ${\mathbf z}$.
In~\cite{hsu2017learning}, a convolutional variational autoencoder (VAE) is proposed to model such process;
by assuming the prior to be a diagonal Gaussian, it is shown that the VAE automatically learns to model independent attributes regarding generation, such as the speaker identity and the linguistic content, using orthogonal latent subspaces.
This result provided a mechanism of potentially learning domain invariant features for ASR by discovering latent variables that do not contain domain information.
\subsection{Extracting Domain Invariant Features from FHVAEs}
The generation of sequential data often involves multiple independent factors operating at different scales.
For instance, the speaker identity affects the fundamental frequency (F0) at the utterance level, while the phonetic content affects spectral characteristics at the segment level.
As a result, sequence-level attributes, such as F0 and volume, tend to have a smaller amount of variation within an utterance than between utterances, while the other attributes, such as spectral contours, tend to have similar amounts of variation within and between utterances.
Based on this observation, FHVAEs~\cite{hsu2017unsupervised} formulate the generative process of sequential data with a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors to different sets of latent variables.
Specifically, given a dataset $\mathcal{D} = \{ {\mathbf X}^{(i)} \}_{i=1}^M$ consisting of $M$ i.i.d. sequences, where ${\mathbf X}^{(i)} = \{ {\mathbf x}^{(i,n)} \}_{n=1}^{N^{(i)}}$ is a sequence of $N^{(i)}$ segments (sub-sequence), a sequence ${\mathbf X}$ of $N$ segments is assumed to be generated from a random process that involves latent variables ${\mathbf Z}_1 = \{ {\mathbf z}_1^{(n)} \}_{n=1}^N$, ${\mathbf Z}_2 = \{ {\mathbf z}_2^{(n)} \}_{n=1}^N$, and ${\bm \mu}_2$ as follows:
\begin{enumerate*}[label=(\arabic*)]
\item an \textit{s-vector} ${\bm \mu}_2$ is drawn from a prior distribution $p_{\theta}({\bm \mu}_2) = \mathcal{N}({\bm \mu}_2 | \bm{0}, \sigma^2_{{\bm \mu}_2}\bm{I})$;
\item $N$ i.i.d. \textit{latent segment variables} $\{ {\mathbf z}_1^{(n)} \}_{n=1}^N$ and \textit{latent sequence variables} $\{ {\mathbf z}_2^{(n)} \}_{n=1}^N$ are drawn from a sequence-independent prior $p_{\theta}({\mathbf z}_1) = \mathcal{N}({\mathbf z}_1 | \bm{0}, \sigma^2_{{\mathbf z}_1}\bm{I})$ and a sequence-dependent prior $p_{\theta}({\mathbf z}_2 | {\bm \mu}_2) = \mathcal{N}({\mathbf z}_2 | {\bm \mu}_2, \sigma^2_{{\mathbf z}_2}\bm{I})$ respectively;
\item $N$ i.i.d. speech segments $\{ {\mathbf x}^{(n)} \}_{n=1}^N$ are drawn from a conditional distribution $p_{\theta}({\mathbf x} | {\mathbf z}_1, {\mathbf z}_2) = \mathcal{N}(\bm{x} | f_{\mu_x}({\mathbf z}_1, {\mathbf z}_2), diag(f_{\sigma^2_x}({\mathbf z}_1, {\mathbf z}_2)))$, whose mean and diagonal variance are parameterized by neural networks.
\end{enumerate*}
The joint probability for a sequence is formulated in Eq.~\ref{eq:joint}:
\begin{equation}
p_{\theta}({\bm \mu}_2) \prod_{n=1}^{N} p_{\theta}({\mathbf x}^{(n)} | {\mathbf z}_1^{(n)}, {\mathbf z}_2^{(n)}) p_{\theta}({\mathbf z}_1^{(n)})p_{\theta}({\mathbf z}_2^{(n)} | {\bm \mu}_2). \label{eq:joint}
\end{equation}
Based on this formulation, ${\bm \mu}_2$ can be regarded as a summarization of sequence-level attributes for a sequence,
and ${\mathbf z}_2$ is encouraged to encode sequence-level attributes for a segment that are similar within an utterance.
Consequently, ${\mathbf z}_1$ encodes the residual segment-level attributes for a segment,
such that ${\mathbf z}_1$ and ${\mathbf z}_2$ together provide sufficient information for generating a segment.
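To make the generative process above concrete, a minimal NumPy sketch of steps (1)--(3) could look as follows; the decoder networks $f_{\mu_x}$ and $f_{\sigma^2_x}$ are stand-ins (a random linear map and a constant variance), and all dimensionalities and prior variances are illustrative assumptions rather than the settings used in our experiments:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z_dim, x_dim, N = 32, 80 * 20, 10    # latent dims, segment dim (80 FBank x 20 frames), #segments
sigma_mu2, sigma_z1, sigma_z2 = 1.0, 1.0, 0.25

# stand-ins for the decoder networks f_{mu_x}, f_{sigma^2_x}
W_mu = rng.normal(size=(2 * z_dim, x_dim))
def f_mu_x(z1, z2):  return np.concatenate([z1, z2]) @ W_mu
def f_var_x(z1, z2): return 0.1 * np.ones(x_dim)

# (1) draw the s-vector mu_2 once per sequence
mu2 = rng.normal(0.0, sigma_mu2, size=z_dim)

segments = []
for n in range(N):
    # (2) latent segment variable z1 (sequence-independent prior)
    #     and latent sequence variable z2 (prior centred at mu_2)
    z1 = rng.normal(0.0, sigma_z1, size=z_dim)
    z2 = rng.normal(mu2, sigma_z2)
    # (3) decode the segment x ~ N(f_mu_x(z1, z2), diag(f_var_x(z1, z2)))
    x = rng.normal(f_mu_x(z1, z2), np.sqrt(f_var_x(z1, z2)))
    segments.append(x)
\end{verbatim}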
Since exact posterior inference is intractable, FHVAEs introduce an inference model $q_{\phi}( {\mathbf Z}_1^{(i)}, {\mathbf Z}_2^{(i)}, {\bm \mu}_2^{(i)} | \bm{X}^{(i)})$ as formulated in Eq.~\ref{eq:inf} that approximates the true posterior $p_{\theta}({\mathbf Z}_1^{(i)}, {\mathbf Z}_2^{(i)}, {\bm \mu}_2^{(i)} | \bm{X}^{(i)})$:
\begin{align}
&q_{\phi}( {\bm \mu}_2^{(i)}) \prod_{n=1}^{N^{(i)}} q_{\phi}({\mathbf z}_1^{(i,n)} | {\mathbf x}^{(i,n)}, {\mathbf z}_2^{(i,n)}) q_{\phi}({\mathbf z}_2^{(i,n)} | \bm{x}^{(i,n)}), \label{eq:inf}
\end{align}
from which we observe that inference of ${\mathbf z}_1^{(i,n)}$ and ${\mathbf z}_2^{(i,n)}$ only depends on the corresponding segment ${\mathbf x}^{(i,n)}$;
in particular, the posteriors,
$q_{\phi}({\mathbf z}_1 | {\mathbf x}, {\mathbf z}_2) = \mathcal{N}({\mathbf z}_1 | g_{\mu_{{\mathbf z}_1}}({\mathbf x}, {\mathbf z}_2), diag( g_{\sigma^2_{{\mathbf z}_1}}({\mathbf x}, {\mathbf z}_2) ) )$ and
$q_{\phi}({\mathbf z}_2 | {\mathbf x}) = \mathcal{N}({\mathbf z}_2 | g_{\mu_{\bm{z}_2}}({\mathbf x}), diag( g_{\sigma^2_{{\mathbf z}_2}}({\mathbf x}) ) )$, are approximated with diagonal Gaussian distributions whose mean and diagonal variance are also parameterized by neural networks.
On the other hand, $q_{\phi}({\bm \mu}_2^{(i)})$ is modeled as an isotropic Gaussian, $\mathcal{N}({\bm \mu}_2^{(i)} | g_{\mu_{{\bm \mu}_2}}(i), \sigma^2_{{\bm {\tilde \mu}}_2}\bm{I})$, where $g_{\mu_{{\bm \mu}_2}}(i)$ is a trainable lookup table of the posterior mean of ${\bm \mu}_2$ for each training sequence.
Estimation of ${\bm \mu}_2$ for testing sequences can be found in~\cite{hsu2017unsupervised}.
As pointed out in~\cite{hsu2017unsuperviseddomain}, nuisance attributes regarding ASR, such as speaker identity, room geometry, and background noise, are generally consistent within an utterance.
If we treat each utterance as a sequence, these attributes then become sequence-level attributes, which would be encoded by ${\mathbf z}_2$ and ${\bm \mu}_2$.
As a result, ${\mathbf z}_1$ encodes the residual linguistic information and is invariant to these nuisance attributes, which is our desired domain invariant ASR feature.
\subsection{Training FHVAE and Preventing S-Vector Collapsing}
As in other generative models, FHVAEs aim to maximize the marginal likelihood of the observed dataset;
due to the intractability of the exact posterior, FHVAEs optimize the \textit{segment variational lower bound}, ${\cal L}(\theta, \phi; {\mathbf x}^{(i,n)})$, which is formulated as follows:
\begin{align}
& \mathbb{E}_{q_{\phi}(\bm{z}_1^{(i,n)}, \bm{z}_2^{(i,n)} | \bm{x}^{(i,n)})} \big[
\log p_{\theta}(\bm{x}^{(i,n)} | \bm{z}_1^{(i,n)}, \bm{z}_2^{(i,n)}) \big] \nonumber \\
& - \mathbb{E}_{q_{\phi}(\bm{z}_2^{(i,n)} | \bm{x}^{(i,n)})} \big[
D_{KL}(
q_{\phi}(\bm{z}_1^{(i,n)} | \bm{x}^{(i,n)}, \bm{z}_2^{(i,n)}) ||
p_{\theta}(\bm{z}_1^{(i,n)})) \big] \nonumber \\
& - D_{KL}(
q_{\phi}(\bm{z}_2^{(i,n)} | \bm{x}^{(i,n)}) ||
p_{\theta}(\bm{z}_2^{(n)} | g_{\mu_{{\bm \mu}_2}}(i))) + \dfrac{1}{N} \log p_{\theta}(g_{\mu_{{\bm \mu}_2}}(i)). \nonumber
\end{align}
Notice that if the ${\bm \mu}_2$ are the same for all utterances, an FHVAE would then degenerate to a vanilla VAE.
To prevent ${\bm \mu}_2$ from collapsing, we can add an additional discriminative objective, $\log p(i|{\mathbf z}_2^{(i,n)})$, that encourages the discriminability of ${\mathbf z}_2$ regarding which utterance the segment is drawn from.
Specifically, we define it as
$\log p_\theta( {\mathbf z}_2^{(i,n)} | g_{\mu_{{\bm \mu}_2}}(i)) - \log \sum_{j=1}^M p_\theta( {\mathbf z}_2^{(i,n)} | g_{\mu_{{\bm \mu}_2}}(j) ) $.
By combining the two objectives with a weighting parameter $\alpha$, we obtain the \textit{discriminative segment variational lower bound}:
\begin{equation}
\mathcal{L}^{dis}(\theta,\phi; \bm{x}^{(i,n)})
= \mathcal{L}(\theta,\phi; \bm{x}^{(i,n)})
+ \alpha \log p(i | \bm{z}_2^{(i,n)}). \label{eq:lb_dis}
\end{equation}
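The following NumPy sketch illustrates how a single-sample Monte Carlo estimate of $\mathcal{L}^{dis}$ could be computed for one segment with the reparameterization trick; the functions \texttt{enc\_z2}, \texttt{enc\_z1} and \texttt{dec} stand for the encoder and decoder networks, \texttt{mu2\_table} for the lookup table $g_{\mu_{{\bm \mu}_2}}$, and the prior variances are placeholders. This is only a schematic rendering of the objective above; the released code referenced in the introduction contains the actual implementation.
\begin{verbatim}
import numpy as np

def log_gauss(x, mu, var):
    # log N(x | mu, diag(var)), summed over dimensions
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ) for diagonal Gaussians
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def disc_lower_bound(x, i, N_i, enc_z2, enc_z1, dec, mu2_table,
                     alpha=10.0, var_z1=1.0, var_z2=0.25, var_mu2=1.0, rng=None):
    # single-sample Monte Carlo estimate of L^dis for segment x of training sequence i
    rng = rng or np.random.default_rng(0)
    qz2_mu, qz2_var = enc_z2(x)
    z2 = qz2_mu + np.sqrt(qz2_var) * rng.standard_normal(qz2_mu.shape)  # reparameterization
    qz1_mu, qz1_var = enc_z1(x, z2)
    z1 = qz1_mu + np.sqrt(qz1_var) * rng.standard_normal(qz1_mu.shape)
    px_mu, px_var = dec(z1, z2)

    mu2_i = mu2_table[i]
    lb = (log_gauss(x, px_mu, px_var)                       # reconstruction term
          - kl_diag_gauss(qz1_mu, qz1_var, 0.0, var_z1)     # KL for z1
          - kl_diag_gauss(qz2_mu, qz2_var, mu2_i, var_z2)   # KL for z2
          + log_gauss(mu2_i, 0.0, var_mu2) / N_i)           # (1/N) log p(mu_2)

    # discriminative term: log p(i | z2) = log p(z2 | mu2_i) - log sum_j p(z2 | mu2_j)
    log_pz2 = np.array([log_gauss(z2, m, var_z2) for m in mu2_table])
    disc = log_pz2[i] - np.logaddexp.reduce(log_pz2)
    return lb + alpha * disc
\end{verbatim}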
\begin{table*}[t!]
\centering
\begin{tabular}{|llllll|l|llll|}
\hline
\multicolumn{6}{|c}{Setting} & \multicolumn{1}{|c|}{WER (\%)} & \multicolumn{4}{c|}{WER (\%) by Condition} \\
Exp. Index & Feature & \#Layers & \#Units & $\alpha$ & Seq. Label & Avg. & A & B & C & D \\
\hline
\multirow{4}{*}{1} & FBank & - & - & - & - & 65.64 & 3.21 & 61.61 & 51.78 & 82.39 \\
& ${\mathbf z}$ & 1/1 & 256/256 & - & - & 44.79 & 4.22 & 38.16 & 36.11 & 59.63 \\
& ${\mathbf z}$ & 1/1 & 512/256 & - & - & 40.31 & 4.35 & 33.83 & 34.43 & 53.77 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & \textbf{26.58} & 4.54 & 19.28 & 20.85 & 38.50\\
\hline\hline
\multirow{3}{*}{2} & ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & 26.58 & 4.54 & 19.28 & 20.85 & 38.50\\
& ${\mathbf z}_1$ & 2/2 & 256/256 & 10 & uttid & 25.54 & 4.11 & 16.90 & 20.62 & 38.58 \\
& ${\mathbf z}_1$ & 3/3 & 256/256 & 10 & uttid & \textbf{24.30} & 4.91 & 15.44 & 22.83 & 36.63 \\
\hline\hline
\multirow{3}{*}{3} & ${\mathbf z}_1$ & 1/1 & 128/128 & 10 & uttid & 34.66 & 5.06 & 26.70 & 25.39 & 49.09 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & \textbf{26.58} & 4.54 & 19.28 & 20.85 & 38.50\\
& ${\mathbf z}_1$ & 1/1 & 512/512 & 10 & uttid & 26.97 & 5.32 & 18.18 & 23.13 & 40.01 \\
\hline\hline
\multirow{5}{*}{4} & ${\mathbf z}_1$ & 1/1 & 256/256 & 0 & uttid & 33.30 & 4.86 & 25.67 & 25.46 & 46.97 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 5 & uttid & 30.55 & 4.63 & 22.66 & 23.33 & 43.96 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & \textbf{26.58} & 4.54 & 19.28 & 20.85 & 38.50 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 15 & uttid & 29.92 & 5.01 & 20.82 & 24.79 & 44.03 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 20 & uttid & 32.64 & 5.57 & 25.48 & 24.53 & 45.66 \\
\hline\hline
\multirow{3}{*}{5} & ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & \textbf{26.58} & 4.54 & 19.28 & 20.85 & 38.50\\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & noise & 32.27 & 4.33 & 23.89 & 28.96 & 45.86 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & speaker & 34.95 & 4.39 & 27.27 & 32.22 & 48.20 \\
\hline\hline
\multirow{2}{*}{6} & ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & \textbf{26.58} & 4.54 & 19.28 & 20.85 & 38.50\\
& ${\mathbf z}_1$-${\bm \mu}_2$ & 1/1 & 256/256 & 10 & uttid & 43.61 & 5.08 & 42.47 & 27.55 & 53.85 \\
\hline
\end{tabular}
\caption{Aurora-4 test\_eval92 set word error rate of acoustic models trained on different features.}
\label{tab:a4_wer}
\end{table*}
\section{Experiment Setup}\label{sec:setup}
To evaluate the effectiveness of the proposed method on extracting domain invariant features, we consider domain mismatched ASR scenarios. Specifically, we train an ASR system using a clean set, and test the system on both a clean and noisy set. The idea is that one would observe a smaller performance discrepancy between different domains if the feature representation is more domain invariant. We next introduce the datasets, as well as the model architectures and training configurations for the experiments.
\subsection{Dataset}
We use Aurora-4~\cite{pearce2002aurora} as the primary dataset for our experiments. Aurora-4 is a broadband corpus designed for noisy speech recognition tasks based on the Wall Street Journal (WSJ0) corpus~\cite{garofalo2007csr}.
Two microphone types (clean and channel) are included, and six noise types are artificially added to both microphone types, which results in four conditions: clean (A), noisy (B), channel (C), and channel+noisy (D).
We use the multi-condition development set for training the VAE and FHVAE models, because the development set contains both noise labels and speaker labels for each utterance, which are used in \textit{Exp. Index 5}, while the training set only contains speaker labels.
The ASR system is trained on the clean \texttt{train\_si84\_clean} set and evaluated on the multi-condition \texttt{test\_eval92} set.
To verify our proposed method on a non-artificial dataset, we repeat our experiments on the CHiME-4~\cite{vincent2016analysis} dataset, which contains real distant-talking recordings in noisy environments.
We use the original 7,138 clean utterances and the 1,600 single channel real noisy utterances in the training partition to train the VAE and FHVAE models.
The ASR system is trained on the original clean training set and evaluated on the CHiME-4 development set.
\subsection{VAE/FHVAE Setup and Training}
The VAE is trained with stochastic gradient descent using a mini-batch size of 128 without clipping to minimize the negative variational lower bound plus an $L2$-regularization with weight $10^{-4}$. The Adam~\cite{kingma2014adam} optimizer is used with $\beta_1=0.95$, $\beta_2=0.999$, $\epsilon=10^{-8}$, and initial learning rate of $10^{-3}$. Training is terminated if the lower bound on the development set does not improve for 50 epochs.
The FHVAE is trained with the same configuration and optimization method, except that the loss function is replaced with the negative discriminative segment variational lower bound.
Seq2Seq-VAE~\cite{hsu2017unsuperviseddomain} and Seq2Seq-FHVAE~\cite{hsu2017unsupervised} architectures with LSTM units are used for all experiments.
We let the latent space of the VAEs contain 64 dimensions. Since the FHVAE models have two latent spaces, we let each of them be 32 dimensional.
Other hyper-parameters are explored in our experiments.
Inputs to VAE/FHVAE, ${\mathbf x}$, are chunks of 20 consecutive speech frames randomly drawn from utterances, where each frame is represented as 80 dimensional filter bank (FBank) energies.
To extract features from the VAE and FHVAE for ASR training, for each utterance, we compute and concatenate the posterior mean and variance of chunks shifted by one frame, which generates a sequence of new features that are 19 frames shorter than the original sequence.
We pad the first frame and the last frame at each end to match the original length.
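As an illustration, the chunk-wise feature extraction just described could be sketched as follows; \texttt{z1\_posterior} stands for the trained FHVAE encoder returning the posterior mean and variance of ${\mathbf z}_1$ for one chunk, and the exact split of the 19 padded frames between the two ends is our assumption, since only the padding itself is specified above:
\begin{verbatim}
import numpy as np

def extract_asr_features(fbank, z1_posterior, chunk_len=20):
    # fbank: (T, 80) filter bank utterance; returns a (T, 2 * z1_dim) feature sequence
    T = fbank.shape[0]
    feats = []
    for t in range(T - chunk_len + 1):             # chunks shifted by one frame
        mu, var = z1_posterior(fbank[t:t + chunk_len])
        feats.append(np.concatenate([mu, var]))    # concatenate posterior mean and variance
    feats = np.stack(feats)                        # (T - 19, 2 * z1_dim)
    # pad with copies of the first/last derived frame to match the original length
    pad_left = (chunk_len - 1) // 2
    pad_right = chunk_len - 1 - pad_left
    return np.concatenate([np.repeat(feats[:1], pad_left, axis=0),
                           feats,
                           np.repeat(feats[-1:], pad_right, axis=0)])
\end{verbatim}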
\subsection{ASR Setup and Training}
Kaldi~\cite{povey2011kaldi} is used for feature extraction, decoding, forced alignment, and training of an initial HMM-GMM model on the original clean utterances.
The recipe provided by the CHiME-4 challenge (\texttt{run\_gmm.sh}) and the Kaldi Aurora-4 recipe are adapted by only changing the training data being used.
The Computational Network Toolkit (CNTK)~\cite{yu2014introduction} is used for neural network-based acoustic model training.
For all experiments, the same LSTM acoustic model~\cite{sak2014long} with the architecture proposed in~\cite{zhang2016highway} is applied, which has 1,024 memory cells and a 512-node projection layer for each LSTM layer, and 3 LSTM layers in total.
Following the training setup in~\cite{hsu2016exploiting}, LSTM acoustic models are trained with a cross-entropy criterion, using truncated backpropagation-through-time (BPTT)~\cite{williams1990efficient} to optimize.
Each BPTT segment contains 20 frames, and each mini-batch contains 80 utterances, since we find empirically that using 80 utterances yields performance similar to using 40 utterances.
A momentum of 0.9 is used starting from the second epoch~\cite{hsu2016prioritized}.
Ten percent of the training data is held out as a validation set to control the learning rate. The learning rate is halved when no gain is observed after an epoch. The same language model is used for decoding for all experiments.
\begin{table*}[tbh]
\centering
\begin{tabular}{|llllll|cc|cccc|}
\hline
\multicolumn{6}{|c}{Setting} & \multicolumn{2}{|c|}{WER (\%)} & \multicolumn{4}{c|}{WER (\%) by Noise Type} \\
Exp. Index & ASR Feature & \#Layers & \#Units & $\alpha$ & Seq. Label & Clean & Noisy & BUS & CAF & PED & STR \\
\hline
\multirow{3}{*}{1} & FBank & - & - & - & - & 19.37 & 87.69 & 95.56 & 92.05 & 78.77 & 84.37 \\
& ${\mathbf z}$ & 1/1 & 512/256 & - & - & 19.47 & 73.95 & 70.10 & 91.45 & 64.26 & 69.99 \\
& ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & 19.57 & \textbf{67.94} & 71.96 & 79.37 & 59.32 & 61.11 \\
\hline\hline
\multirow{3}{*}{2} & ${\mathbf z}_1$ & 1/1 & 256/256 & 10 & uttid & 19.57 & 67.94 & 71.96 & 79.37 & 59.32 & 61.11 \\
& ${\mathbf z}_1$ & 2/2 & 256/256 & 10 & uttid & 19.73 & 62.44 & 71.28 & 71.86 & 52.46 & 54.18 \\
& ${\mathbf z}_1$ & 3/3 & 256/256 & 10 & uttid & 19.52 & \textbf{60.39} & 69.13 & 66.24 & 51.22 & 54.96 \\
\hline
\end{tabular}
\caption{CHiME-4 development set word error rate of acoustic models trained on different features.}
\label{tab:chime4_wer}
\end{table*}
\section{Experimental Results and Discussion}\label{sec:exp}
In this section, we report the experimental results on both datasets, and provide insights on the outcome.
Table~\ref{tab:a4_wer} and \ref{tab:chime4_wer} summarize the results on Aurora-4 and CHiME-4 respectively.
For both tables, different experiments are separated by double horizontal lines and indexed by the \textit{Exp. Index} on the first column.
The second column, \textit{Feature}, refers to the frame representations used for training ASR models.
The third to sixth columns describe the model configuration and the discriminative training weight of the VAE or FHVAE models.
We separate the encoder and decoder parameters by ``/'' in the third and fourth columns.
Averaged and by-condition word error rate (WER) are shown in the rest of the columns.
\subsection{Baseline}
We start by establishing Aurora-4 baseline results for acoustic models trained on different types of feature representations, including
\begin{enumerate*}[label=(\arabic*)]
\item FBank,
\item latent variable, ${\mathbf z}$, extracted from the VAE, and
\item latent segment variable, ${\mathbf z}_1$, extracted from the FHVAE.
\end{enumerate*}
Because each FHVAE model has two encoders, to have a fair comparison between VAE and FHVAE models, we also consider a VAE model with 512 hidden units at each encoder layer.
The results are shown in Table~\ref{tab:a4_wer} \textit{Exp. Index 1}.
As mentioned, condition A is the matched domain, while conditions B, C, and D are all mismatched domains.
FBank features degrade significantly in the mismatched conditions, incurring an absolute WER increase of between 49\% and 79\%.
On the other hand, both VAE and FHVAE models improve the performance in the mismatched domains by a large margin, with only a slight degradation in the matched domain.
In particular, the features learned by the FHVAE consistently outperform the VAE features in all mismatched conditions by 14\% absolute WER reduction.
We believe that this experiment verifies that FHVAEs can successfully retain domain invariant linguistic features in ${\mathbf z}_1$, while encoding domain related information into ${\mathbf z}_2$.
In contrast, as the results suggest, VAEs encode all the information into a single set of latent variables, ${\mathbf z}$, which still contains domain related information that can hurt ASR performance in the mismatched domains.
\subsection{Comparing Model Architectures}
We next explore the optimal FHVAE architectures for extracting domain invariant features.
In particular, we study the effect of the number of hidden units at each layer and the number of layers.
Results of each variant are listed in Table~\ref{tab:a4_wer} \textit{Exp. Index 2} and \textit{Exp. Index 3} respectively.
Regarding the averaged WER, the model with 256 hidden units at each layer and in total three layers achieves the lowest WER (24.30\%).
Interestingly, if we break down the WER by condition, it can be observed that increasing the FHVAE model capacity (i.e. increasing the number of layers or hidden units) helps reduce the WER in the noisy condition (B), but degrades performance in the channel-mismatched condition (C) beyond 256 hidden units and 2 layers.
\subsection{Effect of FHVAE Discriminative Training}
Speaker verification experiments in~\cite{hsu2017unsupervised} suggest that discriminative training facilitates factorizing segment-level attributes and sequence-level attributes into two sets of latent variables. Here we study the effect of discriminative training on learning robust ASR features, and show the results in Table~\ref{tab:a4_wer} \textit{Exp. Index 4}. When $\alpha = 0$, the model is not trained with the discriminative objective.
When increasing the discriminative weight from 0 to 10, we observe consistent improvement in all 4 conditions due to better factorization of segment and sequence information; however, when further increasing the weight to 20, the performance starts to degrade. This is because the discriminative objective can adversely affect the modeling capacity by constraining the expressiveness of the latent sequence variables.
\subsection{Choice of Sequence Label}
A core idea of FHVAE is to learn sequence-specific priors to model the generation of sequence-level attributes, which have a smaller amount of variation within a sequence.
If we treat each utterance as one sequence, then both speaker and noise information belong to the sequence-level attributes, because they are consistent within an utterance.
Alternatively, we consider two FHVAE models that learn speaker-specific priors and noise-specific priors respectively.
This can be easily achieved by concatenating sequences of the same speaker label or noise label, and treating it as one sequence used for FHVAE training.
We report the results in Table~\ref{tab:a4_wer} \textit{Exp. Index 5}.
It may at first seem surprising that utilizing supervised information in this fashion does not improve performance.
We believe that concatenating utterances actually discards some useful information with respect to learning domain invariant features.
FHVAEs use latent segment variables to encode attributes that are not consistent within a sequence.
By concatenating speaker utterances, noise information is no longer consistent within sequences, and would thus be encoded into latent segment variables;
similarly, latent segment variables would not be speaker invariant in the other case.
\subsection{Use of S-Vector}
Lastly, we study the use of s-vectors, ${\bm \mu}_2$, derived from the FHVAE model, which can be seen as a summarization of sequence-level attributes of an utterance. We apply the same procedure as i-vector based speaker adaptation~\cite{saon2013speaker}: For each utterance, we first estimate its s-vector, and then concatenate s-vectors with the feature representation of each frame to generate the new feature sequence.
Results are shown in Table~\ref{tab:a4_wer} \textit{Exp. Index 6}, from which we observe a significant degradation of WER that is similar to those of the VAE models.
This is reasonable because ${\mathbf z}_1$ and ${\bm \mu}_2$ in combination actually contain similar information to the latent variable ${\mathbf z}$ in VAE models, and the degradation is due to the mismatch between the distributions of ${\bm \mu}_2$ in the training and testing sets.
\subsection{Verifying Results on CHiME4}
In this section, we repeat the baseline and the layer experiments on the CHiME-4 dataset, in order to verify the effectiveness of the FHVAE and the optimality of the FHVAE architecture on a non-artificial dataset.
The results are shown in Table~\ref{tab:chime4_wer}.
From \textit{Exp. Index 1}, we see that the same trend applies to the CHiME-4 dataset, where the latent segment variables from the FHVAE outperform those from the VAE, and both latent variable representations outperform FBank features.
For the FHVAE architectures, a 7\% absolute WER decrease is achieved by increasing the number of encoder/decoder layers from 1 to 3, which is also consistent with the trends we saw on Aurora-4.
\section{Conclusion and Future Work}\label{sec:conclu}
In this paper, we conduct comprehensive experiments studying the use of FHVAE models as domain invariant ASR feature extractors.
Our feature demonstrates superior robustness in mismatched domains compared to FBank and VAE-based features by achieving 41\% and 27\% absolute WER reduction on Aurora-4 and CHiME-4 respectively.
In the future, we plan to study FHVAE-based augmentation methods similar to~\cite{hsu2017unsuperviseddomain}.
\vfill
\pagebreak
\clearpage
\bibliographystyle{IEEEbib}
\section{Introduction}\label{sec:intro}
As noted by Herbert Simon when receiving his Nobel Prize, ``decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world.'' Oriented around the former, much of the bandit learning literature has focused on algorithms that aim to converge on an optimal action. As this area progresses, researchers study increasingly complex models, often with very large or infinite action sets. For some such models, convergence to optimality can be a very slow process. It is natural to ask whether a satisficing action can be identified much more quickly. If so, that may be preferable, especially considering that the model itself is a simplified approximation to reality and thus even an exact optimal action is not truly optimal. The following example illustrates the issue. \\
\begin{example}{\bf (Many-Armed Deterministic Bandit)}
\label{ex:many-armed}
Consider an action set $\mathcal{A}=\{1,\ldots,K\}$. Each action $a \in \mathcal{A}$ results in reward $\theta_a$. We refer to this as a {\it deterministic bandit} because the realized reward
is determined by the action and not distorted by noise. The agent begins with a prior over each $\theta_a$ that is independent and uniform over $[0,1]$
and sequentially applies actions $A_0,A_1,A_2,\ldots$, selected by an algorithm that adapts decisions as rewards are observed.
As $K$ grows, it takes longer to identify the optimal action $A^* = \argmax_{a \in \mathcal{A}} \theta_a$. Indeed, for any algorithm, $\mathbb{P}(A^* \in \{A_0,\ldots A_{t-1}\}) \leq t/K$.
Therefore, no algorithm can expect to select $A^*$ within time $t\ll K$. On the other hand, by simply selecting actions in order, with $A_0 = 1, A_1 = 2, A_2 = 3, \ldots$,
the agent can expect to identify an $\epsilon$-optimal action within $t = 1/\epsilon$ time periods, independent of $K$.
\end{example}
\vspace{\baselineskip}
It is disconcerting that popular algorithms perform poorly when specialized to this simple problem. Thompson sampling (TS) \cite{thompson1933},
for example, is likely to sample a new action
in each time period so long as $t \ll K$. The underlying issue is most pronounced in the asymptotic regime of $K \to \infty$,
for which TS never repeats any action because, at any point in time, there will be actions better than those previously selected.
A surprisingly simple modification offers dramatic improvement: settle for the first action $a$ for which $\theta_a \geq 1-\epsilon$.
This alternative can be thought of as a variation of TS that aims to learn a {\it satisficing} action $\tilde{A} = \min\{a: \theta_a \geq 1-\epsilon\}$.
We will refer to an algorithm that samples from the posterior
distribution of a satisficing action instead of the optimal action, as {\it satisficing Thompson sampling} (STS).
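As a rough illustration (not part of the analysis that follows), the Python sketch below simulates Example 1 together with the settle-for-first rule; in the deterministic setting, STS essentially reduces to scanning actions in order and stopping at the first one whose observed reward clears the threshold $1-\epsilon$. The constants \texttt{K} and \texttt{eps} are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, eps = 10_000, 0.05
theta = rng.uniform(size=K)    # deterministic rewards theta_a ~ Uniform[0,1]

# Settle-for-first: scan actions in order and stop at the first action whose
# (noiselessly observed) reward is within eps of the best possible value 1.
first_good = next((a for a in range(K) if theta[a] >= 1 - eps), K - 1)
print(first_good + 1)          # pulls needed; ~1/eps = 20 on average, independent of K
\end{verbatim}
In contrast, any rule that insists on identifying $A^*$ must effectively examine all $K$ actions.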
While stylized, the above example captures the essence of a basic dilemma faced in all decision problems and not adequately addressed by
popular algorithms. The underlying issue is time preference. In particular, if an agent is only concerned about performance over an
asymptotically long time horizon, it is reasonable to aim at learning $A^*$, while this can be a bad idea if shorter term performance matters
and a satisficing action $\tilde{A}$ can be learned more quickly.
To model time preference and formalize benefits of STS, we will assess performance in terms of expected discounted regret,
which for the many-armed deterministic bandit can be written as $\mathbb{E}\left[\sum_{t=0}^\infty \alpha^t (\theta_{A^*} - \theta_{A_t})\right]$.
The constant $\alpha \in [0,1]$ is a discount factor that conveys time preference.
It is easy to show, as is done through Theorem \ref{thm: regret of TS} in the appendix, that in the asymptotic regime of $K \to \infty$, TS experiences expected discounted regret of
$1/(2(1-\alpha))$, whereas that of STS is bounded above by $1/\sqrt{1-\alpha}$. For $\alpha$ close to $1$, we have $1/\sqrt{1-\alpha} \ll 1/(1-\alpha)$, and therefore STS vastly outperforms TS.
In fact, as $\alpha$ approaches $1$, the ratio between expected discounted regret of TS and that of STS goes to infinity.
This stylized example demonstrates potential advantages of STS.
Of course, satisficing can also play a critical role in more complicated decision problems. The following example provides one simple illustration by adding a structured correlation pattern to the infinite-armed deterministic bandit treated in Example 1. \\
\begin{example}{\bf (Infinite-Armed Deterministic Bandit With Hierarchical Structure)}
A website seeks the best advertisement to display out of an enormous number of alternatives. Each particular ad $a\in \mathcal{A}$ is associated with a known binary feature vector $\phi(a) \in \{0, 1\}^d$, where individual features may indicate whether an ad pertains to a particular category of product, contains an image of a particular celebrity, includes bright colors, etc. When displayed, advertisement $a$ generates revenue governed by the linear mixed model
\[
R_a=\sum_{j=1}^{d} \phi(a)_j \theta_{0j} + \theta_{1a}
\]
where $(\theta_{01},\cdots, \theta_{0d})$ encodes the effect of each feature, while $\theta_{1a}$ encodes an ad-specific effect. Any given feature vector $\phi(a)$ for $a\in \mathcal{A}$ is shared by an infinite number of other ads, i.e. $|\{ a'\in \mathcal{A} \mid \phi(a') = \phi(a)\}| =\infty$. This serves to model settings in which there are many ads with common features relative to the problem's time horizon. The agent begins with a prior over $\theta=((\theta_{0j})_{j=1,\cdots d}, (\theta_{1a})_{a\in \mathcal{A}} )$ under which each variable is independent with marginal distributions $\theta_{0j} \sim N\left( 0, \sigma_0^2 \right)$ and $\theta_{1a} \sim {\rm Uniform}(0,1)$.
\end{example}
\vspace{\baselineskip}
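The generative model of Example 2 can be sketched in a few lines; this is only an illustration, with a finite pool of \texttt{K} ads standing in for the infinite action set and arbitrary values for \texttt{d} and \texttt{sigma0}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, K, sigma0 = 8, 100_000, 1.0

phi = rng.integers(0, 2, size=(K, d))       # known binary feature vectors phi(a)
theta0 = rng.normal(0.0, sigma0, size=d)    # shared feature effects theta_{0j}
theta1 = rng.uniform(size=K)                # ad-specific effects theta_{1a}

rewards = phi @ theta0 + theta1             # R_a for every ad a in the pool
\end{verbatim}
With only $2^d$ distinct feature vectors, many ads share identical features, which is the regime the example is meant to capture.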
Similar to the infinite armed bandit with independent arms, identifying an exactly optimal arm is hopeless, though it may be possible to quickly identify a satisficing arm. The search for a satisficing arm can be further accelerated because the linear mixed model enables some generalization across arms. In particular, data gathered from trying some arms inform estimates for other arms, helping to guide the search.
In order to avoid trying each individual arm, one might be tempted to approximate this model via a linear bandit, ignoring the ad specific effect $\theta_{1a}$ altogether. In particular, one could assume that rewards are generated according to $\sum_{j=1}^{d} \phi(a)_j \theta_{0j} + W_t$, where $W_t$ is iid zero-mean noise. But ignoring consistent ad-specific effects in this way can lead popular algorithms like Thompson sampling and linear upper-confidence bound approaches to converge on poorly performing arms. We will revisit this example in Section \ref{sec: Hierarchical}, where we demonstrate how our approach can efficiently identify satisficing actions while modeling ad-specific effects.
\subsection{Our Contributions.}
This paper develops a general framework for studying satisficing in sequential learning. Satisficing algorithms aim to learn a satisficing action $\tilde{A}$. Building on the work of \citet{russo2016info}, we will establish a general information-theoretic regret bound, showing that any algorithm's expected discounted regret relative to a satisficing action is bounded in terms of the mutual information $I(\theta; \tilde{A})$ between the model parameters $\theta$ and the satisficing action, together with a newly defined notion of information ratio, which measures the cost of acquiring information about the satisficing action. The mutual information $I(\theta; \tilde{A})$ can be thought of as the number of bits of information about $\theta$ required to identify $\tilde{A}$, and the fact that the bound depends on this quantity instead of the entropy of $A^*$, as does the bound of \citep{russo2016info}, allows it to capture the reduction of discounted regret made possible by settling for the satisficing action.
A natural and deep question concerns the choice of satisficing action $\tilde{A}$ and the limits of performance attainable via satisficing. An exploration of this question yields novel connections between sequential learning and rate-distortion theory. In Section \ref{sec: rate distortion}, we define a natural rate-distortion function for Bayesian decision making, which captures the minimal information about $\theta$ a decision-maker must acquire in order to reach an $\epsilon$-optimal decision. Combining this rate-distortion function with our general regret bound leads to new results and insights. As an example, while previous information-theoretic regret bounds for the linear bandit problem become vacuous in contexts with infinite action spaces, our rate-distortion function leads to a strong bound on expected discounted regret.
We will also study the infinite-armed bandit problem with noisy rewards. Here, we will consider a satisficing action $\tilde{A} = \min\{a: \theta_a \geq 1-\epsilon\}$. Simple numerical experiments demonstrate the benefits of STS over TS. We instantiate our general regret analysis in the infinite-armed bandit problem by bounding the mutual information and information ratio in that problem. This yields a bound on expected discounted regret that formalizes the benefits of STS over TS. We complement this upper bound by establishing a matching lower bound on the regret of any algorithm for the infinite-armed bandit.
\subsection{Alternative approaches.}
Many papers \citep{kleinberg2008multi, rusmevichientong2010linearly, bubeck2011xarmed} have studied bandit problems with continuous action spaces, where it is also necessary to learn only approximately optimal actions. However, because these papers focus on the asymptotic growth rate of regret, they implicitly emphasize later stages of learning, where the algorithm has already identified extremely high performing actions but exploration is needed to identify even better actions. Our discounted framework instead focuses on the initial cost of learning to attain good, but not perfect, performance. Recent papers \citep{francetich2016toolkita,francetich2016toolkitb} study several heuristics for a discounted objective, though without an orientation toward formal regret analysis. The knowledge gradient algorithm of \citet{ryzhov2012knowledge} also takes time horizon into account and can learn suboptimal actions when it is not worthwhile to identify the optimal action. This algorithm tries to directly approximate the optimal Bayesian policy using a one-step lookahead heuristic, but there are no performance guarantees for this method. \citet{deshpande2012linear} consider a linear bandit problem with dimension that is too large relative to the desired horizon. They propose an algorithm specifically for that problem that limits exploration and learns something useful within this short time frame.
\citet{berry1997bandit, wang2009algorithms} and \citet{bonald2013two} study an infinite-armed bandit problem in which it is impossible to identify an optimal action and propose algorithms to minimize the asymptotic growth rate of regret.
Their strategies are carefully designed, but appear to be difficult to adapt to more complex problems. For example, one algorithm in \citet{berry1997bandit} discards an arm as soon as it produces a reward of zero in some period, which is sensible only for infinite armed bandits with uniform prior and binary rewards. A brief section describes extensions to non-uniform priors, but still these cannot extend beyond binary feedback. \citet{bonald2013two} design a procedure specifically for the infinite-armed bandit with uniform prior and binary rewards under which the first-order term in an asymptotic expansion of regret is minimized. The algorithm of \citet{wang2009algorithms} discards all but $k$ arms at the start of the problem then applies a standard algorithm for $k$--armed bandits. The main contribution of that work is to carefully analyze regret as a function of the prior, allowing them to choose $k$ to minimize regret upper bounds. While we will instantiate our general regret bound for STS on the infinite-armed bandit problem, we view this example as a very simple and stylized special case that we provide to illustrate basic concepts. The flexibility of STS and our analysis framework allow this work to be applied to more complicated time-sensitive learning problems.
Our approach is built on Thompson sampling. One simple reason is that Thompson sampling is enjoying wide practical use due to its ease of use, ability to incorporate complex prior information, and resilience to delayed feedback \cite{russo2018tutorial, scott2010modern, chapelle2011empirical}. Given this, we see value in broadening the class of problems to which Thompson sampling approaches may be applied. It is also worth emphasizing that formulations of problems like the infinite-armed bandit in Example 1 are inherently Bayesian. Arms are modeled as independent and yet the decision-maker is required to make inferences about the quality of arms for which no data is available. By contrast, frequentist algorithms like KL-UCB \cite{KL-UCB2013} usually require an initial phase in which all arms are tested at least once. Beyond simple infinite-armed bandits, developing satisficing variants of Thompson sampling appears to be a natural way to approach more complex problems like the hierarchical bandit in Example 2 -- which require both satisficing and generalizing across arms. A separate motivation for us is to improve the information theoretic analysis of complex Bayesian bandit problems. This analysis is centered on Thompson sampling, but beyond any interest in the algorithm it has been an effective tool for establishing regret upper bounds, including for bandit convex optimization \cite{bubeck2015multi}, for partial monitoring \cite{lattimore19Info}, for bandits with graph-structured feedback \cite{tossou2017thompson}, and for reinforcement learning \cite{lu2019information}.
\section{Problem Formulation.}\label{sec: formulation}
An agent sequentially chooses actions $(A_t)_{t\in \mathbb{N}_0}$ from the action set $\mathcal{A}$ and observes the corresponding outcomes $\left(Y_{t}\right)_{t\in \mathbb{N}_0} \subset \mathcal{Y}$. The agent associates a reward $R(y)$ with each outcome $y\in \mathcal{Y}$. Let $R_{t} \equiv R(Y_{t})$ denote the reward corresponding to outcome $Y_{t}$.
The outcome $Y_t$ in period $t$ depends on the chosen action $A_t$, idiosyncratic randomness associated with that time step, and a random variable $\theta$ that is fixed over time. Formally, there is a known system function $g$, and an iid sequence of disturbances $(W_t)_{t\in \mathbb{N}_0}$ such that
\[
Y_t = g(A_t, \theta, W_t).
\]
The disturbances $W_t$ are independent of $\theta$, and have a known distribution. This is without loss of generality, as uncertainty about $g$ and the distribution of $W_t$ could be included in the definition of $\theta$. From this, we can define
\[
\mu(a, \theta)= \mathbb{E}[R\left(g(a, \theta, W_t)\right) | \theta]
\]
to be the expected reward of an action $a$ under parameter $\theta$. Ours can be thought of as a Bayesian formulation, in which the distribution of $\theta$ represents the agent's prior uncertainty about the true characteristics of the system, and conditioned on $\theta$, the remaining randomness in $Y_t$ represents idiosyncratic noise in observed outcomes.
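As a minimal illustration of this formulation, $\mu(a, \theta)$ can be approximated by Monte Carlo averaging over the disturbance; the particular $g$, $R$, and standard normal disturbance below are illustrative assumptions, not part of the formulation.
\begin{verbatim}
import numpy as np

def expected_reward(a, theta, g, R, rng, n=100_000):
    """Monte Carlo estimate of mu(a, theta) = E[ R(g(a, theta, W)) | theta ]."""
    w = rng.standard_normal(n)          # i.i.d. disturbances with a known law
    return float(np.mean(R(g(a, theta, w))))

# Example: linear observations Y = <a, theta> + W with the identity reward map.
rng = np.random.default_rng(0)
a, theta = np.array([1.0, 0.0]), np.array([0.3, -0.2])
print(expected_reward(a, theta, lambda a, th, w: a @ th + w, lambda y: y, rng))
# ~ 0.3
\end{verbatim}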
The history available when selecting action $A_t$ is $\mathcal{H}_{t-1} = (A_0, Y_{0}, \ldots, A_{t-1}, Y_{t-1})$. The agent selects actions according to a policy, which is a sequence of functions $\psi=(\psi_t)_{t \in \mathbb{N}_0}$, each mapping a history and an exogenous random variable $\xi$ to an action, with $A_t= \psi_{t}(\mathcal{H}_{t-1}, \xi)$ for each $t$. Throughout the paper, we use $\xi$ to denote some random variable that is independent of $\theta$ and the disturbances $(W_{t})_{t\in \mathbb{N}_0}$.
Let $R^* = \sup_{a\in \mathcal{A}} \mu(a, \theta)$ denote the supremal reward, and let $A^* \in \,\, \argmax_{a \in \mathcal{A}} \, \mu(a,\theta)$ denote the true optimal action, when this maximum exists. As a performance metric, we consider the \emph{expected discounted regret} of a policy $\psi$, defined by
\[
{\rm Regret} (\alpha, \psi) =\mathbb{E}^{\psi}\left[\sum_{t=0}^{\infty} \alpha^{t} (R^* - R_{t})\right],
\]
which measures a discounted sum of the expected performance gap between an omniscient policy which always chooses the optimal action $A^*$ and the policy $\psi$ which selects the actions $(A_t)_{t\in \mathbb{N}_0}$. This deviates from the typical notion of expected regret in its dependence on a discount factor $\alpha \in [0,1]$.
Regular expected regret corresponds to the case of $\alpha = 1$. Smaller values of $\alpha$ convey time preference by weighting gaps in nearer-term performance higher than gaps in longer-term performance.
The definition above compares regret relative to the optimal action $A^*$ and corresponding reward $R^*$. It is useful to also consider performance loss relative to a less stringent benchmark. We define the satisficing regret at level $D\geq 0$ to be
\[
{\rm SRegret} (\alpha, \psi, D) =\mathbb{E}^{\psi}\left[\sum_{t=0}^{\infty} \alpha^{t} (R^*-D - R_{t})\right].
\]
This measures regret relative to an action that is near-optimal, in the sense that it yields expected reward $R^*-D$, which is within $D$ of optimal. This notation was chosen due to the connection we develop with rate-distortion theory, where $D$ typically denotes a tolerable level of ``distortion'' in a lossy compression scheme. Of course, for all $D\geq 0$,
\begin{equation}\label{eq: sregret to regret}
{\rm Regret}(\alpha, \psi) = {\rm SRegret} (\alpha, \psi, D) + \frac{D}{1-\alpha}
\end{equation}
and so one can easily translate between bounds on regret and bounds on satisficing regret. However, directly studying satisficing regret helps focus our attention on the design of algorithms that purposefully avoid the search for exactly optimal behavior in order to limit exploration costs.
\subsection*{Additional Notation.}
Before beginning, let us first introduce some additional notation.
We denote the entropy of a random variable $X$ by $H(X)$, the Kullback-Leibler divergence between probability distributions $P$ and $Q$ by $D(P||Q)$, and the mutual information between two random variables $X$ and $Y$ by $I(X; Y)$. We will frequently be interested in the conditional mutual information $I(X; Y| \mathcal{H}_{t-1})$.
We sometimes denote by $\mathbb{E}_{t}[\cdot]=\mathbb{E}[\cdot | \mathcal{H}_{t-1} ]$ the expectation operator conditioned on the history up to time $t$ and similarly define $\mathbb{P}_{t}(\cdot) = \mathbb{P}(\cdot | \mathcal{H}_{t-1})$. The definitions of entropy and mutual information depend on a base measure. We use $H_{t}(\cdot)$ and $I_{t}(\cdot\, ,\cdot)$ to denote entropy and mutual-information when the base-measure is the posterior distribution $\mathbb{P}_{t}$. For example, if $X$ and $Z$ are discrete random variables taking values in the sets $\mathcal{X}$ and $\mathcal{Z}$,
\[
I_{t}(X; Z) = \sum_{x\in \mathcal{X}}\sum_{z\in \mathcal{Z}} \mathbb{P}_{t}(X=x, Z=z) \log\left(\frac{\mathbb{P}_{t}(X=x, Z=z)}{\mathbb{P}_{t}(X=x)\mathbb{P}_{t}(Z=z)}\right).
\]
Due to its dependence on the realized history $\mathcal{H}_{t-1}$, $I_{t}(X;Z)$ is a random variable. The standard definition of conditional mutual information integrates over this randomness, and in particular, $\mathbb{E}[I_{t}(X;Z)] = I(X; Z | \mathcal{H}_{t-1})$.
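For a concrete reading of these definitions, the following helper (ours, not part of the analysis) computes $I(X;Z)$ in nats from a joint probability table, exactly as in the display above; evaluating it under the posterior joint table $\mathbb{P}_{t}$ gives $I_{t}(X;Z)$.
\begin{verbatim}
import numpy as np

def mutual_information(p_xz):
    """I(X; Z) in nats from a joint probability table p_xz[x, z]."""
    p_xz = np.asarray(p_xz, dtype=float)
    p_x = p_xz.sum(axis=1, keepdims=True)      # marginal of X
    p_z = p_xz.sum(axis=0, keepdims=True)      # marginal of Z
    mask = p_xz > 0
    return float(np.sum(p_xz[mask] * np.log(p_xz[mask] / (p_x * p_z)[mask])))

print(mutual_information(np.full((2, 2), 0.25)))               # 0.0 (independent)
print(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])))  # log 2 (identical)
\end{verbatim}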
\section{Satisficing Actions.}\label{sec: satisficing}
We will consider learning a satisficing action $\tilde{A}$ instead of an optimal action $A^*$. The idea is to target a satisficing action that is near-optimal yet easy to learn. The information about $\theta$ required to learn an action $\tilde{A}$ is captured by $I(\theta; \tilde{A})$, while the performance loss is $\mathbb{E}[R^* - \tilde{R}]$, where $\tilde{R} = \mu(\tilde{A}, \theta)$. For $\tilde{A}$ to be easy to learn relative to $A^*$, we want $I(\theta; \tilde{A}) \ll I(\theta; A^*)$. We will motivate this abstract notion through several examples.
Our first example addresses the infinite-armed deterministic bandit, as discussed in Section \ref{sec:intro}.
\begin{example}[first satisfactory arm]
Consider the infinite-armed deterministic bandit of Section \ref{sec:intro}. For this problem, $A^*$ is uniformly distributed under the prior across a large number $K$ of actions, and $I(\theta; A^*) = H(A^*) = \log K$. Consider a satisficing action $\tilde{A} = \min\{ k \mid \theta_{k} \geq R^*-\epsilon\}$,
which represents the first action that attains reward within $\epsilon$ of the optimum $R^*=\max_{k} \theta_k$. As $K \to \infty$, $I(\theta; A^*) \to \infty$. But $I(\theta; \tilde{A})$ remains finite, as in this limit $\tilde{A}$ converges weakly to a geometric random variable, with $I(\theta; \tilde{A}) = H(\tilde{A}) = -((1-\epsilon) \log(1-\epsilon) + \epsilon \log(\epsilon))/\epsilon$.
\end{example}
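The closed form above is easy to check numerically; the sketch below (illustrative only) compares the entropy of a truncated geometric distribution against the stated expression.
\begin{verbatim}
import numpy as np

eps = 0.1
k = np.arange(1, 5000)
p = eps * (1 - eps) ** (k - 1)          # P(A_tilde = k) in the K -> infinity limit
h_numeric = -np.sum(p * np.log(p))
h_closed = -((1 - eps) * np.log(1 - eps) + eps * np.log(eps)) / eps
print(h_numeric, h_closed)              # both ~ 3.25 nats
\end{verbatim}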
The next example addresses the infinite-armed bandit with hierarchical structure from Section \ref{sec:intro}. Naturally, the satisficing action is itself defined in a hierarchical way, first defining a subset of arms with the most attractive features and then selecting the first arm among those with a satisfactory arm-specific effect.
\begin{example}[hierarchical satisficing]\label{ex: hiearchical satisficing action}
Consider the hierarchical infinite-armed deterministic bandit of Section \ref{sec:intro}. The set of arms is $\mathcal{A}=\{1, 2, 3, \ldots\}$. The reward generated by an arm $a\in \mathcal{A}$ can be written as $\mu(a, \theta) = \langle \phi(a) \, , \, \theta_0 \rangle + \theta_{1a}$, where $\phi(a) \in \{0,1 \}^d$ is a known feature vector associated with the arm, $\theta_0 \in \mathbb{R}^d$ is drawn from some prior and the arm-specific effects $\theta_{1a} \sim {\rm Unif}[0,1]$ are drawn independently across arms $a\in \mathcal{A}$. For each action $a$, we assume there is an infinite collection of other actions $\{a' \mid \phi(a') = \phi(a)\}$ that share the same feature vector. (One should have in mind problems where the features encode relatively coarse categories.) We take $\mathcal{A}(\theta_0) =\{a\in \mathcal{A} \mid \langle \phi(a), \theta_0 \rangle \geq \langle \phi(a')\,,\, \theta_0 \rangle \quad \forall a' \in \mathcal{A} \}$ to be the subset of actions that have optimal features. We take a satisficing action to be the first such action that has a sufficiently large ad-specific effect:
\begin{equation}\label{eq: hierarchical satisficing}
\tilde{A} = \min\{a \in \mathcal{A}(\theta_0) \mid \theta_{1a} \geq 1- \epsilon \}.
\end{equation}
In this case, although there are an infinite number of arms, the entropy of the satisficing action is bounded as:
\begin{align*}
I(\tilde{A}; \theta) = I(\tilde{A}; \theta_0) + I(\tilde{A}; \theta_1 \mid \theta_0) \leq d\log(2)-((1-\epsilon) \log(1-\epsilon) + \epsilon \log(\epsilon))/\epsilon.
\end{align*}
where we have used that $\theta_0$ is supported on $2^d$ possible values and that, conditioned on $\theta_0$, $\tilde{A}$ follows a geometric distribution with survival probability $1-\epsilon$.
\end{example}
The next example involves reducing the granularity of a discretized action space.
\begin{example}[discretization]
Consider a linear bandit with $\mathcal{A} \subset \mathbb{R}^p$ and $\mu(a, \theta) = a^\top \theta$ for an unknown vector $\theta$. Suppose that $\theta \sim N(0,I)$ and $\mathcal{A}$ consists of $K$ vectors spread out uniformly along the boundary of the unit sphere $\{a \in \mathbb{R}^p : \| a\|_2 =1\}$. The optimal action $A^* = \argmax_{a\in \mathcal{A}} a^\top \theta$ is then uniformly distributed over $\mathcal{A}$, and therefore $I(\theta; A^*) = H(A^*) = \log K$. As $K \to \infty$, it takes an enormous amount of information to exactly identify $A^*$. The results of \cite{russo2014learning} become vacuous in this limit. Consider a satisficing action $\tilde{A}$ that represents a coarser version of $A^*$. In particular, for $M \ll K$, let $\tilde{\mathcal{A}}$ consist of $M$ vectors spread out uniformly along the boundary of the same unit sphere, with $M$ chosen such that for each element of $\mathcal{A}$ there is a close approximation in $\tilde{\mathcal{A}}$.
Let $\tilde{A} = \argmax_{a\in \tilde{\mathcal{A}}} \theta^\top a$. This can be viewed as a form of lossy-compression, for which $H(\tilde{A}) \ll H(A^*)$ while $\mathbb{E}[R^* - \tilde{R}]$ remains small.
\end{example}
In the previous examples, $\theta$ determined $\tilde{A}$, and therefore $I(\theta; \tilde{A}) = H(\tilde{A})$. We now consider an example in which $I(\theta; \tilde{A})$ is controlled by randomly perturbing the satisficing action. Here, $I(\theta; \tilde{A})$ can be small even though $H(\tilde{A})$ is large.
\begin{example}[random perturbation]
Consider again the linear bandit from the previous example. An alternative satisficing action $\tilde{A}$ results from optimizing a perturbed objective $\tilde{A} \in \argmax_{a \in \mathcal{A}} a^\top (\theta + Z)$ where $Z\sim N(0, (\epsilon/p)^2 I)$. Since $Z$ is not observable, it is not possible in this case to literally learn $\tilde{A}$. Instead, we consider learning to behave in a manner indistinguishable from $\tilde{A}$. The variance of $Z$ is chosen such that $\mathbb{E}[R^* - \tilde{R}] \leq \epsilon$. Moreover, it can be shown that $I(\tilde{A}; \theta)\leq I(\theta+Z; \theta) = \frac{p}{2}\log(1+p^2/\epsilon^2)$ and therefore, the information required about $\theta$ is bounded independently of the number of actions.
\end{example}
\section{A General Regret Bound.}
This section provides a general discounted regret bound and a new information-theoretic analysis technique. The first subsection introduces an alternative to the information ratio of \citet{russo2016info}, which is more appropriate for time-sensitive online learning problems. The following subsection establishes a general discounted regret bound in terms of this information ratio.
\subsection{A New Information Ratio.}
First, we make a simplification to the information ratio $(\mathbb{E}_{t}[R^* - R_{t}])^2 / I_{t}(A^* ; (A_t, Y_{t}))$ defined by \citet{russo2016info}. That expression depends on the history $\mathcal{H}_{t-1}$ and hence is a random variable. In this paper, we observe that this can be avoided, and instead take as a starting point a simplified form of the information ratio that integrates out all randomness. In particular, we study
\begin{equation}\label{eq: old-info-ratio in expectation}
\frac{(\mathbb{E}[R^* - R_{t}])^2}{I( A^* ; (A_t, Y_{t}) \mid \mathcal{H}_{t-1})}.
\end{equation}
Uniform bounds on the information ratio of the type established in past work \cite{bubeck2015multi,russo2016info, liu2017information} imply corresponding bounds on \eqref{eq: old-info-ratio in expectation}. Precisely, if $(\mathbb{E}_{t}[R^* - R_{t}])^2 / I_{t}(A^* ; (A_t, Y_{t}))$ is bounded by $\lambda \in \mathbb{R}$ almost surely (i.e., for any history $\mathcal{H}_{t-1}$), then \eqref{eq: old-info-ratio in expectation} is bounded by $\lambda$ since
\[
(\mathbb{E}[R^* - R_{t}])^2 \leq \mathbb{E}[(\mathbb{E}_{t}[R^* - R_{t}])^2] \leq \mathbb{E}[\lambda I_{t}(A^* ; (A_t, Y_{t}))] = \lambda I(A^*; (A_t, Y_{t}) | \mathcal{H}_{t-1}).
\]
A more important change comes from measuring information about a benchmark action $\tilde{A}$, which could be defined as in the examples in the previous section, rather than with respect to the optimal action $A^*$. For a benchmark action $\tilde{A}$ we consider the single period information ratio
\[
\frac{(\mathbb{E}[\tilde{R} - R_{t}])^2}{I( \tilde{A} ; (A_t, Y_{t}) \mid \mathcal{H}_{t-1})}
\]
where $\tilde{R} = \mu(\tilde{A}, \theta)$. This ratio relates the current shortfall in performance relative to the benchmark action $\tilde{A}$ to the amount of information acquired about the benchmark action. We study the discounted average of these single period information ratios, defined for any policy $\psi$ as
\begin{equation}\label{eq: information-ratio}
\Gamma\left(\tilde{A}, \psi \right) = (1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \left(\frac{(\mathbb{E}[\tilde{R} - R_{t}])^2}{I(\tilde{A}; (A_t, Y_{t}) | \mathcal{H}_{t-1} )}\right),
\end{equation}
where the actions $(A_t)_{t\in \mathbb{N}_0}$ are chosen under $\psi$. The square in the discount factor $\alpha$ is consistent with the problem's original discount rate, since $(\mathbb{E}[\alpha^t(\tilde{R} - R_{t})])^2 = \alpha^{2t} (\mathbb{E}[\tilde{R} - R_{t}])^2$.
\subsection{General Regret Bound.}
The following theorem bounds the expected discounted regret of any algorithm, or policy, $\psi$ in terms of the information ratio \eqref{eq: information-ratio}.
\begin{thm
\label{th:discounted-regret}
For any policy $\psi$, any $D\geq 0$ and any $\tilde{A}=f(\theta, \xi)$ where $\xi$ is independent of the disturbances $(W_t)_{t\in \mathbb{N}_0}$, if $\mathbb{E}[\mu(\tilde{A}, \theta)] \geq R^* -D$, then
\begin{equation*}
{\rm SRegret}(\alpha, \psi, D) \leq \sqrt{\frac{\Gamma\left(\tilde{A}, \psi \right) I(\tilde{A}; \theta)}{1-\alpha^2}}.
\end{equation*}
\end{thm}
\begin{proof}
We first show that the mutual information between $\tilde{A}$ and $\theta$ bounds the expected accumulation of mutual-information between $\tilde{A}$ and observations $(A_t, Y_{t})_{t\in \mathbb{N}_0}$. By the chain rule for mutual information, for any $T$,
\begin{eqnarray*}
\sum_{t=0}^{T} I(\tilde{A}; (A_t, Y_{t}) \mid \mathcal{H}_{t-1})
&=& \sum_{t=0}^{T} I(\tilde{A}; (A_t, Y_{t}) \mid A_0, Y_{0}, \ldots, A_{t-1}, Y_{t-1}) \\
&=& I(\tilde{A}; \mathcal{H}_T)\\
&\leq& I(\tilde{A}; (\theta,\mathcal{H}_{T})) \\
&=& I(\tilde{A};\theta) + I(\tilde{A} ; \mathcal{H}_{T} | \theta) \\
&=& I(\tilde{A};\theta)
\end{eqnarray*}
where the final equality uses that, conditioned on $\theta$, $\tilde{A}$ is independent of $\mathcal{H}_T$. Taking the limit as $T \rightarrow \infty$ implies
\[
\sum_{t=0}^{\infty} I(\tilde{A}; (A_t, Y_{t}) \mid \mathcal{H}_{t-1}) \leq I(\tilde{A};\theta),
\]
where the infinite series is assured to converge by the non-negativity of mutual information. Now, let
\[
\Gamma_{t} \equiv \frac{(\mathbb{E}[\tilde{R} - R_{t}])^2}{I(\tilde{A}; (A_t,Y_{t}) | \mathcal{H}_{t-1})}
\]
denote the information ratio at time $t$ under the benchmark action $\tilde{A}$ and actions $\{A_t : t \in \mathbb{N}_0\}$ chosen according to $\psi$. Then
\begin{eqnarray*}
{\rm SRegret}(\alpha, \psi, D) = \mathbb{E}\left[\sum_{t=0}^\infty \alpha^t (R^*-D - R_{t})\right]
&\leq& \sum_{t=0}^\infty \alpha^t \mathbb{E}\left[\tilde{R} - R_{t}\right] \\
&=& \sum_{t=0}^\infty \sqrt{\alpha^{2t} \Gamma_{t}} \sqrt{I(\tilde{A}; (A_t,Y_{t}) | \mathcal{H}_{t-1})}\\
&\leq& \sqrt{\sum_{t=0}^\infty \alpha^{2t} \Gamma_{t}} \sqrt{\sum_{t=0}^\infty I(\tilde{A}; (A_t,Y_{t}) | \mathcal{H}_{t-1})} \\
&\leq& \sqrt{\left[\sum_{t=0}^\infty \alpha^{2t} \Gamma_{t}\right]}\sqrt{I(\tilde{A} ; \theta )} \\
&=& \sqrt{\frac{\Gamma\left(\tilde{A}, \psi \right) I(\tilde{A}; \theta)}{1-\alpha^2}},
\end{eqnarray*}
where the first inequality uses that $\mathbb{E}[\tilde{R}] \geq R^*-D$, the second follows from the Cauchy-Schwarz inequality, and the third was established earlier in this proof.
\end{proof}
An immediate consequence of this bound on satisficing regret is the discounted regret bound
\begin{equation}\label{eq: discounted-regret}
{\rm Regret}(\alpha, \psi) \leq \frac{\mathbb{E}[R^*- \tilde{R}] }{1-\alpha}+\sqrt{\frac{\Gamma\left(\tilde{A}, \psi \right) I(\tilde{A}; \theta)}{1-\alpha^2}}.
\end{equation}
This bound decomposes regret into the sum of two terms; one which captures the discounted performance shortfall of the benchmark action $\tilde{A}$ relative to $A^*$, and one which bounds the additional regret incurred while learning to identify $\tilde{A}$. Breaking things down further, the mutual information $I(\theta; \tilde{A})$ measures how much information the decision-maker must acquire in order to implement the action $\tilde{A}$, and the information ratio measures the regret incurred in gathering this information. It is worth highlighting that for any given action process, this bound holds simultaneously for all possible choices of $\tilde{A}$, and in particular, it holds for the $\tilde{A}$ minimizing the right hand side of \eqref{eq: discounted-regret}.
\section{Connections With Rate Distortion Theory.}\label{sec: rate distortion}
This section considers the optimal choice of satisfactory action $\tilde{A}$ and develops connections with the theory of rate-distortion in information theory. We construct a natural rate-distortion function for Bayesian decision making in the next subsection. Subsection \ref{subsec: uniformly bounded info ratios} then develops a general regret bound that depends on this rate-distortion function.
\subsection{A Rate Distortion Function for Bayesian Decision Making.}\label{subsec: a rate distortion function}
In information theory, the entropy of a source characterizes the length of an optimal lossless encoding. The celebrated rate-distortion theory \citep[Chapter~10]{cover2012elements} characterizes the number of bits required for an encoding that is close to the source under some loss metric. This theory resolves when it is possible to derive a satisfactory lossy compression scheme while transmitting far less information than is required for lossless compression. The rate-distortion function for a random variable $X$ with domain $\mathcal{X}$, with respect to a loss function $\ell: \hat{\mathcal{X}} \times \mathcal{X} \to \mathbb{R}$, is
\begin{eqnarray} \label{eq: general rate distortion definition}
\mathcal{R}(D) =& \min & I(\hat{X}; X) \\ \nonumber
&s.t.& \mathbb{E}[\ell(\hat{X},X)] \leq D
\end{eqnarray}
where the minimum is taken over the choice of random variable $\hat{X}$ with domain $\hat{\mathcal{X}}$, and $I(X; \hat{X})$ denotes the mutual information between $X$ and $\hat{X}$. One can view this optimization problem as specifying a conditional distribution $P(\hat{X} \in \cdot | X)$ that minimizes the information $\hat{X}$ uses about $X$ among all choices incurring average loss less than $D$.
We will explore a powerful link with sequential Bayesian decision-making, where the rate-distortion function characterizes the minimal amount of new information the decision-maker must gather in order to make a satisfactory decision. Typically \eqref{eq: general rate distortion definition} is applied in the context of representing $X$ as closely as possible by $\hat{X}$, and the loss function is taken to be something like the squared distance or total variation distance between the two. For our purposes, we replace $X$ with $\theta$, and $\hat{X}$ with a benchmark action $\tilde{A}$. The interpretation is that $\tilde{A}=f(\theta; \xi)$ is a function of the unknown parameter $\theta$ and some exogenous randomness $\xi$ that offers a similar reward to playing $A^*$ but hopefully can be identified using much less information about $\theta$. We specify a loss function $\ell : \mathcal{A} \times \Theta \to \mathbb{R}$ measuring the single period regret from playing $a$ under $\theta$:
\[
\ell(a, \theta )= \max_{a' \in \mathcal{A}} \mu(a', \theta) - \mu(a, \theta).
\]
As a result
\[
\mathbb{E}[\ell(\tilde{A}, \theta )]= \mathbb{E}[R^*-\mu(\tilde{A}, \theta)].
\]
We come to the rate-distortion function
\begin{eqnarray}\label{eq: rate distortion function for decision making}
\mathcal{R}(D) := & \min & I(\tilde{A}; \theta) \\\nonumber
&s.t.& \mathbb{E}[R^*-\mu(\tilde{A}, \theta)] \leq D.
\end{eqnarray}
As before, the minimum above is taken over the choice of random variable $\tilde{A}$ taking values in $\mathcal{A}$. That is, the minimum is taken over all conditional probability distributions $\mathbb{P}(\tilde{A}\in \cdot | \theta)$ specifying a distribution over actions as a function of $\theta$. Since the choice $\tilde{A}=A^*$ is always feasible, for all $D>0$
\[
\mathcal{R}(D)\leq I(A^*; \theta)= H(A^*)
\]
where $H(A^*)$ denotes the entropy of the optimal action. Rate distortion is never larger than entropy, but it may be small even if the entropy of $A^*$ is infinite.
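For small finite problems, the rate-distortion function \eqref{eq: rate distortion function for decision making} can be traced numerically with the classical Blahut--Arimoto iteration, taking the single-period regret as the distortion measure. The sketch below is illustrative only; the function name, the discretization of $\theta$ and $\mathcal{A}$, and the use of a Lagrange multiplier \texttt{beta} to sweep out the curve are our own implementation choices.
\begin{verbatim}
import numpy as np

def rate_distortion_point(p_theta, loss, beta, n_iter=500):
    """One point on the rate-distortion curve via Blahut-Arimoto.

    p_theta : prior over finitely many parameter values, shape (n,)
    loss    : loss[i, a] = single-period regret of action a under parameter i
    beta    : Lagrange multiplier trading rate against distortion
    Returns (rate in nats, expected distortion).
    """
    q_a = np.full(loss.shape[1], 1.0 / loss.shape[1])   # marginal over actions
    for _ in range(n_iter):
        w = q_a * np.exp(-beta * loss)                  # unnormalised P(A | theta)
        q_a_given_theta = w / w.sum(axis=1, keepdims=True)
        q_a = p_theta @ q_a_given_theta                 # update the action marginal
    rate = np.sum(p_theta[:, None] * q_a_given_theta *
                  np.log(q_a_given_theta / q_a))
    distortion = np.sum(p_theta[:, None] * q_a_given_theta * loss)
    return rate, distortion
\end{verbatim}
Sweeping \texttt{beta} from small to large values traces the curve from the zero-rate point toward $\mathcal{R}(0)$.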
The following, somewhat artificial, example explicitly links communication with decision-making and may help clarify the role of the rate-distortion function $\mathcal{R}(D)$.
\begin{example}
A military command center waits to hear from an outpost before issuing orders. The outpost, stationed close to the conflict, determines its message based on a wealth of nuanced information -- at the level of readouts from weather sensors and full transcripts of intercepted enemy communication. The command post could make very complicated decisions as a function of the detailed information it receives, with the possibility of specifying commands at the level of individual troops and equipment. How much must decision quality degrade if decisions are based only on coarser information? At an intuitive level, the outpost only needs to communicate surprising revelations that are important to reaching a satisfactory decision. As a result, our answer can depend in a complicated way on the extent to which the outpost's observations are predictable a priori and the extent to which decision quality is reliant on this information. The rate-distortion function precisely quantifies these effects.
To map this problem onto our formulation of the rate-distortion function, take $\theta$ to consist of all information observed by the outpost, $\tilde{A}$ to be the order issued by the command center, and the rewards to indicate whether the orders led to a successful outcome. The mutual information $I(\tilde{A}; \theta)$ captures the average amount of information the outpost must send in order for $\tilde{A}$ to be implemented. The goal is to develop a plan for placing orders that requires minimal communication from the outpost among all plans that degrade the chance of success by no more than $D$.
\end{example}
\subsection{Uniformly Bounded Information Ratios.}\label{subsec: uniformly bounded info ratios}
The general regret bound in Theorem \ref{th:discounted-regret} has a superficial relationship to the rate-distortion function $\mathcal{R}(D)$ through its dependence on the mutual information $I(\tilde{A}; \theta)$ between the benchmark action and the true parameter $\theta$. Indeed, for a benchmark action $\tilde{A}$ attaining the rate-distortion limit, $I(\tilde{A}; \theta)=\mathcal{R}(D)$ and we attain a regret bound that depends explicitly on the rate-distortion level. However, the information ratio $\Gamma\left(\tilde{A}, \psi \right)$ also depends on the choice of benchmark action, and may be infinite for a poor choice.
This second dependence on $\tilde{A}$ does not appear in rate-distortion theory, and reflects a fundamental distinction between communication problems and sequential learning problems. Indeed, a key feature enabling the sharp results of rate-distortion theory is that no bit of information is more costly to send and receive than others; the question is to what extent useful communication is possible while sending many fewer bits of information on average. By contrast, sequential learning agents must explore to uncover information and the cost per unit of information uncovered may vary widely depending on which pieces of information are sought. This is accounted for by the information ratio $\Gamma\left(\tilde{A}, \psi \right)$, which roughly captures the expected cost, in terms of regret incurred, per bit of information acquired about the benchmark action.
Despite this, regret bounds in terms of rate-distortion apply in many important cases. The next theorem, which is an immediate consequence of Theorem \ref{th:discounted-regret}, provides a general bound of this form. Roughly, the uniform information ratio $\Gamma_{U}$ in the theorem reflects something about the quality of the feedback the agent receives when exploring; it means that for \emph{any} choice of benchmark action $\tilde{A}$ there is a sequential learning strategy that learns about $\tilde{A}$ with cost per bit of information less than $\Gamma_{U}$. The next section applies this result to online linear optimization, where several uniform information ratio bounds are possible depending on the problem's precise feedback structure.
\begin{thm}
Suppose there is a uniform bound on the information ratio
\[
\Gamma_{U}:=\sup_{\tilde{A}} \inf_{\psi} \Gamma\left(\tilde{A}, \psi \right) <\infty.
\]
Then, for any $D\geq 0$ there exists a policy $\psi$ under which
\[
{\rm SRegret}(\alpha, \psi, D) \leq \sqrt{\frac{\Gamma_U \mathcal{R}(D)}{1-\alpha^2}}.
\]
\end{thm}
\section{Application to Online Linear Optimization.}\label{sec: linear optimization}
Consider a special case of our formulation: the problem of learning to solve a linear optimization problem. Precisely, suppose expected rewards follow the linear model $\mathbb{E}[R_{t}|\theta, A_t]= \theta^\top A_t$ where $A_t\in \mathcal{A} \subset \mathbb{R}^p$, $\theta \in \mathbb{R}^p$, and $R_{t}\in \left[-\frac{1}{2},\frac{1}{2}\right]$ almost surely. We will consider several natural forms of feedback $Y_{t}$ that the decision-maker may receive.
In each case, uniform bounds on the information ratio hold for satisficing Thompson sampling. More precisely, for any $\tilde{A}$ let $\psi^{\rm STS}_{\tilde{A}}$ denote the strategy that randomly samples an action at each time $t$ by probability matching with respect to $\tilde{A}$, i.e. $\mathbb{P}(A_t \in \cdot | \mathcal{H}_{t-1}) = \mathbb{P}(\tilde{A} \in \cdot | \mathcal{H}_{t-1})$. Applying the same proofs as in \cite{russo2016info} yields bounds of the form $\Gamma(\tilde{A}; \psi^{\rm STS}_{\tilde{A}}) \leq \lambda$, where $\lambda$
depends on the problem's feedback structure but not the choice of benchmark action. Now, let us choose $\tilde{A}$ to attain the rate distortion limit \eqref{eq: rate distortion function for decision making}, so $I(\tilde{A}; \theta)=\mathcal{R}(D)$ and $\mathbb{E}[R^*-\mu(\tilde{A}; \theta) ] \leq D$. We denote by $\psi_D^{\rm STS}$ satisficing Thompson sampling with respect to this choice of satisfactory action.
\noindent{\bf Full Information.} Suppose $R_{t} =A_t^\top Z_t$ for a random vector $Z_t$ with $\mathbb{E}[Z_t | \theta, \mathcal{H}_{t-1}] = \theta$. This is an extreme point of our formulation, where all information is revealed without active exploration. For all $\tilde{A}$, the information ratio is bounded as $\Gamma(\tilde{A}; \psi_{\tilde{A}}^{\rm STS})\leq 1/2$ and hence
\[
{\rm SRegret}(\alpha, \psi_D^{\rm STS}, D) \leq \sqrt{\frac{ \mathcal{R}(D)}{2(1-\alpha^2)}}.
\]
\noindent{\bf Bandit Feedback.} Suppose the agent only observes the reward of the action she chooses ($Y_{t}=R_{t}$). This is the so-called linear bandit problem. For all $\tilde{A}$, the information ratio is bounded as $\Gamma(\tilde{A}; \psi_{\tilde{A}}^{\rm STS})\leq p/2$. This gives the regret bound
\[
{\rm SRegret}(\alpha, \psi_D^{\rm STS}, D)
\leq \sqrt{\frac{ \mathcal{R}(D)p}{2(1-\alpha^2)}}.
\]
\noindent {\bf Semi-Bandit Feedback.} Assume again that $R_{t} =A_t^\top Z_t$. Take the action set $\mathcal{A} \subset \{0,1\}^p$ to consist of binary vectors where $\sum_{i=1}^{p} a_i \leq m$ for all $a\in \mathcal{A}$. Upon playing action $A_t=a$, the agent observes $Z_{t,i}$ for every component $i\in \{1,\ldots,p\}$ that was active in $a$, i.e., those with $a_i =1$. We make the additional assumption that the components of $Z_t$ are independent conditioned on $\mathcal{H}_{t-1}$. Then, for all $\tilde{A}$, the information ratio is bounded as $\Gamma(\tilde{A}; \psi_{\tilde{A}}^{\rm STS})\leq p/2m$ and hence
\[
{\rm SRegret}(\alpha, \psi^{\rm STS}_{D}, D)
\leq \sqrt{\frac{ \mathcal{R}(D)(p/m)}{2(1-\alpha^2)}}.
\]
By following the appendix of \cite{russo2016info}, each of these results can be extended gracefully to settings where noise distributions are \emph{sub-Gaussian}. For example, suppose $\theta$ follows a multivariate Gaussian distribution, and the reward at time $t$ is $R_t = \theta^\top A_t + W_t$ where $W_t$ is a zero mean Gaussian random variable. Then, if the variance of rewards $\mathbb{E}[(\theta^\top a + W-\mathbb{E}[\theta]^\top a)^2]\leq \sigma^2$
is bounded by some $\sigma^2$ for all $a$, the previous bounds on the information ratio scale by a factor of $\sigma$.
It is worth noting that these results immediately reduce to bounds on (non-satisficing) regret when the action space is finite. As mentioned in Subsection \ref{subsec: a rate distortion function}, for problems with a finite action set $\mathcal{R}(D) \leq H(A^*)\leq \log | \mathcal{A}|$ for all $D\geq 0$. For example, with a linear bandit with finite action set, \eqref{eq: sregret to regret} gives the bound
\[
{\rm Regret}( \alpha, \psi^{\rm STS}_{D}) \leq \sqrt{\frac{ H(A^*) p}{2(1-\alpha^2)}} + \frac{D}{1-\alpha},
\]
so STS with a small satisficing level attains desirable regret bounds when the action set is not too large. A special case of this problem is the classical $k$ armed bandit problem, in which case $p=k$. We can reach similar conclusions in other cases by arguing $\mathcal{R}(D)$ grows slowly as $D\to 0$. We illustrate this idea for a Gaussian linear bandit below.
The next theorem considers an explicit choice of satisfactory action $\tilde{A}$. This yields a computationally efficient version of STS as well as explicit upper bounds on the rate-distortion function. As above, consider the case where $\theta\sim N(\mu, \Sigma)$ follows a multivariate Gaussian prior and reward noise is Gaussian. We study the optimizer $\tilde{A}=\argmax_{a \in \mathcal{A}} \langle a, \theta+\xi \rangle$ of a randomly perturbed objective. The small perturbation controls the mutual information between $\tilde{A}$ and $\theta$ without substantially degrading decision quality.
It is easy to implement probability matching with respect to $\tilde{A}$ whenever linear optimization problems over $\tilde{A}$ are efficiently solvable. In particular, if $\mu_t = \mathbb{E}[\theta \mid\mathcal{H}_{t-1}]$ and $\Sigma_t = \mathbb{E}[(\theta-\mu_t)(\theta - \mu_t)^\top \mid \mathcal{H}_{t-1}]$ denote the posterior mean and covariance matrix, which are efficiently computable using Kalman filtering, then by sampling $\hat{\theta}_t \sim N(\mu_t, \Sigma_t)$ and $\hat{\xi}_t\sim N(0, \Sigma_\xi)$ and setting $A_t = \argmax_{a\in \mathcal{A}} \langle a, \hat{\theta}_t + \hat{\xi}_t \rangle$ one has $
\mathbb{P}(A_t \in \cdot \mid \mathcal{H}_{t-1}) = \mathbb{P}(\tilde{A} \in \cdot \mid \mathcal{H}_{t-1})$.
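A minimal sketch of this procedure for a finite grid of actions is given below; the helper names are ours, and for general action sets the argmax would be delegated to a linear optimization oracle.
\begin{verbatim}
import numpy as np

def sts_action(mu_t, Sigma_t, Sigma_xi, actions, rng):
    """Probability matching against A_tilde = argmax_a <a, theta + xi>:
    sample theta_hat from the posterior, xi_hat ~ N(0, Sigma_xi), and
    maximise the perturbed objective over the action set."""
    theta_hat = rng.multivariate_normal(mu_t, Sigma_t)
    xi_hat = rng.multivariate_normal(np.zeros(len(mu_t)), Sigma_xi)
    return int(np.argmax(actions @ (theta_hat + xi_hat)))

def posterior_update(mu, Sigma, a, r, noise_var):
    """Conjugate Gaussian update after observing r = <a, theta> + noise."""
    S_inv = np.linalg.inv(Sigma)
    Sigma_new = np.linalg.inv(S_inv + np.outer(a, a) / noise_var)
    mu_new = Sigma_new @ (S_inv @ mu + a * r / noise_var)
    return mu_new, Sigma_new
\end{verbatim}
Setting \texttt{Sigma\_xi} to $\beta^2\Sigma$ with $\beta = D/\sqrt{{\rm Trace}(Q\Sigma)}$ matches the choice analysed in the next theorem.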
The result in the next theorem assumes the action set is contained within an ellipsoid $\{ x\in \mathbb{R}^{p} : x^\top Q^{-1} x \leq 1 \}$ and the resulting bound displays a logarithmic dependence on the eigenvalues of $Q$. Precisely, note that the trace of the matrix $Q$, or sum of its eigenvalues, provides one natural measure of the size of the ellipsoid. Our result also depends on the covariance matrix $\Sigma$ through ${\rm Trace}(Q\Sigma)$. To understand this, consider applying similarity transforms to the parameter and action vectors so that $\theta' = \Sigma^{-1/2} \theta$ is isotropic and the set of action vectors is $\mathcal{A}'=\{\Sigma^{1/2}a : a \in \mathcal{A}\}$. This transformed action space is contained in the ellipsoid $\{x : x^{\top} Q'^{-1} x \leq 1 \}$, where $Q' = \Sigma^{1/2} Q \Sigma^{1/2}$. Then ${\rm Trace}(Q')={\rm Trace}(Q\Sigma)$ provides a measure of the size of this ellipsoid.
\begin{thm
\label{th:perturbed objective}
Suppose $\mathcal{A}$ is a compact subset of the ellipsoid $\{ x\in \mathbb{R}^{p} : x^\top Q^{-1} x \leq 1 \}$ for some real symmetric positive definite matrix $Q$ and suppose $\theta\sim N(\mu,\Sigma)$ follows a $p$--dimensional multivariate Gaussian distribution. Set
\[
\tilde{A}= \argmax_{a \in \mathcal{A}} \langle a \,,\, \theta+\xi \rangle
\]
where $\xi$ is independent of $\theta$ and $\xi\sim N(0,\beta^2 \Sigma)$. For $\beta= D/\sqrt{{\rm Trace}(Q\Sigma)}$,
\[
\mathbb{E}\left[\langle\theta \,,\, \tilde{A} \rangle\right] \geq \mathbb{E} \left[\langle \theta \,,\, A^* \rangle \right] -D
\]
and
\[
\mathcal{R}(D)\leq I(\tilde{A}; \theta) \leq \frac{p}{2} \log\left(1+ \frac{{\rm Trace}(Q\Sigma)}{D^2}\right).
\]
\end{thm}
\begin{proof}
By Jensen's inequality
\[
\mathbb{E}[ \langle\tilde{A} \,,\, \theta+\xi\rangle] = \mathbb{E} \left[\max_{a \in \mathcal{A}} \langle a \,,\, \theta+\xi\rangle \right] \geq \mathbb{E} \left[\max_{a \in \mathcal{A}} \langle a \,,\, \theta \rangle \right] = \mathbb{E} \left[ \langle A^* \,,\, \theta\rangle \right].
\]
This implies
\[
\mathbb{E} \left[ \langle A^* \,,\, \theta\rangle \right] - \mathbb{E} \left[ \langle \tilde{A} \,,\, \theta\rangle \right] \leq \mathbb{E}[ \langle\tilde{A} \,,\, \xi\rangle] \leq \mathbb{E} \left[\max_{a \in \mathcal{A}} \langle a \,,\, \xi \rangle \right] \leq \mathbb{E} \left[ \max_{x \,: \|x\|_{Q^{-1}} \leq 1 } \langle x \,,\, \xi \rangle \right] = \mathbb{E}\left[\| \xi\|_{Q} \right]
\]
where the final equality uses the explicit formula for the maximum of a linear function over an ellipsoid. Now,
\[
\mathbb{E}\left[\| \xi\|_{Q} \right] \leq \sqrt{\mathbb{E}[\xi^\top Q \xi] } = \sqrt{\mathbb{E}[{\rm Trace}(\xi^\top Q \xi)] }=\sqrt{{\rm Trace}( Q \mathbb{E}[\xi \xi^\top])} = \sqrt{\beta^2 {\rm Trace}( Q \Sigma)}= D.
\]
Next we derive the bound on mutual information. We have
\begin{eqnarray*}
I(\tilde{A}; \theta) \leq I(\theta+\xi; \theta) &=& H(\theta+\xi)- H(\theta+\xi | \theta)\\
&=& H(\theta+\xi)-H(\xi) \\
&=& \frac{1}{2}\log\left( \frac{\det(\Sigma+\beta^2\Sigma)}{\det(\beta^2\Sigma)} \right)\\
&=&\frac{1}{2}\log\left( \frac{\det((1+\beta^2)\Sigma)}{\det(\beta^2\Sigma)} \right)\\
&=& \frac{p}{2} \log\left(1+\frac{1}{\beta^2} \right)\\
&=&\frac{p}{2} \log\left(1+\frac{{\rm Trace}( Q \Sigma)}{D^2} \right)
\end{eqnarray*}
where $\det(\cdot)$ denotes the determinant of a matrix. Here the first inequality uses the data processing inequality,
the third equality uses the explicit form of the entropy of a multivariate Gaussian ($H(\theta)=\frac{1}{2}\log\left((2\pi e)^p \det(\Sigma) \right)$) and the penultimate equality uses that $\det(c\Sigma)=c^p \det(\Sigma)$ for any scalar $c$.
\end{proof}
\section{Application to the Infinite-Armed bandit.}\label{sec: satisficingTS}
This section considers a generalization of the deterministic infinite-armed bandit problem in the introduction that allows for noisy observations and non-uniform priors. The action space is $\mathcal{A} = \{1,2,\ldots\}$. We assume $R_t \in [0,1]$ almost surely and $Y_t=R_t$, meaning the agent only observes rewards. The mean reward of action $a$ is $\mu(a, \theta) = \theta_a\in [0,1]$, where the $\theta_a$ are independent under the prior. Let $R^*$ denote the supremal value in the support of $\theta_a$, so $\sup_{a \in \mathcal{A}} \theta_a =R^*$ almost surely.
\subsection{STS for the Infinite-Armed Bandit Problem.}\label{subsec: sts for infinite arms}
We consider the simple satisficing action defined in the introduction: $\tilde{A} = \min\{a \in \mathcal{A}: \theta_a \geq R^* - D\}$. Rather than continue to explore until identifying the optimal action $A^*$, we will settle for the \emph{first}\footnote{ Of course, there is nothing crucial about this ordering on actions. We can equivalently construct a randomized order in which actions are sampled; for each realization of the random variable $\xi$, let $\pi_{\xi} : \mathbb{N}_0 \to \mathbb{N}_0$ be a permutation, and take $\tilde{A} = \argmin_{a : \theta_a \geq R^*-D} \pi_{\xi}(a)$.} action known to yield reward within $D$ of optimal.
We study satisficing Thompson sampling where actions are selected by probability matching with respect to $\tilde{A}$. Note that an algorithm for this problem must decide whether to sample a previously tested action -- and if so which one to sample -- or whether to try out an entirely new action. Let $\mathcal{A}_t = \{A_0,\ldots, A_{t-1}\}$ denote the set of previously sampled actions. STS may sample an untested action $A_t \notin \mathcal{A}_t$, and does so with probability
\[
\mathbb{P}(A_t \notin \mathcal{A}_{t} | \mathcal{H}_{t-1} ) = \mathbb{P}(\tilde{A}\notin \mathcal{A}_{t} | \mathcal{H}_{t-1}),
\]
equal to the posterior probability that no satisfactory action has yet been sampled.
\[
\mathbb{P}(A_t = a | \mathcal{H}_{t-1}) = \mathbb{P}(\tilde{A} =a | \mathcal{H}_{t-1}) \qquad \forall a \in \mathcal{A}_t.
\]
There is a simple algorithmic implementation of STS that mirrors computationally efficient implementations of Thompson sampling (TS). At time $t$, TS selects a random action $A_t$ via probability matching with respect to $A^*$. Algorithmically, this is usually accomplished by first sampling $\hat{\theta}_t \sim \mathbb{P}(\theta \in \cdot | \mathcal{H}_{t-1})$ and solving for $A_t \in \argmax_{a \in \mathcal{A}} \mu(a, \hat{\theta}_t)$. Similarly, we can implement STS by sampling and approximately optimizing a posterior sample. Over each $t$th period, STS selects an action $A_t$ as follows: \\
\begin{enumerate}
\item For each $a\in \mathcal{A}_t$, sample $\hat{\theta}_a \sim \mathbb{P}(\theta_a \in \cdot| \mathcal{H}_{t-1})$
\item Let $\hat{\tau} = \min\left\{\tau \in \{0,\ldots,t-1\} : \hat{\theta}_{A_\tau} \geq R^*-D \right\}$
\item If such a $\hat{\tau}$ exists, set $A_t=A_{\hat{\tau}}$. Otherwise, play an untested action $A_t\notin \mathcal{A}_t$.
\end{enumerate}
\vspace{\baselineskip}
Note that $D \geq 0$ is supplied to the algorithm as a tolerance parameter. When $D = 0$, STS is equivalent to TS. Otherwise,
STS gives preference to previously selected actions, which can yield substantial benefit in the face of time preference.
This definition can be generalized to treat problems with a large, but finite, number of independent arms. Define the satisficing action $\tilde{A} = \min\{ a \in \mathcal{A} \mid \theta_a \geq \sup_{a' \in \mathcal{A}} \mu(a', \theta) -D \}$. For the infinite-armed bandit, $\sup_{a' \in \mathcal{A}} \mu(a', \theta) =R^*$ with probability 1, but for problems with a finite number of arms $\max_{a' \in \mathcal{A}} \mu(a', \theta)$ is not known a priori. One can efficiently sample from this satisficing action by modifying step 2 above with the alternative definition $\hat{\tau} = \min\left\{\tau \in \{0,\ldots,t-1\} : \mu(A_\tau, \hat{\theta}) \geq \sup_{a \in \mathcal{A}} \mu(a, \hat{\theta}) -D \right\}$, where $\hat{\theta}$ denotes a posterior sample of the full parameter vector.
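A minimal implementation of steps 1--3 for the Bernoulli-reward, uniform-prior case (the setting of the experiments below) is sketched here; the Beta posteriors and the convention of opening new arms one at a time are implementation choices, not requirements of the algorithm.
\begin{verbatim}
import numpy as np

def sts_infinite_bernoulli(T, D, rng, r_star=1.0):
    """Satisficing TS for an infinite-armed Bernoulli bandit with Uniform(0,1)
    priors, so a tested arm's posterior is Beta(1 + successes, 1 + failures)."""
    means, alpha, beta, chosen = [], [], [], []
    for _ in range(T):
        # Step 1: posterior sample for every previously tested arm.
        samples = [rng.beta(a, b) for a, b in zip(alpha, beta)]
        # Step 2: first tested arm whose sample clears the satisficing level.
        tau = next((i for i, s in enumerate(samples) if s >= r_star - D), None)
        # Step 3: play it; otherwise open a fresh arm drawn from the prior.
        if tau is None:
            means.append(rng.uniform())
            alpha.append(1.0)
            beta.append(1.0)
            tau = len(means) - 1
        reward = float(rng.uniform() < means[tau])   # Bernoulli reward
        alpha[tau] += reward
        beta[tau] += 1.0 - reward
        chosen.append(tau)
    return chosen, means
\end{verbatim}
With $D=0.05$ and Bernoulli rewards, this roughly corresponds to the noisy experiment reported in the next subsection (up to the finite pool of 250 arms used there).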
\subsection{Computational Comparison of STS and TS.}
\label{se:computations}
We close with a simple computational illustration of the potential benefits afforded by STS. We consider two many-armed bandit problems, and demonstrate that per-period regret of STS diminishes much more rapidly than that of TS over early time periods.
We consider problems with 250 actions, where the mean reward $\theta_a$ associated with each action $a \in \{1,\ldots,250\}$ is independently sampled uniformly from $[0,1]$. We first consider the many-armed deterministic bandit, for which there is no observation noise. Figure \ref{fig:computational-results}(a) presents per-period regret of TS and STS over 500 time periods, averaged over 5000 simulations, each with an independently sampled problem instance. STS is applied with tolerance parameter 0.05. We next consider incorporating observation noise. In particular, instead of observing $\theta_a$, after selecting an action $a$, we observe a binary reward that is one with probability $\theta_a$. Figure \ref{fig:computational-results}(b) displays the results of this experiment.
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=2.1in]{./TSvsSTS-uniform-nonoise.png}
\label{fig:pic1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=2.1in]{./TSvsSTS-uniform-bernoulli.png}
\label{fig:pic2}
\end{subfigure}
\caption{TS versus STS for the (a) many-armed deterministic bandit and (b) many-armed bandit with observation noise.}
\label{fig:computational-results}
\end{figure}
\subsection{Information Ratio Analysis of the Infinite-Armed Bandit.}\label{subsec: infinite noisy}
The following theorem provides a discounted regret bound for STS in the infinite-armed bandit.
The result follows from bounding the problem's information ratio and the mutual information $I(\tilde{A}; \theta)$ and applying the general regret bound of Theorem \ref{th:discounted-regret}. This requires substantial additional analysis, the details of which are deferred to Subsection \ref{subsec: proof of of infinite armed bandit bound}.
\begin{thm
\label{thm:noisy regret bound} Consider the infinite-armed bandit with noisy observations, and let $\tilde{A} = \min\{a \in \mathcal{A}: \theta_a \geq R^*-D\}$. Denote the STS policy with respect to $\tilde{A}$ by $\psi_D^{\rm STS}$. Then,
\[
I( \tilde{A} ; \theta) \leq 1+ \log(1/\delta) \quad \mathrm{and} \quad \Gamma\left(\tilde{A}, \psi_D^{\rm STS} \right) \leq 6+ 4/\delta + (2/\delta)\log\left( \frac{1}{1-\alpha^2} \right)
\]
where $\delta=\mathbb{P}(\theta_a \geq R^*- D)$. Together with Theorem \ref{th:discounted-regret} this implies
\begin{align}\nonumber
{\rm SRegret}(\alpha, \psi_D^{\rm STS}, D)
&\leq \min\left\{ \sqrt{ \frac{\left(6+ 4/\delta + (2/\delta)\log\left( \frac{1}{1-\alpha^2} \right) \right)(1+\log(1/\delta)) }{1-\alpha^2}} , \, \frac{1}{1-\alpha} \right\} \\ \label{eq: noisy regret bound simplified}
&= \tilde{O}\left( \min\left\{ \sqrt{\frac{ 1/\delta }{1-\alpha}} \, , \, \frac{1}{1-\alpha} \right\} \right).
\end{align}
\end{thm}
The final expression \eqref{eq: noisy regret bound simplified} uses that
$1-\alpha \leq 1-\alpha^2 \leq 2(1-\alpha)$. The upper bound of $1/(1-\alpha)$ is naive and applies to all algorithms. The bound of order $\sqrt{\frac{ 1/\delta }{1-\alpha}}$ requires an intelligent balance of exploration and exploitation; it is much stronger when the prior probability $\delta$ that an arm is a satisficing arm is not too small.
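As a rough numerical illustration, with $\delta = 0.05$ and $\alpha = 0.9999$ the first expression inside the minimum above evaluates to approximately $2.9 \times 10^{3}$, compared with the naive bound $1/(1-\alpha) = 10^{4}$.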
\subsection{A Lower Bound for the Infinite-Armed Bandit}
This section establishes a lower bound on regret that matches the scaling of the upper bound in Theorem \ref{thm:noisy regret bound}. In the construction below, $\epsilon$ is the prior probability that an arm is exactly optimal. For our purposes, the interesting regime is where $\epsilon \ll 1-\alpha$ but $\delta \gg 1-\alpha$, in which case identifying an optimal arm is hopeless but it is worthwhile to search for a satisficing arm. In that regime, Theorem \ref{thm:noisy regret bound} shows a bound on satisficing regret of $\tilde{O}\left( \sqrt{\frac{1/\delta}{1-\alpha} } \right)$ and Theorem \ref{thm: lower bound} shows this is unimprovable in general. The specific threshold on $\epsilon$ in the theorem statement was chosen for analytical convenience and could likely be tightened.
\begin{thm}\label{thm: lower bound}
Fix any $\alpha\in (0,1)$, $\delta \in (0, 1/2)$ and $D \in (0,1/4)$. Consider an instance of the infinite armed bandit problem in which, for all $a\in \mathcal{A}$,
\begin{align}\label{eq: lower bound construction}
\mathbb{P}\left(\theta_a = \frac{1}{2}-\Delta \right) = 1-\delta &&
\mathbb{P}\left(\theta_a = \frac{1}{2} \right) = \delta - \epsilon &&
\mathbb{P}\left(\theta_a = \frac{1}{2}+D \right) = \epsilon,
\end{align}
for $\Delta=\min\left\{\frac{1}{4} \, , \frac{1}{4\sqrt{2} } \cdot \sqrt{\frac{1-\alpha }{\delta}} \right\}$ and $\epsilon \leq \frac{1}{36} \cdot \min\left\{ (1-\alpha)^2 \, , \, \frac{(1-\alpha)^3}{2\delta} \right\}$.
Suppose $R_t \in \{0,1\}$ with $\mathbb{P}(R_t = 1 \mid \theta, A_t, \mathcal{H}_{t-1}) = \theta_{A_t}$. Then,
\[
\inf_{\psi} \,{\rm SRegret}(\alpha, \psi, D) \geq \frac{1}{32} \cdot \min\left\{ \frac{1}{1-\alpha} \, , \, \sqrt{\frac{1/2\delta}{1-\alpha}} \right\}
\]
where the infimum is over all adaptive policies.
\end{thm}
\subsection{Open Question: Gap Dependent Analysis of STS.}
It should be noted that Theorem \ref{thm: lower bound} is a worst-case construction. It shows that, for a given discount factor $\alpha$, satisficing level $D$ and fraction of satisficing arms $\delta$, there exists a hard instance of an infinite armed bandit problem in which the scaling of \eqref{eq: noisy regret bound simplified} is unavoidable. Stronger performance guarantees are likely possible for more benign problems, however. Here, we highlight one open problem in this direction.
As shown in \cite{berry1997bandit} and \cite{wang2009algorithms}, when the agent begins with a uniform prior over each $\theta_a$, it is possible to attain undiscounted regret that scales as $\mathbb{E}\left[ \sum_{t=1}^{T} (R^*-R_t) \right] = O\left( \sqrt{T} \right)$. While the worst-case construction in Theorem \ref{thm: lower bound} shows that Theorem \ref{thm:noisy regret bound} is tight without additional assumptions, it seems to yield an overly conservative regret bound of $O\left(T^{2/3} \right)$ for this specific problem. To understand this, note that if the agent begins with a uniform prior over each $\theta_a$, we find $\delta=D$. Choosing $D\approx (1-\alpha)^{1/3}$ to minimize the regret upper bound from combining Theorem \ref{thm:noisy regret bound} with Equation \eqref{eq: discounted-regret}, we find ${\rm Regret}(\alpha, \psi^{D}) \leq \tilde{O}\left( \frac{1}{(1-\alpha)^{2/3}} \right),$ where $\psi^{D}$ denotes satisficing Thompson sampling applied with parameter $D$. Since $\frac{1}{1-\alpha}$ is the effective time horizon in the problem, this roughly corresponds to a regret bound of $\tilde{O}(T^{2/3})$.
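To make the choice of $D$ explicit: since $R^*-R_t = (R^*-R_t-D)+D$, we have ${\rm Regret}(\alpha, \psi^{D}) = {\rm SRegret}(\alpha, \psi^{D}, D) + D/(1-\alpha)$, and with $\delta=D$ the bound \eqref{eq: noisy regret bound simplified} balances the two terms when
\[
\sqrt{\frac{1/D}{1-\alpha}} \asymp \frac{D}{1-\alpha}
\quad \Longleftrightarrow \quad D^{3} \asymp 1-\alpha
\quad \Longrightarrow \quad D \asymp (1-\alpha)^{1/3},
\]
which gives ${\rm Regret}(\alpha, \psi^{D}) = \tilde{O}\left( D/(1-\alpha) \right) = \tilde{O}\left( (1-\alpha)^{-2/3} \right)$.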
Simulations suggest that such a bound is conservative, and STS actually attains the optimal $\Theta(\sqrt{T})$ regret scaling in this problem. To test this, we applied STS over a range of horizons $T\in \{ e^{5}, e^{5.5},\cdots, e^{10.5}\}$. For each choice of horizon, we used the satisficing parameter $D=3/\sqrt{T}$ and ran 500 independent trials. The numerical constant $3$ was selected in a somewhat ad-hoc manner and may be further optimized by tuning it in simulation. Figure \ref{fig:regret scaling} seems to suggest
$\mathbb{E}\left[ \sum_{t=1}^{T} (R^*-R_t) \right] = \Theta\left( \sqrt{T} \right)$, since the logarithm of regret grows linearly in $\log(T)$ with slope close to $1/2$. An analogous experiment in the discounted setting suggests ${\rm Regret}(\alpha, \psi_D) = \Theta\left( \frac{1}{\sqrt{1-\alpha}} \right)$ as $\alpha \to 1$.
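The following is a hedged sketch (our reconstruction, not the authors' code) of this scaling experiment; we assume Bernoulli observation noise, and use shorter horizons and fewer trials than in Figure \ref{fig:regret scaling} to keep the run time small.
\begin{verbatim}
import numpy as np

def sts_infinite(T, D, rng):
    theta, succ, fail = [], [], []        # arms are instantiated lazily
    regret = 0.0
    for t in range(T):
        arm = None
        if theta:
            sample = rng.beta(np.array(succ) + 1, np.array(fail) + 1)
            ok = np.flatnonzero(sample >= 1.0 - D)  # R* = 1 for the uniform prior
            if ok.size:
                arm = int(ok[0])          # arms are stored in order of first play
        if arm is None:                   # otherwise play a brand-new arm
            theta.append(rng.uniform())
            succ.append(0)
            fail.append(0)
            arm = len(theta) - 1
        r = rng.binomial(1, theta[arm])
        succ[arm] += r
        fail[arm] += 1 - r
        regret += 1.0 - theta[arm]
    return regret

rng = np.random.default_rng(0)
horizons = [int(np.exp(x)) for x in np.arange(5.0, 9.1, 0.5)]   # shortened range
avg = [np.mean([sts_infinite(T, 3 / np.sqrt(T), rng) for _ in range(50)])
       for T in horizons]
slope = np.polyfit(np.log(horizons), np.log(avg), 1)[0]
print("slope of log(regret) against log(T):", round(float(slope), 3))
\end{verbatim}
A fitted slope close to $1/2$ would be consistent with the conjectured $\Theta(\sqrt{T})$ scaling.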
To understand the two different scalings of regret, it may be helpful to draw an analogy to the standard $k$-armed bandit problem. When there is a fixed gap of $\Delta>0$ between the best and second best arm, it is possible to provide regret bounds of $O( \log(T) /\Delta )$ which scale very slowly with the time horizon but degrade as $\Delta$ shrinks. In worst-case instances, $\Delta \approx 1/\sqrt{T}$ is small relative to the horizon and regret bounds of $O(\sqrt{T})$, which are completely independent of the gap, are the best possible. Our analysis in Theorem \ref{th:discounted-regret} was effectively gap-independent, as it did not make assumptions about the separation between satisficing actions and non-satisficing actions. By imposing the additional assumption that each $\theta_a$ is drawn from a uniform prior, we ensure that most arms are easily distinguished from the satisficing arms, especially as the satisficing threshold $D$ tends to zero, which is what makes the $O(\sqrt{T})$ bound possible. One may be able to leverage finite-time gap-dependent analyses of Thompson sampling \cite{agrawal2013further} to show an $O(\sqrt{T})$ regret bound for STS under a restricted class of priors, but we leave this as an open question that is beyond the scope of this work.
\begin{figure}[h!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=2.1in]{infinite_arm_uniform.png}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[height=2.1in]{infinite_arm_uniform_undiscounted.png}
\end{subfigure}
\caption{Scaling of regret in an infinite armed bandit with uniform prior for (a) a discounted infinite horizon and (b) an undiscounted finite horizon problem.}
\label{fig:regret scaling}
\end{figure}
\section{Application of STS to the Hierarchical Infinite-Armed bandit.}\label{sec: Hierarchical}
This section offers a preliminary study of satisficing in the hierarchical infinite armed bandit as described in Example 2 of Section \ref{sec:intro}. Algorithms that succeed on this example must simultaneously leverage prior beliefs, generalize across arms, and satisfice.
Example \ref{ex: hiearchical satisficing action} of Section \ref{sec: satisficing} describes a satisficing action $\tilde{A}$ for this problem. Precisely, according to Equation \eqref{eq: hierarchical satisficing}, $\tilde{A} = \min\{a \in \mathcal{A}(\theta_0) \mid \theta_a \geq 1-D \}$ where $\mathcal{A}(\theta_0) = \argmax_{a' \in \mathcal{A}} \langle \phi(a') \, , \, \theta_0 \rangle$. Satisficing Thompson sampling performs probability matching with respect to $\tilde{A}$, setting $\mathbb{P}(A_t = a \mid \mathcal{H}_{t-1} )= \mathbb{P}(\tilde{A}=a \mid \mathcal{H}_{t-1})$ for each $a\in \mathcal{A}$.
Let us describe how to implement probability matching in a manner that mirrors the description of STS for the infinite armed bandit. Let $\mathcal{A}_t = \{ A_0, \ldots A_{t-1} \}$ denote the set of previously sampled actions and $\mathcal{A}_t(\theta_0) = \mathcal{A}_{t} \cap \mathcal{A}(\theta_0)$ denote previously sampled actions whose feature vectors are optimal under parameter $\theta_0\in \mathbb{R}^d$. Let $\theta_{1\mathcal{A}_t(\theta_0)} =\{\theta_{1a} : a \in \mathcal{A}_{t}(\theta_0) \}$ denote the corresponding components of $\theta_{1}$. Over each $t$th period, STS selects an action $A_t$ as follows:
\begin{enumerate}
\item Sample $\hat{\theta}_0 \sim \mathbb{P}(\theta_0 \in \cdot \mid \mathcal{H}_{t-1} )$
\item Sample $\hat{\theta}_{1\mathcal{A}_t(\hat{\theta}_0)} \sim \mathbb{P}\left( \theta_{1\mathcal{A}_t(\theta_0)} \in \cdot \mid \theta_0=\hat{\theta}_0, \mathcal{H}_{t-1} \right)$
\item Let $\hat{\tau} = \min\left\{\tau \in \{1,\ldots,t-1\} : \hat{\theta}_{1 A_{\tau}} \geq 1-D \right\}$
\item If $\hat{\tau}$ is not null, set $A_t=A_{\hat{\tau}}$. Otherwise, play an untested action $A_t \in \mathcal{A}(\hat{\theta}_0)\setminus \mathcal{A}_t$.
\end{enumerate}
Steps 2-4 correspond to satisficing Thompson sampling for the infinite armed bandit as described in Section \ref{sec: satisficingTS}, but applied conditioned on $\theta_0=\hat{\theta}_0$. Using the ideas described in Subsection \ref{subsec: sts for infinite arms}, a simple modification of this applies to problems with a large but finite number of arms and a prior that is not necessarily uniform. We run STS with the satisficing action
\begin{equation}\label{eq: STS finite hierarchical}
\tilde{A} = \argmin_{ a' \in \mathcal{A}(\theta_0)} \left\{ a' \mid \theta_{1a'} \geq \max_{a \in \mathcal{A}(\theta_0)} \theta_{1a} -D \right\}
\end{equation}
As in Subsection \ref{subsec: sts for infinite arms}, we can sample from this action by applying the steps above with a modified definition of $\hat{\tau}$.
We run a simple numerical experiment to demonstrate the importance of satisficing and generalization to this problem. To simplify the implementation, we focus on the case where there is a normally distributed prior over $\theta_0$ and each $\theta_{1a}$, allowing the posterior distribution to be expressed in closed form. The experiment treats a problem with a large but finite number of arms, highlighting that satisficing is just as important in such settings.
Results are displayed in Figure \ref{fig:hierarchical}. The simulation focuses on performance over the first $T=50$ periods. The dimension of the linear model is $d=2$. There is no observation noise, so the reward at time $t$ is $\langle \phi(A_t), \theta_0 \rangle + \theta_{1 A_t}$. There are $k =400$ actions, each of which has a feature vector $\phi(a) \in \{0,1\}^2$ that is drawn randomly. Parameters are drawn randomly under a prior with $\theta_0 \sim N(0, d^{-1/2} I_d)$ and $\theta_{1,a} \sim N(0, 1)$ independently across arms $a$ and from $\theta_0$. We run STS with the satisficing action in \eqref{eq: STS finite hierarchical}.
We compare the performance of STS against Thompson sampling (TS) and an incorrect version of STS that does not model the linear component of the model, instead treating the problem as an infinite armed bandit with independent arms and a normally distributed prior. We choose $D=1$ for both variants of STS. Figure \ref{fig:hierarchical} shows the average reward earned by each algorithm in each of the first 50 periods. Results are averaged across 2000 trials\footnote{Standard errors are smaller than .01 for each algorithm and time period, so confidence intervals are omitted.}. We see that STS earns much higher rewards in early periods, earning an average reward across the 50 time periods of 1.21, compared with .79 for incorrect STS and .67 for standard Thompson sampling. In this case Thompson sampling generalizes across arms but does not satisfice, preventing it from settling on arms with a large arm-specific effect. The incorrect variant of STS satisfices, but does not generalize across arms. STS can simultaneously generalize and satisfice, leading to a large improvement in performance.
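To illustrate how the probability-matching steps above specialize to this Gaussian setting, the following sketch (ours, not the authors' implementation) performs a single STS decision. With no observation noise, conditioning on a sampled $\hat{\theta}_0$ determines $\theta_{1a}$ exactly for every previously tested arm, and the posterior over $\theta_0$ is a standard Bayesian linear regression in which the unknown arm effects play the role of unit-variance noise; all names are illustrative.
\begin{verbatim}
import numpy as np

def hierarchical_sts_select(phi, tested, rewards, D, rng):
    # phi: (k, d) feature matrix; tested: distinct arms in order of first play;
    # rewards: the (noiseless) reward observed for each arm in `tested`.
    k, d = phi.shape
    X = phi[list(tested)]
    prec = np.sqrt(d) * np.eye(d) + X.T @ X          # prior precision sqrt(d) I plus data
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ np.asarray(rewards, dtype=float)
    theta0_hat = rng.multivariate_normal(mean, cov)  # step 1: sample theta_0

    scores = phi @ theta0_hat
    opt = np.flatnonzero(np.isclose(scores, scores.max()))   # A(theta0_hat)
    theta1_hat = rng.normal(0.0, 1.0, k)             # prior draws for untested arms
    for i, a in enumerate(tested):
        theta1_hat[a] = rewards[i] - scores[a]       # exact, given theta0_hat
    threshold = theta1_hat[opt].max() - D
    opt_set = set(int(a) for a in opt)
    for a in tested:                                 # earliest tested, optimal, satisficing arm
        if a in opt_set and theta1_hat[a] >= threshold:
            return a
    untested_opt = [int(a) for a in opt if a not in set(tested)]
    return int(rng.choice(untested_opt)) if untested_opt else int(opt[0])
\end{verbatim}
Only the first observation of each arm is informative here: repeated noiseless plays of the same arm return the same value.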
\begin{figure}
\centering
\includegraphics[height=2.1in]{hierarchical_bandit_per_period_reward.png}
\caption{Average reward earned in early periods of the hierarchical bandit problem.}
\label{fig:hierarchical}
\end{figure}
\section{Information Theoretic Proofs of the Infinite Armed Bandit Results}
\subsection{Proof of the upper bound: Theorem \ref{thm:noisy regret bound}.}
\label{subsec: proof of of infinite armed bandit bound}
Our proof will use the following fact, which is a consequence of Pinsker's inequality and is stated as Fact 9 in \cite{russo2016info}. \\
\begin{fact}\label{fact: DME to DKL} For any distributions $P$ and $Q$ such that $P$ is absolutely continuous with respect to $Q$, any random variable $X: \Omega \rightarrow \mathcal{X}$ and any $g:\mathcal{X}\rightarrow \mathbb{R}$ such that $\sup g - \inf g \leq 1$,
$$\mathbb{E}_{P} \left[ g(X) \right] - \mathbb{E}_{Q} \left[ g(X) \right] \leq \sqrt{\frac{1}{2} D \left( P || Q \right) },$$
where $\mathbb{E}_{P}$ and $\mathbb{E}_{Q}$ denote the expectation operators under $P$ and $Q$.
\end{fact}
\vspace{\baselineskip}
We begin by showing the mutual information bound stated as part of Theorem \ref{thm:noisy regret bound}.
\begin{lemma}[Mutual Information Bound]
\label{lem: infinite armed bandit mutual info bound}
Let $\delta=\mathbb{P}(\theta_a \geq 1-D)$. Then
\[
I(\tilde{A}; \theta) \leq 1 + \log\left( 1/ \delta \right).
\]
\end{lemma}
\begin{proof}
Since $\tilde{A} = \min\{a \in \mathcal{A}: \theta_a \geq 1-D\}$ is a deterministic function of $\theta$, we have
$I(\tilde{A}; \theta)=H(\tilde{A}) = H(N)$ where $N\sim {\rm Geom}(\delta)$ is a geometric random variable. This implies
\begin{eqnarray*}
I(\tilde{A} ; \theta)
&=& H\left( N \right) \\
&=& -\sum_{k=1}^{\infty} \delta (1-\delta)^{k-1} \log(\delta (1-\delta)^{k-1}) \\
&=& -\sum_{k=1}^{\infty}\delta (1-\delta)^{k-1}\log(\delta) - \sum_{k=1}^{\infty}\delta(1-\delta)^{k-1}\log((1-\delta)^{k-1})\\
&=& \sum_{k=1}^{\infty}\mathbb{P}(N=k)\log(1/\delta) - \log(1-\delta) \sum_{k=1}^{\infty} \delta (1-\delta)^{k-1}(k-1) \\
&=& \log(1/\delta) + \log\left( \frac{1}{1-\delta} \right)(\mathbb{E}[N]-1)\\
&=& \log(1 /\delta) + \log\left(1 + \frac{\delta}{1-\delta} \right)\left( \frac{1-\delta}{\delta} \right) \\
&\leq & \log(1/\delta)+ \left( \frac{\delta}{1-\delta} \right)\left( \frac{1-\delta}{\delta} \right)\\
&=& 1+\log(1/\delta).
\end{eqnarray*}
\end{proof}
Throughout the remainder of the proof we use the notation
\[
\Gamma_{t} = \frac{(\mathbb{E}_{t}[\theta_{\tilde{A}} - \theta_{A_t}])^2}{I_{t}(\tilde{A} ; (A_t ,Y_{t}) )}.
\]
This is the one-step information ratio in period $t$, computed under the posterior measure $\mathbb{P}(\cdot \mid \mathcal{H}_{t-1})$. The next lemma shows that the cumulative information ratio can be bounded by the expected discounted average of these one-step information ratios.
\begin{lemma}[Relating the information ratio to the one-step-information-ratio]
\label{lem: one step to total information ratio}
\[
\Gamma\left(\tilde{A}, \psi_D^{\rm STS} \right) \leq (1-\alpha^2)\sum_{t=0}^{\infty} \alpha^{2t} \mathbb{E}[ \Gamma_{t}]
\]
\end{lemma}
\begin{proof}
We have
\begin{eqnarray*}
\mathbb{E}[\tilde{R} - R_t] = \mathbb{E}\left[\theta_{\tilde{A}} - \theta_{A_t}\right] &=& \mathbb{E}\left[ \mathbb{E}_{t}\left[\theta_{\tilde{A}} - \theta_{A_t}\right] \right] \\
&=& \mathbb{E}\left[ \sqrt{\Gamma_{t}} \sqrt{I_{t}\left(\tilde{A}; (A_t, Y_{t})\right)} \right] \\
&\leq& \sqrt{ \mathbb{E}[\Gamma_t] \, \mathbb{E}\left[ I_{t}\left(\tilde{A}; (A_t, Y_{t})\right) \right]} \\
&=& \sqrt{ \mathbb{E}[\Gamma_t] \, I\left(\tilde{A}; (A_t, Y_{t}) \mid \mathcal{H}_{t-1} \right) }.
\end{eqnarray*}
Then, by the definition of the information ratio
\[
\Gamma\left(\tilde{A}, \psi \right) = (1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \left(\frac{(\mathbb{E}[\tilde{R} - R_{t}])^2}{I(\tilde{A}; (A_t, Y_{t}) | \mathcal{H}_{t-1} )}\right) \leq (1-\alpha^2)\sum_{t=0}^{\infty} \alpha^{2t} \mathbb{E}[ \Gamma_{t}].
\]
\end{proof}
The next lemma provides a bound on the one-step information ratio.
\begin{lemma}
\[
\Gamma_{t} \leq 2 |\mathcal{A}_{t}| +2/\delta
\]
where $\mathcal{A}_{t} = \cup_{s=1}^{t-1}\{ A_s \}$ is the set of actions that were sampled before period $t$, and $\delta \equiv \mathbb{P}(\theta_{i} \geq R^*-D)$ is the prior probability an arm is $D$--optimal.
\end{lemma}
\begin{proof}
Define
\[
L \equiv \mathbb{E}[\theta_{i} | \theta_{i} \geq R^*-D] -\mathbb{E}[\theta_{i}]
\]
and
\[
\delta \equiv \mathbb{P}(\theta_{i} \geq R^*-D).
\]
Here $\delta$ is the probability an unsampled arm is $D$--optimal, and $L$ is the difference between the expected reward of a $D$--optimal arm and that of an arm sampled uniformly at random. In the case where $\theta_i \sim {\rm Unif}(0,1)$, $\delta=D$ and $L= (1-D)/2$.
We can write expected regret as
\begin{eqnarray*}
\mathbb{E}_{t}[\theta_{\tilde{A}} - \theta_{A_t}] &=& \sum_{a \in \mathcal{A}}\mathbb{P}_{t}(\tilde{A}=a)\mathbb{E}_{t}[\theta_{a}|\tilde{A}=a] - \sum_{a \in \mathcal{A}}\mathbb{P}_{t}(A_t=a)\mathbb{E}_{t}[\theta_{a}] \\
&=&\sum_{a \in \mathcal{A}_{t}}\mathbb{P}_{t}(\tilde{A}=a)\left(\mathbb{E}_{t}[\theta_{a}|\tilde{A}=a]-\mathbb{E}_{t}[\theta_{a}]\right)\\
&&+ \sum_{a \notin \mathcal{A}_{t}}\mathbb{P}_{t}(\tilde{A}=a)\mathbb{E}_{t}[\theta_a | \tilde{A}=a]- \sum_{a \notin \mathcal{A}_t}\mathbb{P}_{t}(A_t=a)\mathbb{E}_{t}[\theta_{a}]\\
&=& \sum_{a \in \mathcal{A}_{t}}\mathbb{P}_{t}(\tilde{A}=a)\left(\mathbb{E}_{t}[\theta_{a}|\tilde{A}=a]-\mathbb{E}_{t}[\theta_{a}]\right)
+ \mathbb{P}_{t}(\tilde{A} \notin \mathcal{A}_{t})(\mathbb{E}[\theta_a | \theta_a \geq 1-D]- \mathbb{E}[\theta_a]) \\
&=& \underbrace{\sum_{a \in \mathcal{A}_{t}}\mathbb{P}_{t}(\tilde{A}=a) \left(\mathbb{E}_{t}[\theta_{a}|\tilde{A}=a] - \mathbb{E}_{t}[\theta_{a}]\right)}_{\Delta_{t,1}} + \underbrace{\mathbb{P}_{t}(\tilde{A} \notin \mathcal{A}_{t})L}_{\Delta_{t,2}}.
\end{eqnarray*}
This decomposes regret into the sum of two terms: one which captures the regret due to suboptimal selection within the set of previously sampled actions $\mathcal{A}_{t}$, and one due to the remaining possibility that none of the sampled actions are $D$--optimal. The proof develops a similar decomposition for mutual information, and then lower bounds both terms.
Let $Y_{t,a}=g(a, \theta, W_t)$ denote the outcome that would have been realized from a choice of action $a$ at time $t$. We can express mutual information as follows:
\begin{eqnarray*}
I_t( \tilde{A}; (A_t, Y_{t}) ) &=&\sum_{a\in \mathcal{A}} \mathbb{P}_{t}(A_t=a) I_t( \tilde{A}; Y_{t} | A_t =a) \\
&=&\sum_{a\in \mathcal{A}_t} \mathbb{P}_{t}(\tilde{A}=a) I_{t}(\tilde{A} ; Y_{t,a}) + \sum_{a\notin \mathcal{A}_t} \mathbb{P}_{t}(\tilde{A}=a) I_{t}(\tilde{A} ; Y_{t,a} | A_t =a)
\end{eqnarray*}
Let us focus on the second sum, which captures the information acquired due to sampling previously untested actions $a\notin \mathcal{A}_{t}$. Such an action provides information about $\tilde{A}$, since if $\tilde{A}\notin \mathcal{A}_t$ and $\theta_a\geq 1-D$, then $a$ is the first sampled action to be sufficiently close to optimal and $\tilde{A}=a$. Using the shorthand $P_{t}(X) = \mathbb{P}_{t}(X \in \cdot)$ to denote the posterior distribution of a random variable $X$, we have that for an untested action $a\notin \mathcal{A}_{t}$
\begin{eqnarray*}
I_{t}(\tilde{A} ; Y_{t,a} | A_t =a)& =& \sum_{\tilde{a} \in \mathcal{A}} \mathbb{P}_{t}\left( \tilde{A}=\tilde{a} \mid A_t =a \right) D\left(P_{t}(Y_{t, a} \mid \tilde{A} = \tilde{a}) \,||\, P_{t}(Y_{t,a}) \right) \\
&\geq& \mathbb{P}_{t}\left( \tilde{A}=a \mid A_t =a \right) D\left(P_{t}(Y_{t, a} \mid \tilde{A} = a) \,||\, P_{t}(Y_{t,a}) \right) \\
&=& \mathbb{P}_{t}\left( \tilde{A}=a \mid A_t =a \right) D\left(P_{t}(Y_{t, a} \mid \theta_{a}\geq 1-D ) \,||\, P_{t}(Y_{t,a}) \right) \\
&\geq & 2 \mathbb{P}_{t}\left( \tilde{A}=a \mid A_t =a \right) \left( \mathbb{E}_{t}\left[\theta_{a} \mid \theta_a \geq 1-D \right]- \mathbb{E}_{t}\left[\theta_{a}\right]\right)^2\\
&=& 2 \mathbb{P}_{t}\left( \tilde{A}=a \mid A_t =a \right) L^2 \\
&=& 2 \mathbb{P}_{t}\left( \tilde{A} \notin \mathcal{A}_t \right) \mathbb{P}_{t}\left( \tilde{A}=a \mid A_t =a, \tilde{A}\notin \mathcal{A}_t \right)L^2 \\
&=& 2 \mathbb{P}_{t}\left( \tilde{A} \notin \mathcal{A}_t \right) \delta L^2.
\end{eqnarray*}
The second inequality uses Fact \ref{fact: DME to DKL}. This implies
\[
\sum_{a\notin \mathcal{A}_t} \mathbb{P}_{t}(\tilde{A}=a) I_{t}(\tilde{A} ; Y_{t,a} | A_t =a) \geq 2 \mathbb{P}_{t}\left( \tilde{A} \notin \mathcal{A}_t \right)^2 \delta L^2.
\]
Next, an argument following the proof of Proposition 3 of \citet{russo2016info} shows
\begin{eqnarray*}
\sum_{a\in \mathcal{A}_{t}} \mathbb{P}_{t}(\tilde{A}=a)I_{t}(\tilde{A} ; Y_{t,a}) &=& \sum_{a\in \mathcal{A}_{t}} \mathbb{P}_{t}(\tilde{A}=a) \sum_{\tilde{a} \in \mathcal{A}} \mathbb{P}_{t}(\tilde{A}=\tilde{a})\, D\left(P_{t}(Y_{t,a} \mid \tilde{A}=\tilde{a}) \,||\, P_{t}(Y_{t,a}) \right) \\
&\geq & \sum_{a\in \mathcal{A}_{t}} \mathbb{P}_{t}(\tilde{A}=a)^2 D\left(P_{t}(Y_{t,a}\mid \tilde{A}=a) \,||\, P_{t}(Y_{t,a}) \right) \\
&\geq & 2\sum_{a\in \mathcal{A}_{t}} \mathbb{P}_{t}(\tilde{A}=a)^2 \left(\mathbb{E}_{t}[\theta_{a} | \tilde{A}=a ] - \mathbb{E}_{t}[\theta_{a}] \right)^2 \\
& \geq & \frac{2}{|\mathcal{A}_{t}|} \left( \sum_{a \in \mathcal{A}_{t}} \mathbb{P}_{t}( \tilde{A} = a)\left(\mathbb{E}_{t}[\theta_{a} | \tilde{A}=a ] - \mathbb{E}_{t}[\theta_{a}] \right) \right)^2
\end{eqnarray*}
where the second inequality uses Fact \ref{fact: DME to DKL}. Therefore
\[
I_t( \tilde{A}; (A_t, Y_{t}) ) \geq \underbrace{\frac{2}{|\mathcal{A}_{t}| } \left( \sum_{a \in \mathcal{A}_{t}} \mathbb{P}_{t}( \tilde{A} = a)\left(\mathbb{E}_{t}[\theta_{a} | \tilde{A}=a ] - \mathbb{E}_{t}[\theta_{a}] \right) \right)^2}_{G_{t,1}} +\underbrace{ 2\mathbb{P}_{t}(\tilde{A} \notin \mathcal{A}_{t})^2 \delta L^2}_{G_{t,2}},
\]
i.e., the information gain is lower bounded by the sum of two terms: one which captures the information gained by refining knowledge about previously sampled actions, and one which captures the expected information gathered about previously unexplored actions.
To bound the information ratio we'll separately consider two cases. If $\Delta_{t,1} \geq \Delta_{t,2}$,
then
\[
\frac{(\mathbb{E}_{t}[\theta_{\tilde{A}} - \theta_{A_t}])^2}{I_t( \tilde{A}; (A_t, Y_{t}) )} \leq \frac{(2\Delta_{t,1})^2}{G_{t,1}+G_{t,2}} \leq \frac{4(\Delta_{t,1})^2}{G_{t,1}} = 2 |\mathcal{A}_{t}|.
\]
If instead $\Delta_{t,1} < \Delta_{t,2}$, then
\[
\frac{(\mathbb{E}_{t}[\theta_{\tilde{A}} - \theta_{A_t}])^2}{I_t( \tilde{A}; (A_t, Y_{t}) )} \leq \frac{(2\Delta_{t,2})^2}{G_{t,1}+G_{t,2}} \leq \frac{4(\Delta_{t,2})^2}{G_{t,2}} = \frac{2}{\delta}.
\]
This shows
\[
\frac{(\mathbb{E}_{t}[\theta_{\tilde{A}} - \theta_{A_t}])^2}{I_t( \tilde{A}; (A_t, Y_{t}) )} \leq 2 |\mathcal{A}_{t}| +2/\delta.
\]
\end{proof}
Combining this result with Lemma \ref{lem: one step to total information ratio}
gives the bound
\begin{eqnarray*}
\Gamma\left(\tilde{A}, \psi^{\rm STS}_{D} \right) &\leq& (1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \mathbb{E}\left[\Gamma_{t}\right] \\
&\leq & 2/\delta + 2(1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \mathbb{E}[ |\mathcal{A} _{t}|].
\end{eqnarray*}
To use this result, we begin by bounding $\mathbb{E}[|\mathcal{A}_{t}|].$
\begin{lemma}\label{lem: bound on number of sampled actions}
$|\mathcal{A}_{0}|=0$ and for each $T \in \{1,2,\ldots\},$ $\mathbb{E}[|\mathcal{A}_{T}|] \leq 2+ \log(T)/\delta$.
\end{lemma}
\begin{proof}
Let $\tau_{k} = \min\{ t \leq T | |\mathcal{A}_{t}| \geq k \}$ denote the first period before $T$ in which $k$ actions have been sampled. Then
\begin{eqnarray*}
\mathbb{E}[|\mathcal{A}_{T}|] &=& \mathbb{E}[|\mathcal{A}_{\tau_{k}}|]+ \mathbb{E}[|\mathcal{A}_{T}|- |\mathcal{A}_{\tau_{k}}|]\\
&\leq & \mathbb{E}[|\mathcal{A}_{\tau_{k}}|]+ \mathbb{E}[|\mathcal{A}_{\tau_{k}+T}|- |\mathcal{A}_{\tau_{k}}|]\\
&\leq & k + \mathbb{E} \sum_{t=\tau_{k}}^{\tau_{k}+T-1} \mathbf{1}(A_t \notin \mathcal{A}_{t} ) \\
&=& k + \mathbb{E} \sum_{s=0}^{T-1} \mathbb{P}(A_{\tau_{k}+s} \notin \mathcal{A}_{\tau_{k}+s} \mid \mathcal{H}_{\tau_k +s-1} ) \\
&=& k + \mathbb{E} \sum_{s=0}^{T-1} \mathbb{P}(\tilde{A} \notin \mathcal{A}_{\tau_{k}+s} \mid \mathcal{H}_{\tau_k +s-1} ) \\
&= & k + \sum_{s=0}^{T-1} \mathbb{P}(\tilde{A} \notin \mathcal{A}_{\tau_{k}+s}) \\
& \leq & k + T \mathbb{P}(\tilde{A} \notin \mathcal{A}_{\tau_{k}}) \\
&= & k + T \mathbb{P}( {\rm Geom}(\delta) > k) \\
&=& k + T( 1- \delta)^{k} \\
&\leq & k +Te^{-\delta k}.
\end{eqnarray*}
Choosing $k = \lceil \log(T) / \delta \rceil \leq 1 +\log(T) / \delta,$ implies
\[
\mathbb{E}[ |\mathcal{A}_{T}|] \leq 2 + \log(T) / \delta.
\]
\end{proof}
The next technical lemma shows $\sum_{t=1}^{\infty} \gamma^{t} \log(t)=O\left(\frac{1}{1-\gamma}\log\left(\frac{1}{1-\gamma}\right)\right).$ The proof is given in Appendix \ref{se: geometric average of logs}.
\begin{lemma}\label{lem: geometric average of logs}
For any $\gamma \in (0,1)$,
\[
\sum_{t=1}^{\infty} \gamma^{t} \log(t)\leq \frac{1}{1-\gamma} \left[ 1+ \log\left(\frac{1}{1-\gamma}\right)\right].
\]
\end{lemma}
Finally we can conclude with the proof of Theorem \ref{thm:noisy regret bound}. As shown before,
\begin{eqnarray*}
\Gamma\left(\tilde{A}, \psi^{\rm STS}_{D} \right) &\leq& (1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \mathbb{E}\left[\Gamma_{t}\right] \\
&\leq & 2/\delta + 2(1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \mathbb{E}[ |\mathcal{A} _{t}|].
\end{eqnarray*}
By Lemma \ref{lem: bound on number of sampled actions} and Lemma \ref{lem: geometric average of logs} we find
\begin{eqnarray*}
(1-\alpha^2) \sum_{t=0}^\infty \alpha^{2t} \mathbb{E}[ |\mathcal{A} _{t}|] &\leq & (1-\alpha^2) \sum_{t=1}^\infty \alpha^{2t} \left(2+ \log(t)/\delta\right) \\
& \leq & 3 + (1/\delta)(1-\alpha^2) \sum_{t=1}^\infty \alpha^{2t} \log(t) \\
& \leq & 3 + (1/\delta)\left[ 1+ \log\left(\frac{1}{1-\alpha^2}\right)\right].
\end{eqnarray*}
This implies
\[
\Gamma\left(\tilde{A},\psi^{\rm STS}_{D} \right) \leq 6 + 4/\delta + (2/\delta)\log\left(\frac{1}{1-\alpha^2} \right) = O\left( (1/\delta) \log\left(\frac{1}{1-\alpha^2}\right) \right)
\]
and concludes the proof of Theorem \ref{thm:noisy regret bound}.
\subsection{Proof of the Lower Bound: Theorem \ref{thm: lower bound}}
The lower bound analysis leverages many of the standard information theoretic techniques for establishing minimax lower bounds \cite[See e.g.][]{tsybakov2008introduction}. We first give a reduction to a hypothesis testing problem in which the goal is simply to identify a satisficing action. We show identifying a satisficing action is hard by upper bounding a certain KL divergence, through the data-processing inequality and the chain rule. These techniques are based on a classical change of measure argument by \cite{lai1985asymptotically} as well as other bandit lower bounds \cite{kaufmann2016complexity, bubeck2012regret}. Our proof most closely resembles a proof in \citet{bubeck2012regret} that the minimax regret for $k$-armed un-discounted stochastic bandits is lower bounded by $\frac{1}{20}\sqrt{kT}$, where $T$ is the number of time periods. There is some novelty to our lower bound analysis, however. The most significant change is that our problem involves an infinite number of arms and an independent prior, so there are many satisficing arms and we need to argue it is difficult to consistently play any arm from that set. The construction in \cite{bubeck2012regret} involves a problem instance with a dependent prior, under which it is difficult to identify the single arm that differs from the other $k-1$ arms. We also show how to carry out lower bound analyses of discounted problems by analyzing the distribution of $A_{\tau}$, the action chosen at some randomly selected time $\tau \sim {\rm Geom}(1-\alpha)$.
\begin{proof}
{\bf Step 1: Expressing discounted sums in terms of random times.}\\
Let $\tau \sim {\rm Geom}(1-\alpha)$ be distributed independently of all other random variables. In several places, we use that, by the explicit form of the PMF $\mathbb{P}(\tau = t) = \alpha^t (1-\alpha)$,
\[
\mathbb{E} \sum_{t=0}^{\infty} \alpha^t f(A_t) = (1-\alpha)^{-1} \mathbb{E}[f(A_\tau)]
\]
for an arbitrary function $f: \mathbb{N} \to \mathbb{R}$. This allows us to compactly express discounted functionals of the entire sequence of interactions as expectations with respect to $A_{\tau}$ alone.
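For completeness, the identity follows in one line from the form of the PMF and the independence of $\tau$ from $(A_t)_{t\in\mathbb{N}}$ (say for bounded $f$, so that the sum and expectation may be exchanged):
\[
\mathbb{E} \sum_{t=0}^{\infty} \alpha^t f(A_t)
= \frac{1}{1-\alpha}\sum_{t=0}^{\infty} (1-\alpha)\alpha^{t}\, \mathbb{E}[f(A_t)]
= \frac{1}{1-\alpha}\sum_{t=0}^{\infty} \mathbb{P}(\tau = t)\, \mathbb{E}[f(A_t)]
= \frac{1}{1-\alpha}\, \mathbb{E}[f(A_\tau)].
\]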
{\bf Step 2: Regret lower bound in terms of the probability of satisficing.}\\
In this problem, the optimal expected reward is $R^* = \frac{1}{2}+D$. We let $S_{\theta}=\{ i \in \mathbb{N} \, : \, \theta_i \geq \frac{1}{2} \}$ denote the set of satisficing actions and ${\rm OPT}_{\theta} =\{ i \in \mathbb{N} \, : \, \theta_i= R^* \}$ the set of optimal actions. These sets are random due to their dependence on $\theta$. We have
\begin{align*}
{\rm SRegret}(\alpha, \psi, D) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \alpha^{t}(R^* - R_t - D)\right] &= \mathbb{E}\left[ \sum_{t=0}^{\infty} \alpha^{t}\,\mathbb{E}\left[ R^* - R_t - D \mid \mathcal{H}_{t-1} \right] \right] \\
&=\mathbb{E}\left[ \sum_{t=0}^{\infty} \alpha^{t}\,\mathbb{E}\left[ R^* - \theta_{A_t} - D \mid \mathcal{H}_{t-1} \right] \right] \\
&= \left(1-\alpha\right)^{-1} \mathbb{E}\left[ R^* - \theta_{A_\tau} - D\right]\\
& = \left(1-\alpha\right)^{-1} \left[ \Delta\mathbb{P}\left(\theta_{{A}_{\tau}}=\frac{1}{2}-\Delta\right) - D\mathbb{P}\left(\theta_{{A}_{\tau}}=\frac{1}{2}+D\right) \right]\\
&= \left(1-\alpha\right)^{-1} \left[ \Delta \mathbb{P}(A_\tau \notin S_{\theta}) - D \mathbb{P}(A_\tau \in {\rm OPT}_\theta) \right]
\end{align*}
We now upper bound $\mathbb{P}(A_\tau \in {\rm OPT}_\theta)$ in terms of $\epsilon$. Recall the definition $\mathcal{A}_t := \{A_0,\ldots, A_{t-1}\}$. Proceeding recursively, we have $\mathbb{P}(\theta_{A_1} < R^*) = 1-\epsilon$, since $A_1$ is chosen independently of $\theta$. Next, we have
\begin{align*}
\mathbb{P}(\theta_{A_1} < R^* \wedge \theta_{A_2} < R^* )&= (1-\epsilon)\mathbb{P}( \theta_{A_2} < R^* \mid \theta_{A_1} < R^*)\geq (1-\epsilon)^2.
\end{align*}
To understand the final inequality, note that $\mathbb{P}( \theta_{A_2} < R^* \mid \theta_{A_1} < R^*, A_1, A_2)$ is equal to one if $A_1=A_2$ and is equal to $1-\epsilon$ otherwise. Hence $\mathbb{P}( \theta_{A_2} < R^* \mid \theta_{A_1} < R^*, A_1, A_2) \geq 1-\epsilon$ almost surely. Repeating this process inductively gives
\[
\mathbb{P}(\theta_{A_1} < R^* \wedge\cdots \wedge \theta_{A_t} < R^*) \geq (1-\epsilon)^t.
\]
This gives the bound
\begin{align*}
\mathbb{P}( A_\tau \notin {\rm OPT}_\theta) &\geq \mathbb{P}(\theta_{A_1} < R^* \wedge\cdots \wedge \theta_{A_\tau} < R^*) \\
&= \mathbb{E}\left[ \mathbb{P}(\theta_{A_1} < R^* \wedge\cdots \wedge \theta_{A_\tau} < R^* \mid \tau) \right] \\
&\geq \mathbb{E}\left[ (1-\epsilon)^\tau \right] \\
&=\sum_{t=0}^{\infty} \alpha^{t} (1-\alpha) (1-\epsilon)^t \\
&=\frac{1-\alpha}{1-\alpha(1-\epsilon)}.
\end{align*}
We find
\[
\mathbb{P}( A_\tau \in {\rm OPT}_\theta) \leq 1-\frac{1-\alpha}{1-\alpha(1-\epsilon)} = \frac{\alpha \epsilon}{1-\alpha(1-\epsilon)} \leq \frac{ \epsilon}{ 1-\alpha}.
\]
We've reached our desired result, which lower bounds satisficing regret in terms of the probability of playing a satisficing arm at the random time $\tau$:
\begin{equation}\label{eq: lower bound by prob of satisficing}
{\rm SRegret}(\alpha, \psi, D) \geq \Delta \cdot \left( \frac{ 1-\mathbb{P}( A_\tau \in S_{\theta}) }{ 1- \alpha}\right) - D \cdot \left(\frac{\epsilon }{ (1-\alpha)^2} \right).
\end{equation}
{\bf Step 3: Identifying a satisficing action is hard.}\\
Our goal is to lower bound $\mathbb{P}( A_\tau \in S_{\theta})$. To do this, we consider two alternative infinite armed bandit models. Each induces a probability measure as follows: \\
\begin{enumerate}
\item The probability measure $\mathbb{P}(\cdot )$ corresponds to the infinite armed bandit model as described in the theorem statement. The collection $\theta \equiv (\theta_a)_{a \in \mathbb{N}}$ is drawn randomly according to the prior probabilities in \eqref{eq: lower bound construction}. The random seed $\xi$ is drawn uniformly from $[0,1]$. For each period $t$, the action $A_{t}=\psi(\mathcal{H}_{t-1}, \xi)$ is prescribed by the policy $\psi$. Then $\mathbb{P}(R_{t} = 1 \mid A_{t}, \theta, \mathcal{H}_{t-1}, \xi) = \theta_{A_t}$.
\item We consider an alternative model, which is identical except that rewards are always drawn from a Bernoulli distribution with mean $1/2-\Delta$. Precisely, we let $\mathbb{Q}$ be an alternative probability measure with the following properties: As before, the random seed $\xi$ is drawn uniformly from $[0,1]$, $A_{t} =\psi(\mathcal{H}_{t-1}, \xi)$ for each period $t$, and $\theta$ is drawn according to the prior probabilities in \eqref{eq: lower bound construction}. However, rewards are now independent of $\theta$, with $\mathbb{Q}(R_t =1 \mid A_t, \theta, \mathcal{H}_{t-1}, \xi) = 1/2-\Delta$. \\
\end{enumerate}
The idea of this construction is that $D_{\rm KL}\left({\mathbb Q}(\mathcal{H}_{t-1} = \cdot) \, || \, {\mathbb P}(\mathcal{H}_{t-1} = \cdot) \right)$ will reduce to considering the divergence in reward distributions, since this is the only source of discrepancy between the probability distributions. We continue to let $\tau \sim {\rm Geom}(1-\alpha)$ denote a geometric random variable that is mutually independent from $\theta$ and $(\mathcal{H}_{t})_{t\in \mathbb{N}}$ under both $\mathbb{P}$ and $\mathbb{Q}$. We take $\mathbb{E}_{\mathbb Q}[\cdot]$ to denote the expectation under the probability measure $\mathbb{Q}$.
Under $\mathbb{Q}$, the algorithm's observations carry no information about $\theta$, so $A_t$ is independent of $\theta$ and
\[
\mathbb{Q}(A_{t} \in S_{\theta}) = \mathbb{E}_{\mathbb Q}\left[ \mathbb{Q}( A_{t} \in S_{\theta} \mid A_t) \right] = \mathbb{E}_{\mathbb Q}[ \delta ] = \delta.
\]
Applying this gives,
\[
\mathbb{Q}(A_{\tau} \in S_{\theta}) = \mathbb{E}_{\mathbb Q}\left[ \mathbb{Q}( A_{\tau} \in S_{\theta} \mid \tau) \right] = \delta.
\]
Then, by Pinsker's inequality
\begin{align}\nonumber
\mathbb{P}(A_{\tau} \in S_{\theta}) & \leq \mathbb{Q}(A_\tau \in S_{\theta}) + \sqrt{\frac{1}{2} D_{\rm KL}\left( \mathbb{Q}(A_{\tau} \in S_{\theta}) \, ||\, \mathbb{P}( A_{\tau} \in S_{\theta}) \right)} \\
&= \delta + \sqrt{\frac{1}{2} D_{\rm KL}\left( \mathbb{Q}(A_{\tau} \in S_{\theta}) \, ||\, \mathbb{P}( A_{\tau} \in S_{\theta}) \right)}. \label{eq: bound from pinsker}
\end{align}
We upper bound and expand the KL divergence through repeated use of the data-processing inequality and the chain rule. We have
\begin{align*}
D_{\rm KL}\left( \mathbb{Q}(A_{\tau} \in S_{\theta}) \, ||\, \mathbb{P}( A_{\tau} \in S_{\theta}) \right) &\leq D_{\rm KL}\left( \mathbb{Q}( (A_\tau, \theta) = \cdot ) \, ||\, \mathbb{P}( (A_{\tau}, \theta) = \cdot ) \right) \\
& \leq D_{\rm KL}\left( \mathbb{Q}( (\tau, \theta, \mathcal{H}_{\tau}, \xi ) = \cdot ) \, ||\, \mathbb{P}( (\tau, \theta, \mathcal{H}_{\tau}, \xi) = \cdot ) \right) \\
& = D_{\rm KL}\left( \mathbb{Q}( (\tau, \theta, \xi) = \cdot )\, ||\, \mathbb{P}( (\tau, \theta, \xi) = \cdot ) \right)+ D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{\tau} = \cdot \mid \tau, \theta, \xi ) \, ||\, \mathbb{P}(\mathcal{H}_{\tau} = \cdot \mid \tau, \theta, \xi ) \right) \\
&= D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{\tau} = \cdot \mid \tau, \theta, \xi ) \, ||\, \mathbb{P}(\mathcal{H}_{\tau} = \cdot \mid \tau, \theta, \xi) \right) \\
&= \sum_{t=0}^{\infty} \mathbb{Q}(\tau =t) D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{\tau} = \cdot \mid \tau = t, \theta, \xi ) \, ||\, \mathbb{P}(\mathcal{H}_{\tau} = \cdot \mid \tau=t, \theta, \xi ) \right) \\
&= \sum_{t=0}^{\infty} \mathbb{Q}(\tau =t) D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{t} = \cdot \mid \theta, \xi ) \, ||\, \mathbb{P}(\mathcal{H}_{t} = \cdot \mid \theta, \xi ) \right)
\end{align*}
where the final equality uses the independence of $\tau$ and $(\theta, (\mathcal{H}_t)_{t\in \mathbb{N}}, \xi)$. Define the binary KL divergence function $d: [0,1]\times [0,1] \to \mathbb{R}$ by $d(p|| q) = p \log(p/q) + (1-p)\log((1-p)/ (1-q))$. Now, by the chain rule and the relation $\mathcal{H}_t=(\mathcal{H}_{t-1}, A_t, R_t)$, we have
\begin{align*}
D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{t} = \cdot \mid \theta, \xi ) \, ||\, \mathbb{P}(\mathcal{H}_{t} = \cdot \mid \theta, \xi ) \right) =& D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{t-1} = \cdot \mid \theta, \xi) \, ||\, \mathbb{P}(\mathcal{H}_{t-1} = \cdot \mid \theta, \xi) \right)\\
&+ D_{\rm KL}\left( \mathbb{Q}( A_t = \cdot \mid \theta, \mathcal{H}_{t-1}, \xi) \,\, ||\,\, \mathbb{P}( A_t = \cdot \mid \theta, \mathcal{H}_{t-1}, \xi) \right) \\
&+ D_{\rm KL}\left( \mathbb{Q}( R_t = \cdot \mid \theta, \mathcal{H}_{t-1}, A_t, \xi) \, \, ||\,\, \mathbb{P}( R_t = \cdot \mid \theta, \mathcal{H}_{t-1}, A_t, \xi) \right) \\
=& D_{\rm KL}\left( \mathbb{Q}( \mathcal{H}_{t-1} = \cdot \mid \theta, \xi) \,\, ||\,\, \mathbb{P}(\mathcal{H}_{t-1} = \cdot \mid \theta, \xi) \right) +\mathbb{E}_{\mathbb{Q}}\left[ d\left( 1/2-\Delta \,\, || \,\, \theta_{A_t} \right) \right] \\
=& \cdots\\
=& \mathbb{E}_{\mathbb{Q}} \left[ \sum_{\ell=0}^{t} d\left( 1/2-\Delta \,\, || \,\, \theta_{A_\ell} \right) \right] \\
\leq & \mathbb{E}_{\mathbb{Q}} \left[ \sum_{\ell=0}^{t} \left(\mathbf{1}(A_\ell \in S_{\theta})\, d\left( 1/2-\Delta \,\, || \,\, 1/2\right)+ \mathbf{1}(A_\ell \in {\rm OPT}_{\theta})\, d\left( 1/4 \,\, || \,\, 3/4\right) \right) \right] \\
=& \sum_{\ell=0}^{t} \mathbb{Q}(A_\ell \in S_{\theta}) d\left( 1/2-\Delta \,\, || \,\, 1/2 \right) + \sum_{\ell=0}^{t} \mathbb{Q}(A_\ell \in {\rm OPT}_{\theta}) d\left( 1/4 \,\, || \,\, 3/4 \right).
\end{align*}
Here we used that $D_{\rm KL}\left( \mathbb{Q}( A_t = \cdot \mid \theta, \mathcal{H}_{t-1}, \xi) \,\, ||\,\, \mathbb{P}( A_t = \cdot \mid \theta, \mathcal{H}_{t-1}, \xi) \right)=0$ since, conditioned on $(\mathcal{H}_{t-1}, \xi)$, $A_t$ is almost surely equal to $\psi_{t}(\mathcal{H}_{t-1}, \xi)$ under $\mathbb{Q}(\cdot)$ or $\mathbb{P}(\cdot)$. The inequality uses that $1/2-\Delta \geq 1/4$ and $\theta_a \leq 3/4$ by hypothesis. Plugging this in above, we find
\begin{align*}
& D_{\rm KL}\left( \mathbb{Q}(A_{\tau} \in S_{\theta}) \, ||\, \mathbb{P}( A_{\tau} \in S_{\theta}) \right) \\
\leq& \sum_{t=0}^{\infty} \mathbb{Q}(\tau =t) \sum_{\ell=0}^{t} \left[ \mathbb{Q}(A_\ell \in S_{\theta}) d\left( 1/2-\Delta \,\, || \,\, 1/2 \right)+\mathbb{Q}(A_\ell \in {\rm OPT}_{\theta}) d\left( 1/4 \,\, || \,\, 3/4 \right)\right] \\
=& \sum_{t=0}^{\infty} \mathbb{Q}(\tau \geq t) \left[ \mathbb{Q}(A_t \in S_{\theta}) d\left( 1/2 - \Delta \,\, || \,\, 1/2 \right)+\mathbb{Q}(A_t \in {\rm OPT}_{\theta}) d\left( 1/4 \,\, || \,\, 3/4 \right)\right] \\
=& \sum_{t=0}^{\infty} \alpha^{t} \delta d\left( 1/2-\Delta \,\, || \,\, 1/2 \right) + \sum_{t=0}^{\infty} \alpha^{t} \epsilon d\left( 1/4 \,\, || \,\, 3/4 \right) \\
=& \frac{\delta \cdot \Delta \log(\frac{1+\Delta}{1-\Delta} )}{1-\alpha} + \frac{\epsilon \cdot (1/4) \log (\frac{5/4}{3/4})}{1-\alpha}\\
=& \frac{\delta \cdot \Delta \log(1+ \frac{2\Delta}{1-\Delta})}{1-\alpha}+ \frac{\epsilon \cdot (1/4) \log (\frac{5}{3})}{1-\alpha}\\
\leq& \frac{4\delta \cdot \Delta^2 }{1-\alpha}+ \frac{\epsilon/4}{1-\alpha}
\end{align*}
where the last step uses the requirement that $\frac{1}{1-\Delta}< 2$. To conclude, plugging the above into \eqref{eq: bound from pinsker} and using the concavity of the square root, we have shown
\begin{equation}\label{eq: bound on prob of satisficing}
\mathbb{P}(A_{\tau} \in S_{\theta}) \leq \delta + \Delta \sqrt{ \frac{2\delta }{1-\alpha}} + \sqrt{\frac{\epsilon/4}{1-\alpha}}
\end{equation}
{\bf Step 4: Conclusion by plugging in for $D$ and $\epsilon$.} \\
Combining \eqref{eq: lower bound by prob of satisficing} and \eqref{eq: bound on prob of satisficing}, we have
\begin{align*}
{\rm SRegret}(\alpha, \psi, D) &\geq \left( \frac{ \Delta \cdot (1-\delta) - \Delta^2 \cdot \sqrt{ \frac{2\delta }{1-\alpha}}}{1-\alpha} \right) - \sqrt{\epsilon} \cdot \frac{ \Delta/2 }{(1-\alpha)^{3/2}} - \epsilon \cdot \left(\frac{D }{ (1-\alpha)^2} \right)\\
&\geq \left( \frac{ \Delta/2 - \Delta^2 \cdot \sqrt{ \frac{2\delta }{1-\alpha}}}{1-\alpha} \right) - \sqrt{\epsilon} \cdot \frac{ 1/8 }{(1-\alpha)^{3/2}} -\epsilon \cdot \left(\frac{1/4 }{ (1-\alpha)^2} \right) \\
&\geq \left( \frac{ \Delta/2 - \Delta^2 \cdot \sqrt{ \frac{2\delta }{1-\alpha}}}{1-\alpha} \right) - \frac{\sqrt{\epsilon} \cdot (3/8) }{(1-\alpha)^{2}} \\
&=: f(\Delta) - g(\epsilon )
\end{align*}
where in the final step we will require $\epsilon \leq 1$.
We now focus on the first term, and will eventually pick $\epsilon$ so the remaining term is sufficiently small. Set
\[
f(\Delta) = \frac{1}{1-\alpha} \cdot \left[ \frac{\Delta}{2} - \Delta^2 \cdot \sqrt{ \frac{2\delta }{1-\alpha}} \right]
\]
This is a concave quadratic function of $\Delta$ with global maximum at
\[
\Delta^* = \argmax_{\Delta \in \mathbb{R} } f(\Delta) = \frac{1}{4\sqrt{2} } \cdot \sqrt{\frac{1-\alpha }{\delta}}\,.
\]
For
\begin{equation}\label{eq: lower bound Delta choice}
\Delta_0 \equiv \min\left\{ \frac{1}{4} , \Delta^* \right\} = \left\{\begin{array}{lr}
1/4 & \quad \text{for } 2\delta \leq 1-\alpha \\
\Delta^* & \quad \text{for } 2\delta \geq 1-\alpha
\end{array}\right.
\end{equation}
we have
\[
f(\Delta_0) \geq \left\{\begin{array}{lr}
\frac{1}{16} \cdot \frac{1}{1-\alpha} & \quad \text{for } 2\delta \leq 1-\alpha \\
\frac{1}{16\sqrt{2} } \cdot \sqrt{\frac{1/\delta}{1-\alpha}} & \quad \text{for } 2\delta \geq 1-\alpha
\end{array}\right\} \geq \frac{1}{16} \cdot \min\left\{ \frac{1}{1-\alpha} \, , \, \sqrt{\frac{1/2\delta}{1-\alpha}} \right\}
\]
Now, we pick $\epsilon_0\leq 1$ so that
\[
g(\epsilon_0) \leq f(\Delta_0)/2.
\]
We need
\[
\frac{\sqrt{\epsilon_0} \cdot (3/8) }{(1-\alpha)^{2}} \leq \frac{1}{16} \cdot \min\left\{ \frac{1}{1-\alpha} \, , \, \sqrt{\frac{1/2\delta}{1-\alpha}} \right\}.
\]
This is satisfied with equality for
\begin{equation}\label{eq: lower bound epsilon choice}
\epsilon_0= \frac{1}{36} \cdot \min\left\{ (1-\alpha)^2 \, , \, \frac{(1-\alpha)^3}{2\delta} \right\}
\end{equation}
This shows that for a choice of $\Delta= \Delta_0$ and $\epsilon \leq \epsilon_0$, as in the theorem statement, we have
\[
{\rm SRegret}(\alpha, \psi, D) \geq \frac{1}{32} \cdot \min\left\{ \frac{1}{1-\alpha} \, , \, \sqrt{\frac{1/2\delta}{1-\alpha}} \right\}.
\]
\end{proof}
\section{Closing Remarks}
We have put forth a way of thinking about satisficing in bandit learning.
The per-period regret of an algorithm that learns a satisficing action $\tilde{A}$ does not converge to zero,
but the time required can often be far less than what it would be to learn an optimal action $A^*$. Intuitively,
this advantage stems from the fact that the mutual information $I(\theta; \tilde{A})$, which can be thought of
as the number of bits of information about the model required to learn $\tilde{A}$, can be far less than $I(\theta; A^*)$.
Satisficing plays a particularly important role when there is a satisficing action $\tilde{A}$ for which $I(\theta; \tilde{A}) \ll I(\theta; A^*)$
and the agent exhibits time preference, valuing near-term over long-term rewards. To express this
in terms of a formal objective, we considered expected discounted regret. We also introduced satisficing Thompson sampling,
and established results pertaining to infinite-armed and linear bandits demonstrating that this variant of Thompson sampling
captures benefits of targeting a satisficing action.
We believe this paper puts forth a useful conceptual framework and that satisficing Thompson sampling could serve as a useful design principle for time-sensitive learning problems. But we also feel this paper takes only a very preliminary step toward understanding satisficing in modern bandit learning. Future work might try to analyze the rate distortion function and information ratio for broader classes of problems than addressed in this paper. We suspect models like the hierarchical bandit in Example 2 may be quite useful in practice, and warrant more complete study. Satisficing is even more important in reinforcement learning than in bandit problems. Most ideas in this paper extend gracefully to contextual bandits, but extensions to reinforcement learning seem difficult and are a fascinating direction for future work.
\section*{Acknowledgments.}
The second author was generously supported by a research grant from Boeing and a Marketing Research Award from Adobe. A special thanks is owed to David Tse, who played an important role in the early stages of this work. It was David who first emphasized that bounds based on entropy can be vacuous and pointed us to references on rate-distortion theory. We also thank Tor Lattimore for thoughtful comments on an early draft of this work.
\section{Introduction}
\label{Intro}
The origin of dark matter remains one of the biggest mysteries in fundamental physics. One of the simplest explanations, which would rely neither on the presence of new particles nor on modifications of the gravitational interaction, is black holes. An interesting region in parameter space where the contribution of black holes to the total dark matter abundance could be between $10\%$ and $100\%$ depending on astrophysical uncertainties is $10^{-17} M_\odot \lesssim M_{\scriptscriptstyle \rm BH}\lesssim 10^{-13} M_\odot$ \cite{Sasaki:2018dmp}, where the lower bound comes from extra-galatic $\gamma$ rays produced due to Hawking evaporation \cite{Hawking:1974rv}. This region, even if it is far from the one probed by LIGO, is very interesting since there is no known astrophysical explanation for black hole formation in this small mass window.\footnote{Depending on the interpretation of astrophysical and cosmological data, X-ray and CMB observations seem to rule out the case where black holes in the LIGO mass region can constitute a fraction of the dark matter abundance above $10\%$ \cite{Ali-Haimoud:2016mbv, Poulin:2017bwe, Gaggero:2016dpq}. Moreover the single-field inflationary dynamics seems to be very unlikely to generate black holes with masses as large as a few solar masses when the scalar spectral index is required to be compatible with CMB data.} On the other hand, these tiny black holes could be seeded by the dynamics of the early universe \cite{Hawking:1971ei, Carr:1974nx}. A tantalizing idea for the formation of these primordial black holes (PBHs) relies on an amplification of the density perturbations during inflation of order $\delta\rho \sim 0.1 \,\rho$ which then collapse to form PBHs at horizon re-entry.
This enhancement of the scalar power spectrum has to take place at momentum scales which are much larger than the ones associated with CMB observations where $\delta\rho \sim 10^{-5} \rho$. From the theoretical point of view, it is therefore important to identify mechanisms to generate the necessary enhancement at the right scales. Guided again by simplicity, we focus on single field inflationary models which also reproduce the Planck data rather well \cite{Ade:2015lrj}.\footnote{For PBH formation in multi-field inflationary models see \cite{Kawasaki:2012wr,Carr:2016drx,Garcia-Bellido:2016dkw}.} It has already been pointed out that the required inflationary potentials feature a slow-roll behaviour followed by a near inflection point region where the power spectrum is amplified since the system enters an ultra slow-roll regime \cite{Ivanov:1994pa, Garcia-Bellido:2017mdw, Ezquiaga:2017fvi, Germani:2017bcs, Ballesteros:2017fsr}.
Despite the fact that dark matter as PBHs formed during single-field inflation might seem a very appealing idea, its explicit realisation in concrete models has turned out to be rather complicated since the inflationary potential has to possess enough tuning freedom to allow for such dynamics \cite{Hertzberg:2017dkh}. Examples based on a radiative plateau have been recently studied in \cite{Ezquiaga:2017fvi, Germani:2017bcs, Ballesteros:2017fsr}. This is a bottom up perspective which tries to single out the simplest potential which allows for PBH formation via an inflationary plateau followed by a near inflection point. However this approach ignores the fundamental issue of deriving the model from a UV consistent theory where a symmetry argument can protect the flatness of the potential against quantum corrections.
In this paper we shall instead take a more top down approach and search for concrete examples of inflationary models in string theory whose structure is rich enough to allow for PBH formation. One of the main advantages of embedding inflation in string theory is the possibility to motivate the presence of a symmetry which can protect the inflaton potential against quantum corrections which can spoil its flatness \cite{Baumann:2009ni, Burgess:2013sla}. Particularly interesting cases include inflaton candidates which are pseudo Nambu-Goldstone bosons associated with slightly broken shift symmetries. Abelian symmetries involve both axions \cite{Pajer:2013fsa}, which are associated to compact $U(1)$ factors, and K\"ahler moduli \cite{Cicoli:2011zz}, which are associated with non-compact rescaling symmetries \cite{Burgess:2014tja}.\footnote{The non-Abelian case leads to a multi-field inflationary scenario which tends to be disfavoured by non-Gaussianity observations \cite{Ade:2015lrj}.}
This global rescaling symmetry is explicitly realised at tree-level in type IIB no-scale models since the K\"ahler moduli $\tau$ remain exact flat directions but needs to be slightly broken to generate the inflationary potential. This can be done either by non-perturbative effects or by perturbative power-law corrections which become exponential in terms of the canonically normalised inflaton: $V_0/\tau^n \sim V_0 \,e^{-n \phi/f}$. Notice that the shape of the inflationary potential is determined by both the effective `decay constant' $f$, i.e. the geometry of the moduli space (determined by the topology of the divisor whose volume is parameterised by the inflaton) and $n$, i.e. the exact moduli-dependence of the symmetry-breaking effects which develop the inflationary potential \cite{Burgess:2016owb}. Once a proper uplifting to dS has been achieved via the addition of a constant contribution (which can have several dynamical origins \cite{Kachru:2003aw, Kallosh:2014wsa, Bergshoeff:2015jxa, Burgess:2003ic, Cicoli:2013cha, Cicoli:2015ylx, Cicoli:2012fh}), these models tend to give rise to an inflationary potential of the schematic form \cite{Burgess:2016owb}:
\begin{equation}
V_{\rm inf} = V_0 \left(1- \,e^{-n \phi/f}\right)\,.
\label{Vinf1}
\end{equation}
These models go under the name of \textit{Fibre Inflation} since the underlying Calabi-Yau compactification manifold has a typical fibration structure \cite{Cicoli:2008gp, Broy:2015zba, Cicoli:2016chb}. They are interesting since they drive inflation successfully via a plateau-like region at large $\phi$ and also allow for a detailed analysis of the post-inflationary evolution \cite{Cicoli:2010ha, Cabella:2017zsa, Antusch:2017flz}. Moreover they provide string theory embeddings of Starobinsky inflation \cite{Starobinsky:1980te} and supergravity $\alpha$-attractors \cite{Kallosh:2013maa, Kallosh:2015zsa, Carrasco:2015pla} (where in our notation $\alpha \simeq (f/n)^2$). Nevertheless the potential (\ref{Vinf1}) is too simple to generate PHBs via a period of ultra slow-roll dynamics towards the end of inflation. However recent global constructions of fibre inflation models in concrete Calabi-Yau orientifolds with explicit brane setup and closed string moduli stabilisation have revealed the existence of new string loop corrections which look schematically like \cite{Cicoli:2016xae, Cicoli:2017axo}:
\begin{equation}
\delta V_{\rm inf} = - \epsilon_1 V_0 \,\frac{e^{2 n \phi/f}}{1+ \epsilon_2\,e^{3 n \phi/f}} \,,
\label{Vinf2}
\end{equation}
where $\epsilon_1 \ll 1$ and $\epsilon_2 \ll 1$ are two parameters which are tunable since they depend on background fluxes and the Calabi-Yau intersection numbers, and turn out to be naturally small since they are suppressed by inverse powers of the compactification volume, an exponentially large quantity \cite{Balasubramanian:2005zx, Cicoli:2008va}. Thanks to the additional perturbative contribution (\ref{Vinf2}), we will show that fibre inflation models are rich enough to produce a near-inflection point region before the end of inflation which is perfectly suitable to generate PBHs in the mass window $10^{-17} M_\odot \lesssim M_{\scriptscriptstyle \rm PBH}\lesssim 10^{-13} M_\odot$ where they could constitute a significant fraction of the total dark matter abundance.
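As a simple numerical illustration (with placeholder parameter values chosen purely for illustration and not derived from any explicit compactification), one can evaluate the corrected potential $V_{\rm inf}+\delta V_{\rm inf}$ of \eqref{Vinf1} and \eqref{Vinf2} as a function of the canonically normalised inflaton and scan for the region where the slow-roll parameter $\epsilon_V = \frac{1}{2}\left(V'/V\right)^2$ becomes small:
\begin{verbatim}
import numpy as np

n, f = 1.0, np.sqrt(3.0)                 # illustrative exponent and decay constant
V0, eps1, eps2 = 1.0, 1.5e-5, 2.0e-6     # placeholder coefficients (must be tuned)

def V(phi):
    x = np.exp(n * phi / f)
    return V0 * (1.0 - 1.0 / x - eps1 * x**2 / (1.0 + eps2 * x**3))

phi = np.linspace(1.0, 12.0, 20001)      # in reduced Planck units
Vp = np.gradient(V(phi), phi)
eps_V = 0.5 * (Vp / V(phi))**2

i = int(np.argmin(eps_V))
trapped = bool(np.any(Vp[:-1] * Vp[1:] < 0))   # does V' change sign (a local extremum)?
print(f"flattest point: phi ~ {phi[i]:.2f}, eps_V ~ {eps_V[i]:.1e}, "
      f"{'local extremum present' if trapped else 'monotonic'}")
\end{verbatim}
In practice $\epsilon_1$ and $\epsilon_2$ have to be tuned so that the resulting feature in the plateau is a near-inflection point, where $V'$ barely touches zero, rather than a local minimum which would trap the inflaton.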
As pointed out in \cite{Kannike:2017bxn, Germani:2017bcs, Ballesteros:2017fsr, Motohashi:2017kbs}, the slow-roll approximation ceases to be valid in the near inflection point region. The primordial power spectrum has to be computed by solving the Mukhanov-Sasaki equations for the curvature perturbations \cite{Sasaki:1986hm, Mukhanov:1988jd}. By following this procedure, we shall show that the primordial power spectrum can feature the required enhancement for appropriate values of the underlying parameters. Let us stress that even if the choice of microscopic parameters needed for successful PBH formation looks very non-generic from the string landscape point of view, the values of these parameters are technically natural since they are protected against large quantum corrections by the effective rescaling shift symmetry typical of these models \cite{Burgess:2014tja}.
This paper is organised as follows. In Sec. \ref{FIReview} we provide a very brief review of fibre inflation models while in Sec. \ref{sec:PBH} we describe the mechanism of PBH generation in some detail. In Sec. \ref{PBHFibre} we then perform a careful analysis of the process of PBH formation in fibre inflation by implementing the Mukhanov-Sasaki formalism to derive the primordial power spectrum. We finally discuss our results and present our conclusions in Sec. \ref{Concl}.
\section{Fibre inflation models}
\label{FIReview}
Fibre inflation is a class of string inflationary models built within the framework of type IIB flux compactifications \cite{Cicoli:2008gp, Broy:2015zba, Cicoli:2016chb, Burgess:2016owb}. The inflaton $\tau_{\scriptscriptstyle {\rm K3}}$ is a K\"ahler modulus controlling the size of a K3 divisor fibred over a $\mathbb{P}^1$ base with volume $t_{\mathbb{P}^1}$. The simplest fibre inflation models feature a Calabi-Yau (CY) volume which looks like:
\begin{equation}
{{\cal{V}}} = t_{\small \mathbb{P}^1}\tau_{\scriptscriptstyle {\rm K3}} - \tau_{\scriptscriptstyle {\rm dP}}^{3/2}\,,
\end{equation}
where $\tau_{\scriptscriptstyle {\rm dP}}$ is the volume of a diagonal del Pezzo divisor which supports non-perturbative effects. Several effects come into play to stabilise the K\"ahler moduli in a typical LVS vacuum \cite{Balasubramanian:2005zx, Cicoli:2008va}. At leading order in a $1/{{\cal{V}}}\ll 1$ expansion only two directions, ${{\cal{V}}}$ and $\tau_{\scriptscriptstyle {\rm dP}}$, are lifted by non-perturbative contributions to the superpotential $W$ \cite{Kachru:2003aw} and perturbative $\alpha'$ corrections to the K\"ahler potential $K$ \cite{Becker:2002nn, Minasian:2015bxa, Bonetti:2016dqh}.\footnote{At this level of approximation, also the axionic partner of $\tau_{\scriptscriptstyle {\rm dP}}$ is fixed by non-perturbative effects.} Hence the remaining flat direction, which can be parametrised by $\tau_{\scriptscriptstyle {\rm K3}}$, represents a very promising inflaton candidate since it enjoys an effective non-compact rescaling symmetry which can be used to protect the flatness of the inflationary potential against quantum corrections \cite{Burgess:2014tja}.\footnote{There are actually other two flat directions corresponding to the axions associated with the base and the fibre which turn out to be much lighter than $\tau_{\scriptscriptstyle {\rm K3}}$. Hence these fields acquire isocurvature fluctuations during inflation. However present strong bounds on isocurvature fluctuations do not apply to our case since these axions tend to be too light to behave as dark matter.}
In order to generate the inflationary potential, this effective shift symmetry has to be slightly broken. This is realised by open string 1-loops which depend on all K\"ahler moduli \cite{Berg:2005ja, Berg:2007wt, Berg:2014ama, Haack:2015pbv} but are subdominant with respect to the leading $\alpha'$ effects thanks to the extended no-scale structure typical of these models \cite{Cicoli:2007xp}. Other contributions to the inflationary potential arise from higher derivative $\alpha'$ effects \cite{Ciupke:2015msa, Grimm:2017okk} but these are also ${{\cal{V}}}$-suppressed if the superspace derivative expansion is under control \cite{Cicoli:2013swa}. Moreover, all these corrections give rise to an AdS vacuum which needs to be uplifted to dS by the inclusion of anti-branes \cite{Kachru:2003aw, Kallosh:2014wsa, Bergshoeff:2015jxa}, hidden sector T-branes \cite{Burgess:2003ic, Cicoli:2013cha, Cicoli:2015ylx} or non-perturbative effects at singularities \cite{Cicoli:2012fh}. It is important to stress that all these uplifting effects are inflaton-independent since they depend just on the overall volume ${{\cal{V}}}$. Thus they give rise to a constant contribution to the inflationary potential which is crucial to develop a plateau-like behaviour at large inflaton values.
After canonical normalisation of the inflaton field, the resulting potential is qualitatively very similar to the one of Starobinsky inflation \cite{Starobinsky:1980te} and $\alpha$-attractor supergravity models \cite{Kallosh:2013maa, Kallosh:2015zsa, Carrasco:2015pla}. In fact, fibre inflation models require a trans-Planckian field range to obtain enough e-foldings of inflationary expansion, and so they can predict a tensor-to-scalar ratio as large as $r\sim 0.005 - 0.01$. These models are particularly interesting also because they can be embedded into globally consistent CY orientifold compactifications with an explicit brane setup and chiral matter \cite{Cicoli:2016xae, Cicoli:2017axo}. In the study of concrete CY realisations of string models where the inflaton is a K\"ahler modulus, it has been recently realised that the underlying K\"ahler cone conditions set strong geometrical constraints on the allowed inflaton range \cite{Cicoli:2018tcq}. Interestingly, it has been found that the distance travelled by the inflaton in field space can generically be trans-Planckian only for K3-fibred CY threefolds, which are exactly the necessary ingredients to construct fibre inflation models.
The two moduli which are stabilised at leading order in $1/{{\cal{V}}}$ are heavier than the Hubble constant whose size is set by the uplifting contribution. Hence ${{\cal{V}}}$ and $\tau_{\scriptscriptstyle {\rm dP}}$ do not play a significant r\^ole during inflation which is instead driven mainly by the light field $\tau_{\scriptscriptstyle {\rm K3}}$. Fibre inflation models are therefore, to a very good level of approximation, single-field inflationary models whose potential looks like \cite{Cicoli:2008gp, Broy:2015zba, Cicoli:2016chb}:
\begin{equation}
V_{\rm inf}=\left(\frac{C_{\scriptscriptstyle {\rm up}}}{{{\cal{V}}}^{4/3}} + g_s^2 \,\frac{C_{\scriptscriptstyle \rm KK}}{\tau_{\scriptscriptstyle {\rm K3}}^2} + \frac{W_0^2}{\sqrt{g_s}}\frac{\epsilon_{\scriptscriptstyle {\rm F}^4}}{{{\cal{V}}}\,\tau_{\scriptscriptstyle {\rm K3}}} -\frac{C_{\scriptscriptstyle \rm W}}{{{\cal{V}}}\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}} +g_s^2\,D_{\scriptscriptstyle \rm KK}\,\frac{\tau_{\scriptscriptstyle {\rm K3}}}{{{\cal{V}}}^2} +\delta_{\scriptscriptstyle {\rm F}^4}\,\frac{W_0^2}{\sqrt{g_s}}\,\frac{\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}}{{{\cal{V}}}^2}\right)\frac{W_0^2}{{{\cal{V}}}^2}\,,
\label{PotFI}
\end{equation}
where $g_s\ll 1$ is the string coupling and $W_0\sim \mathcal{O}(1-10)$ is the superpotential generated by background fluxes, which is constant after the dilaton and the complex structure moduli are stabilised at tree-level. $C_{\scriptscriptstyle {\rm up}}$ controls the uplifting contribution and, depending on the particular mechanism employed, it can have a different dependence on the internal volume ${{\cal{V}}}$ and on background or gauge fluxes. $C_{\scriptscriptstyle \rm KK}>0$, $D_{\scriptscriptstyle \rm KK}>0$ and $C_{\scriptscriptstyle \rm W}$ are the coefficients of 1-loop open string corrections which come respectively from the tree-level exchange of closed Kaluza-Klein strings between non-intersecting stacks of branes, and winding closed strings between intersecting branes \cite{Berg:2005ja, Berg:2007wt, Berg:2014ama, Haack:2015pbv}. These constants are also functions of the vacuum expectation values of the complex structure moduli and are expected to be of order unity: $C_{\scriptscriptstyle \rm KK}\sim D_{\scriptscriptstyle \rm KK}\sim C_{\scriptscriptstyle \rm W} \sim \mathcal{O}(1)$. On the other hand, $\epsilon_{\scriptscriptstyle {\rm F}^4}$ and $\delta_{\scriptscriptstyle {\rm F}^4}$ are the coefficients of higher derivative $\alpha'$ $F^4$ effects which depend just on the topological properties of the underlying geometry and are expected to be positive but relatively small: $\epsilon_{\scriptscriptstyle {\rm F}^4}\sim\delta_{\scriptscriptstyle {\rm F}^4}\sim \mathcal{O}(10^{-3})$ \cite{Ciupke:2015msa, Grimm:2017okk}.
The potential (\ref{PotFI}) is rich enough to generate a minimum for small $\tau_{\scriptscriptstyle {\rm K3}}$, an inflationary plateau-like behaviour at large $\tau_{\scriptscriptstyle {\rm K3}}$ and finally a steepening region at very large $\tau_{\scriptscriptstyle {\rm K3}}$ where the system is in a fast-roll regime.\footnote{Pre-inflationary fast to slow-roll transitions in fibre inflation models can give rise to a power loss at large angular scales \cite{Cicoli:2013oba, Pedro:2013pba, Cicoli:2014bja}.} In order to perform a proper study of the inflationary dynamics, the field $\tau_{\scriptscriptstyle {\rm K3}}$ has to be written in terms of its canonically normalised counterpart $\phi$ as \cite{Cicoli:2008gp}:
\begin{equation}
\tau_{\scriptscriptstyle {\rm K3}} = e^{\frac{2}{\sqrt{3}} \phi} = \langle\tau_{\scriptscriptstyle {\rm K3}}\rangle \,e^{\frac{2}{\sqrt{3}} \hat\phi}\,,
\label{CanNorm}
\end{equation}
where we have expanded $\phi$ around its minimum as $\phi=\frac{\sqrt{3}}{2} \ln\langle\tau_{\scriptscriptstyle {\rm K3}}\rangle + \hat\phi$. Substituting (\ref{CanNorm}) in (\ref{PotFI}), we end up with:
\begin{equation}
V_{\rm inf}=V_0 \left(C_1 + C_2 \,e^{-\frac{4}{\sqrt{3}} \hat\phi} + C_3\,e^{-\frac{2}{\sqrt{3}} \hat\phi} - \,e^{-\frac{1}{\sqrt{3}} \hat\phi}
+C_4\,e^{\frac{2}{\sqrt{3}} \hat\phi} +C_5\,e^{\frac{1}{\sqrt{3}} \hat\phi}\right)\,,
\label{PotFInorm}
\end{equation}
where, parameterising the inflaton minimum as $\langle\tau_{\scriptscriptstyle {\rm K3}}\rangle^{3/2} \equiv \gamma\, {{\cal{V}}}$, we have:
\bea
V_0 &=& \frac{C_{\scriptscriptstyle \rm W}\,W_0^2}{\gamma^{1/3}{{\cal{V}}}^{10/3}}\,, \qquad
C_1 = \gamma^{1/3}\,\frac{C_{\scriptscriptstyle {\rm up}}}{C_{\scriptscriptstyle \rm W}}\,, \qquad
C_2 = g_s^2 \,\frac{C_{\scriptscriptstyle \rm KK}}{\gamma\,C_{\scriptscriptstyle \rm W}}\,, \nonumber \\
C_3 &=& \frac{W_0^2}{\gamma^{1/3}\,C_{\scriptscriptstyle \rm W}\,\sqrt{g_s}}\frac{\epsilon_{\scriptscriptstyle {\rm F}^4}}{{{\cal{V}}}^{1/3}}\,, \qquad
C_4 = \gamma\,g_s^2\,\frac{D_{\scriptscriptstyle \rm KK}}{C_{\scriptscriptstyle \rm W}}\,, \qquad C_5 = \gamma\,C_3\,\frac{\delta_{\scriptscriptstyle {\rm F}^4}}{\epsilon_{\scriptscriptstyle {\rm F}^4}}\,.
\label{Param}
\eea
The potential (\ref{PotFInorm}) can have a plateau-like region which can support enough efoldings of inflation only if the coefficients of the positive exponentials are suppressed, i.e. $C_4\ll 1$ and $C_5\ll 1$, which, in turn, requires $\gamma \ll 1$. This is naturally achieved if the three negative exponentials compete to give a minimum since this can happen when $\gamma \sim g_s^2 \ll 1$. The inflationary plateau is then generated mainly by the fourth term in (\ref{PotFInorm}). Notice that the Hubble constant during inflation is set by $V_0$ and scales as $H^2\sim M_p^2 /{{\cal{V}}}^{10/3}$.\footnote{$M_p$ denotes the reduced Planck mass $M_p = 1/\sqrt{8\pi G } \simeq 2.4\cdot 10^{18}$ GeV.} The mass of the inflaton around the minimum is of order $H$ but then quickly becomes exponentially smaller than $H$ for $\hat\phi>0$.
Even if (\ref{PotFInorm}) is a very promising potential to drive inflation, it is not rich enough to generate primordial black holes due to the requirement of a significant enhancement of the power spectrum at large momentum scales. However recent explicit constructions of fibre inflation models in concrete type IIB CY compactifications with D3/D7-branes and O3/O7-planes have reproduced the potential (\ref{PotFI}) in a slightly generalised form since \cite{Cicoli:2016xae, Cicoli:2017axo}:
\begin{itemize}
\item In general the coefficient $C_{\scriptscriptstyle \rm W}$ is not a constant but a function of the fibre modulus $\tau_{\scriptscriptstyle {\rm K3}}$ of the form:
\begin{equation}
C_{\scriptscriptstyle \rm W} \quad\to\quad C_{\scriptscriptstyle \rm W}(\tau_{\scriptscriptstyle {\rm K3}}) = C_{\scriptscriptstyle \rm W} - \frac{A_{\scriptscriptstyle \rm W}\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}}{\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}-B_{\scriptscriptstyle \rm W}}\,,
\label{Cnew}
\end{equation}
where the parameters $C_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ and $A_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ depend on the vacuum expectation values of the complex structure moduli, while $B_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ depends on topological properties of the underlying CY threefold like the intersection numbers and the Euler number.
\item The effective action features additional winding 1-loop corrections to the inflationary potential which will turn out to be crucial for the formation of primordial black holes and look like:
\begin{equation}
\delta V_{\scriptscriptstyle \rm W} = W_0^2\,\frac{\tau_{\scriptscriptstyle {\rm K3}}}{{{\cal{V}}}^4}
\left(D_{\scriptscriptstyle \rm W} - \frac{G_{\scriptscriptstyle \rm W}}{1+R_{\scriptscriptstyle \rm W} \,\frac{\tau_{\scriptscriptstyle {\rm K3}}^{3/2}}{{{\cal{V}}}}}\right)\,,
\label{AddPot}
\end{equation}
where again $D_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ and $G_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ become constants only after complex structure moduli stabilisation, while $R_{\scriptscriptstyle \rm W}\sim\mathcal{O}(1)$ depends on the topological features of the extra dimensions.
\end{itemize}
Depending on the details of a given brane setup (in particular the presence of intersections between D-branes and O-planes and the topological properties of two-cycles where different stacks can intersect), several contributions to the generic scalar potential (\ref{PotFI}), supplemented with (\ref{Cnew}) and (\ref{AddPot}), can be absent by construction. In what follows, we shall therefore focus just on winding 1-loop corrections that represent the simplest situation which can lead to a successful generation of primordial black holes. This is justified for example by the fact that the global chiral embedding of fibre inflation presented in \cite{Cicoli:2017axo} does not feature any Kaluza-Klein loop correction, i.e. $C_{\scriptscriptstyle \rm KK}=D_{\scriptscriptstyle \rm KK}=0$.\footnote{Even if both $C_{\scriptscriptstyle \rm KK}$ and $D_{\scriptscriptstyle \rm KK}$ are non-zero, in a vast region of the parameter space, Kaluza-Klein loops would still be subdominant with respect to winding loops due to the extra factors of $g_s^2\ll 1$ in (\ref{Param}). This is due to the fact that Kaluza-Klein loops feature an extended no-scale cancellation, and so they contribute to the scalar potential effectively only at 2-loop order \cite{Cicoli:2007xp}.} Moreover higher derivative $F^4$ terms tend also to be negligible since, as can be seen from (\ref{Param}), they should be suppressed by both inverse volume powers and by $\epsilon_{\scriptscriptstyle {\rm F}^4}\ll 1$ and $\delta_{\scriptscriptstyle {\rm F}^4}\ll 1$. Hence in Sec. \ref{PBHFibre} we shall study primordial black hole formation for the following simplified inflationary potential:
\begin{equation}
V_{\rm inf}= \frac{W_0^2}{{{\cal{V}}}^3}\left[\frac{C_{\scriptscriptstyle {\rm up}}}{{{\cal{V}}}^{1/3}} -\frac{C_{\scriptscriptstyle \rm W}}{\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}} + \frac{A_{\scriptscriptstyle \rm W}}{\sqrt{\tau_{\scriptscriptstyle {\rm K3}}}-B_{\scriptscriptstyle \rm W}}
+\frac{\tau_{\scriptscriptstyle {\rm K3}}}{{{\cal{V}}}} \left(D_{\scriptscriptstyle \rm W}-\frac{G_{\scriptscriptstyle \rm W}}{1+R_{\scriptscriptstyle \rm W} \,\frac{\tau_{\scriptscriptstyle {\rm K3}}^{3/2}}{{{\cal{V}}}}}\right)\right]\,,
\label{eq:Vinf}
\end{equation}
which, when expressed in terms of the canonically normalised inflaton shifted from its minimum, takes the form:
\begin{equation}
V_{\rm inf}=V_0 \left[C_1 - \,e^{-\frac{1}{\sqrt{3}} \hat\phi} \left(1-\frac{C_6}{1-C_7\,e^{-\frac{1}{\sqrt{3}} \hat\phi}}\right)
+C_8\,e^{\frac{2}{\sqrt{3}} \hat\phi} \left(1- \frac{C_9}{1+C_{10}\,e^{\sqrt{3} \hat\phi}}\right)\right]\,,
\label{PotNorm}
\end{equation}
with:
\bea
C_6 &=& \frac{A_{\scriptscriptstyle \rm W}}{C_{\scriptscriptstyle \rm W}}\sim \mathcal{O}(1)\,, \qquad
C_7 = \,\frac{B_{\scriptscriptstyle \rm W}}{\gamma^{1/3}{{\cal{V}}}^{1/3}}\sim \mathcal{O}(1)\,, \qquad
C_8 = \gamma\, \frac{D_{\scriptscriptstyle \rm W}}{C_{\scriptscriptstyle \rm W}}\ll 1 \,, \nonumber \\
C_9 &=& \frac{G_{\scriptscriptstyle \rm W}}{D_{\scriptscriptstyle \rm W}}\sim \mathcal{O}(1)\,, \qquad
C_{10} = \gamma\,R_{\scriptscriptstyle \rm W} \ll 1\,.
\label{NewParam}
\eea
\section{PBH formation}
\label{sec:PBH}
Primordial black holes form when large and relatively rare density perturbations re-enter the Hubble horizon and undergo gravitational collapse. The fraction of the total energy density in PBHs with mass $M$ at PBH formation is given by:
\begin{equation}
\beta_{\rm f}(M)=\frac{\rho_{\scriptscriptstyle \rm PBH} (M)}{\rho_{\rm tot}}\Big|_{\rm f}\,.
\label{betaf}
\end{equation}
The curvature perturbations are assumed to follow a Gaussian distribution with width $\sigma_{\scriptscriptstyle {\rm M}}\equiv\sigma(M)$.\footnote{See \cite{Franciolini:2018vbk} for the case when non-Gaussianity effects cannot be neglected.} The probability of large fluctuations leading to the formation of PBHs with mass $M$ is then given by:
\begin{equation}
\beta_{\rm f}(M)=\int_{\zeta_c}^\infty \frac{1}{\sqrt{2\pi}\,\sigma_{\scriptscriptstyle {\rm M}}}\, e^{-\frac{\zeta^2}{2\sigma_{\scriptscriptstyle {\rm M}}^2}}\,d\zeta\ ,
\label{eq:beta}
\end{equation}
where $\zeta_c$ denotes the critical value for the collapse into a PBH to take place and plays a fundamental r\^ole in this discussion. It is usually taken to be close to unity, see e.g. \cite{Ballesteros:2017fsr,Motohashi:2017kbs,Germani:2017bcs}.\footnote{We note that some authors \cite{Garcia-Bellido:2017mdw,Ezquiaga:2017fvi} take it to be of the order $10^{-1}$ or $10^{-2}$. Given the exponential dependence of $\beta$ on $\zeta_c$ this significantly decreases the level of tuning required of the inflationary potential in models where PBHs are created within single field inflation.} For such a Gaussian distribution $\sigma_{\scriptscriptstyle {\rm M}}^2\sim \langle\zeta \zeta\rangle$ which on CMB scales is $\mathcal{O}(10^{-9})$. As we will show below, $\sigma_{\scriptscriptstyle {\rm M}} \ll \zeta_c$ and so we can approximate (\ref{eq:beta}) as:
\begin{equation}
\beta_{\rm f}(M)\sim \frac{\sigma_{\scriptscriptstyle {\rm M}}}{\sqrt{2 \pi}\,\zeta_c}\,e^{-\frac{\zeta_c^2}{2 \sigma_{\scriptscriptstyle {\rm M}}^2}}\,.
\label{beta}
\end{equation}
If PBHs are to be a significant fraction of dark matter, the fluctuations that give rise to them must not be too rare, meaning that $\sigma_{\scriptscriptstyle {\rm M}}$ cannot be arbitrarily smaller than $\zeta_c$. This implies that on smaller distance scales the scalar power spectrum must be orders of magnitude larger than on CMB scales. Let us quantify this statement and discuss how it may be achieved in single field models of inflation.
The mass of a PBH forming when a large density perturbation re-enters the horizon is assumed to be proportional to the horizon mass:
\begin{equation}
M= \gamma \,\frac{4\pi}{3} \frac{\rho_{\rm tot}}{H^3}\Big|_{\rm f}=4\pi\gamma\,\frac{M_p^2}{H_{\rm f}}\ ,
\label{eq:Mpbh}
\end{equation}
where $\gamma$ is a correction factor which depends on the details of the gravitational collapse and $H_{\rm f}$ denotes the Hubble parameter at the moment the perturbation re-enters the horizon. Noting that PBHs behave as matter, the fraction of the total energy density in PBHs at formation time (\ref{betaf}) can be related to the present PBH energy density as:
\begin{equation}
\beta_{\rm f}(M) = \left(\frac{H_0}{H_{\rm f}}\right)^2 \frac{\Omega_{\scriptscriptstyle \rm DM}}{a_{\rm f}^3}\,f_{\scriptscriptstyle \rm PBH}(M)\,,
\label{betaredshift}
\end{equation}
where $a_{\rm f}$ denotes the scale factor at PBH formation time, $H_0$ is the Hubble scale today, $\Omega_{\scriptscriptstyle \rm DM} =0.26$ is the present fraction of the total energy density in dark matter and $f_{\scriptscriptstyle \rm PBH} (M)$ is the fraction of the total dark matter energy density in PBHs with mass $M$ today. PBHs in the low mass region, which can be interesting dark matter candidates, get formed before matter-radiation equality in an epoch of radiation dominance. Hence the Hubble scale at PBH formation redshifts as:
\begin{equation}
H_{\rm f}^2 = \Omega_{\rm r}\,\frac{H_0^2}{a_{\rm f}^4}\left(\frac{g_{*{\rm f}}}{g_{*0}}\right)^{1/3}\,,
\label{Hf}
\end{equation}
where $\Omega_{\rm r}=8\times 10^{-5}$ is the present fraction of the total energy density in radiation, while $g_{*0}$ and $g_{*{\rm f}}$ are respectively the number of relativistic degrees of freedom today and at PBH formation time. Combining (\ref{eq:Mpbh}) with (\ref{Hf}), (\ref{betaredshift}) can be rewritten in terms of present day observables and in units of the solar mass $M_\odot$ as \cite{Carr:2009jm, Inomata:2017okj}:
\begin{equation}
\beta_{\rm f}(M) \simeq \frac{4}{\sqrt{\gamma}}\times 10^{-9} \left(\frac{g_{*{\rm f}}}{g_{*0}}\right)^{1/4} \sqrt{\frac{M}{M_\odot}}\,f_{\scriptscriptstyle \rm PBH} (M)\,.
\label{betanew}
\end{equation}
We can now get an estimate of the level of enhancement of the power spectrum required to have PBHs which constitute a significant fraction of dark matter. Setting $\gamma=1$ to be conservative and assuming that only SM degrees of freedom are present so that $g_{*0}=3.36$ and $g_* = 106.75$, if PBHs with mass $M$ constitute all of dark matter, i.e. $f_{\scriptscriptstyle \rm PBH}(M)=1$ , (\ref{betanew}) reduces to:
\begin{equation}
\beta_{\rm f}(M) \simeq 10^{-8} \sqrt{\frac{M}{M_\odot}}\,.
\label{betanew2}
\end{equation}
If we now focus on a mass distribution sharply peaked at $M=10^{-15} M_\odot$, we find $\beta_{\rm f}(M)\simeq 3\times 10^{-16}$. Comparing (\ref{betanew2}) with (\ref{beta}) for $\zeta_c=1$, we finally obtain $\sigma_{\scriptscriptstyle {\rm M}}=0.12$. This implies that the scalar power spectrum must be enhanced to $\mathcal{O}(10^{-2})$, a value 7 orders of magnitude larger than its value on CMB scales.\footnote{Had we assumed $\zeta_c=0.1$, we would have found $\sigma=0.012$, in agreement with the estimates of \cite{Garcia-Bellido:2017mdw,Ezquiaga:2017fvi}. This corresponds to an enhancement of the power spectrum by 5 orders of magnitude between PBH and CMB scales and requires less tuning of the inflationary potential.} This large enhancement can in principle be achieved within single field inflationary models by inducing an extremely flat and sufficiently long region in the scalar potential. Therefore the problem of PBHs in single field inflation is one of having a sufficiently rich structure in the scalar potential and the freedom to tune a flat plateau into the later part of inflation.
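As a rough numerical cross-check of this estimate (a sketch, not a computation taken from the paper), one can invert the Gaussian tail for $\sigma_{\scriptscriptstyle {\rm M}}$, here using the exact complementary error function rather than the approximation (\ref{beta}); the values $\gamma=1$, $\zeta_c=1$ and a monochromatic population at $M=10^{-15}M_\odot$ making up all of dark matter follow the text:
\begin{verbatim}
# Sketch: invert the Gaussian-tail abundance for sigma_M
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfc

gstar_f, gstar_0, zeta_c = 106.75, 3.36, 1.0
M_over_Msun = 1.0e-15

# Eq. (betanew) with f_PBH = 1 and gamma = 1
beta_target = 4.0e-9 * (gstar_f / gstar_0)**0.25 * np.sqrt(M_over_Msun)

def beta_exact(sigma):
    # exact Gaussian tail of Eq. (eq:beta)
    return 0.5 * erfc(zeta_c / (np.sqrt(2.0) * sigma))

sigma = brentq(lambda s: beta_exact(s) - beta_target, 1e-3, 1.0)
print(f"beta_f ~ {beta_target:.2e}, sigma_M ~ {sigma:.3f}, P_zeta ~ {sigma**2:.1e}")
\end{verbatim}
The output reproduces $\beta_{\rm f}\simeq 3\times 10^{-16}$ and $\sigma_{\scriptscriptstyle {\rm M}}\simeq 0.12$, i.e. a power spectrum of order $10^{-2}$ at PBH scales.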
Let us finally make two important observations:
\begin{itemize}
\item In the estimate above of the enhancement of the power spectrum, we considered PBHs with a given mass $M$. However, more generically, the PBH mass function is broadly peaked, and so the fraction of the total dark matter density in PBHs looks like \cite{Carr:2017jsz, Sasaki:2018dmp}:
\begin{equation}
f_{\scriptscriptstyle \rm PBH} = \int d f_{\scriptscriptstyle \rm PBH}(M) = \int \frac{d f_{\scriptscriptstyle \rm PBH}(M)}{d\ln M} \,d\ln M\,,
\end{equation}
where $d f_{\scriptscriptstyle \rm PBH}(M)$ is the fraction of dark matter in PBHs with mass in the logarithmic interval $[\ln M, \ln M + d\ln M]$, and the integration domain is bounded below by Hawking evaporation of very light PBHs and above by the mass corresponding to PBHs which re-enter the horizon after matter-radiation equality, see e.g. \cite{Motohashi:2017kbs}.
\item Assuming that the Hubble scale during inflation $H_{\rm inf}$ is approximately constant, (\ref{eq:Mpbh}) and (\ref{Hf}) can be used to write the number of efoldings between CMB and PBH horizon exit as \cite{Motohashi:2017kbs}:
\bea
\Delta N_{\scriptscriptstyle \rm CMB}^{\scriptscriptstyle \rm PBH}&=& \ln \left(\frac{a_{\scriptscriptstyle \rm PBH} H_{\rm inf}}{a_{\scriptscriptstyle \rm CMB} H_{\rm inf}}\right) = \ln \left(\frac{a_{\rm f} H_{\rm f}}{0.05\,{\rm Mpc}^{-1}}\right) \nonumber \\
&=& 18.4-\frac{1}{12}\ln\left(\frac{g_*}{g_{*0}}\right)+\frac12\ln\gamma-\frac12\ln \left(\frac{M}{M_\odot}\right)\,.
\eea
Setting again $\gamma=1$, $g_{*0}=3.36$ and $g_* = 106.75$ as in the SM case, the formation of PBHs with masses in the $[10^{-16},10^{-14}]\, M_\odot$ range implies that PBH scales leave the horizon approximately $34.2$ to $36.5$ efoldings after the CMB scales (a short numerical check of this formula is sketched right after this list).
\end{itemize}
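A minimal numerical check of the last formula (a sketch, not code from the paper), for $\gamma=1$ and SM degrees of freedom:
\begin{verbatim}
# Sketch: e-folding gap between CMB and PBH horizon exit for a few PBH masses
import numpy as np

gstar, gstar_0, gamma = 106.75, 3.36, 1.0
for M_over_Msun in (1e-16, 1e-15, 1e-14):
    dN = (18.4 - np.log(gstar / gstar_0) / 12.0
          + 0.5 * np.log(gamma) - 0.5 * np.log(M_over_Msun))
    print(f"M = {M_over_Msun:.0e} M_sun  ->  Delta N ~ {dN:.1f}")
\end{verbatim}
which returns $\Delta N_{\scriptscriptstyle \rm CMB}^{\scriptscriptstyle \rm PBH}\simeq 36.5$, $35.4$ and $34.2$ respectively.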
\section{PBHs from Fibre inflation}
\label{PBHFibre}
In order to produce a significant fraction of PBHs from inflationary density perturbations, we shall use the rich structure of the fibre inflation potential (\ref{eq:Vinf}) to induce a near inflection point close to the minimum as depicted in Fig. \ref{fig:V}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.6\textwidth]{V}
\caption{Scalar potential for the parameter set $\mathcal{P}_2$ of Tab. \ref{tab:pars}.}
\label{fig:V}
\end{center}
\end{figure}
Based on the scaling of each term in eq. \eqref{eq:Vinf} with the fibre modulus $\tau_{\scriptscriptstyle {\rm K3}}$ one can see that the second and third terms dominate at small field values and induce a minimum for the modulus around:
\begin{equation}
\langle\tau_{\scriptscriptstyle {\rm K3}}\rangle\sim\frac{C_{\scriptscriptstyle \rm W} B_{\scriptscriptstyle \rm W}^2}{(\sqrt{C_{\scriptscriptstyle \rm W}}-\sqrt{A_{\scriptscriptstyle \rm W}})^2}\ .
\end{equation}
The fourth term, being proportional to $\tau_{\scriptscriptstyle {\rm K3}}$, dominates $V$ at large field values, while the fifth term has a maximum at $\tau_{\scriptscriptstyle {\rm K3}} = 2^{2/3}\left({{\cal{V}}}/R_{\scriptscriptstyle \rm W}\right)^{2/3}$ and scales as $-\tau_{\scriptscriptstyle {\rm K3}}$ at small and as $-\tau_{\scriptscriptstyle {\rm K3}}^{-1/2}$ at large field values respectively. It is this last term that will be instrumental in generating the enhancement in the scalar power spectrum that will ultimately lead to the formation of primordial black holes in this setup. This can be achieved for certain values of $G_{\scriptscriptstyle \rm W}$ and $R_{\scriptscriptstyle \rm W}$ such that the potential has a very flat region close to the post-inflationary minimum as illustrated in Fig. \ref{fig:V}.
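To make this structure concrete, the following minimal numerical sketch (not the authors' code) evaluates the $\tau_{\scriptscriptstyle {\rm K3}}$-dependent part of Eq. \eqref{eq:Vinf} for parameter set $\mathcal{P}_2$ of Tab. \ref{tab:pars} and locates its minimum; here $W_0=1$ only fixes the overall scale, $D_{\scriptscriptstyle \rm W}=0$ as in the examples of Tab. \ref{tab:pars}, and the uplift $C_{\scriptscriptstyle {\rm up}}$ is fixed a posteriori by requiring an approximately vanishing vacuum energy, which is an assumption of this sketch:
\begin{verbatim}
# Sketch: locate the minimum of the simplified potential (eq:Vinf) for set P2
import numpy as np
from scipy.optimize import minimize_scalar

CW, AW, BW = 0.04, 0.02, 1.0          # parameter set P2 of Tab. 1
Vol        = 1000.0                   # <V>
GW, RW     = 3.080548e-5 * Vol, 7.071067e-4 * Vol  # table lists G_W/<V>, R_W/<V>
W0, DW     = 1.0, 0.0                 # W0 = overall scale (assumption), D_W = 0

def V_no_uplift(tau):
    winding = (tau / Vol) * (DW - GW / (1.0 + RW * tau**1.5 / Vol))
    return (W0**2 / Vol**3) * (-CW / np.sqrt(tau)
                               + AW / (np.sqrt(tau) - BW) + winding)

res = minimize_scalar(V_no_uplift, bounds=(2.0, 100.0), method="bounded")
tau_min = res.x
Cup = -V_no_uplift(tau_min) * Vol**(10.0 / 3.0) / W0**2  # uplift to ~zero vacuum energy
print(f"tau_min ~ {tau_min:.1f} (Tab. 1 quotes 14.30), C_up ~ {Cup:.2e}")
\end{verbatim}
The minimum indeed comes out close to the quoted value $\langle\tau_{\scriptscriptstyle {\rm K3}}\rangle\simeq 14.3$, while the shape of the full uplifted potential, including the plateau and the near inflection point, is the one shown in Fig. \ref{fig:V}.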
Since in slow-roll $P_k\propto {H^2}/{\epsilon_V}$, an enhancement of the scalar power spectrum is in principle possible in the limit $\epsilon_V\equiv \frac{V_\phi^2}{2 V^2}\rightarrow 0$. Actually the situation is a little more involved since in the plateau the dynamics of the Universe deviates significantly from slow-roll, a fact that has been pointed out in \cite{Germani:2017bcs} (see also \cite{Motohashi:2017kbs}), and that calls for a more careful analysis of the observational signatures of such models, see e.g. \cite{Ballesteros:2017fsr}. Observables must therefore be computed from solutions to the Mukhanov-Sasaki equation for the rescaled curvature perturbations:
\begin{equation}
u_k''(\eta)+\left(k^2-z''/z\right)u_k(\eta)=0\ ,
\label{eq:MS}
\end{equation}
where $\eta$ denotes conformal time and $z\equiv \sqrt{2 \epsilon}\, a$, from which we find that the effective mass of the curvature perturbations is:
\begin{equation}
\frac{z''}{z}=(a H)^2\left[2-\epsilon+\frac32 \eta -\frac12 \epsilon \eta+\frac14 \eta^2+\frac12 \eta \kappa\right],
\label{eq:m}
\end{equation}
where:
\begin{equation}
\epsilon=-\frac{\dot{H}}{H^2} \qquad,\qquad \eta=\frac{\dot{\epsilon}}{\epsilon H} \qquad,\qquad \kappa=\frac{\dot{\eta}}{\eta H}\ ,
\end{equation}
are the Hubble slow-roll parameters.
One assumes that deep inside the horizon, the perturbations behave as if in flat space, which fixes the initial conditions to be of the Bunch-Davies type \cite{Bunch:1978yq}:
\begin{equation}
\lim_{k\eta \rightarrow -\infty} u_k(\eta)=\frac{e^{-i k \eta}}{\sqrt{2 k}}\ .
\end{equation}
This determines the solution to be given by a Hankel function of the first kind:
\begin{equation}
u_k(\eta)= \frac{\sqrt{-\pi \eta}}{2}H_{\nu}^{(1)}(-k \eta)\ ,
\end{equation}
with index $\nu$ determined from eq. \eqref{eq:m} once a given background is chosen.
For comparison with observations one is interested in the dimensionless power spectrum, defined as:
\begin{equation}
P_k=\frac{k^3}{2 \pi^2 }\left |\frac{u_k}{z}\right|^2\ ,
\label{eq:Pk}
\end{equation}
which in the superhorizon limit $k \eta \rightarrow 0$, using $H_{\nu}^{(1)}(x)\simeq -\frac{i}{\pi}\,\Gamma(\nu)\left(\frac{2}{x}\right)^{\nu}$ for $x\to 0$ and $-\eta\simeq 1/(aH)$, can be written as:
\begin{equation}
P_k=\frac{H^2}{8 \pi^2 \epsilon}\frac{2^{2\nu-1}|\Gamma(\nu)|^2}{\pi}\left(\frac{k}{a H}\right)^{3-2\nu}\ .
\label{eq:Pk2}
\end{equation}
On CMB scales this is constrained to be $P_k\big |_{\scriptscriptstyle \rm CMB}= 2\times10^{-9}$ and, as shown in Sec. \ref{sec:PBH}, it must be significantly enhanced on smaller scales if PBHs are to be a significant fraction of all dark matter.\\
Up to this point the discussion of the behaviour of the perturbations assumed nothing about the type of background in which they evolve. In order to produce a significant abundance of PBHs from an inflection point in single field inflation, we will see that the universe has to evolve from a slow-roll inflation phase into a transient constant-roll background, where the scalar field acceleration plays an important role. These backgrounds are characterised by the parameter $\alpha$ defined as \cite{Martin:2012pe, Motohashi:2014ppa}:
\begin{equation}
\ddot{\phi} \equiv-(3+\alpha)H \dot{\phi} \ .
\end{equation}
Solutions with $\alpha=0$ are called ultra slow-roll \cite{Kinney:2005vj,Tsamis:2003px,Namjoo:2012aa}, whereas vanilla slow-roll inflation corresponds to $\alpha=-3$. The transient constant-roll period arises due to the presence of an extremely flat region in the potential that causes the scalar field to brake upon reaching it, leading to a non-negligible acceleration in the Klein-Gordon equation and consequently a departure from the slow-roll background. This behaviour is illustrated in Fig. \ref{fig:srpars}, where we plot the evolution of the slow-roll parameters in the potential of Fig. \ref{fig:V}, corresponding to the parameter set $\mathcal{P}_2$ of Tab. \ref{tab:pars}. It is evident that the system undergoes a transition from slow-roll ($N_e>19$) to constant-roll ($15<N_e<19$) and finally to a large-$\eta$ slow-roll phase ($N_e<15$).
\begin{figure}[h]
\begin{center}\centering
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{epsilonEta}
\end{minipage}
\hspace{0.05cm}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=1\textwidth]{epsilonEtaZoom}
\end{minipage}
\hspace{0.05cm}
\centering
\caption{Slow-roll parameters as functions of the number of efoldings $N$ from the end of inflation for parameter set $\mathcal{P}_2$. It is clear that the background evolves from slow-roll ($N_e>19$) to constant-roll ($15<N_e<19$) and back to slow-roll again ($N_e<15$). Dashed lines represent $10$, $6$ and $1$. }
\label{fig:srpars}
\end{center}
\end{figure}
In slow-roll $\epsilon, \eta, \kappa \ll1$, and consequently the effective mass takes the form $z''/z\approx \frac{1}{\eta^2}\left( 2+3\epsilon+\tfrac32\eta \right)$ (here the $\eta$ in the prefactor denotes conformal time, while the one inside the brackets is the Hubble slow-roll parameter), or equivalently $\nu=3/2+\epsilon +\eta/2$. One can then see that the curvature perturbations $\zeta=u/z$ remain constant on super-horizon scales and the two point function can therefore be evaluated at horizon crossing, yielding the familiar slow-roll result:
\begin{equation}
P_k=\frac{H^2}{8 \pi^2 \epsilon}\Big |_{k=a H}\ .
\label{eq:Pk_SR}
\end{equation}
Eq. \eqref{eq:Pk2} also captures the momentum dependence of the two point function, which can be written in terms of the spectral index $n_s$ and its running, given by:
\begin{equation}
n_s\equiv \frac{d \ln P_k}{d \ln k}=1-2\epsilon-\eta\,,
\end{equation}
and:
\begin{equation}
\frac{d n_s}{ d\ln k}= -2 \epsilon \eta - \eta \kappa\ .
\end{equation}
Both these quantities are subject to tight observational constraints \cite{Ade:2015lrj}. For this work we take:
\begin{equation}
n_s= 0.9650\pm 0.0050\qquad\text{and}\qquad \frac{d n_s}{ d\ln k}=-0.009 \pm 0.008
\end{equation}
at $68\%$ CL and at a scale $k_*=0.05\ \text{Mpc}^{-1}$.
In the transient constant-roll regime one has $\eta\approx-2(3+\alpha-\epsilon)$ which implies $\epsilon\propto a^{-2(3+\alpha)}$. In the cases we consider $\alpha\in[0,1]$. In such a background the super-horizon behaviour of the power spectrum is determined by:
\begin{equation}
P_k\propto H^ {|2\alpha+3|-1} a^ {3+2\alpha+|3+2\alpha|} \ .
\end{equation}
Note that since $\epsilon$ is small and decreasing rapidly with the expansion (for $\alpha>-3$), one can take $H$ to be constant. We therefore see that for $-3\le\alpha< -3/2$ the curvature perturbations are frozen beyond the horizon (this includes the previously discussed case of slow-roll inflation, $\alpha=-3$), whereas for $\alpha> -3/2$, $P_k\propto a^ {2(3+2\alpha)}$, signaling the presence of a growing solution to the MS equation, and the breakdown of the approximation of Eq. \eqref{eq:Pk_SR}. In order to determine the two-point function in such backgrounds one must therefore solve the MS equation and evaluate $P_k$ at the end of inflation.
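To make the procedure explicit, the following is a minimal sketch in Python (not the code used for the figures of this paper) of how one can integrate the perturbations through a slow-roll/constant-roll transition and read off $P_k$ at the end of the run. Instead of the actual fibre inflation background, it uses a toy background with constant $H$ and a prescribed piecewise $\epsilon(N)$ obeying $\epsilon\propto a^{-2(3+\alpha)}$ during the constant-roll phase; all numerical values ($H_0$, $\epsilon_0$, $\alpha$, the onset and duration of the phase, and the two reference modes) are placeholders chosen only for illustration. The equation solved is the exact equation for $\zeta=u/z$ written in terms of the number of efoldings, $\zeta''+(3-\epsilon+\eta)\zeta'+(k/aH)^2\zeta=0$ (equivalent to Eq. \eqref{eq:MS}), with Bunch-Davies initial data imposed deep inside the horizon:
\begin{verbatim}
# Sketch (toy background, placeholder values): integrate the curvature
# perturbation through a slow-roll -> constant-roll transition.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 1.0e-5                # Hubble scale (placeholder, Planck units)
eps0 = 1.0e-3              # slow-roll epsilon before the transition (placeholder)
alpha = 0.3                # constant-roll parameter, alpha in [0,1]
N_cr, dN_cr = 10.0, 4.0    # onset and duration of the constant-roll phase

def eps(N):
    # epsilon constant in slow roll, then ~ a^{-2(3+alpha)} during constant roll
    if N < N_cr:
        return eps0
    return eps0 * np.exp(-2.0 * (3.0 + alpha) * (min(N, N_cr + dN_cr) - N_cr))

def eta(N):
    # eta = d ln(eps)/dN for this piecewise toy background
    return -2.0 * (3.0 + alpha) if N_cr <= N < N_cr + dN_cr else 0.0

def a(N):
    return np.exp(N)

def rhs(N, y, k):
    # zeta'' + (3 - eps + eta) zeta' + (k/aH)^2 zeta = 0 (real and imaginary parts)
    zr, zi, dzr, dzi = y
    fric = 3.0 - eps(N) + eta(N)
    ksq = (k / (a(N) * H0))**2
    return [dzr, dzi, -fric*dzr - ksq*zr, -fric*dzi - ksq*zi]

def Pk(k, N_end=25.0, sub=100.0):
    # Bunch-Davies data imposed deep inside the horizon, where k = sub * aH
    N_i = np.log(k / (sub * H0))
    z_i = a(N_i) * np.sqrt(2.0 * eps(N_i))
    zeta_i = 1.0 / (np.sqrt(2.0 * k) * z_i)
    dzeta_i = (-1j * k / (a(N_i) * H0) - (1.0 + 0.5 * eta(N_i))) * zeta_i
    y0 = [zeta_i.real, zeta_i.imag, dzeta_i.real, dzeta_i.imag]
    sol = solve_ivp(rhs, (N_i, N_end), y0, args=(k,), rtol=1e-8, atol=1e-12)
    zr, zi = sol.y[0, -1], sol.y[1, -1]
    return k**3 / (2.0 * np.pi**2) * (zr**2 + zi**2)

for label, k in [("early-crossing", H0*np.exp(1.0)),
                 ("constant-roll", H0*np.exp(12.0))]:
    print(f"{label} mode: P_k at the end of the run = {Pk(k):.3e}")
\end{verbatim}
In this toy setup the mode crossing the horizon well before the constant-roll phase stays frozen at its slow-roll value, while the mode crossing during that phase is enhanced by several orders of magnitude, in qualitative agreement with the behaviour shown in Figs. \ref{fig:Pk} and \ref{fig:Pk_SR_v_USR}.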
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\textwidth]{Pk}
\caption{Power spectrum Eq. \eqref{eq:Pk} for the potential of Fig. \ref{fig:V} with parameter set $\mathcal{P}_2$. The dashed line represents the slow-roll estimate of Eq. \eqref{eq:Pk_SR} while the continuous line is obtained from the solutions to the MS equation. The circle corresponds to CMB scales if the peak is to be associated with PBHs of mass $M=10^{-14} M_\odot$.}
\label{fig:Pk}
\end{center}
\end{figure}
In Fig. \ref{fig:Pk} we plot the power spectrum for scalar perturbations for the potential of Fig. \ref{fig:V} (parameter set $\mathcal{P}_2$) calculated from the solutions of Eq. \eqref{eq:MS} (continuous line) and the slow-roll estimate of Eq. \eqref{eq:Pk_SR} (dashed line). As expected the slow-roll approximation breaks down for modes that cross the horizon close to the onset of the constant-roll phase. Crucially for the production of PBHs, the slow-roll result underestimates the power spectrum by several orders of magnitude in this range of momentum modes. This is to be expected given the existence of a growing mode solution in constant-roll backgrounds with $\alpha\in[0,1]$. In Fig. \ref{fig:Pk_SR_v_USR} we plot the evolution of the power spectrum for modes leaving the horizon $53$ and $22$ efoldings before the end of inflation, corresponding to CMB and PBH scales respectively. While both scales are affected by the growing mode during the constant roll phase, their superhorizon growth is determined by $\frac{k}{aH}$ at the onset of the constant roll period. This quantity is minute for modes on CMB scales but not for those in the PBH region. As a result the CMB mode essentially follows the slow-roll estimate of Eq. \eqref{eq:Pk_SR} after crossing the horizon, while on small scales we see that there is a large amplification of $P_k$ leading to a breakdown of the slow-roll approximation. Finally, let us note that the $\mathcal{O}(1)$ deviation from the slow-roll estimate for modes on the smallest scales ($N_e\lesssim15$) can be attributed to the fact that in the final phase of expansion before the end of inflation, $\eta=\mathcal{O}(1)$.
\begin{figure}[h]
\begin{center}\centering
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{P_SR}
\end{minipage}
\hspace{0.05cm}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=0.92\textwidth]{P_USR}
\end{minipage}
\hspace{0.05cm}
\centering
\caption{Evolution of the curvature perturbations on different scales for example $\mathcal{P}_2$. The left panel shows a mode corresponding to CMB scales, which leaves the horizon $53$ efoldings before the end of inflation and keeps a constant value thereafter. The right panel shows a mode corresponding to PBH scales, which leaves the horizon $22$ efoldings before the end of inflation and undergoes super-horizon growth during the constant-roll phase. In both plots the dashed line corresponds to the slow-roll estimate of eq. \eqref{eq:Pk_SR}.}
\label{fig:Pk_SR_v_USR}
\end{center}
\end{figure}
In Tables \ref{tab:pars} and \ref{tab:obs} we present three numerical examples corresponding to cases where all DM is composed of $10^{-14} M_\odot$ PBHs, assuming $\zeta_c=1$. We stress that the choices of the microphysical parameters are in line with expectations (including the small values of $G_W$ and $R_W$) and that the desire to have PBH DM does not constrain the compactification volume, which varies by several orders of magnitude between the three examples. All examples lead to a spectral index that is $2$ to $3\sigma$ redder than the current best fit, while giving rise to a running of the spectral index and a tensor-to-scalar ratio that are in line with current bounds.
\begin{table}[htp]
\begin{center}
\begin{tabular}{c||c|c|c|c|c||c|c}
&$C_W$&$A_W$&$B_W$&$G_W/\langle\mathcal{V} \rangle$&$R_W/\langle\mathcal{V} \rangle$&$\langle \tau_{K3}\rangle $&$\langle \mathcal{V}\rangle $\\
\hline
$\mathcal{P}_1$& $1/10$ & $2/100$ & $1$ & $1.303386\times10^{-3}$ & $6.58724\times10^{-3}$& $3.89$&$107.3$\\
$\mathcal{P}_2$& $4/100$& $2/100$& $1$& $3.080548\times 10^{-5}$ &$7.071067 \times 10^{-4}$ &$14.30 $&$1000$ \\
$\mathcal{P}_3$& $1.978/100$&$1.65/100$&$1.01$ &$9.257715\times 10^{-8}$ &$1.414\times10^{-5}$ &$168.03$ &$5\times10^4$ \\
\end{tabular}
\end{center}
\caption{Examples of parameters leading to the production of PBHs with a mass peaked at $10^{-14} M_\odot$, together with geometrical compactification data. All examples exhibit $D_W=0$. Note that the small values of $G_W$ and $R_W$ are in line with their microscopic origin as explained in Sec. \ref{FIReview}.}
\label{tab:pars}
\end{table}
\begin{table}[htp]
\begin{center}
\begin{tabular}{c||c|c|c||c|c}
&$n_s$ & $r$ & $\frac{d n_s}{d \log k}$ & $\Delta N_{CMB}^{PBH}$ & $P_k|_{\rm peak}$ \\
\hline
$\mathcal{P}_1$&0.9457&$0.015$ &$-0.0017$&$34.5$&$0.01365$ \\
$\mathcal{P}_2$&0.9437&$0.015$ &$-0.0017$&$34.5$&$0.03998$ \\
$\mathcal{P}_3$&0.9457&$0.015$ &$-0.0019$&$34.5$&$0.013341$ \\
\end{tabular}
\end{center}
\caption{Inflationary observables on CMB and PBH scales for the examples of Tab. \ref{tab:pars}.}
\label{tab:obs}
\end{table}
\section{Conclusions}
\label{Concl}
In this paper we have presented the first explicit example of a string inflationary model which is consistent with cosmological observations at CMB scales and, at the same time, can generate PBHs at small distance scales via an efficient enhancement of the power spectrum due to a period of ultra slow-roll. Our model leads to PBHs in the low-mass region where they constitute a significant fraction of the total dark matter abundance.
Three interesting features of fibre inflation models relevant for PBH formation are the following: ($i$) the coefficients of the different contributions to the inflationary potential depend on microscopic parameters like background fluxes and Calabi-Yau intersection numbers which take different values in the string landscape, and so give a very large tuning freedom that can be used to generate a near inflection point; ($ii$) the potential enjoys an approximate Abelian rescaling symmetry inherited from the underlying extended no-scale structure which suppresses quantum corrections to the inflationary dynamics; ($iii$) the contribution to the inflationary potential responsible for the emergence of a near inflection point at large momentum scales has been derived in global embeddings of fibre inflation models in explicit Calabi-Yau compactifications with chiral brane setup compatible with moduli stabilisation.
Moreover our model is characterised by a trans-Planckian field range during inflation, and so it predicts a large tensor-to-scalar ratio of order $r\sim 0.01$ which might be detected by the next generation of cosmological measurements. Similarly to previous works on PBH formation in single-field inflation \cite{Ezquiaga:2017fvi, Ballesteros:2017fsr}, the scalar spectral index turns out to be a bit too red since it is more than $3\sigma$ away from the Planck reference value. This tension might be resolved by the inclusion of non-zero neutrino masses which might make our result compatible with CMB data within just $2\sigma$ \cite{Gerbino:2016sgw}. The tension in the values of $n_s$ can also decrease in compactifications where the approach to the inflationary plateau occurs faster than the $1/\sqrt{\tau_{K3}}$ considered here, a possibility in potentials of the form of Eq. \eqref{PotFI}. Another interesting cosmological observable in our model is the running of the spectral index which turns out to be sizable.
In this paper we investigated the possibility to generate PBHs from string inflation by taking the most conservative point of view since we focused on models which are effectively single-field and, above all, we considered PBH formation with horizon re-entry in a radiation dominated era. However, two generic features of string compactifications are the presence of several scalar fields which might play an important r\^ole during inflation \cite{Dimopoulos:2005ac, Easther:2005zr, Bond:2006nc, Berglund:2009uf, BlancoPillado:2009nw, Burgess:2010bz, Cicoli:2012cy, Cicoli:2014sva}, and light moduli with only gravitational couplings to ordinary matter which are long-lived and tend to give rise to early periods of matter domination \cite{Acharya:2008bk, Kane:2011ih, Cicoli:2012aq, Higaki:2012ar, Allahverdi:2013noa, Dutta:2014tya, Cicoli:2016olq, Allahverdi:2016yws}. Hence in the future it would be very interesting to study the impact on PBH formation in string models of additional light fields, like for example the axionic partner of the inflaton of fibre inflation models.
Finally, we mention that PBHs can be generated with the required efficiency only if $\delta\rho \sim 0.1\, \rho$ at small distance scales. It would therefore be important to perform a more careful analysis of stochastic effects since the perturbative expansion might not be fully justified \cite{Pattison:2017mbe}. We also stress that non-gaussianities in large momentum fluctuations might significantly alter the PBH production mechanism and, in turn, their present abundance \cite{Franciolini:2018vbk}. We leave a deeper study of both stochastic and non-gaussianity effects for future investigation.
\section*{Acknowledgements}
We are thankful to R. Allahverdi, B. Dutta and C.~Germani for helpful discussions. VAD thanks Matteo Capozi for useful programming related discussions and the Max-Planck-Institut f\"{u}r Physik for hospitality and support in the final stages of this work.
\bibliographystyle{JHEP}
\section{Introduction}
\subsection{Stochastic nonlinear Schr\"{o}dinger equations}
In this paper, we study the following Cauchy problem
associated to a stochastic nonlinear Schr\"{o}dinger equation (SNLS) of the form:
\begin{equation}
\label{SNLS}
\begin{cases}
i\partial_t u - \Delta u \pm |u|^{2k}u = F(u,\phi\xi)\\
u|_{t=0}=u_0 \in H^s(\mathbb{T}^d)
\end{cases}
\ (t,x)\in (0,\infty)\times\mathbb{T}^d,
\end{equation}
where $k,d\geq 1$ are integers,
$\mathbb{T}^d~:=~ \mathbb{R}^d/\mathbb{Z}^d$,
and
$u:[0,\infty)\times \mathbb{T}^d\to\mathbb{C}$ is the unknown stochastic process.
The term $F(u,\phi\xi)$ is a stochastic forcing and in this paper
we treat the following cases: the \emph{additive noise}, i.e.
\begin{equation}
\label{add-noise}
F(u,\phi\xi) = \phi\xi
\end{equation}
and the \emph{(linear) multiplicative noise}, i.e.
\begin{equation}
\label{mult-noise}
F(u,\phi\xi) = u\cdot \phi\xi ,
\end{equation}
where the right-hand side of \eqref{mult-noise} is understood as an It\^o product\footnote{
The multiplicative noise given by the Stratonovich product $u \circ \phi\xi$ with real-valued $\xi$
is relevant in physical applications, as it conserves the mass of $u$
(i.e. $t\mapsto \|u(t)\|^2_{L_x^2(\mathbb{T}^d)}$ is constant) almost surely.
Our analysis can handle either the It\^{o} or the Stratonovich product,
and we choose to work with the former for the sake of simpler exposition.
}.
Here, $\xi$ is a space-time white noise, i.e. a Gaussian stochastic process with correlation function
$\mathbb{E}[\xi(t,x)\xi(s,y)] = \delta(t-s) \delta(x-y)$, where $\delta$ denotes the Dirac delta function.
We recall that the white noise is very rough: the spatial regularity of $\xi$ is less than $-\frac{d}{2}$.
Since the linear Schr\"{o}dinger equation does not provide any smoothing properties,
we consider instead a spatially smoothed out version $\phi\xi$,
where $\phi$ is a linear operator from $L^2(\mathbb{T}^d)$ into $H^s(\mathbb{T}^d)$,
on which we make certain assumptions, depending on
whether we are working with \eqref{add-noise} or \eqref{mult-noise}.
Our main goal in this paper is to prove local well-posedness of SNLS
with either additive or multiplicative noise in the Sobolev space $H^s(\mathbb{T}^d)$,
for any subcritical non-negative regularity $s$ (see below for the meaning of ``subcritical'').
In this work,
solutions to \eqref{SNLS}
are understood as solutions to the
\emph{mild formulation}
\begin{equation}
\label{SNLS-mild}
u(t) = S(t) u_0 \pm i\int_0^t S(t-t') (|u|^{2k}u)(t')\,dt' - i \Psi(t)\ ,\ t\geq0\,,
\end{equation}
where $S(t) :=e^{-it\Delta}$ is the linear Schr\"{o}dinger propagator. The term $\Psi(t)$ is a \emph{stochastic convolution}
corresponding to the stochastic forcing $F(u,\phi\xi)$,
see \eqref{sc:Psia} and \eqref{sc:Psim} below.
Our local-in-time argument uses the Fourier restriction norm method introduced by Bourgain \cite{bourgain-93-1}
and the periodic Strichartz estimates proved by Bourgain and Demeter~\cite{bourgain-demeter}.
In establishing local well-posedness for the multiplicative SNLS,
we also have to combine these tools with the truncation method
used by de Bouard and Debussche
\cite{debouard-debussche-nls, debouard-debussche-nls-mult, debouard-debussche-kdv}.
Moreover,
by proving probabilistic a priori bounds on the mass and energy of solutions,
we establish global well-posedness in
(i) $L^2(\mathbb{T})$ for cubic nonlinearities (i.e. $k=1$) when $d=1$, and
(ii) $H^1(\mathbb{T}^d)$ for all defocusing
energy-subcritical nonlinearities -- see Theorem~\ref{gwp:add} and the preceding discussion for more details.
Previously, de Bouard and Debussche \cite{ debouard-debussche-nls-mult, debouard-debussche-nls}
studied SNLS on $\mathbb{R}^d$.
They considered noise $\phi\xi$ that is white in time but correlated in space,
where $\phi$ is a smoothing operator from $L^2(\mathbb{R}^d)$ to $H^s(\mathbb{R}^d)$.
They proved global existence and uniqueness of mild solutions in (i) $L^2(\mathbb{R})$
for the one-dimensional cubic SNLS
and (ii) $H^1(\mathbb{R}^d)$ for defocusing energy-subcritical SNLS.
Other works related to SNLS on $\mathbb{R}^d$ include the works by
Barbu, R\"{o}ckner, and Zhang \cite{barbu-rockner-zhang14, barbu-rockner-zhang16}
and by Hornung \cite{hornung}.
On the $\mathbb{R}^d$ setting, the arguments given in \cite{ debouard-debussche-nls-mult, debouard-debussche-nls}
use fixed point arguments in the space $C_tH_x^1 \cap L_t^pW_x^{1,q}([0,T]\times\mathbb{R}^d)$,
for some $T>0$ and some suitable $p,q\ge 1$.
\footnote{Here, $W^{s, r}(\mathbb{T}^d)$
denotes the $L^r$-based Sobolev space defined by the Bessel potential norm:
\[\| u\|_{W^{s, r}(\mathbb{T}^d)} := \| \jb{\nabla}^s u \|_{L^r(\mathbb{T}^d)} = \big\| \mathcal{F}^{-1}( \jb{n}^s \widehat u(n))\big\|_{L^r(\mathbb{T}^d)},\]
where $\jb{n}:=\sqrt{1+|n|^2}$.
When $r = 2$, we have $H^s(\mathbb{T}^d) = W^{s, 2}(\mathbb{T}^d)$.}
In particular, they use the (deterministic) Strichartz estimates:
\begin{equation}
\|S(t)f\|_{L_t^pL_x^q(\mathbb{R}\times\mathbb{R}^d)} \le C_{p,q} \|f\|_{L^2_x(\mathbb{R}^d)},
\end{equation}
where the pair $(p,q)$ is admissible, i.e. $\frac2p+ \frac{d}{q} = \frac{d}{2}$, $2\le p,q\le \infty$,
and $(p,q,d)\neq (2,\infty,2)$.
On $\mathbb{T}^d$,
Bourgain and Demeter \cite{bourgain-demeter} proved the $\ell^2$-decoupling conjecture,
and as a corollary,
the following periodic Strichartz estimates:
\begin{equation}
\label{perStrich}
\big\|S(t)P_{\le N} f\big\|_{L_{t,x}^p([0,T]\times\mathbb{T}^d)} \le
C_{p,T,\varepsilon} N^{\frac{d}{2}- \frac{d+2}{p} +\varepsilon}
\|f\|_{L^2_x(\mathbb{T}^d)}\,.
\end{equation}
Here, $P_{\le N}$ is the Littlewood-Paley projection onto frequencies $\{n \in \mathbb{Z}^d : |n|\le N\}$, \linebreak
$p\ge \frac{2(d+2)}{d}$, and $\varepsilon>0$ is an arbitrarily small quantity \footnote{More recently, Killip and Vi\c{s}an \cite{killip-visan} removed the arbitrarily small loss of $\varepsilon$ derivatives in
\eqref{perStrich} when $p>\frac{2(d+2)}{d}$.
However, we do not need this scale-invariant improvement in our results.}.
However, such Strichartz estimates are not strong enough for a fixed point argument in mixed Lebesgue spaces for the deterministic NLS on $\mathbb{T}^d$.
To overcome this problem, we shall employ the Fourier restriction norm method
by means of $X^{s,b}$-spaces defined via the norms
\begin{equation}
\norm{u}_{X^{s,b}}
:=\big\| \jb{n}^s\jb{\tau-|n|^2}^b\mathcal{F}_{t,x}(u)(\tau,n)\big\|_{L^2_\tau\l^2_n(\mathbb{R}\times \mathbb{Z}^d)}\,.
\end{equation}
The indices $s,b\in\mathbb{R}$ measure the spatial and temporal regularities of functions $u\in X^{s,b}$, and
$\mathcal{F}_{t,x}$ denotes Fourier transform of functions defined on $\mathbb{R}\times \mathbb{T}^d$.
This harmonic analytic method was introduced by Bourgain \cite{bourgain-93-1}
for the deterministic nonlinear Schr\"{o}dinger equation (NLS):
\begin{align}
i\partial_t u - \Delta u \pm |u|^{2k}u =0 \,.
\end{align}
\subsection{Main results}
We now state more precisely the problems considered here. Let
$(\Omega, \mathcal{A},\{\mathcal{A}_t\}_{t\ge 0}, \mathbb{P})$ be a filtered probability space. Let $W$ be the $L^2(\mathbb{T}^d)$-cylindrical Wiener process given by
\begin{equation}
W(t,x,\omega) := \sum_{n\in\mathbb{Z}^d} \beta_n(t,\omega) e_n(x),\,
\end{equation}
where $\{\beta_n\}_{n\in\mathbb{Z}^d}$ is a family of independent complex-valued Brownian motions
associated with the filtration $\{\mathcal{A}_t\}_{t\ge 0}$
and
$e_n(x):= \exp( 2\pi i n\cdot x)$, $n\in\mathbb{Z}^d$.
The space-time white noise $\xi$
is given by the (distributional) time derivative of $W$, i.e. $\xi=\frac{\partial W}{\partial t}$.
Since the spatial regularity of $W$ is too low (more precisely,
for each fixed $t\ge 0$,
$W(t)\in H^{-\frac{d}{2}-\varepsilon}(\mathbb{T}^d)$
almost surely for any $ \varepsilon>0$),
we consider a smoothed out version $\phi W$ as follows.
Recall that a bounded linear operator $\phi$ from a separable Hilbert space $H$ to a Hilbert space $K$ is Hilbert-Schmidt if
\begin{equation}
\|\phi\|_{\mathcal{L}^2(H;K)}^2 := \sum_{n\in \mathbb{Z}^d} \| \phi h_n\|_{K}^2 <\infty\,,
\end{equation}
where $\{h_n\}_{n\in\mathbb{Z}^d}$ is an orthonormal basis of $H$ (recall that $\|\cdot\|_{\mathcal{L}^2(H;K)}$ does not depend on the choice of $\{h_n\}_{n\in\mathbb{Z}^d}$).
Throughout this work,
we assume $\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$ for appropriate $s\ge 0$.
In this case, $\phi W$ is a Wiener process with sample paths in $H^s(\mathbb{T}^d)$ and
its time derivative $\phi\xi$ corresponds to a noise
which is white in time and correlated in space (with correlation function depending on $\phi$).
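As a simple illustration (an illustrative choice, not an assumption made in this paper), if $\phi$ acts diagonally on the Fourier modes as $\phi e_n = \jb{n}^{-\sigma}e_n$ for some $\sigma>0$, then
\begin{align*}
\|\phi\|_{\mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))}^2 = \sum_{n\in \mathbb{Z}^d} \jb{n}^{-2\sigma}\|e_n\|_{H^s(\mathbb{T}^d)}^2 = \sum_{n\in \mathbb{Z}^d} \jb{n}^{2(s-\sigma)}\,,
\end{align*}
which is finite precisely when $\sigma>s+\frac{d}{2}$; such a Fourier multiplier thus provides an admissible smoothing.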
We can now define the stochastic convolution $\Psi(t)$ from \eqref{SNLS-mild} for
(i) the additive noise \eqref{add-noise}:
\begin{equation}
\label{sc:Psia}
\Psi(t) :=\int_0^t S(t-t') \phi \,dW(t')
\end{equation}
and (ii) the multiplicative noise \eqref{mult-noise}:
\begin{equation}
\label{sc:Psim}
\Psi(t):= \Psi[u](t) := \int_0^t S(t-t') u(t') \phi \,dW(t')\,.
\end{equation}
\noindent
We are now ready to state our first result.
\begin{theorem}[Pathwise local well-posedness for additive SNLS]
\label{lwp:add}
Given $s>s_{\textup{crit}}$ non-negative, let $\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$ and
$F(u,\phi\xi)=\phi\xi$.
Then for any $u_0\in H^s(\mathbb{T}^d)$, there exist a stopping time $T=T(\|u_0\|_{H^s},\Psi)$ that is almost surely positive, and a unique adapted process $u \in C([0,T];H^s(\mathbb{T}^d)) \cap X^{s,\frac12-\varepsilon}([0,T])$ solving SNLS with additive noise on $[0,T]$ almost surely, for some $\varepsilon>0$.
\end{theorem}
Here, $X^{s,b}([0,T])$ is a time restricted version of the $X^{s,b}$-space, see \eqref{Xsblocal} below.
The proof of this result relies on a fixed point argument for \eqref{SNLS-mild} in a closed subset
of $X^{s,b}([0,T])$.
We are required to use
$b=\frac12-\varepsilon$ in order to capture the temporal regularity of $\Psi$.
Since $X^{s,b}([0,T])$ does not embed into $C([0,T];H^s)$ when $b<\frac12$,
we need to prove the continuity in time of solutions a posteriori.
Our local well-posedness result above (as well as Theorem~\ref{lwp:mult} below) covers all non-negative subcritical regularities.
\begin{remark}\label{remark-critical}
We point out that $s_{\textup{crit}}$ is negative only for the one-dimensional cubic NLS,
i.e. $(d,k)=(1,1)$ for which $s_{\textup{crit}}=-\frac12$.
Below $L^2(\mathbb{T})$, the deterministic cubic NLS on $\mathbb{T}$ was shown to be ill-posed.
Indeed, Christ, Colliander and Tao \cite{CCT03instab} and Molinet \cite{MolinetMRL}
showed that the solution map $u_0\in H^s(\mathbb{T}) \mapsto u(t)\in H^s(\mathbb{T})$ is discontinuous whenever $s<0$.
More recently, Guo and Oh \cite{guo-oh} showed an even stronger ill-posedness result,
in the sense that for any $u_0\in H^s(\mathbb{T})$, $s\in (-\frac18,0)$,
there is no distributional solution $u$ that is also a limit of smooth solutions in $C([-T,T]; H^s(\mathbb{T}))$.
In the (super)critical regime, i.e. for $s\le -\frac12 =s_{\textup{crit}}$,
Oh \cite{Oh17rmk} and Oh and Wang \cite{OhWangillposed}
showed a norm inflation phenomenon at any $u_0\in H^s(\mathbb{T})$: for any $\varepsilon>0$ and $u_0\in H^s(\mathbb{T})$,
there exists a solution $u^{\varepsilon}$ to NLS such that
$\|u^{\varepsilon}(0)-u_0\|_{H^s(\mathbb{T})}<\varepsilon$ and
$\|u^{\varepsilon}(t)\|_{H^s(\mathbb{T})}>\varepsilon^{-1}$
for some $t\in (0, \varepsilon)$.
\end{remark}
\begin{remark}
Although we present our results for SNLS on the standard torus $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$, our arguments hold on any torus $\mathbb{T}_{{\boldsymbol\alpha}}^d=\prod_{j=1}^d\mathbb{R}/{\alpha_j}\mathbb{Z}\,$, where $\boldsymbol{\alpha}=(\alpha_1, ..., \alpha_d)\in (0,\infty)^d$. This is because the periodic Strichartz estimates \eqref{perStrich} of Bourgain and Demeter \cite{bourgain-demeter} hold for irrational tori ($\mathbb{T}_{{\boldsymbol\alpha}}^d$ is irrational if there is no non-zero $\boldsymbol\gamma\in\mathbb Q^d$ such that $\boldsymbol{\gamma}\cdot\boldsymbol{\alpha}=0$).
Prior to \cite{bourgain-demeter},
Strichartz estimates were harder to establish on irrational tori
-- see \cite{guo-oh-wang} and references therein.
\end{remark}
\begin{remark}
The deterministic NLS is
locally well-posed in the critical space $H^{s_{\textup{crit}}}(\mathbb{T}^d)$,
for all pairs $(d,k)$ except $(1,2)$, $(2,1)$ and $(3,1)$, which are still open -- see \cite{BourgainIJM13,HTTduke11,HTTjram14,wang-nls}.
In these papers, the authors employ the critical spaces $X^s, Y^s$
based on the spaces $U^2$, $V^2$ of Koch and Tataru \cite{KochTataruCPAM05}.
We point out that Brownian motions belong almost surely to $V^p$, for $p>2$,
but not to $V^2$ (and hence not to $U^2$ either).
Consequently, the spaces $X^s, Y^s$ are not suitable for obtaining local well-posedness of SNLS.
\end{remark}
Now let us recall the following conservation laws for the
deterministic NLS:
\begin{align}
\label{defn:mass}
M(u(t)) &:= \frac12\int_{\mathbb{T}^d} |u(t,x)|^2\,dx\\
\label{defn:energy}
E(u(t)) &:= \frac12 \int_{\mathbb{T}^d} |\nabla_x u(t,x)|^2 \,dx \pm \frac{1}{2k+2} \int_{\mathbb{T}^d} |u(t,x)|^{2k+2} \,dx,
\end{align}
where the sign $\pm$ in \eqref{defn:energy} matches that in \eqref{SNLS} and \eqref{SNLS-mild}.
Recall that SNLS \eqref{SNLS} with the $+$ sign is called defocusing (and focusing for the $-$ sign).
We say that SNLS is energy-subcritical if $s_{\textup{crit}}<1$
(i.e. for any $k\ge 1$ when $d=1,2$ and for $k=1$ when $d=3$).
For solutions of SNLS these quantities are no longer necessarily conserved.
However, It\^{o}'s lemma allows us to bound these in a probabilistic manner similarly to de Bouard and Debussche \cite{debouard-debussche-nls, debouard-debussche-nls-mult}.
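To indicate the mechanism (this is only a formal sketch; the rigorous bounds are part of the proof of the global results below): for the additive noise, applying It\^{o}'s lemma to the mass \eqref{defn:mass}, the Hamiltonian drift terms pair with $\bar u$ into purely imaginary quantities and drop out after taking real parts, so that only a martingale term and the It\^{o} correction survive, giving schematically
\begin{align*}
\mathbb{E}\big[M(u(t))\big] \le M(u_0) + C\, t\, \|\phi\|_{\mathcal{L}^2(L^2(\mathbb{T}^d);L^2(\mathbb{T}^d))}^2
\end{align*}
for some constant $C>0$; an analogous (but more involved) argument applies to the energy \eqref{defn:energy} in the defocusing case.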
Therefore, we obtain the following:
\begin{theorem}[Pathwise global well-posedness for additive SNLS]\label{gwp:add}
Let $s\ge 0$. Given $\phi\in\mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$, let $F(u,\phi\xi)=\phi\xi$
and $u_0\in H^s(\mathbb{T}^d)$. Then the $H^s$-valued solutions of Theorem~\ref{lwp:add} extend globally in time almost surely in the following cases:
\vspace{.1cm}
\noindent
\textup{ (i)} the (focusing or defocusing) one-dimensional cubic SNLS for all $s\ge 0$;
\noindent
\textup{\ (ii)} the defocusing energy-subcritical SNLS for all $s\ge 1$.
\end{theorem}
We now move onto the problem with multiplicative noise, i.e. SNLS with \eqref{mult-noise}. For this case, we need a stronger assumption on $\phi$. By a slight abuse of notation, for a bounded linear operator $\phi$ from $L^2(\mathbb{T}^d)$ to a Banach space $B$, we say that $\phi\in\L^2(L^2(\mathbb{T}^d); B)$ if\footnote{In fact,
such operators are known as nuclear operators of order 2
and their introduction goes back to the work of A.~Grothendieck
on nuclear locally convex spaces.}
\begin{align*}
\|\phi\|_{\mathcal{L}^2(L^2(\mathbb{T}^d);B)}^2 := \sum_{n\in \mathbb{Z}^d} \| \phi e_n\|_{B}^2 <\infty\,.
\end{align*}
For $s\in\mathbb{R}$ and $r\ge 1$,
we also define the Fourier-Lebesgue space $\mathcal{F} L^{s,r}(\mathbb{T}^d)$ via the norm
\begin{align*}
\norm{f}_{\mathcal{F} L^{s,r}(\mathbb{T}^d)}:=\big\|\jb{n}^s\widehat{f}(n)\big\|_{\l^r_n(\mathbb{Z}^d)}\,.
\end{align*}
Clearly, when $r=2$ we have $\mathcal{F} L^{s,r}(\mathbb{T}^d) = H^s(\mathbb{T}^d)$ and
for $s_1\le s_2$ and $r_1\le r_2$ we have
$\mathcal{F} L^{s_2,r_1}(\mathbb{T}^d)\subset \mathcal{F} L^{s_1,r_2}(\mathbb{T}^d)$.
We now state our local well-posedness result for the multiplicative SNLS.
\begin{theorem}[Local well-posedness for multiplicative SNLS]
\label{lwp:mult}
Given $s>s_{\textup{crit}}$ non-negative, let $\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$.
If $s\le \frac{d}{2}$, we further impose that
\begin{align}
\label{phiFLsr}
\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); \mathcal{F} L^{s,r}(\mathbb{T}^d))
\end{align}
for some $r\in\big[1,\frac{d}{d-s}\big)$ when $s>0$ and $r=1$ when $s=0$.
Let $F(u,\phi\xi)=u\cdot \phi\xi$.
Then
for any $u_0\in H^s(\mathbb{T}^d)$, there exist
a stopping time $T$ that is almost surely positive,
and
a unique adapted process
\begin{align}\label{L2Omega-space}
u \in L^2\big(\Omega; C([0,T];H^s(\mathbb{T}^d)) \cap X^{s,b}([0,T]) \big)
\end{align}
solving SNLS
with multiplicative noise.
\end{theorem}
\begin{remark}
If $\phi\xi$ is a spatially homogeneous noise, i.e. $\phi$ is translation invariant,
then the extra assumption \eqref{phiFLsr} is superfluous.
Indeed, if $\widehat{ \phi e_n}(m) =0$, for all $m,n\in\mathbb{Z}^d$, $m\neq n$ and $\phi\in\mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$,
then $\phi\in \L^2(L^2(\mathbb{T}^d); \mathcal{F} L^{s,r}(\mathbb{T}^d))$ for any $r\ge 1$.
We point out that an extra condition in the multiplicative case was also used by
de~Bouard and Debussche \cite{debouard-debussche-nls}
in their study of SNLS in $H^1(\mathbb{R}^d)$,
namely they required that $\phi$ is
a $\gamma$-radonifying operator from $L^2(\mathbb{R}^d)$ into $W^{1,\alpha}(\mathbb{R}^d)$
for some appropriate $\alpha$,
as compared to the requirement that
$\phi$ is Hilbert-Schmidt from $L^2(\mathbb{R}^d)$ into $H^s(\mathbb{R}^d)$ in the additive case.
\end{remark}
In the multiplicative case, the stochastic convolution depends on the solution
$u$ and this forces us to work in the space in \eqref{L2Omega-space}.
In order to control the nonlinearity in this space,
we use a truncation method which has been used for SNLS on $\mathbb{R}^d$
by de Bouard and Debussche \cite{debouard-debussche-nls, debouard-debussche-nls-mult}.
Moreover, we combine this method with the use of $X^{s,b}$-spaces in a similar manner as in
\cite{debouard-debussche-kdv}, where the same authors studied the stochastic KdV equation with low regularity initial data on $\mathbb{R}$.
This introduces some technical difficulties which did not appear
when using the more classical Strichartz spaces as those used in \cite{debouard-debussche-nls, debouard-debussche-nls-mult}.
Next, we prove global well-posedness of SNLS \eqref{SNLS} with multiplicative noise.
Similarly to the additive case, the main ingredient is the probabilistic a priori bound on the mass and energy of a local solution $u$. However,
we further need to obtain uniform control on the $X^{s,b}$-norms for solutions to truncated versions of
\eqref{SNLS-mild}.
\begin{theorem}[Global well-posedness for multiplicative SNLS]\label{gwp:mult}
Let $s\ge 0$. Given $\phi$ with the same assumptions as in Theorem \ref{lwp:mult}, let $F(u,\phi)=u\cdot\phi\xi$
and $u_0\in H^s(\mathbb{T}^d)$. Then the $H^s$-valued solutions of Theorem~\ref{lwp:mult} extend globally in time in the following cases:
\vspace{.1cm}
\noindent
\textup{ (i)} the (focusing or defocusing) one-dimensional cubic SNLS for all $s\ge 0$;
\noindent
\textup{\ (ii)} the defocusing energy-subcritical SNLS for all $s\ge 1$.
\end{theorem}
Before concluding this introduction let us state two remarks.
\begin{remark}
We point out that Theorem~\ref{lwp:add} and Theorem~\ref{lwp:mult} are almost optimal
in terms of the regularity of the initial data,
since the deterministic NLS is ill-posed for $s<s_{\textup{crit}}$ (see Remark \ref{remark-critical}).
In terms of the regularity of the noise, at least in the additive noise case,
it is possible to consider rougher noise
by employing the Da Prato-Debussche trick,
namely by writing a solution $u$ to \eqref{SNLS-mild} as $u= v+\Psi$
and considering the equation for the residual part $v$.
In general, this procedure allows one to treat rougher noise,
see for example
\cite{BOP1, BOP2, CollianderOh}, where NLS with rough random initial data is treated.
In the periodic setting however,
the argument gets more complicated
(see for example \cite{BOP1, BOP2} on $\mathbb{R}^d$ versus \cite{CollianderOh,NahmodStaffilani} on $\mathbb{T}^d$).
The actual implementation of the aforementioned trick requires
cumbersome case-by-case analysis where the number of cases grows exponentially in $k$.
Even for the cubic case on $\mathbb{T}^d$ the analysis is involved, whereas
on $\mathbb{R}^d$ one can use bilinear Strichartz estimates which are not available on $\mathbb{T}^d$.
\end{remark}
\begin{remark}
In the multiplicative noise case, there are well-posedness results
on a general compact Riemannian manifold $M$ without boundaries.
In \cite{BrzezniakMillet},
Brze\'zniak and Millet use the Strichartz estimates of \cite{BGT} and the standard space-time Lebesgue spaces (i.e.
without the Fourier restriction norm method).
For $M=\mathbb{T}^d$, Theorem~\ref{lwp:mult} improves the result in \cite{BrzezniakMillet}
since it requires less regularity on the noise and initial data.
In \cite{brezniak-h-w},
Brze\'zniak, Hornung, and Weis construct martingale solutions in $H^1(M)$
for the multiplicative SNLS
with energy-subcritical defocusing nonlinearities
and mass-subcritical focusing nonlinearities.
\end{remark}
\subsection*{Organization of the paper}
In Section \ref{sect:Xsb}, we provide some preliminaries for the Fourier restriction norm method
and prove the multilinear estimates necessary for the local well-posedness results.
In Section~\ref{sect:stochests},
we prove some properties of the stochastic convolutions $\Psi$ and $\Psi[u]$
given respectively by \eqref{sc:Psia} and \eqref{sc:Psim}.
We prove Theorems \ref{lwp:add} and \ref{lwp:mult} in Section~\ref{sect:LWP}.
Finally, in Section \ref{sect:GWP} we prove the global results Theorems \ref{gwp:add} and \ref{gwp:mult}.
\subsection*{Notations}
Given $A,B\in\mathbb{R}$,
we use the notation $A\lesssim B$ to mean $A\le CB$ for some constant $C\in (0,\infty)$ and write $A\sim B$ to mean $A\lesssim B$ and $B\lesssim A$.
We sometimes emphasize dependencies of the implicit constant as subscripts on $\lesssim$, $\gtrsim$,
and $\sim$; e.g. $A\lesssim_{p} B$ means $A\le C B$ for some constant $C=C(p)\in (0,\infty)$ that depends on the parameter $p$. We denote by $A\wedge B$ and $A\vee B$ the minimum and maximum of the two quantities respectively. Also, $\lceil A\rceil$ denotes the smallest integer greater than or equal to $A$, while $\lfloor A\rfloor$ denotes the largest integer less than or equal to $A$.
Given a function $g:U\to \mathbb{C}$, where $U$ is either $\mathbb{T}^d$ or $\mathbb{R}$, our convention of the Fourier transform of $g$ is given by
\[\widehat{g}(\xi)=\int_U e^{2\pi i \xi\cdot x}g(x)\,dx\,,\]
where $\xi$ is either an element of $\mathbb{Z}^d$ (if $U=\mathbb{T}^d$) or an element of $\mathbb{R}$ (if $U=\mathbb{R}$). For the sake of convenience, we shall omit the $2\pi$ from our writing since it does not play any role in our arguments.
For $c\in\mathbb{R}$, we sometimes write $c+$ to denote $c+\varepsilon$ for sufficiently small $\varepsilon>0$, and write $c-$ for the analogous meaning. For example, the statement `$u\in X^{s,\frac{1}{2}-}$' should be read as `$u\in X^{s,\frac12-\varepsilon}$ for sufficiently small $\varepsilon>0$'.
For the sake of readability, in the proofs we sometimes omit the underlying domain $\mathbb{T}^d$ from various norms, e.g.
we write $\|f\|_{H^s}$ instead of $\|f\|_{H^s(\mathbb{T}^d)}$ and
$\|\phi\|_{\L^2(L^2;H^s)}$ instead of $\|\phi\|_{\L^2(L^2(\mathbb{T}^d);H^s(\mathbb{T}^d))}$.
\subsection*{Acknowledgements}
The authors would like to thank their advisors, Tadahiro Oh and Oana Pocovnicu,
for suggesting this problem and their continuous support throughout this work,
as well as Professor Yoshio~Tsutsumi,
Yuzhao~Wang and Dimitrios~Roxanas for several useful discussions on the present paper.
The authors were supported by
The Maxwell Institute Graduate School in Analysis and its Applications,
a Centre for Doctoral Training funded by
the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01),
the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh.
\section{Fourier restriction norm method}
\label{sect:Xsb}
Let $s,b\in\mathbb{R}$. The Fourier restriction norm space $X^{s,b}$ adapted to the Schr\"odinger equation on $\mathbb{T}^d$
is the space of tempered distributions $u$ on $\mathbb{R}\times\mathbb{T}^d$ such that the norm
\begin{equation*}
\norm{u}_{X^{s,b}}
:=\norm{\jb{n}^s\jb{\tau-|n|^2}^b\mathcal{F}_{t,x}(u)(\tau,n)}_{\l^2_nL^2_\tau(\mathbb{Z}^d\times \mathbb{R})}
\end{equation*}
is finite. Equivalently, the $X^{s,b}$-norm can be written in its interaction representation form:
\begin{equation}
\norm{u}_{X^{s,b}}=\norm{\jb{n}^s\jb{\tau}^b\mathcal{F}_{t,x}\brac{S(-t)u(t)}(n,\tau)}_{\l^2_nL^2_\tau(\mathbb{Z}^d\times \mathbb{R})}\,,\label{Xsb-interact-rep}
\end{equation}
where $S(t)=e^{-it\Delta}$ is the linear Schr\"odinger propagator.
We now state some facts on $X^{s,b}$-spaces.
The interested reader can find the proof of these and further properties in \cite{tao-book}.
Firstly, we have the following continuous embeddings
\begin{align}
X^{s,b} &\hookrightarrow C(\mathbb{R}; H_x^s(\mathbb{T}^d)) \ \mbox{, for } b>\frac{1}{2}\,,\\
X^{s',b'}&\hookrightarrow X^{s,b}\ \mbox{, for } s'\ge s \mbox{ and } b'\ge b\,.
\end{align}
We have the duality relation
\begin{equation}
\label{Xsbduality}
\norm{u}_{X^{s,b}}=\sup_{\norm{v}_{X^{-s,-b}}\le 1}\left|\int_{\mathbb{R}\times\mathbb{T}^d} u(t,x) \overline{v(t,x)}\,dt\,dx\right|\,.
\end{equation}
\begin{lemma}[Transference principle, {\cite[Lemma~2.9]{tao-book}}]\label{trans-prin}
\label{TransfPr}
Let $Y$ be a Banach space of functions on $\mathbb{R}\times \mathbb{T}^d$ such that
\begin{equation*}
\|e^{it\lambda} e^{\pm it\Delta} f\|_Y \lesssim \|f\|_{H^s(\mathbb{T}^d)}
\end{equation*}
for all $\lambda\in\mathbb{R}$ and all $f\in H^s(\mathbb{T}^d)$. Then, for any $b>\frac12$,
\begin{equation*}
\|u\|_Y \lesssim \| u\|_{X^{s,b}}\, .
\end{equation*}
\end{lemma}
Given a time interval $I\subseteq\mathbb{R}$,
one defines the time restricted space $X^{s,b}(I)$ via the norm
\begin{equation}
\label{Xsblocal}
\xsbt{u}{s}{b}{I}:=\inf\left\{\|{\tilde u}\|_{X^{s,b}}: \tilde u|_{I} = u\right\}.
\end{equation}
We note that for $s\geq0$ and $0\le b<\frac{1}{2}$,
we have
\begin{align}
\label{Xsblocalsim}
\norm{u}_{X^{s,b}(I)}\sim\xsb{\mathbbm{1}_{I}(t)u(t)}{s}{b}\,,
\end{align}
see for example \cite[Lemma 2.1]{debouard-debussche-kdv} for a proof (for $X^{s,b}$ spaces adapted to the KdV equation).
\begin{lemma}[Linear estimates, {\cite[Proposition~2.12]{tao-book}}]
\label{lem:linests}
Let $s\in\mathbb{R}$ and suppose $\eta$ is smooth and compactly supported.
Then, we have
\begin{align}
\|\eta(t) S(t)f\|_{X^{s,b}} &\lesssim \|f\|_{H^s(\mathbb{T}^d)} \ \text{, for }b\in\mathbb{R}\,;\\
\left\| \eta(t) \int_0^t S(t-t') F(t')dt' \right\|_{X^{s,b}} &\lesssim \|F\|_{X^{s,b-1}} \ \text{, for }b>\frac12\,.
\end{align}
\end{lemma}
\noindent
By localizing in time, we can gain a smallness factor, as per the lemma below.
\begin{lemma}[Time localization property, {\cite[Lemma~2.11]{tao-book}}]\label{xsb-time-loc}
Let $s\in\mathbb{R}$ and $-\frac{1}{2}<b'<b<\frac{1}{2}$. For any $T\in (0,1)$, we have
\[\xsbt{f}{s}{b'}{[0,T]}\lesssim_{b,b'} T^{b-b'}\xsbt{f}{s}{b}{[0,T]}\,.\]
\end{lemma}
We now give the proofs of the multilinear estimates necessary to control the nonlinearity $|u|^{2k}u$. Recall the $L^4$-Strichartz estimate due to Bourgain \cite{bourgain-93-1}
(see also \cite[Proposition~2.13]{tao-book}):
\begin{equation}
\label{L4Strichartz}
\|u\|_{L^4_{t,x}(\mathbb{R}\times\mathbb{T})} \lesssim \|u\|_{X^{0,\frac38}} .
\end{equation}
\begin{lemma}\label{trilin}
Let $d=1$, $s\geq0$, $b\geq \frac38$, and $b'\leq\frac58$. Then, for any time interval $I\subset\mathbb{R}$,
we have
\begin{equation}
\label{trilinStrich}
\left\| u_1 \overline{u_2} u_{3} \right\|_{X^{s,b'-1}(I)}
\lesssim
\prod_{j=1}^{3} \|u_j\|_{X^{s,b}(I)} .
\end{equation}
\end{lemma}
\begin{proof}
By the triangle inequality it suffices to prove \eqref{trilinStrich} for $s=0$.
We claim that
\begin{equation*}
\left| \int_{\mathbb{R}\times\mathbb{T}^d} u_1 \overline{u_2} u_3 \overline{v}\,dxdt \right|
\lesssim \prod_{j=1}^{3} \|u_j\|_{X^{0,b}} \|v\|_{X^{0,1-b'}}
\end{equation*}
for any factors $u_1, u_2, u_3, v$. Indeed, this follows immediately from H\"{o}lder inequality
and \eqref{L4Strichartz} for each of the four factors (hence the restrictions $b,1-b'~\geq~\frac38$).
Thus, the global-in-time version of \eqref{trilinStrich}, i.e. $I=\mathbb{R}$,
follows by the duality relation \eqref{Xsbduality}.
For an arbitrary time interval $I$, if $\tilde{u}_j$ is an extension of $u_j$, $j=1,2,3$, then
$\tilde{u}_1 \overline{\tilde{u}_2} \tilde{u}_3$ is an extension of $u_1 \overline{u_2} u_3$.
We use the previous step to get
$$ \left\| u_1 \overline{u_2} u_{3} \right\|_{X^{s,b'-1}(I)} \le
\left\| \tilde{u}_1 \overline{\tilde{u}_2} \tilde{u}_{3} \right\|_{X^{s,b'-1}} \lesssim
\prod_{j=1}^{3} \|\tilde{u}_j\|_{X^{s,b}} $$
and then we take infimum over all extensions $\tilde{u}_j$'s
and \eqref{trilinStrich} follows.
\end{proof}
Due to the scaling and Galilean symmetries of the linear Schr\"{o}dinger equation,
the periodic Strichartz estimate \eqref{perStrich} of
Bourgain and Demeter \cite{bourgain-demeter}
is equivalent to
\begin{equation}
\label{StrichartzKV}
\|S(t) P_Q f\|_{L^p_{t,x}(I\times\mathbb{T}^d)} \lesssim_{|I|} |Q|^{\frac{1}{2} - \frac{d+2}{pd}+} \|f\|_{L^2_x(\mathbb{T}^d)},
\end{equation}
for any $d\geq 1$, $p\ge\frac{2(d+2)}{d}$, $I\subset\mathbb{R}$ finite time interval, and $Q\subset\mathbb{R}^d$ dyadic cube.
Here, $P_Q$ denotes the frequency projection onto $Q$, i.e. $\widehat{P_Qf}(n) = \mathbf 1_Q(n) \widehat{f}(n)$.
By the transference principle (Lemma~\ref{TransfPr}), we get
\begin{equation}
\label{StrichartzKVtp}
\|P_{Q}u\|_{L^{p}_{t,x}(I \times\mathbb{T}^d)} \lesssim_{|I|} |Q|^{\frac{1}{2} - \frac{d+2}{pd} +} \| u\|_{X^{0, b}(I)} ,
\end{equation}
for any $b>\frac12$.
By interpolating \eqref{StrichartzKVtp}
with
\begin{equation}
\|P_{Q}u\|_{L^{p}_{t,x}(I \times\mathbb{T}^d)} \lesssim |Q|^{\frac12-\frac1p} \| u\|_{X^{0, \frac12-\frac1p}(I)} ,
\end{equation}
(which follows immediately from Sobolev inequalities, \eqref{Xsb-interact-rep},
and the $H^s(\mathbb{T}^d)$-isometry of $S(-t)$),
we can lower the time regularity from $b=\frac12+\delta$ to $\tilde{b}=\frac12-\delta$,
for sufficiently small~$\delta>0$.
Thus, we also have
\begin{equation}
\label{StrichartzKVtpinterp}
\|P_{Q}u\|_{L^{p}_{t,x}(I\times\mathbb{T}^d)} \lesssim_{|I|,\delta}
|Q|^{\frac{1}{2} - \frac{d+2}{pd} + o(\delta)}
\|u\|_{X^{0,\frac12-\delta}(I)}\,.
\end{equation}
Lemma \ref{trilin} only treats the cubic nonlinearity when $d=1$.
We now prove the following
general multilinear estimates to treat other cases.
The proof borrows techniques from \cite{guo-oh-wang}.
\begin{lemma}\label{multilin}
Let $d,k\geq 1$ be such that $dk\geq 2$ and let $I\subset\mathbb{R}$ be a finite time interval.
Then for any $s>s_{\textup{crit}}$,
there exist $b=\frac12-$ and $b'=\frac12+$ such that
\begin{equation}
\label{multilinStrich}
\left\| u_1 \overline{u_2} \cdots \overline{u_{2k}} u_{2k+1} \right\|_{X^{s,b'-1}(I)}
\lesssim_{|I|}
\prod_{j=1}^{2k+1} \|u_j\|_{X^{s,b}(I)} .
\end{equation}
\end{lemma}
\begin{proof}
In view of \eqref{Xsblocalsim},
we can assume that $u_j(t)=\mathbbm{1}_I(t) u_j(t)$
and thus by the duality relation \eqref{Xsbduality},
it suffices to show
\begin{equation}
\label{multilin-dual}
\left|
\int_{\mathbb{R}\times \mathbb{T}^{d}} \big(\jb{\nabla}^s (u_1 \overline{u_2} \cdots u_{2k+1})\big) \overline{v} \,dxdt \right|
\lesssim \|v\|_{X^{0,1-b'}} \prod _{j=1}^{2k+1}\|u_j\|_{X^{s,b}} .
\end{equation}
We use Littlewood-Paley decomposition:
we estimate the left-hand side of \eqref{multilin-dual}
when $v=P_Nv$, $u_j=P_{N_j}u_j$
for some dyadic numbers $N, N_j\in 2^{\mathbb{Z}}$, $1\le j\le 2k+1$.
Then the claim follows by triangle inequality and performing the summation
\begin{equation}
\label{summation}
\sum_{N_1} \sum_{\substack{N\\ N\lesssim N_1}}\sum_{\substack{N_2\\ N_2\leq N_1}} \cdots
\sum_{\substack{N_{2k+1}\\ N_{2k+1}\leq N_{2k}}} .
\end{equation}
Notice that
without loss of generality, we may assume that
$N_1\geq N_2\geq\ldots\geq N_{2k+1}$,
in which case we also have $N\lesssim N_1$, and that
the Fourier transforms of the factors $v$ and $u_j$ are real-valued and non-negative.
Let $\varepsilon :=s-s_{\textup{crit}}>0$, and we distinguish two cases.
\noindent
\textbf{Case 1:} $N_1\sim N_2$.
By H\"{o}lder inequality,
\begin{equation}
\label{multilin-dual-LP}
N^s \int_{\mathbb{R}\times \mathbb{T}^{d}} u_1 {u_2} \cdots u_{2k+1} {v} \,dxdt
\lesssim
N_1^{\frac{s}{2}} \|u_1\|_{L^q_{t,x}} N_2^{\frac{s}{2}} \|u_2\|_{L^q_{t,x}} \prod_{j=3}^{2k+1} \|u_j\|_{L^p_{t,x}} \|v\|_{L^r_{t,x}} ,
\end{equation}
with $p,q,r$ chosen such that $\frac{2k-1}{p}+\frac{2}{q} +\frac1r=1$.
We take $p, q$
such that $\frac{d}{2}-\frac{d+2}{p}=s_{\textup{crit}}$ and $\frac{d}{2}-\frac{d+2}{q}=\frac12 s_{\textup{crit}}$,
or equivalently $p=k(d+2)$ and $q=\frac{4k(d+2)}{dk+2}$.
These give the H\"{o}lder exponent $r= \frac{2(d+2)}{d}$.
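For the reader's convenience, we record the arithmetic behind these choices:
\begin{align*}
\frac{2k-1}{p}+\frac{2}{q}
=\frac{2(2k-1)+(dk+2)}{2k(d+2)}
=\frac{d+4}{2(d+2)}\,,
\qquad\text{so that}\qquad
\frac1r=1-\frac{d+4}{2(d+2)}=\frac{d}{2(d+2)}\,.
\end{align*}
Moreover, both $p=k(d+2)\ge\frac{2(d+2)}{d}$ and $q=\frac{4k(d+2)}{dk+2}\ge\frac{2(d+2)}{d}$ reduce to the standing assumption $dk\ge2$, so the Strichartz estimates \eqref{StrichartzKVtp} and \eqref{StrichartzKVtpinterp} are indeed applicable with these exponents, while $r$ is the smallest admissible exponent.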
By \eqref{StrichartzKVtpinterp} and \eqref{StrichartzKVtp}, we get
\begin{align}
\label{multilin-eqn1-case1}
N_j^{\frac{s}{2}} \|u_j\|_{L^q_{t,x}} &\lesssim N_j^{-\frac{\varepsilon}{2}+} \|u_j\|_{X^{s,b}} , \quad j=1,2\\
\label{multilin-eqn2-case1}
\|u_j\|_{L^p_{t,x}} &\lesssim N_j^{-\varepsilon+} \|u_j\|_{X^{s,b}},\quad 3\leq j\leq 2k+1, \\
\label{multilin-eqn3-case1}
\|v\|_{L^r_{t,x}} &\lesssim N^{0+} \|v\|_{X^{0,1-b'}} .
\end{align}
By choosing $\delta, \delta' \ll \varepsilon$ in $b:=\frac12-\delta$ and in $1-b'=\frac12-\delta'$, respectively,
we get
\begin{equation}
\textup{RHS of }\eqref{multilin-dual-LP}
\lesssim N^{-\frac{\varepsilon}{4}}\|v\|_{X^{0,1-b'}} \prod_{j=1}^{2k+1} N_j^{-\frac{\varepsilon}{4}} \|u_j\|_{X^{s,b}} .
\end{equation}
The factors $N^{-\frac{\varepsilon}{4}}$, $N_j^{-\frac{\varepsilon}{4}}$ guarantee that we can perform \eqref{summation}.
\noindent
\textbf{Case 2:} $N_1\gg N_2$. Then, we necessarily have $N_1\sim N$ or else the left hand side of \eqref{multilin-dual} vanishes.
By H\"{o}lder inequality,
\begin{equation}
N^s \int_{\mathbb{R}\times \mathbb{T}^{d}} u_1 {u_2} \cdots u_{2k+1} {v} \,dxdt
\lesssim
N_1^s \|u_1\|_{L^q_{t,x}} \prod_{j=2}^{2k+1} \|u_j\|_{L^p_{t,x}} \|v\|_{L^r_{t,x}} ,
\end{equation}
with $\frac{2k}{p}+\frac{1}{q}+\frac1r=1$.
As in Case 1, we would like to have $p$ such that
$\frac{d}{2}-\frac{d+2}{p}= s_{\textup{crit}}$,
or equivalently $p=k(d+2)$.
However, the best we can do with the Strichartz estimate for the remaining factors
is to choose $q=r=\frac{2(d+2)}{d}$,
so that we have
\begin{align}
\label{multilin-eqn1}
N_1^s \|u_1\|_{L^q_{t,x}} &\lesssim N_1^{0+} \|u_1\|_{X^{s,b}} ,\\
\label{multilin-eqn2}
\|u_j\|_{L^p_{t,x}} &\lesssim N_j^{-\varepsilon+} \|u_j\|_{X^{s,b}},\quad 2\leq j\leq 2k+1, \\
\label{multilin-eqn3}
\|v\|_{L^r_{t,x}} &\lesssim N_1^{0+} \|v\|_{X^{0,1-b'}} .
\end{align}
Notice that we can overcome the loss of derivative $N_1^s$
only up to a logarithmic factor.
We need a slightly refined analysis.
We cover the dyadic frequency annuli of $u_1$ and of $v$ with dyadic cubes of side-length $N_2$,
i.e.
$$\{\xi_1 : |\xi_1|\sim N_1\}\subset \bigcup_{\ell} Q_{\ell} \quad, \quad
\{\xi : |\xi|\sim N\}\subset \bigcup_{j} R_j \,.$$
There are approximately $\left(\frac{N_1}{N_2}\right)^d$-many cubes needed, and so
$$u_1=\sum_{\ell} P_{Q_{\ell}} u_1 =: \sum_{\ell} u_{1,\ell}\quad, \quad
v= \sum_j P_{R_j}v =: \sum_j v_j$$
are decompositions into finitely many terms.
Since
$|\xi_1-\xi| \lesssim N_2$ for $\xi_1\in \supp(\widehat{u_1}), \xi\in \supp(\widehat{v})$
on the convolution hyperplane,
there exists a constant $K$ such that
if $\mathrm{dist}(Q_{\ell}, R_j)>KN_2$, then the integral in \eqref{multilin-dual} vanishes.
Hence the summation \eqref{summation} is replaced by
\begin{equation}
\label{summation2}
\sum_{N_1} \sum_{\substack{N_2\\ N_2\ll N_1}} \cdots
\sum_{\substack{N_{2k+1}\\ N_{2k+1}\leq N_{2k}}} \sum_{\substack{\ell,j\\ j\approx \ell}} .
\end{equation}
Also, in place of \eqref{multilin-eqn1}--\eqref{multilin-eqn3}, we now have
\begin{align}
\label{multilin-eqn11}
N_1^s \|u_{1,\ell}\|_{L^q_{t,x}} &\lesssim N_2^{0+} \|u_{1,\ell}\|_{X^{s,b}} ,\\
\label{multilin-eqn22}
\|u_i\|_{L^p_{t,x}} &\lesssim N_i^{-\varepsilon+} \|u_i\|_{X^{s,b}},\quad 2\leq i\leq 2k+1, \\
\label{multilin-eqn33}
\|v_j\|_{L^q_{t,x}} &\lesssim N_2^{0+} \|v_j\|_{X^{0,1-b'}} .
\end{align}
Therefore,
by Cauchy-Schwarz inequality and Plancherel identity,
\begin{align*}
\textup{LHS of }\eqref{multilin-dual} &\lesssim
\sum_{N_2} \sum_{\substack{N_1\\ N_1\gg N_2}} \sum_{\substack{\ell,j\\ \ell\approx j}}
N_2^{-\varepsilon+} \|u_{1,\ell}\|_{X^{s,b}} \|v_j\|_{X^{0,1-b'}} \prod_{i=2}^{2k+1} \|u_i\|_{X^{s,b}}\\
&\lesssim \sum_{N_2} N_2^{-\varepsilon+}
\left( \sum_{\substack{N_1\\ N_1\gg N_2}} \sum_{\ell} \|u_{1,\ell}\|_{X^{s,b}}^2\right)^{\frac12}
\left( \sum_{\substack{N\\ N\gg N_2}}
\sum_{j} \|v_j\|_{X^{0,1-b'}}^2 \right)^{\frac12} \prod_{i=2}^{2k+1} \|u_i\|_{X^{s,b}}\\
&\lesssim \sum_{N_2} N_2^{-\varepsilon+} \|u_1\|_{X^{s,b}} \|v\|_{X^{0,1-b'}}
\prod_{i=2}^{2k+1} \|u_i\|_{X^{s,b}}\\
&\lesssim \prod_{i=1}^{2k+1} \|u_i\|_{X^{s,b}} \|v\|_{X^{0,1-b'}}
\end{align*}
and the proof is complete.
\end{proof}
\section{The stochastic convolution}
\label{sect:stochests}
In this section, we prove some $X^{s,b}$-estimates on the stochastic convolution $\Psi(t)$ given
either by \eqref{sc:Psia} or \eqref{sc:Psim}.
We first record the following Burkholder-Davis-Gundy inequality, which is a consequence of \cite[Theorem 1.1]{CR_Burkholder}.
\begin{lemma}[Burkholder-Davis-Gundy inequality]
\label{BDG}Let $H,K$ be separable Hilbert spaces, let $T>0$, and let $W$ be an $H$-valued Wiener process on $[0,T]$.
Suppose that $\{\psi(t)\}_{t\in[0,T]}$ is an adapted process taking values in
$\mathcal{L}^2(H;K)$. Then for $p\ge 1$,
\[\mathbb{E}\left[\sup_{t\in[0,T]}\norm{\int_0^t\psi(t')\,dW(t')}_K^p\right]\lesssim_p
\mathbb{E}\left[\left(\int_0^T \norm{\psi(t')}_{\mathcal{L}^2(H;K)}^2\,dt'\right)^\frac{p}{2}\right]\,.\]
\end{lemma}
In addition, we prove that $\Psi(t)$ is pathwise continuous in both cases.
To this end, we employ the factorization method of Da Prato \cite[Lemma 2.7]{daprato-kol-eqns-04},
i.e. we make use of the following lemma and \eqref{factorisation} below.
\begin{lemma}
\label{fact-meth}
Let $H$ be a Hilbert space, $T>0$, $\alpha\in(0,1)$, and $\sigma\in\big(\frac{1}{\alpha},\infty\big)$.
Suppose that ${f\in L^{\sigma}([0,T];H)}$.
Then the function
\begin{equation}
F(t):=\intud{0}{t}{S(t-t')(t-t')^{\alpha-1}f(t')}{t'}\,,\quad t\in [0,T]
\end{equation}
belongs to $C([0,T];H)$. Moreover,
\begin{equation}
\label{lem3p2:est}
\sup_{t\in[0,T]}\norm{F(t)}_{H}\lesssim_{\sigma,T}\norm{f}_{L^{\sigma}([0,T];H)}.
\end{equation}
\end{lemma}
We make use of the above lemma in conjunction with the following fact:
\begin{equation}
\label{factorisation}
\intud{\mu}{t}{(t-t')^{\alpha-1}(t'-\mu)^{-\alpha}}{t'}=\frac{\pi}{\sin(\pi\alpha)}\,,
\end{equation}
for all $0<\alpha <1$ and all $0\le \mu <t$.
This can be seen via considerations with Euler-Beta functions, see \cite{daprato-kol-eqns-04}.
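Concretely, the substitution $t'=\mu+\theta(t-\mu)$ turns \eqref{factorisation} into Euler's Beta integral, and the reflection formula for the Gamma function then gives the stated value:
\begin{equation*}
\intud{\mu}{t}{(t-t')^{\alpha-1}(t'-\mu)^{-\alpha}}{t'}
=\int_0^1(1-\theta)^{\alpha-1}\theta^{-\alpha}\,d\theta
=B(1-\alpha,\alpha)
=\Gamma(1-\alpha)\Gamma(\alpha)
=\frac{\pi}{\sin(\pi\alpha)}\,.
\end{equation*}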
We now treat the additive and multiplicative cases separately below in Subsection \ref{subsect:sto-conv-add} and \ref{subsect:sto-conv-mult} respectively. The arguments for the two cases are similar, albeit with some extra technicalities in the multiplicative case.
\subsection{The additive stochastic convolution}\label{subsect:sto-conv-add}
By Fourier expansion, the stochastic convolution \eqref{sc:Psia} for the additive noise problem can be written as
\begin{equation}
\label{Psia}
\Psi(t) = \sum_{n\in\mathbb{Z}^d} e_n
\sum_{j\in\mathbb{Z}^d} \widehat{(\phi e_j )}(n) \int_0^t e^{i(t-t')|n|^2}d\beta_j(t')\, .
\end{equation}
We first prove the following $X^{s,b}$-estimate on $\Psi$:
\begin{lemma}
\label{stoc-conv-est-add}
Let $s\ge 0$, $0\le b<\frac{1}{2}$, $T>0$, and $\sigma\in [2,\infty)$.
Assume that
$\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$.
Then for $\Psi$ given by \eqref{Psia} we have
\begin{align}
\mathbb{E}\left[ \|\Psi\|^{\sigma}_{X^{s,b}([0,T])} \right]
&\lesssim T^\frac{\sigma}{2}(1+T^2)^\frac{\sigma}{2}
\|\phi\|_{\L^2(L^2(\mathbb{T}^d);H^s(\mathbb{T}^d))}^{\sigma}\,.
\end{align}
\end{lemma}
\begin{proof}
Since $\mathbbm{1}_{[0,T]}(t)\mathbbm{1}_{[0,T]}(t')=\mathbbm{1}_{[0,T]}(t)=1$ whenever $0\le t'\le t\le T$, we have
\begin{align*}
\mathbbm{1}_{[0,T]}(t)\Psi(t)(x) &=
\sum_{n\in\mathbb{Z}^d} e_n \sum_{j\in\mathbb{Z}^d}
\widehat{\phi e_j}(n) \mathbbm{1}_{[0,T]}(t) e^{it|n|^2}
\int_0^t \mathbbm{1}_{[0,T]}(t') e^{-it'|n|^2}{d}\beta_j(t')
\end{align*}
By \eqref{Xsblocalsim}, we have
\begin{align}
\xsbt{\Psi(t)}{s}{b}{[0,T]}&\sim\xsb{\mathbbm{1}_{[0,T]}(t)\Psi(t)}{s}{b}\notag\\
&= \|\jb{n}^s\jb{\tau}^b\mathcal{F}_{t,x} \brac{S(-t)\mathbbm{1}_{[0,T]}(t) \Psi(t)}(\tau,n) \|_{L^2_\tau\l^2_n}\notag\\
&=\Big\|\jb{n}^s\jb{\tau}^b
\mathcal{F}_t\big[g_n(t)\big](\tau)
\Big\|_{L^2_{\tau}\ell^2_n}\,,\label{continue1}
\end{align}
where
\[g_n(t):=\sum_{j\in\mathbb{Z}^d} \mathbbm{1}_{[0,T]}(t) \int_0^t \mathbbm{1}_{[0,T]}(t')
e^{-it'|n|^2}\widehat{\phi e_j}(n){d}\beta_j(t')\,.\]
By the stochastic Fubini theorem
(see \cite[Theorem~4.33]{daprato-zab-inf-dim}),
we have
\begin{align*}
\mathcal{F}_t[g_n(t)](\tau)
&=\int_{\mathbb{R}}{e^{-it\tau}g_n(t)} dt\\
&=\sum_{j\in\mathbb{Z}^d}\int_{-\infty}^\infty\mathbbm{1}_{[0,T]}(t') e^{-it'|n|^2} \widehat{\phi e_j}(n) \int_{t'}^{\infty} \mathbbm{1}_{[0,T]}(t)e^{-it\tau}\,{d}t\, {d}\beta_j(t').
\end{align*}
Since
\begin{equation}\label{IBP-bound}
\left|\int_{t'}^{\infty} \mathbbm{1}_{[0,T]}(t)e^{-it\tau}\,{d}t\right|\lesssim
\min\{ T,|\tau|^{-1}\}\,,
\end{equation}
by Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}),
we get
\begin{gather}
\begin{split}
\label{eqn3p5}
\mathbb{E}\Big[|\mathcal{F}_t[g_n(t)](\tau)|^{\sigma}\Big]
&\lesssim \left[
\int_0^T \sum_{j\in\mathbb{Z}^d}
\left|\widehat{\phi e_j}(n)
\int_{t'}^{\infty}\mathbbm{1}_{[0,T]}(t)e^{-it\tau}\,dt \right|^2 dt'\right]^{\frac{\sigma}{2}}\\
&\lesssim \left[ T \sum_{j\in\mathbb{Z}^d}
|\widehat{\phi e_j}(n)|^2
\min\{ T^2,|\tau|^{-2}\}\right]^{\frac{\sigma}{2}}\,.
\end{split}
\end{gather}
By \eqref{continue1}, \eqref{eqn3p5},
and Minkowski inequality,
we get
\begin{align*}
\norm{\Psi}_{L^{\sigma}(\Omega;X^{s,b}([0,T]))}
&\le
\left(
\sum_{n\in\mathbb{Z}^d}\int_{-\infty}^\infty{\jb{n}^{2s}\jb{\tau}^{2b}
\brac{\mathbb{E}\left[\left|\mathcal{F}[g_n](\tau)\right|^{\sigma}\right]}^\frac{2}{\sigma}}{\,d\tau}
\right)^{\frac12}\\
&\lesssim
T^{\frac12}
\left( \sum_{n,j\in\mathbb{Z}^d}\jb{n}^{2s}|\widehat{\phi e_j}(n)|^2\int_{-\infty}^\infty{\jb{\tau}^{2b}\min\{ T^2,|\tau|^{-2}\}}{\,d\tau}\right)^{\frac12}\\
&\lesssim
T^{\frac12}\norm{\phi}_{\L^2(L^2;H^s)}
\left(T^2\intd{|\tau|<1}{}{\tau}+
\intd{|\tau|\ge 1}{\jb{\tau}^{2b-2}}{\tau}\right)^{\frac12} .
\end{align*}
Since $2b-2<-1$, the last $\tau$-integral is finite and the expression in parentheses is bounded by a multiple of $1+T^2$, which yields the claimed estimate. This completes the proof of Lemma~\ref{stoc-conv-est-add}.
\end{proof}
We now prove that $\Psi$ has a continuous version taking values in $H^s(\mathbb{T}^d)$. This is the content of the next lemma.
\begin{lemma}[Continuity of the additive noise]
\label{cts-stoc-conv-add}
Let $s\ge 0$, $T>0$, and $2\le \sigma<\infty$.
Assume that
$\phi\in \mathcal{L}^2(L^2(\mathbb{T}^d); H^s(\mathbb{T}^d))$.
Then $\Psi(\cdot)$ belongs to $C([0,T];H^s(\mathbb{T}^d))$ almost surely
and
\begin{equation}
\label{lem3p4:est}
\mathbb{E}\Bigg[\sup_{t\in[0,T]}\norm{\Psi(t)}_{H^s(\mathbb{T}^d)}^{\sigma}\Bigg]
\lesssim_{T} \,\norm{\phi}^{\sigma}_{\L^2(L^2(\mathbb{T}^d);H^s(\mathbb{T}^d))}\,.
\end{equation}
\end{lemma}
\begin{proof}
We fix $\alpha\in \brac{0, \frac12}$
and we write the stochastic convolution as follows:
\begin{align}
\begin{split}\label{factorization}
\Psi(t)
&=\frac{\sin(\pi\alpha)}{\pi}\intud{0}{t}{\left[\intud{\mu}{t}{(t-t')^{\alpha-1}(t'-\mu)^{-\alpha}}{t'}\right]S(t-\mu)\phi}{W(\mu)}\\
&=\frac{\sin(\pi\alpha)}{\pi}\intud{0}{t}{S(t-t')(t-t')^{\alpha-1}
\intud{0}{t'}{S(t'-\mu)(t'-\mu)^{-\alpha}\phi}{W(\mu)}
}{t'}\,,
\end{split}
\end{align}
where we used the stochastic Fubini theorem \cite[Theorem~4.33]{daprato-zab-inf-dim}
and the group property of $S(\cdot)$.
By Lemma~\ref{fact-meth} and \eqref{factorization} it suffices to show that the process
\[f(t'):=\intud{0}{t'}{S(t'-\mu)(t'-\mu)^{-\alpha}\phi}{W(\mu)}\]
satisfies
\begin{equation}
\label{cts-stoc-conv-ref1}
\mathbb{E}\bigg[\intud{0}{T}{\norm{f(t')}_{H^s_x}^{\sigma}}{t'}\bigg]
\le C\big(T,\sigma,\norm{\phi}_{\L^2(L^2;H^s)}\big) <\infty\,,
\end{equation}
for some $\sigma>\frac{1}{\alpha}$.
By Burkholder-Davis-Gundy inequality (Lemma \ref{BDG}),
for any $\sigma\ge 2$ and any $t'\in[0,T]$,
we get
\begin{align*}
\mathbb{E}\left[\norm{f(t')}_{H^s_x}^{\sigma}\right]
&\lesssim \left( \int_0^{t'} \|S(t'-\mu)(t'-\mu)^{-\alpha} \phi\|^2_{\L^2(L^2;H^s)} d\mu\right)^{\frac{\sigma}{2}}\\
&= \left( \int_0^{t'} (t'-\mu)^{-2\alpha}
\sum_{j\in\mathbb{Z}^d} \|S(t'-\mu)\phi e_j\|^2_{H^s} d\mu \right)^{\frac{\sigma}{2}}\\
&\le \|\phi\|_{\L^2(L^2;H^s)}^{\sigma} \left(\frac{T^{1-2\alpha}}{1-2\alpha}\right)^{\frac{\sigma}{2}},
\end{align*}
where in the last step we used $2\alpha\in(0,1)$ and the $H^s(\mathbb{T}^d)$-isometry property of $S(t'-\mu)$.
Hence
\begin{align*}
\textup{LHS of }\eqref{cts-stoc-conv-ref1} =
\intud{0}{T}{\mathbb{E}\left[\norm{f(t')}_{H^s_x}^{\sigma}\right]}{t'}\lesssim \norm{\phi}_{{\L^2(L^2;H^s)}}^{\sigma}{T^{\frac{\sigma}{2}(1-2\alpha)+1}}<\infty\,.
\end{align*}
The estimate \eqref{lem3p4:est} follows from \eqref{lem3p2:est}.
\end{proof}
\subsection{The multiplicative stochastic convolution}\label{subsect:sto-conv-mult}
The multiplicative stochastic convolution $\Psi=\Psi[u]$ from \eqref{sc:Psim} can be written as
\begin{equation}
\label{sect3:Psiu}
\Psi[u](t) =
\sum_{n\in\mathbb{Z}^d} e_n
\sum_{j\in\mathbb{Z}^d} \int_0^t e^{i(t-t')|n|^2} \widehat{(u(t') \phi e_j )}(n) d\beta_j(t') .
\end{equation}
Recall that if $s>\frac{d}{2}$, then we have access to the algebra property of $H^s(\mathbb{T}^d)$:
\begin{align}\label{sob-alg}
\norm{fg}_{H^s(\mathbb{T}^d)}\lesssim \norm{f}_{H^s(\mathbb{T}^d)}\norm{g}_{H^s(\mathbb{T}^d)}
\end{align}
which is an easy consequence of the Cauchy-Schwarz inequality. This simple fact is useful for our analysis in the multiplicative case. On the other hand, \eqref{sob-alg} is not available to us for regularities below $\frac{d}{2}$,
but we use the following inequalities.
\begin{lemma}
\label{lem3p5}
Let $0<s\le \frac{d}{2}$ and $1\le r < \frac{d}{d-s}$. Then
\begin{equation}
\label{lem3p5uphij}
\|f u \|_{H^s(\mathbb{T}^d)} \lesssim \|f\|_{\mathcal{F} L^{s,r}(\mathbb{T}^d)} \|u\|_{H^s(\mathbb{T}^d)} .
\end{equation}
Also, for $s=0$, we have
\begin{equation}
\label{lem3p5s0}
\|f u \|_{L^2(\mathbb{T}^d)} \lesssim \|f\|_{\mathcal{F} L^{0,1}(\mathbb{T}^d)} \|u\|_{L^2(\mathbb{T}^d)} .
\end{equation}
\end{lemma}
\begin{proof}
Assume that $0<s\le \frac{d}{2}$
and let $n_1$ and $n_2$ denote the spatial frequencies of $f$ and $u$ respectively. By separating the regions $\{|n_1|\gtrsim |n_2|\}$ and $\{|n_1|\ll |n_2|\}$, and then applying Young's inequality, we have
\begin{align*}
\| f u \|_{H^s(\mathbb{T}^d)} &\lesssim
\Big\| \big( \widehat{\jb{\nabla}^s f} * \widehat{u} \big)(n)\Big\|_{\ell_n^2} +
\Big\| \big( \widehat{f} * \widehat{\jb{\nabla}^s u} \big)(n)\Big\|_{\ell_n^2} \\
&\lesssim \|f\|_{\mathcal{F} L^{s,r}} \|\widehat{u}\|_{\ell^p} + \|\widehat{f}\|_{\ell^1} \|u\|_{H^s} \,,
\end{align*}
where $p$ is chosen such that $\frac1r + \frac1p =\frac32$.
By H\"{o}lder inequality, for $r'$ and $q$ such that $\frac1r+\frac{1}{r'}=1$ and $\frac1q+\frac12 =\frac1p$,
\begin{align*}
\|\widehat{f}\|_{\ell^1} &\lesssim \|\jb{n}^{-s}\|_{\ell^{r'}} \|f\|_{\mathcal{F} L^{s,r}}, \\
\|\widehat{u}\|_{\ell^p} &\lesssim \|\jb{n}^{-s}\|_{\ell^{q}} \|u\|_{H^s} .
\end{align*}
Since $sr'>d$ and $sq>d$ provided that $r<\frac{d}{d-s}$, the conclusion \eqref{lem3p5uphij} follows.
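Let us spell out the exponent bookkeeping. The relations $\frac1r+\frac1p=\frac32$ and $\frac1q+\frac12=\frac1p$ give
\begin{equation*}
\frac1q=\frac1p-\frac12=1-\frac1r=\frac{1}{r'}\,,\qquad\text{i.e.}\qquad q=r'\,,
\end{equation*}
and $r<\frac{d}{d-s}$ is equivalent to $\frac{1}{r'}<\frac{s}{d}$, i.e. to $sr'=sq>d$, which is precisely what makes $\|\jb{n}^{-s}\|_{\ell^{r'}}$ and $\|\jb{n}^{-s}\|_{\ell^{q}}$ finite.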
If $s=0$, \eqref{lem3p5s0} follows easily from Young's inequality:
\begin{align*}
\| f u \|_{L^2(\mathbb{T}^d)} = \|\widehat{f} * \widehat{u}\|_{\ell^2} \lesssim \|\widehat{f} \|_{\ell^1} \|\widehat{u}\|_{\ell^2}
= \|f\|_{\mathcal{F} L^{0,1}} \|u\|_{L^2} .
\end{align*}
\end{proof}
Given $\phi$ as in Theorem~\ref{lwp:mult}, let us denote
\begin{equation}
\label{defn:Cphi}
C(\phi):= \norm{\phi}_{\L^2(L^2(\mathbb{T}^d);\mathcal{F} L^{s,r}(\mathbb{T}^d))} <\infty\,,
\end{equation}
for $r=2$ when $s>\frac{d}{2}$,
for some $r\in\big[1,\frac{d}{d-s}\big)$ when $0<s\le \frac{d}{2}$, and for $r=1$ when $s=0$.
Recall that if $\phi$ is translation invariant,
then it is sufficient to assume that $C(\phi)<\infty$ with $r=2$, for all $s\ge 0$.
We now proceed to prove the following $X^{s,b}$-estimate of $\Psi[u]$.
\begin{lemma}\label{stoc-conv-est-mult}
Let $s\ge 0$, $0\le b<\frac{1}{2}$, $T>0$, and $2\le \sigma <\infty$.
Suppose that
$\phi$ satisfies the assumptions of Theorem~\ref{lwp:mult}.
Then, for $\Psi[u]$ given by \eqref{sc:Psim}
we have the estimate
\begin{align}
\mathbb{E}\left[
\xsbt{\Psi[u]}{s}{b}{[0,T]}^{\sigma}\right]
&\lesssim (T^2+1)^\frac{\sigma}{2} C(\phi)^{\sigma}\,
\mathbb{E}\left[\|u\|_{L^2([0,T]; H^s(\mathbb{T}^d))}^{\sigma}\right]
\label{stoc-conv-est-mult1}\,.
\end{align}
\end{lemma}
\begin{proof}
To prove \eqref{stoc-conv-est-mult1}, let $g(t):=\mathbbm{1}_{[0,T]}(t)S(-t)\Psi[u](t)$.
By the stochastic Fubini theorem \cite[Theorem~4.33]{daprato-zab-inf-dim},
\begin{align*}
\mathcal{F}_{t,x}(g)(\tau,n)
&=\intd{\mathbb{R}}{
e^{-it\tau}\mathbbm{1}_{[0,T]}(t)\sum_{j\in\mathbb{Z}^d}\intud{0}{t}{
e^{-it'|n|^2} (\widehat{u(t')\phi e_j})(n)
}{\beta_j(t')}
}{t}\\
&=\sum_{j\in\mathbb{Z}^d}\intud{0}{T}{
\intud{t'}{\infty}{\mathbbm{1}_{[0,T]}(t)
e^{-it\tau}e^{-it'|n|^2}(\widehat{u(t')\phi e_j})(n)
}{t}
}{\beta_j(t')}\,.
\end{align*}
Then by \eqref{Xsblocalsim} and the assumption $0\le b<\frac{1}{2}$,
the Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}),
and \eqref{IBP-bound}, we have
\begin{align*}
\textup{LHS of } \eqref{stoc-conv-est-mult1}
&\sim \mathbb{E}\left[\norm{\jb{n}^s\jb{\tau}^b
\mathcal{F}[g](n,\tau)}_{L^2_{\tau}\ell^2_n}^{\sigma}\right]\\
&\hspace{-1cm}\lesssim
\mathbb{E}\left[
\brac{
\sum_{j,n\in\mathbb{Z}^d}
\intd{\mathbb{R}}{
\intud{0}{T}{
\jb{n}^{2s}\jb{\tau}^{2b}
\left|\intud{t'}{\infty}{\mathbbm{1}_{[0,T]}(t)e^{-it\tau}}{t}\right|^2
\left| (\widehat{u(t')\phi e_j})(n)\right|^2
}{t'}
}{\tau}
}^{\frac{\sigma}{2}}\right]\\
&\hspace{-1cm}\lesssim
(T^2+1)^\frac{\sigma}{2}\,\mathbb{E}\left[
\brac{
\intud{0}{T}{ \sum_{j,n\in\mathbb{Z}^d}
\jb{n}^{2s}\left| (\widehat{u(t')\phi e_j})(n)\right|^2
}{t'}
}^{\frac{\sigma}{2}}\right] \,.
\end{align*}
If $s>\frac{d}{2}$, we apply the algebra property of $H^s(\mathbb{T}^d)$ to get
\begin{equation*}
\|u(t') \phi e_j\|_{\ell^2_j H^s} \lesssim {\|\phi\|_{\L^2(L^2;H^s)}} \|u(t')\|_{H^s}.
\end{equation*}
If $0\le s\le \frac{d}{2}$, Lemma~\ref{lem3p5} and \eqref{defn:Cphi} give
\begin{equation}
\|u(t') \phi e_j\|_{\ell^2_j H^s} \lesssim C(\phi) \|u(t')\|_{H^s},
\end{equation}
and thus \eqref{stoc-conv-est-mult1} follows.
\end{proof}
Next, we prove the continuity of $\Psi[u](t)$ in the same way as in Lemma~\ref{cts-stoc-conv-add},
i.e. by using Lemma~\ref{fact-meth}.
\begin{lemma}[Continuity of the multiplicative noise]
\label{cts-stoc-conv-mult}
Let $T>0$, $s\ge 0$, $0\le b<\frac{1}{2}$,
and $2\le \sigma <\infty$.
Suppose that $u\in L^{\sigma}\big(\Omega;X^{s,b}([0,T])\big)$
and that
$\phi$ satisfies the assumptions of Theorem~\ref{lwp:mult}.
Then $\Psi[u](\cdot)$ given by \eqref{sect3:Psiu}
belongs to $C([0,T];H^s(\mathbb{T}^d))$ almost surely.
Moreover,
\begin{equation}
\label{stoc-conv-est-mult2}
\mathbb{E}\left[\sup_{t\in[0,T]}\norm{\Psi[u](t)}_{H^s(\mathbb{T}^d)}^{\sigma} \right] \lesssim
C(\phi)^{\sigma} \,
\mathbb{E}\left[\|u\|_{X^{s,b}([0,T])}^{\sigma}\right]\,.
\end{equation}
\end{lemma}
\begin{proof}
Applying the same factorisation procedure as in the proof of Lemma~\ref{cts-stoc-conv-add}
reduces the problem to proving that the process
\[f(t'):=\intud{0}{t'}{(t'-\mu)^{-\alpha}S(t'-\mu)\big[u(\mu)\phi\big]}{W(\mu)}\]
satisfies
\begin{equation}\label{cts-stoc-conv-mult-ref1}
\mathbb{E}\left[\intud{0}{T}{\norm{f(t')}_{H^s_x}^{\sigma}}{t'}\right]\le C'\brac{T,\sigma, C(\phi)} <\infty \,
\end{equation}
for some $0<\alpha<1$ satisfying $\alpha>\frac{1}{\sigma}$.
By the Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}) and Lemma~\ref{lem3p5},
we have
\begin{align*}
\mathbb{E}\left[\norm{f(t')}_{H^s_x}^{\sigma}\right]
&\lesssim \mathbb{E}\left[ \left( \int_0^{t'} \|(t'-\mu)^{-\alpha} S(t'-\mu) [u(\mu)\phi]\|^2_{\L^2(L^2;H^s)} d\mu\right)^{\frac{\sigma}{2}}\right]\\
&= \mathbb{E}\left[ \left( \int_0^{t'} (t'-\mu)^{-2\alpha}
\sum_{j\in\mathbb{Z}^d} \|S(t'-\mu)u(\mu)\phi e_j\|^2_{H^s} d\mu \right)^{\frac{\sigma}{2}}\right]\\
&\lesssim \mathbb{E}\left[\left( \sum_{j\in\mathbb{Z}^d} \|\phi e_j\|^2_{\mathcal{F} L^{s,r}} \int_0^{t'} (t'-\mu)^{-2\alpha}
\|u(\mu)\|^2_{H^s} d\mu \right)^{\frac{\sigma}{2}}\right]\,.
\end{align*}
Then, by Fubini theorem and Minkowski inequality, we obtain
\begin{align*}
\mathbb{E}\left[\int_0^T \norm{f(t')}_{H^s_x}^{\sigma} dt' \right]
&= \Big\|\, \|f\|_{H^s_x}\Big\|^{\sigma}_{L^\sigma(\Omega; L^\sigma_{t'}[0,T])}\\
&\lesssim C(\phi)^{\sigma}\, \bigg\|\,
\Big\| (t'-\mu)^{-\alpha} \|u(\mu)\|_{H^s_x}\Big\|_{L^2_{\mu}((0,T])}
\bigg\|^{\sigma}_{L^\sigma(\Omega; L^\sigma_{t'}[0,T])}\\
&\le C(\phi)^{\sigma}\, \mathbb{E} \Bigg[ \bigg\|\,
\Big\| (t'-\mu)^{-\alpha} \|u(\mu)\|_{H^s_x}\Big\|_{L^{\sigma}_{t'}((0,T])}
\bigg\|^{\sigma}_{L_{\mu}^{2}([0,T])}\Bigg]\\
&\lesssim C(\phi)^{\sigma}\, \mathbb{E} \Bigg[ \Bigg(\int_0^T (T-\mu)^{2(\frac{1}{\sigma}-\alpha)} \|u(\mu)\|_{H^s_x}^2d\mu
\Bigg)^{\frac{\sigma}{2}} \Bigg]
\end{align*}
By H\"{o}lder and Sobolev inequalities and \eqref{Xsblocalsim},
we have
\begin{align*}
\Bigg( \int_0^T (T-\mu)^{2(\frac{1}{\sigma}-\alpha)} \|u(\mu)\|^2_{H_x^s} d\mu \Bigg)^{\frac12}
&\le
\Big\| (T-\mu)^{\frac{1}{\sigma} - \alpha} \Big\|_{L_{\mu}^{\frac{4}{1+2b}}([0,T])}
\Big\| \|u(\mu)\|_{H^s_x}\Big\|_{L_{\mu}^{\frac{4}{1-2b}}([0,T])}\\
&\lesssim T^{1+\frac{4}{1+2b}(\frac{1}{\sigma}-\alpha)}
\Big\|\mathbbm{1}_{[0,T]}(\mu) \|S(-\mu) u(\mu)\|_{H^s_x} \Big\|_{L_{\mu}^{\frac{4}{1-2b}}}\,.
\end{align*}
Choosing $\alpha=\alpha(\sigma):=\frac{1}{\sigma}+\frac14$, which satisfies $\frac{1}{\sigma}<\alpha<1$ since $\sigma\ge 2$, we obtain
\begin{align*}
\mathbb{E}\left[\int_0^T \norm{f(t')}_{H^s_x}^{\sigma} dt' \right] &\lesssim
\mathbb{E}\Big[ T^{\frac{2b\sigma}{1+2b}} \|u\|^{\sigma}_{X^{s,b}([0,T])} \Big] <\infty \,.
\end{align*}
Hence \eqref{cts-stoc-conv-mult-ref1} holds and, as in the proof of Lemma~\ref{cts-stoc-conv-add}, the estimate \eqref{stoc-conv-est-mult2} follows from \eqref{lem3p2:est}.
\end{proof}
\section{Local well-posedness}
\label{sect:LWP}
\subsection{SNLS with additive noise}
\label{sect:LWPa}
In this subsection, we prove Theorem~\ref{lwp:add}.
Let $b=b(k)=\frac{1}{2}-$ be given by Lemma \ref{trilin} (in the case $d=k=1$)
or by Lemma \ref{multilin} (in the case $dk\ge 2$).
By Lemma \ref{stoc-conv-est-add}, for any $T>0$,
there is an event $\Omega'$ of full probability such that
the stochastic convolution $\Psi$ has finite $X^{s,b}([0,T])$-norm on $\Omega'$.
Now fix $\omega\in \Omega'$ and $u_0\in H^s(\mathbb{T}^d)$.
Consider the ball
\[B_R:=\big\{u\in X^{s,b}([0,T]):\|u\|_{X^{s,b}([0,T])}\le R\big\}\]
where $0<T<1$ and $R>0$ are to be determined later.
We aim to show that the operator $\Lambda$ given by
$$ \Lambda u(t)
= S(t) u_0 \pm i\int_0^t S(t-t') \big(|u|^{2k}u\big)(t')dt' - i \Psi(t)\ ,\ t\geq0,\, $$
where $\Psi$ is the additive stochastic convolution given by \eqref{Psia}, is a contraction on $B_R$.
To this end, it remains to estimate the $X^{s,b}([0,T])$-norm of
\begin{equation*}
D(u):=\intud{0}{t}{S(t-t')\big(|u|^{2k}u\big)(t')}{t'} \,.
\end{equation*}
For any $\delta>0$ sufficiently small (such that $b+\delta<\frac12$),
by Lemma \ref{xsb-time-loc} and \eqref{Xsblocalsim}:
\begin{align*}
\left\|D(u)\right\|_{X^{s,b}([0,T])}
\lesssim T^{\delta} \left\|D(u)\right\|_{X^{s,b+\delta}([0,T])}
\lesssim T^\delta\left\|\mathbbm{1}_{[0,T]}(t) D(u)(t) \right\|_{X^{s,\frac{1}{2}+\delta}}.
\end{align*}
Let $\eta$ be a smooth cut-off function, supported on $[-1, T+1]$, with $\eta(t)=1$ for all $t\in [0,T]$.
For any $w\in X^{s,-\frac12+\delta}$ that agrees with
$|u|^{2k}u$ on $[0,T]$,
by Lemma \ref{lem:linests}, we obtain
\begin{align}
\label{sect4p1estDu1}
\left\|\mathbbm{1}_{[0,T]}(t) D(u)(t)\right\|_{X^{s,\frac12+\delta}} &
\lesssim
\left\|\eta(t) \int_0^t S(t-t') w(t') dt' \right\|_{X^{s,\frac{1}{2}+\delta}}
\lesssim \|w\|_{X^{s,-\frac{1}{2}+\delta}}
\end{align}
Then after taking the infimum over all such $w$, we use Lemma \ref{trilin} or \ref{multilin} and we get
\begin{align}
\label{sect4p1estDu2}
\left\|D(u)\right\|_{X^{s,b}([0,T])}
&\lesssim T^\delta\| (u \overline{u})^k u\|_{X^{s,-\frac12+\delta}([0,T])}
\lesssim T^\delta\norm{u}^{2k+1}_{X^{s,b}([0,T])}.
\end{align}
It follows that
\begin{equation}\label{random4}
\norm{\Lambda u}_{X^{s,b}([0,T])}\le c \norm{u_0}_{H^s_x}
+c T^{\delta} \norm{u}^{2k+1}_{X^{s,b}([0,T])} +\xsbt{\Psi(t)}{s}{b}{[0,T]},
\end{equation}
for some $c>0$.
Similarly, we obtain
\begin{equation}\label{random5}
\norm{\Lambda u-\Lambda v}_{X^{s,b}([0,T])}\le c T^{\delta} \brac{\norm{u}^{2k}_{X^{s,b}([0,T])}
+\norm{v}^{2k}_{X^{s,b}([0,T])}}\norm{u-v}_{X^{s,b}([0,T])}.
\end{equation}
Let $R:= 2c \norm{u_0}_{H_x^s}+2\xsbt{\Psi(t)}{s}{b}{[0,T]}$.
From \eqref{random4} and \eqref{random5},
we see that $\Lambda$ is a contraction from $B_R$ to $B_R$ provided
\begin{equation}\label{random6}
cT^\delta R^{2k+1}\le\frac{1}{2}R \ \mbox{ and } \ c T^{\delta} \brac{2R^{2k}}\le\frac{1}{2}\,.
\end{equation}
This is always possible if we choose $T\ll 1$ sufficiently small.
This shows the existence of a unique solution $u\in X^{s,b}([0,T])$ to \eqref{SNLS-mild} on $\Omega'$.
Finally, we check that $u\in C([0,T]; H^s)$ on the set of full probability $\Omega''\cap \Omega'$,
where $\Omega''$ is given by Lemma \ref{cts-stoc-conv-add},
that is $\Psi\in C([0,T];H^s)$ on $\Omega''$.
By \eqref{Xsblocalsim}, \eqref{sect4p1estDu1} and Lemma \ref{trilin} or \ref{multilin}, we also get
\begin{equation}
\xsbt{D(u)}{s}{\frac{1}{2}+\delta}{[0,T]}\lesssim \left\|\mathbbm{1}_{[0,T]}(t) D(u)(t)\right\|_{X^{s,\frac12+\delta}}
\lesssim \xsbt{u}{s}{b}{[0,T]}^{2k+1} .
\end{equation}
By the embedding $X^{s,\frac{1}{2}+\delta}([0,T]) \hookrightarrow C([0,T];H^s(\mathbb{T}^d))$,
we have $D(u)\in C([0,T];H^s(\mathbb{T}^d))$.
Since the linear term $S(t)u_0$ also belongs to $C([0,T];H^s(\mathbb{T}^d))$,
we conclude that
$$u=\Lambda u\in C\big([0,T];H^s(\mathbb{T}^d)\big) \text{ on } \Omega''\cap \Omega'.$$
\begin{remark}
From \eqref{random6}, we obtain the time of existence
\begin{equation}
\label{time-of-existence-a}
T_{\text{max}}:=\max\bigg\{\tilde{T}>0:
\tilde{T}\le c\Big(\norm{u_0}_{H^s}+\norm{\Psi}_{X^{s,b}([0,\tilde{T}])}\Big)^{-\theta}\bigg\}\,,
\end{equation}
where $\theta=\frac{2k}{\delta}$; indeed, with $R\sim \norm{u_0}_{H^s}+\norm{\Psi}_{X^{s,b}([0,\tilde{T}])}$, the conditions \eqref{random6} amount to $\tilde{T}^{\delta}R^{2k}\lesssim 1$, i.e. $\tilde{T}\lesssim R^{-2k/\delta}$. Note that \eqref{time-of-existence-a} will be useful in our global argument.
\end{remark}
\subsection{SNLS with multiplicative noise}\label{sect:LWPm}
In this subsection, we prove Theorem~\ref{lwp:mult}.
Following \cite{debouard-debussche-kdv},
we use a truncated version of \eqref{SNLS-mild}.
The main idea is to apply an appropriate cut-off function on the nonlinearity to obtain a family of truncated SNLS, and then prove global well-posedness of these truncated equations.
Since solutions started with the same initial data coincide up to suitable stopping times,
we obtain a solution to the original SNLS in the limit.
Let $\eta:\mathbb{R}\to[0,1]$ be a smooth cut-off function such that $\eta\equiv 1$ on $[0,1]$ and $\eta\equiv 0$ outside $[-1,2]$.
Set $\eta\sub{R}:=\eta\brac{\frac{\cdot}{R}}$ and consider the equation
\begin{equation}\label{SNLSm-R}
i\partial_tu\sub{R} -\Delta u\sub{R} \pm \eta_R\big(\xsbt{u_R}{s}{b}{[0,t]}\big)^{2k+1}|{u}\sub{R}|^{2k} u\sub{R} = u_R\cdot\phi\xi\,,
\end{equation}
with initial data $u_R|_{t=0} = u_0$.
Its mild formulation is $u_R=\Lambda_R u_R$, where $\Lambda_R$ is given by
\begin{align}
\label{SNLS-R-mild}
\Lambda\sub{R} u_R&:= S(t)u_0
\pm i\intud{0}{t}{S(t-t')\eta_R\brac{\xsbt{u_R}{s}{b}{[0,t']}}^{2k+1}|u\sub{R}|^{2k}u\sub{R}(t')}{t'}-i\Psi[u\sub{R}](t)\,.
\end{align}
The key ingredient for Theorem~\ref{lwp:mult} is the following proposition.
\begin{proposition}[Global well-posedness for \eqref{SNLSm-R}]\label{gwp-mult-R}
Let $s>s_{\text{crit}}$, $s\ge 0$, and $T, R>0$. Suppose that $\phi$ is as in Theorem~\ref{lwp:mult}.
Given $u_0\in H^s(\mathbb{T}^d)$, there exists $b=\frac{1}{2}-$
and a unique adapted process
$$u\sub{R}\in L^2\Big(\Omega;C\big([0,T];H^s(\mathbb{T}^d)\big)\cap X^{s,b}([0,T])\Big)$$
solving \eqref{SNLSm-R} on $[0,T]$.
\end{proposition}
Before proving this result, we state and prove the following lemma.
\begin{lemma}[Boundedness of cut-off]\label{cutoff-bdd}
Let $s\ge0$, $b\in[0,\frac{1}{2})$, $R>0$ and $T>0$.
There exist constants $C_1, C_2(R)>0$ such that
\begin{align}
\xsbt{\eta_{R}
\brac{
\xsbt{u}{s}{b}{[0,t]}
}
u(t)
}{s}{b}{[0,T]}&\le \min\left\{C_1\xsbt{u}{s}{b}{[0,T]}, C_2(R)\right\}\,;\label{cutoff-bb1}
\end{align}
\begin{align}
\xsbt{\eta_{R}\brac{\xsbt{u}{s}{b}{[0,t]}}u(t)
-\eta_{R}\brac{\xsbt{v}{s}{b}{[0,t]}}
v(t)
}{s}{b}{[0,T]}&\le C_2(R)\xsbt{u-v}{s}{b}{[0,T]}\,.\label{cutoff-bb2}
\end{align}
\end{lemma}
\begin{proof}
We first prove \eqref{cutoff-bb1}. Let $w(t,n)=\mathcal{F}_x[S(-t)u(t)](n)$, $\kappa\sub{R}(t)=\eta\sub{R}
\brac{\xsbt{u}{s}{b}{[0,t]}}$ and
\begin{equation}\label{tauR}
\tau\sub{R}:=\inf\left\{t\ge 0: \xsbt{u}{s}{b}{[0,t]}\ge 2R\right\}\,.
\end{equation}
Then $\kappa\sub{R}(t)=0$ when $t>\tau\sub{R}$, since $t\mapsto\xsbt{u}{s}{b}{[0,t]}$ is non-decreasing and $\eta\equiv 0$ outside $[-1,2]$. By \eqref{Xsblocalsim} and \eqref{Xsb-interact-rep},
\begin{align}
\xsbt{\kappa\sub{R}(t)
u(t)
}{s}{b}{[0,T]}^2
&\sim
\xsb{\mathbbm{1}_{[0,T\wedge\tau\sub{R}]}\kappa\sub{R}(t)u(t)}{s}{b}^2
\sim \xsbt{\kappa\sub{R}(t)u(t)
}{s}{b}{[0,T\wedge\tau\sub{R}]}^2 \notag\\
&\sim\sum_{n\in\mathbb{Z}^d}\jb{n}^{2s}\norm{\kappa\sub{R}(t)
w(t,n)}^2_{H^b(0,T\wedge\tau\sub{R})}.
\label{random7}
\end{align}
We now estimate the $H^b(0,T\wedge\tau\sub{R})$-norm, for which we use the following characterization (see for example \cite{tartar-sobolev}):
\begin{align}
\norm{f}_{H^b(a_1,a_2)}^2\sim\norm{f}_{L^2(a_1,a_2)}^2+
\intud{a_1}{a_2}{
\intud{a_1}{a_2}{\frac{|f(x)-f(y)|^2}{|x-y|^{1+2b}}}{x}
}{y}\ , \quad 0<b<1.
\label{sobolev-char}
\end{align}
For the inhomogeneous contribution (i.e. coming from the $L^2$-norm above),
we have
\begin{align*}
\sum_{n\in\mathbb{Z}^d}\jb{n}^{2s}\norm{\kappa\sub{R}(t)w(t,n)}^2_{L^2_{t}(0,T\wedge\tau\sub{R})}
&\le \min\left\{\xsbt{u}{s}{b}{[0,\tau\sub{R}]}^2, \xsbt{u}{s}{b}{[0,T]}^2\right\}\\
& \le \min\left\{\brac{2R}^2, \xsbt{u}{s}{b}{[0,T]}^2\right\}.
\end{align*}
The remaining part of \eqref{random7} needs a bit more work. Fix $n\in\mathbb{Z}^d$, then
\begin{align*}
\hspace{2em}&\hspace{-2em}
\intud{0}{T\wedge\tau\sub{R}}{
\intud{0}{T\wedge\tau\sub{R}}{\frac{|\kappa\sub{R}(t)w(t,n)
-\kappa\sub{R}(t')w(t',n)|^2
}{|t-t'|^{1+2b}}}{t'}
}{t}\\
&\lesssim\intud{0}{T\wedge\tau\sub{R}}{
\intud{0}{t}{\frac{|\kappa\sub{R}(t)(w(t,n)
-w(t',n))|^2
}{|t-t'|^{1+2b}}}{t'}
}{t}\\
&\quad\quad\quad+\intud{0}{T\wedge\tau\sub{R}}{
\intud{0}{t}{\frac{|(\kappa\sub{R}(t)-\kappa\sub{R}(t'))w(t',n)|^2
}{|t-t'|^{1+2b}}}{t'}
}{t}\\
&=: \mathrm{I}(n)+\mathrm{I\!I}(n)\,.
\end{align*}
It is clear that
$$\mathrm{I}(n)\lesssim \min\left\{\norm{w(n)}_{H^b((0,\tau\sub{R}))}^2, \norm{w(n)}_{H^b((0,T))}^2 \right\}\,,$$
and hence
\[\sum_{n \in\mathbb{Z}^d}\mathrm{I}(n)\lesssim \min\left\{\brac{2R}^2, \xsbt{u}{s}{b}{[0,T]}^2\right\}.\]
For $\mathrm{I\!I}(n)$, the mean value theorem implies that
\begin{align*}
\left|\kappa_R(t)-\kappa_R(t')\right|^2
&\lesssim\frac{\left(\xsbt{u}{s}{b}{[0,t]}-\xsbt{u}{s}{b}{[0,t']}\right)^2}{R^2}\left(\sup_{r\in\mathbb{R}}|\eta'(r)|\right)^2\\
&\lesssim \frac{\xsb{\mathbbm{1}_{[t',t]}u}{s}{b}^2}{R^2}\\
&\lesssim \frac{1}{R^2}\sum_{n'\in\mathbb{Z}^d}\jb{n'}^{2s}\|{w(\cdot,n')}\|^2_{H^b(t',t)} .
\end{align*}
Again, we split $\|{w(\cdot,n')}\|^2_{H^b(t',t)}$ using \eqref{sobolev-char}
into the inhomogeneous contribution
(the $L^2$-norm squared part) and the homogeneous contribution (the second term of \eqref{sobolev-char}).
We control here only the homogeneous contributions for $\mathrm{I\!I}(n)$ as the inhomogeneous contributions are easier.
The homogeneous part of $\mathrm{I\!I}(n)$ is controlled by
\begin{align}
\hspace{2em}&\hspace{-2em}
\frac{1}{R^2}\sum_{n'\in\mathbb{Z}^d}\jb{n'}^{2s}
\intud{0}{T\wedge\tau\sub{R}}{
\intud{0}{t}{
\intud{t'}{t}{
\intud{t'}{\lambda}{
\frac{|w(t',n)|^2}{|t-t'|^{1+2b}}\cdot\frac{|w(\lambda,n')-w(\lambda',n')|^2}{|\lambda-\lambda'|^{1+2b}}
}{\lambda'}
}{\lambda}
}{t'}
}{t}\\
&=
\frac{1}{R^2}\sum_{n'\in\mathbb{Z}^d}\jb{n'}^{2s}
\int_0^{T\wedge\tau\sub{R}}\,\int^\lambda_0\,\int^{\lambda'}_0\,\brac{\intud{\lambda}{T\wedge\tau\sub{R}}{\frac{1}{|t-t'|^{1+2b}}}{t}}|w(t',n)|^2\notag\\
&\hspace{5.4cm}\times\frac{|w(\lambda,n')-w(\lambda',n')|^2}{|\lambda-\lambda'|^{1+2b}}\,dt'\,d\lambda'\,d\lambda\,,\label{random8}
\end{align}
where we used $0\le t'\le \lambda'\le \lambda\le t\le T\wedge \tau\sub{R}$ to switch the integrals. Now, the integral with respect to $t$ equals $\frac{1}{2b}\big(|\lambda-t'|^{-2b}-|T\wedge\tau\sub{R}-t'|^{-2b}\big)$, which is bounded by
\[\tfrac{1}{2b}\,|\lambda-t'|^{-2b}\le \tfrac{1}{2b}\,|\lambda'-t'|^{-2b}\,.\]
Thus \eqref{random8} is controlled by
\begin{align}
\frac{1}{R^2}\sum_{n'\in\mathbb{Z}^d}\jb{n'}^{2s}
&\int_{0}^{T\wedge\tau\sub{R}}\,
\int_{0}^{\lambda}\,
\brac{\intud{0}{\lambda'}{
|\lambda'-t'|^{-2b}
|w(t',n)|^2
}{t'}}\notag\\
&\hspace{1.8cm}\times\frac{|w(\lambda,n')-w(\lambda',n')|^2}{|\lambda-\lambda'|^{1+2b}}
{\,d\lambda'}
{\,d\lambda}\,.\label{random9}
\end{align}
Since $b\in\left[0,\frac{1}{2}\right)$,
by Hardy's inequality (see for example \cite[Lemma A.2]{tao-book}) the $t'$-integral is
$\lesssim \norm{w(\cdot,n)}_{H^b(0,\lambda')}^2\le \norm{w(\cdot,n)}_{H^b(0,T\wedge \tau\sub{R})}^2$. After multiplying by $\jb{n}^{2s}$ and summing over $n\in\mathbb{Z}^d$, we see that (\ref{random9}) is controlled by
\begin{align*}
&\frac{1}{R^2}\sum_{n,n'\in\mathbb{Z}^d}\jb{n}^{2s}\jb{n'}^{2s}\norm{w(\cdot,n)}_{H^b(0,T\wedge\tau\sub{R})}^2\norm{w(\cdot,n')}_{H^b_\lambda(0,T\wedge\tau\sub{R})}^2\\
&\hspace{2cm}\lesssim \frac{1}{R^2}\xsbt{u}{s}{b}{[0,T\wedge\tau\sub{R}]}^2\xsbt{u}{s}{b}{[0,T\wedge\tau\sub{R}]}^2\\
&\hspace{2cm}\le \min\left\{4\xsbt{u}{s}{b}{[0,T]}^2,16R^2\right\}\,.
\end{align*}
We now prove \eqref{cutoff-bb2}. Let $\tau^u_R$ and $\tau^v_R$ be defined as in \eqref{tauR}. Assume without loss of generality that $\tau^u_R\le\tau^v_R$. We decompose
\begin{align*}
\text{LHS of }\eqref{cutoff-bb2}
&\lesssim
\xsbt{\brac{\eta_{R}\brac{\xsbt{u}{s}{b}{[0,t]}}
-\eta_{R}\brac{\xsbt{v}{s}{b}{[0,t]}}}
v(t)}{s}{b}{[0,T]}\\
&\quad\quad\quad+\xsbt{\eta_{R}\brac{\xsbt{u}{s}{b}{[0,t]}}\brac{u(t)-v(t)}
}{s}{b}{[0,T]}\\
&=:A+B\,.
\end{align*}
By the mean value theorem,
\begin{align*}
A&= \xsbt{\brac{\eta_{R}\brac{\xsbt{u}{s}{b}{[0,t]}}
-\eta_{R}\brac{\xsbt{v}{s}{b}{[0,t]}}}
v(t)}{s}{b}{[0,T\wedge\tau_R^v]}\\
&\lesssim \frac{1}{R}\xsbt{v}{s}{b}{[0,T\wedge\tau^v_R]}\xsbt{u-v}{s}{b}{[0,T]}\\
&\lesssim \xsbt{u-v}{s}{b}{[0,T]}\,.
\end{align*}
For $B$, one runs through the same argument as for \eqref{cutoff-bb1}
but with $w(t,n)$ replaced by $\mathcal{F}_x\big[S(-t)\big(u(t)-v(t)\big)\big](n)$, which yields
\[B\lesssim C(R)\xsbt{u-v}{s}{b}{[0,T]}\,.\qedhere\]
\end{proof}
We now conclude the proof of Proposition \ref{gwp-mult-R}.
\begin{proof}[Proof of Proposition \ref{gwp-mult-R}]
Let $T,R>0$ and let $E\sub{T}:=L_{\textup{ad}}^2\brac{\Omega;X^{s,b}([0,T])}$ be the space of adapted processes in $L^2\brac{\Omega;X^{s,b}([0,T])}$.
We solve the fixed point problem \eqref{SNLS-R-mild} in $E_T$. Arguing as in the additive case, and using Lemmata \ref{cutoff-bdd} and \ref{stoc-conv-est-mult}, we have
\begin{align*}
\norm{\Lambda\sub{R} u}_{E_T}&\le C_1 \norm{u_0}_{H^s}+C_2(R)T^{\delta} +C_3T^{b}\norm{u}_{E_T}\,;\\
\norm{\Lambda\sub{R} u-\Lambda\sub{R} v}_{E_T}&\le C_4(R) T^\delta\norm{u-v}_{E_T}+C_5T^b\norm{u-v}_{E_T}\,.
\end{align*}
Therefore, $\Lambda\sub{R}$ is a contraction from $E_T$ to $E_T$ provided we choose $T=T(R)$ sufficiently small. Thus there exists a unique solution $u_R\in E_T$. Note that $T$ does not depend on $\norm{u_0}_{H^s}$, hence we may iterate this argument to extend $u_R(t)$ to all $t\in [0,\infty)$.
Finally, to see that
$u\sub{R}\in F_T:=L^2\big(\Omega; C([0,T];H^s(\mathbb{T}^d))\big)$, we first note that since $u_R\in E_T$, Lemma \ref{cts-stoc-conv-mult} implies that $\Psi[u_R]\in F_T$. Then, by a similar argument to that at the end of Subsection \ref{sect:LWPa}, we have that $D(u_R)\in L^2(\Omega;X^{s,\frac{1}{2}+}\big([0,T]\big))$, where
\[D(u_R)(t):=\int_0^t S(t-t')\big(|u_R|^{2k}u_R\big)\,dt'\,.\]
Since $L^2\big(\Omega; X^{s,\frac12+}([0,T])\big)\hookrightarrow F_T$, we have $D(u_R)\in F_T$.
Also, it is clear that $S(t)u_0\in F_T$. Hence $u_R\in F_T$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{lwp:mult}]
Let
\begin{equation}
\label{defn:tauR4p17}
\tau\sub{R}:=\inf\big\{t>0:\xsbt{u\sub{R}}{s}{b}{[0,t]}\ge R\big\}.
\end{equation}
Then, $\eta\sub{R}(\xsbt{u\sub{R}}{s}{b}{[0,t]})=1$ if and only if $t\le\tau\sub{R}$.
Hence $u\sub{R}$ is a solution of \eqref{SNLS-mild} on $[0,\tau\sub{R}]$.
For any $\delta>0$, we have $u\sub{R}(t)=u\sub{R+\delta}(t)$ whenever $t\in[0,\tau\sub{R}]$. Consequently, $\tau\sub{R}$ is increasing in $R$. Indeed, if $\tau\sub{R}>\tau\sub{R+\delta}$ for some $R>0$ and some $\delta>0$, then for $t\in[\tau\sub{R+\delta},\tau\sub{R}]$,
we have $\eta\sub{R+\delta}\big(\xsbt{u\sub{R+\delta}}{s}{b}{[0,t]}\big)<1$ which implies that $u\sub{R}(t)\ne u\sub{R+\delta}(t)$, a contradiction. Therefore,
\begin{equation}
\tau^*:=\lim_{R\to\infty}\tau\sub{R}
\end{equation}
is a well-defined stopping time that is either positive or infinite almost surely. By defining $u(t):=u\sub{R}(t)$ for each $t\in[0,\tau\sub{R}]$, we see that $u$ is a solution of \eqref{SNLS-mild} on $[0,\tau^*)$ almost surely.
\end{proof}
\section{Global well-posedness}
\label{sect:GWP}
In this section, we prove Theorems \ref{gwp:add} and \ref{gwp:mult}.
Recall that the \emph{mass} and \emph{energy} of a solution $u(t)$
of the defocusing
\eqref{SNLS} are given respectively by
\begin{align}
M(u(t))&=\intd{\mathbb{T}^d}{\frac12 |u(t,x)|^2}{x} , \label{mass}\\
E(u(t))&=\int_{\mathbb{T}^d} \frac{1}{2} |\nabla u(t,x)|^2 +\frac{1}{2(k+1)} |u(t,x)|^{2(k+1)} dx . \label{energy}
\end{align}
It is well-known that these are conserved quantities for (smooth enough)
solutions of the deterministic NLS equation.
For SNLS, we prove probabilistic a priori control
as per Propositions~\ref{apriori-est-M-a}
and \ref{apriori-est-M-m} below.
To this purpose,
the idea is to compute the stochastic differentials of \eqref{mass} and \eqref{energy}
and use the stochastic equation for $u$.
We work with the following frequency truncated version of \eqref{SNLS}:
\begin{equation}
\label{SNLS_N}
\begin{cases}
i\partial_t u^N - \Delta u^N \pm P_{\leq N} |u^N|^{2k}u^N = F(u^N,\phi^N dW^N),\\
u^N|_{t=0} = P_{\le N} u_0=: u_0^N
\end{cases}
\end{equation}
where $P_{\leq N}$ is the Littlewood-Paley projection onto the frequency set
$\{n\in\mathbb{Z}^d:|n|\le N\}$,
$$\phi^N:= P_{\leq N}\circ \phi\ \text{ and }\ W^N(t):= \sum_{|n|\leq N} \beta_n(t) e_n.$$
By repeating the arguments in Section~\ref{sect:LWP},
one obtains local well-posedness for \eqref{SNLS_N} with initial data $P_{\le N}u_0$ at least with the same time of existence as for the untruncated SNLS.
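Let us also note why the estimates for \eqref{SNLS_N} are uniform in $N$: the projections $P_{\le N}$ are Fourier multipliers with symbol bounded by $1$, hence bounded with norm at most $1$ on $H^s(\mathbb{T}^d)$, on $\mathcal{F} L^{s,r}(\mathbb{T}^d)$, and on the $X^{s,b}$-spaces. In particular $\|u_0^N\|_{H^s}\le\|u_0\|_{H^s}$ and $\|\phi^N\|_{\L^2(L^2;H^s)}\le\|\phi\|_{\L^2(L^2;H^s)}$, so all the constants appearing in the fixed point arguments of Section~\ref{sect:LWP} may be taken independently of $N$.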
\subsection{SNLS with additive noise}
We treat the additive SNLS in this subsection. We first prove probabilistic a priori bounds on \eqref{mass} and \eqref{energy} of a solution $u^N$ of the truncated equation.
\begin{proposition}\label{apriori-est-M-a}
Let $m\in\mathbb{N}$, $T_0>0$, $\phi\in\L^2(L^2(\mathbb{T}^d);L^2(\mathbb{T}^d))$, and $F(u,\phi\xi)=\phi\xi$.
Suppose that $u^N(t)$ is a solution to \eqref{SNLS_N} for $t\in[0,T]$,
for some stopping time $T\in[0,T_0]$.
Then there exists a constant
$C_1=C_1(m,M(u_0), T_0, \|\phi\|_{\L^2(L^2;L^2)})>0$ such that
\begin{align}
\label{l2-est-a}
\mathbb{E}\left[\sup_{0\le t\le T} M(u^N(t))^m \right]
\le C_1\,.
\end{align}
Furthermore, if \eqref{SNLS_N} is defocusing,
there exists $C_2=C_2(m,E(u_0), T_0, \|\phi\|_{\L^2(L^2;H^1)})>0$ such that
\begin{align}
\label{h1-est-a}\mathbb{E}\left[\sup_{0\le t\le T} E(u^N(t))^m \right]
&\le C_2\,.
\end{align}
The constants $C_1$ and $C_2$ are independent of $N$.
\end{proposition}
\begin{proof}
By applying It\^o's Lemma, we have
\begin{align}
M(u^N(t))^m&=M(u_0^N)^m\notag\\
&\phantom{=}+
m\,\Im\brac{\sum_{|j|\le N}\int_{0}^{t}{M(u^N(t'))^{m-1}\int_{\mathbb{T}^d}\overline{u^N(t')}\phi^Ne_j\,dx}{\,d\beta_j(t')}}\label{mass-a1}\\
&\phantom{=}+m(m-1)\sum_{|j|\le N}\int_{0}^{t}{M(u^N(t'))^{m-2}\left|\int_{\mathbb{T}^d}u^N(t')\phi^Ne_j\,dx\right|^2}{\,dt'}\label{mass-a2}\\
&\phantom{=}+m\norm{\phi^N}^2_{\L^2(L^2;L^2)}\int_0^t{M(u^N(t'))^{m-1}}{\,dt'}\label{mass-a3},
\end{align}
the last term being the It\^o correction term.
We first control \eqref{mass-a1}.
By Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}),
H\"older and Young inequalities, we get
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{mass-a1}\right]
&\lesssim_m \mathbb{E}\left[ \left\{\sum_{|j|\leq N}\int_0^T M(u^N(t'))^{2(m-1)} \|u^N(t')\|_{L^2}^2 \|\phi^Ne_j\|_{L^2}^2 dt' \right\}^\frac12 \right]\\
&\lesssim {\|\phi^N\|}_{\L^2(L^2;L^2)} \, \mathbb{E}\left[ \left\{ \int_0^T M(u^N(t))^{2m-1}dt \right\}^\frac12 \right]\\
&\lesssim{\|\phi\|}_{\L^2(L^2;L^2)} T^\frac12\, \mathbb{E}\left[ \left\{\sup_{t\in[0,T]} M(u^N(t))^{m-1} \right\}^\frac12
\left\{\sup_{t\in[0,T]} M(u^N(t))^{m} \right\}^\frac12 \right]\\
&\lesssim{\|\phi\|}_{\L^2(L^2;L^2)} T_0^\frac12 \left\{ \mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m-1} \right]\right\}^\frac12
\left\{\mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m} \right]\right\}^\frac12
\end{align*}
Hence by Young's inequality, we infer that
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{mass-a1}\right]
\leq C_m{\|\phi\|}^2_{\L^2(L^2;L^2)} T_0\, \mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m-1} \right]
+ \frac12\,\mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m} \right].
\end{align*}
In a straightforward way, we also have
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{mass-a2}\right]
&\leq m(m-1) {\|\phi\|}^2_{\L^2(L^2;L^2)} T_0\, \mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m-1} \right] ,\\
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{mass-a3}\right]
&\leq 2m {\|\phi\|}^2_{\L^2(L^2;L^2)} T_0 \,\mathbb{E}\left[ \sup_{t\in[0,T]} M(u^N(t))^{m-1} \right] .
\end{align*}
Therefore, there is some $C_m>0$ such that
\begin{gather}
\label{apriori-est-a-M1}
\begin{split}
\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]&\le M(u_0)^m
+ C_mT_0 \,\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^{m-1}\right]\\
&\qquad +\frac{1}{2}\,\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right] .
\end{split}
\end{gather}
We now wish to move the last term of \eqref{apriori-est-a-M1} to the left-hand side. However, we do not know a priori that the moments of $\sup_{t\in[0,T]} M(u^N(t))$ are finite. To justify this, we note that \eqref{apriori-est-a-M1} holds with $T$ replaced by $T_R$, where
$$T_R:= \sup\left\{t\in [0,T] : M(u^N(t))\leq R\right\},\quad R>0.$$
With this replacement, all the terms appearing in \eqref{apriori-est-a-M1} are finite,
and hence the formal manipulation is justified.
Note that $T_R\to T$ almost surely as $R\to\infty$ because $u$ (and hence $u^N$) belongs to $C([0,T];H^s(\mathbb{T}^d))$ almost surely. Hence, by letting $R\to\infty$ and invoking the monotone convergence theorem, one finds
\begin{equation}
\label{apriori-est-M-fin}
\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]\le 2M(u_0)^m
+ 2C_mT_0 \,\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^{m-1}\right] .
\end{equation}
Hence, by induction on $m$, we obtain
\begin{equation}\label{mass-est-a}
\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]\lesssim 1\,,
\end{equation}
where we note that the implicit constant is independent of $N$.
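For the reader's convenience, we indicate how the induction runs; this is only a restatement of \eqref{apriori-est-M-fin} and not an additional estimate. The base case and the inductive step read
\begin{align*}
\mathbb{E}\Big[\sup_{t\in[0,T]} M(u^N(t))\Big]&\le 2M(u_0)+2C_1T_0\,,\\
\mathbb{E}\Big[\sup_{t\in[0,T]} M(u^N(t))^{m}\Big]&\le 2M(u_0)^{m}+2C_mT_0\,\mathbb{E}\Big[\sup_{t\in[0,T]} M(u^N(t))^{m-1}\Big]\lesssim 1\,,
\end{align*}
where the last bound uses the inductive hypothesis at order $m-1$.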
We now turn to estimating the energy. Applying It\^o's Lemma again,
we find that $E(u^N(t))^m$ equals
\begin{align}
&E(u^N_0)^m\label{energy-a0}\\
&\phantom{=}+ m\,\Im\brac{\sum_{|j|\leq N}\int_{0}^{t}{
E(u^N(t'))^{m-1} \int_{\mathbb{T}^d} {|u^N|^{2k}u^N\phi^N e_j}{\,dx}
}{\,d\beta_j(t')}} \label{energy-a1}\\
&\phantom{=}-m\,\Im\brac{\sum_{|j|\leq N}\int_{0}^{t}{E(u^N(t'))^{m-1}
\int_{\mathbb{T}^d}\Delta \overline{u^N}\phi^Ne_j \,dx} \,d\beta_j(t')
}\label{energy-a2}\\
&\phantom{=}+(k+1)m\sum_{|j|\leq N}{\int_0^t E(u^N(t'))^{m-1}
\int_{\mathbb{T}^d} {|u^N|^{2k}|\phi^N e_j|^2 \,dx\, dt'
}}\label{energy-a3}\\
&\phantom{=}+m\norm{\nabla \phi^N}^2_{\L^2(L^2;L^2)}\intud{0}{t}{E(u^N(t'))^{m-1}}{t'}\label{energy-a4}\\
&\phantom{=}+ \frac{m(m-1)}{2}\sum_{|j|\le N}\int_{0}^{t}{E(u^N(t'))^{m-2}
\left|\int_{\mathbb{T}^d}{(-\Delta \overline{u^N}+ |u^N|^{2k}\overline{u^N})\phi e_jdx}\right|^2
}{dt'}\label{energy-a5}.
\end{align}
\noindent
We shall control here only the difficult term \eqref{energy-a1}
as the other terms are bounded by similar lines of argument.
Firstly, by Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}),
we deduce
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a1}\right]
&\le C_m \mathbb{E}\left[\left\{\sum_{|j|\le N}\int_{0}^{T}
E(u^N(t'))^{2(m-1)}\left|\int_{\mathbb{T}^d}{|u^N|^{2k}u^N\phi^N e_j}{\,dx}\right|^2
{\,dt'}\right\}^\frac12\right] .
\end{align*}
Then, by duality and the (dual of the) Sobolev embedding $ H^1(\mathbb{T}^d) \hookrightarrow L^{2k+2}(\mathbb{T}^d)$, we have
\begin{align*}
\left|\int_{\mathbb{T}^d}{|u^N|^{2k}u^N\phi^N e_j}{\,dx}\right| &\leq
\left\| |u^N|^{2k}u^N\right\|_{H^{-1}(\mathbb{T}^d)} \|\phi^Ne_j\|_{H^1(\mathbb{T}^d)}\\
&\lesssim \left\| |u^N|^{2k}u^N\right\|_{L^{\frac{2k+2}{2k+1}}(\mathbb{T}^d)} \|\phi e_j\|_{H^1(\mathbb{T}^d)}\\
&\lesssim E(u^N)^{\frac{2k+1}{2k+2}} \|\phi e_j\|_{H^1(\mathbb{T}^d)},
\end{align*}
provided that $1+\frac1k \geq \frac{d}{2}$.
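For completeness, let us record where this restriction comes from (a routine scaling computation, included only for the reader's convenience): for $d\ge 3$ the embedding $H^1(\mathbb{T}^d)\hookrightarrow L^{2k+2}(\mathbb{T}^d)$ requires
\begin{equation*}
\frac{1}{2k+2}\ \ge\ \frac12-\frac1d
\quad\Longleftrightarrow\quad
d\ \le\ 2+\frac2k
\quad\Longleftrightarrow\quad
1+\frac1k\ \ge\ \frac d2\,,
\end{equation*}
while for $d\le 2$ the embedding (and hence the condition) holds automatically.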
Therefore, by H\"older and Young inequalities, and similarly to the control of \eqref{mass-a1},
we have
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a1}\right] &\leq
C_m{\|\phi\|}^2_{\L^2(L^2;H^1)} T_0 \mathbb{E}\left[ \sup_{t\in[0,T]} E(u^N(t))^{m-1} \right]
+ \frac18\mathbb{E}\left[ \sup_{t\in[0,T]} E(u^N(t))^{m-\frac{1}{2k+2}} \right]\\
&\leq \tilde{C}_m{\|\phi\|}^2_{\L^2(L^2;H^1)} T_0 \mathbb{E}\left[ \sup_{t\in[0,T]} E(u^N(t))^{m-1} \right]
+ \frac18\mathbb{E}\left[ \sup_{t\in[0,T]} E(u^N(t))^{m} \right],
\end{align*}
where in the last step we used interpolation.
We also have
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a2}\right] &\le C_m\norm{\phi}_{\L^2(L^2;H^1)}\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^{m-1}\right]+\frac18\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]\\
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a3}\right] &\le C_m\norm{\phi}_{\L^2(L^2;H^1)}^2+\frac18\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N)^m\right] \\
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a4}\right] &\le
C_m {\|\phi\|}_{\L^2(L^2;H^1)}^2 \mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^{m-1}\right] ,\\
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-a5}\right] &\le C\norm{\phi}^2_{\L^2(L^2;H^1)}
+\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^{m-1}\right]+\frac18\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right].
\end{align*}
Gathering all the estimates, there exists $C_m>0$ such that
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]&\le E(u_0)^m + C_{m} T_0\,\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^{m-1}\right]
+\frac{1}{2}\,\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right].
\end{align*}
Similarly to passing from \eqref{apriori-est-a-M1} to \eqref{apriori-est-M-fin} and
by induction on $m$, we deduce that
\begin{equation}\label{energy-est-a}
\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]\lesssim 1 ,
\end{equation}
with constant independent of $N$.
\end{proof}
We now argue that the probabilistic a priori bounds in fact hold for solutions of the original SNLS.
\begin{corollary}\label{apriori-est-H1-L2}
For $u$ solution to \eqref{SNLS} with \eqref{add-noise},
the estimates
\eqref{l2-est-a} and \eqref{h1-est-a} hold with $u$ in place of $u^N$
under the same assumptions as Proposition~\ref{apriori-est-M-a}.
\end{corollary}
\begin{proof}
Let $\Lambda^N$ be the mild formulation of \eqref{SNLS_N}, more precisely,
\begin{equation}\label{mild-N}
\Lambda^N(v):=S(t)u_0^N\pm i\int_0^tS(t-t')P_{\le N}\left(|v|^{2k}v\right)(t')\,dt'-i\int_0^tS(t-t')\phi^N\,dW^N(t')\,.
\end{equation}
Then $\Lambda^N$ is a contraction on a ball in $X^{1,\frac12-}([0,T])$ and has a unique fixed point $u^N$ that satisfies the bounds in Proposition \ref{apriori-est-M-a}. Hence it suffices to show that $u^N$ in fact converges to $u$ in $F_T:=L^2(\Omega;C([0,T]; H^s_x))$ for $s=0,1$. We only show $s=1$ since the proof of $s=0$ is the same. To this end, we consider the mild formulations of $u^N$ and $u$ and show that each piece of $u^N$ converges to the corresponding piece in $u$. Clearly, $S(t)u_0^N\to S(t)u_0$ in $F_T$. For the noise, let $\Psi^N(t)$ denote the stochastic convolution in \eqref{mild-N}. Then
\begin{align*}
\Psi(t)-\Psi^N(t) &=\left(
\sum_{|n|>N}\sum_{j\in\mathbb{Z}^d}+
\sum_{|n|\le N}\sum_{|j|>N}
\right) e_n\int_0^t e^{i(t-t')|n|^2} \widehat{\phi e_j} (n) d\beta_j(t')\\
&= \int_0^t S(t-t')P_{>N}\phi \,dW(t')+ \int_0^t S(t-t')\pi_{N} P_{\le N}\phi \,dW(t')\,,
\end{align*}
where $\pi_N$ denotes the projection onto the linear span of the orthonormal vectors $\{e_j: |j|>N\}$. By Lemma \ref{cts-stoc-conv-add}, the above is controlled by $$\norm{P_{>N}\circ\phi}_{\L^2(L^2;H^1)}^{2}+\norm{\pi_N P_{\le N} \phi}_{\L^2(L^2;H^1)}^{2}\,,$$ which tends to $0$ as $N\to\infty$ because both norms are tails of convergent series.
Finally we treat the nonlinear terms
\[Du(t):=\int_{0}^{t}{S(t-t')|u|^{2k}u(t')}{\,dt'}\quad\mbox{ and }\quad
D^{\le N}u(t):=\int_{0}^{t}{S(t-t')P_{\le N}\brac{|u|^{2k}u}(t')}{\,dt'}\,.\]
We first fix a path for which local well-posedness holds, and prove that $Du-D^{\le N}u^N\to 0$ in $X^{1,\frac12+}([0,T])$. Firstly,
\begin{align*}
\norm{Du-D^{\le N}u^N}_{X^{1,\frac12+}([0,T])}
&\le\norm{\int_{0}^{t}{S(t-t')P_{\le N}(|u|^{2k}u-|u^N|^{2k}u^N)(t')}{\,dt'}}_{X^{1,\frac{1}{2}+}([0,T])}\\
&\phantom{= } \quad +
\norm{P_{>N}Du}_{X^{1,\frac{1}{2}+}([0,T])}
=:\mathrm{I}+\mathrm{I\!I}\,.
\end{align*}
By Lemmas \ref{lem:linests}, \ref{trilin} and \ref{multilin}, we have
\begin{align}
\mathrm{I} &\lesssim \left(\norm{u}_{X^{1,\frac12-}([0,T])}^{2k}
+\norm{u^N}_{X^{1,\frac12-}([0,T])}^{2k}\right)\norm{u-u^N}_{X^{1,\frac12-}([0,T])}\label{I-est1}\\
\mathrm{I\!I} &\lesssim \norm{u}_{X^{1,\frac12-}([0,T])}^{2k+1}\,.\label{II-est1}
\end{align}
In particular, \eqref{II-est1} implies $Du\in X^{1,\frac12+}([0,T])$, and hence $\mathrm{I\!I}\to 0$ as $N\to\infty$. We claim that $\mathrm{I}\to 0$ as $N\to\infty$ as well.
Indeed, $\Lambda^N$ and $\Lambda$ are contractions with fixed points $u^N$ and $u$ respectively, hence
\begin{align*}
\norm{u-u^N}_{X^{1,\frac12-}([0,T])}
&\le \norm{\Lambda(u)-\Lambda^N(u)}_{X^{1,\frac12-}([0,T])}+
\norm{\Lambda^N(u)-\Lambda^N(u^N)}_{X^{1,\frac12-}([0,T])}\\
&\le \norm{\Lambda(u)-\Lambda^N(u)}_{X^{1,\frac12-}([0,T])}+
\frac12 \norm{u-u^N}_{X^{1,\frac12-}([0,T])}\,.
\end{align*}
By rearranging, it suffices to show that the first term on the right-hand side above tends to $0$ as $N\to\infty$. Now
\begin{align*}
\norm{\Lambda(u)-\Lambda^N(u)}_{X^{1,\frac12-}([0,T])}
&\le \norm{P_{>N}S(t)u_0}_{X^{1,\frac12-}([0,T])}\\
&\quad + \norm{P_{>N}\int_0^tS(t-t')|u|^{2k}u(t')\,dt'}_{X^{1,\frac12-}([0,T])}\\
&\quad +\norm{\Psi^{>N}}_{X^{1,\frac12-}([0,T])}\,.
\end{align*}
By similar arguments as above, all the terms on the right go to $0$ as $N\to\infty$. This proves our claim. By the embedding $X^{1,\frac{1}{2}+}([0,T])\subset C([0,T];H^1(\mathbb{T}^d))$, we have that
\begin{equation}\label{nonlinear-convergence}\left\|Du-D^{\le N}u^N\right\|_{C([0,T];H^1)}\to 0
\end{equation}
almost surely as $N\to\infty$. By the dominated convergence theorem, we have $Du-D^{\le N}u^N\to 0$ in $F_T$. This concludes our proof.
\end{proof}
Finally, we conclude the proof of global well-posedness for the additive case.
\begin{proof}[Proof of Theorem \ref{gwp:add}]
Let $s\in\{0,1\}$ be the regularity of $u_0$ from Theorem \ref{gwp:add}. Let $\varepsilon>0$ and $T>0$ be given. We claim that there exists an event $\Omega_\varepsilon$ such that a solution
$u\in X^{s,b}([0,T])\cap C([0, T]; H^s(\mathbb{T}^d))$ exists on $[0,T]$ in $\Omega_\varepsilon$
and $\mathbb{P}(\Omega\setminus \Omega_\varepsilon)<\varepsilon$. If this claim holds, then by setting
\[\Omega^*=\bigcup_{n=1}^\infty\Omega_\frac{1}{n},\]
we have that $\mathbb{P}(\Omega^*)=1$ and $u$ exists on $[0,T]$, proving the theorem. Let $\delta\in(0,1)$ be a small quantity chosen later. We subdivide $[0,T]$ into $M=\left\lceil\frac{T}{\delta}\right\rceil$ subintervals $I_k=[(k-1)\delta,k\delta]$. Let
\[\Omega_0=\bigcap_{k=1}^M\left\{\omega\in\Omega:\xsbt{\intud{(k-1)\delta}{t}{S(t-t')\phi}{W(t')}}{s}{b}{I_k}\le L\right\},\]
where $L>0$ is some large quantity determined later. Now by Chebyshev's inequality and Lemma \ref{stoc-conv-est-add},
\begin{align*}
\mathbb{P}(\Omega\setminus\Omega_0)
&\le\sum_{k=1}^M\mathbb{P}\brac{\xsbt{\intud{(k-1)\delta}{t}{S(t-t')\phi}{W(t')}}{s}{b}{I_k}>L}\\
&\le \sum_{k=1}^M\frac{1}{L^2}\mathbb{E}\left[\xsbt{\intud{(k-1)\delta}{t}{S(t-t')\phi}{W(t')}}{s}{b}{I_k}^2\right]\\
&\lesssim \sum_{k=1}^M\frac{\delta(\delta^2+1)}{L^2}\norm{\phi}_{\L^2(L^2;L^2)}^2\\
&\le \frac{2M\delta}{L^2}\norm{\phi}_{\L^2(L^2;L^2)}^2\\
&\lesssim \frac{T}{L^2}\norm{\phi}_{\L^2(L^2;L^2)}^2\,.
\end{align*}
By choosing $L=L(\varepsilon,T,\phi)$ sufficiently large, we may therefore bound $\mathbb{P}(\Omega^c_0)$ above by $\frac{\varepsilon}{2}$. Now let
\[ R=\max\left\{\norm{u_0}_{H^s}, L\right\}\,.\]
By local theory, there exists a unique solution $u(t)$ to \eqref{SNLS} with time of existence $T_\text{max}$ given in \eqref{time-of-existence-a}. In particular, we note that for $\omega\in\Omega_0$,
\begin{equation}\label{toe-ineq}
c\Big({\norm{u_0}_{H^s}+\norm{\Psi}_{X^{s,b}_{[0,\delta]}}}\Big)^{-\theta}
\ge c\big({R+L}\big)^{-\theta},
\end{equation}
where $c$ is as in \eqref{time-of-existence-a}.
By choosing $\delta=\delta(R,L):=c ({R+L})^{-\theta}$,
we see that $u(t)$ exists for $t\in[0, \delta]$ for all $\omega\in\Omega_0$. Now define
\[\Omega_1=\left\{\omega\in\Omega_0:\norm{u(\delta)}_{H^s}\le R\right\}\,.\]
By the same argument, $u(t)$ exists for $t\in(\delta,2\delta)$ for all $\omega\in\Omega_1$. Iterating this argument, we have a chain of events $\Omega_0\supseteq \Omega_1 \supseteq \dots \supseteq \Omega_{M-1}$ where
\[\Omega_k=\left\{\omega\in\Omega_{k-1}:\norm{u(k\delta)}_{H^s}\le R\right\} \]
and $u(t)$ exists for all $t\in[0,(k+1)\delta]$ on $\Omega_k$. Setting $\Omega_\varepsilon:=\Omega_{M-1}$, $u(t)$ exists on the full interval $[0,T]$ on $\Omega_\varepsilon$. It remains to check that $\mathbb{P}(\Omega\setminus\Omega_\varepsilon)$ is small.
By Corollary \ref{apriori-est-H1-L2}, we have
\begin{align*}
\mathbb{P}(\Omega\setminus\Omega_\varepsilon)&\le\mathbb{P}(\Omega\setminus\Omega_0)+\sum_{k=0}^{M-1}\mathbb{P}(\Omega_{k+1}^c\cap\Omega_k)\\
&\le\frac{\varepsilon}{2}+\sum_{k=0}^{M-1}\mathbb{P}\brac{\left\{\norm{u((k+1)\delta)}_{H^s}>R\right\}\cap\Omega_k}\\
&\le\frac{\varepsilon}{2}+\sum_{k=0}^{M-1}\frac{1}{R^p}\mathbb{E}\left[\mathbbm{1}_{\Omega_k}\norm{u((k+1)\delta)}_{H^s}^p\right]\\
&\le\frac{\varepsilon}{2}+\frac{MC_1}{R^p}\\
&\le \frac{\varepsilon}{2}+\frac{2TC_1(R+L)^{\theta}}{cR^p}\,,
\end{align*}
for any $p\in\mathbb{N}$. We further enlarge $R$ if necessary by setting
\[R=\max\left\{\frac{2TC_1}{c}+1, L, \norm{u_0}_{H^s}\right\}\,,\]
Since $\frac{2TC_1}{c}\le R$ and $R+L\le 2R$, we then have
\[\mathbb{P}(\Omega\setminus\Omega_\varepsilon)\le \frac{\varepsilon}{2}+2^\theta R^{\theta-p+1}\,.\]
This is smaller than $\varepsilon$ provided we choose $p=p(\varepsilon,\theta)>0$ sufficiently large. Thus $\Omega_\varepsilon$ satisfies our claim.
\end{proof}
\subsection{SNLS with multiplicative noise}
In order to globalize solutions of SNLS, for the multiplicative noise case,
we need to prove probabilistic control of the $X^{s,b}$-norm of the solutions of the truncated SNLS uniformly in the truncation parameter (Lemma~\ref{finite-xsb}). This requires a priori bounds on mass and energy of solutions.
From Subsection \ref{sect:LWPm}, we obtained a local solution of the multiplicative \eqref{SNLS} with time of existence
\[\tau^*=\lim_{R\to\infty}\tau\sub{R}\,.\]
Under the hypotheses of Theorem~\ref{gwp:mult},
we shall prove global well-posedness by showing that $\tau^*=\infty$ almost surely.
\begin{proposition}\label{apriori-est-M-m}
Let $T_0>0$ and $\phi$ be as in Theorem~\ref{gwp:mult}.
Suppose that
$u(t)$ is a solution for \eqref{SNLS} with $F(u,\phi\xi)=u\cdot\phi\xi$ on $t\in[0,T]$
for some stopping time $T\in[0,T_0\wedge \tau^*)$. Let $C(\phi)$ be as in \eqref{defn:Cphi}. Then for any $m\in\mathbb{N}$, there exists
$C_1=C_1(m,M(u_0), T_0, C(\phi))>0$ such that
\begin{align}
\mathbb{E}\left[\sup_{0\le t\le T} M(u(t))^m \right]
\le C_1\,.
\end{align}
Furthermore, if \eqref{SNLS} is defocusing,
there exists $C_2=C_2(m,E(u_0), T_0,C(\phi))>0$ such that
\begin{align}
\label{h1-est-m}
\mathbb{E}\left[\sup_{0\le t\le T} E(u(t))^m \right]
&\le C_2\,.
\end{align}
\end{proposition}
\begin{proof}
We consider the
frequency truncated equation \eqref{SNLS_N} and apply It\^o's Lemma to obtain
\begin{align}
M(u^N(t))^m&=M(u_0^N)^m\notag\\
&\phantom{=}+
m\,\Im\brac{\sum_{|j|\le N}\int_{0}^{t}{M(u^N(t'))^{m-1}\int_{\mathbb{T}^d}|u^N(t')|^2\phi^Ne_j\,dx}{\,d\beta_j(t')}}\label{mass-m1}\\
&\phantom{=}+m(m-1)\sum_{|j|\le N}\int_{0}^{t}{M(u^N(t'))^{m-2}\left|\int_{\mathbb{T}^d}|u^N(t')|^2\phi^Ne_j\,dx\right|^2}{\,dt'}\label{mass-m2}\\
&\phantom{=}+m(m-1)\sum_{|j|\le N}\int_0^t{M(u^N(t'))^{m-1} \int_{\mathbb{T}^d}|u(t')\phi e_j|^2\,dx}{\,dt'}\label{mass-m3}\,.
\end{align}
To bound \eqref{mass-m1}, we use Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG})
and use a similar argument as in the proof of Lemma \ref{stoc-conv-est-mult} to get
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{mass-m1}\right]
&\lesssim \mathbb{E}\left[\sum_{|j|\le N}
\left(
\int_{0}^{T}M(u^N(t'))^{2(m-1)}\left|\int_{\mathbb{T}^d}|u^N(t')\phi e_j|^2\,dx\right|^2\,dt'
\right)^\frac12\right]\\
&\le C(\phi)^2\mathbb{E}\left[\left(\int_0^TM(u^N(t'))^{2m}\,dt'\right)^\frac12\right]\\
&\le C(\phi)^2 \brac{\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]}^\frac{1}{2}\brac{\mathbb{E}\left[\int_{0}^{T}{{M(u^N(t'))^{m}}}{\,dt'}\right]}^\frac{1}{2}\,.
\end{align*}
Similarly, one obtains
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\left\{\eqref{mass-m2}+\eqref{mass-m3}\right\}\right]
&\lesssim C(\phi)\mathbb{E}\left[\int_0^TM(u^N(t'))^m\,dt'\right]
\end{align*}
Hence there is a constant $C_1=C_1(m, M(u_0), T, C(\phi))$ such that
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]
&\le C_1 +C_1\,\mathbb{E}\left[\int_0^TM(u^N(t'))^m\,dt'\right]\\
&\phantom{=}+C(\phi)^2 \brac{\mathbb{E}\left[\sup_{t\in[0,T]} M(u^N(t))^m\right]}^\frac{1}{2}\brac{\mathbb{E}\left[\int_{0}^{T}{{M(u^N(t'))^{m}}}{\,dt'}\right]}^\frac{1}{2}
\end{align*}
The left-hand side is bounded above by $3\mathcal{M}$,
where $\mathcal{M}$ is the maximum of the three terms on the right-hand side.
In any of the three cases,
we may conclude the proof via simple rearrangement arguments and Gronwall's inequality.
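To spell out one way of carrying this out (a sketch of the standard manipulation, not necessarily the shortest route): writing $X:=\mathbb{E}[\sup_{t\in[0,T]}M(u^N(t))^m]$ and $Y:=\mathbb{E}[\int_0^T M(u^N(t'))^m\,dt']$, the previous display gives
\begin{equation*}
X\ \le\ C_1+C_1Y+C(\phi)^2X^{\frac12}Y^{\frac12}
\ \le\ C_1+C_1Y+\frac12X+\frac{C(\phi)^4}{2}\,Y\,,
\end{equation*}
so that $X\le 2C_1+\big(2C_1+C(\phi)^4\big)Y$, and Gronwall's inequality applied to $t\mapsto\mathbb{E}[\sup_{s\in[0,t]}M(u^N(s))^m]$ (finite after a stopping-time truncation as in the proof of Proposition~\ref{apriori-est-M-a}) yields the claimed bound.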
Turning to the energy, we use It\^o's Lemma and the defocusing equation to obtain
that $E(u^N(t))^m$ equals
\begin{align}
&E(u^N_0)^m\label{energy-m0}\\
&+ m\,\Im\brac{\sum_{|j|\le N}\int_{0}^{t}{
E(u^N(t'))^{m-1}\int_{\mathbb{T}^d}{|u^N|^{2(k+1)}\phi^N e_j}{\,dx}
}{\,d\beta_j(t')}}\label{energy-m1}\\
&-m\,\Im\brac{\sum_{|j|\le N}\int_{0}^{t}{E(u^N(t'))^{m-1}
\int_{\mathbb{T}^d}{(\Delta\overline{u^N}){u^N\phi^N e_j}}\,dx
}{\,d\beta_j(t')}}\label{energy-m2}\\
&+ m(k+1)\sum_{|j|\le N}{\int_{0}^{t}{E(u^N(t'))^{m-1}
\int_{\mathbb{T}^d}{|u^N|^{2(k+1)}|\phi^N e_j|^2}{\,dx}
}{\,dt'}}\label{energy-m3}\\
&+m\sum_{|j|\le N}\int_{0}^{t}{E(u^N(t'))^{m-1}\int_{\mathbb{T}^d}|\nabla{(u^N\phi^N e_j)}|^2}\,dx{\,dt'}\label{energy-m4}\\
&+\frac{m(m-1)}{2}\brac{\sum_{|j|\le N}\int_{0}^{t}{E(u^N(t'))^{m-2}\left|\int_{\mathbb{T}^d}{\brac{-u^N\Delta\overline{u^N}+|u^N|^{2k+1}}\phi^N e_j}{\,dx}\right|^2 }{\,dt'} }\label{energy-m5}
\end{align}
For \eqref{energy-m1}, we use Burkholder-Davis-Gundy inequality (Lemma~\ref{BDG}) to get
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-m1}\right]
&\lesssim\mathbb{E}\left[\left(\sum_{|j|\le N}\int_0^TE(u^N(t'))^{2(m-1)}
\left|
\int_{\mathbb{T}^d}|u^N|^{2k+2}\phi^Ne_j\,dx
\right|^2\,dt'
\right)^\frac12\right]\,.
\end{align*}
Now, with $r$ as in Theorem~\ref{lwp:mult},
\begin{align*}
\left|
\int_{\mathbb{T}^d}|u^N|^{2k+2}\phi^Ne_j\,dx\right|^2
&\le\norm{u^N}_{L^{2k+2}_x}^{2(2k+2)}\norm{\phi^Ne_j}_{L^\infty_x}^2
\le E(u^N)^{2}\norm{\widehat{\phi^Ne_j}}_{\ell^1}^2\\
& \lesssim E(u^N)^{2} \|\phi^N e_j\|_{\mathcal{F} L^{s,r}} \,,
\end{align*}
where the last step follows from Lemma~\ref{lem3p5}.
Therefore, by H\"older's inequality and \eqref{defn:Cphi},
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-m1}\right]
&\lesssim C(\phi)\, \mathbb{E}\left[\left(\int_0^TE(u^N(t'))^{2m}\,dt'\right)^\frac12\right]\\
&\le C(\phi) \brac{\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]}^\frac{1}{2}\brac{\mathbb{E}\left[\int_{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right]}^\frac{1}{2}\,.
\end{align*}
Similarly, we bound the other terms as follows:
\begin{align*}
&\mathbb{E}\left[\sup_{t\in[0,T]}\eqref{energy-m2}\right]\lesssim C(\phi) \brac{\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]}^\frac{1}{2}\brac{\mathbb{E}\left[\int_{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right]}^\frac{1}{2}\\
&\mathbb{E}\left[\sup_{t\in[0,T]}\{\eqref{energy-m3}+\eqref{energy-m4}+\eqref{energy-m5}\}\right]
\lesssim
C(\phi)^2 \mathbb{E}\left[\int_0^TE(u^N(t'))^m\,dt'\right]
\end{align*}
It follows that there is a constant $C_2=C_2(m, E(u_0), T, C(\phi))$ such that
\begin{align*}
\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]
&\le C_2 +C_2\,\mathbb{E}\left[\int_0^TE(u^N(t'))^m\,dt'\right]\\
&\phantom{=}+C_2 \brac{\mathbb{E}\left[\sup_{t\in[0,T]} E(u^N(t))^m\right]}^\frac{1}{2}\brac{\mathbb{E}\left[\int_{0}^{T}{{E(u^N(t'))^{m}}}{\,dt'}\right]}^\frac{1}{2}\,.
\end{align*}
Arguing in the same way as for the mass of $u^N$ yields the estimate for the energy of $u^N$. This proves the proposition for $u^N$ in place of $u$. The proposition then follows by letting $N\to\infty$.
\end{proof}
We now prove the following probabilistic a priori bound on the $X^{s,b}$-norm of a solution.
\begin{lemma}\label{finite-xsb}
Let $T, R>0$. Let $u\sub{R}$ be the unique solution of \eqref{SNLSm-R} on $[0,T]$.
There exists $C_1=C_1(\norm{u_0}_{L^2}, T, C(\phi))$ such that
\[\mathbb{E}\left[\xsbt{u\sub{R}}{0}{b}{[0,T\wedge\tau_R]}\right]\le C_1\,.\]
Moreover, if \eqref{SNLSm-R} is defocusing,
there also exists $C_2=C_2(\norm{u_0}_{H^1}, T,C(\phi))$ such that
\[\mathbb{E}\left[\xsbt{u\sub{R}}{1}{b}{[0,T\wedge\tau_R]}\right]\le C_2\,.\]
The constants $C_1$ and $C_2$ are independent of $R$.
\end{lemma}
\begin{proof}
Let $\tau$ be a stopping time so that $0< \tau\le T\wedge\tau_R$. By a similar argument used in local theory, we have
\begin{equation}\label{some-xsb-bound}
\begin{aligned}
\xsbt{u\sub{R}}{s}{b}{[0,\tau]}&\le C_1\norm{u\sub{R}(0)}_{H^s}+C_2\tau^\delta\xsbt{u\sub{R}}{s}{b}{[0,\tau]}^{2k+1}+\xsbt{\Psi}{s}{b}{[0,\tau]}\\
&\le C_1\norm{u\sub{R}}_{C([0,T\wedge\tau_R];H^s)}+C_2\tau^\delta\xsbt{u\sub{R}}{s}{b}{[0,\tau]}^{2k+1}+\xsbt{\Psi}{s}{b}{[0,T\wedge\tau_R]}\,.
\end{aligned}
\end{equation}
Let $K=C_1\norm{u\sub{R}}_{C([0,T\wedge\tau_R];H^s)}+\xsbt{\Psi(t)}{s}{b}{[0,T\wedge\tau_R]}$. We claim that if $\tau\sim K^{-\frac{2k}{\delta}}$, then
\begin{align}\label{continuity1}
\xsbt{u\sub{R}}{s}{b}{[0,\tau]}\lesssim K\,.
\end{align}
To see this, we note that the polynomial
\begin{equation}\label{poly}
p\sub{\tau}(x)=C_2\tau^\delta x^{2k+1}-x+K
\end{equation}
has exactly one positive turning point at
\[x'_+=\brac{{(2k+1)C_2\tau^\delta}}^{-\frac{1}{2k}}\,\]
and that $p_\tau(x'_+)<0$ if we choose $\tau=cK^{-\frac{2k}{\delta}}$. For this choice, we have $p_\tau(0)=K>0$ and hence $p_\tau(x)>0$ for $0\le x< x_+$ where $x_+$ is the unique positive root below $x_+'$. Now \eqref{some-xsb-bound} is equivalent to $p_\tau\big(\norm{u_R}_{X^{s,b}([0,\tau])}\big)\ge 0$.
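For the reader's convenience, here is the elementary computation behind the claim that $p_\tau(x'_+)<0$ for this choice of $\tau$ (recorded only as a sanity check). Since $(x'_+)^{2k}=\big((2k+1)C_2\tau^\delta\big)^{-1}$,
\begin{equation*}
p_\tau(x'_+)=C_2\tau^\delta\,(x'_+)^{2k}\,x'_+-x'_++K=\frac{x'_+}{2k+1}-x'_++K=K-\frac{2k}{2k+1}\,x'_+\,,
\end{equation*}
which is negative as soon as $x'_+>\frac{2k+1}{2k}K$. With $\tau=cK^{-\frac{2k}{\delta}}$ we have $x'_+=\big((2k+1)C_2\big)^{-\frac{1}{2k}}c^{-\frac{\delta}{2k}}K$, so it suffices to choose $c$ sufficiently small.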
But since $g(\,\cdot\,):=\norm{u_R}_{X^{s,b}([0,\,\cdot\,])}$ is continuous and $g(0)=0$, we must have
\[g(\tau)<x_+'\sim \tau^{-\frac{\delta}{2k}}\sim K\,,\]
which proves \eqref{continuity1}. Iterating this argument, we find that
\begin{align}\label{continuityk}
\norm{u\sub{R}}_{X^{s,b}([(j-1)\tau,j\tau])}\lesssim \norm{u\sub{R}}_{C([0,T\wedge\tau_R];H^s)}+\xsbt{\Psi(t)}{s}{b}{[0,T\wedge\tau_R]}\,
\end{align}
for all integers $1\le j\le \lceil\frac{T\wedge\tau_R}{\tau}\rceil=:M$. Putting everything together, we have
\begin{align*}
\norm{u\sub{R}}_{X^{s,b}([0,T\wedge\tau_R])}
&\le\sum_{j=1}^{M}\norm{u\sub{R}}_{X^{s,b}([(j-1)\tau,j\tau])}\\
&\lesssim \frac{T\wedge\tau_R}{\tau}\brac{\norm{u\sub{R}}_{C([0,T\wedge\tau_R];H^s)}+\xsbt{\Psi}{s}{b}{[0,T\wedge\tau_R]} }\\
&\lesssim T\brac{\norm{u\sub{R}}_{C([0,T\wedge\tau_R];H^s)}+\xsbt{\Psi}{s}{b}{[0,T\wedge\tau_R]} }^{\frac{2k}{\delta}+1}\,.
\end{align*}
By Proposition \ref{apriori-est-M-m} and Lemma \ref{stoc-conv-est-mult}, all moments of the last two terms above are finite. This proves Lemma~\ref{finite-xsb}.
\end{proof}
We can now conclude the proof of Theorem \ref{gwp:mult}.
\begin{proof}[Proof of Theorem \ref{gwp:mult}]
Fix $T>0$. Since $\tau\sub{R}$ is increasing in $R$,
\begin{align*}
\mathbb{P}(\tau^*<T) &=\lim_{R\to\infty}\mathbb{P}(\tau_R<T)= \lim_{R\to\infty}\mathbb{P}\left(\norm{u_R}_{X^{s,b}([0,T\wedge\tau_R])}\ge R\right)\\
&\le \lim_{R\to\infty}\frac1R\mathbb{E}\left[\norm{u_R}_{X^{s,b}([0,T\wedge \tau_R])}\right]\,.
\end{align*}
But the expectation on the right-hand side is bounded uniformly in $R$ by Lemma \ref{finite-xsb}, so the limit equals $0$.
It follows that $\tau^*=\infty$ almost surely.
\end{proof}
|
{
"timestamp": "2018-03-08T02:12:36",
"yymm": "1803",
"arxiv_id": "1803.02817",
"language": "en",
"url": "https://arxiv.org/abs/1803.02817"
}
|
\section{INTRODUCTION}
Ultrafast phenomena related to the control and manipulation of electronic and spin states are at the focus of current research in the
physics of magnetism, magnonics and spintronics \cite{Magnetism, Magnonics, Spintronics}. This activity is motivated, first of all, by novel experiments on ultrafast magnetic phenomena revealing fundamentally important new mechanisms of electron-spin dynamics taking place on femto- and picosecond time scales \cite{Kalashnikova,Kimel, Pavlov1, Kirilyuk1, Bigot, Chen}. On the other hand, the new results open up possibilities for constructing high-speed magneto-electronic and magneto-optical devices.
The conservation of angular momentum is one of the most
fundamental laws of physics and plays an important role in various phenomena. For example, the Einstein-de Haas effect is a consequence of this conservation, manifesting itself in the mechanical rotation of a free body when its magnetic moment is changed \cite{Einstein}. In the quantum-mechanical description, the photon is a particle with an intrinsic angular momentum of one unit of $\hbar$; therefore, circularly polarized light carries a spin angular momentum. In Ref.~\cite{Allen} it is shown that the torque exerted by circularly polarized light can be transferred to a small electric dipole. Intense circularly polarized light may create a magnetization $\mathbf{M}$ in a medium during the photon-electron interaction due to the inverse Faraday effect:
\begin{equation}
\mathbf{M}(0)=-i\mathbf{\chi^{(2)}}(0;\omega,-\omega) \left[\mathbf{E}(\omega) \times \mathbf{E}^{\ast }(\omega)\right],
\label{Eq1}
\end{equation}
where $\mathbf{\chi^{(2)}}$ is the second order nonlinear optical susceptibility describing the two-photon mixing process allowed in any media~\cite{Shen}, $\mathbf{E}$ is an oscillating electric field at angular frequency $\omega$. This phenomenon was theoretically predicted and discussed in Refs.~\cite{Pitaevskii,Pershan0,Pershan}, and experimentally observed in Ref.~\cite{Ziel}. Nowadays, the inverse Faraday effect is widely used for experimental and theoretical studies of ultrafast phenomena in magnetic systems~\cite{Kirilyuk1, Berritta, Berritta1, Qaiumzadeh, Freimuth}. An inverse transverse magneto-optical Kerr effect related to Eq.~(\ref{Eq1}) was predicted in Ref.~\cite{Belotelov}.
Another physical phenomenon based on the angular momentum transfer from circularly polarized light to a medium is the optical orientation.
This phenomenon reflects the exchange of angular momentum between circularly polarized light and atomic or solid-state systems. The principles of optical orientation were established by A. Kastler for paramagnetic atoms \cite{Kastler} and were later successfully applied to molecules \cite{Auzinsh} and semiconductors \cite{Orient}. For example, due to angular momentum conservation, a circularly polarised photon creates a spin-oriented $s$-electron in GaAs with a rather long lifetime at room \cite{Kimel1} and low \cite{Belykh} temperatures. The phenomenon of optical orientation is a linear optical process taking place during the interaction of circularly polarized light with an absorbing medium.
Here we report on the magnetic-field control of the interplay between the inverse Faraday effect and optical orientation close to the absorption band gap in the magnetic semiconductor EuTe. We show that the mechanisms
of optical orientation and spin-relaxation in these materials are different from those in model band semiconductors $A^{III}B^{V}$ and $A^{II}B^{VI}$
due to a specific electronic structure of EuTe.
\section{EXPERIMENTAL DETAILS AND RESULTS}
Magnetic semiconductors Eu\textit{X}(\emph{X}= O, S, Se, Te) represent a group of materials possessing unique
electronic, magnetic, optical, and magneto-optical properties~\cite{Wachter,Gunther,Nagaev} which are determined by strongly localized 4\emph{f}$^7$
electrons of the Eu$^{2+}$ ions with spin $S=7/2$ and orbital moment $L=0$. EuTe is antiferromagnetic with a N\'{e}el
temperature $T_N = 9.6$~K. The magnetic moments of the two sublattices $\mathbf{m}_1$ and $\mathbf{m}_2$ ($|m_1|=|m_2|$) are ordered antiferromagnetically in adjacent (111)-planes. In an external magnetic field, EuTe can be ferromagnetically saturated above the critical field $B_c = 7.2$~T.
Most of the early research in the 1960s--1970s was performed on bulk single crystals and
noncrystalline thin films of Eu\textit{X}. However, during the last decade high-quality epitaxial thin films of Eu\textit{X} were successfully grown on
Si and GaN semiconductor substrates, opening new opportunities for applications \cite{Lettieri,Schmehl,Averyanov}. Eu\textit{X} compounds reveal new types of
nonlinear magneto-optical effects \cite{Kaminski,Lafrentz}, ultrafast spin dynamics \cite{APL2010, PRL2012, SciRep2014, NatureComm2015}, and photo-induced spin polarons with a giant magnetic moment \cite{Henriques1,Henriques2}.
\begin{figure}[]
\includegraphics[width=0.45\textwidth]{Fig1.eps}
\caption{(color online). Temporal behavior of the photo-induced
optical rotation in EuTe for different magnetic fields at the photon energy of 2.19 eV.
For clarity the traces are shifted vertically. The solid curve for the magnetic field of 6~T shows the best fit on the basis of Eq.~(\ref{photo}) with
the following parameters: $A=1.69(1)^\circ$, $B=-0.024(3)^\circ$, $\sigma=0.67(1)$~ps and $\tau_s=19(3)$~ps. The ordinate axis scale is enhanced by a factor of 10 beyond the time delay of 2~ps (noted as x10).
The inset shows schematically the pump-probe experimental geometry.} \label{fig:figure1}
\end{figure}
We present results on optical pump-probe studies of epitaxial films of the
magnetic semiconductor EuTe. This material exhibits a very strong redshift of the fundamental absorption
edge by 130~meV for magnetic fields between 0 and 8~T \cite{Heiss1}. Optical effects for photon energies close to the EuTe absorption band gap can therefore be controlled by applying an external magnetic field. Using a pump-probe technique, we performed experiments on the magnetic-field control of helicity-dependent photo-induced phenomena in EuTe.
Pump-probe experiments were done in transmission geometry using an optical parametric oscillator pumped by a Ti:Sapphire laser with 1~ps pulses at 80~MHz repetition rate. We used a degenerate optical scheme for pump and probe beams having photon energy of 2.19~eV. This energy is slightly below the band gap value of 2.4~eV in EuTe at zero magnetic field. EuTe films were grown by the molecular-beam epitaxy on (111)-oriented BaF$_2$ substrates \cite{Henriques3,Heiss}. The 1 $\mu$m thick layers were capped with a 40-nm-thick BaF$_2$ protective
layer and the high sample quality was confirmed by x-ray
analysis.
Figure 1 shows the probe light polarisation rotation induced by the pump beam (photo-induced
rotation) in EuTe as a function of the pump-probe time delay for different magnetic fields.
Magnetic fields up to 6~T were applied in the Voigt geometry $\mathbf{k}\parallel (111) \perp \mathbf{H}$, see Inset in Fig.~1.
The temporal behavior of the photo-induced
rotation is
characterized by a narrow Gaussian-shaped peak around zero time delay for magnetic fields of 0--3~T. For magnetic fields above 3~T, a broad tail with a characteristic relaxation time of several picoseconds begins to appear.
\begin{figure}[]
\includegraphics[width=0.40\textwidth]{Fig2.eps}
\caption{(color online). Magnetic field dependence of the
photo-induced rotation in EuTe for two time delays at the photon energy of 2.19 eV.}
\label{fig:figure2}
\end{figure}
Figure 2 shows the photo-induced optical rotation as a function of magnetic field for two time delays of 0 and 3~ps. These two dependencies display appreciably distinct behavior. The magnetic field dependence for 0~ps time delay has nonzero values for magnetic fields $0<H<4.7$~T; it has a minimum at $H\simeq4.4$~T, then it reverses sign for a magnetic field of 4.7~T and has a broad maximum for $H\simeq5.3$~T. Surprisingly, the magnetic field dependence for 3~ps time delay has zero values for magnetic fields $0<H<3.2$~T. It has a maximum at $H\simeq4.9$~T and reverses sign for $H\simeq5.8$~T. Quantitatively, the sign reversal of the photo-induced optical rotation for 0~ps is related to the strong redshift of the absorption band gap of EuTe in an external magnetic field~\cite{Heiss1}, when the fundamental absorption edge crosses the probe photon energy of 2.19~eV. The different behavior observed for time delays of 0 and 3~ps can be qualitatively understood by taking into account the influence of excited electronic states on the absorption band gap in EuTe.
\begin{figure}[]
\includegraphics[width=0.4\textwidth]{Fig3.eps}
\caption{(color online). Electronic energy diagram for
unperturbed $4f^7$ and excited $4f^6$ states in EuTe.} \label{fig:figure3}
\end{figure}
\section{DISCUSSION}
To explain the optical pump-probe experiments, let us consider the electronic energy diagram of the magnetic semiconductor EuTe, in which Eu$^{2+}$ ions in the ground state $4f^7 5d^0$ play the decisive role. The electric dipole selection rules imply that the $4f \rightarrow 6s$ transitions are forbidden, and one has to take into account the $4f^7 5d^0 \rightarrow 4f^6 5d^1$ electric dipole transition forming the absorption band gap in EuTe \cite{Gunther1}.
Figure~3 shows the electronic energy diagram for the unperturbed $4f^7$ and excited $4f^6$ states. The absorption edge of EuTe corresponds to the onset of the $4f^7 (^8S_{7/2})\rightarrow 4f^6 (^7F_J) 5d(t_{2g})$ transition, where the final state combines one $5d$-electron and six $4f$-electrons. The $^7F_J$-multiplet ($J = 0, 1, \ldots, 6$) of the six $4f$-electrons has a total spin-orbit splitting of about 0.6~eV. In magnetic fields of $\sim$0.5~T, the semiconductor EuTe has two antiferromagnetically ordered sublattices with spins oriented perpendicular to the magnetic field $\textbf{H}$. Due to spin conservation in the electric-dipole $4f^7 5d^0\rightarrow 4f^6 5d^1$ absorption process for a circularly polarized photon, immediately after excitation the electron spin is oriented along the spin of a magnetic sublattice. Moreover, following the Franck-Condon principle, the electronic transition takes place at fixed spatial and spin coordinates of the lattice. Schematically this process, which we call Stage I, is shown in Fig. 4. The width of a single sublevel of the $^7F_J$-multiplet is about 0.1~eV, which corresponds to an electron lifetime of about 18~fs. After this time interval, some electrons are trapped by long-lived intrinsic
defect states $4f^6 X^1$ (Stage II), which are responsible for the luminescence process \cite{Heiss1} and the magnetic polaron states \cite{Henriques1,Henriques2}. The lifetime of electrons in these states is longer than 1~ns, depending on the applied magnetic field. During this time interval, the spin of an electron in the $4f^6 X^1$ states starts to precess (Stage III) due to the presence of the external magnetic field $\mathbf{H}$. This precession can be analysed in terms of the Landau-Lifshitz equation with Gilbert damping \cite{Landau,Gilbert,Mondal}:
\begin{equation}
\frac{\partial \mathbf{M}}{\partial t}=-\gamma\,\mathbf{M}\times \mathbf{H}^{eff}+\alpha\, \mathbf{M}\times \frac{\partial \mathbf{M}}{\partial t},
\label{LLG}
\end{equation}
where $\mathbf{M}$ is the local magnetization, $\gamma$ is the gyromagnetic ratio, $\mathbf{H}^{eff}$ is the effective
magnetic field, which accounts for the external field $\mathbf{H}$ and the exchange field in EuTe, and $\alpha$ is the Gilbert damping constant.
In Ref.~\cite{Mondal} it was shown that the relativistic extrinsic spin-orbit coupling gives rise
to a dominant local spin relaxation mechanism in magnetic solids. In the Voigt geometry $\mathbf{k}\parallel (111) \perp \mathbf{H}$, this spin-relaxation mechanism corresponds to transverse spin relaxation (see Fig.~4). Finally, the trapped electrons at the $4f^6 X^1$ states relax to the $4f^7 (^8S_{7/2})$ states (Stage IV).
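As a side remark (a standard consequence of Eq.~(\ref{LLG}) rather than a result of the present work): linearizing Eq.~(\ref{LLG}) around the equilibrium orientation of $\mathbf{M}$ along $\mathbf{H}^{eff}$, the transverse components precess at the frequency $\omega\simeq\gamma H^{eff}/(1+\alpha^2)$ and decay with a characteristic time
\begin{equation}
\tau_{\perp}\simeq\frac{1+\alpha^2}{\alpha\,\gamma H^{eff}},
\end{equation}
so that, for weak damping, the transverse relaxation time scales inversely with both $\alpha$ and the effective field. The experimental estimate of this relaxation time from the pump-probe data is described in the following.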
In order to estimate the characteristic spin relaxation time of electrons in the $4f^6 X^1$ states, we propose the following scheme. Assuming that the pump and probe pulses have Gaussian temporal profiles, the photo-induced rotation $\theta$ can be analyzed with a single-time
relaxation model \cite{Pavlov1}:
\begin{eqnarray}
\theta &=& \frac{A}{\sigma \sqrt{\pi}}\exp \left( -\frac{t^{2}}{\sigma^{2}}\right)
\nonumber \\
&+&\frac{B}{2}\exp \left( \frac{\sigma ^{2}}{4\tau_s ^{2}}-\frac{t}{\tau_s
}\right) \left[ 1-\texttt{erf} \left( \frac{\sigma }{2\tau_s
}-\frac{t}{\sigma }\right) \right], \label{photo}
\end{eqnarray}
where $t$ is the pump-probe time delay, $\sigma$ is the width of
the Gaussian pulse, and $A$ and $B$ are coefficients related to the instantaneous Gaussian-type and non-instantaneous delayed terms, respectively. The first and second terms in Eq.~(\ref{photo})
describe the instantaneous contribution due to the inverse Faraday
effect and the non-instantaneous spin-related
contribution, respectively. When the laser pulse duration is much shorter than the thermal relaxation times, the instantaneous contribution can be analysed in terms of the third-order optical nonlinearity $\mathbf{\chi^{(3)}}$ characterising a four-wave mixing process, which is relevant to the pump-probe experiment. The photo-induced polarization $\mathbf{P}(\omega)$ arising in this four-wave mixing process can be written as:
\begin{equation}
\mathbf{P}(\omega)\propto \mathbf{\chi^{(3)}}(\omega;\omega,-\omega,\omega)\colon \mathbf{E}^{pump}(\omega)\mathbf{E}^{pump}(\omega)^{*}\mathbf{E}^{probe}(\omega),
\label{Eq2}
\end{equation}
where $\mathbf{E}^{pump}(\omega)$ and $\mathbf{E}^{probe}(\omega)$ are the optical
electric fields of pump and probe beams, respectively.
Eq.~(\ref{Eq2}) describes a nonlinear optical process enabling an angular momentum transfer from the incoming circularly polarized pump beam to the linearly polarized probe beam. The photo-induced polarization $\mathbf{P}(\omega)$ is a source for the outgoing probe light. This process is related to the orbital motion of electrons; therefore, it is called the instantaneous orbital contribution \cite{Pavlov1}. In the case of EuTe, the third-order optical nonlinearity $\mathbf{\chi^{(3)}}$ can be rather strong for the $4f^75d^0 \rightarrow 4f^65d^1$ electric-dipole optical transition. This gives rise to the photo-induced rotation observed in our experiments, which is due to the inverse Faraday effect.
\begin{figure}[]
\includegraphics[width=0.4\textwidth]{Fig4.eps}
\caption{(color online). Optical excitation (stage I) and relaxation processes (stages II, III, IV) at the dipole-allowed electronic transition
$4f^75d^0 \rightarrow 4f^65d^1$ in EuTe.} \label{fig:figure4}
\end{figure}
The non-instantaneous contribution appears for applied magnetic field values higher than 3~T. The absorption band gap of EuTe is about 2.4~eV at low temperatures, and this value is strongly influenced by an applied field \cite{Heiss1}. At a magnetic field of 5~T the
band gap is about 2.2~eV. Thus, the non-instantaneous contribution is related to ultrafast optical orientation at the electric-dipole transition $4f^75d^0 \rightarrow 4f^65d^1$ with subsequent charge and spin relaxation with a time constant $\tau_s$ (see Fig. 4). In the Voigt geometry, this relaxation corresponds to the decay of the transverse component of the photo-induced magnetization vector $\mathbf{M}$ towards its equilibrium orientation parallel to the applied field $\mathbf{H}$. Applying a fitting procedure based on Eq.~(\ref{photo}) to the experimental data, we found the transverse spin relaxation time $\tau_s=19$~ps at the $4f^6 X^1$ states. We note that this value is about two times shorter than the precession period and about ten times shorter than the decay time of precession oscillations at the $4f^7 5d^0$ states in the applied magnetic field of 6~T~\cite{SciRep2014}.
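As a rough consistency check (our own arithmetic based on the fit parameters quoted in the caption of Fig.~1): at zero delay the instantaneous term of Eq.~(\ref{photo}) peaks at approximately
\begin{equation}
\frac{A}{\sigma\sqrt{\pi}}\approx\frac{1.69^\circ}{0.67\times 1.77}\approx 1.4^\circ,
\end{equation}
whereas the delayed term has the amplitude $|B|/2\approx 0.012^\circ$, i.e. about two orders of magnitude smaller. This is consistent with the factor-of-10 change of the ordinate scale used in Fig.~1 to make the slow tail visible.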
\section{CONCLUSIONS}
In conclusion, we observed a strong optical response in the magnetic semiconductor EuTe for a circularly
polarized pump in transmission geometry with the magnetic field applied in
the Voigt geometry $\mathbf{k}\parallel (111) \perp \mathbf{H}$.
The observed signals can be attributed to the strong third-order optical nonlinearity
accounting for the inverse Faraday effect and to the optical orientation phenomenon at the electronic transition
from the localized $4f^7$ states of Eu$^{2+}$ ions at the top of the valence
band into the $5d$ orbitals forming the conduction band.
By applying a magnetic field in the range of 0--6~T, one can control the interplay between the inverse Faraday effect and the optical orientation phenomenon in EuTe. We note that such a crossover mechanism can be important for different classes of intrinsic and diluted magnetic semiconductors in which the band gap is strongly influenced by an external magnetic field.
\section*{ACKNOWLEDGMENTS}
The authors are thankful to M.~M.~Glazov for useful discussions. This work was supported by the Russian Science Foundation (Grant No. 17-12-01314), the Dortmund group acknowledges support by the DFG (ICRC TRR 160), A.~B.~H. acknowledges support from the Brazilian agencies CNPq and FAPESP.
|
{
"timestamp": "2018-03-08T02:08:05",
"yymm": "1803",
"arxiv_id": "1803.02633",
"language": "en",
"url": "https://arxiv.org/abs/1803.02633"
}
|
\section{Introduction}
The Very Long Baseline Interferometry (VLBI; \cite{1999ASPC..180..433W}) technique combines signals recorded at distant radio telescopes to achieve the highest angular resolution.
A typical VLBI scale of 1\,mas by definition corresponds to the linear size of 1\,AU at the distance of 1\,kpc and 1\,pc at $z \sim 0.05$.
The longer the baseline (distance) between the
elements, the higher the interferometer's angular resolution. Another way
to increase angular resolution is to observe at a shorter wavelength.
The measured interferometer response may be compared to a simple model in order to estimate the source size and flux density or, if measurements at many baselines are available, the source image may be reconstructed.
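To make the quoted scales explicit (a back-of-the-envelope conversion rather than a statement from the cited literature): with the small-angle relation $s=\theta D$ and $1\,\mathrm{mas}\approx 4.85\times10^{-9}$\,rad,
\[
s \approx 4.85\times10^{-9}\times 2.06\times10^{8}\,\mathrm{AU}\approx 1\,\mathrm{AU}\ \ (D=1\,\mathrm{kpc}),
\qquad
s \approx 4.85\times10^{-9}\times 2\times10^{8}\,\mathrm{pc}\approx 1\,\mathrm{pc}\ \ (D\approx200\,\mathrm{Mpc},\ z\sim0.05,\ H_0\approx70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}).
\]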
The following features of VLBI may provide insights into the nature of various astrophysical transients:
\begin{itemize}
\item Superb angular resolution helps to measure the source size.
\item Accurate localization of the radio emitting site is possible with VLBI.
\item Imaging reveals the radio emitting region geometry (jet/shell/shock front) and allows us to follow its changes (proper motion, expansion).
\item Full Stokes imaging may provide clues about the mechanism responsible for the transient's radio emission and, in the case of synchrotron transients, measure the magnetic field strength and structure.
\item VLBI can separate the (small) transient source from the unrelated background emission that will be ``resolved out'', no matter how bright the background is.
\end{itemize}
In Section\,\ref{sec:arrays} we provide an overview of VLBI arrays performing astronomical observations. Section\,\ref{sec:transientzoo} lists the various types of radio transients and highlights selected observational results.
In Section\,\ref{sec:strategy} we discuss observing strategies suitable for transient source studies with VLBI.
This is part two of the Workshop on radio transients. The first part of the workshop highlighting open questions in the transients science and how they may be addressed with non-VLBI techniques is presented by Anderson~et~al. (these proceedings).
\section{An overview of VLBI arrays}
\label{sec:arrays}
The majority of arrays listed in this section offer at least part of their observing time as ``open sky'' (any astronomer can apply) and accept target of opportunity requests. A number of VLBI-capable telescopes are not part of these arrays: they are dedicated to either space geodesy (\cite{VLBIGeodynamics}) or deep space communication.
{\it The Very Long Baseline Array} (VLBA; \cite{1994IAUS..158..117N}) is the
first instrument fully dedicated to VLBI. It includes ten 25\,m telescopes
spread across the continental United States, US Virgin Islands and Hawaii.
It operates full time at frequencies 0.3--96\,GHz and is frequency agile,
meaning that it may switch between the receivers in about a minute.
The VLBA may be combined with the GBT~100\,m, phased VLA 27x25\,m, Arecibo~305\,m, Effelsberg~100\,m and/or the LMT~50\,m to form {\it the High Sensitivity Array}.
{\it The European VLBI Network} (EVN; \cite{2015arXiv150105079Z}) is a
collaboration of 10--15 diverse stations (including 60--100\,m class
telescopes). The number of participating stations depends on the observing
band (1--43\,GHz range) and station availability.
Most EVN stations are not frequency agile. Observations are performed
during three sessions per year. There is a limited number of pre-planned
out-of-session observations.
The EVN routinely includes stations from the regional VLBI arrays of Korea, Italy, China and Russia.
The EVN may be requested together with the US stations as {\it the Global array}.
{\it e-EVN} is a subset of the EVN capable of real time correlation.
This feature was specifically introduced for transient observations (\cite{2016arXiv161200508P}).
There is one 24\,hr e-EVN observing session per month. Additional ToO
observations are possible.
{\it The Global mm-VLBI Array} (GMVA; \cite{2014arXiv1407.8112H}) includes
Effelsberg~100\,m, GBT~100\,m,
NOEMA interferometer 7x15\,m,
VLBA and
other mm-band telescopes in Europe.
The observations are performed at 86\,GHz during two
sessions per year.
{\it The Event Horizon Telescope} (EHT; e.g. \cite{2014ApJ...788..120L}) is
a heterogeneous VLBI array observing at 230\,GHz. The EHT has one observing session per
year. The first open call for proposals for VLBI observations with the EHT together with the Atacama Large Millimeter/submillimeter Array was issued in 2018.
{\it RadioAstron} (\cite{2013ARep...57..153K}) combines ground stations with the 10\,m radio telescope aboard the dedicated satellite
in a highly elliptical orbit (apogee of about 326\,000\,km) to form a Space--VLBI array. The observing frequencies are 0.3, 1.7, 4.8 and 22\,GHz. Observations at 22\,GHz may reach a higher angular resolution (\cite{2016ApJ...817...96G}) than the EHT at 230\,GHz.
The first observation of a transient source with RadioAstron was the search for radio emission from SN2014J in M82 (\cite{2014ATel.6197....1S}).
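A rough diffraction-limit estimate ($\theta\approx\lambda/B$; our own illustrative numbers) shows why: at 22\,GHz ($\lambda\approx1.4$\,cm) on a $\sim$350\,000\,km ground--space baseline
\[
\theta \approx \frac{1.4\times10^{-2}\,\mathrm{m}}{3.5\times10^{8}\,\mathrm{m}} \approx 4\times10^{-11}\,\mathrm{rad}\approx 8\,\mu\mathrm{as},
\]
compared to $\approx1.3\,\mathrm{mm}/1.3\times10^{7}\,\mathrm{m}\approx20\,\mu$as for an Earth-diameter baseline at 230\,GHz.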
{\it The Long Baseline Array} (LBA; \cite{2015PKAS...30..659E}) has its
core stations in Australia (the largest are the ATCA interferometer 6x22\,m, Parkes~64\,m, and Tidbinbilla~70\,m)
but also provides intercontinental baselines to Hartebeesthoek~26\,m in South
Africa. This is the only VLBI array
operating in the Southern hemisphere.
The observing frequency range is 1.4--22\,GHz, but not all telescopes are available
at all bands. The observations are conducted in 3--4 sessions per year.
Test observations of GRB~080409 combining a few LBA stations with the telescopes in China and Japan in the e-VLBI mode were performed by \cite{2016RAA....16..164M}.
{\it The Korean VLBI Network} (KVN; \cite{2014AJ....147...77L}) consists of three dedicated 21\,m stations
capable of observing simultaneously at 22-43-86-130\,GHz (\cite{2013PASP..125..539H,2015AJ....150..202R}).
Possibilities of installing similar receiving systems at VLBI stations outside Korea are
investigated by \cite{2015JKAS...48..277J}.
{\it The VLBI Exploration of Radio Astrometry} (VERA;
\cite{2003ASPC..306..367K,2012IAUS..287..386H})
array includes four 20\,m telescopes in Japan. Its main focus is on parallax and proper motion measurements of Galactic maser sources. VERA observes at 6.7 (methanol), 22 (water) and 43\,GHz (SiO masers) using the unique dual-beam system that allows simultaneous observations of the target maser source and an extragalactic continuum source serving as the phase calibrator.
{\it KaVa} combines KVN and VERA observing at 22 and 43\,GHz (e.g. \cite{2017PASJ...69...71H}).
{\it The Italian VLBI network}
(\cite{2016ivs..conf..132S})
includes the Sardinia\,64m and the two 32\,m telescopes at Medicina and Noto. It is capable of observing in the 1--22\,GHz range. \cite{2013ATel.5264....1S} searched for radio emission from SN2013ej in M74, using Medicina and Noto as a two-element VLBI.
{\it The Japanese VLBI Network} (JVN; \cite{2006evn..confE..71D}) combines VERA with other VLBI-capable telescopes including the Usuda 64\,m deep space communication antenna. There is no call for observing proposals from outside the JVN collaboration.
{\it The Russian VLBI Network ``Quasar''} (\cite{2008evn..confE..53F}) includes three 32\,m telescopes in Svetloe, Zelenchukskaya and Badary observing in the 1--22\,GHz range. The main focus of the network is on geodetic VLBI, but it also performs astronomical observations with EVN and RadioAstron. There is no open call for proposals, but proposals for astronomical observations submitted directly to the director may be considered.
{\it The Chinese VLBI Network} (CVN; \cite{2015IAUGA..2255896Z}) includes Tianma 65\,m, Miyun 50\,m, Kunming 30\,m and the 25\,m telescopes in Seshan and Urumqi. The network is used for spacecraft navigation, geodesy and astronomy. There is no open call for proposals.
Future facilities include {\it the East Asia VLBI Network} (\cite{2016ASPC..502...81W,2018NatAs...2..118A}) that will combine the national networks of China, Japan and Korea and the African VLBI Network (\cite{2011saip.conf..473G,2016arXiv160802187C}).
{\it LOFAR} with its international stations is a VLBI array
operating at frequencies $\sim 50$ (\cite{2016MNRAS.461.2676M}) and $\sim150$\,MHz
(\cite{2016A&A...593A..86V}). Its angular resolution is comparable to that of connected interferometers operating at GHz frequencies.
{\it e-MERLIN} (\cite{2009evlb.confE..29S}) is a 7-station (including Lovell 76\,m) array observing at 1--22\,GHz providing baselines
approaching those of regional VLBI arrays, while technically
being a connected interferometer.
It was recently used to study Galactic transients, among others, by \cite{2014Natur.514..339C,2017MNRAS.469.3976H}.
\section{Types of radio transients}
\label{sec:transientzoo}
Radio transients can be divided into two broad classes (\cite{2011BASI...39..353B}):
{\it fast} transients, likely related to neutron stars (and flares on low-mass stars), which appear on sub-second
timescales, and {\it slow} transients, related to various explosive astrophysical events,
which evolve on timescales of days to months.
The fast transients include:
The enigmatic {\it Fast Radio Bursts} (FRB; \cite{2016PASA...33...45P}). Recent EVN$+$Arecibo observations allowed \cite{2017ApJ...834L...8M} to establish spatial coincidence of the repeating FRB~121102 with a persistent extragalactic radio source providing new constraints on the physical interpretation of the (repeating) FRB phenomenon. VLBI was used to investigate the suspected host
of FRB~150418 (\cite{2016A&A...593L..16G,2016MNRAS.463L..36B}).
{\it Rotating radio transients} (RRATs; \cite{2015ApJ...809...67K}).
{\it Giant pulses from pulsars} (\cite{2012ApJ...760...64M}, \cite{2016PASP..128h4502T}).
The connection between the above three classes of fast transients is suspected (\cite{2018arXiv180100640P}), but not yet established.
{\it Flare stars} produce outbursts of non-thermal radio emission (\cite{2008ApJ...674.1078O}).
The following types of events may produce slow radio-transients:
{\it Supernovae} are the most studied class of radio-transients (\cite{2017ARep...61..299B}).
Over 50 radio supernovae are known (\cite{2011ApJ...740...23L}).
VLBI observations provide shell expansion velocity measurements
independent of optical spectroscopy and reveal
the mass-loss history of the progenitor star (e.g. \cite{2018MNRAS.475.1756B}).
{\it $\gamma$-ray bursts} produce afterglows that may be detected (e.g. \cite{2013ApJ...779..105M,2016arXiv161006928M,2017A&A...598A..23N}) and
resolved (\cite{2007ApJ...664..411P}) with VLBI.
{\it Novae and symbiotic stars} may appear as radio sources
observable with VLBI, e.g. \cite{2008ApJ...685L.137S,2012evn..confE..47G}.
The source of radio emission may be the nova shell and possibly
non-relativistic synchrotron-emitting jet (\cite{2008ApJ...688..559R}).
VLBI imaging of the $\gamma$-ray emitting classical nova V959\,Mon by \cite{2014Natur.514..339C} suggested the synchrotron emission is produced at the interface between the fast polar outflow and the slow thermally-emitting outflow escaping the binary system in the orbital plane. Understanding the structure of the shocks in nova ejecta is important as the shocks are found to be responsible not only for $\gamma$-ray, X-ray and radio (\cite{2016MNRAS.460.2687W}) but also contribute significantly to optical emission of novae (\cite{2017NatAs...1..697L}).
{\it Dwarf novae} may also be transient radio sources (\cite{2008Sci...320.1318K}).
The mechanism of their radio emission is unclear.
{\it Tidal disruption events (TDE)} in galactic nuclei,
such as Swift~J164449.3$+$573451, may be detected in radio. This is
interpreted as evidence of a relativistic jet forming from the matter
lost by the disrupted star (\cite{2012ApJ...748...36B}). Surprisingly, \cite{2016MNRAS.462L..66Y} were able to place an
upper limit of 0.3\,c on the ejection speed in this source.
{\it Active galactic nuclei (AGN)} are known sources of variable radio
emission and may appear as transients rising above
the threshold of previous radio observations.
{\it Microquasars} (\cite{2010LNP...794...85G}) can flare by several orders of magnitude within days. Some of them are sources of radio emission also in the quiet state. Recent VLBI results include observations of the expanding jets in XTE\,J1908$+$094 by \cite{2017MNRAS.468.2788R} and the giant flare of Cygnus\,X-3 by \cite{2017MNRAS.471.2703E}.
{\it Maser sources} associated with star forming regions and
late-type stars may show flares by orders of magnitude
(\cite{1988SvAL...14..468M,2007IAUS..242..330R}).
{\it Other events}. Sometimes, even a combination of radio and
multi-wavelength observations is not sufficient to determine the nature of a transient (e.g., \cite{2005Natur.434...50H}).
Such cases demand detailed investigation and can potentially lead to the understanding of novel astrophysical phenomena.
\section{Observing strategies}
\label{sec:strategy}
While, in principle, wide-field VLBI imaging (\cite{2013ApJ...768...12M}) may be used to {\it search} for slow transients, the most popular observing strategy so far is the {\it follow-up}
of transients discovered at other wavelengths.
The two key points to consider when planning observations are the array sensitivity and the possibility of rapid response.
A sensitive array includes big dishes and is capable of performing phase-referencing observations. Phase referencing
makes the integration time (and hence the sensitivity) limited by the experiment duration rather than the atmosphere coherence time. Dedicated full-time arrays like VLBA and KVN, as well as ad~hoc arrays including only two to three telescopes can respond within days to a trigger (if the corresponding observing proposal is already in place). VLBI observations often rely on the Earth rotation to probe more spatial frequencies as the array elements move and improve the resulting image. This technique cannot be used for rapidly evolving transients. A ``snapshot'' observation will result in a degraded image (compared to a ``full track'' image) or may be suitable only for modeling, not image reconstruction. The quality of a snapshot image may be improved by adding more elements to the array.
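The role of sensitive elements and long coherent integrations mentioned above can be quantified with the standard radiometer-type expression for the noise on a single baseline (quoted here as a rule of thumb; exact efficiency factors depend on the recording setup):
\[
\Delta S \approx \frac{1}{\eta_s}\sqrt{\frac{\mathrm{SEFD}_1\,\mathrm{SEFD}_2}{2\,\Delta\nu\,\tau}},
\]
where $\mathrm{SEFD}_{1,2}$ are the system equivalent flux densities of the two antennas, $\Delta\nu$ is the recorded bandwidth, $\tau$ is the coherent integration time and $\eta_s\lesssim1$ accounts for quantization and other losses. Adding a 100\,m-class dish (low SEFD) or extending $\tau$ from the atmospheric coherence time to the full experiment duration via phase referencing lowers $\Delta S$ accordingly.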
Another point to consider for Galactic transients is their expected angular size.
An explosive transient may take hours to days to reach the angular size of a few mas and become ``too big'' to be observed with VLBI. Unless it has a structure on smaller angular scales, it may be completely ``resolved-out'' by the interferometer.
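As an illustrative estimate (ours, for orientation only): an ejection expanding at speed $v$ at distance $D$ grows in apparent angular diameter as $\theta\approx2vt/D$, i.e.
\[
\theta(t) \approx 1\,\mathrm{mas}\times\left(\frac{v}{0.1c}\right)\left(\frac{t}{2.5\times10^{3}\,\mathrm{s}}\right)\left(\frac{D}{1\,\mathrm{kpc}}\right)^{-1},
\]
so a mildly relativistic Galactic outflow indeed reaches a few mas within hours, while slower ejecta of a few thousand km\,s$^{-1}$ take of order a day.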
The choice of observing frequency is less important than other considerations when observing synchrotron transients, as they tend to have nearly flat spectra.
With the exception of repeating events like FRB~121102 or the ones possessing a long-term ``afterglow'', triggered observations of fast transients are not possible. Instead, the fast transients have to be found in the same data used to investigate them.
Raw VLBI data (before being averaged in time and frequency by the correlator) are suitable for a fast transient search (\cite{2018AJ....155...98L}). The V-FASTR project is running a commensal survey for fast transients at the VLBA (\cite{2011ApJ...735...97W,2016PASP..128h4503W}).
One interesting possibility is shadowing a large single-dish telescope
with a VLBI array, extending the observing strategy of \cite{2017ApJ...834L...8M} to a blind survey.
{\it The Square Kilometre Array} (SKA) will detect transients in real time providing targets for a VLBI follow-up.
Including the phased SKA into an existing VLBI network will boost the network sensitivity by more than an order of magnitude. This will enable detailed VLBI studies of the classes of sources that are now just barely detectable. Studies of classical VLBI targets such as AGNs will also benefit from access to a larger sample of observable sources and its extension towards low-luminosity objects. An overview of VLBI prospects for the SKA is presented by \cite{2015aska.confE.143P}, while \cite{2015aska.confE..53C} highlights perspectives for Galactic synchrotron transient studies.
|
{
"timestamp": "2018-03-09T02:00:15",
"yymm": "1803",
"arxiv_id": "1803.02831",
"language": "en",
"url": "https://arxiv.org/abs/1803.02831"
}
|
\section{Introduction} \label{sec:intro}
Low$-$surface$-$brightness galaxies (LSBGs) are galaxies whose central surface brightness is at least one magnitude fainter than the sky background level on a dark night \citep{1970ApJ...160..811F,1997ARA&A..35..267I}.
Generally, they are defined as having a central surface brightness in the B$-$band of $\mu_{0}(B)$ $>$ 22.0--23.0 $mag\ arcsec^{-2}$ \citep{2001AJ....122.2341I,2012MNRAS.426L...6C}.
LSBGs account for the bulk of the number of local galaxies, making them an important contributor to the baryon and dark matter mass budget in the local universe \citep{2000ApJ...529..811O,2005ApJ...631..208B,2016A&A...593A.126B}.
Their morphologies and stellar populations distribute widely, ranging from old, high-metallicity early types to young, low-metallicity late-type galaxies \citep{2000MNRAS.312..470B}.
Even though the specific procedure of their formation and evolution is still unclear, their lower star formation rate (SFR) is consistent with the hypothesis that they are quiescent galaxies and have different star formation histories from their high surface brightness counterparts \citep{1995AJ....109.2019M,1999A&A...342..655G,2008ApJ...681..244B,2009ApJ...696.1834W,2013AJ....146...41S}.
One of the most important parameters for understanding the evolution of galaxies is SFR.
There are many approaches to deriving the SFR, utilizing the luminosity related to young massive stars, such as H$\alpha$, UV, or IR luminosities, or fitting the observed spectral energy distribution with a model \citep{1998ARA&A..36..189K,1998ApJ...509..103S,2005ApJ...632L..79W,2008MNRAS.388.1595D,2008ApJ...686..155Z,2009A&A...507.1793N,2009ApJ...706.1527B,2014MNRAS.438...97W,2016ApJ...825...34J}.
Among those SFR tracers, H$\alpha$ emission is connected with ionizing photons whose wavelengths are shorter than 912{$\rm \AA$}.
These ionizing photons are produced by young stars with ages of less than $\sim$10 $Myr$ and masses higher than 17 {$\rm M_{\sun}$} \citep{2016MNRAS.455.1807W}.
Therefore, compared to the other approaches, the star formation timescale traced by H$\alpha$ emission is shorter.
Recent and ongoing H$\alpha$ image surveys provide a number of resources to study star formation.
The H$\alpha$3 survey is an H$\alpha$ narrow band imaging survey of the Local and Coma Super$-$clusters selected from ALFALFA \citep{2011AJ....142..170H}, which presents the complete recent star formation of {\ion{H}{1}}$-$rich galaxies in the Local Supercluster \citep{2013A&A...553A..89G,2013A&A...553A..91F}.
\citet{2016ApJ...824...25V} completed observations and data reduction for a fall sample of 656 galaxies from the {\ion{H}{1}} Arecibo Legacy Fast ALFA Survey (ALFALFA), with distances between $\sim$20 and $\sim$100 Mpc, but that survey did not focus on LSBGs.
There is an ongoing H$\alpha$ imaging survey of LSBGs selected from the PSS-II catalog \citep{1992AJ....103.1107S}.
However, only 59 LSBGs have been included in the sample of \citet{2011AdAst2011E..12S}.
Consequently, up to now, there have been only a few H$\alpha$ surveys of LSBGs, and the total number of LSBGs with available H$\alpha$ photometry is not large enough to derive conclusive results.
Therefore, we undertake an H$\alpha$ survey to follow up {\ion{H}{1}}$-$selected LSBGs from the 40\% ALFALFA {\ion{H}{1}} survey \citep{2015AJ....149..199D},
and we aim to study the SFR and star formation efficiency (SFE) of the {\ion{H}{1}}$-$selected LSBGs.
There is an empirical relation between the gas surface density ($\Sigma_{gas}=\Sigma_{HI+H_{2}}$) and the SFR surface density ($\Sigma_{SFR}$), namely $\Sigma_{SFR}\varpropto\Sigma_{gas}^{1.4}$. Known as the Kennicutt-Schmidt Law, it reflects the relation between the large$-$scale SFR and the physical conditions in the interstellar medium \citep{1959ApJ...129..243S,1998ApJ...498..541K,2008AJ....136.2846B,2008AJ....136.2782L,2008ApJ...681..244B,2010AJ....140.1194B,2009ApJ...696.1834W,2013ApJ...769L..12L,2016A&A...593A.126B}.
However, such an empirical relation, generally derived on the basis of the samples of normal galaxies, might not be suitable for dwarf galaxies or LSBGs \citep{2012AJ....143..133H}.
\citet{2011ApJ...733...87S} proposed an ``extended Schmidt Law,'' which can be suitable for LSBGs.
In this paper, we present an H$\alpha$ survey for a sample of 111 LSBGs in the fall season in order to explore their SFRs and SFEs.
This paper is organized as follows. In Section 2, we introduce our sample together with a description of the observations and data reduction.
In Section 3, we present the catalog of H$\alpha$ fluxes and some derived parameters.
Results and an analysis are given in Section 4, and a summary is provided in Section 5.
Throughout the paper we adopt a flat $\Lambda$CDM cosmology, with $H_{0}$ = 70 km\ $s^{-1}Mpc^{-1}$ and $\Omega_{\Lambda}$ = 0.7.
\section{SAMPLE OBSERVATIONS AND DATA REDUCTION}
\subsection{Sample}
The ALFALFA Survey is a second-generation blind extragalactic {\ion{H}{1}} survey and provides the first full census of {\ion{H}{1}}-bearing objects over a cosmologically significant volume in the local Universe.
This extragalactic {\ion{H}{1}} survey is especially useful for studying low-mass, gas-rich objects in the local universe \citep{2005AJ....130.2598G,2011AJ....142..170H,2014ApJ...793...40H}.
This survey covers 7000 {$\rm deg^2$} and intends to detect more than 30,000 extragalactic {\ion{H}{1}} sources.
The first release covers 40\% of the ALFALFA survey area and is called $\alpha$.40 \citep{2011AJ....142..170H}.
\citet{2015AJ....149..199D} constructed an LSBG sample with {$\rm \mu_0(B)>22.5$ $mag\ arcsec^{-2}$} from ALFALFA $\alpha.40$ in conjunction with SDSS DR7 photometric data \citep{2009ApJS..182..543A}, with an additional constraint on the axis ratio ($b/a>0.3$) to prevent contamination from edge-on galaxies.
The SDSS pipeline overestimates the level of the sky background and underestimates the total magnitude of galaxies by about 0.2 mag, and this value can reach 0.5 mag for LSBGs \citep{2007ApJ...660.1186L,2013ApJ...773...37H}.
\citet{2015AJ....149..199D} therefore reconstructed the sky background with a better method \citep{1999AJ....117.2757Z,2002AJ....123.1364W,2013ApJ...773...37H} to get more accurate surface brightnesses. The galaxy geometric parameters (e.g., disk scale length in pixels, axis ratio) are fitted with the software GALFIT \citep{2002AJ....124..266P}, and the central surface brightnesses in the g-band and r-band are calculated from auto-magnitudes given by the software SExtractor \citep{1996A&AS..117..393B}.
The central surface brightnesses in the B-band are transformed from the SDSS g- and r-band magnitudes. The final sample includes 1129 {\ion{H}{1}}$-$rich LSBGs, which are defined as the main LSBG sample; hereafter they are referred to as Du2015.
Our sample contains the 111 fall-sky objects from the 1129 galaxies of Du2015
and is located within the region of { $\rm 21^{h} < R.A. < 2^{h} ; $ $\rm 13^{\circ} < Dec. < 16^{\circ}$ and $\rm 23^{\circ} < Dec. < 33^{\circ}$}.
To obtain more accurate SFRs of LSBGs, an H$\alpha$ imaging survey is needed.
We observed the H$\alpha$ images of a sample of 111 LSBGs located in the fall sky.
All members of our LSBG sample belong to the blue cloud and lie on the star formation sequence.
We show the distributions of some photometric and {\ion{H}{1}} parameters, including central surface brightness, heliocentric velocity, distance, radius containing 50$\%$ of the Petrosian flux ($r_{50}$) in the SDSS r-band, {\ion{H}{1}} mass, and stellar mass, of the LSBGs in our fall sample (royal blue) and Du2015 (sky blue) in Figure \ref{fig1}.
All the {\ion{H}{1}} parameters (heliocentric velocity, distance, {\ion{H}{1}} mass) are derived from the $\alpha$.40 catalog, and the heliocentric velocity of the {\ion{H}{1}} source $cz_{\sun}$ is in units of km$\thinspace$$s^{-1}$ \citep{2011AJ....142..170H}.
Central surface brightness and $r_{50}$ and g,r magnitudes are from Du2015.
The stellar mass is derived from the $r$-band magnitude and the $g-r$ color using the formula from \citet{2003ApJS..149..289B}.
The distances used in this paper are estimated with two different approaches \citep{2011AJ....142..170H}.
When the recession velocity (c$z_{\odot}$) of a galaxy is larger than 6000 km$\thinspace$$s^{-1}$, the distance is estimated from c$z_{cmb}$$/$$H_{0}$; for those with c$z_{\odot}$ $<$ 6000 km$\thinspace$$s^{-1}$, a velocity model is used \citep{2011AJ....142..170H} to derive their distances.
The peak of the {\ion{H}{1}} mass distribution of our sample is {$ \rm logM_{HI}[M_{\odot}] \thicksim 9.7$}.
According to \citet{2014ApJ...793...40H} classification,
30$\%$ of LSBGs have high {\ion{H}{1}} mass {($\rm log M_{HI}[M_{\odot}] \geqslant 9.5$ )}, 65$\%$ of LSBGs have medium {\ion{H}{1}} mass {(7.7 $\rm \leqslant log M_{HI}[M_{\odot}] \leqslant 9.5$)}, and only 5$\%$ of LSBGs have low {\ion{H}{1}} mass {(\rm $log M_{HI}[M_{\odot}] \leqslant 7.7$)}.
The peak of the stellar mass is around $10^{8.5}-10^{9}$ $[M_{\odot}]$ .
\begin{figure*}
\epsscale{0.7}
\plotone{f1_dis.eps}
\caption{ Photometric and {\ion{H}{1}} parameters of our sample(royal blue) and the
entire LSBG sample(sky blue) \citep{2015AJ....149..199D} .
The parameters are $B$-band central surface brightness with a bin size of 0.25 mag (top left),
heliocentric velocity with a bin size of 1000 km$\thinspace s^{-1}$ (top right),
distance with a bin size of 10 Mpc (middle left),
radius containing 50$\%$ of flux($r_{50}$) in the SDSS $r$-band derived from \citet{2015AJ....149..199D} (middle right),
{\ion{H}{1}} mass with a bin size of 0.25 (bottom left),
and stellar mass derived from $g$- and $r$-band magnitude with bin sizes of 0.25 (bottom right).
}
\label{fig1}
\end{figure*}
\subsection{Observation}
The observations for this LSBG sample were carried out from 2014 to 2016, and all images of the galaxies in our sample were taken on dark nights.
Both broad $R$-band and H$\alpha$ narrow band images were obtained with the BAO Faint Object Spectrograph and Camera (BFOSC) attached to the 2.16 m telescope at Xinglong observatory of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC).
The CCD frame of BFOSC is 1152$\times$1274 $pixel^2$ with a pixel scale of 0.45 $arcsec$ and a field of view (FOV) of 8.5$\times$9.5 $arcmin^2$.
The observations were made with a gain of 1.08 $e^-$ $\mathrm{ADU^{-1}}$ and a readout noise of 3 $e^-$ $pixel^{-1}$.
The FOV is suitable for acquiring images of galaxies with sizes of less than 3-4 arcmin, because accurate estimation of the sky background is essential for LSBGs.
Each observation adopts the same $R$-band filter and a suitable $H\alpha$ filter.
The effective wavelength $\lambda_{\mathrm{eff}}$ of the broad $R$-band filter is 6407$\rm \AA$.
There is a series of narrow band H$\alpha$ filters whose center wavelengths range from 6533 to 7052 $\rm \AA$ (6533, 6589, 6631, 6701, 6749, 6804, 6851, 6900$\rm \AA$ and 6948, 7000, and 7052$\rm \AA$) with an FWHM of about 55 $\rm \AA$.
All the central wavelengths and FWHMs of the $H\alpha$ filters are shown in Table \ref{tab:table1}.
The transmission curves of narrow $H\alpha$ filters are shown in Figure \ref{fig2}.
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablewidth{-20pt}
\tablecaption{The Properties of $\rm H\alpha$ narrow band filters\label{tab:table1}}
\tablehead { \colhead{Filter}& \colhead{$\lambda_{c}$} & \colhead{FWHM} \\
\colhead{} & \colhead{$\rm \AA$} & \colhead{$\rm \AA$} \\
\colhead{(1)} &\colhead{(2)} &\colhead{(3)} \\ }
\startdata
$\rm H\alpha1$ & 6533 & 55 \\
$\rm H\alpha2$ & 6589 & 53 \\
$\rm H\alpha3$ & 6631 & 62 \\
$\rm H\alpha4$ & 6701 & 53 \\
$\rm H\alpha5$ & 6749 & 52 \\
$\rm H\alpha6$ & 6804 & 54 \\
$\rm H\alpha7$ & 6851 & 54 \\
$\rm H\alpha8$ & 6900 & 55 \\
$\rm H\alpha9$ & 6948 & 58 \\
$\rm H\alpha10$ & 7000 & 54 \\
$\rm H\alpha11$ & 7052 & 56 \\
\enddata
\end{deluxetable}
\begin{figure*}
\epsscale{0.8}
\plotone{f2_haTranAll.eps}
\caption{ Transmission curve of $\rm H\alpha$ filters from $\rm H\alpha$1 to $\rm H\alpha$11.
}
\label{fig2}
\end{figure*}
For each source, the $R$ and H$\rm \alpha$ images were taken with exposures of 300s ($R$) and 1800s ($H\alpha$ narrow band), respectively.
The $R$-band integration time is deep enough to provide continuum subtraction for the narrow band image.
The observation information is listed in Table \ref{table2}.
\subsection{Image Reduction}
First, we check the quality of the images by eye. After that, we reduce the CCD frames, including overscan subtraction, bias subtraction, flat-field correction, and cosmic-ray removal, following the standard image processing procedures of IRAF provided
by NOAO.\footnote{IRAF is the Image Analysis and Reduction Facility made available to the astronomical community by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract with the US National Science Foundation. STSDAS is distributed by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS 5–26555.}
Then, the celestial coordinates are added to each image using $Astrometry.net$.
The next step is sky background construction, which is the most critical step of data reduction.
SExtractor is employed to detect faint or extended objects in the Gaussian-smoothed image.
A mask image is produced after masking all the detected objects.
In order to obtain the large-scale structures of the background, a median filter of 70$\times$70 $pixel^{2}$ is applied to the mask image to reduce the random noise and to fill in the mask regions with surrounding sky regions.
The constructed sky background image is subtracted from each image.
Figure \ref{fig3} shows an example of the original image, the constructed sky background, and the background-subtracted image; all three images are shown with the same value scale range.
We can see that the sky background reflects vignetting and a non-uniform distribution.
We also compare the fluctuations of the original and sky-subtracted images in Figure \ref{fig4}.
From Figure \ref{fig4}, the median of the background distribution after sky subtraction is closer to 0.
The fluctuation of the sky-subtracted image (blue solid line) is much smaller than that of the original image (black dashed line).
\begin{figure*}
\epsscale{0.8}
\plotone{f3_102627R_sky.eps}
\caption{Example of the sky background subtraction of LSBG AGC 102672.
All images are 9$\arcmin$.0 $\times$ 8$\arcmin$.3 in size, and the length of the yellow line is 2$\arcmin$.
The left panel is the original $R$-band image. The middle panel is the constructed $R$-band sky background, and the right panel is the sky-background-subtracted image. All three images are in the same scale range.}
\label{fig3}
\end{figure*}
\begin{figure*}
\epsscale{0.8}
\plotone{f4_20140826_102672Rsky_forpaper.eps}
\caption{Example of distributions of the global background fluctuations of LSBG AGC 102672 before (black dashed line) and after (blue solid line) background subtraction. A gaussian fitting is applied to the two distributions. The upper portion of the panel gives the mean values and standard deviations of the two distributions.}
\label{fig4}
\end{figure*}
Since the H$\alpha$ images contain contributions from both H$\alpha$ emission and the underlying stellar continuum,
it is also important to remove the stellar continuum to get the real H$\alpha$ emission.
Here, we adopt the $R$-band image as the stellar continuum, because the wavelength coverage of the $R$-band is wide enough to be dominated by the stellar continuum.
In order to subtract the continuum from the observed H$\alpha$ frames, we must scale the continuum flux of the H$\alpha$ image to the same level as that of the $R$-band image.
In this process, we assume that field stars have no H$\alpha$ emission, which means that they should have the same continuum flux ratio between the H$\alpha$ and $R$-band images.
We define the count ratio of the wide $R$-band and narrow H$\alpha$ band as WNCR:
\begin{equation}
WNCR = \frac{c_{W,cont}}{c_{N,cont}}
\label{equation1}
\end{equation}
Here, $c_{W,cont}$ and $c_{N,cont}$ are the measured counts in the wide $R$-band and narrow $H\alpha$ band filters.
Statistically, the median WNCR of these field stars can be treated as the scale factor for subtracting the continuum from the H$\alpha$ image.
To obtain an accurate WNCR, we perform aperture photometry using SExtractor, with radii of 5 times the FWHM of the point-spread function of the stars in each image, and select field stars with S$/$N greater than 20.
To match the continuum, the $\rm H\alpha$ image is multiplied by WNCR and the $R$-band image is then subtracted.
The scaling factor is fine-tuned around WNCR to find the best value.
Finally, the continuum is considered removed from the scaled H$\alpha$ image when the residual fluxes of most of the selected field stars reach a minimum.
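As an illustration of this scaling step, the following Python sketch (with hypothetical variable and function names; it assumes the field-star aperture counts have already been measured in both bands) computes the median WNCR and forms a continuum-free emission-line image:
\begin{verbatim}
import numpy as np

def subtract_continuum(narrow_img, wide_img, star_counts_narrow, star_counts_wide):
    """Scale the narrow-band (Halpha) image to the R-band continuum level
    via the median WNCR of field stars, then remove the continuum."""
    # WNCR = c_{W,cont} / c_{N,cont}, median over the selected field stars
    wncr = np.median(np.asarray(star_counts_wide) /
                     np.asarray(star_counts_narrow))
    # Emission-line image at the R-band count level: WNCR * N - W.
    # In practice the factor is fine-tuned around WNCR until the residuals
    # of the field stars reach a minimum.
    line_img = wncr * np.asarray(narrow_img) - np.asarray(wide_img)
    return line_img, wncr
\end{verbatim}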
The scaling values we use are derived from field stars; however, the appropriate scaling value for the target galaxies is somewhat different,
because the color of the studied galaxy differs from that of the field stars.
This color effect can cause errors, leading to underestimates as large as 40$\%$ and overestimates as large as 10$\%$ when measuring the $\rm H\alpha$ equivalent width \citep{2012MNRAS.419.2156S}.
To quantify these errors, we selected stars of different spectral types (F, G, K) from the MILES stellar library.
Because all our sample galaxies are located at high Galactic latitude (82$\%$ of the sample is above $30\degr$) and M stars are too faint, only F, G, and K stars were considered; the resulting WNCR error ranges from an underestimate of up to 7$\%$ to an overestimate of up to 7$\%$.
Figure \ref{fig5} shows the $R$-band, H$\alpha$ narrow band and continuum-subtracted H$\alpha$ images of LSBG AGC 102243 from left to right, as an example.
\begin{figure*}
\epsscale{0.8}
\plotone{f5_subcontinue102243.eps}
\caption{ Example images of LSBG AGC 102243 showing the process of continuum subtraction. $R$-band, H$\alpha$ band, and continuum-subtracted H$\alpha$ images are shown in this figure from left to right.}
\label{fig5}
\end{figure*}
Since Du2015 was derived from the SDSS survey, the flux calibration of the observed broad and narrow band images is based on SDSS photometry.
The field stars with $S/N > 20$ in both SDSS and our $R$-band image are selected for flux calibration.
Here, the aperture magnitudes in the SDSS $r$-band and $i$-band are used to calculate the Johnson $R$-band magnitudes based on Equation \ref{eq:2} \citep{2005AAS...20713308L}.
The Johnson $R$-band magnitude is transformed to the $AB$ magnitude system with Equation \ref{eq3} \citep{1994AJ....108.1476F}. Then, the $AB$ magnitude is transformed to a flux density with Equation \ref{eq4}.
\begin{equation}
R = r - 0.2936(r - i)-1.439\ ; \sigma = 0.0072 \label{eq:2}
\end{equation}
\begin{equation}
R(AB) = R + 0.055\label{eq3}
\end{equation}
\begin{equation}
m_{AB} = -2.5\ log_{10}(\frac{f_{\nu}}{3631\ \rm Jy})\label{eq4}
\end{equation}
Based on this formula, we derive the averaged calibration factor (flux density per count) of each image, which is then applied to calibrating the photometric fluxes in both $R$-band and continuum-subtracted H$\alpha$ images.
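A minimal Python sketch of this calibration chain, assuming hypothetical arrays of SDSS aperture magnitudes and measured counts for the calibration stars (the function names are illustrative, not part of our pipeline), could look as follows:
\begin{verbatim}
import numpy as np

def sdss_to_R_flux_density(r_mag, i_mag):
    """SDSS r, i magnitudes -> Johnson R-band flux density
    (erg s^-1 cm^-2 Hz^-1), following the three equations above."""
    R = r_mag - 0.2936 * (r_mag - i_mag) - 1.439   # Johnson R
    R_ab = R + 0.055                               # to the AB system
    f_nu_jy = 3631.0 * 10.0 ** (-0.4 * R_ab)       # Jy, from the m_AB definition
    return f_nu_jy * 1.0e-23                       # 1 Jy = 1e-23 erg s^-1 cm^-2 Hz^-1

def calibration_factor(star_counts, r_mags, i_mags):
    """Average flux density per count over the calibration stars."""
    flux = sdss_to_R_flux_density(np.asarray(r_mags), np.asarray(i_mags))
    return np.mean(flux / np.asarray(star_counts))
\end{verbatim}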
\subsection{Photometry}
An elliptical aperture is adopted to perform photometry on both $R$-band and H$\alpha$ band images.
First, the broad $R$-band image is used to determine the photometric radius.
Using the IRAF task ellipse, we obtain the profile of the total flux counts enclosed by an elliptical aperture along the semimajor axis.
Then, the semimajor axis ($a$) and semiminor axis ($b$) at which the growth curve reaches 25 $\rm mag\ arcsec^{-2}$ are adopted as the optical photometry radius.
The H$\alpha$ flux is the total flux enclosed by this elliptical aperture.
Of the 111 objects in total, 19 cannot be detected because of their weak $\mathrm{H\alpha}$ emission.
\section{H$\alpha$ Flux of LSBGs}
\subsection{Flux Correction }
To account for the H$\alpha$ filter transmission, we adopt the transmission curves of the $H\alpha$ filters shown in Figure \ref{fig2} and correct for the transmission loss they introduce.
The normalized transmission $\rm T(H\alpha)$ used for correcting the flux is derived from the equation,
\begin{equation}
T(H\alpha) =\frac{ T'(H\alpha)}{ \int_{\lambda1}^{\lambda2}{T'(\lambda)d\lambda}/ FWHM}
\end{equation}
where $T'(\lambda)$ is the transmission curve,
$\rm T'(H\alpha)$ is the direct transmission at galaxy-redshifted H$\alpha$ wavelength from the transmission curve,
$T(H\alpha)$ is the normalized transmission at the galaxy-redshifted H$\alpha$ wavelength,
and $\lambda1$ and $\lambda2$ are the starting and ending wavelengths of the transmission curve.
FWHM is the full width at half maximum of the H$\alpha$ filters.
The corrected H$\alpha$ flux is obtained after dividing by the normalized transmission $\rm T(H\alpha)$.
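The transmission correction can be sketched in Python as below; the function names and the simple FWHM estimator are illustrative assumptions rather than a description of our actual pipeline:
\begin{verbatim}
import numpy as np

def estimate_fwhm(wave, trans):
    """Rough FWHM of a filter transmission curve: width above half the peak."""
    wave, trans = np.asarray(wave), np.asarray(trans)
    above = wave[trans >= 0.5 * trans.max()]
    return above.max() - above.min()

def transmission_correction(flux_obs, wave, trans, z, lam_halpha=6562.8):
    """Correct an observed Halpha flux for the filter transmission at the
    galaxy's redshifted Halpha wavelength, using the normalization above."""
    fwhm = estimate_fwhm(wave, trans)
    lam_gal = lam_halpha * (1.0 + z)              # redshifted Halpha wavelength
    t_direct = np.interp(lam_gal, wave, trans)    # T'(Halpha)
    norm = np.trapz(trans, wave) / fwhm           # int T'(lambda) dlambda / FWHM
    t_norm = t_direct / norm                      # normalized transmission T(Halpha)
    return flux_obs / t_norm
\end{verbatim}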
The bandwidth of the $R$-band filter we used is wide enough that, apart from the stellar continuum, the observed flux in the $R$ filter still contains a contribution from H$\alpha$ emission, which results in a loss of H$\alpha$ flux during the stellar continuum subtraction.
Fortunately, such a loss can be estimated (about 4\%) and corrected according to the bandwidth of both the $R$ and H$\alpha$ filters.
The extinctions for the galaxies in our sample include the contributions from both Galactic and intrinsic extinctions.
For nearby galaxies, their H$\alpha$ emission feature is covered by the SDSS $r$ filter.
Therefore, we adopt the extinction value in the SDSS $r$-band to correct the observed H$\alpha$ flux for Galactic extinction.
Generally, intrinsic extinction correction is derived from the Balmer emission line ratio of $\rm F_{ H\alpha}/F_{H\beta}$.
The color excess E(B-V) can be derived from [$\rm F_{H\alpha}/F_{H\beta}$]/[$F_{H\alpha 0}/F_{H\beta 0}$] according to the CCM extinction law \citep{1989ApJ...345..245C}.
Here, we adopt the intrinsic ratio $\rm F_{H\alpha 0}/F_{H\beta 0}$ = 2.87 for \ion{H}{2} galaxies; the extinction correction of the H$\alpha$ flux is then calculated from $\rm A_{H\alpha}$ = 2.468E(B-V) \citep{2001PASP..113.1449C}.
However, only 20$\%$ of the LSBGs in our fall sample have nuclear fiber spectra from SDSS.
Therefore, we have to adopt the same extinction correction for all sample LSBGs and assume that there is no extinction gradient.
In total, 510 LSBGs from Du2015 have available SDSS spectra and Balmer ratio $\rm F_{H\alpha}/F_{H\beta}$ derived from the MPA-JHU catalog of SDSS DR7.
Finally, we adopt a median value of $\rm F_{H\alpha}/F_{H\beta}=3.1493$ for the 510 LSBGs as the extinction correction for all sample LSBGs.
Owing to the approximately 60$\AA$ FWHM bandwidth of these H$\alpha$ filters, the [\ion{N}{2}]$\lambda\lambda$6548, 6584 features also contribute to the obtained H$\alpha$ images.
We remove these [\ion{N}{2}] features following Equation \ref{eq6}, with the assumption of a fixed ratio of [\ion{N}{2}]/H$\alpha$ for all the galaxies.
\begin{equation}
f_{H\alpha,corr[NII]}=\frac{f_{H\alpha+[NII]}}{1+\frac{f_{[NII]}}{f_{H\alpha}}}\label{eq6}
\end{equation}
Similar to intrinsic extinction correction, we take the median ratio 0.1578 of [\ion{N}{2}]/H$\alpha$ for all 510 LSBGs with available SDSS fiber spectra, and apply it to [\ion{N}{2}] correction for all the galaxies in our sample.
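A compact Python sketch of the adopted [\ion{N}{2}] and extinction corrections is given below; the CCM extinction-curve values at H$\alpha$ and H$\beta$ are assumed, approximate numbers quoted only for illustration:
\begin{verbatim}
import numpy as np

# Assumed (approximate) CCM (1989) extinction-curve values at Hbeta and Halpha
K_HBETA, K_HALPHA = 3.61, 2.53
BALMER_INTRINSIC = 2.87      # intrinsic Halpha/Hbeta adopted for HII galaxies
BALMER_MEDIAN = 3.1493       # median observed ratio of the 510 Du2015 LSBGs
NII_TO_HALPHA = 0.1578       # median [NII]/Halpha of the same LSBGs

def correct_halpha_flux(f_obs, balmer_ratio=BALMER_MEDIAN):
    """Remove the [NII] contribution and apply the intrinsic extinction
    correction A_Halpha = 2.468 E(B-V) to an observed Halpha+[NII] flux."""
    f_halpha = f_obs / (1.0 + NII_TO_HALPHA)                 # [NII] removal
    ebv = 2.5 / (K_HBETA - K_HALPHA) * np.log10(balmer_ratio / BALMER_INTRINSIC)
    a_halpha = 2.468 * max(ebv, 0.0)                         # mag; no negative extinction
    return f_halpha * 10.0 ** (0.4 * a_halpha)
\end{verbatim}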
\subsection{ H$\alpha$ Flux and Reliability}
After all the corrections above, we get the total H$\alpha$ flux for each LSBG.
In order to compare with previous works, we check eight LSBGs from our spring sample that also belong to the H$\alpha$3 survey \citep{2015A&A...576A..16G}.
Figure \ref{fig6} shows a comparison between the LSBG fluxes estimated by us and those derived from the H$\alpha$3 survey,
and the upper panel shows the ratio between the H$\alpha$3 survey flux and ours.
The differences between them are around 0.1 dex and less than 0.18 dex.
Roughly speaking, these two calibrated H$\alpha$ fluxes are consistent.
\begin{figure*}
\epsscale{0.8}
\plotone{f6_com_flux8_slope_ratio.eps}
\caption{ Comparison of the H$\alpha$ flux of eight LSBGs between our measurements and the H$\alpha$3 survey.
The upper panel is the ratio value ($Flux_{H\alpha 3}/Flux_{our sample}$) between H$\alpha 3$ flux and our sample. }
\label{fig6}
\end{figure*}
Since 20$\%$ of the LSBGs in our fall sample have SDSS fiber spectra, the H$\alpha$ flux can also be derived directly from the MPA-JHU catalog.
We first measure the $H\alpha$ flux on the image within the SDSS fiber diameter (3$\arcsec$) and then compare it with the H$\alpha$ flux from the SDSS fiber spectra in Figure \ref{fig7}.
Most of the H$\alpha$ fluxes are consistent.
There are two objects that deviate far from the SDSS fiber flux.
After checking the $H\alpha$ images, we found that there is no detectable $H\alpha$ emission where the fiber is located.
\begin{figure*}
\epsscale{0.8}
\plotone{f7_com_flux_mpa3.eps}
\caption{ Comparison of our sample flux from our H$\alpha$ image and SDSS fiber spectra flux within 3$\arcsec$.
The upper panel shows the flux ratio between the H$\alpha$ image and SDSS spectra. }
\label{fig7}
\end{figure*}
We also check the SFR of these LSBGs.
Due to the 3$\arcsec$ fiber diameter, an aperture correction is needed to get the total H$\alpha$ flux of the whole galaxy.
Here, we assume that the H$\alpha$ emission follows the same distribution as the SDSS $r$-band image. The value of the aperture correction can
be calculated from the difference between the fiber and Petrosian magnitudes in $r$-band as follows:
\begin{equation}
F_{Petro}/F_{Fiber}=10^{-0.4(m_{petro}-m_{fiber})}.
\end{equation}
Here, $\rm m_{petro}$ and $\rm m_{fiber}$ are Petrosian and fiber magnitudes in the $r$-band, respectively.
$\rm F_{Fiber}$ represents the H$\alpha$ flux of a galaxy in the given fiber aperture,
whereas $F_{petro}$ is the total H$\alpha$ flux inside the Petrosian aperture.
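For illustration, this aperture correction may be written as the following short Python function (the argument names are hypothetical):
\begin{verbatim}
def aperture_correction(f_fiber_halpha, m_petro_r, m_fiber_r):
    """Scale an SDSS-fiber Halpha flux to the whole galaxy, assuming the
    Halpha emission follows the r-band light distribution."""
    factor = 10.0 ** (-0.4 * (m_petro_r - m_fiber_r))   # F_Petro / F_Fiber
    return f_fiber_halpha * factor
\end{verbatim}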
H$\alpha$ emission traces the location of the star formation region and also provides a fairly robust quantitative measure of its current SFR.
The SFR of the LSBGs in our sample is calculated from the H$\alpha$ luminosity using the following calibration \citep{1998ApJ...498..541K}:
\begin{equation}
SFR_{H\alpha}(M_{\sun}\ yr^{-1}) = 7.9 \times 10^{-42}[L(H\alpha)](erg\ s^{-1})
\end{equation}
where L(H$\alpha$) is the intrinsic extinction-corrected H$\alpha$ luminosity.
The initial mass function used in the conversion is a Salpeter function $[dN(m)/dm=-2.35]$ over $m=0.1-100 M_{\sun}$.
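The conversion from a corrected H$\alpha$ flux and a distance to an SFR can be sketched in Python as follows (the constant and function names are illustrative):
\begin{verbatim}
import numpy as np

CM_PER_MPC = 3.0857e24   # centimeters per megaparsec

def sfr_from_halpha(f_halpha, dist_mpc):
    """SFR (Msun/yr) from an extinction-corrected Halpha flux (erg s^-1 cm^-2)
    and a distance in Mpc, using the Kennicutt (1998) calibration."""
    lum = 4.0 * np.pi * (dist_mpc * CM_PER_MPC) ** 2 * f_halpha   # L(Halpha), erg s^-1
    return 7.9e-42 * lum

# Example: sfr_from_halpha(3.2e-14, 75.0) gives about 0.17 Msun/yr.
\end{verbatim}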
Figure \ref{fig8} shows a comparison between the SFRs of LSBGs calculated from the H$\alpha$ images and the H$\alpha$ spectra.
For most of the LSBGs in our sample, the SFRs derived from SDSS spectra are lower than those from the H$\alpha$ images, and there are two LSBGs (AGC 101812, AGC 112503) showing large deviations, probably due to the aperture correction.
Checking the SDSS images of AGC 101812 and AGC 112503 shows that several bright blue knots exist outside of the fiber region.
Thus, the aperture corrections largely underestimate the total H$\alpha$ emission.
Therefore, it is inadequate to calculate the total H$\alpha$ flux of an entire galaxy solely from the fiber spectrum.
\begin{figure*}
\epsscale{0.8}
\plotone{f8_sfr.eps}
\caption{ Comparison of the SFRs of 22 LSBGs derived from our H$\alpha$ images with those derived from aperture-corrected SDSS fiber spectra.}
\label{fig8}
\end{figure*}
All the H$\alpha$ flux and other basic parameters of LSBGs are listed in Table \ref{table3}.
The table columns can be briefly described as follows:\\
Column 1 $:$ galaxy name in terms of AGC number.\\
Column 2 $:$ the semimajor axis from the elliptical photometry (kpc), which is the radius at 25 $\rm mag\ arcsec^{-2}$. \\
Column 3 $:$ the ellipticity from the elliptical photometry.\\
Columns 4 and 5 $:$ the logarithm of the H$\alpha$ flux and its error ($erg\ s^{-1} cm^{-2}$).\\
Column 6 $:$ the logarithm of the SFR ($M_{\sun} yr^{-1}$).\\
Column 7 $:$ the logarithm of the SFR surface density ($M_{\sun} yr^{-1} kpc^{-2}$).\\
Column 8 $:$ the logarithm of the {\ion{H}{1}} mass taken from the $\alpha.$40 catalog \citep{2011AJ....142..170H}.\\
Column 9 $:$ the logarithm of the {\ion{H}{1}} gas surface density ($M_{\sun} pc^{-2}$).
We explore the SFR, the SFR surface density, the {\ion{H}{1}} gas mass, and the {\ion{H}{1}} gas surface density in the next section.
\section{Results and Analysis }
\subsection{The Star Formation and Gas Surface Density}
For each LSBG in our sample, the region enclosed by the elliptical photometry aperture is used as the optical area to calculate the star formation surface density ($\Sigma_{SFR}$).
For the majority of the targets, the beam size of the ALFALFA {\ion{H}{1}} observations is 3.5 arcmin, which is too large to yield a reliable {\ion{H}{1}} size.
Hence, we have to derive the {\ion{H}{1}} size from the calibrated optical photometry size.
$r_{HI}/r_{25}$ is almost constant (1.7$\pm$0.5) and shows weak dependence on the type from S0 to Im \citep{1997A&A...324..877B,2002A&A...390..863S,2015ApJ...808...66J}. We adopt 1.7 times the optical photometry radii as the {\ion{H}{1}} radii.
Hence, the {\ion{H}{1}} surface density $\Sigma_{HI}$ is calculated from the following Equation:
\begin{equation}
\Sigma_{HI}=\frac{M_{HI}}{\pi\ (1.7^{2}ab)}
\end{equation}
Here, $M_{HI}$ is the {\ion{H}{1}} mass derived from the ALFALFA catalog, and $a$ and $b$ are the semimajor and semiminor axes of the photometric ellipses, respectively.
SFE is defined as the ratio of SFR and gas mass. Generally, the gas in a normal galaxy consists of ionized, atomic, and molecular gas.
Since our sample is an {\ion{H}{1}}-selected sample and lacks molecular observations, we only calculate $SFE_{HI}$ as follows:
\begin{equation}
SFE_{HI}= \frac{SFR}{M_{HI}}
\end{equation}
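A minimal Python sketch of these two quantities, assuming the photometric semi-axes are given in kpc and using illustrative function names, is shown below:
\begin{verbatim}
import numpy as np

def hi_surface_density(m_hi_msun, a_kpc, b_kpc):
    """Sigma_HI (Msun/pc^2) for an HI disk taken to be 1.7x the optical
    photometric ellipse with semi-axes a and b given in kpc."""
    area_pc2 = np.pi * 1.7 ** 2 * a_kpc * b_kpc * 1.0e6   # kpc^2 -> pc^2
    return m_hi_msun / area_pc2

def sfe_hi(sfr_msun_yr, m_hi_msun):
    """HI-based star formation efficiency SFE_HI = SFR / M_HI (yr^-1)."""
    return sfr_msun_yr / m_hi_msun
\end{verbatim}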
The distributions of the SFR, $SFE_{HI}$ and $\Sigma_{SFR}$, $\Sigma_{HI}$ are shown in Figure \ref{fig9}.
For comparison, we also show the distributions for samples of star forming and starburst galaxies.
In panels (a) and (b), star-forming galaxies are from \citet{1996AJ....112.1903Y}, and starburst galaxies are from \citet{2015ApJ...808...66J}.
In panels (c) and (d), both star forming galaxies and starburst galaxies are derived from \citet{1998ApJ...498..541K}.
Compared with star-forming and starburst galaxies, both the SFR and $SFE_{HI}$ of LSBGs are lower than those of star-forming galaxies by approximately one order of magnitude, and far lower than those of starburst galaxies. Furthermore, the SFR surface densities $\Sigma_{SFR}$ of LSBGs are even more than one order of magnitude lower than those of star-forming galaxies.
\begin{figure*}
\epsscale{1.0}
\plotone{f9_sfr_seghi.eps}
\caption{ Distributions of (a) star formation rate; (b) star formation efficiency, SFE=SFR/mass({\ion{H}{1}}); (c) star formation surface density; (d) gas ({\ion{H}{1}}) surface density.
Blue represents the LSBGs in this paper.
The black and red colors in (a) and (b) represent star forming galaxies from \citet{1996AJ....112.1903Y} and starburst galaxies from \citet{2015ApJ...808...66J}.
The green (c) and purple (d) represent star forming galaxies and starburst galaxies from \citet{1998ApJ...498..541K}, respectively.}
\label{fig9}
\end{figure*}
\subsection{Kennicutt-Schmidt Law}
Figure \ref{fig10} shows the relation between SFR surface density and {\ion{H}{1}} surface density ($\Sigma_{HI}$) .
The blue symbols are star forming (disk) galaxies from \citet {1998ApJ...498..541K}.
The red circles are galaxies belonging to the Local supercluster \citep{2012A&A...545A..16G} and the black points are LSBGs in our sample.
The orange stars are LSBGs from \citet{2009ApJ...696.1834W}.
Following \citet{2003ApJ...588..230O}, we plot dotted lines with SFEs of $1\%$, $10\%$, and $100\%$ in a timescale of star formation of $10^{8}$ yr, corresponding to typical orbital timescales in galaxies.
The Kennicutt-Schmidt law is plotted as a black solid line.
The coverage of our LSBGs is similar to that of the \citet{2009ApJ...696.1834W} LSBGs, but extends toward even lower star formation surface densities.
From Figure \ref{fig10}, LSBGs and star-forming galaxies occupy the same region of {\ion{H}{1}} surface density, but LSBGs have much lower SFR surface densities than star-forming galaxies.
Galaxies in the Local Supercluster have a more diffuse $\Sigma_{HI}$ distribution.
\begin{figure*}
\epsscale{0.9}
\plotone{f10_hi_kenicutt.eps}
\caption{Relation between SFR surface density and {\ion{H}{1}} surface density.
The black dots are from this paper, the yellow diamonds are LSBGs from \citet{2009ApJ...696.1834W}, and the blue dots are star forming galaxies from \citet{1998ApJ...498..541K}.
The red circles are galaxies in the Local Supercluster in the $H\alpha$3 survey from \citet{2012A&A...545A..16G}.
The black solid line is the Kennicutt-Schmidt Law, and the three dotted lines show the {\ion{H}{1}} SFEs of 100\%,10\%,1\% in a timescale of star formation of $10^{8}$ yr.}
\label{fig10}
\end{figure*}
Several previous works have tried to detect CO emission in LSBGs.
However, most of them only gave upper limits on the CO content, and molecular gas has been detected in only a few LSBGs \citep{2001ApJ...549L.191M,2003ApJ...588..230O,2005AJ....129.1849M,2010A&A...523A..63D}.
\citet{2017arXiv170801362C} observed CO (2-1) in nine LSBGs from Du2015 with the JCMT,
but none of them was detected in CO (2-1) emission, so only upper limits on $M_{H_{2}}$ are given.
The $M_{H_{2}}/M_{HI}$ ratios are less than 0.02, which indicates a shortage of molecular gas in LSBGs \citep{2017arXiv170801362C}.
\citet{2008AJ....136.2846B} derived a correlation between the SFR surface density and the $\rm H_{2}$ surface density,
\begin{equation}
\Sigma_{SFR} =10^{-2.1\pm0.2} \Sigma_{H2}^{1.0\pm0.2}
\end{equation}
which allows us to estimate the approximate $\rm H_{2}$ surface density from this relation.
Even though the $H_{2}$ gas is not distributed in the same way as the {\ion{H}{1}} gas \citep{2008AJ....136.2782L,2011A&A...534A.102L}, the relation above can be used as a rough estimate of $\Sigma_{H_{2}}$.
To get accurate values, future interferometric {\ion{H}{1}} and CO data are necessary.
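For reference, a rough Python sketch of this estimate, obtained simply by inverting the relation above with its nominal coefficients, is given below; it is only an order-of-magnitude tool, not a substitute for interferometric data:
\begin{verbatim}
def sigma_h2_from_sfr(sigma_sfr):
    """Rough Sigma_H2 (Msun/pc^2) from Sigma_SFR (Msun yr^-1 kpc^-2), inverting
    Sigma_SFR = 10^(-2.1) * Sigma_H2^(1.0) with its nominal coefficients."""
    return sigma_sfr / 10.0 ** (-2.1)

def total_gas_surface_density(sigma_hi, sigma_sfr):
    """Sigma_gas = Sigma_HI + estimated Sigma_H2 (both in Msun/pc^2)."""
    return sigma_hi + sigma_h2_from_sfr(sigma_sfr)
\end{verbatim}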
\begin{figure*}
\epsscale{0.9}
\plotone{f11_sfe_kenicutt.eps}
\caption{ Relation between SFR surface density and {\ion{H}{1}} surface density.
Our LSBG sample is shown as black solid circles ({\ion{H}{1}} gas surface density) and red circles (total gas surface density).
The blue dots are star forming galaxies and the green dots are starburst galaxies; both are from \citet{1998ApJ...498..541K}.
The black solid line is the Kennicutt-Schmidt Law, and the three dotted lines show the {\ion{H}{1}} SFEs of 100\%,10\%,1\% on a timescale of star formation of $10^{8}$ yr.
The pink line is the mean value of the LSBG gas surface density and the brown dashed line is the upper boundary of the low gas surface density of 10 $ M_{\odot}pc^{-2}$.
}
\label{fig11}
\end{figure*}
From Figure \ref{fig11}, the gas surface density $\Sigma_{HI+H_{2}}$ (red circles) is very close to $\Sigma_{HI}$ (black dots), which is consistent with our previous assumption: {\ion{H}{1}} dominates the gas content of our LSBGs.
All LSBGs are located in the cutoff region, deviating from the Kennicutt-Schmidt law (black line), which is derived from the star-forming (blue dots) and starburst galaxies (green dots).
According to the dotted lines of constant SFE, starburst galaxies have SFEs higher than 10$\%$, and star-forming galaxies have SFEs a little lower than 10\%, but still much higher than 1$\%$. Though a small number of LSBGs blend with the star-forming galaxies, LSBGs have SFEs far below those of star-forming galaxies, around 1\% for most of them. In some extreme cases, the SFE can even be lower than 0.1$\%$.
There is a special LSBG, AGC 748765, whose SFE is far above 10$\%$. It has an extremely luminous \ion{H}{2} region in its disk.
\citet{2012ARA&A..50..531K} pointed out that the gas surface density can crudely be divided into three regions$:$ low density ($\Sigma_{gas}<10 M_{\odot}pc^{-2}$), intermediate density($10 M_{\odot}pc^{-2} < \Sigma_{gas}< 100-300M_{\odot}pc^{-2}$), and high density ($\Sigma_{gas}>100-300 M_{\odot}pc^{-2}$).
Although the SFR surface density of LSBGs spans more than three orders of magnitude, their gas surface densities fall in a narrow region within one order of magnitude, from 1 to 10 $M_{\odot}pc^{-2}$.
The SFR surface density of LSBGs does not show any dependence on gas or {\ion{H}{1}} surface density.
The brown line is the upper limit for the low-density region in Figure \ref{fig11}.
The mean gas surface density for LSBGs in our sample ($\Sigma_{gas}= 4.1 M_{\odot}pc^{-2}$) is shown as a pink line in Figure \ref{fig11}.
As expected, LSBGs are located in the low-density region. However, many star-forming galaxies are also located in the low-density region, but with higher SFR surface densities.
The tight relation between the SFR and molecular gas \citep{2004ApJS..152...63G,2008AJ....136.2846B} demonstrates that molecular gas could still dominate the gas in star-forming galaxies. From Figure \ref{fig11}, the turnoff point of the K-S Law is at around $\Sigma_{gas}=4 M_{\odot}pc^{-2}$, which is almost the lowest gas density of star-forming galaxies, and also similar to the mean gas surface density of LSBGs.
What causes the SFR surface density to be so widely distributed in such low-density regions is worth exploring in future work.
\subsection{Star Formation History }
To characterize the evolutionary status of the star formation in galaxies, we use the specific star formation rate ($sSFR=SFR/M_{*}$) and the {\ion{H}{1}} depletion time ($t_{dep}(HI)=M_{HI}/SFR$) to study the star formation history of LSBGs.
The stellar mass is derived from the $g$- and $r$-band magnitudes from Du2015 following the equation $\log(M_{*}/M_{\sun})=-0.306+1.097(g-r)+\log(L_{r}/L_{\sun})$ \citep{2003ApJS..149..289B}.
{\ion{H}{1}} depletion time and sSFR relation are shown in Figure \ref{fig12}.
The red circles are galaxies from the Local Supercluster \citep{2012A&A...545A..16G} and the black solid circles are our LSBGs.
The dashed line represents $\log(sSFR) = -10.1367$, the value at which a galaxy can build up its current stellar mass at its current SFR within the Hubble time.
Here, a Hubble time of 13.7 Gyr is adopted \citep{2007ApJS..170..377S}.
The dashed line is the boundary between the active phase of galaxies and the quiescent phase.
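A short Python sketch of these diagnostics (the stellar-mass relation, the sSFR, the depletion time, and the active/quiescent boundary) is given below; the function names are illustrative:
\begin{verbatim}
import numpy as np

HUBBLE_TIME_YR = 13.7e9   # adopted Hubble time in years

def log_stellar_mass(g_mag, r_mag, lum_r_lsun):
    """log10 M* (Msun) from the g-r color and the r-band luminosity (Lsun),
    following the Bell et al. (2003) relation quoted above."""
    return -0.306 + 1.097 * (g_mag - r_mag) + np.log10(lum_r_lsun)

def ssfr_and_tdep(sfr_msun_yr, m_star_msun, m_hi_msun):
    """Specific SFR (yr^-1) and HI depletion time (yr)."""
    return sfr_msun_yr / m_star_msun, m_hi_msun / sfr_msun_yr

def is_active(ssfr):
    """True if a galaxy lies above the sSFR = 1 / (Hubble time) boundary
    (log sSFR ~ -10.14), i.e. in the active phase."""
    return ssfr > 1.0 / HUBBLE_TIME_YR
\end{verbatim}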
On average, the current SFRs of galaxies in the Local Supercluster cannot account for their current stellar masses, even though they present higher SFRs than LSBGs.
Galaxies in the Local Supercluster should therefore have experienced intensive star formation events once or several times in their star formation histories.
Most LSBGs lie around the dashed line, and some LSBGs are in the active phase.
Even with such low current SFRs, most LSBGs can still build up their current stellar masses over the age of the universe.
They do not require strong interactions or major mergers.
A stochastic and sporadic star formation scenario could explain such low and stable star formation histories \citep{1995MNRAS.274..235D,2015MNRAS.446.4291L}.
The lower number density environment of LSBGs may indicate that they seldom experience galactic interactions or mergers \citep{2015AJ....149..199D}. This is supported by the stellar populations with ages of around 2 Gyr in LSBGs \citep{2017ApJ...837..152D}.
The high {\ion{H}{1}} depletion times $t_{dep}(HI)$ of our LSBGs suggest that they will have an abundant supply of {\ion{H}{1}} in the future.
\begin{figure*}
\epsscale{0.9}
\plotone{f12_t_ssfr.eps}
\caption{
$t_{dep}$({\ion{H}{1}}) vs. sSFR diagram. The black solid circles are our LSBG sample, and the red open circles are galaxies in the Local Supercluster from the $H\alpha$3 survey \citep{2012A&A...545A..16G}.
}
\label{fig12}
\end{figure*}
\section{SUMMARY}
We performed a narrow band H$\alpha$ imaging survey for LSBGs selected from the 40\% ALFALFA extragalactic {\ion{H}{1}} survey.
A sample of 111 LSBGs in the fall sky has been observed with the Xinglong 2.16m telescope.
The LSBGs in this sample have recession velocities ranging from 1012 to 9889 km$\thinspace$$s^{-1}$ and {\ion{H}{1}} masses from $log_{10}M_{HI}=7.73$ to $log_{10}M_{HI}=10.14$. The H$\alpha$ fluxes of 92 objects are measured using IRAF ellipse photometry.
The derived total H$\alpha$ fluxes and corresponding SFRs are listed in Table \ref{table3}.
All the LSBGs in our sample have blue features that are similar to those of other LSBG samples.
They have lower SFRs, lower SFEs, lower star formation surface densities, lower gas surface densities and similar {\ion{H}{1}} surface densities compared with normal star forming galaxies.
Most LSBGs lie in low surface density regions and fall below the Kennicutt$-$Schmidt relation.
Their SFR surface densities span about three orders of magnitude, and their SFEs are around 1$\%$ or even lower.
To characterize the star formation histories of LSBGs, we adopt the parameters $t_{dep}$({\ion{H}{1}}) and sSFR.
From the distribution of both parameters, LSBGs tend to be gas-rich and their star formation histories tend to be stable, rarely suffering from intensive interaction or major mergers.
\section*{Acknowledgments}
We thank the referee for constructive comments and suggestions. This project is supported by the National Key R$\&$D Program of China (No.2017YFA0402704), and the National Natural Science Foundation of China (grant No. 11733006,11403037, 11225316, 11173030, 11303038, 11403061 and U1531245), the China Ministry of Science and Technology under the State Key Development Program for Basic Research (2014CB845705), and the Strategic Priority Research Program, The Emergence of Cosmological Structures of the Chinese Academy of Sciences (grant No.XDB09000000).
We acknowledge the support of the staff of the Xinglong 2.16m telescope.
This work is partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences.
We also thank the ALFALFA team and the SDSS team for the released data.
The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, which is operated by Cornell University under a cooperative agreement with the National Science Foundation.
The authors acknowledge the work of the entire ALFALFA collaboration team in observing, flagging, and extracting the catalog of galaxies used in this work. The ALFALFA team at Cornell is supported by NSF grant AST-0607007 and AST-1107390 and by grants from the Brinson Foundation.
The authors are thankful for the useful SDSS database and the MPA/JHU catalogs.
Funding for the SDSS has been provided by the Alfred P. Sloan Foundation, the Participating Institutions,
the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England.
\clearpage
\startlongtable
\begin{deluxetable*}{rrrrrrrr}
\tablewidth{0pt}
\tablecolumns{8}
\tablecaption{The Observed Sample of LSBGs \label{table2}}
\tablehead {
\colhead{AGC} & \colhead{$\mu_{0}(B)$} & \colhead{R.A.} & \colhead{Decl.} & \colhead{$z$} & \colhead{Dist} & \colhead{Filter} & \colhead{Date} \\
\colhead{}& \colhead{$\rm magarcsec^{-2}$} & \colhead{J2000} & \colhead{J2000}& \colhead{}&\colhead{Mpc}&\colhead{} & \colhead{} \\
\colhead{(1)} &\colhead{(2)} &\colhead{(3)} &\colhead{(4)} &\colhead{(5)} &\colhead{(6)} &\colhead{(7)} &\colhead{(8)} }
\startdata
17 & 23.4405 & 00:03:43 & +15:13:05 & 0.0029 & 12.8 & Ha1 & 20140102 \\
273 & 22.6556 & 00:28:07 & +25:59:47 & 0.0187 & 78.9 & Ha4 & 20140819 \\
337 & 22.5591 & 00:34:25 & +24:36:13 & 0.0178 & 74.9 & Ha3 & 20141021 \\
1084 & 23.6324 & 01:31:22 & +23:57:14 & 0.0114 & 46.4 & Ha3 & 20160206 \\
1211 & 23.1475 & 01:43:55 & +13:48:22 & 0.0080 & 32.4 & Ha2 & 20141017 \\
1362 & 22.9079 & 01:53:51 & +14:45:50 & 0.0264 & 109.1 & Ha5 & 20160108 \\
1693 & 23.2348 & 02:12:03 & +14:06:14 & 0.0127 & 52.1 & Ha3 & 20131010 \\
2144 & 22.7754 & 02:39:30 & +29:15:35 & 0.0160 & 65.5 & Ha3 & 20141021 \\
12289 & 22.6539 & 22:59:41 & +24:04:29 & 0.0339 & 140.2 & Ha6 & 20140826 \\
12845 & 22.7974 & 23:55:42 & +31:53:59 & 0.0162 & 69.1 & Ha3 & 20141021 \\
100037 & 22.9236 & 00:06:03 & +27:20:54 & 0.0106 & 44.8 & Ha2 & 20160205 \\
100350 & 23.9767 & 00:37:44 & +24:12:28 & 0.0155 & 65.0 & Ha3 & 20131231 \\
101191 & 23.2293 & 00:23:39 & +15:04:03 & 0.0177 & 74.7 & Ha3 & 20131010 \\
101812 & 23.6638 & 00:08:49 & +14:02:01 & 0.0064 & 27.0 & Ha2 & 20131230 \\
101877 & 23.0745 & 00:02:15 & +14:29:16 & 0.0172 & 72.9 & Ha3 & 20131010 \\
101942 & 22.9001 & 00:12:29 & +15:33:22 & 0.0188 & 79.7 & Ha4 & 20131010 \\
101986 & 22.8072 & 00:20:49 & +15:03:13 & 0.0254 & 104.0 & Ha4 & 20140825 \\
102098 & 22.6041 & 00:39:04 & +14:36:01 & 0.0418 & 174.1 & Ha7 & 20160208 \\
102101 & 23.0727 & 00:39:25 & +14:27:23 & 0.0180 & 75.6 & Ha3 & 20140819 \\
102229 & 22.5805 & 00:38:24 & +25:26:10 & 0.0110 & 45.9 & Ha2 & 20141017 \\
102243 & 22.5023 & 00:05:05 & +23:58:11 & 0.0219 & 89.0 & Ha4 & 20140825 \\
102302 & 22.9648 & 00:12:48 & +14:31:31 & 0.0061 & 25.7 & Ha2 & 20131230 \\
102558 & 23.0152 & 00:07:05 & +27:01:28 & 0.0099 & 41.7 & Ha2 & 20160205 \\
102630 & 24.6769 & 00:13:13 & +25:36:14 & 0.0208 & 88.4 & Ha4 & 20140101 \\
102635 & 22.5726 & 00:16:12 & +24:50:57 & 0.0316 & 130.8 & Ha5 & 20151210 \\
102672 & 22.5757 & 00:46:24 & +25:04:14 & 0.0176 & 73.9 & Ha3 & 20140819 \\
102674 & 23.3246 & 00:49:14 & +25:17:35 & 0.0463 & 193.8 & Ha7 & 20141017 \\
102684 & 23.9342 & 00:22:07 & +25:29:09 & 0.0248 & 101.6 & Ha4 & 20141018 \\
102728 & 22.8543 & 00:00:21 & +31:01:19 & 0.0019 & 9.1 & Ha1 & 20140101 \\
102729 & 22.5906 & 00:00:32 & +30:52:09 & 0.0154 & 65.4 & Ha3 & 20131231 \\
102730 & 22.6859 & 00:00:39 & +31:56:18 & 0.0421 & 175.8 & Ha7 & 20141017 \\
102900 & 25.0466 & 00:04:39 & +29:35:56 & 0.0405 & 168.6 & Ha6 & 20140827 \\
102981 & 22.5401 & 00:02:56 & +28:16:38 & 0.0153 & 64.8 & Ha3 & 20140820 \\
110150 & 22.7571 & 01:14:45 & +27:08:06 & 0.0121 & 49.5 & Ha3 & 20140819 \\
110319 & 24.6744 & 01:25:17 & +14:08:55 & 0.0168 & 69.9 & Ha3 & 20141021 \\
110379 & 23.5195 & 01:30:15 & +14:40:39 & 0.0082 & 33.1 & Ha2 & 20141017 \\
110398 & 22.5212 & 01:31:46 & +14:09:20 & 0.0225 & 92.3 & Ha4 & 20140825 \\
112503 & 22.5698 & 01:38:00 & +14:58:58 & 0.0025 & 10.2 & Ha1 & 20141017 \\
112892 & 22.5765 & 01:20:16 & +14:52:29 & 0.0370 & 154.3 & Ha6 & 20160210 \\
113200 & 22.7375 & 01:56:19 & +14:55:29 & 0.0248 & 102.3 & Ha4 & 20160210 \\
113752 & 23.1777 & 01:18:06 & +27:11:17 & 0.0414 & 173.3 & Ha6 & 20140827 \\
113790 & 23.2295 & 01:13:02 & +27:38:13 & 0.0165 & 68.7 & Ha3 & 20140826 \\
113825 & 22.8080 & 01:43:27 & +24:46:47 & 0.0128 & 52.4 & Ha3 & 20141021 \\
113845 & 22.7484 & 01:17:22 & +24:08:16 & 0.0273 & 112.7 & Ha5 & 20160108 \\
113907 & 22.6913 & 01:13:56 & +30:09:25 & 0.0342 & 142.5 & Ha6 & 20160208 \\
113918 & 22.8345 & 01:22:59 & +32:10:44 & 0.0355 & 148.1 & Ha6 & 20160208 \\
113923 & 23.2240 & 01:26:13 & +32:08:11 & 0.0140 & 57.6 & Ha3 & 20140819 \\
114040 & 22.6830 & 01:18:27 & +29:06:55 & 0.0262 & 108.0 & Ha4 & 20141018 \\
121174 & 26.3089 & 02:38:16 & +29:54:23 & 0.0023 & 9.7 & Ha1 & 20141017 \\
122138 & 27.1803 & 02:33:16 & +28:10:44 & 0.0034 & 13.7 & Ha1 & 20131230 \\
122210 & 23.3301 & 02:31:33 & +26:47:49 & 0.0152 & 62.2 & Ha3 & 20140826 \\
122211 & 23.9519 & 02:31:37 & +26:32:32 & 0.0123 & 49.8 & Ha3 & 20141021 \\
122341 & 22.8250 & 02:11:29 & +14:28:04 & 0.0375 & 156.8 & Ha6 & 20160208 \\
122874 & 22.6132 & 02:26:15 & +24:26:02 & 0.0213 & 87.8 & Ha4 & 20151210 \\
122877 & 24.1814 & 02:27:32 & +24:52:12 & 0.0203 & 85.0 & Ha4 & 20160210 \\
122884 & 23.1394 & 02:32:53 & +25:09:11 & 0.0081 & 32.4 & Ha2 & 20141017 \\
122924 & 24.0678 & 02:34:43 & +24:29:12 & 0.0322 & 134.5 & Ha5 & 20160207 \\
123046 & 23.0294 & 02:41:12 & +31:29:29 & 0.0160 & 65.7 & Ha3 & 20141021 \\
123047 & 23.0369 & 02:41:48 & +31:27:26 & 0.0340 & 142.7 & Ha6 & 20141017 \\
123170 & 22.8858 & 02:44:03 & +29:17:17 & 0.0030 & 12.1 & Ha1 & 20141017 \\
123172 & 22.5982 & 02:47:23 & +29:10:32 & 0.0180 & 74.5 & Ha3 & 20151204 \\
320466 & 24.4058 & 22:57:22 & +27:58:50 & 0.0098 & 43.3 & Ha2 & 20131230 \\
321166 & 24.0888 & 22:55:49 & +14:45:15 & 0.0094 & 41.3 & Ha2 & 20131230 \\
321341 & 22.7167 & 22:52:16 & +24:06:09 & 0.0404 & 168.3 & Ha6 & 20140827 \\
321348 & 23.0037 & 22:47:44 & +23:59:59 & 0.0315 & 129.9 & Ha5 & 20140826 \\
321385 & 22.5978 & 22:59:15 & +24:42:34 & 0.0242 & 98.5 & Ha4 & 20141018 \\
321429 & 23.6289 & 22:41:27 & +31:31:48 & 0.0126 & 55.6 & Ha3 & 20131231 \\
321435 & 22.6576 & 22:47:44 & +32:11:18 & 0.0129 & 56.6 & Ha3 & 20140819 \\
321438 & 24.0373 & 22:50:17 & +30:15:08 & 0.0265 & 108.6 & Ha5 & 20140101 \\
321451 & 22.5598 & 22:48:03 & +29:49:48 & 0.0237 & 96.8 & Ha4 & 20140820 \\
321490 & 23.2068 & 22:47:45 & +28:54:26 & 0.0233 & 95.0 & Ha4 & 20140825 \\
321492 & 23.1417 & 22:53:23 & +29:00:52 & 0.0068 & 30.8 & Ha2 & 20131230 \\
331052 & 22.9473 & 23:59:45 & +27:15:14 & 0.0156 & 66.5 & Ha3 & 20140819 \\
332431 & 22.8894 & 23:07:46 & +14:22:34 & 0.0246 & 100.3 & Ha4 & 20140820 \\
332640 & 23.0183 & 23:24:43 & +13:48:36 & 0.0265 & 108.2 & Ha5 & 20140825 \\
332761 & 23.0965 & 23:31:11 & +15:01:58 & 0.0193 & 82.6 & Ha4 & 20140825 \\
332786 & 22.5704 & 23:36:09 & +15:44:38 & 0.0134 & 57.5 & Ha3 & 20131229 \\
332844 & 22.8276 & 23:51:24 & +14:14:02 & 0.0394 & 163.5 & Ha6 & 20141021 \\
332861 & 22.5843 & 23:53:04 & +14:35:07 & 0.0263 & 107.5 & Ha5 & 20140825 \\
332879 & 22.7499 & 23:56:44 & +15:27:36 & 0.0265 & 108.5 & Ha5 & 20131010 \\
332887 & 23.4223 & 23:58:44 & +16:05:26 & 0.0196 & 83.2 & Ha4 & 20141018 \\
332906 & 23.3617 & 23:05:09 & +25:52:28 & 0.0327 & 135.1 & Ha5 & 20140101 \\
333224 & 22.9186 & 23:59:24 & +26:32:53 & 0.0257 & 105.4 & Ha4 & 20140101 \\
333318 & 22.7712 & 23:10:39 & +24:08:40 & 0.0410 & 170.5 & Ha6 & 20140827 \\
333442 & 22.6876 & 23:58:33 & +31:07:47 & 0.0320 & 132.5 & Ha5 & 20160108 \\
748648 & 23.4768 & 21:44:47 & +15:24:26 & 0.0378 & 157.3 & Ha6 & 20140827 \\
748715 & 22.7025 & 22:39:38 & +13:57:58 & 0.0208 & 89.8 & Ha4 & 20140825 \\
748723 & 23.7324 & 22:52:04 & +15:12:20 & 0.0373 & 154.7 & Ha6 & 20140827 \\
748724 & 22.8417 & 22:55:07 & +14:48:04 & 0.0314 & 129.6 & Ha5 & 20131228 \\
748737 & 22.9517 & 23:03:03 & +14:10:13 & 0.0247 & 100.6 & Ha4 & 20141018 \\
748738 & 24.3426 & 23:04:52 & +14:01:05 & 0.0130 & 56.5 & Ha3 & 20131231 \\
748744 & 23.1245 & 23:09:16 & +14:21:58 & 0.0163 & 70.5 & Ha3 & 20140819 \\
748757 & 22.5773 & 23:19:04 & +16:01:20 & 0.0130 & 56.1 & Ha3 & 20140819 \\
748763 & 22.7211 & 23:23:32 & +13:50:16 & 0.0437 & 182.2 & Ha7 & 20141017 \\
748765 & 24.5539 & 23:23:43 & +14:25:40 & 0.0116 & 50.0 & Ha3 & 20140826 \\
748766 & 23.3312 & 23:23:48 & +14:56:50 & 0.0425 & 177.0 & Ha7 & 20140102 \\
748767 & 23.3730 & 23:24:11 & +15:53:10 & 0.0144 & 61.8 & Ha3 & 20140826 \\
748769 & 24.4649 & 23:26:14 & +15:04:41 & 0.0140 & 60.1 & Ha3 & 20140826 \\
748770 & 23.0814 & 23:27:29 & +14:48:48 & 0.0407 & 169.4 & Ha6 & 20140827 \\
748777 & 24.2502 & 00:03:11 & +15:02:40 & 0.0460 & 192.1 & Ha7 & 20141017 \\
748778 & 24.5455 & 00:06:34 & +15:30:39 & 9.0E-4 & 4.6 & Ha1 & 20140102 \\
748786 & 23.7866 & 00:23:06 & +15:08:21 & 0.0184 & 77.7 & Ha3 & 20140819 \\
748788 & 22.8876 & 00:24:10 & +15:59:38 & 0.0174 & 73.5 & Ha3 & 20141021 \\
748790 & 23.1116 & 00:25:07 & +14:22:06 & 0.0180 & 75.9 & Ha3 & 20131229 \\
748794 & 24.1651 & 00:39:28 & +14:37:07 & 0.0177 & 74.3 & Ha3 & 20131228 \\
748795 & 22.5902 & 00:40:56 & +14:14:08 & 0.0387 & 161.1 & Ha6 & 20160209 \\
748798 & 24.6805 & 00:49:01 & +14:03:05 & 0.0386 & 160.6 & Ha6 & 20160209 \\
748805 & 22.8382 & 01:04:36 & +15:16:21 & 0.0144 & 59.6 & Ha3 & 20141021 \\
748815 & 22.6130 & 01:27:03 & +14:39:38 & 0.0216 & 88.0 & Ha4 & 20140825 \\
748817 & 22.5455 & 01:28:33 & +15:14:54 & 0.0211 & 86.2 & Ha4 & 20141018 \\
748819 & 24.6432 & 01:37:25 & +14:39:37 & 0.0086 & 35.0 & Ha2 & 20160207 \\
\enddata
\end{deluxetable*}
\startlongtable
\begin{deluxetable*}{rrcccccrc}
\tabletypesize{\scriptsize}
\tablecolumns{9}
\tablewidth{-20pt}
\tablecaption{The Star Formation Properties of LSBGs \label{table3}}
\tabletypesize{\scriptsize}
\tablehead {
\colhead{AGC}& \colhead{$r_{25}$} & \colhead{Ellipse} & \colhead{$\rm log F(H\alpha)$} & \colhead{$\rm log \sigma( F(H\alpha))$}&\colhead{log$\rm (SFR)$}&\colhead{$\rm log \Sigma_{sfr}$}&\colhead{$\rm log M_{HI}$}&\colhead{$\rm log \Sigma_{HI}$} \\
\colhead{}& \colhead{Kpc}&\colhead{}& \colhead{$erg cm^{-2} s^{-1}$} & \colhead{$erg cm^{-2} s^{-1}$} & \colhead{$M_{\odot}yr^{-1}$} & \colhead{$M_{\odot}yr^{-1}Kpc^{-2}$}& \colhead{$M_{\odot}$}& \colhead{$M_{\odot}pc^{-2}$}\\
\colhead{(1)} &\colhead{(2)} &\colhead{(3)} &\colhead{(4)} &\colhead{(5)} &\colhead{(6)} &\colhead{(7)} &\colhead{(8)}&\colhead{(9)} }
\startdata
17 & 2.63 & 0.24 & \nodata & \nodata & \nodata & \nodata & 8.41 & 0.73 \\
273 & 9.40 & 0.32 & -14.64 & 0.08 & -1.87 & -4.15 & 9.59 & 0.85 \\
337 & 8.47 & 0.26 & -12.93 & 0.01 & -0.20 & -2.43 & 9.88 & 1.20 \\
1084 & 5.67 & 0.47 & -13.78 & 0.02 & -1.47 & -3.19 & 9.50 & 1.31 \\
1211 & 6.89 & 0.33 & -13.30 & 0.01 & -1.30 & -3.30 & 8.98 & 0.52 \\
1362 & 12.44 & 0.23 & -13.73 & 0.01 & -0.68 & -3.25 & 9.36 & 0.33 \\
1693 & 9.27 & 0.07 & -13.22 & 0.01 & -0.81 & -3.21 & 9.49 & 0.63 \\
2144 & 9.66 & 0.20 & -12.96 & 0.01 & -0.35 & -2.72 & 9.18 & 0.35 \\
12289 & 24.96 & 0.11 & -13.17 & 0.01 & 0.10 & -3.14 & 10.30 & 0.60 \\
12845 & 25.03 & 0.20 & -11.67 & 0.01 & 0.98 & -2.22 & 10.18 & 0.52 \\
100037 & 6.07 & 0.20 & -13.35 & 0.01 & -1.07 & -3.04 & 8.76 & 0.33 \\
100350 & 5.27 & 0.20 & -14.23 & 0.04 & -1.63 & -3.47 & 8.92 & 0.62 \\
101191 & 5.83 & 0.33 & -13.44 & 0.01 & -0.72 & -2.57 & 8.95 & 0.63 \\
101812 & 1.89 & 0.20 & -13.94 & 0.02 & -2.11 & -3.06 & 8.73 & 1.31 \\
101877 & 7.87 & 0.50 & -13.97 & 0.01 & -1.27 & -3.26 & 9.57 & 1.12 \\
101942 & 5.68 & 0.45 & -14.96 & 0.10 & -2.18 & -3.93 & 9.14 & 0.93 \\
101986 & 8.54 & 0.20 & -13.70 & 0.01 & -0.69 & -2.95 & 9.33 & 0.61 \\
102098 & 9.30 & 0.34 & \nodata & \nodata & \nodata & \nodata & 9.68 & 0.97 \\
102101 & 8.39 & 0.35 & -13.78 & 0.02 & -1.04 & -3.20 & 9.21 & 0.59 \\
102229 & 9.68 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.94 & 0.11 \\
102243 & 8.99 & 0.24 & -13.52 & 0.01 & -0.65 & -2.93 & 9.78 & 1.03 \\
102302 & 2.04 & 0.20 & -15.01 & 0.18 & -3.22 & -4.24 & 8.79 & 1.31 \\
102558 & 8.66 & 0.03 & \nodata & \nodata & \nodata & \nodata & 8.27 & -0.55 \\
102630 & 5.89 & 0.20 & -14.11 & 0.05 & -1.24 & -3.18 & 9.17 & 0.77 \\
102635 & 12.82 & 0.14 & -13.56 & 0.01 & -0.35 & -3.00 & 9.65 & 0.54 \\
102672 & 4.19 & 0.29 & -13.16 & 0.01 & -0.44 & -2.04 & 9.20 & 1.15 \\
102674 & 13.94 & 0.12 & -13.97 & 0.01 & -0.42 & -3.15 & 10.02 & 0.83 \\
102684 & 8.57 & 0.43 & -13.80 & 0.04 & -0.81 & -2.93 & 9.27 & 0.69 \\
102728 & 0.23 & 0.20 & -14.47 & 0.06 & -3.58 & -2.70 & 6.78 & 1.20 \\
102729 & 4.81 & 0.34 & -13.59 & 0.01 & -0.99 & -2.67 & 8.85 & 0.71 \\
102730 & 10.12 & 0.22 & -14.00 & 0.01 & -0.54 & -2.94 & 9.68 & 0.82 \\
102900 & 16.24 & 0.20 & -14.02 & 0.03 & -0.59 & -3.41 & 9.81 & 0.53 \\
102981 & 7.72 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.72 & 0.08 \\
110150 & 6.78 & 0.05 & -13.33 & 0.01 & -0.97 & -3.10 & 9.49 & 0.89 \\
110319 & 5.55 & 0.20 & -14.13 & 0.02 & -1.46 & -3.35 & 9.22 & 0.87 \\
110379 & 2.72 & 0.16 & -13.88 & 0.01 & -1.86 & -3.15 & 9.20 & 1.45 \\
110398 & 12.53 & 0.42 & -13.40 & 0.01 & -0.50 & -2.95 & 9.63 & 0.71 \\
112503 & 1.22 & 0.55 & -13.42 & 0.01 & -2.43 & -2.75 & 7.14 & 0.36 \\
112892 & 10.28 & 0.20 & -13.79 & 0.05 & -0.44 & -2.86 & 9.54 & 0.66 \\
113200 & 5.18 & 0.20 & -13.91 & 0.06 & -0.92 & -2.75 & 9.29 & 1.00 \\
113752 & 17.34 & 0.20 & \nodata & \nodata & \nodata & \nodata & 9.72 & 0.38 \\
113790 & 5.30 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.57 & 0.26 \\
113825 & 2.13 & 0.20 & -14.66 & 0.05 & -2.24 & -3.30 & 8.98 & 1.46 \\
113845 & 4.70 & 0.28 & -14.43 & 0.02 & -1.35 & -3.05 & 9.25 & 1.09 \\
113907 & 8.61 & 0.10 & -13.96 & 0.02 & -0.67 & -3.00 & 9.36 & 0.58 \\
113918 & 9.87 & 0.11 & \nodata & \nodata & \nodata & \nodata & 9.47 & 0.57 \\
113923 & 3.34 & 0.20 & -14.12 & 0.01 & -1.63 & -3.08 & 9.05 & 1.14 \\
114040 & 8.07 & 0.42 & -14.83 & 0.34 & -1.79 & -3.86 & 9.40 & 0.86 \\
121174 & 0.88 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.18 & 1.43 \\
122138 & 1.25 & 0.20 & -14.25 & 0.04 & -3.00 & -3.59 & 8.08 & 1.02 \\
122210 & 6.27 & 0.23 & -13.46 & 0.01 & -0.90 & -2.88 & 9.29 & 0.85 \\
122211 & 3.97 & 0.32 & -13.96 & 0.02 & -1.59 & -3.11 & 8.66 & 0.67 \\
122341 & 14.42 & 0.45 & -13.95 & 0.03 & -0.59 & -3.14 & 9.90 & 0.88 \\
122874 & 5.05 & 0.37 & -14.29 & 0.02 & -1.43 & -3.13 & 9.12 & 0.96 \\
122877 & 8.55 & 0.80 & \nodata & \nodata & \nodata & \nodata & 9.18 & 1.06 \\
122884 & 2.93 & 0.33 & -14.03 & 0.02 & -2.03 & -3.29 & 9.17 & 1.45 \\
122924 & 4.05 & 0.19 & \nodata & \nodata & \nodata & \nodata & 9.50 & 1.42 \\
123046 & 8.51 & 0.07 & -13.30 & 0.03 & -0.69 & -3.01 & 8.89 & 0.10 \\
123047 & 6.42 & 0.45 & -13.98 & 0.02 & -0.69 & -2.55 & 9.47 & 1.16 \\
123170 & 1.29 & 0.20 & \nodata & \nodata & \nodata & \nodata & 7.68 & 0.60 \\
123172 & 5.77 & 0.30 & -13.96 & 0.02 & -1.24 & -3.10 & 9.25 & 0.92 \\
320466 & 5.71 & 0.35 & \nodata & \nodata & \nodata & \nodata & 9.13 & 0.85 \\
321166 & 6.48 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.62 & 0.14 \\
321341 & 11.90 & 0.26 & -14.13 & 0.03 & -0.70 & -3.22 & 9.66 & 0.68 \\
321348 & 9.64 & 0.31 & -13.73 & 0.01 & -0.53 & -2.84 & 9.52 & 0.75 \\
321385 & 3.95 & 0.35 & -14.19 & 0.02 & -1.23 & -2.73 & 9.32 & 1.36 \\
321429 & 1.74 & 0.56 & -15.53 & 0.20 & -3.06 & -3.69 & 8.55 & 1.47 \\
321435 & 4.74 & 0.61 & -13.29 & 0.01 & -0.81 & -2.25 & 8.88 & 0.98 \\
321438 & 4.32 & 0.39 & -15.05 & 0.12 & -2.01 & -3.56 & 9.12 & 1.11 \\
321451 & 6.90 & 0.20 & -13.44 & 0.01 & -0.50 & -2.57 & 9.31 & 0.77 \\
321490 & 6.79 & 0.45 & -14.02 & 0.01 & -1.09 & -2.99 & 9.37 & 1.01 \\
321492 & 5.00 & 0.19 & \nodata & \nodata & \nodata & \nodata & 7.73 & -0.53 \\
331052 & 4.74 & 0.09 & -13.65 & 0.01 & -1.03 & -2.83 & 8.78 & 0.51 \\
332431 & 9.61 & 0.29 & -13.50 & 0.01 & -0.53 & -2.84 & 9.64 & 0.86 \\
332640 & 9.60 & 0.29 & -14.35 & 0.03 & -1.30 & -3.61 & 9.62 & 0.85 \\
332761 & 3.88 & 0.03 & -14.41 & 0.02 & -1.60 & -3.27 & 8.57 & 0.45 \\
332786 & 12.10 & 0.51 & \nodata & \nodata & \nodata & \nodata & 8.57 & -0.24 \\
332844 & 10.54 & 0.26 & -13.98 & 0.02 & -0.57 & -2.99 & 9.64 & 0.77 \\
332861 & 10.48 & 0.54 & -14.64 & 0.04 & -1.60 & -3.80 & 9.40 & 0.74 \\
332879 & 6.73 & 0.20 & -14.36 & 0.05 & -1.32 & -3.37 & 9.31 & 0.79 \\
332887 & 4.65 & 0.17 & -14.21 & 0.01 & -1.40 & -3.15 & 9.19 & 0.98 \\
332906 & 7.65 & 0.08 & -14.20 & 0.04 & -0.96 & -3.19 & 9.58 & 0.89 \\
333224 & 9.57 & 0.58 & -14.39 & 0.06 & -1.37 & -3.46 & 9.50 & 0.96 \\
333318 & 17.89 & 0.18 & -14.02 & 0.06 & -0.58 & -3.49 & 9.95 & 0.57 \\
333442 & 13.26 & 0.32 & -14.04 & 0.02 & -0.82 & -3.39 & 9.65 & 0.61 \\
748648 & 5.42 & 0.35 & -14.54 & 0.03 & -1.17 & -2.95 & 9.69 & 1.45 \\
748715 & 3.03 & 0.38 & -15.13 & 0.07 & -2.25 & -3.50 & 9.16 & 1.45 \\
748723 & 17.70 & 0.70 & -13.84 & 0.03 & -0.48 & -2.95 & 9.61 & 0.68 \\
748724 & 9.63 & 0.20 & -14.10 & 0.03 & -0.90 & -3.27 & 9.60 & 0.77 \\
748737 & 5.83 & 0.20 & -14.85 & 0.11 & -1.87 & -3.80 & 9.54 & 1.15 \\
748738 & 2.27 & 0.26 & -14.76 & 0.04 & -2.28 & -3.36 & 8.61 & 1.07 \\
748744 & 3.44 & 0.46 & -14.52 & 0.02 & -1.85 & -3.15 & 9.02 & 1.26 \\
748757 & 5.14 & 0.58 & -13.80 & 0.01 & -1.32 & -2.86 & 9.32 & 1.32 \\
748763 & 5.44 & 0.16 & -14.73 & 0.03 & -1.24 & -3.13 & 9.90 & 1.55 \\
748765 & 2.24 & 0.20 & -12.72 & 0.01 & -0.35 & -1.45 & 8.63 & 1.07 \\
748766 & 10.21 & 0.33 & -14.40 & 0.08 & -0.93 & -3.27 & 9.89 & 1.09 \\
748767 & 3.56 & 0.08 & -15.30 & 0.14 & -2.74 & -4.31 & 8.84 & 0.81 \\
748769 & 3.47 & 0.20 & -14.51 & 0.04 & -1.97 & -3.45 & 8.76 & 0.82 \\
748770 & 11.99 & 0.35 & -14.31 & 0.07 & -0.88 & -3.34 & 9.71 & 0.78 \\
748777 & 13.85 & 0.41 & -14.12 & 0.02 & -0.58 & -3.13 & 9.88 & 0.87 \\
748778 & 0.15 & 0.20 & -13.99 & 0.04 & -3.69 & -2.46 & 6.36 & 1.13 \\
748786 & 7.91 & 0.78 & -13.87 & 0.01 & -1.11 & -2.75 & 9.15 & 1.05 \\
748788 & 2.87 & 0.05 & -13.66 & 0.01 & -0.95 & -2.34 & 8.85 & 1.00 \\
748790 & 4.78 & 0.70 & \nodata & \nodata & \nodata & \nodata & 8.82 & 1.03 \\
748794 & 4.70 & 0.20 & \nodata & \nodata & \nodata & \nodata & 9.13 & 0.93 \\
748795 & 10.04 & 0.48 & -14.18 & 0.04 & -0.79 & -3.01 & 9.64 & 0.96 \\
748798 & 6.55 & 0.20 & -15.40 & 0.34 & -2.01 & -4.05 & 9.55 & 1.06 \\
748805 & 8.71 & 0.17 & -14.77 & 0.08 & -2.25 & -4.54 & 8.74 & -0.02 \\
748815 & 4.13 & 0.39 & -14.03 & 0.01 & -1.17 & -2.68 & 9.23 & 1.25 \\
748817 & 4.82 & 0.29 & -14.18 & 0.02 & -1.33 & -3.04 & 9.17 & 0.99 \\
748819 & 6.39 & 0.20 & \nodata & \nodata & \nodata & \nodata & 8.78 & 0.31 \\
\enddata
\end{deluxetable*}
\bibliographystyle{aasjournal}
|
{
"timestamp": "2018-03-08T02:08:28",
"yymm": "1803",
"arxiv_id": "1803.02650",
"language": "en",
"url": "https://arxiv.org/abs/1803.02650"
}
|
\section{Introduction}
One of the primary contributors to global mobile traffic growth is the increasing number of wireless devices that are accessing mobile networks. Each year, several million new devices with different form factors and increased capacities are being introduced. Over half a billion (526 million) mobile devices and connections were added in 2013 and the overall mobile data traffic is expected to grow to 15.9 exabytes per month by 2018, nearly an 11-fold increase over 2013~\cite{Cis14}. In addition to the large number of devices that need to access the network, emerging new services such as Ultra-High-Definition (UHD) multimedia streaming demand significantly increased cell capacity and end-user data rate \cite{whitepaper}. Such unprecedented growth in the number of connected devices and mobile data places new requirements \cite{Andrews14} for the fifth generation (5G) wireless access systems that are set to be commercially available around 2020.
In order to provide ubiquitous and high data rate connectivity, advanced small cells are envisaged for 5G \cite{whitepaper}. However, deployment of small cells with a higher density or smaller cell size in 5G causes a dilemma. On the one hand, the smaller the cells, the smaller the path loss, and therefore higher data rate is expected. On the other hand, such an advantage of increased data rate diminishes as having smaller cells introduces more severe inter-cell interference (ICI), which becomes one of the critical problems to solve in 5G.
Frequency quadrature amplitude modulation (FQAM), which can be viewed as a combination of frequency shift keying (FSK) and quadrature amplitude modulation (QAM), can significantly improve transmission rates for cell-edge users \cite{Hong14, Hochwald03}. The mechanism of FQAM is that only one frequency component is active during each transmission period, over which a QAM symbol is transmitted. Information is conveyed by both the QAM symbol and the index of the active frequency component. The advantage of FQAM at the cell edge comes from the fact that the statistics of the aggregated ICI, created by transmitting FQAM symbols at the interfering BSs, is non-Gaussian, especially at the cell edge. Since it has been proved that the worst-case additive noise in wireless networks, with respect to the channel capacity, has a Gaussian distribution \cite{Seol09}, one can expect that the channel capacity can be increased by using FQAM. Variants of FQAM such as the generalized orthogonal frequency division multiplexing (OFDM) index modulation (IM) \cite{Fan15}, which activates multiple frequency components in each transmission period, and the generalized space and frequency IM \cite{Datta15}, which combines FQAM and spatial modulation (SM) \cite{Mesleh08}, have been reported in the literature.
Despite the significant advantages of FQAM and its potential for ICI reduction in 5G cellular networks, FQAM has not drawn much attention in 5G research. In this paper, based on \cite{Hong14}, we present and highlight the advantages of FQAM for 5G, comparing it with QAM. In particular, the detection of FQAM is studied, the noise plus ICI of FQAM under dense BS deployment is analyzed, and the cumulative distribution function (CDF) of the signal to interference plus noise ratio (SINR) of multi-cell FQAM is derived using a stochastic geometry approach. The advantage of FQAM in terms of performance and SINR distribution is demonstrated and verified against simulation.
The remainder of this paper is organized as follows. Section~\ref{System_model} gives a general description of the FQAM system. Section~\ref{sec_detection_algorithm} presents the detection, especially the computation of the log-likelihood ratio (LLR) of Turbo-coded FQAM. In Section \ref{sec_ipn}, the noise plus ICI of FQAM is analyzed and CDFs of the SINR of FQAM are derived based on the stochastic geometry approach, and are compared with those of QAM. Simulation results and comparisons are shown in Section \ref{sec_results_and_analysis} and conclusions are drawn in Section~\ref{conclusion_section}.
\section{System Model} \label{System_model}
We consider a homogeneous, synchronous, downlink cellular network with $N_B$ base stations (BSs). At each base station, a sequence of bits is interleaved, turbo-coded, and then modulated to FQAM symbols, which are used to transmit data over $N_s$ subcarriers. Assume $(M_\mathrm{F}, Q)$-FQAM symbols, formed by a combination of $M_\mathrm{F}$-ary FSK modulation and $Q$-ary QAM modulation, are used for transmission. It is known from \cite{Hong14} that a total of $(\log_2M_\mathrm{F}+\log_2Q)$ bits are mapped to one FQAM symbol, with the first $\log_2M_\mathrm{F}$ bits indicating the frequency index and the last $\log_2Q$ bits indicating the QAM index using Gray mapping.
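As a concrete illustration of this mapping, a minimal Python sketch is given below. The Gray-labelled 4-QAM alphabet, the natural-binary frequency index, and the normalization are our own illustrative assumptions and are not taken from \cite{Hong14}.
\begin{verbatim}
# Minimal sketch of (M_F, Q)-FQAM bit-to-symbol mapping (illustrative only).
# Assumptions: M_F = Q = 4, Gray-labelled 4-QAM alphabet, natural-binary tone index.
import numpy as np

M_F, Q = 4, 4
# Gray-labelled 4-QAM (QPSK) alphabet with unit average energy (assumed).
qam = {0: 1 + 1j, 1: -1 + 1j, 3: -1 - 1j, 2: 1 - 1j}
qam = {b: s / np.sqrt(2) for b, s in qam.items()}

def fqam_modulate(bits):
    """Map log2(M_F) + log2(Q) bits to a length-M_F FQAM symbol."""
    nf, nq = int(np.log2(M_F)), int(np.log2(Q))
    assert len(bits) == nf + nq
    freq_idx = int("".join(map(str, bits[:nf])), 2)  # first bits -> active tone
    qam_idx = int("".join(map(str, bits[nf:])), 2)   # last bits  -> QAM index
    x = np.zeros(M_F, dtype=complex)
    x[freq_idx] = qam[qam_idx]                       # only one tone is active
    return x

print(fqam_modulate([1, 0, 1, 1]))  # tone 2 active, carrying QAM symbol index 3
\end{verbatim}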
An example of a (4,4)-FQAM signal constellation is given in Fig.~\ref{fig1}.
\begin{figure} [!htb]
\centering
\includegraphics[width=7cm]{constellation}
\caption{Example of a $(4,4)$-FQAM signal constellation \cite{Hong14}.}
\label{fig1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=4in]{FQAM_txrx.eps}
\caption{Diagram of the transceiver model of FQAM.}
\label{fig_FQAM_txrx}
\end{figure}
After FQAM modulation, as in the QAM system, the length-$N_s$ block of FQAM symbols is processed with an inverse fast Fourier transform (IFFT), then a cyclic prefix (CP) is added at the beginning of the IFFT output, yielding one OFDM symbol to be transmitted by each BS. These OFDM symbols then go through the respective fading channels from each base station to the user equipment (UE), where the channel from the $a$th ($a \in \{1,\cdots, N_B\}$) base station to the UE is given by a length-$L_a$ vector $\mathbf{h}^a = [h^a(0), \cdots, h^a(L_a-1)]^T$. At the receiver, the CP is removed, and a fast Fourier transform (FFT) is performed. It is known that the insertion and removal of the CP, together with the IFFT and FFT, forms an equivalent one-tap frequency-domain channel on each subcarrier. The received signal is given by \cite{Hong14}
\begin{equation}\label{received}
\Omega_{k,l} = H_{k,l}^As_k^A\delta_{m_k^A, l} + \tilde{Z}_{k,l}
\end{equation}
where $k$ ($k=0,\cdots, N_s-1$) is the frequency component index, $l$ ($l=0,\cdots, M_\mathrm{F}-1$) is the frequency index for the FQAM symbol at the $k$th frequency component, and $s_k^A$ represents the symbol transmitted on the $k$th frequency component at the desired BS, i.e., the $A$th base station, which takes the form of a QAM symbol with $s_k^A\in\mathcal{S}$, where $\mathcal{S}$ denotes the set of signals on the QAM constellation. In addition, $H_{k,l}^A$ is the frequency-domain channel coefficient at the $k$th frequency component between the $A$th base station and the UE, given by taking the FFT of the time-domain channel $\mathbf{h}^A$. Furthermore, $m_k^A\in \{0,\cdots, M_\mathrm{F}-1\}$ is the frequency index of the FQAM symbol at the $k$th frequency component, and $\delta_{\cdot,\cdot}$ is the Kronecker delta. Finally, $\tilde{Z}_{k,l}$ is the corresponding noise plus ICI term, which includes the received signals from all other base stations $a \in \{1, \cdots, N_B\}$, $a\neq A$.
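The one-tap frequency-domain channel property invoked above is easy to verify numerically; the following Python sketch is our own illustration and is not part of the system model of \cite{Hong14}.
\begin{verbatim}
# Numerical check (illustrative) that IFFT + CP + linear channel + CP removal + FFT
# reduces to a one-tap channel per subcarrier, Omega_k = H_k * X_k (noise omitted).
import numpy as np

rng = np.random.default_rng(0)
N_s, L, cp = 64, 4, 8                      # subcarriers, channel taps, CP length (assumed)
X = rng.choice(np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2), N_s)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

x = np.fft.ifft(X)                         # time-domain symbol
tx = np.concatenate([x[-cp:], x])          # prepend cyclic prefix
rx = np.convolve(tx, h)[:cp + N_s]         # linear (multipath) channel
Omega = np.fft.fft(rx[cp:cp + N_s])        # remove CP, back to the frequency domain

H = np.fft.fft(h, N_s)                     # one-tap channel gain per subcarrier
print(np.max(np.abs(Omega - H * X)))       # ~1e-15: the channel is indeed one-tap
\end{verbatim}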
In order to detect the transmitted bits, a soft-decoding metric in the form of an LLR needs to be computed as input to the turbo decoder. To obtain the soft-decoding metric, one could use the well-known maximum likelihood (ML) detector. However, such a detector assumes knowledge of the modulated symbols of the interfering BSs, which is practically unavailable at the receiver \cite{Hong14}. As a result, a receiver based on a complex generalized Gaussian (CGG) distribution assumption for the noise plus ICI term is proposed in \cite{Hong14}, which we detail in Section \ref{sec_detection_algorithm}.
A block diagram of the FQAM transceiver is shown in Fig. \ref{fig_FQAM_txrx}. The transceiver structure of QAM is exactly the same as that of FQAM, except that in QAM all frequency components are active.
\section{Detection of FQAM} \label{sec_detection_algorithm}
The use of the CGG detector for FQAM has been presented in the literature \cite{Hong14}. In this section, we detail the FQAM CGG detector for completeness.
It is known from \cite{Hong14} that, assuming knowledge of the modulated symbols of the interfering BSs, one can use the conventional ML detector, treating the distribution of the noise plus ICI as Gaussian. Such an assumption is, however, highly impractical. A sub-optimal detector was therefore proposed in \cite{Hong14}, assuming a CGG distribution for the noise plus ICI term. Such a suboptimal detector, namely the CGG detector, requires estimation of the shape and scale parameters of the distribution of the noise plus ICI term, denoted as $\alpha$ and $\beta$, respectively.
\subsection{LLR computation for a CGG Detector}
The LLR of a bit $b_k^\upsilon$ ($\upsilon = 0, \cdots, \log_2M_\mathrm{F}+\log_2Q-1$) for the CGG detector is given by \cite{Hong14}
\begin{align}\label{llr}
\mbox{LLR}_k = \mbox{ln}\frac{\sum_{(\tilde{m},\tilde{q})\in\tilde{B}_0^\upsilon}f_\mathbf{U}\left(\mathbf\Lambda_k^{(\tilde{m}, \tilde{q})}|\alpha, \beta\right)}{\sum_{(\tilde{m},\tilde{q})\in\tilde{B}_1^\upsilon}f_\mathbf{U}\left(\mathbf\Lambda_k^{(\tilde{m}, \tilde{q})}|\alpha, \beta\right)}
\end{align}
where $\tilde{B}_i^\upsilon$ denotes the set of all possible $(\tilde{m}\in\{0, \cdots, M_\mathrm{F}-1\}, \tilde{q}\in \{0, \cdots, Q-1\})$ whose $\upsilon$th bit equals $i\in \{0,1\}$, and $\mathbf\Lambda_k^{\tilde{m}, \tilde{q}}$ is a length-$M_\mathrm{F}$ vector, with its $l$th ($l=0,\cdots, M_\mathrm{F}-1$) entry given by
\begin{equation}\label{lambdak}
\Lambda_k^{\tilde{m}, \tilde{q}}(l) = \Omega_{k,l}-H_{k,l}^As_{\tilde{q}}\delta_{\tilde{m},l}.
\end{equation}
In addition, $f_\mathbf{U}\left(\cdot\right)$ is the joint probability density function (pdf) of $\mathbf{U}=[U_0, U_1, \cdots, U_{M_\mathrm{F}-1}]$, where $U_l = \tilde{Z}_{k,l}$ ($l=0,\cdots, M_\mathrm{F}-1$) are the independent and identically distributed (i.i.d.) noise plus ICI terms on the $k$th frequency component. The pdf is approximated by a CGG distribution, given by \cite{Hong14}
\begin{equation}\label{fu}
f_\mathbf{U}(\mathbf{u}|\alpha, \beta) = \left(\frac{\alpha}{2\pi\beta^2\Gamma(2/\alpha)}\right)^{M_\mathrm{F}}\prod_{l=0}^{M_\mathrm{F}-1}\exp\left(-\left(\frac{|u_l|}{\beta}\right)^\alpha\right)
\end{equation}
where $\alpha$, $\beta$ are the shape and scale parameters of the distribution. The estimation of $\alpha$ and $\beta$ is detailed in \cite[(21) and (22)]{Seol09}, which we reproduce below for completeness. The parameters $\alpha$ and $\beta$ are estimated as
\begin{equation}\label{alpha}
\hat{\alpha} = \frac{\eta}{\ln\left(\frac{(\sum|\hat{Z}_{k,l}|)^2}{N_s\sum|\hat{Z}_{k,l}|^2}-\xi\right)}+\ln\left(3/2\sqrt{2}\right)
\end{equation}
and
\begin{equation}\label{beta}
\hat{\beta} = \frac{\Gamma(2/\hat{\alpha})}{N_s\Gamma(3/\hat{\alpha})}\sum|\hat{Z}_{k,l}|
\end{equation}
where $\eta$ and $\xi$ are constants defined in \cite{Seol09},
the summations over $\hat{Z}_{k,l}$ are taken over all $k\in\{0,\cdots, N_s-1\}$ and $l\in\{0, \cdots, M_\mathrm{F}-1\}$, and $\hat{Z}_{k,l}$ is the estimated noise plus ICI term, given by \cite{Hong14}
\begin{equation}\label{estimation}
\hat{Z}_{k,l} = \Omega_{k,l} - H^A_{k,l}\hat{s}^A_k\delta_{\hat{m}_k^A, l}
\end{equation}
and
\begin{equation}\label{minimization}
(\hat{m}_k^A, \hat{s}^A_k) = \arg\min_{{m_k^A\in \{0,\cdots, M_\mathrm{F}-1\}}\atop {s_k\in \{s_0, \cdots, s_{Q-1}\}}}\sum_{l=0}^{M_\mathrm{F}-1}\left|\Omega_{k,l}-H^A_{k,l}s^A_k\delta_{m_k^A,l}\right|^2.
\end{equation}
The algorithm for computing the LLR of the CGG detector is given in Table \ref{LLRCGG}. After obtaining the estimated $\alpha$ and $\beta$, substituting (\ref{lambdak}) and (\ref{fu}) into (\ref{llr}) yields the soft metric required by the subsequent turbo decoder, and the transmitted bits are detected after deinterleaving.
\begin{table}
\renewcommand{\arraystretch} {1.3}
\caption{Computing LLR of the CGG detector}
\label{LLRCGG}
\centering
\begin{tabular}{|l|}
\hline
1: Solve the optimization problem given in (\ref{minimization})\\
2: Generate estimation of the noise plus ICI term according to (\ref{estimation}) \\
3: Estimate $\alpha$ and $\beta$ using (\ref{alpha}) and (\ref{beta}) \\
4: Obtain the pdf of noise plus ICI according to (\ref{fu}) \\
5: Compute LLR using (\ref{llr}) \\
\hline
\end{tabular}
\end{table}
We can further simplify the computation of the LLR in step 5 by applying the max-log approximation, which gives
\begin{align}
\mbox{LLR}_k&=\ln\frac{\sum_{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_0^\upsilon}\prod_{l=0}^{M_\mathrm{F}-1}\exp\left(-\left(\frac{|\Lambda_k^{\tilde{m}, \tilde{q}}(l)|}{\beta}\right)^\alpha\right)}{\sum_{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_1^\upsilon}\prod_{l=0}^{M_\mathrm{F}-1}\exp\left(-\left(\frac{|\Lambda_k^{\tilde{m}, \tilde{q}}(l)|}{\beta}\right)^\alpha\right)}\nonumber\\
&= \ln\frac{\sum_{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_0^\upsilon}\exp\left(-\sum_{l=0}^{M_\mathrm{F}-1}\left(\frac{|\Lambda_k^{\tilde{m}, \tilde{q}}(l)|}{\beta}\right)^\alpha\right)}{\sum_{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_1^\upsilon}\exp\left(-\sum_{l=0}^{M_\mathrm{F}-1}\left(\frac{|\Lambda_k^{\tilde{m}, \tilde{q}}(l)|}{\beta}\right)^\alpha\right)}\nonumber \\
&\approx-\frac{1}{\beta^\alpha}\left(\min\limits_{{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_0^\upsilon}}\left\lbrace \sum_{l=0}^{M_\mathrm{F}-1}\bigg|\Lambda_k^{\tilde{m}, \tilde{q}}(l) \bigg|^\alpha \right\rbrace \right. \nonumber\\
&\qquad\left. - \min\limits_{{(\tilde{m},\tilde{q})\in \tilde{\mathcal{B}}_1^\upsilon}}\left\lbrace \sum_{l=0}^{M_\mathrm{F}-1}\bigg|\Lambda_k^{\tilde{m}, \tilde{q}}(l) \bigg|^\alpha \right\rbrace\right)
\end{align}
where the last approximation is the well-known max-log approximation \cite{Hochwald03}. It can be seen from this expression that when $\alpha=2$, a Gaussian distribution is used to model the noise plus ICI and the LLR reduces to that of the conventional ML detector for QAM.
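For concreteness, a small Python sketch of the max-log LLR computation above is given below. It takes the CGG parameters $\alpha$ and $\beta$ as already estimated, and the bit-to-index convention (first bits select the tone, last bits the QAM symbol) is an illustrative assumption consistent with Section~\ref{System_model}.
\begin{verbatim}
# Illustrative max-log LLR for one (M_F, Q)-FQAM symbol under the CGG model.
# Assumes alpha, beta already estimated; first bits = tone index, last bits = QAM index.
import numpy as np

def maxlog_llr(Omega_k, H_k, qam, alpha, beta):
    """Omega_k, H_k: length-M_F received samples / channel gains for one component k.
    qam: dict {QAM index: symbol}. Returns one LLR per bit of the FQAM symbol."""
    M_F, Q = len(Omega_k), len(qam)
    nq = int(np.log2(Q))
    nbits = int(np.log2(M_F)) + nq
    metric = np.empty((M_F, Q))                # sum_l |Lambda_k(l)|^alpha per hypothesis
    for m in range(M_F):
        for q in range(Q):
            Lam = np.array(Omega_k, dtype=complex)
            Lam[m] -= H_k[m] * qam[q]          # subtract the hypothesised signal
            metric[m, q] = np.sum(np.abs(Lam) ** alpha)
    bit = lambda m, q, v: (((m << nq) | q) >> (nbits - 1 - v)) & 1
    llrs = []
    for v in range(nbits):
        m0 = min(metric[m, q] for m in range(M_F) for q in range(Q) if bit(m, q, v) == 0)
        m1 = min(metric[m, q] for m in range(M_F) for q in range(Q) if bit(m, q, v) == 1)
        llrs.append(-(m0 - m1) / beta ** alpha)  # max-log approximation
    return np.array(llrs)
\end{verbatim}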
\section{Noise plus ICI analysis}\label{sec_ipn}
\subsection{FQAM vs QAM}
The superiority of FQAM over QAM comes from the non-Gaussian distribution of the noise plus ICI. It has been shown that the noise plus ICI deviates from the Gaussian distribution in the macro-cell environment \cite{Hong14}. However, highly dense small cells are expected to dominate 5G systems. Thus, here we analyse the noise plus ICI for FQAM in a high-density small-cell scenario. Fig. \ref{fig_ICI_Noise_Hist} shows the histogram of the real part of the noise plus ICI at the cell edge for different numbers of cells. An inter-site distance of 50 m and a transmission power of 1 W are assumed. Apart from the three-BS case, the total number of BSs is determined by the number of interference rings. The figure shows that in all cases the noise plus ICI distribution has a much heavier tail than the Gaussian. The peak at the centre of the distribution is more prominent for small numbers of BSs (three and seven). The noise plus ICI distributions for larger numbers of base stations are almost identical. This indicates that even in a highly dense deployment, the noise plus ICI distribution for FQAM deviates from the Gaussian distribution. Hence, FQAM is expected to maintain its advantage over QAM in dense deployment scenarios.
To gain more insight into the performance difference between FQAM and QAM, in this section we derive and compare the CDFs of the SINR of QAM and FQAM. In particular, we resort to the stochastic geometry approach, where BSs are assumed to be randomly located following a Poisson point process (PPP) with density $\lambda$. Such an assumption has been widely considered in the literature as a valid model that yields analysis sufficiently close to that of practical models \cite{Bai14}\cite{Bai15}. We derive the CDF of the SINR for FQAM in the following.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{ICI_Noise_Hist.eps}
\caption{Histograms of the normalized real values of noise plus ICI samples in dense small cells network at the cell-edge region.}
\label{fig_ICI_Noise_Hist}
\end{figure}
\subsection{SINR CDF analysis using stochastic geometry}
Since only one frequency component is active in FQAM, the effective BS density is $\lambda/N_\mathrm{F}$. Let $\tilde{\rho}$ denote the SINR, $d$ denote the distance between the UE and the serving BS, $\sigma^2_N$ denote the noise power on each frequency component, and $\alpha$ denote the path-loss exponent (not to be confused with the CGG shape parameter of Section~\ref{sec_detection_algorithm}). In addition, let $\tilde{A} \in \mathcal{B}\setminus \{A\}$ index the interfering BSs, where $\mathcal{B}$ is the set of all BSs. The normalized interference power $\mathcal{I}$ of the ICI can be computed as
\begin{align}
\mathcal{I}=\sum_{\tilde{A}}|H^{\tilde{A}}|^2|d^{\tilde{A}}|^{-\alpha}
\end{align}
where the summation over $\tilde{A}$ is performed with respect to all interfering BSs. The channel $H^A$ between the target user and the serving BS follows a Rayleigh distribution; hence, $|H^A|^2$ is an exponentially distributed random variable. The noise power per frequency component $\sigma^2_N$ can be computed from the noise power spectral density $N_0$ (in dBm/Hz) and the bandwidth of each frequency component $W_\mathrm{sc}$ (in Hz) as $\sigma^2_N=10^{N_0/10}W_\mathrm{sc}\cdot 10^{-3}$. Let $P_T$ denote the transmit power of the BSs. Following the approach in \cite{Baccelli06}, the CDF $G^{\mathrm{FQAM}}_{\tilde{\rho}}(\tilde{\rho})$ of the SINR can then be computed as
\begin{align}
&G^{\mathrm{FQAM}}_\mathrm{\tilde{\rho}}(\tilde{\rho})=1-\Pr\left\lbrace \frac{P_T|H^A|^2d^{-\alpha}}{\sigma^2_N+\mathcal{I}}\geqslant \tilde{\rho} \right\rbrace\nonumber\\
&=1-\exp\left(-d^\alpha\tilde{\rho}\sigma^2_N \right)\mathrm{E}\left[\exp\left(-d^\alpha \tilde{\rho} \mathcal{I} \right)\right]\nonumber\\
&=1-\exp\left(-d^\alpha P_T^{-1}\tilde{\rho}\sigma^2_N \right)\exp\left(-\frac{\lambda}{N_{\mathrm{F}}}d^2 \tilde{\rho}^{\frac{2}{\alpha}}\frac{2\pi^2}{\alpha\sin(2\pi/\alpha)} \right).
\label{equ_G_rho_FQAM}
\end{align}
For QAM, on the other hand, since all frequency components are active during transmission, the total noise power is the sum over all frequency components. As a result, the CDF $G^{\mathrm{QAM}}_{\tilde{\rho}}(\tilde{\rho})$ of the SINR can be computed as \cite{Baccelli06}
\begin{align}
&G^{\mathrm{QAM}}_\mathrm{\tilde{\rho}}(\tilde{\rho})\nonumber\\
&=1-\exp\left(-d^\alpha\tilde{\rho}N_{\mathrm{F}}\sigma^2_N \right)\exp\left(-\lambda d^2\tilde{\rho}^{\frac{2}{\alpha}}\frac{2\pi^2}{\alpha\sin(2\pi/\alpha)} \right).
\label{equ_G_rho_OFDM}
\end{align}
By comparing (\ref{equ_G_rho_FQAM}) and (\ref{equ_G_rho_OFDM}), it can be seen that QAM suffers from a larger noise power due to its larger active bandwidth. Additionally, QAM has a larger ICI power than FQAM because all frequency components are active.
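The two closed-form CDFs are straightforward to evaluate numerically. The Python sketch below is our own illustration; it follows the expressions above as written and uses the parameter values quoted for the SINR comparison in Section~\ref{sec_results_and_analysis}.
\begin{verbatim}
# Numerical evaluation (illustrative) of the closed-form SINR CDFs of FQAM and QAM.
import numpy as np

N_F, lam, alpha, d, P_T = 4, 1e-4, 3.0, 50.0, 20.0    # values from Sec. V
N0_dBmHz, W_sc = -173.0, 15000.0
sigma2 = 10 ** (N0_dBmHz / 10) * W_sc * 1e-3          # noise power per frequency component (W)
geom = 2 * np.pi ** 2 / (alpha * np.sin(2 * np.pi / alpha))

def cdf_fqam(rho):
    return 1 - np.exp(-d ** alpha * rho * sigma2 / P_T) \
             * np.exp(-(lam / N_F) * d ** 2 * rho ** (2 / alpha) * geom)

def cdf_qam(rho):
    return 1 - np.exp(-d ** alpha * rho * N_F * sigma2) \
             * np.exp(-lam * d ** 2 * rho ** (2 / alpha) * geom)

rho_dB = np.arange(-10, 31, 10)
rho = 10 ** (rho_dB / 10)
for r, f, q in zip(rho_dB, cdf_fqam(rho), cdf_qam(rho)):
    print(f"SINR {r:>4} dB: CDF FQAM {f:.3f}, QAM {q:.3f}")
\end{verbatim}
At each SINR threshold the FQAM CDF is smaller than the QAM one, i.e., FQAM achieves the better SINR distribution, consistent with the discussion above.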
\section{Results and Analysis}\label{sec_results_and_analysis}
We present the simulation and numerical results in this section. First, the performance of FQAM in terms of bit error rate (BER) and frame error rate (FER) is simulated. Then, numerical results on the CDF of the SINR of FQAM are presented, and both the simulated and numerical results are compared with those of QAM. In all simulations, a multi-cell OFDM network and zero-mean, unit-variance i.i.d. complex Gaussian channels are assumed.
BER and FER comparisons between FQAM and QAM with respect to different numbers of BSs are depicted in Fig. \ref{fig_BER_FQAMvsOFDM} and Fig. \ref{fig_FER_FQAMvsOFDM}, respectively. In these simulations, a $1/3$-rate Turbo code is used. The location of the UE is assumed to be at the cell edge of the serving BS and in the center of the three closest BSs for $N_\mathrm{B}=3$ and $N_\mathrm{B}=7$, which is essentially the worst-case ICI scenario for users in cellular networks. For a fair comparison, both FQAM and QAM have the same spectral efficiency, i.e., $1$ bit/frequency component. It can be observed that FQAM outperforms QAM in terms of BER and FER with a single BS or with three BSs. For $N_\mathrm{B}=1$, the gain of FQAM comes from the higher SNR per frequency component, as FQAM allocates all power to its single active frequency component while QAM spreads its power over all frequency components. The gap between FQAM and QAM becomes more significant with three BSs because less interference is received in FQAM when only one frequency component is active. When the number of BSs reaches seven, neither FQAM nor QAM performs well due to the ICI.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{BER_FQAMvsOFDM.eps}
\caption{BER of FQAM and QAM with different numbers of BSs ($N_\mathrm{F}=4$, $1$ bit/frequency component).}
\label{fig_BER_FQAMvsOFDM}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{FER_FQAMvsOFDM.eps}
\caption{FER of FQAM and QAM with different numbers of BSs ($N_\mathrm{F}=4$, $1$ bit/frequency component).}
\label{fig_FER_FQAMvsOFDM}
\end{figure}
SINR CDFs of FQAM and QAM are compared in Fig.~\ref{fig_SINR_CDF_FQAMvsOFDM}. It can be observed that the analytical results based on stochastic geometry fit the simulations well. Also, the SINR of QAM systems is smaller than that of FQAM, with a difference of around 10 dB observed between the two medians. This is because FQAM introduces randomness in the frequency domain, which reduces ICI.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{SINR_CDF_FQAMvsOFDM.eps}
\caption{SINR CDFs of FQAM and QAM ($N_\mathrm{F}=4$, $\lambda=10^{-4}$, $\alpha=3$, $d=50$m, $N_0=-173$dBm/Hz, $W_\mathrm{sc}=15000$Hz, $P_T=20$W).}
\label{fig_SINR_CDF_FQAMvsOFDM}
\end{figure}
\section{Conclusions} \label{conclusion_section}
This paper has presented the performance of FQAM in terms of BER and FER under interference scenarios, and compared it with that of QAM. In addition, the CDF of the SINR of FQAM has been analysed, numerically computed, and compared with that of QAM. The advantage of FQAM over QAM in terms of BER and FER at the cell edge, for both single and multiple BS scenarios, has been demonstrated. In particular, a significant performance gain has been shown in a reasonably practical scenario with $N_B=3$ BSs. The advantage of FQAM in terms of the SINR distribution has also been shown, where an SINR difference of around 10 dB is observed at an outage of 10\%. All these advantages suggest that more attention should be paid to FQAM as a promising technology for 5G mobile networks.
\section*{Acknowledgment}
This work has been performed in the framework of the Horizon 2020 project FANTASTIC-5G (ICT-671660) receiving funds from the European Union. The authors would like to acknowledge the contributions of their colleagues in the project, although the views expressed in this contribution are those of the authors and do not necessarily represent the project. The authors would also like to thank Sungnam Hong from Samsung Electronics for his insightful suggestions.
|
{
"timestamp": "2018-03-08T02:10:11",
"yymm": "1803",
"arxiv_id": "1803.02729",
"language": "en",
"url": "https://arxiv.org/abs/1803.02729"
}
|
\section{Introduction}
\label{sec:intro}
Deep learning, in particular convolutional neural networks (CNNs), has become the standard for image classification~\cite{alexnet2012,resnet}. Fully convolutional neural networks (F-CNNs) have become the tool of choice for many image segmentation tasks in medical imaging~\cite{Unet,ecb2017,quickNat2018} and computer vision~\cite{longfcn2015,deconvnet2015,segnet,densenet}. The basic building block for all these architectures is the convolution layer, which learns filters capturing local spatial patterns along all the input channels and generates feature maps jointly encoding the spatial and channel information. While much effort is put into improving this joint encoding of spatial and channel information, encoding of the spatial and channel-wise patterns independently is less explored. Recent work attempted to address this issue by explicitly modeling the interdependencies between the channels of feature maps. An architectural component called the squeeze \& excitation (SE) block~\cite{SE2017} was introduced, which can be seamlessly integrated within any CNN model. The SE block factors out the spatial dependency by global average pooling to learn a channel-specific descriptor, which is used to recalibrate the feature map to emphasize useful channels. Its nomenclature is motivated by the fact that the SE block `squeezes' along the spatial domain and `excites' or reweights along the channels. A convolutional network with SE blocks won first place in the ILSVRC 2017 classification competition on the ImageNet dataset, indicating its effectiveness~\cite{SE2017}.
In this work, we want to leverage the high performance of SE blocks for image classification to image segmentation with F-CNNs. We refer to the previously introduced SE block as channel SE (cSE), because it only excites channel-wise, which proved to be effective for classification. We hypothesize that for image segmentation the pixel-wise spatial information is more informative. Hence, we introduce another SE block, which `squeezes' along the channels and `excites' spatially, termed \emph{spatial} SE (sSE). Finally, we propose to have concurrent spatial and channel SE blocks (scSE) that recalibrate the feature maps separately along channel and space and then combine the outputs, encouraging feature maps to be more informative both spatially and channel-wise. To the best of our knowledge, this is the first time that spatial squeeze \& excitation is proposed for neural networks and the first integration of squeeze \& excitation in F-CNNs.
We integrate the proposed SE blocks (cSE, sSE and scSE) in three state-of-the-art F-CNN models for image segmentation to demonstrate that SE blocks are a generic network component to boost performance. We evaluate the segmentation performance in two important medical applications: whole-brain and whole-body segmentation. In whole-brain segmentation, we automatically identify 27 cortical and subcortical structures on magnetic resonance imaging (MRI) T1-weighted brain scans. In whole-body segmentation, we automatically label 10 visceral organs on contrast-enhanced CT scan of the abdomen.
\noindent
\textbf{Related Work: }
F-CNN architectures have successfully been used in a wide variety of medical image segmentation tasks to provide state-of-the-art performance. A seminal F-CNN model is U-Net~\cite{Unet}, which has an encoder/decoder based architecture combined with skip connections between encoder and decoder blocks of similar spatial resolution. SkipDeconv-Net (SD-Net)~\cite{ecb2017} builds upon U-Net, using the unpooling layers of \cite{deconvnet2015} for decoding, and is learnt by jointly optimizing logistic and Dice loss functions. A more recent architecture, termed fully convolutional DenseNet~\cite{densenet}, introduces dense connectivity within the encoder and decoder blocks, unlike U-Net and SD-Net which use normal convolutions.
\section{Methods}
Let us assume an input feature map $\mathbf{X} \in \mathbb{R}^{H \times W \times C'}$ that passes through an encoder or decoder block $\mathbf{F}_{tr}(\cdot)$ to generate output feature map $\mathbf{U} \in \mathbb{R}^{H \times W \times C}$, $\mathbf{F}_{tr}:\mathbf{X}\rightarrow\mathbf{U}$. Here $H$ and $W$ are the spatial height and width, with $C'$ and $C$ being the input and output channels, respectively. The generated $\mathbf{U}$ combines the spatial and channel information of $\mathbf{X}$ through a series of convolutional layers and non-linearities defined by $\mathbf{F}_{tr}(\cdot)$. We place the SE blocks $\mathbf{F}_{SE}(\cdot)$ on $\mathbf{U}$ to recalibrate it to $\hat{\mathbf{U}}$. We propose three different variants of SE blocks, which are detailed next. The SE blocks can be seamlessly integrated within any F-CNN model by placing them after every encoder and decoder block, as illustrated in Fig.~\ref{fig:GA}(a). $\hat{\mathbf{U}}$ is used in the subsequent pooling/upsampling layers.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Method.png}
\vspace{-2mm}
\caption{Illustration of network architecture with squeeze \& excitation (SE) blocks. (a) The proposed integration of SE blocks within F-CNN. (b-d) The architectural design of cSE, sSE and scSE blocks, respectively, for recalibrating feature map $\mathbf{U}$.
}
\label{fig:GA}
\end{figure}
\noindent
\subsection{Spatial Squeeze and Channel Excitation Block (cSE)}
We describe the spatial squeeze and channel excitation block, which was proposed in~\cite{SE2017}. We consider the input feature map $\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \cdots, \mathbf{u}_{C}]$ as a combination of channels $\mathbf{u}_i \in \mathbb{R}^{H \times W}$. Spatial squeeze is performed by a global average pooling layer, producing vector $\mathbf{z} \in \mathbb{R}^{1 \times 1 \times C}$ with its $k^{th}$ element
\begin{equation}
z_k = \frac{1}{H \times W} \sum_i^H \sum_j^W \mathbf{u}_k (i,j).
\end{equation}
\noindent
This operation embeds the global spatial information in vector $\mathbf{z}$. This vector is transformed to $\hat{\mathbf{z}}=\mathbf{W}_1 (\delta(\mathbf{W}_2 \mathbf{z}))$, with $\mathbf{W}_1 \in \mathbb{R}^{C \times \frac{C}{2}}$, $\mathbf{W}_2 \in \mathbb{R}^{\frac{C}{2} \times C}$ being the weights of two fully-connected layers and $\delta(\cdot)$ the ReLU operator. This encodes the channel-wise dependencies. The dynamic range of the activations of $\hat{\mathbf{z}}$ is brought to the interval $[0, 1]$ by passing it through a sigmoid layer $\sigma(\hat{\mathbf{z}})$. The resultant vector is used to recalibrate or excite $\mathbf{U}$ to
\begin{equation}
\hat{\mathbf{U}}_{cSE} = F_{cSE}(\mathbf{U}) = [\sigma(\hat{z_1})\mathbf{u}_1, \sigma(\hat{z_2})\mathbf{u}_2, \cdots, \sigma(\hat{z_{C}})\mathbf{u}_{C}].
\end{equation}
\noindent
The activation $\sigma(\hat{z}_i)$ indicates the importance of the $i^{th}$ channel, by which the channels are rescaled. As the network learns, these activations are adaptively tuned to ignore less important channels and emphasize the important ones. The architecture of the block is illustrated in Fig.~\ref{fig:GA}(b).
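A minimal code sketch of the cSE recalibration described above is given below. The use of PyTorch, the module name and the bias handling are our own illustrative choices; only the squeeze-excite structure with a channel reduction to $C/2$ follows the description above.
\begin{verbatim}
# Illustrative PyTorch sketch of the channel SE (cSE) block (assumed framework).
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // 2)   # W_2: reduce to C/2
        self.fc2 = nn.Linear(channels // 2, channels)   # W_1: back to C
    def forward(self, U):                               # U: (batch, C, H, W)
        z = U.mean(dim=(2, 3))                          # spatial squeeze (global avg. pool)
        z = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))
        return U * z.view(*z.shape, 1, 1)               # channel-wise excitation

U = torch.randn(2, 64, 32, 32)
print(ChannelSE(64)(U).shape)                           # torch.Size([2, 64, 32, 32])
\end{verbatim}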
\vspace{-2mm}
\subsection{Channel Squeeze and Spatial Excitation Block (sSE)}
We introduce the channel squeeze and spatial excitation block that squeezes the feature map $\mathbf{U}$ along the channels and excites spatially, which we consider important for fine-grained image segmentation. Here, we consider an alternative slicing of the input tensor $\mathbf{U} = [\mathbf{u}^{1,1}, \mathbf{u}^{1,2}, \cdots, \mathbf{u}^{i,j}, \cdots ,\mathbf{u}^{H,W}]$, where $\mathbf{u}^{i,j} \in \mathbb{R}^{1 \times 1 \times C}$ corresponding to the spatial location $(i,j)$ with $i \in \{1, 2, \cdots, H\}$ and $j \in \{1, 2, \cdots, W\}$. The spatial squeeze operation is achieved through a convolution $\mathbf{q} = \mathbf{W}_{sq} \star \mathbf{U}$ with weight $\mathbf{W}_{sq} \in \mathbb{R}^{1 \times 1 \times C \times 1}$, generating a projection tensor $\mathbf{q} \in \mathbb{R}^{H \times W}$. Each $q_{i,j}$ of the projection represents the linearly combined representation for all channels $C$ for a spatial location $(i,j)$. This projection is passed through a sigmoid layer $\sigma(.)$ to rescale activations to $[0,1]$, which is used to recalibrate or excite $\mathbf{U}$ spatially
\begin{equation}
\hat{\mathbf{U}}_{sSE} = F_{sSE}(\mathbf{U}) = [\sigma(q_{1,1})\mathbf{u}^{1,1}, \cdots, \sigma(q_{i,j})\mathbf{u}^{i,j}, \cdots, \sigma(q_{H,W})\mathbf{u}^{H,W}].
\end{equation}
\noindent
Each value $\sigma(q_{i,j})$ corresponds to the relative importance of the spatial location $(i,j)$ of a given feature map. This recalibration gives more importance to relevant spatial locations and ignores irrelevant ones. The architectural flow is shown in Fig.~\ref{fig:GA}(c).
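An analogous sketch of the sSE block, under the same illustrative assumptions as above, is:
\begin{verbatim}
# Illustrative PyTorch sketch of the spatial SE (sSE) block (assumed framework).
import torch
import torch.nn as nn

class SpatialSE(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1, bias=False)  # W_sq projection
    def forward(self, U):                    # U: (batch, C, H, W)
        q = torch.sigmoid(self.conv(U))      # channel squeeze -> (batch, 1, H, W) in [0, 1]
        return U * q                         # spatial excitation by broadcasting

U = torch.randn(2, 64, 32, 32)
print(SpatialSE(64)(U).shape)                # torch.Size([2, 64, 32, 32])
\end{verbatim}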
\noindent
\subsection{Spatial and Channel Squeeze \& Excitation Block (scSE)}
Finally, we introduce a combination of the above two SE blocks, which concurrently recalibrates the input $\mathbf{U}$ spatially and channel-wise. We obtain the concurrent spatial and channel SE, $\hat{\mathbf{U}}_{scSE}$, by element-wise addition of the channel and spatial excitations, $\hat{\mathbf{U}}_{scSE} = \hat{\mathbf{U}}_{cSE} + \hat{\mathbf{U}}_{sSE}$. A location $(i,j,c)$ of the input feature map $\mathbf{U}$ is given higher activation when it receives high importance from both channel re-scaling and spatial re-scaling. This recalibration encourages the network to learn more meaningful feature maps that are relevant both spatially and channel-wise. The architecture of the combined scSE block is illustrated in Fig.~\ref{fig:GA}(d).
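In code, the scSE block is simply the element-wise sum of the two recalibrations; the brief sketch below reuses the \texttt{ChannelSE} and \texttt{SpatialSE} classes from the sketches above and would be placed after every encoder/decoder block as in Fig.~\ref{fig:GA}(a).
\begin{verbatim}
# Illustrative scSE block: element-wise addition of the cSE and sSE outputs.
# Reuses the ChannelSE and SpatialSE sketches above (assumed PyTorch framework).
import torch.nn as nn

class SpatialChannelSE(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.cse = ChannelSE(channels)
        self.sse = SpatialSE(channels)
    def forward(self, U):
        return self.cse(U) + self.sse(U)     # U_scSE = U_cSE + U_sSE
\end{verbatim}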
\vspace{4mm}
\noindent
\textbf{Model Complexity:}
Let us consider an encoder/decoder block with an output feature map of $C$ channels. Addition of a cSE block introduces $C^2$ new weights, while an sSE block introduces $C$ weights. So, the increase in model complexity of an F-CNN with $h$ encoder/decoder blocks is $\sum_{i=1}^h (C_i^2 + C_i)$, where $C_i$ is the number of output channels of the $i^{th}$ encoder/decoder block. To give a concrete example, the U-Net in our experiments has about $2.1 \times 10^6$ parameters. The scSE block adds $3.3 \times 10^4$ parameters, which is an approximate increase of 1.5\%. Hence, SE blocks only increase the overall network complexity by a very small fraction.
\section{Experimental Results}
In this section, we conduct extensive experiments to explore the impact of our proposed modules. We choose three state-of-the-art F-CNN architectures, U-Net~\cite{Unet}, SD-Net~\cite{ecb2017} and the fully convolutional DenseNet~\cite{densenet}. All of the networks have an encoder/decoder based architecture. The encoding and decoding paths consist of repeating blocks separated by down-sampling and up-sampling, respectively. We insert (i) channel-wise SE (cSE) blocks, (ii) spatial SE (sSE) blocks and (iii) concurrent spatial and channel-wise SE (scSE) blocks after every encoder and decoder block of each F-CNN architecture and compare against its vanilla version.
\noindent
\textbf{Datasets: }
We use two datasets in our experiments. (i) Firstly, we tackle the task of segmenting MRI T1 brain scans into 27 cortical and sub-cortical structures. We use the Multi-Atlas Labelling Challenge (MALC) dataset~\cite{malc}, which is a part of OASIS~\cite{oasis}, with 15 scans for training and 15 scans for testing, consistent with the challenge instructions. The main challenge associated with this dataset is the limited training data, with severe class imbalance between the target structures. Manual segmentations for MALC were provided by Neuromorphometrics, Inc.\footnote{http://Neuromorphometrics.com/} (ii) Secondly, we tackle the task of segmenting 10 organs on whole-body contrast enhanced CT (ceCT) scans. We use data from the Visceral dataset~\cite{visceral}. We train on 65 scans from the silver corpus, and test on 20 scans with manual annotations from the gold corpus. The silver corpus was automatically labeled by fusing the results of multiple algorithms, yielding noisy labels. The main challenges associated with whole-body segmentation are the highly variable shape of the visceral organs and the need to generalize when trained with noisy labels. We use the Dice score for performance evaluation.
\noindent
\textbf{Model Learning:} In our experiments, all three F-CNN architectures had 4 encoder blocks, one bottleneck layer, 4 decoder blocks and a classification layer at the end. The logistic loss function was weighted with median frequency balancing~\cite{segnet} to compensate for the class imbalance. The learning rate was initially set to $0.01$ and decreased by one order of magnitude after every 10 epochs. The momentum was set to $0.95$, the weight decay constant to $10^{-4}$ and the mini-batch size to $4$. Optimization was performed using stochastic gradient descent. Training was continued until the validation loss converged. All experiments were conducted on an NVIDIA Titan Xp GPU with 12GB RAM.
\begin{table}[t]
\caption{Mean and standard deviation of the global Dice scores for the different F-CNN models without and with cSE, sSE and scSE blocks on both datasets.}
\centering
\begin{tabular}{|p{0.94in}|P{0.89in}|P{0.89in}|P{0.89in}|P{0.89in}|}
\hline
& \multicolumn{4}{c|}{\textbf{MALC Dataset}} \\
Networks & No SE Block & + cSE Block & + sSE Block & + scSE Block \\
\hline
\textbf{DenseNets}\cite{densenet} & $0.842\pm0.058$ & $0.865\pm0.069$ & $0.876\pm0.061$ & $\mathbf{0.882}\pm0.063$ \\
\textbf{SD-Net}\cite{ecb2017} & $0.771\pm0.150$ & $0.790\pm0.120$ & $0.860\pm0.062$ & $\mathbf{0.862}\pm0.082$ \\
\textbf{U-Net}\cite{Unet} & $0.763\pm0.110$ & $0.825\pm0.063$ & $0.837\pm0.058$ & $\mathbf{0.843}\pm0.062$ \\ \hline
& \multicolumn{4}{c|}{\textbf{Visceral Dataset}} \\
Networks & No SE Block & + cSE Block & + sSE Block & + scSE Block \\
\hline
\textbf{DenseNets}\cite{densenet} & $0.892\pm0.068$ & $0.903\pm0.058$ & $0.912\pm0.056$ & $\mathbf{0.918}\pm0.051$ \\
\textbf{SD-Net}\cite{ecb2017} & $0.871\pm0.064$ & $0.892\pm0.065$ & $0.901\pm0.057$ & $\mathbf{0.907}\pm0.057$ \\
\textbf{U-Net}\cite{Unet} & $0.857\pm0.106$ & $0.865\pm0.086$ & $0.872\pm0.080$ & $\mathbf{0.881}\pm0.082$ \\ \hline
\end{tabular}
\label{tab:res}
\end{table}
\noindent
\textbf{Quantitative Results: }
Table~\ref{tab:res} lists the mean Dice scores on the test data for both datasets. Results of the standard networks together with the addition of cSE, sSE and scSE blocks are reported. Comparing along the columns, we observe that the inclusion of any SE block consistently provides a statistically significant ($p\le0.001$, Wilcoxon signed-rank) increase in Dice score in comparison to the vanilla version, for all networks and in both applications. We further observe that the spatial excitation yields a higher increase than the channel-wise excitation, which confirms our hypothesis that spatial excitation is more important for segmentation. Spatial and channel-wise SE yields the overall highest performance, with an increase of $4-8 \%$ Dice for brain segmentation and $2-3 \%$ Dice for whole-body segmentation compared to the standard networks. Particularly for brain segmentation, the performance increase is striking, given the limited increase in model complexity. Comparing the results across network architectures, DenseNets yield the best performance.
Fig.~\ref{fig:plotBrain} and Fig.~\ref{fig:plotBody} present structure-wise results for whole-brain and whole-body segmentation, respectively, for DenseNets. In Fig.~\ref{fig:plotBrain}, we observe that sSE and scSE outperform the normal model consistently for all structures. The cSE model outperforms the normal model for most structures, except for some challenging structures like the 3rd/4th ventricles, amygdala and ventral DC, where its performance degrades. One possible explanation could be the small size of these structures, which might be overlooked when exciting only the channels. The performance of sSE and scSE is very close. For whole-body segmentation, in Fig.~\ref{fig:plotBody}, we observe a similar pattern.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{BoxPlot_WholeBrain.png}
\vspace{-2mm}
\caption{Boxplot of Dice scores for all brain structures on the left hemisphere (due to space constraints), using DenseNets on MALC dataset, without and with proposed cSE, sSE, scSE blocks. Grey and white matter are abbreviated as GM and WM, respectively.}
\label{fig:plotBrain}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.90\textwidth]{BoxPlot_WholeBody.png}
\vspace{-2mm}
\caption{Structure-wise Dice performance of DenseNets on Visceral dataset, without and with proposed cSE, sSE, scSE blocks. Left and right are indicated as L. and R. Psoas major muscle is abbreviated as PM.}
\label{fig:plotBody}
\end{figure}
\noindent
\textbf{Qualitative Results: }
Fig.~\ref{fig:result} presents segmentation results for an MRI T1 brain scan in Fig.~\ref{fig:result}(a-d) and for a whole-body ceCT scan in Fig.~\ref{fig:result}(e-h). We show the input scan, the ground truth annotations, the DenseNet segmentation and our proposed DenseNet+scSE segmentation. We highlight ROIs with a white box and a red arrow to show regions where the inclusion of the scSE block improved the segmentation. For the MRI brain scan segmentation, we indicate the left putamen, which is under-segmented by DenseNet (Fig.~\ref{fig:result}(c)), but whose segmentation improves with the inclusion of the scSE block (Fig.~\ref{fig:result}(d)). For the whole-body ceCT scan, we indicate the spleen, which is over-segmented by DenseNet (Fig.~\ref{fig:result}(g)), and which is rectified by adding the scSE block (Fig.~\ref{fig:result}(h)).
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{Result_WholeBody.png}
\vspace{-2mm}
\caption{Input scan, ground truth annotations, DenseNet segmentation and DenseNet+scSE segmentation for both whole-brain MRI T1 (a-d) and whole-body ceCT (e-h) are shown. ROIs are indicated by white box and red arrow highlighting regions where the scSE block improved the segmentation, for both applications.}
\label{fig:result}
\end{figure}
\vspace{-2mm}
\section{Conclusion}
\label{sec:conc}
\vspace{-2mm}
We proposed the integration of squeeze \& excitation blocks within F-CNNs for image segmentation. Further, we introduced the \emph{spatial} squeeze \& excitation, which outperforms the previously proposed channel-wise squeeze \& excitation. We demonstrated that SE blocks yield a consistent improvement for three different F-CNN architectures and for two different segmentation applications. Hence, recalibration with SE blocks seems to be a fairly generic concept to boost performance in CNNs. Strikingly, the substantial increase in segmentation accuracy comes with a negligible increase in model complexity. With the seamless integration, we believe that squeeze \& excitation can be a crucial component for neural networks in many medical applications.
\noindent
\textbf{Acknowledgement:} We thank the Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B) for funding and NVIDIA corporation for GPU donation.
\vspace{-2mm}
|
{
"timestamp": "2018-06-11T02:16:34",
"yymm": "1803",
"arxiv_id": "1803.02579",
"language": "en",
"url": "https://arxiv.org/abs/1803.02579"
}
|
\section{Introduction}
\label{sec:introduction}
Physical cosmology has reached a level of precision that allows us to test fundamental physics at the percent order, at scales of distance and energy far beyond any terrestrial experiment. After the era of the cosmic microwave background surveys, whose pinnacle was reached with \emph{Planck}~\cite{Adam:2015rua}, comes the time of very large galaxy surveys, such as DESI~\cite{Aghamousa:2016zmz}, Euclid~\cite{Laureijs:2011gra}, the Square Kilometer Array (SKA)~\cite{Braun:2015zta}, or the Large Synoptic Survey Telescope~\cite{Abell:2009aa}. Galaxy surveys offer, in particular, an unprecedented insight into gravitational physics, and have the potential to uncover departures from general relativity (GR), if any.
One of the fundamental pillars of general relativity is Einstein's equivalence principle (EEP), which states that all non-gravitational phenomena are locally unaffected by gravity, if they are performed in a freely falling frame. A particular consequence of this principle is that everything falls according to the same laws, including light. The EEP thus represents the bound between gravity and the rest of physics; it is extremely well tested in the Solar System. Besides, one of its components, namely local Lorentz invariance, implies the $CPT$ symmetry of particle physics~\cite{1951PhRv...82..664S}, which is also well tested on Earth~\cite{Liberati:2013xla}. Nevertheless, the validity of the EEP is much harder to test on cosmic scales, or when the unknown dark matter is concerned. Yet, such tests are equally important as terrestrial ones, since any deviation from the EEP would dramatically shake the basis of fundamental physics in general.
It is interesting to notice that departures from the EEP are, in fact, rarely considered in cosmological tests of gravitation. Notable exceptions are~\cite{Kehagias:2013rpa,Creminelli:2013nua}, exploiting consistency relations between two- and three-point correlation functions of the matter distribution to test for differences between the way baryons and dark matter fall. In general, however, cosmological tests of gravitation beyond GR do assume that the EEP holds. In an inhomogeneous Universe characterised by scalar perturbations only, there are schematically four degrees of freedom: the matter density contrast~$\delta$, the peculiar velocity potential~$V$ of the matter flow, and the two Bardeen potentials~$\Psi$ and $\Phi$ (see e.g.~\cite{PeterUzan}). In GR, under reasonable assumptions, $\Psi=\Phi$ is related to $\delta$ by the Poisson equation, $\delta$ to $V$ by the continuity equation, and $V$ to $\Psi$ via Euler's equation, which closes the system. Alternative theories of gravity generically modify these four relationships.
The standard way to test for deviations from GR in cosmology consists in combining measurements of redshift-space distortions (RSD) with gravitational lensing (e.g.~\cite{Ferte:2017bpf}). RSD are indeed sensitive to a combination of $\delta$ and $V$, which can be disentangled by measuring both the monopole and quadrupole of the correlation function, whereas gravitational lensing measures the sum of the two metric potentials $\Phi+\Psi$. Therefore, these three measurements allow us to test \emph{three} of the \emph{four} relationships between $\Psi, \Phi, \delta$ and $V$, see fig.~\ref{fig:variables}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{diagram_introduction}
\caption{Scalar cosmological perturbations consist of four inter-dependent variables: the density contrast~$\delta$, the peculiar velocity potential~$V$, and the gravitational potentials~$\Phi$ and $\Psi$. Standard large-scale structure analyses measure~$\delta$ and $V$ through RSD, and $\Phi+\Psi$ with gravitational lensing. Euler's equation is then used to infer $\Psi$. This allows one to look for deviations from Poisson's equation and for anisotropic stress. These are, therefore, degenerate with violations of Euler's equation. The method proposed in this article is an independent probe of the relationship between $V$ and $\Psi$, which breaks that degeneracy.}
\label{fig:variables}
\end{figure}
The most common approach consists in keeping \emph{the continuity and Euler's equations unchanged}\footnote{Here it is implicit that the laws of optics in curved space-time are also left unchanged with respect to GR, so that the whole interpretation of cosmological observations does not need to be modified.} and to test for modifications in the Poisson equation and in the relation between $\Phi$ and $\Psi$~\cite{Amendola:2007rr,Amendola:2012ky}. This has led to precise constraints of the growth rate of structure $f$ (see e.g.~\cite{Alam:2016hwk}) and of the anisotropic stress~\cite{Reyes:2010tr,Ade:2015rim}.
Relaxing the assumption that Euler's equation holds opens the cosmic Pandora's box, adding a freedom which cannot be constrained by the standard cosmological probes. Indeed, if Euler's equation is allowed to change, then one is left with three probes for four relations. A possible solution is to parameterise modifications to Euler's equation and Einstein's equations with free parameters (usually functions of time) and then use redshift-space distortions and gravitational lensing to constrain these parameters. This method is at the core of the effective field theory of dark energy. However, the fact that we have only three observational probes automatically introduces strong degeneracies between the parameters. This degeneracy cannot be broken without adding extra information about the underlying theory of gravity (like for example assuming a specific form for the Lagrangian).
In this article, we show that the so-called \emph{relativistic effects} in galaxy surveys are the ideal laboratory to test the EEP \emph{directly}. As such, they can break the degeneracy of the standard set of cosmological probes, induced by potential violations of Euler's equation. Specifically, as originally shown by~\cite{Bonvin:2013ogt}, some relativistic corrections to the observed galaxy number counts generate a dipole in the two-point correlation function. The amplitude of this dipole is affected by \emph{gravitational redshift}, i.e.\,by the gradient of $\Psi$ (the time component of the metric). Combining the dipole with RSD allows us therefore to test directly the relationship between $V$ and $\Psi$ as shown in fig.~\ref{fig:variables}. This would provide the first direct test of the EEP on cosmological scales, since it is sensitive to differences in the way photons and dark matter move\footnote{As explained in more detail in sec.~\ref{sec:theory}, we assume here that galaxies follow the motion of dark matter halos, i.e.\,that there is no velocity bias.}. More precisely, we will see that the relativistic dipole contains terms of the form $(\text{galaxy acceleration} + \nabla \Psi)$, which exactly cancel if the EEP is satisfied, and do not in general. This shows that even though relativistic effects are not significant enough to improve the constraints on parameters that can be measured via standard RSD and lensing techniques (as pointed out in~\cite{Lorenz:2017iez}), they are essential to test for deviations from Euler's equation that would remain unconstrained without them.
The remainder of the article is organised as follows. In sec.~\ref{sec:theory}, we discuss the notion of free fall in GR and alternative theories of gravity. From the analysis of scalar-tensor and vector-tensor theories, we deduce a quite general parametrisation of the deviations from Euler's equation in cosmology, at sub-Hubble scales. Then, in sec.~\ref{sec:dipole}, we present the relativistic effects in galaxy surveys which give rise to a dipole in the correlation function. We discuss the origin of this dipole, and emphasise its usefulness to constrain the EEP, before explaining how to extract it from the data in practice. In sec.~\ref{sec:forecasts}, we present Fisher-matrix forecasts on deviations from Euler's equation in future surveys like DESI and the SKA. Finally, we conclude in sec.~\ref{sec:conclusion}.
\section{Free fall in relativity and beyond}
\label{sec:theory}
\subsection{Generalities}
\label{sec:generalities_free_fall}
The curious fact that all things seem to fall the same way has been fundamental in the genesis of Einstein's theory of general relativity (GR)~\cite{Pais}. During the last century, local tests of the weak equivalence principle have reached an exquisite precision~\cite{Will:2014kxa}; lately, the first results of the MICROSCOPE mission~\cite{microscope} have constrained the E\"otv\"os ratio at the level of $10^{-14}$~\cite{Touboul:2017grn}. Nevertheless, for obvious reasons, the validity of the equivalence principle is more uncertain on astrophysical and cosmological scales, or when the unknown dark matter is concerned.
In cosmology, if we only consider scalar perturbations of a spatially Euclidean Friedmann-Lema\^itre-Robertson-Walker (FLRW) model in the Poisson gauge, the metric reads
\begin{equation}\label{eq:metric}
\mathrm{d} s^2 = a^2(\eta) \pac{ -(1+2\Psi)\mathrm{d} \eta^2 + (1-2\Phi) \delta_{ij} \mathrm{d} x^i \mathrm{d} x^j },
\end{equation}
where $a(\eta)$ is the scale factor of the background FLRW model, $\eta$ and $x^i$ being respectively conformal time and spatial comoving coordinates, while $\Psi$ and $\Phi$ are the gauge-invariant Bardeen potentials. If matter is assumed to be freely falling within that space-time, then its flow is characterised by Euler's equation,
\begin{equation}\label{eq:Euler}
\vect{V}' + \mathcal{H} \vect{V} + \vect{\nabla}\Psi = 0,
\end{equation}
where $\vect{V}$ is the peculiar matter velocity field, such that the matter four-velocity reads~$v^\mu=a^{-1}(1-\Psi, V^i)$, $\mathcal{H}\define a'/a$ is the conformal Hubble rate, and a prime denotes a derivative with respect to conformal time~$\eta$. Equation~\eqref{eq:Euler} directly follows from the geodesic equation, assuming that $|\vect{V}|, \Psi\ll 1$. There are, of course, standard corrections to eq.~\eqref{eq:Euler}, due to the fact that fluid elements are not exactly in free fall. The most common deviation comes from the velocity dispersion of the fluid, which gives rise to a pressure term; other forms of stress can yield different corrections, like shear viscosity, turning Euler's equation into the Navier-Stokes equation. These effects have been considered in a cosmological context, e.g. in~\cite{Blas:2015tla}, and typically manifest on scales below~$1\U{Mpc}$. In this article, we will consider much larger scales and neglect such departures from free fall.
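As a simple illustration of eq.~\eqref{eq:Euler}, the short numerical sketch below integrates it for a single mode in a matter-dominated background ($a\propto\eta^2$, so $\mathcal{H}=2/\eta$) with a constant force $F=|\vect{\nabla}\Psi|$, and recovers the familiar growing solution $V=-F\eta/3$. The choice of background and of a constant potential are illustrative assumptions, not part of the analysis of this article.
\begin{verbatim}
# Illustrative check of Euler's equation V' + (2/eta) V = -F in matter domination,
# with a constant force F = |grad Psi| (assumed); the growing solution is V = -F*eta/3.
import numpy as np
from scipy.integrate import solve_ivp

F = 1.0                                    # constant |grad Psi| (arbitrary units)
rhs = lambda eta, V: -2.0 / eta * V - F    # V' = -H V - grad Psi, with H = 2/eta

sol = solve_ivp(rhs, (1.0, 50.0), [0.0], dense_output=True, rtol=1e-8)
eta = np.linspace(10.0, 50.0, 5)
print(sol.sol(eta)[0])                     # numerical V(eta)
print(-F * eta / 3.0)                      # analytic growing mode, matched at late times
\end{verbatim}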
It is also worth mentioning that, even in GR, free fall does not necessarily imply geodesic motion, on which eq.~\eqref{eq:Euler} is based. Indeed, this implication is valid for test particles, but not exactly for extended self-gravitating bodies, despite the validity of the strong equivalence principle. A first example is the case of spinning objects, whose motion is described by the Mathisson-Papapetrou-Dixon equations~\cite{Mathisson:1937zz, Papapetrou:1951pa,Dixon:1970zza}. In weak fields, the deviation from geodesic motion is encoded in a force arising from the coupling between the angular momentum~$\vect{J}$ of the object and the gravito-magnetic field~$\vect{B}\e{g}$ as
$
\vect{F}\e{MPD} = - \vect{\nabla}(\vect{J}\cdot\vect{B}\e{g})
$~\cite{Chicone:2005jj}.
In cosmology, one has at most $\vect{B}\e{g}\sim 10^{-2} \vect{\nabla}\Psi$~\cite{Adamek:2015eda}, so that the ratio between $\vect{F}\e{MPD}$ and the `Newtonian' force $m\vect{\nabla}\Psi$ is of the order of $10^{-2}k R^2\Omega\sim 10^{-7} (k\times 1\U{Mpc})$, where $k$ is the perturbation mode at hand, while $R, \Omega$ are the size and angular velocity of the galaxy. Another known effect, which can make extended objects deviate from geodesic motion, is the so-called self-force, related to the gravitational radiation emitted by the objects~\cite{Poisson:2003nc}.
The `post-geodesic motion' phenomenology becomes considerably richer as one allows for theories of gravity beyond GR. Adding new degrees of freedom to gravitation, or new interactions to the dark-universe sector, generically yields violations of the equivalence principles, affecting \emph{de facto} the way things fall. Such violations can be classified into three categories:
\begin{enumerate}
\item Violation of the weak equivalence principle. The universality of free fall is automatically broken if gravity contains degrees of freedom that couple differently to various matter species. Different couplings can be either fundamental~\cite{Carroll:2008ub} or the result of screening mechanisms~\cite{Hui:2009kc,Jain:2011ji, Brax:2012gr}. This manifests itself in the appearance of a \emph{fifth force}, which cannot be absorbed by a simple redefinition of the metric (from the Einstein to the Jordan frame). The archetype of this situation is a scalar-tensor theory where the scalar degree of freedom~$\phi$ couples differently to dark matter and to the standard model. We will focus on this in sec.~\ref{sec:scalar-tensor}. Another possibility is the direct coupling between dark and baryonic matter~\cite{PhysRevLett.70.119}.
\item Violation of local Lorentz invariance. It is expected, in particular, that the existence of preferred frames or directions implies that objects with different velocities fall differently. An example often cited in this context is the Einstein-\ae ther theory (see sec.~\ref{sec:Einstein-aether}), which contains an additional vector degree of freedom~$u^\mu$ compared to GR. This field can be thought of as the four-velocity of an \ae ther, defining a preferred frame.
\item Violation of the strong equivalence principle. The strong equivalence principle is very specific to GR. Apart from Nordstr\"om's theory~\cite{Deruelle:2011wu}, none of the alternatives seems to satisfy it. Its violation can manifest as a difference between the inertial mass~$m\e{in}$ and the passive gravitational mass~$m\e{p}$ of a self-gravitating body, depending on its gravitational binding energy~$E\e{g}$. This phenomenon is known as the Nordtvedt effect~\cite{PhysRev.169.1014}, and is quantified by the parameter~$\eta\e{N}$, as $m\e{p}/m\e{in}=1-\eta\e{N} E\e{g}/m\e{in}$. For a galaxy, $E\e{g}/m\e{in}\sim 10^{-6}$, so that this effect is very small even if $\eta\e{N}\sim 1$. Tests of the strong equivalence principle using black holes have been proposed recently in~\cite{Hui:2012jb, Sakstein:2017bws}.
\end{enumerate}
We restrict our analysis to models where the particles of the standard model (in particular photons) are minimally coupled to gravity, so that the EEP applies to this matter sector, in agreement with experiments. Only dark matter will be allowed to couple non-minimally to the additional gravitational degrees of freedom. Yet, we will assume that the motion of dark matter is directly traced by the motion of galaxies; in other words, \emph{there is no velocity bias}, $V=V\e{g}$. This assumption could seem to be at odds with the fact that we are precisely looking for deviations from the equivalence principle. However, since a galaxy always sits inside a dark matter halo, the latter exerts a binding force on the former, which is very likely to dominate any difference in the way baryonic and dark matter experience gravitation. Such a difference would simply lead to a shift between the baryonic and dark-matter centres of mass~\cite{PhysRevLett.67.2926}, which has been used in~\cite{Desmond:2018euk} to constrain the existence of a fifth force in the local Universe. As such, the method proposed in the present article to test the EEP is not based on the difference between the motion of dark matter and baryons, contrary to~\cite{Creminelli:2013nua}. It is instead based on the difference between the motion of dark matter and photons.
Based on these considerations, we parameterise deviations from Euler's equation~\eqref{eq:Euler} in the following way
\begin{empheq}[box=\fbox]{equation}\label{eq:parametrisation_modified_Euler}
\mathbf{V}' + \mathcal{H} \pac{1+\Theta(\eta)} \mathbf{V} +\pac{1+\Gamma(\eta)} \vect{\nabla}\Psi=0\, .
\end{empheq}
Here $\Theta$ and $\Gamma$ are two free functions of time that encode modifications in the way dark matter (and consequently galaxies) fall in the gravitational potential $\Psi$. The aim of this paper is to constrain these free functions using relativistic effects. Equation~\eqref{eq:parametrisation_modified_Euler} is quite general and contains a rich phenomenology. The parameter $\Gamma$ encodes the effect of a fifth force acting on dark matter. The parameter $\Theta$ can be thought of as a friction term, which modifies the way the velocity redshifts away.
As we will see, the specificity of relativistic effects is that they can constrain $\Theta$ and $\Gamma$ independently of the underlying theory of gravity which generates these modifications.
Before forecasting the constraints on $\Theta$ and $\Gamma$ that we expect from future surveys, let us briefly present two cases which exist in the literature and which generate precisely the kind of deviations written in eq.~\eqref{eq:parametrisation_modified_Euler}.
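To get a feeling for the phenomenology of eq.~\eqref{eq:parametrisation_modified_Euler}, one can integrate it numerically for a single mode. The following minimal Python sketch does so under simplifying assumptions that are ours (a flat $\Lambda$CDM background in units where $H_0=1$, constant $\Theta$ and $\Gamma$, and a constant potential~$\Psi$, the latter being only a rough approximation on sub-Hubble scales); it merely illustrates that $\Gamma$ enhances the velocity generated by a given potential well, while $\Theta$ acts as a friction.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.274                       # assumed flat LCDM background, H0 = 1

def cH(a):
    # conformal Hubble rate, cal H = a H(a)
    return a * np.sqrt(Om0 / a**3 + (1.0 - Om0))

def dV_dlna(lna, V, Theta, Gamma, Psi):
    # the parametrised Euler equation above, rewritten with
    # d/d eta = cal H d/d ln a, for a constant potential Psi (toy assumption)
    a = np.exp(lna)
    return -(1.0 + Theta) * V - (1.0 + Gamma) * Psi / cH(a)

Psi = -1.0e-5                     # potential well of an over-density
for Theta, Gamma in [(0.0, 0.0), (0.0, 0.5), (0.1, 0.5)]:
    sol = solve_ivp(dV_dlna, [np.log(0.05), 0.0], [0.0],
                    args=(Theta, Gamma, Psi))
    print(f"Theta={Theta}, Gamma={Gamma}: V(a=1) = {sol.y[0, -1]:.3e}")
\end{verbatim}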
\subsection{Scalar-tensor theories}
\label{sec:scalar-tensor}
We first focus on a popular class of alternative theories of gravity/dark energy, namely scalar-tensor theories. We start with a simple example (sec.~\ref{sec:scalar-tensor_simple_example}) in order to get an intuition of how the presence of a scalar degree of freedom affects free fall. Despite its simplicity, this example will turn out to essentially contain the physics of the general case (sec.~\ref{sec:scalar-tensor_general}).
\subsubsection{A simple example}
\label{sec:scalar-tensor_simple_example}
Let us consider the simple case of a scalar field~$\phi$, mediating a fifth force via conformal coupling to dark matter only. The corresponding action is
\begin{equation}
S = S\e{EH}[g_{\mu\nu}]
+ S\e{SM}[\text{standard matter},g_{\mu\nu}]
+ S_\phi[\phi,g_{\mu\nu}]
+ S\e{DM}[\text{dark matter},C^2(\phi)g_{\mu\nu}],
\end{equation}
where $g_{\mu\nu}$ denotes the space-time metric; $S\e{EH}, S\e{SM}$ are respectively the Einstein-Hilbert action and the action of the standard model of particle physics; $S_\phi$ is the canonical action of a scalar field with a potential $U(\phi)$; and finally $S\e{DM}$ is the action of dark matter, which is coupled to the metric via a conformal factor~$C(\phi)$. We model dark matter as a set of spin-less point particles, with individual action
\begin{equation}
S_1 = -m \int |C(\phi)| \; \mathrm{d}\tau,
\end{equation}
where $\tau$ denotes proper time with respect to the physical metric~$g_{\mu\nu}$, and $m$ is the bare mass of the dark matter particle.
For this model, the equations of motion for $g_{\mu\nu}$, $\phi$, and dark matter are
\begin{align}
R_{\mu\nu} -\frac{1}{2} R g_{\mu\nu} &= 8\pi G \pa{T_{\mu\nu}\h{SM} + T_{\mu\nu}^\phi + |C(\phi)| T_{\mu\nu}\h{DM}}
\label{eq:EFE_simple_example}\\
\nabla^\mu \nabla_\mu \phi - U_{,\phi} &= C_{,\phi} \, T\e{DM} \label{eq:KG_simple_example} \\
\nabla_\mu(\rho\e{DM} v^\mu) &= 0 \label{eq:conservation_energy_DM}\\
v^\nu \nabla_\nu \pac{ C(\phi) v^\mu} &= - \partial^\mu C \label{eq:fifth_force_simple_example}.
\end{align}
Let us comment on the notation. We chose to call $T^{\mu\nu}\e{DM}=\rho\e{DM}v^\mu v^\nu$ the stress-energy tensor of (cold) dark matter \emph{in the absence of conformal coupling}, and $T\e{DM}\define g_{\mu\nu} T^{\mu\nu}\e{DM}$; $\rho\e{DM}$ must be thought of as related to the number of dark matter particles, while $\vect{v}$ is the four-velocity of the dark matter flow. Note that $\rho\e{DM}$ is conserved, by virtue of eq.~\eqref{eq:conservation_energy_DM}.
The presence of the conformal coupling between dark matter and $\phi$ has three physical effects:
\begin{enumerate}
\item It changes the dark matter \emph{active gravitational mass} by a factor $|C(\phi)|$, as this factor multiplies the bare stress-energy tensor~$T_{\mu\nu}\h{DM}$ in eq.~\eqref{eq:EFE_simple_example}.
\item It also changes its \emph{passive gravitational mass} and \emph{inertial mass} by a factor $C(\phi)$ ---which is assumed to be positive here--- as seen in the left-hand side of eq.~\eqref{eq:fifth_force_simple_example}.
\item It adds a fifth force proportional to the gradient $\partial^\mu\phi$ of the scalar field%
\footnote{
\label{footnote:friction}
Expanding the left-hand side of eq.~\eqref{eq:fifth_force_simple_example}, one can rewrite it as
\begin{equation*}
v^\nu \nabla_\nu v^\mu = -\partial^\mu_\perp \ln C,
\end{equation*}
where $\partial^\mu_\perp = (\delta^\mu_\nu + v^\mu v_\nu) \partial^\nu$ denotes the gradient projected on the dark matter rest frame. This alternative expression has the advantage of showing that the effect of the fifth force is frame dependent. Suppose that there exists a frame in which $\phi$ is homogeneous (but not static). If a dark matter particle is at rest with respect to this homogeneity frame, then it experiences no fifth force, since $\partial^\mu_\perp\phi=0$. However, if the particle has a small velocity~$v^i$ with respect to that frame, then the spatial gradient becomes $\partial^i_\perp \phi \approx -\dot{\phi}\, v^i$, i.e. a friction force.}.
This gradient, in turn, is sourced by dark matter via $T\e{DM}$ in eq.~\eqref{eq:KG_simple_example}. Note that, contrary to gravitation, the fifth force is \emph{weaker} if it is generated by a hotter dark matter fluid.
\end{enumerate}
In cosmology, if we write~$\phi=\bar{\phi}(\eta)+\delta\phi$, where $\bar{\phi}$ is the background value of the scalar field, the dark matter equation of motion~\eqref{eq:fifth_force_simple_example} becomes
\begin{equation}\label{eq:modified_Euler_simple_example}
V' + \mathcal{H} \pac{ 1+ \frac{C_{,\phi}}{C}\frac{\bar{\phi}'}{\mathcal{H}} } V + \Psi = - \frac{C_{,\phi}}{C} \; \delta\phi .
\end{equation}
One can then use the other field equations to establish a relationship between $\delta\phi$ and $\Psi$. In the quasi-static approximation, and assuming that dark matter dominates as a source of gravitation, one finds~$\delta\phi=(C_{,\phi}/4\pi G) \Psi$. The modified Euler equation then indeed takes the same form as eq.~\eqref{eq:parametrisation_modified_Euler},
\begin{equation}
V' + \mathcal{H} \pac{ 1+ \frac{C_{,\phi}}{C}\frac{\bar{\phi}'}{\mathcal{H}} } V + \pac{ 1 + \frac{C_{,\phi}^2}{4\pi G C} }\Psi = 0.
\end{equation}
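As a minimal illustration of these expressions, consider an exponential coupling $C(\phi)=\ex{\alpha\phi}$, with $\alpha$ a constant introduced here only for the sake of the example. Then $C_{,\phi}/C=\alpha$, and the identification with eq.~\eqref{eq:parametrisation_modified_Euler} simply reads
\begin{equation}
\Theta = \frac{\alpha\bar{\phi}'}{\mathcal{H}} ,
\qquad
\Gamma = \frac{\alpha^2 C(\bar{\phi})}{4\pi G} ,
\end{equation}
so that the fifth force is controlled by the strength of the coupling, while the friction term is controlled by the speed at which the background scalar field rolls.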
\subsubsection{General case: Horndeski theories}
\label{sec:scalar-tensor_general}
In the most general formulation of scalar-tensor theories, the scalar field~$\phi$ can also be non-minimally coupled to space-time geometry, generating the broad class of Horndeski theories~\cite{Horndeski1974,Deffayet:2009wt} and beyond~\cite{Gleyzes:2014dya,Gleyzes:2014qga,Zumalacarregui:2013pma}. Besides, every matter species can in principle be conformally and disformally coupled to gravitation. The cosmological behaviour of such theories is conveniently described within the effective-field theory (EFT) approach~\cite{Gleyzes:2014rba,Gleyzes:2015pma}, where the coupling between $\phi$ and gravity is characterised by four (Horndeski) or five (beyond Horndeski) functions of time\footnote{The large freedom a priori allowed within this class of models has been significantly reduced with the recent detection of gravitational waves with an optical counterpart~\cite{TheLIGOScientific:2017qsa,Goldstein:2017mmi,Savchenko:2017ffs}, with strong implications for cosmology~\cite{Lombriser:2015sxa,Lombriser:2016yzn,Ezquiaga:2017ekz, Creminelli:2017sry, Sakstein:2017xjx, Baker:2017hug, Langlois:2017dyl}.}, while the coupling between $\phi$ and any matter species is given by two additional functions. Here we follow~\cite{Gleyzes:2015pma}, assuming that baryonic matter is universally coupled to gravity, so that one can choose to work in the associated Jordan frame. Only dark matter is then directly coupled to $\phi$, conformally and disformally.
In this context, the modified Euler's equation is found to be~\cite{Gleyzes:2015pma}\footnote{Our notation for $\Phi, \Psi$ is inverted compared to~\cite{Gleyzes:2015pma}, see eq.~\eqref{eq:metric}.}
\begin{equation}\label{eq:Euler_EFT}
V'+ \mathcal{H}(1+3\gamma\e{c}\bar{\phi}') V + \Psi = -3 \gamma\e{c} \mathcal{H} \delta\phi,
\end{equation}
which is identical to eq.~\eqref{eq:modified_Euler_simple_example}, except that $C_{,\phi}/C(\phi)$ is now replaced by $3\mathcal{H} \gamma\e{c}$, where $\gamma\e{c}(\eta)$ fully encodes the coupling between $\phi$ and dark matter. Reducing the analysis to the class of Horndeski theories, one can then perform a similar operation as in sec.~\ref{sec:scalar-tensor_simple_example}, and express~$\delta\phi$ as a function of $\Psi$. In the quasi-static limit, and assuming that the energy density of dark matter completely dominates the energy density of baryons, we find
\begin{equation}
\Gamma \define
\frac{3 \mathcal{H} \gamma\e{c}\delta\phi}{\Psi}
= \frac{\beta_\gamma(\beta_\xi+\beta_\gamma)}{1 +\beta_\xi(\beta_\xi+\beta_\gamma)},
\end{equation}
where $\beta_\gamma\propto \gamma\e{c}$, while $\beta_\xi$ is related to the coupling functions of the gravitational sector; see sec.~4 of~\cite{Gleyzes:2015pma} for further details. Calling~$\Theta\define 3\gamma\e{c}\bar{\phi}'$, we recover the phenomenological form~\eqref{eq:parametrisation_modified_Euler} of the modified Euler's equation.
\subsection{Vector-tensor theories}
\label{sec:Einstein-aether}
As a further step towards generality, this section deals with a class of vector-tensor theories, namely Einstein-\ae ther theories, with direct coupling between dark matter and \ae ther. We are going to show that, under some reasonable assumptions, the modified Euler equation takes the same form as in eq.~\eqref{eq:parametrisation_modified_Euler}.
\subsubsection{Fundamentals}
The most natural way to break Lorentz invariance in gravitation, while keeping general covariance, consists in exhuming the idea of an \ae ther, defining a preferred frame. This idea is implemented by equipping gravity with an extra vector degree of freedom~$u^\mu$, which must be thought of as the four-velocity of \ae ther. Following refs.~\cite{Jacobson:2000xp,2008arXiv0801.1547J}, we consider the action
\begin{equation}\label{eq:action_Einstein_aether}
S[g_{\mu\nu},u^\mu] = \frac{1}{16\pi G} \int \mathrm{d}^4 x \sqrt{-g}
\pac{ R + K\indices{^\mu^\nu_\rho_\sigma} \nabla_\mu u^\rho \nabla_\nu u^\sigma
+ \lambda (u^\mu u_\mu + 1) }
\end{equation}
which contains the usual Einstein-Hilbert term, but also the kinetic term for $u^\mu$, with
\begin{equation}
K_{\mu\nu\rho\sigma} \define c_1 g_{\mu\nu} g_{\rho\sigma}
+ c_2 g_{\mu\rho} g_{\nu\sigma}
+ c_3 g_{\mu\sigma} g_{\nu\rho}
- c_4 u_{\mu} u_{\nu} g_{\rho\sigma}\, ,
\end{equation}
the coefficients~$c_{1\ldots 4}$ being free parameters of the theory\footnote{We use the same convention as refs.~\cite{Will:2014kxa, Audren:2014hza}. Note the difference with refs.~\cite{Jacobson:2000xp,2012JCAP...10..057B}, where mostly-minus signature was used, and $c_4\rightarrow -c_4$. The authors of~\cite{Carroll:2004ai} chose to parameterise the Einstein-\ae ther theory by~$\beta_1, \beta_2, \beta_3$, with $c_a=-16\pi G \beta_a$, and $c_4=0$.}. These parameters are already severely constrained by experiments. On the one hand, tests in the Solar system impose $|\alpha_1|<10^{-4}$ and $|\alpha_2|<2\times 10^{-9}$, where those PPN parameters are given by the combinations~\cite{Will:2014kxa}
\begin{align}
\alpha_1 &\define -\frac{8(c_3^2+c_1c_4)}{2c_1 - c_1^2+c_3^2} \\
\alpha_2 &\define 2\alpha_1 - \frac{[2(c_1+c_3)-(c_1+c_4)][(c_1+c_4)+(c_1 + 3 c_2 + c_3)]}{(c_1+c_2+c_3)(2-c_1-c_4)}.
\end{align}
On the other hand, the recent constraints on the relative velocity of light and gravitational waves set by GW170817~\cite{TheLIGOScientific:2017qsa} and GRB170817A~\cite{TheLIGOScientific:2017qsa,Goldstein:2017mmi,Savchenko:2017ffs} impose $|\alpha\e{T}|<10^{-15}$~\cite{Baker:2017hug}, where
\begin{equation}
\alpha\e{T} \define -\frac{c_1+c_3}{1+c_1+c_3} .
\end{equation}
In the following, we will consider~$\alpha_1=\alpha_2=\alpha\e{T}=0$, which can be satisfied by setting $c_1+c_3=c_1+c_4=0$ if~$c_1\not= 0$; see however~\cite{Oost:2018tcv} for a thorough status of the current constraints on the $c_i$, including constraints from the big-bang nucleosynthesis~\cite{Carroll:2004ai}. The last term in \eqref{eq:action_Einstein_aether} is a constraint which ensures the normalisation~$u_\mu u^\mu = -1$ of the \ae ther four-velocity, while $\lambda$ is a Lagrange multiplier. This theory can be considered a low-energy limit of Ho\v{r}ava gravity~\cite{Horava:2009uw,Blas:2009qj,Blas:2010hb}. Note finally that the action~\eqref{eq:action_Einstein_aether} can be further generalised~\cite{Zlosnik:2006zu,Zhao:2007ce} by replacing the kinetic term by a general function of $K\indices{^\mu^\nu_\rho_\sigma} \nabla_\mu u^\rho \nabla_\nu u^\sigma$.
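As a simple consistency check of this choice, one can verify symbolically that $c_3=c_4=-c_1$ indeed cancels the three parameters defined above. A minimal sketch using the computer-algebra package \texttt{sympy} (the variable names are ours) reads:
\begin{verbatim}
import sympy as sp

c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

alpha1 = -8 * (c3**2 + c1*c4) / (2*c1 - c1**2 + c3**2)
alpha2 = 2*alpha1 - ((2*(c1 + c3) - (c1 + c4)) * ((c1 + c4) + (c1 + 3*c2 + c3))
                     / ((c1 + c2 + c3) * (2 - c1 - c4)))
alphaT = -(c1 + c3) / (1 + c1 + c3)

# impose c1 + c3 = c1 + c4 = 0
subs = {c3: -c1, c4: -c1}
print([sp.simplify(expr.subs(subs)) for expr in (alpha1, alpha2, alphaT)])
# expected output: [0, 0, 0]
\end{verbatim}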
An explicit violation of the equivalence principle can then be implemented by coupling dark matter particles to $u^\mu$ similarly to how charged particles couple to the electromagnetic four-potential; namely, the action of a dark matter point particle is taken to be~\cite{2012JCAP...10..057B}
\begin{equation}
S_1 = -m \int \mathrm{d}\tau \; F(\gamma),
\end{equation}
where $\gamma\define -u_\mu v^\mu$ is the relative Lorentz factor between dark matter and \ae ther, and $F$ is a function such that $F(1)=1$. Due to its similarity with the conformal coupling to a scalar field given in sec.~\ref{sec:scalar-tensor_simple_example}, we expect similar phenomena to appear: fifth force, modification of the inertial and gravitational masses, etc.
The full set of equations of motion for $g_{\mu\nu}$, $u^\mu$ and dark matter is rather lengthy; we relegate it to appendix~\ref{appendix:Einstein-aether} and focus here on the dynamics of dark matter. If $v^\mu$ denotes the four-velocity of the dark matter flow, and $\rho\e{DM}$ its energy density in the absence of coupling to \ae ther, then
\begin{align}
\nabla_\mu (\rho\e{DM} v^\mu) &= 0 \label{eq:energy_conservation_Einstein-aether}\\
v^\nu \nabla_\nu \pac{(F-\gamma F_{,\gamma})v^\mu}
&= F_{,\gamma} \omega\indices{^\mu_\nu} u^\nu
- \dot{\gamma} F_{,\gamma\gamma} v^\mu\, , \label{eq:EOM_DM_Einstein-aether}
\end{align}
where $\dot{\gamma}\define \mathrm{d}\gamma/\mathrm{d}\tau$, and $\omega_{\mu\nu}\define \partial_\mu u_\nu - \partial_\nu u_\mu$. As in the scalar-tensor case, eq.~\eqref{eq:energy_conservation_Einstein-aether} tells us that the bare energy of dark matter is conserved. The phenomenology of eq.~\eqref{eq:EOM_DM_Einstein-aether} is richer. On the one hand, the inertial mass is modified by a factor $(F-\gamma F_{,\gamma})$. On the other hand, dark matter experiences two kinds of additional forces. The second term on the right-hand side is a kind of friction, proportional to the relative acceleration between dark matter and \ae ther. The first term is reminiscent of the Lorentz force, $\omega_{\mu\nu}$ being analogous to the field strength of electrodynamics. A more kinetic interpretation consists in viewing $\omega_{\mu\nu}u^\nu$ as an inertial force, containing both dragging-like and Coriolis-like effects. This rich phenomenology turns out to simplify considerably in the context of linear cosmological perturbations.
\subsubsection{Cosmological aspects}
In strictly homogeneous and isotropic cosmology, \ae ther has to be comoving with matter in order to preserve the symmetries of the FLRW space-time. Thus~$\gamma=-u^\mu v_\mu = 1$ and everything goes as if dark matter and \ae ther were uncoupled. The expansion dynamics is nonetheless affected, due to the stress-energy of \ae ther itself. This stress-energy tensor turns out to be directly related to the extrinsic curvature of the homogeneity hypersurfaces, and the net effect is to multiply both $H^2$ and $a^{-1}\mathrm{d}^2 a/\mathrm{d} t^2$ in the Friedmann equations\footnote{The effect of \ae ther on the dynamics of cosmic expansion was first considered in~\cite{Carroll:2004ai}, for $c_4=0$ and no cosmological constant. The authors chose to interpret the factor $1-(c_1+3c_2+c_4)/2$ as a renormalisation of Newton's constant and spatial curvature. Had they considered $\Lambda\not=0$, they would have concluded that the cosmological constant had to be renormalised as well.} by~$1-(c_1+3c_2+c_4)/2$.
At the level of perturbations, things are more involved. Restricting to scalar modes as we did in sec.~\ref{sec:scalar-tensor}, the modified Euler equation of dark matter reads
\begin{equation}\label{eq:modified_Euler_Einstein-aether}
V' + \mathcal{H} V + \Psi = Y \pac{ V'-U' + \mathcal{H}(V-U) },
\end{equation}
where $U$ is the velocity potential associated with $u^i$, and where we used the notation of~\cite{2012JCAP...10..057B}, $Y\define F_{,\gamma}(1)$, for the effective coupling constant between dark matter and \ae ther. The first constraints on $Y$ were obtained in~\cite{Audren:2014hza}, namely $Y<3\%$ from the combination of CMB and BAO data. More recently, a method based on galactic dynamics was proposed in~\cite{2017JCAP...05..024B}. Note that, as $A_V\define V'+\mathcal{H} V+\Psi$ represents the 4-acceleration potential of dark matter, eq.~\eqref{eq:modified_Euler_Einstein-aether} can be written as~$A_V=Y(A_V-A_U)$, which shows that the fifth force is related to the relative acceleration of the two flows.
When the above is combined with the dynamics of \ae ther and gravitation (see appendix~\ref{appendix:Einstein-aether}), it can be shown that, in Fourier space, $V-U = h(a,k) V$, with
\begin{equation}
h(a,k) \define \frac{c_2(2-3 c_2) a k^2 + 9 c_2^2 \Omega\e{m0} H_0^2}
{c_2 (2-3 c_2) a k^2 - \Omega\e{m0} H_0^2 \pac{(2-3 c_2)Y+3c_2(1-c_2/2)}}\, ,
\end{equation}
and where we introduced~$\Omega\e{m0}\define 8\pi G\rho_0/(3H_0^2)$, with $\rho_0$ the uncoupled dark matter density today. We can then substitute the relative acceleration of the two fluids, $A_V-A_U = h(V'+\mathcal{H} V)+h'V$, and we obtain
\begin{equation}
V' + \mathcal{H} \pa{1 - \frac{Y\mathcal{H}^{-1}h'}{1-Y h}} V + \pa{ 1 + \frac{Y h}{1-Y h}}\Psi = 0\, ,
\end{equation}
which has the same form as eq.~\eqref{eq:parametrisation_modified_Euler}, except that the functions encoding departures from Euler are now scale dependent (through $h$). However, for sub-Hubble modes~$k \gg \mathcal{H}$, $h=1+\mathcal{O}(\mathcal{H}^2/k^2)$, and we are left with
\begin{equation}
V' + \mathcal{H} V + \pa{ 1+\frac{Y}{1-Y} } \Psi = 0\, ,
\end{equation}
up to corrections of second order in $\mathcal{H}/k$. In this limit, the comparison with eq.~\eqref{eq:parametrisation_modified_Euler} yields $\Theta=0$ and $\Gamma=Y/(1-Y)$.
\subsection{Discussion}
\label{sec:parametrisation}
The two theoretical cases investigated in secs.~\ref{sec:scalar-tensor} and \ref{sec:Einstein-aether} produce precisely the kind of deviations proposed in eq.~\eqref{eq:parametrisation_modified_Euler}. One could then wonder how general such a parametrisation is. After all, the fact that we only consider linear scalar perturbations should restrict the possible modifications of Euler's equation. An apparently reasonable guess for the latter would be $V'=L(V,\Phi,\Psi,\delta,\psi_i)$, where $\psi_i$ represents the extra degrees of freedom ($\phi$ in the scalar-tensor case, $U$ in the Einstein-\ae ther case, etc.) and where $L$ is linear with respect to its arguments. One would then use the other equations of motion to eliminate~$\Phi, \delta, \psi_i$, so that $V'=A V+ B\Psi$, which strongly resembles~\eqref{eq:parametrisation_modified_Euler}.
One could, however, imagine extensions of this parametrisation. First of all, $L$ could in principle depend on the time derivative of its arguments. Even in the quasi-static approximation, it is not obvious that all those derivatives could be neglected, in particular if they are combined with spatial derivatives. This leads to the second point, which is that the coefficients $A$ and $B$ in $V'=A V+ B\Psi$ could not only be time-dependent, but also generically scale-dependent. In our forecasts we will not consider this possibility, but it could lead to stronger constraints by modifying not only the amplitude of the relativistic correlation function, but also its shape.
Finally, let us note that if $\Theta\ll\Gamma$, then $\Gamma$ essentially coincides with the E\"otv\"os ratio $2(a\e{DM}-a\e{B})/(a\e{DM}+a\e{B})$ between dark and baryonic matter. This is automatically the case in the Einstein-\ae ther model, since~$\Theta=0$. In the general scalar-tensor scenario described by EFT, one has
\begin{equation}
\frac{\Theta}{\Gamma}
= \frac{3\gamma\e{c}\bar{\phi}'}{\Gamma}
\end{equation}
which is small if $\bar{\phi}$ evolves very slowly, but could in principle be of order unity. Physically speaking, it corresponds to the situation where the effect of the running of the dark matter mass is smaller than the effect of the fifth force.
\section{Testing Euler's equation with galaxy surveys}
\label{sec:dipole}
In the previous section, we motivated the general parametrisation~\eqref{eq:parametrisation_modified_Euler} of deviations of the dark matter flow from Euler's equation. This section deals with the measurability of such deviations: we now explain how the corresponding signal can be extracted from relativistic effects in galaxy surveys.
\subsection{What galaxy surveys really measure}
Galaxy surveys attempt to trace the distribution of matter in the Universe from the number density of galaxies. The main observable is therefore the number~$N$ of galaxies per unit of observed volume, i.e. per pixel of the sky (subtended by a solid angle~$\Omega$) and per redshift bin~$\Delta z$ (see fig.~\ref{fig:number_counts}). The inhomogeneity of the distribution of galaxies is then quantified by
\begin{equation}
\Delta \define \frac{N - \bar{N}}{\bar{N}},
\end{equation}
where $\bar{N}$ is the average of $N$ over all the pixels, i.e. the total number of observed galaxies divided by the volume of the survey.
\begin{figure}[h!]
\centering
\input{number_counts.pdf_tex}
\caption{Galaxy number count~$N(z,\vect{n})$, observed in a pixel~$\Omega$ of the sky about the direction~$\vect{n}$, in a redshift bin~$\Delta z$ about $z$.}
\label{fig:number_counts}
\end{figure}
The observable~$\Delta(z,\vect{n})$ does not only contain information about the matter density contrast around the point $(z,\vect{n})$, but also about the relation between the observed pixel~$(\Delta z, \Omega)$ and the corresponding physical volume. Indeed, the gravitational effects of matter inhomogeneities affect the propagation of light, and its frequency. As a consequence, a given pixel~$(\Delta z, \Omega)$ can, first, be physically larger or smaller than its counterpart in a strictly homogeneous Universe, and, second, lie physically closer to or further away from the observer than it would in a homogeneous Universe. These contributions to $\Delta$ have been calculated in~\cite{Yoo:2009au,Bonvin:2011bg, Challinor:2011bk} at linear order in scalar cosmological perturbations. The result is conveniently written as
\begin{equation}
\Delta = \Delta\h{st} + \Delta\h{rel} + \Delta\h{lens}\,,
\end{equation}
where $\Delta\h{st}$ is the standard expression of $\Delta$, used in all galaxy surveys, and which already accounts for the effect of biased tracers and redshift-space distortions; $\Delta\h{rel}$ contains the so-called relativistic effects, which are corrections to the redshift (Doppler and Einstein effects, integrated Sachs-Wolfe effect, \ldots), and hence ensure the correct estimation of the physical depth and width of redshift bins; and finally $\Delta\h{lens}$ denotes the contribution of gravitational lensing, which translates the observed angular size~$\Omega$ of a pixel into physical areas. Their expressions are
\begin{align}
\Delta\h{st} &= b \delta - \frac{1}{\mathcal{H}} \, \partial_r(\vect{V}\cdot\vect{n})\, , \label{eq:Delta_standard}\\
\Delta\h{rel}&=\frac{1}{\mathcal{H}}\partial_r\Psi+\left(1-5s+\frac{5s-2}{r\mathcal{H}}-\frac{\mathcal{H}'}{\mathcal{H}^2} \right) \mathbf{V}\cdot\mathbf{n}+\frac{1}{\mathcal{H}}\mathbf{V}'\cdot\mathbf{n}\, ,\label{eq:Delta_relativistic}\\
\Delta\h{lens}&=(5s-2)\int_0^r \mathrm{d} r' \; \frac{r-r'}{2rr'}\Delta_\Omega(\Phi+\Psi)\, .\label{eq:Delta_lens}
\end{align}
Here $b$ denotes the linear bias of the matter tracer (typically galaxies), and $r$ is the comoving radial coordinate in the direction of~$\vect{n}$; in eq.~\eqref{eq:Delta_relativistic} and~\eqref{eq:Delta_lens}, $s$ denotes the slope of the luminosity function which parameterises the magnification bias and $\Delta_\Omega$ is the transverse Laplacian. Note that, in eq.~\eqref{eq:Delta_relativistic}, we dropped the terms of $\Delta\h{rel}$ involving the gravitational potentials $\Psi$ and $\Phi$ without spatial derivatives, because their contribution to the dipole presented in the next subsection is suppressed by $(\mathcal{H}/k)^2$ with respect to the other terms.
Let us further focus on the relativistic correction~$\Delta\h{rel}$ given by eq.~\eqref{eq:Delta_relativistic}. The first term denotes the contribution from gravitational redshift, while the other terms are Doppler effects. The fact that gravitational redshift depends directly on $\Psi$ allows us to test Euler's equation in a model-independent way. We notice that three of its terms [namely $\mathcal{H}^{-1}\partial_r\Psi$, $\mathcal{H}^{-1}\mathbf{V}'\cdot\mathbf{n}$, and the part of the $\mathbf{V}\cdot\mathbf{n}$ term with unit coefficient] exactly cancel if Euler's equation is satisfied. This turns out to be a direct consequence of Einstein's equivalence principle---we refer the interested reader to appendix~\ref{app:equivalence} for more details about this connection. Thus, relativistic effects in galaxy surveys are an ideal laboratory to look for violations of the equivalence principle. If Euler's equation is violated according to our proposition~\eqref{eq:parametrisation_modified_Euler}, then eq.~\eqref{eq:Delta_relativistic} becomes
\begin{equation}
\label{Delta_mod}
\Delta\h{rel}(\mathbf{n}, z)
=
\pa{\frac{\Gamma-\Theta}{1+\Gamma}-5s+\frac{5s-2}{r\mathcal{H}}-\frac{\mathcal{H}'}{\mathcal{H}^2} } \mathbf{V}\cdot\mathbf{n}+\frac{\Gamma}{\mathcal{H}(1+\Gamma)}\,\mathbf{V}'\cdot\mathbf{n}\, ,
\end{equation}
where the first term in the brackets and the $\mathbf{V}'\cdot\mathbf{n}$ term vanish for $\Gamma=\Theta=0$. In the remainder of this section, we show how to extract $\Delta\h{rel}$ from galaxy surveys.
\subsection{Dipolar correlations}
\label{sec:aymm}
The simplest way of extracting cosmological information from galaxy surveys consists in using the two-point correlation function of $\Delta$,
\begin{equation}
\xi(z_1,\vect{n}_1;z_2,\vect{n}_2) \define \ev{\Delta(z_1,\vect{n}_1) \Delta(z_2,\vect{n}_2)}.
\end{equation}
Due to statistical homogeneity and isotropy, only three out of the six variables $(z_1,\vect{n}_1,z_2,\vect{n}_2)$ are necessary to parameterise $\xi$, because only the relative position of the two pixels matters. A convenient parametrisation, depicted in fig.~\ref{fig:parametrisation_correlation}, consists in locating, for example, pixel 2 relative to pixel 1 by their mutual distance~$d$ and the angle~$\sigma$ between the axis $(12)$ and the mean line-of-sight~$\vect{n}$. We thus write~$\xi(d,\sigma,z)$, where $z$ is the redshift of the pixel~$1$.
\begin{figure}[h!]
\centering
\input{correlation.pdf_tex}
\caption{Relative localisation of two pixels, by their comoving distance~$d$ and the angle~$\sigma$ between the axis $(12)$ and the line of sight $\vect{n}_1$. Note that we used Euclidean geometry on that drawing because the effects of cosmological perturbations are already taken into account in the terms $\Delta\e{rel}$ and $\Delta\e{lens}$ of $\Delta$.}
\label{fig:parametrisation_correlation}
\end{figure}
A key advantage of this parametrisation is that the various contributions to $\xi(d,\sigma,z)$ depend differently on the angle~$\sigma$. For example, it is well known that redshift-space distortions yield a quadrupole and hexadecapole in terms of~$\sigma$~\cite{Kaiser:1987qv,Hamilton:1997zq}. \emph{It turns out that relativistic effects add a dipole and an octupole to this picture}~\cite{Bonvin:2013ogt,Croft:2013taa}, which makes them identifiable in the data\footnote{Note that in the power spectrum, relativistic effects are identifiable by the fact that they generate an imaginary part~\cite{McDonald:2009ud,Yoo:2012se}.}. Let us briefly summarise why such a dipolar structure appears (see~\cite{Bonvin:2013ogt,Croft:2013taa} for more detail).
As an example, we focus on the correlation between the first term in eq.~\eqref{eq:Delta_standard} (density) and the first term of eq.~\eqref{eq:Delta_relativistic} (gravitational redshift),
\begin{equation}\label{eq:expansion_correlation}
\xi^{b\delta\times\partial_r\Psi}(\vect{x}_1,\vect{x}_2)
= \ev{b\delta(\vect{x}_1)\partial_r\Psi(\vect{x}_2)}+\ev{\partial_r\Psi(\vect{x}_1)b\delta(\vect{x}_2)}\, .
\end{equation}
Each of these terms is antisymmetric with respect to the exchange of $\vect{x}_1,\vect{x}_2$, which means that they have a dipolar component in terms of the angle~$\sigma$ depicted in fig.~\ref{fig:parametrisation_correlation}. Indeed, suppose that there is an over-density at $\vect{x}_1$ [$\delta(\vect{x}_1)>0$]. This generates a potential well around~$\vect{x}_1$, with $\vect{\nabla}\Psi$ directed outwards. Thus, if $\vect{x}_2$ is located behind~$\vect{x}_1$ along the line of sight ($\sigma=0$), then $\delta(\vect{x}_1)\partial_r\Psi(\vect{x}_2)>0$; conversely, if $\vect{x}_2$ is located between $\vect{x}_1$ and the observer ($\sigma=\pi$), then $\delta(\vect{x}_1)\partial_r\Psi(\vect{x}_2)<0$. The same reasoning applies if there is an under-density at $\vect{x}_1$.
This antisymmetry of gravitational redshifts can be understood as in fig.~\ref{fig:asymmetric} (see also fig. 2 of~\cite{Bonvin:2013ogt}). Consider two redshift bins~$\Delta z$ located respectively in front of, and behind, an over-density. The closer a galaxy is to the over-density, the stronger its gravitational redshift. This distorts the iso-$z$ surfaces in real space, as gravitational redshift mimics the effect of cosmic expansion: the stronger the gravitational field experienced by a galaxy, the closer to the observer it needs to be in order to have a given redshift. Hence, the physical thickness of the bin in front of the over-density is effectively squeezed, so it potentially contains fewer galaxies, which reduces~$\Delta$, and conversely for the bin located behind the over-density, whence the dipole. The same kind of reasoning can be made for the effect of velocity and acceleration.
In eq.~\eqref{eq:expansion_correlation}, $\ev{b\delta(\vect{x}_1)\partial_r\Psi(\vect{x}_2)}$ is accompanied by $\ev{\partial_r\Psi(\vect{x}_1)b\delta(\vect{x}_2)}$. It is then not hard to see that the dipoles associated with each term exactly compensate, because exchanging~$\vect{x}_1$ and $\vect{x}_2$ corresponds to changing $\sigma$ into $\sigma+\pi$. However, this cancellation does not happen if one takes the cross-correlation between \emph{two types of tracers, with different biases}. For instance, correlating bright~(B) and faint~(F) galaxies with respective biases~$b\e{B}, b\e{F}$, eq.~\eqref{eq:expansion_correlation} becomes
\begin{equation}
\xi\h{cross}\e{BF}(\vect{x}_1,\vect{x}_2)
= \ev{b\e{B}\delta(\vect{x}_1)\partial_r\Psi(\vect{x}_2)}+\ev{\partial_r\Psi(\vect{x}_1)b\e{F}\delta(\vect{x}_2)}\, .
\end{equation}
Both terms still contain opposite dipoles, but the former is proportional to~$b\e{B}$, while the latter is proportional to $b\e{F}$. There is, therefore, a net dipole in $\xi\e{BF}\h{cross}$ proportional to $b\e{B}-b\e{F}$.
\begin{figure}[h!]
\centering
\input{asymmetric.pdf_tex}
\caption{Effect of the gravitational potential created by an over-density (centre of the black halo) on the physical size of redshift bins located in front of it or behind it. Solid lines correspond to iso-$z$ surfaces, while dotted lines are iso-$r$ surfaces (which would coincide with $z=\mathrm{cst}$ in the FLRW background). A point closer to the over-density experiences a stronger gravitational field, which enhances its redshift; in order to have the same redshift as a point that experiences a weaker gravitational field, it has to be closer to the observer. The net effect is to squeeze (resp. stretch) the redshift bins in front of (resp. behind) the over-density.}
\label{fig:asymmetric}
\end{figure}
\subsection{Extracting the dipole}
Because the standard contributions to the cross-correlation function do not contain any dipolar component in the flat-sky approximation, relativistic effects can be extracted by integrating the latter with a suitable antisymmetric kernel. This subsection summarises the construction of an estimator of the dipole (see~\cite{Bonvin:2015kuc} for more detail).
\subsubsection{Set-up and conventions}
To calculate the multipoles of the correlation function, we write $\Delta$ in Fourier space, for which we use the convention\footnote{$\Delta$ is measured as a function of $z$, but it can now be re-written in terms of conformal time~$\eta$, since the error that is made when one goes from $z$ to $\eta$ using the background relationship has been consistently included in our derivation of $\Delta$. In the following, we will use $z$ and $\eta$ interchangeably.}
\begin{equation}
\Delta(\bk, \eta) = \int \mathrm{d}^3 x \; \ex{\i \bk\cdot\mathbf{x}}\Delta(\mathbf{x}, \eta) \, ;
\qquad
\Delta(\mathbf{x}, \eta) = \int \frac{\mathrm{d}^3 k}{(2\pi)^3} \; \ex{-\i \bk\cdot\mathbf{x}}\Delta(\bk, \eta) \, .
\end{equation}
We define the velocity potential $\hat V$ in Fourier space through $\mathbf{V}(\bk,\eta)\define \i(\vect{k}/k)\hat V(\bk,\eta)$, so that $\hat V(\bk, \eta)$ has the same dimensions as $\delta(\bk, \eta)$, i.e. [length]$^3$. Note that $\hat V(\bk, \eta)$ is then related to the Fourier transform of $V(\mathbf{x}, \eta)$ by a factor $-k$. We assume that the continuity equation is valid, as discussed in sec.~\ref{sec:theory}. For sub-Hubble modes ($k\gg\mathcal{H}$),
\begin{equation}
\label{continuity}
\hat V(\bk, \eta)=-\frac{1}{k}\,\delta'(\bk,\eta)\, .
\end{equation}
This allows us to write $\Delta(\bk, \eta)$ as a function of $\delta(\bk, \eta)$ only, in particular
\begin{align}
\Delta^{\rm st}(\bk, z) &=\pac{ b(z)+f(z)(\hat\bk\cdot\mathbf{n})^2 } \delta(\bk, \eta)\ ,\label{Deltast_k}\\
\Delta^{\rm rel}(\bk, z) &=\i\hat\bk\cdot\mathbf{n} \frac{\mathcal{H}}{k}
\bigg[
\pa{ 5s+\frac{2-5s}{r\mathcal{H}}+\frac{\mathcal{H}'}{\mathcal{H}^2}} f + \Upsilon(z)
\bigg] \delta(\bk, \eta)\, ,\label{Deltarel_k}
\end{align}
where
\begin{equation}\label{eq:eps}
\Upsilon(z) \define \frac{\Theta-\Gamma}{1+\Gamma} \, f - \frac{\Gamma}{1+\Gamma} \pa{ \frac{\mathcal{H}'}{\mathcal{H}^2}f+f^2+\frac{f'}{\mathcal{H}} }
\end{equation}
encodes the deviations from Euler's equation. Here we have assumed that the growth of density perturbations, $D$, defined through $\delta(\bk, z)=[D(z)/D_0]\delta(\bk,0),$ is scale-independent. This is true in the quasi-static limit of the models described in sec.~\ref{sec:theory}. In general, deviations from GR can introduce a scale-dependence in $D$, but we will not consider this possibility in the following. The growth rate~$f$, defined through
\begin{equation}
f(z)=\frac{\mathrm{d}\ln D(a)}{\mathrm{d}\ln (a)}
\end{equation}
is therefore scale-independent as well, and so is $\Upsilon(z)$ defined in~\eqref{eq:eps}.
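For orientation, the following Python sketch evaluates $\Upsilon(z)$ of eq.~\eqref{eq:eps} for constant values of $\Theta$ and $\Gamma$, using a flat $\Lambda$CDM background and the common approximation $f\simeq\Omega_{\rm m}^{0.55}$ for the growth rate (both are assumptions made only for this illustration); $\mathcal{H}'/\mathcal{H}^2$ and $f'/\mathcal{H}$ are obtained as logarithmic derivatives with respect to the scale factor.
\begin{verbatim}
import numpy as np

Om0 = 0.274                       # assumed flat LCDM background, H0 = 1

def Om(a):                        # Omega_m(a)
    return Om0 / (Om0 + (1.0 - Om0) * a**3)

def f(a):                         # growth rate, f ~ Omega_m(a)^0.55 (approximation)
    return Om(a)**0.55

def cH(a):                        # conformal Hubble rate, cal H = a H(a)
    return a * np.sqrt(Om0 / a**3 + 1.0 - Om0)

def dlnH_dlna(a, eps=1e-4):       # = cal H' / cal H^2
    return (np.log(cH(a*(1+eps))) - np.log(cH(a*(1-eps)))) / (2*eps)

def df_dlna(a, eps=1e-4):         # = f' / cal H
    return (f(a*(1+eps)) - f(a*(1-eps))) / (2*eps)

def Upsilon(a, Theta, Gamma):
    bracket = f(a) * dlnH_dlna(a) + f(a)**2 + df_dlna(a)
    return (Theta - Gamma) / (1 + Gamma) * f(a) - Gamma / (1 + Gamma) * bracket

for z in (0.15, 0.45, 1.0):
    a = 1.0 / (1.0 + z)
    print(z, Upsilon(a, Theta=0.0, Gamma=0.5))
\end{verbatim}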
\subsubsection{Expression of the cross-correlation function}
In the cross-correlation of two populations of galaxies, bright~$\textrm{B}$ and faint~$\textrm{F}$, the only quantities which depend on the galaxy population in eqs.~\eqref{Deltast_k} and \eqref{Deltarel_k} are the biases $b_\textrm{B}$ and $b_\textrm{F}$ and the slopes $s_\textrm{B}$ and $s_\textrm{F}$. Following~\cite{Bonvin:2013ogt} we can write, in the flat-sky limit,
\begin{equation}
\label{xiBFall}
\xi\e{BF}(d, \sigma, \bar z)
= \xi\e{BF}\h{st}(d,\sigma, \bar z)+\xi\e{BF}\h{rel}(d, \sigma, \bar z)+\xi\e{BF}\h{wide}(d, \sigma, \bar z)
+\xi\e{BF}\h{evol}(d,\sigma, \bar z)+\xi\e{BF}\h{lens}(d,\sigma, \bar z)\, ,
\end{equation}
where $\bar z$ denotes the mean redshift of the survey (or of the redshift bin of interest), and we remind the reader that $d$ is the separation between the galaxies, while $\sigma$ denotes the orientation of the pair of galaxies with respect to the mean direction of observation $\mathbf{n}$.
In eq.~\eqref{xiBFall}, $\xi\e{BF}\h{st}$ is the standard correlation function containing a monopole, quadrupole and hexadecapole in $\sigma$~\cite{Hamilton:1997zq,Kaiser:1987qv}. The relativistic contribution $\xi\e{BF}\h{rel}$ is given by
\begin{align}
\xi\e{BF}\h{rel} &= \frac{\mathcal{H}}{\mathcal{H}_0} \pa{\frac{D}{D_0}}^2
\Bigg[(b_\textrm{B}-b_\textrm{F}) \, \pa{ \frac{2}{r\mathcal{H}} + \frac{\mathcal{H}'}{\mathcal{H}^2} + \Upsilon(z) }
+ 3 (s_\textrm{F}-s_\textrm{B}) f^2 \pa{ 1-\frac{1}{r\mathcal{H}} } \nonumber\\
&\hspace*{5cm}
+ 5 (b_\textrm{B} s_\textrm{F}-b_\textrm{F} s_\textrm{B}) f \pa{ 1-\frac{1}{r\mathcal{H}}}
\Bigg] \nu\h{rel}_1(d) \, P_1(\cos\sigma)\nonumber\\
&\quad
+ 2\,\frac{\mathcal{H}}{\mathcal{H}_0}\pa{\frac{D}{D_0} }^2
(s_\textrm{B}-s_\textrm{F}) f \pa{1-\frac{1}{r\mathcal{H}}} \nu\h{rel}_3(d) \, P_3(\cos\sigma)\, , \label{xirel}
\end{align}
where $P_\ell$ denotes the Legendre polynomial of degree $\ell$, and
\be
\label{nurel}
\nu\h{rel}_\ell(d)\equiv\frac{1}{2\pi^2}\int \mathrm{d} k \; k \mathcal{H}_0 P_\delta(k)j_\ell(kd),
\qquad\ell=1,3\, .
\ee
Here $P_\delta(k)$ is the matter power spectrum today, $\ev{\delta(\bk)\delta(\bk')}=(2\pi)^3 P_\delta(k)\delta\e{D}(\bk+\bk')$, while $j_\ell$ denotes the spherical Bessel function of degree $\ell$.
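In practice, the functions $\nu\h{rel}_\ell(d)$ are straightforward to tabulate once the linear power spectrum is known. A minimal Python sketch, assuming a tabulated $P_\delta(k)$ at $z=0$ stored in a two-column text file \texttt{pk.dat} (a hypothetical file name; the spectrum would typically come from a Boltzmann code such as CLASS or CAMB), with $k$ in $h/$Mpc and $P_\delta$ in $({\rm Mpc}/h)^3$, reads:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

k, Pk = np.loadtxt("pk.dat", unpack=True)    # hypothetical input file

H0 = 1.0 / 2997.9       # H0 in h/Mpc (i.e. 1/(2997.9 Mpc/h)), matching the units of k

def nu_rel(ell, d):
    # nu^rel_ell(d) = 1/(2 pi^2) * int dk  k H0 P(k) j_ell(k d)
    integrand = k * H0 * Pk * spherical_jn(ell, k * d)
    return trapezoid(integrand, k) / (2.0 * np.pi**2)

for d in (10.0, 20.0, 50.0, 100.0):          # separations in Mpc/h
    print(d, nu_rel(1, d))
\end{verbatim}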
The relativistic correlation comes from the correlation between the standard term~\eqref{eq:Delta_standard} and the relativistic term~\eqref{eq:Delta_relativistic} and, as anticipated in sec.~\ref{sec:aymm}, it is completely anti-symmetric, because it consists of a dipole and an octupole in $\sigma$. The correlation of the relativistic contribution with itself also contributes to $\xi\e{BF}\h{rel}$, but we neglect it here, because it only consists of a monopole and a quadrupole, eliminated in the dipole extraction performed in the next paragraph.
The third contribution to eq.~\eqref{xiBFall}, $\xi\e{BF}\h{wide}$, represents the wide-angle corrections to the flat-sky correlation function. As discussed in refs.~\cite{Bonvin:2013ogt, Bonvin:2015kuc} the wide-angle corrections from density and redshift-space distortions also generate a dipole and an octupole in the correlation function. As such, they contaminate the measurement of the relativistic contribution. The wide-angle effects are minimised by using the angle $\sigma$ to extract the dipole, i.e. the angle between the direction of the median of the pair and the direction of observation---see~\cite{Bonvin:2015kuc, Reimberg:2015jma} for a detailed discussion about other possible angles. This contribution is given by
\begin{equation}
\label{xiwide}
\xi\e{BF}\h{wide}=-\frac{2f}{5}(b\e{B}-b\e{F})\,\frac{d}{r}\,\nu\h{st}_2(d)\big[P_1({\cos\sigma})-P_3(\cos\sigma)\big]\, ,
\end{equation}
with
\be
\label{nust}
\nu\h{st}_\ell(d) \equiv \frac{1}{2\pi^2}\int \mathrm{d} k \; k^2 P_\delta(k)j_\ell(kd),
\qquad\ell=0,2,4\, .
\ee
Comparing eq.~\eqref{xiwide} with eq.~\eqref{xirel}, we see that $\xi\e{BF}\h{wide}$ is suppressed by $d/r\ll 1$ with respect to $\xi\e{BF}\h{rel}$. However $\nu\h{st}_\ell$ is enhanced by a factor $k/\mathcal{H}\gg 1$ with respect to $\nu\h{rel}_\ell$. Since $d/r\sim \mathcal{H}/k$ these two effects compensate and the wide-angle correction becomes of the same order of magnitude as the relativistic contribution (see also fig. 8 of~\cite{Bonvin:2013ogt}).
The fourth term in eq.~\eqref{xiBFall}, $\xi\e{BF}\h{evol}$, denotes the evolution corrections. These are due to the fact that the biases, growth rate, and slopes of the luminosity function evolve with redshift, and consequently also give rise to a dipole and an octupole. However, as shown in fig.~11 of~\cite{Bonvin:2013ogt}, these evolution corrections are always sub-dominant with respect to both the relativistic contribution and the wide-angle corrections. We can therefore safely neglect them.
Finally, the last term in eq.~\eqref{xiBFall}, $\xi\e{BF}\h{lens}$, represents the lensing contribution. Again, the lensing contribution to the dipole and octupole is significantly smaller than the relativistic and wide-angle contributions, and hence we neglect it---see fig.~8 of~\cite{Bonvin:2013ogt}.
\subsubsection{Estimator of the dipole}
From eqs.~\eqref{xiBFall} and~\eqref{xirel} we see that the optimal way to test Euler's equation is to extract the dipolar modulation in the two-point correlation function. First, this allows us to completely get rid of the standard correlation function $\xi\e{BF}\h{st}$ (which is insensitive to the parameters $\Theta$ and $\Gamma$ that we want to constrain). And second, it strongly reduces the impact of the evolution and lensing contributions which are negligible in the dipole. Hence the only two relevant contributions to the dipole are the wide-angle contribution and the relativistic contribution. As shown in eq.~\eqref{xiwide} the wide-angle contribution is insensitive to the parameters $\Theta$ and $\Gamma$. As such this contribution is a contamination that we would like to remove. As discussed in~\cite{Bonvin:2013ogt}, this can be done observationally by removing the quadrupole of the bright and faint populations from the dipole, i.e. by constructing the following estimator
\begin{align}
\hat{\xi}^{1}\e{BF}=&\frac{3}{8\pi}\left(\frac{\ell\e{p}}{d}\right)^2\frac{\mathcal{V}}{\ell\e{p}^3}\sum_{ij}
\big[\Delta_{\textrm{B}}(\mathbf{x}_i)\Delta_{\textrm{F}}(\mathbf{x}_j)-\Delta_{\textrm{F}}(\mathbf{x}_i)\Delta_{\textrm{B}}(\mathbf{x}_j)\big]P_1(\cos\sigma_{ij})\delta\e{K}(d_{ij}-d)\label{estimator}\\
&+\frac{3}{10}\frac{d}{r}\frac{5}{4\pi}\left(\frac{\ell\e{p}}{d}\right)^2\frac{\mathcal{V}}{\ell\e{p}^3}\sum_{ij}
\big[\Delta_{\textrm{B}}(\mathbf{x}_i)\Delta_{\textrm{B}}(\mathbf{x}_j)-\Delta_{\textrm{F}}(\mathbf{x}_i)\Delta_{\textrm{F}}(\mathbf{x}_j)\big]P_2(\cos\sigma_{ij})\delta\e{K}(d_{ij}-d)\nonumber\, ,
\end{align}
where $\mathcal{V}$ denotes the volume of the survey (or of the redshift bin of interest), $\ell\e{p}$ is the length of the cubic pixel in which $\Delta$ is measured, the sum runs over all pairs of pixels $i, j$ in the survey and $\delta\e{K}$ is the Kronecker delta function. The first line in~\eqref{estimator} isolates the dipole contribution in the cross-correlation between bright and faint galaxies (the minus sign ensures that the correlation does not vanish under the exchange of bright and faint galaxies). The second line removes the wide-angle effect. Taking the mean of the estimator and going to the continuous limit\footnote{The continuous limit is obtained by replacing the sum over pixels by a 3-dimensional integral $\sum_i\rightarrow\frac{1}{\ell\e{p}^3}\int \mathrm{d}^3 x$ and the Kronecker delta function by a Dirac delta function $\delta\e{K}(d_{ij}-d)\rightarrow \ell\e{p}\delta\e{D}(|\mathbf{x}-\mathbf{y}|-d)$.} we find indeed
\begin{align}
\ev{\hat{\xi}^{1}\e{BF}}
=& \frac{3}{2}\int_{-1}^1\mathrm{d}\mu \; \pac{\xi\e{BF}\h{rel}(d, \mu, \bar z) + \xi\e{BF}\h{wide}(d, \mu, \bar z)} P_1(\mu)
\nonumber\\
&\quad + \frac{3}{10} \frac{d}{r} \frac{5}{2} \int_{-1}^1 \mathrm{d}\mu
\pac{ \xi\e{BB}\h{st}(d, \mu, \bar z)-\xi\e{FF}\h{st}(d, \mu, \bar z)} P_2(\mu)
\nonumber\\
=& \frac{\mathcal{H}}{\mathcal{H}_0} \pa{\frac{D}{D_0}}^2
\Bigg[ (b_\textrm{B}-b_\textrm{F}) \pa{ \frac{2}{r\mathcal{H}} + \frac{\mathcal{H}'}{\mathcal{H}^2} + \Upsilon(z) }
\nonumber\\
&\quad + 3 (s_\textrm{F}-s_\textrm{B})f^2 \pa{ 1-\frac{1}{r\mathcal{H}}} + 5 (b_\textrm{B} s_\textrm{F}-b_\textrm{F} s_\textrm{B}) f \pa{1-\frac{1}{r\mathcal{H}}}
\Bigg]\nu\h{rel}_1(d)\, , \label{estimator_final}
\end{align}
with $\mu\define\cos\sigma$. Note that eq.~\eqref{estimator_final} is valid in the linear regime. An equivalent estimator has been constructed to measure gravitational redshift in clusters~\cite{Wojtak:2011ia,Sadeh:2014rya} and in the non-linear regime of large-scale structure~\cite{Alam:2017izi}. The modelling and interpretation of the signal in the non-linear regime is however much more complicated~\cite{Zhao:2012st,Kaiser:2013ipa,Cai:2016ors} and its use to test Euler's equation is not straightforward.
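For concreteness, a schematic (and deliberately brute-force) implementation of the estimator~\eqref{estimator} on a small pixelised box could look as follows; the line of sight is taken along the $z$ axis of the grid (flat-sky approximation), the Kronecker delta is replaced by a separation bin of width~$\ell\e{p}$, and the function and variable names are ours. A realistic analysis would of course rely on optimised pair-counting codes rather than on explicit Python loops.
\begin{verbatim}
import numpy as np
from itertools import product

def dipole_estimator(Delta_B, Delta_F, d, ell_p, r_mean):
    # Schematic version of eq. (estimator): Delta_B, Delta_F are 3d arrays of
    # number counts per cubic pixel of size ell_p; d is the separation at which
    # the dipole is evaluated; r_mean is the comoving distance to the bin.
    shape = Delta_B.shape
    DB, DF = Delta_B.ravel(), Delta_F.ravel()
    pos = ell_p * np.array(list(product(*[range(s) for s in shape])), dtype=float)
    V = DB.size * ell_p**3

    sum1, sum2 = 0.0, 0.0
    for i in range(DB.size):
        for j in range(DB.size):
            if i == j:
                continue
            sep = pos[j] - pos[i]
            dij = np.linalg.norm(sep)
            if abs(dij - d) > ell_p / 2:      # discretised Kronecker delta
                continue
            mu = sep[2] / dij                 # cos(sigma_ij), line of sight = z axis
            P1, P2 = mu, 0.5 * (3 * mu**2 - 1)
            sum1 += (DB[i] * DF[j] - DF[i] * DB[j]) * P1
            sum2 += (DB[i] * DB[j] - DF[i] * DF[j]) * P2

    pref = (ell_p / d)**2 * V / ell_p**3
    return (3 / (8 * np.pi) * pref * sum1
            + 3 / 10 * d / r_mean * 5 / (4 * np.pi) * pref * sum2)

# toy usage on a 6^3 grid of random number counts
rng = np.random.default_rng(1)
DB, DF = rng.normal(size=(6, 6, 6)), rng.normal(size=(6, 6, 6))
print(dipole_estimator(DB, DF, d=4.0, ell_p=2.0, r_mean=300.0))
\end{verbatim}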
\section{Forecasts: are deviations from Euler detectable?}
\label{sec:forecasts}
We now forecast the detectability of deviations from Euler's equation in future surveys. We consider first the SKA survey and then the DESI survey.
\subsection{Constraints from the SKA}
In its second phase of operation, the SKA will observe $8.8\times 10^8$ galaxies over 30'000 square degrees, from redshift 0.1 to redshift 2. We split the redshift range into bins of width $\Delta z=0.1$ and use the number densities derived in table 3 of~\cite{Bull:2015lja}. In each redshift bin, we calculate the signal $\ev{\hat{\xi}^{1}\e{BF}}$ from eq.~\eqref{estimator_final} and the covariance matrix $C^1_{\textrm{B}\textrm{F}}$. The covariance matrix has been calculated in~\cite{Bonvin:2015kuc,Hall:2016bmm}. In general, covariance matrices contain a shot noise contribution, a cosmic variance contribution and a mixed contribution. However, as shown in~\cite{Bonvin:2015kuc,Hall:2016bmm}, the dipole estimator automatically removes the cosmic variance contribution from density and redshift-space distortions. Fitting for a dipole in the data allows us therefore not only to get rid of the dominant standard contribution (density and redshift-space distortion) in the \emph{signal} but also in the \emph{noise}. As such, the dipole estimator is a powerful tool to isolate the relativistic contributions. The expression for the covariance can be found in appendix~\ref{app:cov}. Note that subtracting the wide-angle effects adds an additional contribution to the covariance. However, as shown in~\cite{Hall:2016bmm}, this contribution is always strongly subdominant with respect to the shot noise and mixed contribution (see fig. 2 of~\cite{Hall:2016bmm}).
We split the population of galaxies into a bright and a faint population with the same number of galaxies. We use the mean redshift given in table 3 of~\cite{Bull:2015lja} and we assume that the two populations are equally distributed around this mean redshift, with a bias difference $b_\textrm{B}-b_\textrm{F}=0.5$. The signal-to-noise is directly proportional to the bias difference. As an example, in BOSS, a bias difference of 1 has been measured between the bright and faint populations of luminous red galaxies (LRGs)~\cite{Gaztanaga:2015jrs}. In the main galaxy sample of SDSS, galaxies have been split into six populations according to their luminosity, with a bias ranging from 0.96 to 2.16~\cite{Percival:2006gt,Cresswell:2008aa}. For the HI galaxies targeted by the SKA, the expected bias difference is less well known, but a difference of 0.5 seems quite conservative. As we will see in sec.~\ref{sec:SKA_mult} the constraints on deviations in Euler's equation scale directly with the bias difference. The signal-to-noise and the constraints also depend on the slopes of the luminosity function $s_\textrm{B}$ and $s_\textrm{F}$. We fix those to be zero. We assume a background cosmology consistent with $\Lambda$CDM, with the fiducial cosmological parameters from BOSS DR11~\cite{Anderson:2013zyy}: $\Omega_{\rm m}=0.274, h=0.7, \Omega_{\rm b} h^2=0.224, n_{\rm s}=0.95$ and $\sigma_8=0.8$.
In fig.~\ref{fig:SN_dipole} we plot the signal-to-noise of the dipole. In the left panel, we show the signal-to-noise at each separation $d$, in the lowest redshift bin $0.1\leq z \leq 0.2$ (red dots) and in the bin $0.4\leq z\leq 0.5$ (blue dots). We use a pixel size $\ell_{\rm p}=2\,{\rm Mpc}/h$. We see that the signal-to-noise peaks around $20\,{\rm Mpc}/h$. In the right panel we show the cumulative signal-to-noise from $10\leq d\leq 200\,{\rm Mpc}/h$, as a function of redshift. Above $z=1.2$ the cumulative signal-to-noise drops below 1, meaning that the dipole is not observable there. The cumulative signal-to-noise over the whole range of separation and redshift is 46.4, showing that the dipole will be robustly measured with the SKA.
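The cumulative signal-to-noise can be computed from the usual quadratic combination $(S/N)^2=\sum_{\bar z} \mathbf{s}^T C^{-1}\mathbf{s}$, where $\mathbf{s}$ is the vector of dipole expectation values~\eqref{estimator_final} over separations and $C$ the covariance of the estimator in the corresponding redshift bin. A minimal Python sketch of this combination (taking the per-bin signal vectors and covariance matrices as given) is:
\begin{verbatim}
import numpy as np

def cumulative_snr(signal_bins, cov_bins):
    # (S/N)^2 = sum over redshift bins of  s^T C^{-1} s,
    # with s the dipole signal over separations and C its covariance
    sn2 = 0.0
    for s, C in zip(signal_bins, cov_bins):
        sn2 += s @ np.linalg.solve(C, s)
    return np.sqrt(sn2)
\end{verbatim}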
\begin{figure}[!t]
\centering
\includegraphics[width=0.48\columnwidth]{SNsep}\hspace{0.5cm}\includegraphics[width=0.48\columnwidth]{SNz}
\caption{\emph{Left panel}: Signal-to-noise of the dipole in the SKA, plotted as a function of separation $d$, with pixel size $\ell_{\rm p}=2\,{\rm Mpc}/h$. The red dots show the signal-to-noise at $\bar z=0.15$ and the blue dots at $\bar z=0.45$. \emph{Right panel}: the cumulative signal-to-noise from $10\leq d\leq 200\,{\rm Mpc}/h$, per bin of redshift from $0.1\leq z\leq 2$.}
\label{fig:SN_dipole}
\end{figure}
\subsubsection{Constraints from the dipole}
\label{sec:forecast_dipole}
We first forecast the constraints on $\Theta$ and $\Gamma$ that we would obtain using only the dipole estimator defined in eq.~\eqref{estimator}. We fix all cosmological parameters and the biases to their fiducial value. Cosmological parameters are indeed well measured by the CMB. The biases on the other hand are unconstrained by the CMB, but as we will see in sec.~\ref{sec:SKA_mult} they can be robustly constrained by the standard multipoles.
For $\Theta$ and $\Gamma$ we assume that they evolve according to
\be
\label{evol}
\Theta(z)=\frac{1-\Omega_{\rm m}(z)}{1-\Omega_{\rm m0}}\Theta_0\,,
\ee
where $\Theta_0\equiv\Theta(z=0)$, and similarly for $\Gamma(z)$. As discussed in~\cite{Gleyzes:2015rua} this evolution ensures that the deviations vanish when the background dark energy density is negligible, like at high redshift, and one recovers Euler's equation. We use fiducial values $\Theta_0=\Gamma_0=0$.
As a first example we assume that $\Theta$ and $\Gamma$ are the only deviations from general relativity; in particular, we assume that the growth function $D(z)$ evolves as in a $\Lambda$CDM Universe. In the scalar-tensor and vector-tensor models described in sec.~\ref{sec:theory}, deviations from Euler's equation are usually accompanied by deviations in the growth rate. However, one could imagine other models where this is not the case, and where the only deviations from general relativity would be in Euler's equation.
We construct the Fisher matrix for $\Theta_0$ and $\Gamma_0$
\be
\label{Fisherdip}
F_{ab}=\sum_{\bar z}\sum_{ij}\frac{\partial\ev{\hat{\xi}^{1}\e{BF}(d_i, \bar z)}}{\partial p_a}\left[C^1_{\textrm{B}\textrm{F}}\right]^{-1}(d_i,d_j, \bar z)\frac{\partial\ev{\hat{\xi}^{1}\e{BF}(d_j, \bar z)}}{\partial p_b}\, ,
\ee
where $p_a=\Theta_0,\Gamma_0$, and $C^1_{\textrm{B}\textrm{F}}$ denotes the covariance matrix of the dipole estimator.
In~\eqref{Fisherdip}, the sum runs over all pixel separations $d_i$ and all redshift bins. We account for correlations between separations (see appendix~\ref{app:cov}), but we neglect correlations between different redshift bins. We use a pixel size of $\ell_{\rm p}=2\,{\rm Mpc}/h$ and pixel separations $d_{\rm min}\leq d_i\leq d_{\rm max}$. We choose $d_{\rm max}=200\,{\rm Mpc}/h$ since we have checked that the constraints do not improve if we include larger separations. For the minimum separation, we use $d_{\rm min}=10\,{\rm Mpc}/h$. At this scale the impact of non-linearities on the dipole is of the order of 10\,\%\footnote{We estimate the impact of non-linearities on the dipole in the following way: we use the linear continuity equation to express the velocity in terms of the density, and then we use halo-fit to compute the non-linear density. This procedure is of course not completely correct since non-linearities modify the continuity equation, but it allows us to estimate the scale at which non-linearities become important.}. As we will see below, increasing $d_{\rm min}$ to $20\,{\rm Mpc}/h$, where the impact of non-linearities on the dipole is already less than 2\,\%, has almost no effect on the constraints on $\Theta_0$ and $\Gamma_0$.
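As an illustration of how eq.~\eqref{Fisherdip} can be assembled in practice, the sketch below builds the Fisher matrix from central finite differences of the dipole expectation value with respect to $\Theta_0$ and $\Gamma_0$; the function \texttt{xi1\_mean}, the separation array and the per-bin covariance matrices are assumed to be provided (their names are ours).
\begin{verbatim}
import numpy as np

def fisher_matrix(xi1_mean, seps, zbins, cov_bins, p0=(0.0, 0.0), eps=1e-3):
    # F_ab = sum_z  dS/dp_a  C^{-1}  dS/dp_b, with S the vector of dipole
    # expectation values over separations and p = (Theta_0, Gamma_0);
    # xi1_mean(d, zbar, Theta0, Gamma0) is assumed to implement eq. (estimator_final)
    npar = len(p0)
    F = np.zeros((npar, npar))
    for zbar, C in zip(zbins, cov_bins):
        Cinv = np.linalg.inv(C)
        dS = []
        for a in range(npar):
            pp, pm = list(p0), list(p0)
            pp[a] += eps
            pm[a] -= eps
            Sp = np.array([xi1_mean(d, zbar, *pp) for d in seps])
            Sm = np.array([xi1_mean(d, zbar, *pm) for d in seps])
            dS.append((Sp - Sm) / (2 * eps))
        for a in range(npar):
            for b in range(npar):
                F[a, b] += dS[a] @ Cinv @ dS[b]
    return F

# marginalised 1-sigma errors: sigma_a = sqrt((F^{-1})_{aa})
\end{verbatim}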
\begin{figure}[!t]
\centering
\includegraphics[width=0.54\columnwidth]{dipole_Theta_Gamma}\hspace{-1.7cm}\includegraphics[width=0.54\columnwidth]{dipole_mu_Gamma}
\caption{\emph{Left panel}: Joint 1-$\sigma$ and 2-$\sigma$ constraints on the parameters $\Theta_0$ and $\Gamma_0$ obtained from the dipole only, with the SKA. The other cosmological parameters and the biases are fixed to their fiducial value and the growth rate is the one of a $\Lambda$CDM universe: $\mu_0=0$.
\emph{Right panel}: Joint 1-$\sigma$ and 2-$\sigma$ constraints on the parameters $\Gamma_0$ and $\mu_0$ obtained from the dipole only, with the SKA. The other cosmological parameters and the biases are fixed to their fiducial value and $\Theta_0=0$.}
\label{fig:dipole_constraints}
\end{figure}
In the left panel of fig.~\ref{fig:dipole_constraints} we show the joint constraints on $\Theta_0$ and $\Gamma_0$. There is a strong degeneracy between the two parameters, leading to relatively large marginalised 1-$\sigma$ errors of $\Delta{\Theta_0}=3.33$ and $\Delta{\Gamma_0}=3.14$. This degeneracy can be understood from eq.~\eqref{estimator_final}. We see that the amplitude of the dipole depends on $\Theta_0-\Gamma_0$ [first term in the expression~\eqref{eq:eps} of $\Upsilon$] as well as on $\Gamma_0$ individually. However, this second dependence is much weaker than the first one at low redshift, since the terms in the second parenthesis are proportional to the second time derivative of the growth factor, $D''$. As a consequence, $\Theta_0$ and $\Gamma_0$ are strongly correlated. However, as discussed in sec.~\ref{sec:parametrisation}, in vector-tensor theories, $\Theta=0$. In the EFT of dark energy $\Theta$ is generically not zero, but it is directly proportional to the background evolution of the scalar field. As such it is generically expected to be significantly smaller than $\Gamma$. It is therefore natural to explore how the constraints change when $\Theta=0$. In this case, the constraint on $\Gamma_0$ tightens considerably and we obtain $\Delta{\Gamma_0}=0.26$.
Comparing with the constraints on standard modified gravity parameters in~\cite{Bellini:2015xja, Alonso:2016suf} we see that the constraints on $\Gamma_0$ are weaker by 1 to 3 orders of magnitude. This is not surprising since the signal-to-noise of the dipole is weaker than that of the standard multipoles. However, as discussed in the introduction, a constraint on $\Gamma_0$ would provide the first direct test of the equivalence principle at cosmological scales, which cannot be probed in a model-independent way with the standard multipoles. In~\cite{Gleyzes:2015rua}, constraints on Euler's equation are obtained indirectly in the specific case of scalar-tensor theories. In this model, the parameters that modify Euler's equation also modify the growth of structure, which is measured via the standard multipoles. These constraints are therefore directly related to the specific choice of model: scalar-tensor theories. Constraints of the order of $10^{-3}$ are obtained in this case, varying each parameter individually. Marginalising over all modified gravity parameters degrades the constraints significantly~\cite{Leung:2016xli}. This shows the complementarity of relativistic effects, which break the degeneracy between parameters and which can directly probe Euler's equation, without underlying assumptions about the model.
In fig.~\ref{fig:dipole_dev} (left panel) we show the dipole in the lowest redshift bin $\bar z=0.15$ when $\Gamma_0=0$ (blue solid line) and when $\Gamma_0=2$ (dashed black line). We see that in this case the deviation due to the breaking of the equivalence principle is larger than the error bars (coloured blue region) in the range $10\leq d\leq 130\,{\rm Mpc}/h$. Combining all separations and all redshifts allows us to detect deviations already for $\Gamma_0=0.26$. In the right panel of fig.~\ref{fig:dipole_dev}, we show the constraints on $\Gamma_0$ in each redshift bin. We see that the constraints come mainly from low redshift.
\begin{figure}[!t]
\centering
\includegraphics[width=0.49\columnwidth]{dipole_deviations}\hspace{0.2cm}\includegraphics[width=0.49\columnwidth]{dipole_Gamma_z}
\caption{\emph{Left panel}: The dipole (multiplied by $d^2$), plotted as a function of separation $d$ in the lowest redshift bin of the SKA: $0.1\leq z\leq 0.2$. The blue solid line shows the dipole in a $\Lambda$CDM universe, with $\Gamma_0=\Theta_0=0$. The shaded region represents the error bars calculated from eq.~\eqref{cov1}. The dashed black line shows the dipole when the EEP is violated, with $\Gamma_0=2$ and $\Theta_0=0$. \emph{Right panel}: The constraints on $\Gamma_0$, keeping all other parameters fixed to their fiducial value, in each redshift bin of the SKA.}
\label{fig:dipole_dev}
\end{figure}
As discussed above, generic modified gravity models generate not only deviations in Euler's equation, but also deviations in the way structures grow as a function of time, i.e. in the growth function $D$. We model this in the following way
\be
\label{D1mod}
D(z)=\bar D(z)\big[1+\mu(z) \big]\, ,
\ee
where $\bar D$ denotes the growth function in a $\Lambda$CDM universe, and we let the deviation $\mu(z)$ evolve as in eq.~\eqref{evol}. The growth rate can then be written as a function of $\mu_0$
\be
\label{fmod}
f(z)=\bar f(z)+3\Omega_{{\rm m}0}(1+z)^3\left[\frac{1-\Omega_{\rm m}(z)}{1-\Omega_{\rm m0}}\right]^2\mu_0\, ,
\ee
where $\bar f$ denotes the growth rate in a $\Lambda$CDM universe. Inserting~\eqref{D1mod} and \eqref{fmod} into eq.~\eqref{estimator_final}, and linearising in the deviations, we can forecast the constraints on $\Gamma_0$ and $\mu_0$, fixing $\Theta=0$.
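For completeness, eq.~\eqref{fmod} follows directly from the definition $f\equiv d\ln D/d\ln a$: linearising eq.~\eqref{D1mod} gives $f=\bar f+d\mu/d\ln a$ to first order in $\mu$, and for a spatially flat background, with $E^2(z)\equiv\Omega_{\rm m0}(1+z)^3+1-\Omega_{\rm m0}$ and $1-\Omega_{\rm m}(z)=(1-\Omega_{\rm m0})/E^2(z)$,
\be
\frac{d\mu}{d\ln a}=\frac{3\,\Omega_{\rm m0}(1+z)^3}{E^4(z)}\,\mu_0=3\,\Omega_{\rm m0}(1+z)^3\left[\frac{1-\Omega_{\rm m}(z)}{1-\Omega_{\rm m0}}\right]^2\mu_0\,.
\ee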
The joint constraints are shown in the right panel of fig.~\ref{fig:dipole_constraints}. We see that $\Gamma$ is less degenerate with $\mu$ than with $\Theta$. This comes from the different time dependence that multiplies these two parameters in eq.~\eqref{estimator_final}. The marginalised constraints on each of the parameters are given by $\Delta{\Gamma_0}=0.65$ and $\Delta{\mu_0}=0.013$. Allowing the growth rate to vary therefore degrades the constraint on $\Gamma_0$ by a factor of 2.5.
Finally, let us note that all these constraints are obtained while fixing the bias of the bright and faint populations to their fiducial value. If instead we let the biases vary, we lose all constraining power, since the bias difference in each redshift bin is strongly degenerate with a change in the growth rate and with modifications to Euler's equation. However, in addition to measuring the dipole of the correlation function, one can also measure the monopole, quadrupole and hexadecapole of the bright and faint populations separately. As we will see in the next section, this allows us to break the degeneracy between the biases, the growth rate $\mu_0$ and the modification to Euler's equation $\Gamma_0$.
\subsubsection{Constraints from the dipole combined with standard multipoles}
\label{sec:SKA_mult}
We now consider 7 observables: the cross-correlation dipole, and the monopole, quadrupole and hexadecapole of the bright and of the faint populations. In the forecasts, we neglect the covariance between different multipoles. We do, however, take into account the covariance between the monopoles of the bright and faint populations, as well as between their quadrupoles and between their hexadecapoles. These covariance matrices have been calculated in~\cite{Hall:2016bmm} and are summarised in appendix~\ref{app:cov}.
As in the previous section, we fix the cosmological parameters to their $\Lambda$CDM fiducial value. We consider $\Gamma_0$ and $\mu_0$ as free parameters (as motivated above, we fix $\Theta=0$). We assume that the biases of the bright and faint populations evolve in the following way~\cite{Bull:2015lja}
\bea
b_\textrm{B}(z)&=&b_{1} e^{b_{2} z}+\frac{\Delta b}{2}\, ,\\
b_\textrm{F}(z)&=&b_3 e^{b_4 z}-\frac{\Delta b}{2}\, ,
\eea
where as before we fix the bias difference $\Delta b=b_\textrm{B}-b_\textrm{F}=0.5$. We therefore have four additional free parameters, $b_1, b_2, b_3$ and $b_4$, that we want to constrain. We choose their fiducial values as in~\cite{Bull:2015lja} (see table 6): $b_1=b_3=0.554$ and $b_2=b_4=0.783$. In this way the biases of the bright and faint populations are uncorrelated, but their difference is always 0.5. As shown below, the constraints can easily be rescaled according to the bias difference.
\begin{figure}[!t]
\centering
\includegraphics[width=0.65\columnwidth]{combined_mu_Gamma}
\caption{Joint 1-$\sigma$ and 2-$\sigma$ constraints on the parameters $\Gamma_0$ and $\mu_0$ obtained from a combination of the dipole, the monopole, quadrupole and hexadecapole of the bright and faint population, with the SKA. Here $\Theta_0=0$, and the constraints are marginalised over the four bias parameters, $b_1$ to $b_4$.}
\label{fig:combined_constraints}
\end{figure}
In fig.~\ref{fig:combined_constraints} we show the joint constraints on $\Gamma_0$ and $\mu_0$ marginalised over the four bias parameters. We see that adding the standard multipoles strongly tightens the constraints on $\mu_0$: the marginalised error on $\mu_0$ becomes $\Delta\mu_0=5.1\times 10^{-4}$, consistent with~\cite{Gleyzes:2014rba}. This is not surprising since the monopole, quadrupole and hexadecapole provide three different combinations of $\mu_0$ and of the bias, both of which can then be robustly constrained. Since the signal-to-noise of the even multipoles is significantly larger than that of the dipole, we find that adding the dipole in the forecasts \emph{does not} improve the constraints on $\mu_0$ at all. This is consistent with the conclusion of~\cite{Lorenz:2017iez}, which showed (using the angular power spectrum $C_\ell$) that relativistic effects do not improve the constraints on parameters that govern the growth of structure, like $\mu_0$ (see however~\cite{Lombriser:2013aj} for specific cases, where this may not be true). The usefulness of relativistic effects is therefore to test for deviations that {\it cannot} be probed with standard observables, like deviations in the equivalence principle. The marginalised error on $\Gamma_0$ is also tightened by the combined analysis: $\Delta\Gamma_0=0.26$. Hence, even though the standard multipoles are not sensitive to $\Gamma_0$, they improve the constraints on $\Gamma_0$ by breaking the degeneracy with $\mu_0$. Comparing with the results in sec.~\ref{sec:forecast_dipole}, we see that we recover the constraints that we had on $\Gamma_0$ from the dipole when all other parameters were kept fixed. Combining the dipole with the standard multipoles of the bright and faint populations therefore provides an ideal way of testing for deviations in both the growth of structure $\mu_0$ and in Euler's equation $\Gamma_0$.
Note that in these forecasts we have assumed that $\Gamma$ evolves as in eq.~\eqref{evol}. If instead we choose a constant $\Gamma$ over the whole redshift range (in the model of~\cite{Gleyzes:2014rba}, this would correspond to a fixed coupling of the dark matter to the scalar field), then the constraint on $\Gamma$ tightens to $\Delta\Gamma=0.17$. This comes from the fact that in this case high-redshift bins contribute more to the constraint, since $\Gamma$ does not decrease with redshift.
For these forecasts we have used as minimum separation $d_{\rm min}=10\,{\rm Mpc}/h$. Increasing this to $20\,{\rm Mpc}/h$, in order to reduce the impact of non-linearities, only slightly degrades the constraint on $\Gamma_0$, from $\Delta\Gamma_0=0.26$ to $\Delta\Gamma_0=0.29$. The constraint on $\mu_0$ is more sensitive to $d_{\rm min}$, since it changes from $\Delta\mu_0=5.1\times 10^{-4}$ to $\Delta\mu_0=1.2\times 10^{-3}$. This can be understood from the fact that the standard multipoles are more strongly affected by small scales than the dipole. Comparing eq.~\eqref{nurel} with eq.~\eqref{nust} we indeed see that the standard multipoles contain a factor $k/\mathcal{H}_0$ more than the relativistic dipole. As such, at a given separation, the dipole is less sensitive to large $k$'s than the standard multipoles.
Finally, as discussed before, the forecasts are directly sensitive to the bias difference between the bright and faint populations, since the estimator~\eqref{estimator_final} is proportional to $b_\textrm{B}-b_\textrm{F}$. Here we have used a fixed difference: $b_\textrm{B}-b_\textrm{F}=0.5$. Increasing this bias difference to 1, we find that the constraint on $\Gamma_0$ tightens by roughly a factor of 2, $\Delta\Gamma_0=0.14$, while the constraint on $\mu_0$ remains almost the same, $\Delta \mu_0=5\times 10^{-4}$. Conversely, decreasing the bias difference to 0.25 degrades the constraint on $\Gamma_0$ by roughly a factor of 2, $\Delta\Gamma_0=0.51$, while the constraint on $\mu_0$ remains almost the same, $\Delta \mu_0=5.3\times 10^{-4}$. Hence, as expected, the error on $\Gamma_0$ scales inversely with the bias difference $b_\textrm{B}-b_\textrm{F}$.
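This scaling can be made explicit: the relativistic dipole signal in eq.~\eqref{estimator_final} is proportional to $b_\textrm{B}-b_\textrm{F}$, so, to the extent that the covariance of the estimator is dominated by terms that do not involve the relativistic signal itself, the Fisher information on $\Gamma_0$ grows as $(b_\textrm{B}-b_\textrm{F})^2$ and
\be
\Delta\Gamma_0\propto\big(b_\textrm{B}-b_\textrm{F}\big)^{-1}\,,
\ee
consistent with the factor-of-2 shifts quoted above.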
\subsection{Constraints from DESI}
In the previous section we have seen that the SKA will provide meaningful constraints on deviations from the equivalence principle. On a shorter timescale, the DESI survey~\cite{Aghamousa:2016zmz} is expected to map galaxies over a large range of redshifts and scales and could already deliver interesting tests of the equivalence principle. The DESI survey will observe different types of galaxies at different redshifts. At low redshift, $0\leq z\leq 0.5$, the Bright Galaxy Sample (BGS) is expected to contain approximately 10 million galaxies. At intermediate redshift, $0.6\leq z\leq 1$, DESI will observe Luminous Red Galaxies (LRG) and Emission Line Galaxies (ELG). Finally, at high redshift, $1\leq z\leq 1.7$, only ELG's will be detected. The area of the survey is expected to reach 14,000 square degrees. For our forecasts, we use the bias evolution given in sec. 3 of~\cite{Aghamousa:2016zmz} and the number densities and volumes given in tables 2.3 and 2.5.
We forecast the constraints on $\Gamma_0, \mu_0$, and the biases in each range of redshifts, and we then combine them. As before we neglect the covariance between individual redshift bins.
In the lowest range of redshift, $0\leq z\leq 0.5$, we split the 10 million BGS galaxies into two equal populations (bright and faint) with a bias difference $\Delta b=0.5$. We assume that the bias of each population evolves as
\bea
b_\textrm{B}(z)&=&b_1\frac{D_0}{D(z)}+\frac{\Delta b}{2}\, ,\\
b_\textrm{F}(z)&=&b_2\frac{D_0}{D(z)}-\frac{\Delta b}{2}\, ,
\eea
where $b_1$ and $b_2$ are two free parameters with fiducial value $b_1=b_2=1.34$~\cite{Aghamousa:2016zmz}.
In the middle range of redshift, $0.6\leq z\leq 1$, we have two populations of galaxies, the LRG's and the ELG's, with very different biases. However, the number density of ELG's is significantly higher than that of LRG's. Hence it is not optimal to use only cross-correlations between these two populations. We therefore split the galaxies into three populations. Population A contains all the LRG's with a bias
\be
b_{\rm A}=b_3\frac{D_0}{D(z)}\, ,
\ee
where the fiducial value is $b_3=1.7$. Population B contains half of the ELG's and population C the other half, with biases
\bea
b_\textrm{B}(z)&=&b_4\frac{D_0}{D(z)}+\frac{\Delta b}{2}\, ,\\
b_{\rm C}(z)&=&b_5\frac{D_0}{D(z)}-\frac{\Delta b}{2}\, ,
\eea
where $\Delta b=0.5$ is fixed and the fiducial values of the free parameters are $b_4=b_5=0.84$~\cite{Aghamousa:2016zmz}. With this split we have a similar number of galaxies in each population: 3.9 million LRG's, 4.6 million bright ELG's and 4.6 million faint ELG's. As shown in~\cite{Bonvin:2015kuc}, we can then increase the signal-to-noise of the dipole estimator by weighting each pair of galaxies by the bias difference between the populations. This weighting is optimal in the regime where shot noise dominates. The covariance matrix of this estimator is given in appendix~\ref{app:cov}.
Finally, in the high redshift range, $1\leq z\leq 1.7$, we split the 7.8 million ELG's into two equal populations, with the same two free bias parameters $b_4$ and $b_5$.
\begin{table}[t]
\centering
\begin{tabular}{ | c | c | c | c | c | }
\hline
&$0\leq z\leq 0.5$ & $0.6\leq z\leq 1$ &$1\leq z\leq 1.7$ & combined \\ \hline
$\Gamma_0$ &$3.2$ & $2.4$ &$10.8$ &1.9 \\ \hline
$\mu_0$ &$3.2\times 10^{-3}$ & $2.7\times 10^{-3}$ &$5.2\times 10^{-3}$ & $1.8\times 10^{-3}$ \\ \hline
\end{tabular}
\caption{Constraints on $\Gamma_0$ and $\mu_0$, marginalised over the other parameters, obtained from the three redshift ranges in DESI.}
\label{tab:DESI}
\end{table}
In each case we calculate the cross-correlation dipole, as well as the monopole, quadrupole and hexadecapole of each population, in thin redshift bins of width $\Delta z=0.1$. We neglect correlations between different multipoles, but we take into account the correlation between the same multipole of different populations. We use the range of separation $10\leq d\leq 200\,{\rm Mpc}/h$.
We assume that $\Theta=0$, so we have in total 7 free parameters: $\Gamma_0, \mu_0, b_1, b_2, b_3, b_4$ and $b_5$. The marginalised constraints on $\Gamma_0$ and $\mu_0$ are summarised in table~\ref{tab:DESI}. We see that the combined constraint on $\Gamma_0$ reaches $\Delta\Gamma_0=1.9$. This is roughly 7 times weaker than the constraint expected from the SKA, but it would provide the very first test of the equivalence principle at cosmological scales. Since we have never directly observed dark matter particles falling into a gravitational potential, a value of order unity for $\Gamma_0$ is not excluded.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have shown that relativistic effects can be used to test the equivalence principle directly. We have used the dipole in the cross-correlation function of bright and faint galaxies to constrain deviations from Euler's equation. Such deviations would modify the relation between the galaxy velocity $V$ and the time component of the metric $\Psi$. They are therefore unconstrained by the monopole, quadrupole and hexadecapole of redshift-space distortions, since these observables are sensitive to the density and velocity, but not to the metric. Adding information from gravitational lensing is not enough, since lensing is affected by the sum of the two metric potentials, $\Phi+\Psi$, and not by $\Psi$ individually. The cross-correlation dipole therefore provides a unique opportunity to test for deviations in Euler's equation since it is \emph{directly} sensitive to $\Psi$ via the effect of gravitational redshift.
We have found that future surveys like DESI and the SKA can provide meaningful constraints on deviations from Euler's equation. Comparing with previous forecasts on modifications of gravity, we find that our constraints are weaker by 1-3 orders of magnitude with respect to constraints on the growth rate of structure and on the anisotropic stress. This is due to the fact that the signal-to-noise of the dipole is significantly lower than that of the RSD multipoles and of gravitational lensing, which are used to constrain those deviations. Hence the equivalence principle will always be more difficult to constrain than the growth of structure or the relation between the metric potentials. On the other hand, since we have never observed dark matter directly, any constraint obtained from the dipole would provide a significant improvement in our knowledge of this component.
Finally, let us emphasise that the specificity of our work is to test the equivalence principle in a model-independent way. The parametrisation that we use is motivated by the example of scalar-tensor theories and Einstein-\ae ther theories, but it is in reality completely general. Indeed, at no point in our analysis do we need to refer to a specific class of Lagrangian or theory. Our framework allows us to test for any deviation in Euler's equation, independently of the underlying theory. If on the other hand one chooses to start from a specific Lagrangian, with a set of free functions, then the rules of the game completely change. These free functions would indeed modify the growth of structure, the relation between the metric components and Euler's equation in a specific way. One can then \emph{indirectly} test for deviations in Euler's equation using RSD and lensing, through a measurement of these free functions, as for example in~\cite{Gleyzes:2015rua}. Such an approach would deliver more stringent constraints, since these free functions are very well measured via the growth rate, but it has the disadvantage that it is tied to a specific class of models.
Therefore, the real usefulness of relativistic effects is not to improve the constraints on parameters that can be measured via other methods. As pointed out in~\cite{Lorenz:2017iez}, relativistic effects are too small to have a measurable impact in this case. The usefulness of relativistic effects is rather to test for deviations that cannot be tested in a model-independent way by other methods. In this sense, relativistic effects are really complementary to standard observables and do add extra information.
\section*{Acknowledgements}
We thank Filippo Vernizzi, J\'{e}r\^{o}me Gleyzes, and Sergey Sibiryakov for clarifications about, respectively, scalar-tensor and Einstein-\ae ther theories. We also thank Ruth Durrer and Alex Hall for interesting discussions. We acknowledge support by the Swiss National Science Foundation.
|
{
"timestamp": "2018-05-31T02:10:08",
"yymm": "1803",
"arxiv_id": "1803.02771",
"language": "en",
"url": "https://arxiv.org/abs/1803.02771"
}
|
\section{Introduction}
The Experiment to Detect the Global Epoch of Reionization Signature (EDGES) Collaboration~\cite{nature} has recently reported the measurement of a feature in the absorption profile of the sky-averaged radio spectrum, centered at a frequency of 78 MHz and with an amplitude of 0.5 K. Although such a feature was anticipated to result from the 21-cm transition of atomic hydrogen (at $z\sim 17$), the measured amplitude of this signal is significantly larger than expected, at a confidence level of 3.8$\sigma$. If confirmed, this measurement would indicate either that the gas was much colder during the dark ages than expected, or that the temperature of the background radiation was much hotter. It has been argued~\cite{nature2} that standard astrophysical mechanisms~\cite{Cohen:2016jbh,nature,nature2,Fialkov:2018xre,Feng:2018rje,Ewall-Wice:2018bzf,Mirocha:2015jra} cannot account for this discrepancy, and that the only plausible explanations for this observation are those which rely on interactions between the primordial gas and light dark matter particles, resulting in a significant cooling of the gas~\cite{nature,nature2,Fialkov:2018xre} (see also Refs.~\cite{Tashiro:2014tsa,Munoz:2015bca}).
In order for the dark matter to cool the gas efficiently, it must have some rather specific characteristics. Firstly, equipartition requires that the dark matter particles be fairly light, with masses no larger than a few GeV. Secondly, if the cross section for dark matter scattering with gas is independent of velocity, a variety of constraints, including those from observations of the cosmic microwave background (CMB), would restrict the couplings to well below the values required to explain the observed amplitude of the absorption feature. Velocity-dependent scattering can relax such constraints, however, as the average velocities of baryons and dark matter particles were at approximately their minimum value during the cosmic dark ages (due to higher temperatures and having fallen into the gravitational potential of dark matter halos at earlier and later times, respectively). With this in mind, we can maximize the impact of dark matter-baryon scattering during this era by considering models in which $\sigma (v) \propto v^{-4}$. In terms of model building, this consideration directs us towards models in which the dark matter-baryon interactions are mediated by a particle that is much lighter than the temperature at $z \sim 17$, corresponding to $m_{\rm med} \lesssim 10^{-3}$ eV. Experimental constraints on fifth forces are very stringent in this mass range~\cite{Adelberger:2006dh,Kapner:2006si}, however, and rule out a new light mediator with couplings in the range required to explain the observed absorption feature. Furthermore, such a particle would invariably contribute to the energy density of radiation during recombination at a level well above current constraints~\cite{Ade:2015xua}. In light of these considerations, the only options available are models in which the dark matter carries a small quantity of electric charge ({\it i.e.}~a millicharge), and thus couples weakly to the photon~\cite{McDermott:2010pa,Davidson:2000hf,Chuzhoy:2008zy,Dvorkin:2013cea,Dolgov:2013una,Dubovsky:2003yn,Feldman:2007wj}.
\begin{figure*}
\includegraphics[width=3.2in,angle=0]{MoneyPlot_fdm1.pdf}~~~~~~
\includegraphics[width=3.2in,angle=0]{MoneyPlot_fdm01.pdf} \\
\includegraphics[width=3.2in,angle=0]{MoneyPlot_fdm001.pdf}~~~~~~
\includegraphics[width=3.2in,angle=0]{MoneyPlot_fdm0001.pdf}
\caption{Constraints on Dirac fermion millicharged dark matter from Supernova 1987A (grey)~\cite{newsam}, the SLAC millicharge experiment (blue)~\cite{Prinz:1998ua}, the light element abundances produced during Big Bang Nucleosynthesis (red, labeled $\Delta N_{\rm eff}$)~\cite{Boehm:2013jpa}, and on the impact on the cosmic microwave background of dark matter scattering with baryons (pink, labeled CMB, KD)~\cite{McDermott:2010pa} and dark matter annihilations (purple, labeled CMB ann.)~\cite{Slatyer:2015jla}. These results are shown for four values of the fraction of the dark matter abundance that consists of millicharged particles, $f_{\rm DM}$. In each frame, the solid black regions represent the range of parameter values that could explain the amplitude of the observed 21-cm absorption feature as reported by the EDGES Collaboration~\cite{nature2}. The dashed black line denotes where the thermal relic abundance corresponds to the quoted value of $f_{\rm DM}$, assuming only millicharge interactions. The fact that the solid black regions do not coincide with the dashed curves indicates that the dark matter must be depleted in the early universe by another kind of interaction. Although these results are shown for the specific case of dark matter in the form of a Dirac fermion, most of these constraints would change only very slightly if we were instead to consider a complex scalar. The exceptions to this are the constraints from dark matter annihilation during the epoch of recombination~\cite{Slatyer:2015jla}, which are much weaker in the complex scalar case, due to the $p$-wave suppression of the annihilation cross section.}
\label{constraints}
\end{figure*}
\section{Model-Independent Constraints On Millicharged Dark Matter}
Based on the results presented in Ref.~\cite{nature2}, it follows that millicharged dark matter particles could cool the gas at $z \sim 17$ to a level consistent with the EDGES measurement ($T_b \approx 4$ K) if the following condition is met~\cite{Munoz:2018pzp}:
\begin{eqnarray}
\epsilon \approx 1.7\times 10^{-4} \bigg(\frac{m_{\chi}}{300 \,{\rm MeV}}\bigg) \bigg(\frac{10^{-2}}{f_{\rm DM}}\bigg)^{3/4},
\label{condition}
\end{eqnarray}
where $\epsilon \equiv e_{\chi}/e$ is the electric charge of the dark matter, $m_{\chi}$ is the mass of the millicharged dark matter candidate, and $f_{\rm DM}$ is the fraction of the dark matter that consists of millicharged particles. This expression is valid for $m_{\chi} \lesssim (20-40) \, {\rm MeV} \times (f_{\rm DM}/10^{-2})$, above which much larger values of $\epsilon$ are required. We also note that this expression applies to millicharged dark matter in the form of either a Dirac fermion or a complex scalar.
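As a rough illustration of Eq.~\ref{condition}, a candidate with $m_{\chi}=30$ MeV that makes up a fraction $f_{\rm DM}=10^{-2}$ of the dark matter requires $\epsilon \approx 1.7\times 10^{-4}\times (30/300) \approx 1.7\times 10^{-5}$, which sits squarely within the millicharge window identified below.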
Millicharged dark matter is subject to a wide range of experimental and astrophysical constraints, some of which we summarize in Fig.~\ref{constraints}. Here, we show constraints derived from observations of Supernova 1987A (grey)~\cite{newsam}, from the SLAC millicharge experiment (blue)~\cite{Prinz:1998ua}, and from the light element abundances produced during Big Bang Nucleosynthesis (BBN), assuming entropy transfer to electrons and photons (red)~\cite{Boehm:2013jpa} (see also Refs.~\cite{Nollett:2014lwa,Steigman:2014uqa}). We note that the BBN constraints would shift only at the tens-of-percent level if we instead considered entropy transfers to neutrinos. We also show constraints from measurements of the CMB based on dark matter annihilation in the epoch of recombination (purple)~\cite{Slatyer:2015jla} (see also Ref.~\cite{Ade:2015xua}), and on dark matter scattering with baryons (pink)~\cite{McDermott:2010pa}. Note that we have modified the constraint from Ref.~\cite{McDermott:2010pa} by rescaling the limits on $\epsilon$ by a factor of $\sqrt{f_{\rm DM} \Omega_{\rm dm}/\Omega_b}$. We also note that Ref.~\cite{Xu:2018efh} has recently presented an even more stringent constraint on low-velocity dark matter-baryon scattering. Although these results are shown for the specific case of dark matter in the form of a Dirac fermion, most of these constraints would change only very slightly if we were instead to consider a complex scalar. The exceptions to this are the constraints from dark matter annihilation during recombination~\cite{Slatyer:2015jla}, which are much weaker in the case of a complex scalar, due to the $p$-wave suppression of the annihilation cross section.
Analytical constraints on dark matter scattering with baryons during recombination were presented in Ref.~\cite{McDermott:2010pa} for the case of $f_{\rm DM} = 1$. For $f_{\rm DM} \gtrsim 0.02$, these constraints can be straightforwardly rescaled. For smaller values of $f_{\rm DM}$, however, one cannot apply this bound, because the energy density of this component of the dark matter is smaller than the difference between the (95\% CL) upper limit on the baryon density from CMB~\cite{Ade:2015xua} and the (95\% CL) lower limit on the baryonic density based on BBN~\cite{Cyburt:2015mya}. In this case, the millicharged particles themselves could contribute to the apparent baryonic density as derived from the CMB, evading the CMB constraint even if tightly coupled to the baryon fluid. For this reason, the CMB kinetic decoupling bounds do not apply for $f_{\rm DM} \lesssim 0.02$.
The solid black regions in Fig.~\ref{constraints} represent the parameter space in which the reported amplitude of the 21-cm absorption feature can be explained. For $f_{\rm DM}=1$, we take this region as presented in Ref.~\cite{nature2}. For $f_{\rm DM} < 1$, we shift these regions in $\epsilon$ and $m_{\chi}$ as found in the numerical solutions presented in Ref.~\cite{Munoz:2018pzp} (see Eq.~\ref{condition}).
In light of the constraints presented in Fig.~\ref{constraints}, we are forced to consider millicharged dark matter in a relatively narrow range of parameter space: $m_{\chi} \sim 10-80$ MeV, $\epsilon \sim 10^{-6}-10^{-4}$, and $f_{\rm DM} \sim 0.003-0.02$. For the remainder of this paper, we will focus on this range of scenarios. We will not assume that the millicharge is accompanied by an extremely light dark photon \cite{Izaguirre:2015eya}.
We note that although a millicharged particle with a mass and coupling in this range could not have free-streamed out of Supernova 1987A, and thus was not deemed to be ruled out in Ref.~\cite{newsam}, such a particle would be in chemical equilibrium in the proto-neutron star. By equipartition, this would lower the temperature of the supernova core, with likely ramifications for the observed neutrino signal. It is possible that these considerations could be used to rule out this range of parameter space, although no high-resolution numerical studies have definitively addressed this issue within this class of scenarios.
\section{Depleting the Dark Matter Abundance}
A major challenge for millicharged dark matter models which can explain the reported amplitude of the observed 21-cm absorption feature is avoiding overproduction in the early universe. The annihilation cross section for a pair of millicharged particles is given by:
\begin{eqnarray}
\sigma v_{\chi \bar \chi \to f\bar f}= \frac{\pi \alpha^2 \epsilon^2}{m^2_{\chi}} \, \kappa \, \bigg(1-\frac{m^2_f}{m^2_{\chi}}\bigg)^{1/2} \, \bigg(1+\frac{m^2_f}{2m^2_{\chi}}\bigg),
\label{annsigma}
\end{eqnarray}
where $\kappa= 1$ or $v^2/6$ (where $v$ is the relative velocity of the dark matter particles) for dark matter in the form of a Dirac fermion or a complex scalar, respectively.
The dark matter will be maintained in chemical equilibrium with the Standard Model bath if its annihilation rate exceeds that associated with Hubble expansion. Performing this comparison at $T\sim m_\chi$ (when the rate is approximately maximal) yields the following condition for equilibrium:
\begin{eqnarray}
n_\chi \sigma v_{\chi \bar \chi \to f\bar f} \sim m_\chi^3 \frac{\pi \epsilon^2 \alpha^2 }{ m_\chi^2 } \, \kappa > 1.66 \sqrt{g_*}\frac{m_\chi^2}{M_{\rm Pl}}, ~~~
\end{eqnarray}
where $g_{*}$ is the number of relativistic degrees of freedom in the thermal bath, and $M_{\rm Pl}$ is the Planck mass. This in turn implies that if
\begin{eqnarray}
\epsilon \gtrsim \frac{g_*^{1/4}}{\alpha} \sqrt{\frac{m_\chi}{M_{\rm Pl}}} \, \frac{1}{\kappa^{1/2}}\sim 10^{-8} \left( \frac{m_\chi}{ \rm 10~ MeV} \right)^{1/2} \! \frac{1}{\kappa^{1/2}},~~
\end{eqnarray}
then $n_{\chi}$ will reach its equilibrium value in the early universe, and thus must be depleted through efficient annihilations in order to avoid exceeding the desired density of millicharged particles, $f_{\rm DM} \Omega_{\rm CDM}$. In particular, equilibrium is reached over the entirety of the parameter space for which dark matter can explain the amplitude of the observed 21-cm absorption feature.
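Numerically, even for the heaviest masses of interest, $m_\chi \sim 80$ MeV, equilibrium only requires $\epsilon \gtrsim 10^{-8}\times\sqrt{80/10}\approx 3\times 10^{-8}$ (for $\kappa=1$), several orders of magnitude below the $\epsilon \sim 10^{-6}-10^{-4}$ window identified in the previous section.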
Once equilibrium is reached, the thermal relic abundance is determined by the dark matter's annihilation cross section. For the cross section given in Eq.~\ref{annsigma}, this leads to a thermal abundance that makes up the following fraction of the dark matter density: $f_{\rm DM} \approx 0.04 \times (10^{-3}/\epsilon)^2 \times (m_{\chi}/30\, {\rm MeV})^2 \times \kappa^{-1}$~\cite{Steigman:2012nb}. In Fig.~\ref{constraints}, the dashed black lines indicate the regions of parameter space in which the thermal relic abundance corresponds to the quoted value of $f_{\rm DM}$, assuming only millicharge interactions. The fact that the solid regions do not coincide with the dashed curves in the allowed regions of millicharge parameter space indicates that the dark matter must be depleted by an interaction other than annihilations through photon exchange.
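To gauge the size of the required depletion, consider the benchmark point $m_{\chi}=30$ MeV and $\epsilon\approx 1.7\times 10^{-5}$ suggested by Eq.~\ref{condition}: naively extrapolating the scaling above, millicharge annihilations alone would leave $f_{\rm DM}\approx 0.04\times(10^{-3}/\epsilon)^2\sim\mathcal{O}(100)$ for a Dirac fermion, i.e. an overabundance of roughly four orders of magnitude compared to the target $f_{\rm DM}\sim 10^{-2}$.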
To deplete the thermal abundance of millicharged dark matter to an acceptable level, we consider the possibilities of either annihilations directly to Standard Model particles or to particles within a hidden sector. In either case, we require such annihilations to take place through $p$-wave processes, as measurements of the CMB rule out dark matter candidates lighter than $\sim$10 GeV if they freeze out primarily through $s$-wave annihilations (with the exception of annihilation to neutrinos)~\cite{Slatyer:2015jla}.
\subsection{Annihilation to Standard Model Fermions}
\begin{figure}
\hspace{-0.2in}
\includegraphics[width=3.4in,angle=0]{ScalarTargets.pdf}
\caption{Constraints on a scenario in which millicharged dark matter, $\chi$, annihilates to Standard Model leptons through the exchange of a vector mediator, $V$, that couples universally to all three generations of charged leptons. The blue band denotes the approximate region of parameter space which can explain the amplitude of the observed 21-cm absorption feature as reported by the EDGES Collaboration. This case is ruled out by constraints from the BABAR~\cite{Lees:2017lec,Izaguirre:2013uxa,Essig:2013vha} and E137~\cite{Bjorken:1988as,Izaguirre:2013uxa} experiments, along with BBN, over the entire range of parameter space that is capable of generating the reported 21-cm signal. Also shown as dashed lines are the projected constraints of the Belle II~\cite{Izaguirre:2014bca}, BDX~\cite{Izaguirre:2013uxa,Battaglieri:2016ggd} and LDMX~\cite{Izaguirre:2014bca,Battaglieri:2017aum} experiments. Here we have adopted $m_{V}=3m_{\chi}$ and $g_{\chi}=1$. We note that for other choices of this coupling and mass ratio, the constraints are generically more restrictive than those presented here.}
\label{direct}
\end{figure}
\begin{figure*}
\includegraphics[width=3.4in,angle=0]{DiracTargets_MuTau.pdf}~~~~~
\includegraphics[width=3.4in,angle=0]{ScalarTargets_MuTau.pdf}
\caption{Constraints on a scenario in which millicharged dark matter, $\chi$, annihilates to Standard Model leptons through the exchange of a vector mediator, $V$, which couples to muon minus tau number, associated with the gauge group $U(1)_{L_{\mu}-L_{\tau}}$. The blue band denotes the approximate region of parameter space which can explain the amplitude of the observed 21-cm absorption feature as reported by the EDGES Collaboration. Because this mediator does not directly couple to electrons, the experimental constraints are considerably less restrictive in this case, potentially allowing the dark matter to be sufficiently depleted in the early universe. Future measurements by SPS~\cite{Gninenko:2014pea} are expected to be sensitive to this scenario. Also shown are the regions that are capable of explaining the observed value of the muon's magnetic moment, $(g-2)_{\mu}$~\cite{Bennett:2006fi,Hagiwara:2011af,Davier:2010nc,Jegerlehner:2009ry}, as well as those that are ruled out by the CCFR experiment~\cite{Altmannshofer:2014pba,Mishra:1991bv}. In each frame, we have adopted $m_{V}=3m_{\chi}$ and $g_{\chi}=1$.}
\label{direct2}
\end{figure*}
For the case of annihilation directly to Standard Model states, we consider two options for $p$-wave processes: dark matter in the form of a scalar or a fermion which annihilates through a new vector, $V$ (fermionic annihilation through a scalar mediator also leads to $p$-wave amplitudes, but is more constrained). In both of these cases, the vector must be heavier than the dark matter itself (in order for annihilations to Standard Model fermions to dominate over those to $VV$). First, we consider the following interactions which lead to $p$-wave suppressed annihilations:
\begin{subequations}
\begin{eqnarray}
\mathcal{L}_f &\supset& V_\mu ( g_\chi \bar \chi \gamma^\mu \gamma^5 \chi + g_f \bar f \gamma^\mu f), \label{lagp-f} \\
\mathcal{L}_s &\supset& V_\mu (i g_\chi \chi^* \partial_\mu \chi +g_f \bar f \gamma^\mu f + {\rm h.c.}), \label{lagp-s}
\end{eqnarray}
\end{subequations}
where the dark matter candidate, $\chi$, is a scalar or a fermion, respectively. Although for a Dirac fermion it is also possible to have $p$-wave $\chi \bar \chi \to f \bar f$ annihilation through scalar $s$-channel exchange, this possibility is strongly excluded in simple models \cite{Krnjaic:2015mbs}. For the model of Eq.~\ref{lagp-f} or \ref{lagp-s}, we can write the annihilation cross section as follows:
\begin{eqnarray}
\sigma v = \frac{g^2_f g^2_{\chi} m^2_{\chi} v^2}{6 \pi (4m^2_{\chi}-m^2_V)^2},
\end{eqnarray}
where we have taken the $m_\chi \gg m_f$ limit.
In Fig.~\ref{direct}, we plot constraints on models defined as in Eqs.~\ref{lagp-f} and \ref{lagp-s} in which the millicharged dark matter annihilates through a vector mediator that couples universally to all three species of charged leptons (for details, see Ref.~\cite{Izaguirre:2015yja}). In this case, constraints from the BABAR~\cite{Lees:2017lec,Izaguirre:2013uxa,Essig:2013vha} and E137~\cite{Bjorken:1988as,Izaguirre:2013uxa} experiments, along with BBN, exclude the entire range of parameter space that is capable of generating the reported 21-cm signal ($m_{\chi} \sim 10-80$ MeV and $f_{\rm DM} \sim 0.003-0.02$).
We also consider scenarios in which dark matter can annihilate predominantly to neutrinos (as opposed to $e^+e^-$), with either $s$- or $p$-wave annihilation:
\begin{subequations}
\begin{eqnarray}
\mathcal{L}_f &\supset& V_\mu ( g_\chi \bar \chi \gamma^\mu \chi + g_{\nu} \bar \nu_L \gamma^\mu \nu_L ), \label{lags-f}
\\
\mathcal{L}_s &\supset& V_\mu ( i g_\chi \chi^* \partial_\mu \chi + g_{\nu} \bar \nu_L \gamma^\mu \nu_L + {\rm h.c.} ). \label{lags-s}
\end{eqnarray}
\end{subequations}
These models have the annihilation cross section,
\begin{eqnarray}
\sigma v = \frac{g^2_{\nu} g^2_{\chi} m^2_{\chi} \kappa}{2 \pi (4m^2_{\chi}-m^2_V)^2},
\end{eqnarray}
to a given neutrino flavor where $\kappa=1$ or $v^2/6$ for a fermion or scalar, respectively. A concrete example of this would be a model in which the vector is associated with the anomaly-free gauge group $U(1)_{L_{\mu}-L_{\tau}}$ and thus couples to muons, taus, and their respective neutrino species~\cite{Baek:2001kca,Pospelov:2008zw}. We show in Fig.~\ref{direct2} the parameter space of such a model for either a fermionic dark matter candidate with interactions as defined in Eq.~\ref{lags-f} or a scalar dark matter candidate with interactions as defined in Eq.~\ref{lags-s}. (If we had included a $\gamma_5$ in Eq.~\ref{lags-f} to make the annihilation $p$-wave suppressed, the results would be almost indistinguishable from the scalar case.) Because this mediator does not directly couple to electrons, the experimental constraints are considerably less restrictive in this case, potentially allowing the dark matter to be sufficiently depleted in the early universe. We note that in the scalar dark matter case, this model predicts a contribution to the magnetic moment of the muon, $(g-2)_{\mu}$, that is capable of explaining the measured anomaly~\cite{Bennett:2006fi,Hagiwara:2011af,Davier:2010nc,Jegerlehner:2009ry}. The same is true in the fermionic case, but for a somewhat lower value of $g_{\chi}$. It is anticipated that future measurements by SPS~\cite{Gninenko:2014pea} will be sensitive to this scenario~\cite{gordan}.
\subsection{Annihilation to Hidden Sector Particles}
Next we turn our attention to scenarios in which the dark matter annihilates to unstable particles that reside within a hidden sector. In light of the $p$-wave requirement, we focus on annihilations to a pair of real scalars, $\phi$, which we take to be lighter than the dark matter itself. We motivate this choice by the fact that a real scalar leads to the smallest contribution to the energy density of dark radiation ({\it i.e.}~the number of effective neutrino species, $N_{\rm eff}$). The dark matter annihilation cross section to a pair of these dark scalars is given by:
\begin{eqnarray}
\sigma v _{\chi \bar \chi \to \phi \phi} = \frac{3 y^4 v^2}{128 \pi m_\chi^2},
\end{eqnarray}
where $y$ is the $\phi \bar \chi \chi$ Yukawa coupling and $v$ is the relative velocity of the annihilating particles. This cross section leads to a relic abundance equal to the following fraction of the measured dark matter abundance:
\begin{eqnarray}
f_{\rm DM} \approx 0.008 \, \bigg(\frac{0.03}{y}\bigg)^4 \, \bigg(\frac{m_{\chi}}{30 \, {\rm MeV}}\bigg)^2 \left( \frac{0.1}{v^2} \right),
\end{eqnarray}
where we assume that a thermal relic Dirac fermion attains the observed abundance with $ \sigma v \simeq 6\times 10^{-26} {\rm cm^3/s}$.
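Inverting the relation above (at fixed relative velocity) gives the size of the hidden-sector coupling that is required,
\begin{eqnarray}
y \approx 0.03 \, \bigg(\frac{0.008}{f_{\rm DM}}\bigg)^{1/4} \, \bigg(\frac{m_{\chi}}{30 \, {\rm MeV}}\bigg)^{1/2} \left( \frac{0.1}{v^2} \right)^{1/4},
\end{eqnarray}
so the allowed window $f_{\rm DM}\sim 0.003-0.02$ corresponds to $y\sim 0.02-0.04$ for $m_{\chi}=30$ MeV.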
The dark matter models under consideration here are strongly constrained by observations of the CMB and the successful predictions of BBN (see, for example, Refs.~\cite{Vogel:2013raa,Brust:2013xpv}). If the $\phi$ population is relativistic during BBN, it will contribute $\Delta N_{\rm eff} = 4/7$, in excess of the range of values allowed by measurements of the light element abundances~\cite{Cyburt:2015mya}. On the other hand, if the $\phi$ population is non-relativistic and decays after BBN, this will ruin the concordance of the baryon-to-photon ratio as measured during BBN and recombination.
In order to evade this constraint, the $\phi$ abundance must be transferred prior to BBN into Standard Model particles, which then reach equilibrium with both photons and neutrinos, thereby preserving the Standard Model prediction ${N_{\rm eff}} = 3.046$~\cite{Mangano:2001iu}. This could be facilitated, for example, through the rapid decay of the $\phi$. Unless we introduce additional particle content into our model, the $\phi$ population will decay predominantly to a pair of photons through a $\chi$ loop, with a width that is given by:
\begin{eqnarray}
\Gamma_{\phi \rightarrow \gamma \gamma} = \frac{y^2 \epsilon^4 \alpha^2 m^3_{\phi}}{256\pi^3 m^2_{\chi}} \, \left|A_{1/2}(m_{\phi},m_{\chi}) \right|^2,
\end{eqnarray}
where
\begin{eqnarray}
A_{1/2} \equiv \frac{8 m^2_{\chi}}{m^2_{\phi}} \, \bigg[1+\bigg(1-\frac{4 m^2_{\chi}}{m^2_{\phi}}\bigg) \arcsin^2 \left(\frac{m_\phi}{2m_\chi} \right) \bigg]. \,\,\,\,\,\,
\end{eqnarray}
We find that there is no combination of parameters that can account for the observed amplitude of the 21-cm absorption feature for which this decay will take place prior to BBN ($\tau_{\phi} \lesssim 1$ s). Thus to avoid the stringent constraints from BBN, we must introduce an additional mechanism to more rapidly deplete the $\phi$ abundance, such as through the decay to Standard Model fermions through mixing with the Higgs. Alternatively, we could also consider interactions which deplete the $\phi$ abundance through 3-to-2 processes~\cite{Carlson:1992fn,Hochberg:2014kqa,Pappadopulo:2016pkp}.
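As a rough numerical cross-check of this statement, the sketch below evaluates the loop-induced lifetime at an illustrative parameter point near the most favourable corner of the allowed window ($\epsilon\sim 10^{-4}$, $y\sim 0.03$, $m_\chi=30$ MeV, $m_\phi=10$ MeV are assumptions chosen only for illustration); it gives $\tau_\phi\sim 10^5-10^6$ s, far in excess of 1 s.
\begin{verbatim}
import numpy as np

alpha = 1.0 / 137.036      # fine-structure constant
hbar_MeV_s = 6.582e-22     # hbar in MeV*s

def A_half_sq(m_phi, m_chi):
    # |A_1/2|^2 for the chi loop, valid for m_phi < 2 m_chi
    r = (2.0 * m_chi / m_phi) ** 2
    A = 2.0 * r * (1.0 + (1.0 - r) * np.arcsin(1.0 / np.sqrt(r)) ** 2)
    return A ** 2

def tau_phi(y, eps, m_phi, m_chi):
    # lifetime of phi -> gamma gamma in seconds (masses in MeV)
    width = (y**2 * eps**4 * alpha**2 * m_phi**3
             / (256.0 * np.pi**3 * m_chi**2) * A_half_sq(m_phi, m_chi))
    return hbar_MeV_s / width

print(tau_phi(y=0.03, eps=1e-4, m_phi=10.0, m_chi=30.0))   # ~5e5 s >> 1 s
\end{verbatim}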
We have not been entirely exhaustive in considering the possibilities for depleting the density of millicharged dark matter in the early universe. Other model building options include annihilation to ``forbidden''~\cite{DAgnolo:2015ujb} or ``impeded''~\cite{Kopp:2016yji} final states, which can evade CMB constraints by suppressing annihilation at low-velocities; processes in which the dark matter freezes out through 4-to-2 processes~\cite{Herms:2018ajr}; and scenarios featuring additional reheating after dark matter freeze-out but prior to BBN~\cite{Berlin:2016gtr,Randall:2015xza,Davoudiasl:2015vba,Gelmini:2006pq,Kane:2015jia}.
\section{Summary and Conclusions}
The recently reported detection by the EDGES Collaboration of a feature at 78 MHz in the sky-averaged radio spectrum marks a potentially momentous occasion for astrophysics. The possibility that such a signal is not compatible with standard astrophysics is even more exciting, potentially pointing to a strong coupling between the baryons and dark matter at a redshift of $z \sim 17$. This striking possibility merits substantial investigation and scrutiny.
In this paper, we have pointed out several very stringent constraints on dark matter candidates potentially capable of producing the reported 21-cm feature. Since a mediator that communicates a new long-range force with couplings of the required size is ruled out by both laboratory and cosmological considerations, we are forced to consider models in which the dark matter itself possesses a small electric charge. Such models are highly constrained, however, by measurements of the cosmic microwave background, light element abundances, and Supernova 1987A. In particular, if such a particle constitutes more than $2\%$ of the dark matter, it would damp baryon density perturbations at the time of recombination to an unacceptable degree. After applying these constraints, we find that the only range of models that could potentially explain the reported 21-cm signal are those in which a small fraction $\sim0.3-2\%$ of the dark matter consists of particles with a mass of $\sim10-80$ MeV and which couple to the photon through a small electric charge of $\epsilon \sim 10^{-6}-10^{-4}$.
Furthermore, throughout this range of models, the dark matter is predicted to have reached thermal equilibrium with the Standard Model in the early universe, thus requiring that the model be supplemented with an additional mechanism to deplete the dark matter abundance to an acceptable level. We consider scenarios in which the dark matter annihilates through a new vector to neutrinos, or to particles within a hidden sector which are themselves depleted through rapid decays or 3-to-2 processes. Specific possibilities include annihilations to neutrinos through a $U(1)_{L_{\mu}-L_{\tau}}$ gauge boson, which is expected to be within the reach of future measurements by the SPS experiment.
\bigskip
\begin{acknowledgments}
We would like to thank Kimberly Boddy, Cora Dvorkin, Rouven Essig, Vera Gluscevic, Wayne Hu and Marc Kamionkowski for helpful communication. Some of the results presented in this work utilized PackageX~\cite{Patel:2015tea}. AB is supported by the U.S. Department of Energy under Contract No. DEAC02-76SF00515. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
\end{acknowledgments}
|
{
"timestamp": "2018-03-08T02:12:18",
"yymm": "1803",
"arxiv_id": "1803.02804",
"language": "en",
"url": "https://arxiv.org/abs/1803.02804"
}
|